How to access conference video: gst-meet

Hello,

I want to access each participant's video in a Jitsi conference as a stream, and I'm planning to use gst-meet. In the end, a Python program will read the streaming video frame by frame.

Could you please suggest how to achieve this?

@jbg @damencho

Thanks,

For the tightest integration with your Python program you would want to look at making Python bindings to lib-gst-meet. That should be reasonably easy using something like pyo3. Then you can use appsink in your python program to receive a callback for every frame.

Alternatively, you could do a looser integration (separate gst-meet and Python processes) by using a network or file sink (with a pipe/fifo) for each participant in the GStreamer pipeline in gst-meet, to get the frames from the pipeline to your Python program.
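As a concrete illustration of the looser approach, a gst-meet invocation along these lines would write each participant's decoded video to a per-participant file that a separate Python process can read. This is a sketch only: check `gst-meet --help` for the exact flag names and supported template variables (the server URLs, room name, and the `{participant_id}` variable here are assumptions), and adjust the elements to match the conference's actual codecs:

```
gst-meet \
  --web-socket-url=wss://your-jitsi-server/xmpp-websocket \
  --xmpp-domain=your-jitsi-server \
  --room-name=testroom \
  --recv-pipeline-participant-template='opusdec name=audio ! fakesink
    vp8dec name=video ! multifilesink location=/tmp/video-{participant_id}-%05d.raw'
```

A FIFO created with `mkfifo`, or a `udpsink`, can stand in for `multifilesink` if you prefer streaming over files.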

Thanks @jbg

I want the tightest integration, but I'm still not clear which method needs to be modified to call a Python module with the video data.

Would that method be in the lib-gst-meet/src/conference.rs file, or somewhere else, to get the video stream out early?

Could you help with this, please?

Regards,
Manoj.

lib-gst-meet handles signalling; all of the audio and video frames are handled in your gstreamer pipeline. There shouldn’t be any need to modify lib-gst-meet for this use case; its API already lets you inject any element into the pipeline used for each participant, so you can use an appsink which gives you programmatic access to each audio/video frame.

For a tight integration you would want to either:

(A) write a small Rust program (like gst-meet — you could use gst-meet as a starting point) that wraps lib-gst-meet and adds a gstreamer appsink element to each participant recv pipeline to get a callback with each audio/video frame (you can then pass that frame to Python however you want). This could also be done without writing any Rust by just passing a udpsink or filesink (with a fifo) in the recv participant pipeline template and then listening on that port/fifo from Python.

or

(B) write pyo3 bindings for lib-gst-meet so that you can use the lib-gst-meet API directly from Python. Then you instantiate an appsink in Python using the existing gstreamer Python bindings, and pass that into lib-gst-meet (via the pyo3 bindings) for each participant’s recv pipeline. Then GStreamer will call your Python code with every frame.

2 Likes

I'm reusing the gst-meet main.rs file for scenario (A). @jbg

Could you please tell me whether I need to add the appsink element in the code below, where set_remote_participant_video_sink_element is called?


  if let Some(bin) = recv_pipeline {
    conference.add_bin(&bin).await?;

    if let Some(audio_element) = bin.by_name("audio") {
      info!(
        "recv pipeline has an audio element, a sink pad will be requested from it for each participant"
      );
      conference
        .set_remote_participant_audio_sink_element(Some(audio_element))
        .await;
    }

    if let Some(video_element) = bin.by_name("video") {
      info!(
        "recv pipeline has a video element, a sink pad will be requested from it for each participant"
      );
      conference
        .set_remote_participant_video_sink_element(Some(video_element))
        .await;
    }
  }

I'm new to Rust and need some help from you, @jbg. Really appreciated!

Yes, this code is looking for a recv pipeline (specified on the command line), and if one is provided, adding it to the conference and, if audio and video pads are found in it, linking them up to the participant.

Instead of doing this you can build your own bin containing appsinks (gstreamer docs, gstreamer rust bindings docs) — one appsink named audio for audio and one named video for video — and then call pull_sample() on the appsink in a loop to get each sample. If you’re doing both audio and video you’ll want a thread for each since pull_sample() blocks until the next sample is available.
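A minimal sketch of that advice, written against the 0.18-era gstreamer / gstreamer-app Rust crates used elsewhere in this thread. The `JitsiConference` type and its `add_bin` / `set_remote_participant_video_sink_element` methods come from lib-gst-meet as shown in the earlier snippets; verify the exact signatures against the lib-gst-meet docs, since this is an illustration rather than tested code:

```rust
use std::thread;

use anyhow::Result;
use gstreamer::prelude::*;

async fn attach_video_appsink(conference: &lib_gst_meet::JitsiConference) -> Result<()> {
  // Build a bin containing an appsink *named "video"*, so that a
  // by_name("video") lookup can find it and link it per participant.
  let bin = gstreamer::Bin::new(None);
  let sink = gstreamer::ElementFactory::make("appsink", Some("video"))?;
  bin.add(&sink)?;
  conference.add_bin(&bin).await?;

  let appsink: gstreamer_app::AppSink = sink
    .dynamic_cast()
    .expect("appsink element should cast to AppSink");

  if let Some(video_element) = bin.by_name("video") {
    conference
      .set_remote_participant_video_sink_element(Some(video_element))
      .await;
  }

  // pull_sample() blocks until the next sample, so run it on its own
  // thread; an "audio" appsink would need a second thread of its own.
  thread::spawn(move || loop {
    match appsink.pull_sample() {
      Ok(sample) => println!("got sample: {:?}", sample.buffer().map(|b| b.size())),
      Err(_) => break, // EOS or the appsink was shut down
    }
  });

  Ok(())
}
```

Note that the appsink is given the name "video" at construction time; an unnamed element would never be found by `by_name("video")` and so would never be linked to any participant's stream.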

1 Like

For general Rust help see rust-lang.org and ##rust on Libera.Chat IRC. For general GStreamer help see gstreamer.freedesktop.org and #gstreamer on OFTC IRC.

I did some cleanup and was able to import AppSink from gstreamer-app = "0.18.7". I still don't understand how to build the bin and pull_sample from the video channel. Could you please help with this? @jbg

  let bin = gstreamer::Bin::new(None);
  let sink_video = gstreamer::ElementFactory::make("appsink", None)?;
  bin.add(&sink_video)?;
  conference.add_bin(&bin).await?;

  let appsink_video: gstreamer_app::AppSink = sink_video
    .dynamic_cast::<gstreamer_app::AppSink>()
    .expect("Sink element is expected to be an appsink!");

  if let Some(video_element) = bin.by_name("video") {
    conference
      .set_remote_participant_video_sink_element(Some(video_element))
      .await;
  }

  thread::spawn(move || {
    loop {
      let sample = appsink_video.pull_sample().map_err(|_| gstreamer::FlowError::Eos);
      println!("appsink => {:?}", sample);
    }
  });

@jbg, could you please help with this? I add the AppSink as in the code above and link it with the conference, but when I call the pull_sample method it returns an error:

appsink => Err(Eos)
appsink => Err(Eos)
appsink => Err(Eos)
… (the same line repeats)