I am hoping to generate an audio/video stream with GStreamer and send it to a Jitsi Meet conference the way a user's browser would. So: a GStreamer sink.
I would also like to connect to the server and ingest the conference media as a GStreamer source.
This is to add some automation around using Jitsi for conference presentations.
Currently a collection of humans work together and figure it out, but we would like to automate whatever we can, replacing things like [browser, VLC, VNC, human hits Play at the right time] with a script that runs at the right time and doesn't require the amount of configuration, app-to-app connections, and permissions needed for Jitsi in a browser to read what VLC has rendered to its window.
Also: I want a countdown HH:MM:SS clock so we all know when the next presentation starts, using a clock we are all in sync with. Currently a person in the meeting is in charge of announcing this information, and sometimes they don't, for all the reasons humans don't do what they should.
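The countdown part is easy to script independently of any Jitsi work. A minimal sketch in Python (nothing here is Jitsi-specific; the function names are my own): it formats the seconds remaining until a shared start time as HH:MM:SS, assuming everyone's machines are NTP-synced so the same start timestamp yields the same countdown for all.

```python
import time

def hms(seconds: int) -> str:
    """Format a non-negative number of seconds as HH:MM:SS."""
    h, rest = divmod(seconds, 3600)
    m, s = divmod(rest, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

def countdown(start_epoch: float, now=time.time, tick=time.sleep):
    """Yield one HH:MM:SS string per second until start_epoch is reached.

    `now` and `tick` are injectable so the logic can be tested without
    waiting on a real clock.
    """
    while (remaining := int(start_epoch - now())) > 0:
        yield hms(remaining)
        tick(1)
```

Each emitted string could then be composited into the outgoing stream, e.g. by feeding it to a GStreamer `textoverlay` element from a small wrapper script.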
We also want to send a single composited stream to a CDN, with a standard HTML5 player embedded in a web page, or people can use whatever media player they want.
Searching around, I found:
“I finally make simple gstreamer webrtcbin video/audio connecting to jicofo/jvb works.”
But 2 years later: “Can you share the code” [jitsi-dev] rtcp-mux without bundle
I am also interested in having a GStreamer sink that could be configured to connect to a Jitsi meeting. I really don't get why this feature seems to get ignored; several people have expressed interest in such a use case, but the developers seem to respond either by ignoring it or by suggesting subpar options like screen sharing, or non-portable workarounds using loopback devices.
Anyway, in short, I would be interested in having a go at this. I think the approach should be to start with a simple sink that does nothing, and then study the code of lib-jitsi-meet (written in JavaScript) to figure out how to make the sink talk to a Jitsi server.
Anyway, if I can find another developer to collaborate with, I would be willing to throw a couple of days at this to see if it is possible to get something basic working. @CarlFK, do you have any experience with GStreamer / coding skills?
I don’t think anyone on the team has much experience with GStreamer, but if you ask specific questions we can try to guide you on what else you’d need to do.
AIUI (without checking at all), the main reason no one has done it is that the Janus JavaScript protocol is well documented and easy to implement, while Jitsi’s is not documented outside their code and would effectively have to be reverse engineered.
The work to be done is primarily reimplementing what the JavaScript does to set up a call. The GStreamer WebRTC components should already handle the rest (hopefully).
So my interest in this has just dropped a bit, but I’ll be happy to test things if someone else does the real work.
"Create the file /usr/local/bin/ffmpeg"
That is pretty hacky. I do not want to replace, augment, or otherwise manage such things; the ffmpeg command should run the ffmpeg binary, not GStreamer.
Hi Carl, if you want to rebuild all the work done by the JavaScript part of jitsi-meet, it will be really, really complex.
You need to manage and maintain:
two different websocket connections, one for the XMPP MUC and one for the colibri channel to the videobridge
implement an XMPP MUC client to connect to a jitsi-meet room and get the media description
translate the XMPP media description to a WebRTC SDP, and your local SDP to an XMPP media description, to initiate the GStreamer part
implement a colibri stack for the client-to-bridge channel and link it to your GStreamer part.
So having all this work done by the browser and the jitsi-meet JavaScript is much simpler.
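To make the amount of work concrete, the setup sequence described above can be sketched like this (a sketch only: none of these function or step names come from a real library, they just record the order of operations a native client would need):

```python
# Hypothetical skeleton of the connection flow described above; every step
# name is a placeholder, not a real API.

def connect(ws_url: str, room: str) -> list:
    steps = []

    # 1. Two websocket connections: one for XMPP, one for colibri.
    steps.append(f"open XMPP websocket at {ws_url}/xmpp-websocket")
    steps.append("open colibri websocket to the videobridge")

    # 2. Join the XMPP MUC for the room and obtain the media description.
    steps.append(f"join MUC for room {room!r} and receive the media description")

    # 3. Translate between the XMPP (Jingle) media description and WebRTC SDP
    #    so a GStreamer webrtcbin can negotiate in both directions.
    steps.append("translate Jingle to remote SDP, local SDP to Jingle")

    # 4. Run the colibri client-to-bridge channel alongside the pipeline.
    steps.append("run the colibri channel and link it to the pipeline")

    return steps
```

Each of those steps hides real protocol work (presence, Jingle session-initiate/accept, ICE, DTLS), which is why doing it natively is a substantial project.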
Regards.
It was fairly complex, but I’m not sure about really really complex!
It’s very alpha at this point but we plan to actively develop it. Pull requests are welcome.
Some examples…
Stream a .webm video with VP8 video and Vorbis audio to the conference. The video will be passed through efficiently with no transcoding; the audio will be transcoded to Opus.
Record a .webm file for each of the other participants in the conference, containing VP8 video and Opus audio, without needing to do any transcoding. A modest system can record a very large number of participants this way, since no transcoding is required.
Passing through colibri events to allow the pipeline to respond to changes in who is speaking, etc.
Building video mixing elements for stage view and tile view, and implementing Jibri-style recording in terms of those; it should be much more efficient than Jibri since no browser + screen capture is involved
Adding RTX and TCC support
We are really keen to hear any suggestions for other ways this framework could be used. PRs and issues (both feature requests and bugs) are most welcome!
A feature that might be interesting to me would be the ability to limit reception to one participant (given their endpoint ID). This would involve setting lastN to 1 and selecting that endpoint. In addition, selecting the receive quality would be nice, to choose the (video) quality at which one would like to receive that participant.
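For context, the client-to-bridge colibri messages this involves look roughly like the following. This is a sketch based on my reading of lib-jitsi-meet's bridge channel code, not anything from gst-meet; the `colibriClass` names and fields should be double-checked against your jitsi-videobridge version.

```python
import json

# Sketch of colibri messages a client sends to the bridge (over the colibri
# websocket or SCTP datachannel). Schemas are assumptions; verify against
# your JVB / lib-jitsi-meet version before relying on them.

def last_n(n: int) -> str:
    # Ask the bridge to forward video from at most n participants.
    return json.dumps({"colibriClass": "LastNChangedEvent", "lastN": n})

def select_endpoints(endpoint_ids: list) -> str:
    # Prioritise these endpoints when the bridge chooses whose video to forward.
    return json.dumps({"colibriClass": "SelectedEndpointsChangedEvent",
                       "selectedEndpoints": endpoint_ids})

def receiver_constraint(max_height: int) -> str:
    # Cap the simulcast layer (video resolution) the bridge sends us.
    return json.dumps({"colibriClass": "ReceiverVideoConstraint",
                       "maxFrameHeight": max_height})
```

So "receive only participant X at 180p" would be lastN = 1 plus selecting X's endpoint ID, plus a receiver constraint of 180.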
I have been meaning to learn some Rust, this may be the right time to get my feet wet! No promises though, I need to find the time first!
Latest version (0.3.1) of lib-gst-meet now has colibri message support, both sending and receiving, so this was quite simple to add. Latest gst-meet (0.2.2) can now do this:
Hi @jbg ,
I tried to use gst-meet on several different Jitsi Meet servers and it was never able to initiate the XMPP MUC connection correctly.
What is your recommended Jitsi version to test it?
Hi @Damien_FETIS. For the XMPP connection, we’ve only implemented websockets, so if your server only supports the older BOSH connection, it won’t work. If you’re using websockets for XMPP and it still doesn’t work properly, please file an issue on GitHub and include the output from running gst-meet with --verbose (added in 0.2.1).
What is your recommended Jitsi version to test it?
Any recent version of Jicofo, Prosody & JVB should be fine, but there are a lot of variables with a Jitsi deployment and we’ve only tested with our own platform and a handful of other deployments, so undoubtedly there are incompatibilities yet to be found!
We’ve been busy on other things for a few months, but now that GStreamer 1.20 has been released, I spent some more time on this today.
TWCC feedback is now working, so the received video doesn’t freeze any more, and the JVB steps up to forwarding higher layers if bandwidth is sufficient. For recording received streams it’s now quite usable.
I’ve also implemented XMPP pings from the client to the server, which should help keep the connection active when there is a NAT/firewall/LB in the path with a short timeout.
On the debugging side, there’s now the ability to log information about every RTP and RTCP packet (which was very helpful with getting TWCC working).