Tip: how to share any video on a Jitsi session

It’s possible to share a YouTube video on Jitsi, but what if you want to share a video from another source?

I have tried some tricks on my Debian Buster client, and I can stream high-quality video/audio to a Jitsi room. You can use an MP4 file, a stream from an IP camera, an RTMP/RTSP stream, or data from a video/audio capture card as a source.

These are the steps in summary:

  • create a virtual camera
  • create a virtual microphone
  • send the video data to the virtual camera and the audio data to the virtual microphone while playing the video using ffmpeg
  • open Chromium, join a Jitsi room, and select the virtual camera and the virtual microphone

packages

Install the following packages as root

apt-get install v4l2loopback-dkms ffmpeg

modules

Load the following modules as root

modprobe v4l2loopback video_nr=100 card_label=virtualcam-100 exclusive_caps=1
modprobe snd-aloop id=aloop100
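If you want these modules to load automatically at boot (an extra step, not part of the original instructions; the paths are the standard systemd/modprobe locations, and the option lines mirror the modprobe commands above):

```shell
# As root: load the loopback modules at boot with the same options
printf 'v4l2loopback\nsnd-aloop\n' > /etc/modules-load.d/virtualcam.conf
cat > /etc/modprobe.d/virtualcam.conf <<'EOF'
options v4l2loopback video_nr=100 card_label=virtualcam-100 exclusive_caps=1
options snd-aloop id=aloop100
EOF
```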

Use the following command to check the result. The output can be a bit different in your case.

ls /dev/video*
>>> /dev/video0  /dev/video1  /dev/video100

ls /proc/asound
>>> aloop100  card0  card1  cards ...

virtual microphone

Switch to your desktop user account; you don’t need to be root from now on.

First, find your snd_aloop input device name

pactl list sources short | grep alsa_input | grep snd_aloop | awk '{print $2}'
>>> alsa_input.platform-snd_aloop.0.analog-stereo

And create a virtual microphone using this device name.

pacmd load-module module-remap-source source_name=mic100 master=alsa_input.platform-snd_aloop.0.analog-stereo source_properties=device.description=mic100 channels=2

Check your source list again; there should be a new device named mic100.

pactl list sources short
>>> ...
>>> 4    mic100  module-remap-source.c   s16le 2ch 44100Hz       RUNNING

virtual camera

No need to do anything for the virtual camera. It’s created when loading the v4l2loopback module.

ffmpeg

It’s possible to use any source as long as ffmpeg supports it. I will use an MP4 file in my example. The order of the video/audio streams may be different for some sources; change the map parameters if this is the case.

ffmpeg -re -i parabellum.mp4 -map 0:0 -f v4l2 -c:v rawvideo /dev/video100 -map 0:1 -f alsa hw:aloop100,1
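If you are not sure which stream indices to pass to `-map`, ffprobe can list them first (a sketch; `parabellum.mp4` is the example file above):

```shell
# List stream indices and types, one per line, e.g. "0,video" then "1,audio"
ffprobe -v error -show_entries stream=index,codec_type -of csv=p=0 parabellum.mp4

# Pick the first audio stream automatically (prints a -map value like 0:1)
ffprobe -v error -show_entries stream=index,codec_type -of csv=p=0 parabellum.mp4 \
  | awk -F, '$2=="audio"{print "0:"$1; exit}'
```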

stream to the Jitsi room

Open Chromium or Chrome and join a Jitsi room. Disable audio processing, since the audio will most probably already be clean. The link will look something like this:

https://meet.jit.si/parabellum#config.disableAP=true

Choose the virtual camera (virtualcam-100) and the virtual microphone (mic100). Join the same room using a second tab to see the result. Don’t select the virtual devices on the second tab.


If you made something wrong and want to reset the sound devices

pulseaudio -k

And restart chromium
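A fuller cleanup might look like this (an untested sketch; it assumes your pactl supports unloading by module name, and the modprobe lines need root):

```shell
# Unload the virtual mic, restart PulseAudio, then remove the kernel modules
pactl unload-module module-remap-source
pulseaudio -k
sudo modprobe -r v4l2loopback
sudo modprobe -r snd-aloop
```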

MUCH too complicated. In ZOOM, I click 2, maybe 3 buttons. In Jitsi there needs to be a better way!

What are the steps for Docker, in order to achieve sharing any video instead of only YouTube videos?

These steps are for the client’s desktop. They are not related to Docker.

This works well; however, I had to make /dev/video100 rw for everyone first, otherwise Chrome wouldn’t find it and VLC wouldn’t play it. Edit: it seems to work fine without chmod, but Chrome might need a relog after creating the video device to see it.

The quality is sadly not as good as the original source (movie file), and the screen proportions are slightly off. Quite the egg heads now :smiley:

The quality depends on the ffmpeg parameters and the network capacity.

Is there a way to do this with a bot / code (no need to manually open the browser)?

Also, I assume the follow-me option needs to be enabled so everybody sees the shared content. Is there a way to enable follow me for the bot by default?

@_samueldmq I am also trying to achieve this. Currently looking into achieving it with Puppeteer combined with the virtual-webcam method in this thread. I will keep you posted if I make some progress, and I would appreciate it if you could do the same. Thanks!

Update to the above: I have a working system using Puppeteer and a little Express server API so you can control it. I couldn’t get the virtual webcam working on my VM, so I ended up using flags to use the fake-webcam testing features of Chromium to get it working. The videos have to be encoded first into the .y4m format using ffmpeg (a process which has its own quirks).

I am still deep in that project now, but I will happily document the steps required and release the code once I have a bit more time. If anyone is desperate for the info in the meantime, let me know.


I would love to join the efforts on this :slightly_smiling_face:

is there a way to share a video stream from an android smartphone?

hi L-Wo, this is amazing, would you be willing to share the code??

Hi all,

I ended up using the code on a very specific client project, so I’m not allowed (yet!) to open-source my work on it.

I used Puppeteer to join calls, and the --use-file-for-fake-video-capture and --use-file-for-fake-audio-capture arguments to pass in video/audio files, using ffmpeg to convert the files into the required WAV and Y4M formats. My use case required many videos to be played in many conferences at once, for between 1 and 10 minutes. I have a system for organising and scaling these “puppets” which I won’t go into; I’ll give you the information for the core part, which allows you to join a call and play a video.
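Without the Puppeteer layer, the same fake-capture approach can be tried by hand (a sketch; the file names are placeholders, the `-pix_fmt` flag is an assumption since Chromium expects 4:2:0 Y4M, and note that Y4M files get very large because they are uncompressed):

```shell
# Convert the source into the formats Chromium's fake-capture flags expect
ffmpeg -i input.mp4 -pix_fmt yuv420p video.y4m   # uncompressed Y4M video
ffmpeg -i input.mp4 -vn audio.wav                # PCM WAV audio

# Launch with the fake device backed by those files
chromium --use-fake-device-for-media-stream \
  --use-file-for-fake-video-capture=video.y4m \
  --use-file-for-fake-audio-capture=audio.wav \
  'https://meet.jit.si/parabellum#config.disableAP=true'
```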

I ended up using this at quite a low resolution - it coped OK with higher resolutions, but in my use case I needed to play many videos into many conferences from one server, so I had to keep things efficient.

I have uploaded a very basic version here: GitHub - L-wo/jitsi-puppeteer-basic: A basic version of my Jitsi Puppeteer implementation

Most of the inspiration for that comes from Streaming a webcam to a Jitsi Meet room · GitHub

It’s all very untested and I expect it will throw errors for you but hopefully a good starting point if you are exploring this!

Thanks

Leo

Thank you for putting this together. Unfortunately I cannot get it working as you intended. It looks like the modules are loaded and the audio and video devices are created:

hbarta@ash:/mnt/share/Video/Cosmos$ pactl list sources short
1	alsa_input.pci-0000_00_1b.0.analog-stereo	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
2	alsa_output.platform-snd_aloop.0.analog-stereo.monitor	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
3	alsa_input.platform-snd_aloop.0.analog-stereo	module-alsa-card.c	s16le 2ch 44100Hz	RUNNING
4	mic100	module-remap-source.c	s16le 2ch 44100Hz	RUNNING
5	alsa_output.pci-0000_00_1b.0.analog-stereo.monitor	module-alsa-card.c	s16le 2ch 44100Hz	SUSPENDED
hbarta@ash:/mnt/share/Video/Cosmos$ ls /dev/video*
/dev/video0  /dev/video1  /dev/video100
hbarta@ash:/mnt/share/Video/Cosmos$ 

The result I get when I execute the ffmpeg command is

hbarta@ash:/mnt/share/Video/Cosmos$ ffmpeg -re -i 'Cosmos 01 I The Shores of the Cosmic Ocean.m4v' -map 0:0 -f v4l2 -c:v rawvideo /dev/video100 -map 0:1 -f alsa hw:aloop100,1
ffmpeg version 4.1.6-1~deb10u1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --prefix=/usr --extra-version='1~deb10u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Cosmos 01 I The Shores of the Cosmic Ocean.m4v':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isomavc1
    creation_time   : 2011-02-05T16:52:42.000000Z
    encoder         : HandBrake 0.9.5 2011010300
  Duration: 01:00:24.93, start: 0.000000, bitrate: 2521 kb/s
    Chapter #0:0: start -123.256467, end 0.000000
    Metadata:
      title           : Chapter  1
    Chapter #0:1: start 0.000000, end 49.853978
    Metadata:
      title           : Chapter  2
    Chapter #0:2: start 49.853978, end 300.137344
    Metadata:
      title           : Chapter  3
    Chapter #0:3: start 300.137344, end 627.330878
    Metadata:
      title           : Chapter  4
    Chapter #0:4: start 627.330878, end 889.292578
    Metadata:
      title           : Chapter  5
    Chapter #0:5: start 889.292578, end 1239.192144
    Metadata:
      title           : Chapter  6
    Chapter #0:6: start 1239.192144, end 1606.175411
    Metadata:
      title           : Chapter  7
    Chapter #0:7: start 1606.175411, end 2162.898244
    Metadata:
      title           : Chapter  8
    Chapter #0:8: start 2162.898244, end 2885.136422
    Metadata:
      title           : Chapter  9
    Chapter #0:9: start 2885.136422, end 3063.998444
    Metadata:
      title           : Chapter 10
    Chapter #0:10: start 3063.998444, end 3358.058878
    Metadata:
      title           : Chapter 11
    Chapter #0:11: start 3358.058878, end 3499.961989
    Metadata:
      title           : Chapter 12
    Chapter #0:12: start 3499.961989, end 3624.929211
    Metadata:
      title           : Chapter 13
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt709), 704x480 [SAR 8:9 DAR 176:135], 1907 kb/s, 27.73 fps, 29.97 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2011-02-05T16:52:42.000000Z
      encoder         : JVT/AVC Coding
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 159 kb/s (default)
    Metadata:
      creation_time   : 2011-02-05T16:52:42.000000Z
    Stream #0:2(eng): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 448 kb/s
    Metadata:
      creation_time   : 2011-02-05T16:52:42.000000Z
    Side data:
      audio service type: main
    Stream #0:3(und): Data: bin_data (text / 0x74786574)
    Metadata:
      creation_time   : 2011-02-05T16:52:42.000000Z
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
  Stream #0:1 -> #1:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
[alsa @ 0x56275eac8e40] sample rate 48000 not available, nearest is 44100
Could not write header for output file #1 (incorrect codec parameters ?): Input/output error
Error initializing output stream 1:0 -- 
Conversion failed!
hbarta@ash:/mnt/share/Video/Cosmos$ 

I’m unfamiliar with ffmpeg and suspect I have something wrong in the command-line arguments, but am clueless as to what.
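One hint from the log above: ALSA rejected the 48000 Hz sample rate (“nearest is 44100”), so forcing the audio output to 44100 Hz might get past the header error. An untested guess, with a placeholder file name:

```shell
# Same command, but resample the audio to the rate the loopback device accepts
ffmpeg -re -i input.m4v -map 0:0 -f v4l2 -c:v rawvideo /dev/video100 \
       -map 0:1 -ar 44100 -f alsa hw:aloop100,1
```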

Update: If I go into Gnome sound settings and set the microphone to mic100 and

  • open the Jitsi session
  • open my content with totem
  • share the screen on Jitsi and set Totem to full screen

I get decent results. Audio is clear and video is OK. (Content is SD.) My goal is to share this with my grandson and be able to start/stop the video and converse with him at the same time. It seems like I need to run a Jitsi session on a second PC with normal audio and video via webcam, and use that for the conversation (and to hear the video content).

That works, but I’m wondering if there is a better way.

Thanks!

You may want to check jitas.

Thanks! That looks interesting but the system requirements seem a bit high at 4 cores + 8GB RAM. That sounds like an expensive VPS. I prefer not to host it on my personal LAN for security reasons. I guess I could try it out on a minimal VPS to see what happens.
At worst, I should be able to script the setup so I can create a node, get it going and then destroy it when the session is finished to minimize cost.