How-to: get a working setup of Google Drive, OneDrive or other cloud services in Jibri, my comprehensive tutorial for the beginner

Copying videos to the cloud after a Jibri recording has ended.

In this document I will describe the steps to set up synchronization of Jibri recordings with Google Drive. We will use /etc/jitsi/jibri/config.json to invoke a post-processing script via the “finalize_recording_script_path” setting. To achieve this, I will install the open-source synchronization tool ‘Rclone’ and configure it to be called by Jibri to handle the file sync.


Rclone is written in Go. The Rclone documentation pages (rclone.org) list many services that can be set up for upload and download. The list includes many popular cloud services such as Amazon (Drive, S3), Google (Drive, Cloud, Photos), Dropbox, OneDrive, DigitalOcean, Nextcloud, WebDAV, FTP and many more.

Preparations before installing

We will be installing Rclone on a headless remote server. However, to create the remote connection profile (which involves OAuth2), we need Rclone together with an Internet-connected web browser. Since I am a Windows guy (I’m running Windows 10 on my laptop), I will set up Rclone on my Windows 10 machine first. For this, we can download a Windows package (zip file) and create the config on the Windows 10 machine. Afterwards we will upload it to our server.

Creating the rclone Remote Google Drive Connection

Detailed documentation is available on Rclone’s website (rclone.org/drive). Here is what I did:

On the Windows 10 PC/laptop, download rclone (rclone.org/downloads) and extract the executable (I extracted to c:\rclone\rclone.exe). Now open a command line (press the Windows key and type cmd):

cd /d c:\rclone
rclone config

Now you will be prompted with a series of questions:

  • No remotes found message: type ‘n’
  • name: ‘googledrive’


Now you will be presented with a long list of services. Find the service you want to configure.

  • Enter the number for Google Drive: ‘13’ (the exact number can differ between rclone versions)

  • client_id: (leave empty for the default)
  • client_secret: (leave empty for the default)
  • scope: ‘1’ (I use the least restrictive scope here)


  • root_folder_id: (leave empty)
  • service_account_file: (leave empty)
  • Edit advanced config?: ‘n’ (default)
  • Use auto config?: ‘y’ (default)


At this stage, rclone will open your browser to authenticate access for your Google account:



Some remaining questions to finish the configuration:

  • Configure this as a team drive?: ‘n’ (default)
  • The current remotes and config details are shown; confirm with: ‘y’ (default)
  • Final menu, we can quit now: ‘q’

Our Rclone configuration is now available. Let’s find where it is located:

rclone config file


Configuration file is stored at:
C:\Users\[user]\.config\rclone\rclone.conf

–> Remember this location: we will need this file for upload in the next step!

Now we can switch our attention to our Jibri server.

Rclone installation (Debian 10)

All commands below are executed as ‘root’. (I know!..)

apt update
apt install curl -y
curl https://rclone.org/install.sh | bash

Rclone is now installed.

We need to find out where rclone expects its config file:

rclone config file


Configuration file doesn't exist, but rclone will use this path:
/root/.config/rclone/rclone.conf

So we need to upload the file from our Windows PC/laptop (C:\Users\[user]\.config\rclone\rclone.conf) to this location on the Jibri server (/root/.config/rclone/rclone.conf). (I used WinSCP for this.)

After the upload, we check once more to be sure rclone finds its config:

rclone config file


Configuration file is stored at:
/root/.config/rclone/rclone.conf
Now we can test rclone. Make sure you have an existing folder with a file in it on your Google Drive, so we can expect output from rclone.

We ask for a directory listing of the Google Drive with:

rclone ls googledrive:

43509 rclone test/woodworkerlogo.png

Success, Rclone is now configured!

To sync recorded files to our Google drive, we can run a command like:

rclone copy /srv/recordings/ googledrive: -v --log-file=/var/log/jitsi/jibri/googledrive_upload.log

Short explanation:
rclone <-- call rclone
copy <-- copy the files (could also be move)
/srv/recordings/ <-- source location of the files
googledrive: <-- destination: the rclone profile we created, optionally followed by a folder path in the cloud
-v <-- verbose (get more information in the logs)
--log-file=/var/log/jitsi/jibri/googledrive_upload.log <-- logfile location and name

Output in the logfile:

2020/04/23 00:15:17 INFO  : Google drive root '': Waiting for checks to finish
2020/04/23 00:15:17 INFO  : Google drive root '': Waiting for transfers to finish
2020/04/23 00:15:23 INFO  : oinwsjahyibvtnxk/metadata.json: Copied (new)
2020/04/23 00:15:23 INFO  : oinwsjahyibvtnxk/test_2020-04-16-19-19-20.mp4: Copied (new)
2020/04/23 00:15:23 INFO  : 
Transferred:   	  685.605k / 685.605 kBytes, 100%, 118.716 kBytes/s, ETA 0s
Transferred:            2 / 2, 100%
Elapsed time:         5.7s

It works, files are uploaded!

Creating the Sync Script

We copy the rclone config file to a location where the jibri user can read it (currently only ‘root’ can read the file). Note that rclone also needs to be able to rewrite the file to refresh its OAuth tokens, so we hand ownership to the jibri user:

mkdir -p /home/jibri/.config/rclone/
rsync --recursive ~/.config/rclone/rclone.conf /home/jibri/.config/rclone/
chown -R jibri:jibri /home/jibri/.config/

And then we create the upload script:

touch /etc/jitsi/jibri/
chmod +x /etc/jitsi/jibri/
nano /etc/jitsi/jibri/

Copy the text below into the script:

#!/bin/bash

# Rclone can be invoked to upload local recording to a remote location at a cloud provider.
/usr/bin/rclone copy /srv/recordings/ googledrive:[]/videos/ -v --log-file=/var/log/jitsi/jibri/jitsi_googledrive_upload.log

Run the script and check the log:

(log can be found at /var/log/jitsi/jibri/jitsi_googledrive_upload.log)

Setting Jibri to call our Script

Finally, we make sure Jibri finalizes the recording by uploading our video to Google Drive:

nano /etc/jitsi/jibri/config.json

// The path to the script which will be run on completed recordings
    "finalize_recording_script_path": "/etc/jitsi/jibri/",

(Set the finalize_recording_script_path)
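For orientation, here is a trimmed sketch of how this setting sits in the old-style config.json. The rest of the file (e.g. the xmpp_environments block) is omitted, and the script filename shown is just a placeholder for whatever you named your finalize script:

```json
{
    // The directory Jibri writes recordings to (this tutorial's example)
    "recording_directory": "/srv/recordings",
    // Placeholder name; use the path of your own upload script
    "finalize_recording_script_path": "/etc/jitsi/jibri/finalize_sync.sh"
}
```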

All should be good for automated immediate uploads when recording is finished!

Cheers, Igor


Another comprehensive document from your “How-to” series. Thank you again!

If at some point you plan on creating a how-to for scaling Jibri, it would be amazing.

Love it, perfect timing! Thank you for this helpful post.

Does anybody know if the exact filename (folder name AND filename) can be passed to the script? (The sub-folder storing the recording has a randomized name.)

I found this example script in the Jibri repo. You can get the exact folder name with $1 and then continue from there with bash commands to do whatever you want.

Just to give you an impression what the output of the example script looks like:

$ cat /tmp/finalize.out
This is a dummy finalize script
The script was invoked with recordings directory /config/recordings/ppsqylbdukmvqopv.
You should put any finalize logic (renaming, uploading to a service
or storage provider, etc.) in this script
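To make the $1 mechanics concrete, here is a tiny runnable stand-in; the function name finalize is just for illustration, the real script simply reads $1 at its top level:

```shell
# Jibri invokes the finalize script with the recordings directory as the
# first argument, so inside the script it is available as $1.
finalize() {
    local recordings_dir=$1
    echo "The script was invoked with recordings directory $recordings_dir."
}

finalize /config/recordings/ppsqylbdukmvqopv
```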

Here is an example bash script we now use in our environment (reconstructed here from the variable explanations that follow):

#!/bin/bash

RECORDINGS_DIR=$1
VIDEO_FILE_PATH=$(find $RECORDINGS_DIR -name '*.mp4')
VIDEO_FILE_NAME=${VIDEO_FILE_PATH:36}
UPLOAD_DIR_NAME=${VIDEO_FILE_NAME/%_*/}

/usr/bin/rclone copy $VIDEO_FILE_PATH nextcloud:$UPLOAD_DIR_NAME -v --log-file=/config/logs/nextcloud_upload.log

RECORDINGS_DIR Example: /config/recordings/ppsqylbdukmvqopv

VIDEO_FILE_PATH=$(find $RECORDINGS_DIR -name *.mp4) gets the full path of the video file by searching for mp4 files in RECORDINGS_DIR. Example: /config/recordings/ppsqylbdukmvqopv/testmeeting_2020-04-26-14-26-05.mp4

VIDEO_FILE_NAME=${VIDEO_FILE_PATH:36} uses bash parameter expansion to strip the path from VIDEO_FILE_PATH and only keep the video’s filename. Note this assumes the exact path we’re using: if the path holding your saved recordings is different, you’ll have to adjust the offset 36 accordingly. Example: testmeeting_2020-04-26-14-26-05.mp4

UPLOAD_DIR_NAME=${VIDEO_FILE_NAME/%_*/} uses bash string manipulation to remove the date and file extension from the filename (basically removes everything behind the underscore _). This will be the folder where the uploaded file is saved. Example: testmeeting
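The two expansions can be tried out in isolation, using the example path from this post (pure bash string manipulation, no rclone needed). As a side note, basename gives the filename without depending on the hard-coded offset 36:

```shell
VIDEO_FILE_PATH="/config/recordings/ppsqylbdukmvqopv/testmeeting_2020-04-26-14-26-05.mp4"

VIDEO_FILE_NAME=${VIDEO_FILE_PATH:36}     # character 36 onwards: just the filename
UPLOAD_DIR_NAME=${VIDEO_FILE_NAME/%_*/}   # drop the suffix starting at the underscore

echo "$VIDEO_FILE_NAME"        # testmeeting_2020-04-26-14-26-05.mp4
echo "$UPLOAD_DIR_NAME"        # testmeeting

# Path-length independent alternative for the filename:
basename "$VIDEO_FILE_PATH"    # testmeeting_2020-04-26-14-26-05.mp4
```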


This is good stuff, many thanks!

I am not familiar with sh.
Could you help me?
Right now my recording files are saved on the Jibri server.
I want to save those files on the jitsi-meet server.
Both servers are AWS EC2 instances.
And I just want to save the mp4 file without the sub-folder (the sub-folder storing the recording has a randomized name).
Thanks in advance!

Hi @Khine_Nyo_Thant, please read carefully the post by teutat3s (2 posts up from yours); there you will find how to save just the mp4 file without the directory name. I would not really be able to help you with copying the files from the Jibri server to the Jitsi server, but you don’t really need the Rclone tool for that; a script that uses scp for the copying would be the easiest way to implement it.
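To illustrate the scp idea, here is a runnable sketch that uses throw-away local directories as stand-ins for the two servers; in a real setup the destination would be an ssh target such as user@jitsi-meet-host:/srv/recordings/ (placeholder hostname), and cp would become scp or rsync over ssh:

```shell
# Stand-in for the randomized sub-folder holding the recording:
RECORDINGS_DIR="$(mktemp -d)/ppsqylbdukmvqopv"
mkdir -p "$RECORDINGS_DIR"
touch "$RECORDINGS_DIR/testmeeting_2020-04-26.mp4"   # stand-in recording

# Stand-in for the target server's recordings directory:
DEST="$(mktemp -d)"

# Locate the mp4 inside the randomized sub-folder, then copy only the
# file itself, so the sub-folder name is dropped:
VIDEO_FILE_PATH=$(find "$RECORDINGS_DIR" -name '*.mp4' | head -n 1)
cp "$VIDEO_FILE_PATH" "$DEST/"   # real setup: scp "$VIDEO_FILE_PATH" user@jitsi-meet-host:/srv/recordings/

ls "$DEST"   # testmeeting_2020-04-26.mp4
```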


Thank you so much for your advice.

Hi Woodworker_life,
I am getting the following message when I finish recording the conference:
cat /var/log/jitsi/jibri/jitsi_googledrive_upload.log

2020/06/29 12:22:14 Failed to save config after 10 tries: Failed to create temp file for new config: open /home/jibri/.config/rclone/rclone.conf263013215: permission denied

but when I execute it manually it works:

Just for more information, I am using Docker.
Apart from that, I would like to pass the name of the recording file to the script /etc/jitsi/jibri/ so that the file can be moved to the user’s own Google Drive account when they finish their recording.
I think it should ask the user to log in to a Google account, as you did before? How should it ask for a Google account login before the recording starts?

Thanks for your advice.

hi @Woodworker_Life, can you help? I really need to solve this issue.


hi @Woodworker_Life,
Finally, it works! But what I want to do is let every user save their file to their own Google Drive; currently all users use my credentials, so every file ends up in my Google Drive. Ideally, before copying the file to Google Drive, it should ask each user for their credentials.

Please could you help!


Great you got it working! I am not an avid user of rclone myself, to get this working I had to follow examples from the rclone official documentation, reading through their forums and otherwise searching with various search criteria in my search engine. I can only suggest looking up what rclone documentation or -examples you can find. Sorry, probably not what you hoped for… :wink:

Thanks a lot @Woodworker_Life for this wonderful tutorial. I tried to replicate it for docker-jibri, and below is my Dockerfile.

ARG JITSI_REPO=jitsi
FROM ${JITSI_REPO}/base-java

ARG CHROME_RELEASE=latest
ARG CHROMEDRIVER_MAJOR_RELEASE=latest
# The rclone version to install, passed at build time,
# e.g. --build-arg RCLONE_VER=1.53.1
ARG RCLONE_VER

ENV BUILD_DATE=20202408T131603 \
    ARCH=amd64 \
    SUBCMD="" \
    CONFIG="--config /config/rclone/rclone.conf"

LABEL build_version="Version:- ${RCLONE_VER} Build-date:- ${BUILD_DATE}"

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

RUN apt-dpkg-wrap apt-get update \
        && apt-dpkg-wrap apt-get install -y jibri libgl1-mesa-dri unzip wget rsync \
        && apt-cleanup

RUN [ "${CHROME_RELEASE}" = "latest" ] \
        && curl -4s https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
        && echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
        && apt-dpkg-wrap apt-get update \
        && apt-dpkg-wrap apt-get install -y google-chrome-stable \
        && apt-cleanup \
        || true

RUN [ "${CHROME_RELEASE}" != "latest" ] \
        && curl -4so "/tmp/google-chrome-stable_${CHROME_RELEASE}-1_amd64.deb" "https://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_${CHROME_RELEASE}-1_amd64.deb" \
        && apt-dpkg-wrap apt-get update \
        && apt-dpkg-wrap apt-get install -y "/tmp/google-chrome-stable_${CHROME_RELEASE}-1_amd64.deb" \
        && apt-cleanup \
        || true

RUN [ "${CHROMEDRIVER_MAJOR_RELEASE}" = "latest" ] \
        && CHROMEDRIVER_RELEASE="$(curl -4Ls https://chromedriver.storage.googleapis.com/LATEST_RELEASE)" \
        && curl -4Ls "https://chromedriver.storage.googleapis.com/${CHROMEDRIVER_RELEASE}/chromedriver_linux64.zip" \
        | zcat >> /usr/bin/chromedriver \
        && chmod +x /usr/bin/chromedriver \
        && chromedriver --version

RUN apt-dpkg-wrap apt-get update \
        && apt-dpkg-wrap apt-get install -y jitsi-upload-integrations jq \
        && apt-cleanup

RUN curl -O https://downloads.rclone.org/v${RCLONE_VER}/rclone-v${RCLONE_VER}-linux-${ARCH}.zip && \
    unzip rclone-v${RCLONE_VER}-linux-${ARCH}.zip && \
    cd rclone-*-linux-${ARCH} && \
    cp rclone /usr/bin/ && \
    chown root:root /usr/bin/rclone && \
    chmod 755 /usr/bin/rclone && \
    cd ../ && \
    rm -f rclone-v${RCLONE_VER}-linux-${ARCH}.zip && \
    rm -r rclone-*-linux-${ARCH}

RUN mkdir -p /home/jibri/.config/rclone/

COPY rootfs/ /

VOLUME /config


  1. Once you have the rclone.conf file by following @Woodworker_Life’s tutorial, build your own docker image using the Dockerfile above.

  2. Put the rclone.conf file at ./config/rclone/rclone.conf. I’m using this directory for storing recordings too, under ./config/jibri/recordings. The script is under ./config/jibri/ as well.

  3. Map ./config:/config:Z under volumes. By default ${CONFIG}/jibri:/config:Z is mapped.

  4. Start with docker-compose -f jibri.yml up -d

  5. Now execute docker exec docker-jitsi-meet_jibri_1 rsync --recursive /config/rclone/rclone.conf /home/jibri/.config/rclone/rclone.conf
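For reference, the volume mapping from step 3 would look roughly like this in jibri.yml (a sketch; the service name and surrounding settings depend on your compose file):

```yaml
services:
  jibri:
    volumes:
      - ./config:/config:Z
```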

#!/bin/bash

echo "Recording dir is: $RECORDINGS_DIR"
echo -e "\nUsing Rclone to move the file to googledrive"
/usr/bin/rclone copy ${RECORDINGS_DIR}/*.mp4 googledrive: -v --log-file=/config/rclone/rclone.log
echo -e "\nWaiting for 60 seconds for the file to upload"
sleep 1m

NOTE: I’m still unable to figure out how to automate step 5. It would be great if someone could help here.

When running the rsync command, make sure to check the directory and file permissions of /home/jibri/.config/rclone/rclone.conf. If the permissions are wrong then you might face this error: Failed to save config after 10 tries: Failed to create temp file for new config: open /home/jibri/.config/rclone/rclone.conf701042153: permission denied. You have to check all of this inside the jibri container.


You can completely remove step 5, i.e. there is no need to use rsync (remove it from the Dockerfile too). Map ./config/rclone/rclone.conf:/home/jibri/.config/rclone/rclone.conf.

Support for AWS S3
Use the config below as rclone.conf for an AWS S3 bucket (the remote name between square brackets is your choice; here I use ‘amazons3’). Change the config according to your requirements.

[amazons3]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = XXXX
acl = private

Hi everyone!

@metadata does upload to Amazon S3 work for you? In my case, the log tells me ‘Forbidden’.

I don’t know why it is forbidden, because I made the folder in S3 “public”.

So, my finalize script is this:




/usr/bin/rclone copy $VIDEO_FILE_PATH amazons3:destinationfolderserver -v --log-file=/var/log/jitsi/amazons3_upload.log

Any ideas?

Probably your S3 config file is not in the /home/jibri/ folder

Hi @emrah

Yes, it exists

I haven’t used S3 for a long time, but IIRC your ACL is not OK. It seems that you have read-only access, no write permission…

Mmmmm, I changed to “public-read-write” (both, acl and bucket_acl) and the error persists…