Best server configuration for low bandwidth with lowest latency

We already have a live Jitsi server running and can create rooms through a custom lib-jitsi-meet API, but we are still facing problems with low bandwidth on both the client and server side. Our current configuration:

Server config:
/* eslint-disable no-unused-vars, no-var */

var config = {
    // Configuration

    // Alternative location for the configuration.
    // configLocation: './config.json',

    // Custom function which given the URL path should return a room name.
    // getroomnode: function (path) { return 'someprefixpossiblybasedonpath'; },

    // Connection

    hosts: {
        // XMPP domain.
        domain: 'meet.jitsi',

        // When using authentication, domain for guest users.
        anonymousdomain: '',

        // Domain for authenticated users. Defaults to <domain>.
        authdomain: 'meet.jitsi',

        // Jirecon recording component domain.
        // jirecon: '',

        // Call control component (Jigasi).
        // call_control: '',

        // Focus component domain. Defaults to focus.<domain>.
        // focus: '',

        // XMPP MUC domain. FIXME: use XEP-0030 to discover it.
        muc: ''
    },

    // BOSH URL. FIXME: use XEP-0156 to discover it.
    bosh: '/http-bind',

    // The name of client node advertised in XEP-0115 'c' stanza
    clientNode: '',

    // The real JID of focus participant - can be overridden here
    focusUserJid: '',

    // Testing / experimental features.

    testing: {
        // Enables experimental simulcast support on Firefox.
        enableFirefoxSimulcast: false,

        // P2P test mode disables automatic switching to P2P when there are 2
        // participants in the conference.
        p2pTestMode: false

        // Enables the test specific features consumed by jitsi-meet-torture
        // testMode: false
    },
    // Disables ICE/UDP by filtering out local and remote UDP candidates in
    // signalling.
    // webrtcIceUdpDisable: false,

    // Disables ICE/TCP by filtering out local and remote TCP candidates in
    // signalling.
    // webrtcIceTcpDisable: false,

    // Media

    // Audio

    // Disable measuring of audio levels.
    // disableAudioLevels: false,

    // Start the conference in audio only mode (no video is being received nor
    // sent).
    // startAudioOnly: false,

    // Every participant after the Nth will start audio muted.
    // startAudioMuted: 10,

    // Start calls with audio muted. Unlike the option above, this one is only
    // applied locally. FIXME: having these 2 options is confusing.
    // startWithAudioMuted: false,

    // Enabling it (with #params) will disable local audio output of remote
    // participants and to enable it back a reload is needed.
    // startSilent: false

    // Video

    // Sets the preferred resolution (height) for local video. Defaults to 720.
    resolution: 720,
    startBitrate: "800",
    // w3c spec-compliant video constraints to use for video capture. Currently
    // used by browsers that return true from lib-jitsi-meet's
    // util#browser#usesNewGumFlow. The constraints are independent from
    // this config's resolution value. Defaults to requesting an ideal aspect
    // ratio of 16:9 with an ideal resolution of 720.
    constraints: {
        video: {
            aspectRatio: 16 / 9,
            height: {
                ideal: 720,
                max: 720,
                min: 180
            },
            width: {
                ideal: 1280,
                max: 1280,
                min: 320
            }
        }
    },

    // Enable / disable simulcast support.
    // disableSimulcast: false,

    // Enable / disable layer suspension.  If enabled, endpoints whose HD
    // layers are not in use will be suspended (no longer sent) until they
    // are requested again.
    // enableLayerSuspension: false,

    // Suspend sending video if bandwidth estimation is too low. This may cause
    // problems with audio playback. Disabled until these are fixed.
    disableSuspendVideo: true,

    // Every participant after the Nth will start video muted.
    // startVideoMuted: 10,

    // Start calls with video muted. Unlike the option above, this one is only
    // applied locally. FIXME: having these 2 options is confusing.
    // startWithVideoMuted: false,

    // If set to true, prefer to use the H.264 video codec (if supported).
    // Note that it's not recommended to do this because simulcast is not
    // supported when  using H.264. For 1-to-1 calls this setting is enabled by
    // default and can be toggled in the p2p section.
    // preferH264: true,

    // If set to true, disable H.264 video codec by stripping it out of the
    // SDP.
    // disableH264: false,

    // Desktop sharing

    // The ID of the jidesha extension for Chrome.
    desktopSharingChromeExtId: null,

    // Whether desktop sharing should be disabled on Chrome.
    // desktopSharingChromeDisabled: false,

    // The media sources to use when using screen sharing with the Chrome
    // extension.
    desktopSharingChromeSources: [ 'screen', 'window', 'tab' ],

    // Required version of Chrome extension
    desktopSharingChromeMinExtVersion: '0.1',

    // Whether desktop sharing should be disabled on Firefox.
    // desktopSharingFirefoxDisabled: false,

    // Optional desktop sharing frame rate options. Default value: min:5, max:5.
    desktopSharingFrameRate: {
        min: 10,
        max: 30
    },

    // Try to start calls with screen-sharing instead of camera video.
    // startScreenSharing: false,

    // Recording

    // Whether to enable file recording or not.
    // fileRecordingsEnabled: false,
    // Enable the dropbox integration.
    // dropbox: {
    //     appKey: '<APP_KEY>' // Specify your app key here.
    //     // A URL to redirect the user to, after authenticating
    //     // by default uses:
    //     // 'https://meet.jitsi/static/oauth.html'
    //     redirectURI:
    //          'https://meet.jitsi/subfolder/static/oauth.html'
    // },
    // When integrations like dropbox are enabled only that will be shown,
    // by enabling fileRecordingsServiceEnabled, we show both the integrations
    // and the generic recording service (its configuration and storage type
    // depends on jibri configuration)
    // fileRecordingsServiceEnabled: false,
    // Whether to show the possibility to share file recording with other people
    // (e.g. meeting participants), based on the actual implementation
    // on the backend.
    // fileRecordingsServiceSharingEnabled: false,

    // Whether to enable live streaming or not.
    // liveStreamingEnabled: false,

    // Transcription (in interface_config,
    // subtitles and buttons can be configured)
    // transcribingEnabled: false,

    // Misc

    // Default value for the channel "last N" attribute. -1 for unlimited.
    channelLastN: -1,

    // Disables or enables RTX (RFC 4588) (defaults to false).
    // disableRtx: false,

    // Disables or enables TCC (the default is in Jicofo and set to true)
    // (draft-holmer-rmcat-transport-wide-cc-extensions-01). This setting
    // affects congestion control, it practically enables send-side bandwidth
    // estimations.
    enableTcc: true,

    // Disables or enables REMB (the default is in Jicofo and set to false)
    // (draft-alvestrand-rmcat-remb-03). This setting affects congestion
    // control, it practically enables recv-side bandwidth estimations. When
    // both TCC and REMB are enabled, TCC takes precedence. When both are
    // disabled, then bandwidth estimations are disabled.
    enableRemb: true,

    // Defines the minimum number of participants to start a call (the default
    // is set in Jicofo and set to 2).
    // minParticipants: 2,

    // Use XEP-0215 to fetch STUN and TURN servers.
    useStunTurn: true,

    // Enable IPv6 support.
    useIPv6: true,

    // Enables / disables a data communication channel with the Videobridge.
    // Values can be 'datachannel', 'websocket', true (treat it as
    // 'datachannel'), undefined (treat it as 'datachannel') and false (don't
    // open any channel).
    openBridgeChannel: true,

    // UI

    // Use display name as XMPP nickname.
    // useNicks: false,

    // Require users to always specify a display name.
    // requireDisplayName: true,

    // Whether to use a welcome page or not. In case it's false a random room
    // will be joined when no room is specified.
    enableWelcomePage: true,

    // Enabling the close page will ignore the welcome page redirection when
    // a call is hangup.
    // enableClosePage: false,

    // Disable hiding of remote thumbnails when in a 1-on-1 conference call.
    // disable1On1Mode: false,

    // Default language for the user interface.
    defaultLanguage: 'en',

    // If true all users without a token will be considered guests and all users
    // with token will be considered non-guests. Only guests will be allowed to
    // edit their profile.
    enableUserRolesBasedOnToken: false,

    // Whether or not some features are checked based on token.
    // enableFeaturesBasedOnToken: false,

    // Enable lock room for all moderators, even when userRolesBasedOnToken is enabled and participants are guests.
    // lockRoomGuestEnabled: false,

    // When enabled the password used for locking a room is restricted to up to the number of digits specified
    // roomPasswordNumberOfDigits: 10,
    // default: roomPasswordNumberOfDigits: false,

    // Message to show the users. Example: 'The service will be down for
    // maintenance at 01:00 AM GMT,
    // noticeMessage: '',

    // Enables calendar integration, depends on googleApiApplicationClientID
    // and microsoftApiApplicationClientID
    // enableCalendarIntegration: false,

    // Stats

    // Whether to enable stats collection or not in the TraceablePeerConnection.
    // This can be useful for debugging purposes (post-processing/analysis of
    // the webrtc stats) as it is done in the jitsi-meet-torture bandwidth
    // estimation tests.
    // gatherStats: false,

    // To enable sending statistics to you must provide the
    // Application ID and Secret.
    // callStatsID: '',
    // callStatsSecret: '',

    // enables callstatsUsername to be reported as statsId and used
    // by callstats as repoted remote id
    // enableStatsID: false

    // enables sending participants display name to callstats
    // enableDisplayNameInStats: false

    // Privacy

    // If third party requests are disabled, no other server will be contacted.
    // This means avatars will be locally generated and callstats integration
    // will not function.
    // disableThirdPartyRequests: false,

    // Peer-To-Peer mode: used (if enabled) when there are just 2 participants.

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            { urls: '' },
            { urls: '' },
            { urls: '' }
        ],

        // Sets the ICE transport policy for the p2p connection. At the time
        // of this writing the list of possible values are 'all' and 'relay',
        // but that is subject to change in the future. The enum is defined in
        // the WebRTC standard:
        // If not set, the effective value is 'all'.
        // iceTransportPolicy: 'all',

        // If set to true, it will prefer to use H.264 for P2P calls (if H.264
        // is supported).
        preferH264: true,

        // If set to true, disable H.264 video codec by stripping it out of the
        // SDP.
        disableH264: false

        // How long we're going to wait, before going back to P2P after the 3rd
        // participant has left the conference (to filter out page reload).
        // backToP2PDelay: 5
    },

    analytics: {
        // The Google Analytics Tracking ID:
        // googleAnalyticsTrackingId: 'your-tracking-id-UA-123456-1'

        // The Amplitude APP Key:
        // amplitudeAPPKey: '<APP_KEY>'

        // Array of script URLs to load as lib-jitsi-meet "analytics handlers".
        // scriptURLs: [
        //      "libs/analytics-ga.min.js", // google-analytics
        //      ""
        // ],
    },

    // Information about the jitsi-meet instance we are connecting to, including
    // the user region as seen by the server.
    deploymentInfo: {
        // shard: "shard1",
        // region: "europe",
        // userRegion: "asia"
    },

    // Local Recording

    // localRecording: {
    // Enables local recording.
    // Additionally, 'localrecording' (all lowercase) needs to be added to
    // TOOLBAR_BUTTONS in interface_config.js for the Local Recording
    // button to show up on the toolbar.
    //     enabled: true,

    // The recording format, can be one of 'ogg', 'flac' or 'wav'.
    //     format: 'flac'

    // }

    // Options related to end-to-end (participant to participant) ping.
    // e2eping: {
    //   // The interval in milliseconds at which pings will be sent.
    //   // Defaults to 10000, set to <= 0 to disable.
    //   pingInterval: 10000,
    //   // The interval in milliseconds at which analytics events
    //   // with the measured RTT will be sent. Defaults to 60000, set
    //   // to <= 0 to disable.
    //   analyticsInterval: 60000,
    //   }

    // If set, will attempt to use the provided video input device label when
    // triggering a screenshare, instead of proceeding through the normal flow
    // for obtaining a desktop stream.
    // NOTE: This option is experimental and is currently intended for internal
    // use only.
    // _desktopSharingSourceDevice: 'sample-id-or-label'

    // If true, any checks to handoff to another application will be prevented
    // and instead the app will continue to display in the current browser.
    // disableDeepLinking: false

    // A property to disable the right click context menu for localVideo
    // the menu has option to flip the locally seen video for local presentations
    // disableLocalVideoFlip: false

    // List of undocumented settings used in jitsi-meet

    // List of undocumented settings used in lib-jitsi-meet
};

/* eslint-enable no-unused-vars, no-var */

I'd like to hear some suggestions about some of the fields here, such as startBitrate (800), resolution (720), framerate (10-30), simulcast (disabled), off-stage layer suspension (not sure about this one), video encoding, and disableSuspendVideo (enabled).

We will test with simulcast today (I understand how it works), but I am confused about off-stage layer suspension (is this related to simulcast?) and the startBitrate field: how can they affect the conference, for better or for worse? Also, how can I set the default video codec (there is only the preferH264 option, which I didn't enable globally, only for p2p) to VP8 or another codec, and what values can I use? I'd also appreciate feedback on an optimized framerate/resolution.

I can sacrifice some quality if needed, but I want the lowest latency, and I don't want participants' video to be dropped because of network issues. I would also love to use the best quality possible, but only after ensuring low latency. I just want feedback from anyone who has already dealt with these issues. Thanks in advance :heart:
@damencho @saghul @xranby @Boris_Grozev

Your configuration uses 720p, but in the title you said you want low bandwidth. These two are at odds. You will need to go lower (maybe 540p or 360p) if you want to reduce your bandwidth utilization.
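A lower cap can be expressed directly in the constraints block of config.js; for example, a hypothetical 360p version of the configuration posted above (the values are illustrative, tune them for your network):

```javascript
// Illustrative low-bandwidth video constraints for config.js:
// cap local video at 360p instead of 720p.
resolution: 360,
constraints: {
    video: {
        aspectRatio: 16 / 9,
        height: {
            ideal: 360,
            max: 360,
            min: 180
        }
    }
},
```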


Yeah, I will reduce that, but I'd also like to know more about how the other fields affect the conference, like framerate and off-stage layer suspension. Is startBitrate also a big issue?
Thanks for the reply :heart:


Internet speed is still not up to the mark in my country, so ensuring low latency and the highest possible conference quality/experience on low or medium internet speeds is my priority, even if I have to lower the resolution/fps/bitrate a bit. I am just sharing my findings.

I am sure that enabling simulcast and layer suspension has a great effect on saving bandwidth, on both the server side and the client side, but to make it work properly you have to set enableTcc, enableRemb, or both to true, which enables server/receiver-side congestion control. Capping the bitrate for each simulcast layer also has a great impact: for 540p, WebRTC can go up to 1.2 Mbit/s or more, but I capped it at 1 Mbit/s and checked whether the quality was good enough for me; I did the same for 270p and 180p. Setting openBridgeChannel to 'websocket' and configuring it also has a good impact on simulcast resolution switching. I will look into the VP9 codec, which can also save a lot of bandwidth, but it is not yet stable on all platforms.
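Collected as a config.js fragment, the flags discussed above would look roughly like this (a sketch; the TCC/REMB defaults actually live in Jicofo, so verify against your deployment):

```javascript
// Bandwidth-saving flags discussed above (verify against your version).
disableSimulcast: false,        // keep simulcast on
enableLayerSuspension: true,    // suspend unused HD layers
enableTcc: true,                // send-side bandwidth estimation
enableRemb: true,               // recv-side estimation; TCC takes precedence
openBridgeChannel: 'websocket', // helps simulcast layer switching
```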
Using maxOpusBitrate also has a great impact (and enableOpusRed may improve sound quality at the cost of some bandwidth). enableRtx also helps. Setting enableLipSync to false improves audio continuity, though audio can arrive after video on low bandwidth. Setting disableAudioLevels to true saves some client-side processing (UI rendering) and makes the conference a little smoother, but you sacrifice the small audio visualizer. Disabling stereo and audio processing (disableAP) can also save client-side processing, but it may degrade audio quality; my advice is to check all the audio processing parameters, such as noiseSuppression and autoGainControl, in your environment and enable only what you really need for your use case.
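And the audio-side tweaks as a fragment (option names as used in this thread; check them against your config.js version before applying):

```javascript
// Audio-side savings discussed above (names as used in this thread).
disableAudioLevels: true, // skip rendering the audio-level meter
stereo: false,            // mono is enough for speech
enableLipSync: false,     // favor audio continuity over A/V sync
// disableAP: true,       // only if client CPU is the real bottleneck
```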

But I still have some confusion, for example about startBitrate (can it make it harder for a low-bandwidth participant to enter the room?), and about using a personal TURN/STUN server or enabling/disabling TURN over UDP/TCP, and so on. That's all I have; please share anything you think could be helpful. The developers push these parameters to config.js, but the commented documentation is not enough for many of us to judge, so it would be helpful if someone with experience could elaborate on all the configs, or at least the important ones for cases like ours. All the best :heartbeat:


Note that maxOpusBitrate is not useful at all if what you want is good speech intelligibility. 64 kbit/s of effective audio bitrate (not taking packet loss into account) is excellent; more is useless (music is another matter).

We are using a max average Opus bitrate of 16000, and it was not bad in our experience! What bitrate are you using, and what is your experience?

IIRC, 20000 is the default max-average-bitrate used for Opus, so about 40k when encoding stereo. If you set this to 16k, the quality will likely still be good enough for speech. However, the performance/bandwidth gain you can get by tuning audio encoding is almost negligible compared to adjusting video settings.
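To make the arithmetic concrete (assuming the maxOpusBitrate option mentioned earlier in the thread takes bits per second, as Opus's max-average-bitrate does):

```javascript
// ~20 kbit/s per channel is reportedly the default, i.e. ~40 kbit/s
// for stereo; 16 kbit/s mono remains intelligible for speech.
maxOpusBitrate: 16000,
```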

@Fuji I am very interested in your findings and tweaks; we too are covering areas with very poor bandwidth and are trying to find ways to help. We also want to be able to apply these adjustments per region, so that users in areas with good internet get one config and those with poor bandwidth get another. Can you please share more details about your final configs? Thanks!!


Yes, it's enough in most cases. I'd say that 64 kbit/s in stereo (thanks to @plotka for the precision) gives great quality for voice. You can get away with much less and still get good quality. IMO 16 kbit/s mono is a bit limiting with Opus, and even with my poor hearing I can tell the difference. For example, when listening to a foreign language I very much appreciate good sound quality; when listening to my mother tongue it matters less, because the brain compensates faster for missing or distorted parts.

Thanks to Fuji and to others for your comments and suggestions!

Saving bandwidth is important for us (Australia has some areas with very limited bandwidth).

Sound quality is nice.

Stereo is not required for our use. Searching the Jitsi Community posts, I found a suggestion to add "stereo: true" to config.js, so I added "stereo: false" instead. Another post mentioned "…browsers' default (i.e. mono) audio settings…", so I guess the default is already false, but explicitly setting stereo to false should not be an issue.

I did find that "disableAudioLevels: true" reduced the load on older hardware (CPU usage dropped by about 20%). Fuji, I did not realise that this disabled auto focus changing, but that might be a good thing: while I liked auto focus changing, some users were confused by it. For large groups auto focus changing might be necessary, but when everyone fits easily in tile view, we tend to prefer that mode when the moderator is not leading (i.e. presenting) the meeting.

What would "aspectRatio: 4 / 3" do for bandwidth? I expected it to reduce bandwidth, and it would be fine for individual participants, but for meeting rooms or multiple people in front of a single camera the reduced width could be an issue. In my test it saved just under 3% of bandwidth, and the video quality seemed worse for a reason I don't understand, so there was not much point for me and I set things back to 16:9.
What are your thoughts/experiences with the 4:3 aspect ratio?

    resolution: 720,
    constraints: {
        video: {
            aspectRatio: 4 / 3,
            height: {
                ideal: 720,
                max: 720,
                min: 180
            },
            width: {
                ideal: 960,
                max: 960,
                min: 320
            }
        }
    },

Yeah, we are still making changes and testing the output. I have included everything we found positive in the post. I don't think STUN/TURN will have much impact, but I am worried about server location: we are currently using a North American server for the videobridge and ping is still high. Bangladeshi server performance was not stable enough in our experience (not sure about the current situation). channelLastN will also have a great impact (though it is not working perfectly yet, according to the developers). As far as I know, the Jitsi team is working on pagination-like behaviour (mainly on the signalling side) so that a client can request a specific participant's video, or a list of participants' videos, from the videobridge; that would save a lot of bandwidth and also allow large meetings. VP9 can also have a great impact, but we haven't tried it yet!
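For reference, channelLastN is the knob mentioned above; it already appears (unlimited) in the config at the top of the thread, and capping it is a one-line change (the value 4 here is illustrative):

```javascript
// Forward only the last N active speakers' video to each receiver;
// -1 (the value in the config above) means unlimited.
channelLastN: 4,
```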


Thanks for posting those settings; most of them I had already implemented, a few I had not, and your post helped confirm my decisions, which was great. During a meeting we usually all switch to LD to reduce bandwidth. The presenter/moderator can run in HD, so the few who have enough bandwidth for HD can see a higher-resolution image, and those who don't can use LD.
Please keep us informed of your progress.

In our recent testing we saw that setting "disableAudioLevels" to true does not actually disable "auto focus changing" (sorry for not noticing that before; I assumed it was handled on the client side, with the client measuring just enough to render the UI while the JVB does the complex computation and makes the decision). Auto focus changing still happens because it is handled by the JVB. The audio level detection mainly wastes CPU by being rendered all the time. The JVB determines who is on focus, though I believe this can be overridden on the client side.

Thanks Fuji. I recently found that with disableAudioLevels set to true, "auto focus changing" was still happening, which I am pleased about, as people who do not use tiled view appreciate this feature.

However, I still believe that setting disableAudioLevels to true reduces the load on participants' CPUs by 20% or more. It is very helpful to disable, as a number of our participants are using old, underpowered computers.

Well, this is expected - they are two different flags. If you were trying to disable “auto focus changing” (dominant speaker indicator), you would need to do that in interface_config.js:
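Presumably something like this in interface_config.js (DISABLE_DOMINANT_SPEAKER_INDICATOR is the flag I believe is meant here; check the name in your jitsi-meet version):

```javascript
// interface_config.js (not config.js) -- hides the dominant speaker
// indicator; a sketch, verify the flag exists in your version.
var interfaceConfig = {
    // ...other UI settings...
    DISABLE_DOMINANT_SPEAKER_INDICATOR: true
};
```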

