Jitsi/Jibri Localhost Connection

I am trying to connect to Jibri, and I keep getting this error:

BridgeChannel.js:83 WebSocket connection to 'wss://localhost/colibri-ws/default-id/b8d2fdc0f429527e/0f19a61d?pwd=4o9bc21gui4b8mnuaim85o738n' failed: 
_initWebSocket @ BridgeChannel.js:83
p @ BridgeChannel.js:72
initializeBridgeChannel @ RTC.js:224
ae._setBridgeChannel @ JitsiConference.js:2105
ae._acceptJvbIncomingCall @ JitsiConference.js:2036
ae.onIncomingCall @ JitsiConference.js:1984
a.emit @ events.js:152
onJingle @ strophe.jingle.js:171
run @ strophe.umd.js:1875
(anonymous) @ strophe.umd.js:3157
forEachChild @ strophe.umd.js:830
_dataRecv @ strophe.umd.js:3146
_onRequestStateChange @ strophe.umd.js:5012
Logger.js:154 2021-11-17T11:24:16.123Z [modules/RTC/BridgeChannel.js] <WebSocket.e.onclose>:  Channel closed: 1006 

Is this error caused by trying to connect to localhost, or is it something else?

Your deployment seems to be misconfigured. Unless Jibri is on the same machine as the rest of the Jitsi components, trying to connect to localhost will fail.
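A quick way to hunt for it, assuming a standard Debian-package install where the configs live under /etc/jitsi and /etc/prosody, is to grep everything at once:

# Search the Jitsi and Prosody configs for a stray "localhost"
grep -rn "localhost" /etc/jitsi/ /etc/prosody/ 2>/dev/null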

Do you have any clues as to where I should start looking for this misconfiguration? My /etc/jitsi/jibri/jibri.conf looks like this:

jibri {
  // A unique identifier for this Jibri
  // TODO: eventually this will be required with no default
  id = ""
  // Whether or not Jibri should return to idle state after handling
  // (successfully or unsuccessfully) a request.  A value of 'true'
  // here means that a Jibri will NOT return back to the IDLE state
  // and will need to be restarted in order to be used again.
  single-use-mode = false
  api {
    http {
      external-api-port = 2222
      internal-api-port = 3333
    }
    xmpp {
      // See example_xmpp_envs.conf for an example of what is expected here
      environments = [
        {
          name = "prod environment"
          xmpp-server-hosts = ["meet.example.com"]
          xmpp-domain = "meet.example.com"

          control-muc {
            domain = "internal.auth.meet.example.com"
            room-name = "JibriBrewery"
            nickname = "jibri-nickname"
          }

          control-login {
            domain = "auth.meet.example.com"
            username = "xxxxx"
            password = "xxxxx"
          }

          call-login {
            domain = "recorder.meet.example.com"
            username = "xxxxx"
            password = "xxxxx"
          }

          strip-from-room-domain = "conference."
          usage-timeout = 0
          trust-all-xmpp-certs = true
        }
      ]
    }
  }
  recording {
    recordings-directory = "/srv/recordings"
    # TODO: make this an optional param and remove the default
    finalize-script = "/srv/finalize_recording.sh"
  }
  streaming {
    // A list of regex patterns for allowed RTMP URLs.  The RTMP URL used
    // when starting a stream must match at least one of the patterns in
    // this list.
    rtmp-allow-list = [
      // By default, all services are allowed
      ".*"
    ]
  }
  ffmpeg {
    resolution = "1920x1080"
    // The audio source that will be used to capture audio on Linux
    audio-source = "alsa"
    // The audio device that will be used to capture audio on Linux
    audio-device = "plug:bsnoop"
  }
  chrome {
    // The flags which will be passed to chromium when launching
    flags = [
      "--use-fake-ui-for-media-stream",
      "--start-maximized",
      "--kiosk",
      "--enabled",
      "--disable-infobars",
      "--autoplay-policy=no-user-gesture-required"
    ]
  }
  stats {
    enable-stats-d = true
  }
  webhook {
    // A list of subscribers interested in receiving webhook events
    subscribers = []
  }
  jwt-info {
    // The path to a .pem file which will be used to sign JWT tokens used in webhook
    // requests.  If not set, no JWT will be added to webhook requests.
    # signing-key-path = "/path/to/key.pem"

    // The kid to use as part of the JWT
    # kid = "key-id"

    // The issuer of the JWT
    # issuer = "issuer"

    // The audience of the JWT
    # audience = "audience"

    // The TTL of each generated JWT.  Can't be less than 10 minutes.
    # ttl = 1 hour
  }
  call-status-checks {
    // If all clients have their audio and video muted and if Jibri does not
    // detect any data stream (audio or video) coming in, it will stop
    // recording after NO_MEDIA_TIMEOUT expires.
    no-media-timeout = 30 seconds

    // If all clients have their audio and video muted, Jibri considers this
    // as an empty call and stops the recording after ALL_MUTED_TIMEOUT expires.
    all-muted-timeout = 10 minutes

    // When detecting if a call is empty, Jibri takes into consideration for how
    // long the call has been empty already. If it has been empty for more than
    // DEFAULT_CALL_EMPTY_TIMEOUT, it will consider it empty and stop the recording.
    default-call-empty-timeout = 30 seconds
  }
}
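One thing worth double-checking while you are at it: the control-login and call-login accounts above must actually exist in Prosody on the jitsi-meet server. Assuming the conventional usernames jibri and recorder (placeholders here, since yours are redacted), they would have been created with:

# On the jitsi-meet server; credentials must match jibri.conf exactly
prosodyctl register jibri auth.meet.example.com "xxxxx"
prosodyctl register recorder recorder.meet.example.com "xxxxx"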



How is this log related to Jibri? It looks like a JVB issue.
Can you share your jvb.conf?

My jvb.conf

videobridge {
    http-servers {
        public {
            port = 9090
        }
    }
    websockets {
        enabled = true
        domain = "meet.example.com:443"
        tls = true
    }
}
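For the wss://…/colibri-ws/… URL from the first error to work, the web server in front of jitsi-meet also has to proxy the /colibri-ws/ path through to JVB's public HTTP server (port 9090 above). Assuming the stock nginx setup from the Debian packages, where the site file is named after the domain, this shows the relevant block:

# Show the colibri-ws proxy block in the jitsi-meet nginx site config
grep -B1 -A8 "colibri-ws" /etc/nginx/sites-available/meet.example.com.conf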

Do you have the real domain in this field, or is it exactly like you pasted?

@emrah I have my real domain in there in the actual config.

When you ping your domain from the Jibri server, what is the result?
localhost…?

@emrah When I ping the Jibri instance, it hangs:

root@ip-10-0-0-230:/home/ubuntu# ping 1.234.567.89
PING 1.234.567.89 (1.234.567.89) 56(84) bytes of data.

Also, when I go to 1.234.567.89/jibri/api/v1.0/health, the site cannot be reached.
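(Jibri's HTTP API listens on the external API port from jibri.conf, 2222 above, rather than on 80/443, so a check from the instance itself would look something like this:)

# Query Jibri's health endpoint on the configured external API port
curl http://127.0.0.1:2222/jibri/api/v1.0/health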

On the instance, Jibri is running:

root@ip-10-0-0-208:/home/ubuntu# sudo systemctl status jibri
● jibri.service - Jibri Process
   Loaded: loaded (/etc/systemd/system/jibri.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-11-17 13:25:43 UTC; 11min ago
  Process: 2239 ExecStop=/opt/jitsi/jibri/graceful_shutdown.sh (code=exited, status=0/SUCCESS)
 Main PID: 2247 (java)
    Tasks: 48 (limit: 4915)
   CGroup: /system.slice/jibri.service
           └─2247 java -Djava.util.logging.config.file=/etc/jitsi/jibri/logging.properties -Dconfig.file=/etc/jitsi/jibri/jibri.conf -jar /opt/jit

Nov 17 13:25:43 ip-10-0-0-208 systemd[1]: Started Jibri Process.
Nov 17 13:25:45 ip-10-0-0-208 launch.sh[2247]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
Nov 17 13:25:45 ip-10-0-0-208 launch.sh[2247]: SLF4J: Defaulting to no-operation (NOP) logger implementation
Nov 17 13:25:45 ip-10-0-0-208 launch.sh[2247]: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

And the Jibri log keeps saying:

2021-11-17 13:36:44.993 FINE: [19] org.jitsi.jibri.webhooks.v1.WebhookClient.log() Updating 0 subscribers of status
2021-11-17 13:37:44.993 FINE: [19] org.jitsi.jibri.webhooks.v1.WebhookClient.log() Updating 0 subscribers of status

Ping the jitsi-meet domain name on the Jibri instance, not the IP…

@emrah So pinging works:

ping meet.example.com
PING meet.example.com (1.23.45.67) 56(84) bytes of data.
64 bytes from ec2-23-45-67.compute-1.amazonaws.com (1.23.45.67): icmp_seq=1 ttl=63 time=0.261 ms
64 bytes from ec2-23-45-67.compute-1.amazonaws.com (1.23.45.67): icmp_seq=2 ttl=63 time=0.526 ms

I’m trying to find where this localhost comes from… Your jitsi-meet address should be there instead.
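One place it can hide, if your install still has the legacy file (an assumption on my part, since you haven't pasted it): the old sip-communicator.properties settings can take precedence over jvb.conf. Worth a look:

# Check whether the legacy JVB properties file pins the websocket domain
grep -n "COLIBRI_WS" /etc/jitsi/videobridge/sip-communicator.properties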

@emrah You know what's hilarious, I just did an apt-get update/upgrade and restarted, and magically the error is gone. I now have a new issue where it's no longer finding any of the Jibri instances, but thanks for your help on the above.
