How to send ArrayBuffer using lib-jitsi-meet API

I am implementing file transfer using the lib-jitsi-meet API and need an expert’s help.

I’ve read a file as an ArrayBuffer and sent this payload as the parameter of sendCommandOnce().
But on the receiving side, I only receive a string, not binary data.

I tried putting it in each of the value, attributes, and children properties, but couldn’t get what I want.

    {
        value: the_value_of_the_command,
        attributes: {}, // map with keys the name of the attribute and values - the values of the attributes
        children: []    // array of JS objects with the same structure
    }

This is what I get in all three cases. It only tells me the type of the binary data:

“[object ArrayBuffer]”

How can I overcome this and get the real buffer?
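
For reference, here is roughly what I’m doing (a minimal sketch; `room` is my JitsiConference object, and the command name 'file-transfer' and the `file` variable are placeholders from my own code):

    // Read the selected file as an ArrayBuffer and send it as a command payload.
    const reader = new FileReader();
    reader.onload = () => {
        const buffer = reader.result; // ArrayBuffer

        // Passing the ArrayBuffer directly gets it coerced to a string on the wire,
        // which is why the receiver only sees "[object ArrayBuffer]".
        room.sendCommandOnce('file-transfer', {
            value: buffer,
            attributes: {},
            children: []
        });
    };
    reader.readAsArrayBuffer(file);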

PS: Converting the binary data to a base64-encoded string increases the payload size, so it is not efficient.

My experimental values:
binary length: 16384
base64 encoded length: 21848
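
That ~33% growth is just the standard base64 expansion: every 3 input bytes become 4 output characters, rounded up to a whole group:

    // base64 size for a 16384-byte chunk
    const binaryLength = 16384;
    const base64Length = Math.ceil(binaryLength / 3) * 4; // 5462 * 4 = 21848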

And one more question: if I send a 50 MB file, won’t video or audio get stuck for a while due to the sudden server load increase??

Sorry, one more question :slight_smile: If I send file data, it also comes back to me, and I don’t want that echo. How can I prevent it, to reduce server load? (For a short text message it’s no problem, but for file transfer, :speak_no_evil:)

I sent a 50 MB file as a base64 string for test purposes, and… ah, no!!

It triggered the JitsiMeetJS.events.connection.CONNECTION_FAILED event and I got disconnected from the server, with these errors in the Chrome console:

[Violation] ‘change’ handler took 2271ms
[Violation] ‘setTimeout’ handler took 702ms
Access to XMLHttpRequest at ‘https://test.com/http-bind’ from origin ‘https://192.168.1.176:5001’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource.

2021-06-19T01:34:17.099Z [modules/xmpp/strophe.util.js] <Object.r.Strophe.log>: Strophe: request id 19.1 error 0 happened

[Violation] ‘readystatechange’ handler took 594ms

POST https://test.com/http-bind net::ERR_FAILED

Could you please give me some advice?

https://192.168.1.176:5001 is my web server and https://test.com is my Jitsi server.
Everything works fine without file transfer.
I repeated the test again and again and found that if I send big data (50 MB), I get disconnected 100% of the time with the same error.

What is a good practice for sending data “slowly”?

I added a 100 ms delay per chunk read, and it worked without disconnecting, though the transfer rate seems very slow (it took 58 s for a 10 MB file).

            setTimeout(_ => {
                readSlice(offset);
            }, 100);
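
For context, the whole chunked reader looks roughly like this (a sketch of my loop; the chunk size and the sendChunk() helper, which base64-encodes the slice and calls sendCommandOnce(), are placeholders from my own code):

    const CHUNK_SIZE = 16384; // bytes read per slice
    const reader = new FileReader();
    let offset = 0;

    function readSlice(o) {
        reader.readAsArrayBuffer(file.slice(o, o + CHUNK_SIZE));
    }

    reader.onload = () => {
        const chunk = reader.result;   // ArrayBuffer of up to CHUNK_SIZE bytes
        sendChunk(chunk);              // base64-encode + sendCommandOnce()
        offset += chunk.byteLength;

        if (offset < file.size) {
            // Throttle: wait 100 ms before reading (and sending) the next chunk.
            setTimeout(_ => {
                readSlice(offset);
            }, 100);
        }
    };

    readSlice(0);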

With a 50 ms delay, I’m experiencing disconnects again.

The 100 ms delay works for now, and the slowness is not too serious, but I’m afraid of hitting such a disconnect again in the future even with the 100 ms delay, under some server condition I’m not experiencing now.
:pleading_face:

I’ll try more with 90, 80, 70, 60ms.

:frowning: Disconnected again with a 90 ms delay.
Now I can’t trust 100 ms either, though it worked without any disconnect over several tests.

Help me !!!

Maybe you need some customizations in the Nginx config

Thank you for your reply.
Do you mean this? (a Google result for “max transfer rate limit in nginx”)

Rate – Sets the maximum request rate. In the example, the rate cannot exceed 10 requests per second. NGINX actually tracks requests at millisecond granularity, so this limit corresponds to 1 request every 100 milliseconds (ms).

That sounds reasonable, thanks! If it works, you’re a hero!!!

@iDLE This looks like a promising feature.

One other approach is to simply upload your file to a web server separate from Jitsi, using standard nginx HTTP methods. Once the file is uploaded, share a link to the file in the chat.

You can still use the Jitsi UI to select a file and trigger the upload, but you avoid the rest of the overhead. This should help with the disconnects and solve the “getting data back” issue. It also allows multiple users to pull the file down (as needed) instead of sending it individually to all users through the server.
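
Something along these lines (a rough sketch; the /upload endpoint and its JSON response are assumptions about your own web server, and sendTextMessage() is the regular lib-jitsi-meet chat call):

    // Upload the file to your own web server, then share only the URL in the chat.
    async function shareFile(room, file) {
        const body = new FormData();
        body.append('file', file);

        // Hypothetical endpoint that stores the file and returns { url: "https://..." }.
        const response = await fetch('https://192.168.1.176:5001/upload', {
            method: 'POST',
            body
        });
        const { url } = await response.json();

        // Only the short link goes through the XMPP server.
        room.sendTextMessage('File shared: ' + url);
    }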

I meant the allowed POST body size, so that very big data can be sent.
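
For example, something like this in the nginx server block that proxies /http-bind (the 100m value is just an example; nginx defaults to 1m):

    # Allow large request bodies; with the default of 1m, nginx rejects
    # bigger POSTs with 413 Request Entity Too Large.
    client_max_body_size 100m;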

I’d recommend having a look at XEP-0363: HTTP File Upload and the respective Prosody module. There is also a strophe.js plugin; however, it does not seem to be actively maintained. There may be other plugins I did not find.

Of course, this would also require some adjustments to your nginx config, and it would probably be a good idea to purge all related files after the meeting is over.

You are trying to send data through XMPP presence. XMPP is an XML text-based protocol, and it is not recommended to send big data through it.
We recently worked on minimizing the size of presences to improve performance. If you try to send big messages through Prosody, you will get disconnections and problems for all conferences running on that Prosody instance.

There is nothing to recommend at the moment… you had better look for an external service, or software you can install, and use that in parallel.

That’s exactly what XEP-0363 is about, AFAIK: Prosody negotiates a temporary upload slot with an upload component, to which the client can upload the file via HTTP; the download link to the file is then shared in the MUC. In this case, the file is not sent via XMPP presences.

The Prosody module mod_http_upload.lua is just an implementation of the upload component as a Prosody module. It’s possible to use external upload component services instead, to reduce load on Prosody. To this end, one would configure Prosody with mod_http_upload_external (see the Prosody Community Modules page; it includes some links to external upload component implementations, but you can find more on GitHub).
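
Once the slot has been negotiated (via the strophe.js plugin or your own IQ handling), the client-side part is just an HTTP PUT plus sharing the link; a sketch, assuming you already have the PUT and GET URLs from the slot response:

    // XEP-0363: upload the file body to the negotiated slot, then share the link.
    async function uploadAndShare(room, file, putUrl, getUrl) {
        await fetch(putUrl, {
            method: 'PUT',
            headers: { 'Content-Type': file.type || 'application/octet-stream' },
            body: file
        });

        // Only the download link travels through XMPP / the MUC.
        room.sendTextMessage(getUrl);
    }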
