Did our Jitsi Meet Torture work?

Hey everyone, I recently ran a test on jitsi-meet-torture using a GCP Kubernetes cluster with 5 participants sending audio, and I received the log messages below. I am relatively new to jitsi-meet-torture and couldn't find much information about skipped tests on this forum. Can someone tell me whether the 298 skipped tests mean we didn't run anything? If so, how do we go about fixing this?
[WARNING] Tests run: 299, Failures: 0, Errors: 0, Skipped: 298, Time elapsed: 133.695 s - in TestSuite
[INFO]
[INFO] Results:
[INFO]
[WARNING] Tests run: 299, Failures: 0, Errors: 0, Skipped: 298
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:20 min
[INFO] Finished at: 2020-07-21T11:00:40-04:00
[INFO] ------------------------------------------------------------------------

What is the command you used to run torture?

./scripts/malleus.sh --conferences=2 --participants=4 --senders=1 --audio-senders=2 --duration=120 --room-name-prefix=hamertesting --hub-url=http://:4444/wd/hub --instance-url=https://jitsi.dylantknguyen.com

So you are running only one test, so the rest are skipped … This is normal.
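For context: malleus.sh is a wrapper around Maven that restricts the run to the MalleusJitsificus load test, so every other test in the suite is reported as skipped rather than failed. Roughly something like the following (a sketch only; the property names are the ones documented in the jitsi-meet-torture README, not the literal contents of the script):

# Sketch: run only the Malleus load test against your deployment.
# malleus.sh also passes the Selenium hub address and the
# conference/participant counts via additional properties derived
# from its CLI flags.
mvn test \
  -Djitsi-meet.tests.toRun=MalleusJitsificus \
  -Djitsi-meet.instance.url=https://jitsi.dylantknguyen.com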

By the way, I know that running Jitsi Meet Torture requires a Selenium Grid, which would need a lot of infrastructure for an intensive load test. Since we would like to load test with hundreds of users, all of them streaming audio and video, would we need to use a service?
Even running a meeting with 5 participants is already intensive for my computer; how did the Jitsi team solve this problem?
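(For reference, by "a Selenium Grid" I mean something like the setup below, where the browsers run on grid nodes instead of on my own machine. This is only a minimal single-node sketch using the Selenium 3 Docker images; the network and container names are placeholders.)

# Minimal single-node grid sketch (Selenium 3 Docker images).
docker network create grid
docker run -d --net grid --name selenium-hub -p 4444:4444 selenium/hub:3.141.59
# Each Chrome node registers with the hub; every simulated participant
# consumes one browser slot on a node.
docker run -d --net grid -e HUB_HOST=selenium-hub -e HUB_PORT=4444 \
  --shm-size=2g selenium/node-chrome:3.141.59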

We are running a Selenium Grid where we can auto-scale as many nodes as we need.
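On Kubernetes that amounts to scaling the browser-node deployment up before a run, or putting a Horizontal Pod Autoscaler on it. A sketch, assuming the Chrome nodes run as a Deployment named selenium-node-chrome (the name and the thresholds here are placeholders, not our actual configuration):

# Scale the Chrome node pool manually before a big run...
kubectl scale deployment selenium-node-chrome --replicas=40
# ...or let a Horizontal Pod Autoscaler grow it under CPU pressure.
kubectl autoscale deployment selenium-node-chrome --min=2 --max=100 --cpu-percent=70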