[jitsi-dev] Jitsi and EFF's Secure Messaging Scorecard


#1

Dear Jitsi team,

As part of the Electronic Frontier Foundation's Campaign for Secure and
Usable Cryptography, we are putting together a scorecard to evaluate secure
messaging apps, tools and protocols. We are working on this project in
conjunction with Julia Angwin at ProPublica, and Joseph Bonneau at
Princeton's Center for Information Technology Policy.

Based on publicly available information we could find about your
application, our team of experts has made a preliminary assessment of how
your software currently fares in each of our evaluation criteria. If there
is any nuance that we should be aware of, and certainly if there are any
inaccuracies in this assessment, please let us know by ***October 14***.
Otherwise, we will proceed with the evaluation below.

We hope that this scorecard and process will encourage enhancements in the
state of messaging security industry-wide. If you make any future changes
which you believe will improve your rating in this scorecard, please advise
us after those changes have been fully implemented, and we will update the
ratings accordingly.

Here are the criteria and our assessments of your tool:

1. Is your communication encrypted in transit? *YES*

This criterion requires that all user communications are encrypted along
all the links in the communication path.

2. Is your communication encrypted with a key the provider doesn't have
access to? *YES*

This criterion requires that all user communications are end-to-end
encrypted. This means the keys necessary to decrypt messages must be
generated and stored at the endpoints, and never leave endpoints except
with explicit user action (such as to backup a key or synchronize keys
between two devices). It is sufficient if users' public keys are exchanged
using a centralized server. It is not required that metadata (such as user
names or addresses) are encrypted.
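The end-to-end property above can be sketched with a toy Diffie-Hellman exchange: each endpoint generates its private key locally, only public values pass through the (centralized) server, and the shared decryption key never exists server-side. The parameters below are illustrative only; a real client would use a vetted group or X25519, not a bare Mersenne prime.

```python
import secrets

P = 2**521 - 1   # a Mersenne prime, used purely as a toy modulus
G = 3

# Each endpoint generates and stores its secret locally; it never leaves.
alice_secret = secrets.randbelow(P - 2) + 1
bob_secret = secrets.randbelow(P - 2) + 1

# Only these public values pass through the centralized server.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Both endpoints derive the same key; the server, holding only the
# public values, cannot.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)
assert alice_key == bob_key
```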

3. Can you independently verify your correspondent's identity? *YES*

This criterion requires that a built-in method exists for users to verify
either the identity of correspondents they are speaking with, or the
integrity of the channel, even if the service provider or other third
parties are compromised. Two acceptable solutions are:

    - an interface for users to view the fingerprint (hash) of their
      correspondent's public keys as well as their own, which users can
      verify manually or out of band.

    - a key exchange protocol with a short-authentication-string
      comparison, such as the Socialist Millionaire's protocol.

Other solutions are possible, but we require the solution to verify a
binding between users and the cryptographic channel which has been set up.
For the scorecard, we are simply requiring that a mechanism is implemented
and not evaluating the usability and security of the mechanism.
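The first acceptable solution above can be sketched as hashing a public key into a short, human-comparable string that users read aloud or compare out of band. The key bytes below are placeholders, not a real key format used by any particular client.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """SHA-256 the key and show the first 160 bits in 4-hex-digit groups."""
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

local_fp = fingerprint(b"alice-public-key-bytes")
remote_fp = fingerprint(b"bob-public-key-bytes")
# Each client displays both; users verify the strings match what the
# peer reads out over a trusted channel.
print("Your fingerprint:  ", local_fp)
print("Peer's fingerprint:", remote_fp)
```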

4. Is the code open to independent review? *YES*

This criterion requires that sufficient source code has been published for
the application that a compatible implementation can be independently
compiled. Although it is preferable, we do not require the code to be
released under any specific free/open source license, only that all code
which could affect the communication and encryption performed by the app is
available for review, to detect bugs, back doors, and structural problems.

5. Is the crypto design well-documented? *YES*

This criterion requires clear and detailed explanations of the cryptography
used by the application. Preferably this should take the form of a
white-paper written for review by an audience of professional
cryptographers. This must provide answers to the following questions:

    - Which algorithms and parameters (such as key sizes or elliptic curve
      groups) are used in every step of the encryption and authentication
      process

    - How keys are generated, stored, and exchanged between users

    - The life-cycle of keys and the process for users to change or revoke
      their keys

    - A clear statement of the properties and protections the software
      aims to provide (implicitly, this tends to also provide a threat
      model, though it's good to have an explicit threat model too). This
      should also include a clear statement of scenarios in which the
      protocol is not secure.

6. Has there been an independent security audit? *NO*

This criterion requires that an independent security review has been performed
within the 12 months prior to evaluation. This review must cover both the
design and the implementation of the app and must be performed by a named
auditing party that is independent of the tool's main development team
(audits by a separate security group within a large organization are
sufficient). Recognizing that unpublished audits can be valuable, we do
not require that the results of the audit have been made public, only that
a named party is willing to verify that the audit took place.

7. Are past communications secure if your keys are stolen? *YES*

This criterion requires that the app provide "forward secrecy"; that is,
all communications must be encrypted with ephemeral keys which are
routinely deleted (along with the random values used to derive them) after
transmission or receipt of messages. It is imperative that these keys
cannot be reconstructed after the fact by anybody, even given access to both
parties' long-term private keys, ensuring that if users choose to delete
their local copies of correspondence, they are permanently deleted. Note
that this criterion requires criterion 2, end-to-end encryption.
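The deletion property above can be illustrated with a toy hash ratchet: each message key is derived from a chain key that is immediately advanced and discarded, so compromising the current state (or long-term keys, which never enter the chain) cannot recover past message keys. Real protocols such as OTR combine this idea with fresh Diffie-Hellman exchanges; this sketch shows only the one-way deletion step.

```python
import hashlib

def ratchet_step(chain_key: bytes):
    """Derive a one-time message key and the next chain key."""
    message_key = hashlib.sha256(chain_key + b"message").digest()
    next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain_key

chain_key = hashlib.sha256(b"toy session secret").digest()
message_keys = []
for _ in range(3):
    mk, chain_key = ratchet_step(chain_key)  # old chain key is overwritten
    message_keys.append(mk)
# Holding only the final chain_key, an attacker cannot run SHA-256
# backwards to recover any entry in message_keys.
```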

Lastly, we should let you know that the Secure Messaging Scorecard is the
first phase of a longer term EFF campaign to identify the most secure and
usable communications tools available on the Internet today. We anticipate
doing more work in the future to evaluate and reward tools that couple
security with other essential user objectives, including usability,
interoperability, verifiability and openness.


#2

Hey Peter,


Great to know!

One of the points was somewhat unclear to me:

6. Has there been an independent security audit? *NO*

Who needs to have permitted this audit? I assume someone outside the community?

Emil

···

On Wed, Oct 8, 2014 at 4:46 AM, Peter Eckersley <pde@eff.org> wrote:

--
https://jitsi.org


#3

The rule says "independent of the tool's main development team".

For corporate projects, we wanted to allow audits by other teams or
departments within that organisation.

For free / open source projects, an auditor who has filed a few bugs or
sent a few patches is fine, but someone who has written significant
portions of the code would be in the "main development team".

···

On Thu, Oct 09, 2014 at 02:00:49AM +0200, Emil Ivov wrote:


--
Peter Eckersley pde@eff.org
Technology Projects Director Tel +1 415 436 9333 x131
Electronic Frontier Foundation Fax +1 415 436 9993


#4

Well I am not aware of anyone purposefully going through all that and then putting their findings in a report.

I do believe, however, that most security-related code has been viewed by more than one pair of eyes, and a number of issues have been raised and fixed. (I am not trying to imply there are absolutely none left; one never knows that until new ones pop up.)

Emil

···

On 09.10.14, 02:18, Peter Eckersley wrote:



#5

It would definitely be good to get this kind of systematic, professional
auditing for Jitsi. There are at least two organisations which might
consider funding such an effort; one is the Linux Foundation's Core
Infrastructure Initiative:

http://www.linuxfoundation.org/programs/core-infrastructure-initiative

Another is Radio Free Asia's Open Technology Fund:

https://www.opentechfund.org/labs

···

On Thu, Oct 09, 2014 at 02:24:37AM +0200, Emil Ivov wrote:



#6


> It would definitely be good to get this kind of systematic, professional
> auditing for Jitsi.

Agreed! A contribution like this would be most welcome!

Emil

···

On Thu, Oct 9, 2014 at 2:32 AM, Peter Eckersley <pde@eff.org> wrote:
