Communications security

From OLPC
Revision as of 04:49, 30 December 2009 by Emesee (talk | contribs)


NOTE: The contents of this page are not set in stone and are subject to change!

This page is a draft in active flux. Please leave suggestions on the talk page.


This page documents work that has been done to formulate a good description of OLPC's goals in the field of communications security. We begin with some paraphrases and quotes from Bitfrost that seem appropriate, then subdivide the term "communications security" into more primitive notions, and finally present and reflect on some simple use cases.

Reflections from Bitfrost

  • ...the intent of our complete software security model is that it "tries to prevent software from doing bad things": e.g., attempting to damage the machine, compromise the user's privacy, damage the user's information, do "bad things" to people other than the machine's user, and lastly, impersonate the user
  • there's no trust mapping between people and software: trusting a friend isn't, and cannot be, the same as trusting code coming from that friend
  • the security of the laptop cannot depend on the user's ability to remember a password (though passwords may be used by more advanced users)
  • authentication of laptops or users will not depend upon identifiers that are sent unencrypted over the network
  • ...users will be identified... without a certified chain of trust

Security Properties of Communications

"Secure communications" can be thought of in terms of the logical security of communications channels, the isolation properties of software engaged in communication on physical nodes, and the physical security of the human carrying a networked laptop.

Here I use "logical security" to refer to issues like "can an attacker forge messages? read confidential communications? modify messages in transit?" and so on. I use "isolation properties" to describe security issues arising from the reification of abstract protocols into real software. Finally, I use "physical security" to denote all that can be inferred about a human operator through surveillance of the operator's laptop.

Use Cases

I want to have digital conversations which are at least as "secure" (private, confidential, authentic, and reliable) as those that I can have in person. To do this, I want to consider prefabricating and sharing "inflatable channels" with desirable properties. I also want to carefully restrict what properties the UI tries to convey about my channels (and hence my interlocutors, which I regard as being properties of the channels) and how software running at my pleasure is isolated from my identity and my channels (along with my data).

One of the properties that I'd like to have is to correctly recognize conversation partners who wish to be recognized. For example, I may wish to send some files to someone sitting next to me (e.g. Daf) and to be sure that any information leaks occur only through me or through Daf. Alternatively, I might be playing in a chess tournament where I want to authenticate the endpoints of the conversation so that I can blame cheaters. In a competitive team game, however, I want to distinguish my team from the opposing team and to ensure both privacy of identity and confidentiality of content.
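The team-game case above can be sketched with a shared team key and message authentication codes. This is a minimal illustration, not an OLPC API: the key names and message format are assumptions, and it uses only the Python standard library. Note that a shared MAC lets teammates recognize each other's traffic but cannot assign blame to an individual (any key-holder can forge a tag), so the chess-tournament case would instead need per-player signatures, such as ones made with the XO's own keypair.

```python
# Sketch: members of one team share a secret key and tag each message with
# an HMAC, so teammates can recognize each other's traffic while opponents
# cannot forge it. Names and message format are illustrative assumptions.
import hashlib
import hmac
import secrets

team_key = secrets.token_bytes(32)   # distributed to teammates out of band

def tag(message: bytes) -> bytes:
    """Authentication tag a teammate attaches to a message."""
    return hmac.new(team_key, message, hashlib.sha256).digest()

def from_my_team(message: bytes, mac: bytes) -> bool:
    """Check a received (message, tag) pair against the team key."""
    return hmac.compare_digest(mac, tag(message))

msg = b"flank left at the bridge"
assert from_my_team(msg, tag(msg))              # a teammate's message passes
assert not from_my_team(b"retreat!", tag(msg))  # a forged message does not
```

Because the tag reveals nothing about which teammate sent the message, this also preserves the privacy-of-identity property mentioned above, at the cost of any individual accountability.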

Uses of Secondary Channels

Interactive audio/video handshakes, audio channels, barcode reading, sign language...

Basic Tools

  • Encrypted but unauthenticated TCP connection (Diffie-Hellman key exchange with transient AES keys)
  • Encrypted and authenticated TCP connection (station-to-station (STS) protocol with XO keys)

Ideally, should the 'tubes' offer these as optional overlays?
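The first tool above can be sketched as follows. This is a toy illustration under stated assumptions, not an implementation of the OLPC stack: it uses a small prime rather than a standardized group, and a SHA-256 counter keystream stands in for AES, which the Python standard library does not provide. The point it demonstrates is that an ephemeral Diffie-Hellman exchange yields a transient shared key without any long-term identity, which is exactly why the channel is encrypted but unauthenticated.

```python
# Sketch of an encrypted but unauthenticated channel: ephemeral
# Diffie-Hellman key agreement followed by symmetric encryption.
import hashlib
import secrets

# Toy 127-bit Mersenne prime for brevity; a real channel would use a
# standardized 2048-bit MODP group (e.g. from RFC 3526).
P = 2**127 - 1
G = 5

def keypair():
    priv = secrets.randbelow(P - 3) + 2          # ephemeral private exponent
    return priv, pow(G, priv, P)                 # (private, public) pair

def shared_key(priv, peer_pub):
    secret = pow(peer_pub, priv, P)              # same value on both sides
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_stream(key, data):
    # Toy SHA-256 counter-mode keystream standing in for transient AES keys.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Each side generates an ephemeral keypair and exchanges only the public half.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)

# Neither side has authenticated the other, so an active attacker could sit
# in the middle; closing that gap is what the STS variant with XO keys adds.
ciphertext = xor_stream(shared_key(a_priv, b_pub), b"hello Daf")
plaintext = xor_stream(shared_key(b_priv, a_pub), ciphertext)
print(plaintext)  # b'hello Daf'
```

The authenticated variant would differ only in the handshake: each side additionally signs the exchanged public values with its long-term XO key, as in the station-to-station protocol, binding the transient key to an identity.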