Correlating Bitfrost and threats

==Executive Summary==

Shortly after the page [[Threats and Mitigation]] (hereafter referred to as "T&M") was posted on the OLPC wiki, the [[Bitfrost]] specification for the OLPC security architecture was published. The two documents were developed independently. Unsurprisingly, they are only weakly correlated. This document, written by the author of T&M, attempts to correlate the security with the threat.

The conclusions are as follows: the Bitfrost architecture is both stronger and easier to use than the traditional model of security. It exceeds in many ways both the expectations and the hopes of the author of T&M. However, Bitfrost only weakly addresses the two threats considered, in T&M, to be the greatest risks: the use of the Nigerian hoax for fraud, and the transformation of olpc computers into spambots by means of email and chat attachments. Key recommendations include:

* The installation of applications under Bitfrost should be tweaked so that, in addition to asking the application for a list of requested endowments, the user is asked what kind of application is being installed ("category-based installation"). The installation endowment becomes the intersection of the endowments requested and the endowments appropriate for the application type.
* A computer-based training system that makes olpc owners resistant to Nigerian hoaxes should be explicitly included in the security specification.
* The Bitfrost mechanism for updating firmware should be given a detailed end-to-end security review to ensure attackers cannot breach the system and render olpc computers unrecoverable.
* Resources for these additional development efforts can be acquired by postponing development of the centralized "anti-theft" user identification system and the centralized backup system until a later release.
:* If olpc is unable to abandon these centralized systems, decentralized architectures are proposed that achieve the same goals while reducing single-point-of-failure risk and privacy risk.

==Introduction==

The approach taken in this paper is to walk linearly through the Bitfrost paper and comment on items of interest. While many of the individual comments are based on a comparison with T&M, the Bitfrost spec opens some additional questions, so this document is broader than what might be suggested just by comparing the two original documents. Moreover, this document suffers from common problems shared by documents intended to critique other documents. Some of the criticisms here are certainly based on misunderstandings of the spec, for which the author apologizes even before specific examples are identified. A possibly worse flaw is that this document is fulsome in highlighting criticisms, while rarely even mentioning the good and excellent parts of the specification. Anyone interested in hearing this author wax poetic on the many merits of the Bitfrost spec is welcome to email him directly.

This document correlates snapshots of [[Threats and Mitigation]] and the full Bitfrost spec (at http://dev.laptop.org/git.do?p=security;a=blob;hb=HEAD;f=bitfrost.txt) taken on Feb 19, 2007. Later versions of either document may be changed in ways that make this correlation confusing, erroneous, or irrelevant. Indeed, the author hopes that later versions of the Bitfrost spec will render this document obsolete.

==Detailed Comments==

==="Limited Institutional PKI"===

The Bitfrost spec leaves open the question of how long keys last before they expire. Traditional expiration rules, such as Verisign's 1-year expiration, disregard the human costs and vulnerabilities associated with key rollover. This document recommends that keys never expire, or that they last at least as long as Verisign's own root key, i.e., 30 years. If a key is breached, the consequences can be handled through social mechanisms (i.e., telling all your friends that there is a problem, and telling them to tell everyone they know who might care).

* '''Comment:''' PKI reliance should be kept to a minimum. Jumping from a 1-year minimum (likely too much hassle) to an infinite maximum (seems like a bad, bad idea) with a recommended 30 years (terribly excessive given the most optimistic of use cases) seems to be outside the realm of our needs. Given that the expected lifetime of these machines is 5 years, if we were to set a time, I think that would be a reasonable place to start. Handling key migration is going to be a known and manageable process. (This ignores some of the more advanced niche cases, mind you.) [[user:Mburns|Mburns]]

==="No permanent data loss"===

The Bitfrost architecture calls for replication to a centralized backup location. First, considering that the goal of olpc is to support education, and since the value of unique educational material (i.e., homework) decays rapidly with time, and since the boxes are rugged, and since the boxes are running reliable Linux systems, one would expect data loss to be quite rare anyway, making this sophisticated scheme a low priority. Furthermore, centralizing backups seems like a questionable element of an olpc network. "Centralized" means that a central authority can more easily obtain control, since there are fewer sites against which force must be exerted. A centralized database seems better suited to the needs of an oppressive government than to the needs of the individual student. Email letters, chat transcripts, and private documents, any of which might intentionally or accidentally question tyrannical strictures ("the theory of evolution sounds right, even if it is heresy", or "my Dad thinks the emperor is an idiot, not the divine representative of God on Earth", or "the government is at fault for the pollution in our river that is making us all sick", a criticism that recently caused many people to be carted off for "re-education" in one dictatorship), can be identified easily by central authorities and used to identify "hotbeds of counter-revolutionary thinking". It must be remembered that, while the first governments to embrace olpc may be models of virtue, governments change over time, not always for the better: to steal a phrase, "so this is how democracy dies, with thunderous applause".

This document recommends discarding the planned central database as being at best a low priority. This would hopefully free up resources to be invested in ensuring that the higher-priority elements of the plan are implemented with the needed speed and quality.

It seems more in tune with the goals of olpc to implement, in a second phase, a buddy system in which student/owners pair off, and any time both buddies are meshed, their computers back each other up.

If the olpc team is unable to discard the centralized backup scheme, this document proposes that all backups, including the backup to the primary server, be encrypted, and that the decryption key be stored only on a buddy's olpc laptop, not on the primary server. Using a buddy for this simple purpose should be easy, since the data is static, i.e., the copy only needs to be made once.

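To make this proposal concrete, here is a minimal sketch in Python (illustrative only; the <code>server</code> and <code>buddy</code> objects are placeholders invented for this example, not part of Bitfrost) of a backup that is encrypted before it ever leaves the laptop, with the key handed only to the buddy:

<pre>
import io
import tarfile
from cryptography.fernet import Fernet  # symmetric, authenticated encryption

def make_encrypted_backup(paths):
    """Return (key, ciphertext) for an encrypted tar archive of the given paths."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for path in paths:
            tar.add(path)
    key = Fernet.generate_key()            # decryption key; never sent to the server
    return key, Fernet(key).encrypt(buf.getvalue())

def backup(paths, server, buddy):
    key, ciphertext = make_encrypted_backup(paths)
    server.upload(ciphertext)              # the central server stores only ciphertext
    buddy.store_key(key)                   # only the buddy's laptop holds the key

def restore(server, buddy):
    # Recovery requires the buddy's cooperation, not the server's goodwill.
    return Fernet(buddy.fetch_key()).decrypt(server.download())
</pre>
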
* '''Comment''': OLPC experience in the field has suggested that backup is a key deployment requirement (whether central or not), forcing a deployment using straight rsync during the field trials. Agreement on the need to encrypt the backups. Where to store the backup of the decryption key is another matter, and while "buddies" might suit for some data, it is rather a lot of power to grant even to a buddy. As for backing up the actual data to a buddy: the volume of created material is going to be large when children start using the machines to create video and audio, and the 1GB of flash just won't support all that material, IMO. --[[User:mcfletch]]

===Sections from "Factory Production" through "arrival at school site"===

This author finds the discussion of antitheft machinery in these sections so complex that he is unable to assess countermeasures that an attacker might take to defeat the scheme. It is also unclear whether this proposal can meet the threat that this author considers most likely, namely, theft not by the mafia or random burglars, but rather theft by high-ranking government officials. How robust is this scheme if the Deputy Secretary of Education is a member of the gang?

History suggests that complex security schemes favor the attacker: the greater the complexity, the larger the number of countermeasure opportunities afforded the attacker. At a minimum, the "one hour before retrying activation" seems more hostile to the intended users than to the attacker: the intended users will probably have limited skills at handling the needed security machinery, and a one-hour retry period may cause them to give up before succeeding. This author suggests development of a second description of this system, from the human perspective, i.e., examining the user interface at each step (not just the software user interface, but the system user interface) and assessing whether the skill levels of the people responsible are adequate to the tasks. If the user interfaces are too complex, not only will they open sociologically-based attacks, but they may make the whole olpc program fail out-of-the-box, as it is discovered during deployment that few people can actually bring the laptops all the way up through activation. If analysis of the system user interface suggests complexity risk, perhaps it would be better to adopt the strategies used by shippers of bulk quantities of cell phones, which is perhaps the best available analogy.

Mitigating theft seems the least important of the consequences of this centralized authentication proposal. Creating a ubiquitous citizen identification system for the government to exploit in unanticipated ways seems out of scope for the olpc effort.

The proposal also seems to jeopardize all the laptops, since it is a central point of failure. How many laptops would expire their leases, producing an olpc lockout fiasco, if the central server stopped renewing leases for a week? It appears that the current proposal assumes that the central server will be reliably maintained, on net and fully operational, with many nines of reliability. Is this interpretation correct? These levels of reliability are not met by systems deployed by high-tech governments. Should olpc require such reliability of a system run by a third-world government?

The proposal also seems inflexible. There are numerous legitimate reasons why ownership of an olpc might change. In practice, the sensible patterns of voluntary exchange among individuals cannot be predicted. Making these exchanges lightweight empowers the individual. Centralized command and control of the style proposed here generally leads to oppression of the human spirit (as discovered by anyone who has ever tried, behind one organizational firewall, to interoperate with a project team member behind a different organizational firewall). OLPC should empower, not discount, the individual in making surprising yet sensible decisions. This document recommends discarding this whole part of the Bitfrost proposal.

If elimination of anti-theft would jeopardize olpc (as suggested later in the spec), a decentralized alternative is proposed later in this document that would work at least as well against theft and requires neither a national identification system nor a single point of failure capable of causing nationwide olpc shutdown.

* '''Comment''': While there may be "legitimate reasons why ownership of an olpc might change", it's worth remembering that in most cases the laptops will be ''given'' to students for educational purposes, not bought. If a black market in these laptops develops, there may be incentive for the children (or, more likely, their parents) to try to sell the laptops, defeating the educational goals of the project that gave them the laptops. You can argue about whether preventing this type of sale (some might even call it theft, although I wouldn't) is oppressive or protects children from pressure to make a short-sighted and perhaps foolish decision, but it is certainly a significant reason to have the central control of a lease system. --[[User:Dupuy|@alex]] 03:05, 7 March 2008 (EST)

==="Software Installation"=== |
==="Software Installation"=== |
||
Line 61: | Line 71: | ||
Here we insert a concrete proposal for a set of application categories and their endowments. This author credits Ken Kahn with the original insight that this is the sensible approach for an object-capability desktop to use in negotiating with its user on behalf of the application for its installation endowment. This concrete set of categories should be considered an exemplar, not a final spec, i.e., there are probably other sensible categories to include, and certainly better naming conventions and user-oriented explanations to be adopted. |
Here we insert a concrete proposal for a set of application categories and their endowments. This author credits Ken Kahn with the original insight that this is the sensible approach for an object-capability desktop to use in negotiating with its user on behalf of the application for its installation endowment. This concrete set of categories should be considered an exemplar, not a final spec, i.e., there are probably other sensible categories to include, and certainly better naming conventions and user-oriented explanations to be adopted. |
||
In the following discussion, the user decision to launch an application is the act of designation used to grant the following authorities: all launched applications are allowed to consume CPU and RAM resources, and to open windows to communicate with the user. They are also allowed to read their own resources, i.e., their own icons, strings, images, dynamic libraries, etc. They are allowed to use the speakers when they are in the foreground. They are allowed to read the virtualized clock described elsewhere in the Bitfrost spec. They are allowed to invoke the Bitfrost API that will request dynamic authorities from the user on behalf of the application. Authorities granted through dynamic negotiation are generally transient, i.e., they belong to the running instance of the application and are revoked when the instance shuts down (exceptions are noted later).

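A rough sketch of the launch endowment and the transient nature of dynamic grants described above follows (illustrative Python only; the class and authority names are invented here, not taken from the Bitfrost spec):

<pre>
# Illustrative sketch; the authority names below are invented for this example.
DEFAULT_LAUNCH_ENDOWMENT = {
    "cpu", "ram",                  # consume CPU and RAM
    "open_windows",                # open windows to talk to the user
    "read_own_resources",          # icons, strings, images, dynamic libraries
    "speakers_when_foreground",    # audio only while in the foreground
    "read_virtual_clock",          # the virtualized clock from the Bitfrost spec
    "request_dynamic_authority",   # may ask the user for more, via trusted dialogs
}

class RunningInstance:
    def __init__(self, install_endowment):
        # install-time endowment plus the default launch endowment
        self.authorities = DEFAULT_LAUNCH_ENDOWMENT | set(install_endowment)
        self.dynamic_grants = set()        # transient, per-instance

    def grant_dynamic(self, authority):
        """Called only after the user approves a trusted Bitfrost dialog."""
        self.dynamic_grants.add(authority)

    def has(self, authority):
        return authority in self.authorities or authority in self.dynamic_grants

    def shutdown(self):
        self.dynamic_grants.clear()        # dynamic grants are revoked on exit
</pre>
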
Alas, the details of this description are tied to traditional desktops such as Windows and KDE, and need to be recrafted for the Sugar environment. This author has had insufficient experience with Sugar to confidently suggest translations of these ideas to specific Sugar user interface elements.

The following discussion also skips over issues of trusted path and window forgery; trusted path will be discussed in a separate later section.

====Safest Categories for Application Installation====

* Unknown: Applications that identify themselves as "unknown" receive only the default launch endowment. An application that identifies itself to the installation system as "unknown" can be installed without asking the user to pick a category, making the installation fully automatic once the user has requested installation. However, the user can also specify "unknown" explicitly from the list of possible categories. Unknown applications can still make dynamic requests to the user, with Bitfrost dialogs, for access to specific files, sites, and other authorities noted later.

* Document processing: Applications like MS-Word, Excel, PowerPoint, and Photoshop enable the user to transform documents. Such applications need only the following endowment beyond the standard launch endowments: they need the ability to designate a document type (a "file suffix") which should be associated with this application by default. Other than this, document processing applications get their authority to interact with specific documents via individual user acts of designation. Such acts of designation might include drag-dropping a file from a file explorer window to an application window, selecting a file through the file dialog box, and double-clicking on a document that is of the default file suffix type associated with the application. In the olpc context, the author recommends that the first application to request a file suffix association receive that association, and that subsequent requestors of the same suffix be silently denied the request (file suffixes could be reassigned by a more sophisticated user, of course; and file suffixes must not be limited to 3 characters, of course). Within the threat model it would also be legal to endow the application with a tiny editable folder where the application can store preferences. We reluctantly recommend including such a folder in the document processing endowment.<br><br>Some document processing applications need to work on a whole folder at the same time, not a document at a time. The obvious example is the html editor. The Bitfrost file dialog must allow the user to designate either a single file or a folder for the document processor. The dialog box should probably have a "read-only" check box, though the value of such a check box is doubtful, for reasons too complicated for this writeup.

** '''Comment''': I would recommend omitting the entire set of interactions related to default file type associations. The invisible battle of applications fighting over file extensions is tricky to manage, and makes the user experience more complicated and less predictable. I'd recommend either sticking to a single way of opening files (always start the activity first, then use its file chooser to select a file), or using file metadata to record which activity last saved the file. I believe the OLPC doesn't use file extensions, and that a preference folder ("conf/") is already part of the default storage endowment. Thus, this category and "unknown" can be folded into a single category. --[[User:Ping|Ping]] 14:00, 2 March 2007 (EST)

*** '''Comment on Comment''': While I agree that file extension management is tricky, and while I would like to get rid of it, I am not thrilled by the alternatives so far identified. My experiences teaching novices how to use computers in community college (one of the odd things I've done in my dark past :-) strongly demonstrated to me that the file dialog box is the most user-hostile file selection mechanism since the advent of the command line. I.e., if I could get rid of one user-designation mechanism, the one I would discard is the file dialog. I would favor drag/dropping documents onto application shortcuts, except that Sugar's one-window-only design seems to preclude extensive use of the drag/drop metaphor (I would love to learn that I am wrong). Hiding the launch app declaration in metadata seems even more surprising for the user than the file suffix proposal: if the first-come-first-served policy proposed here were implemented, the file suffix would enable the user to predict, with full reliability, which app would be launched by just looking at the filename, rather than trying to remember what app launched 4 months ago. Alas, I predict that, if we do not supply a mechanism for recognizing at a glance which app should be launched with which document, the users will wind up inventing naming conventions for themselves, conventions that will be much less reliable than just about anything we might cook up. Having said all of that, any of the alternatives we have discussed would be better than the traditional desktop policy that the most determined abuser of the authority to set file suffixes wins. Meanwhile, if olpc can implement a trusted path as suggested later in the document -- a big if -- the most important difference between Unknown and Doc Processing is the selection of a petname: Unknowns have the lightest-weight mechanism, a system-assigned name and icon that requires no user interaction, while doc processors may offer a default name/icon, but the user is presented with a choice. --marcs, 18:00, 2 March 2007 (EST)

* Single-player Game: Games like Solitaire and MineSweeper. A single-player game has the same endowment needs as a document processing program. One can save game states in files with game-associated suffixes.

* Mesh Gaming and Communication: These would be peer-to-peer applications, for which each connection is dynamically chosen by the user. In one implementation strategy, each connection authority could be embodied as a file, and the user would designate a desired connection by using a dialog similar to the file dialog (but different: a file dialog grants read/write authority on the file, while a network connection dialog grants authority to communicate with the site specified in the file in a Bitfrost-specified format). See the echat peer-to-peer sample application in "E in a Walnut" for an example of such a system. For mesh applications, since the authorities are embodied in dynamically selected files, these applications need only the same endowment as a document processing application.

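Applied to the categories above, the category-based installation rule recommended in the Executive Summary can be stated in a few lines. The following sketch is illustrative Python with invented category and authority names; the point is simply that the installer grants the intersection of what the application requests and what its declared category permits:

<pre>
# Illustrative sketch; category names and authority strings are invented here.
CATEGORY_ENDOWMENTS = {
    "unknown":             set(),   # default launch endowment only
    "document processing": {"file_suffix_association", "preferences_folder"},
    "single-player game":  {"file_suffix_association", "preferences_folder"},
    "mesh gaming":         {"file_suffix_association", "preferences_folder"},
    # The "less safe" categories below would add entries here, e.g. network
    # authorities for multi-player games and web browsers.
}

def installation_endowment(requested, category):
    """Grant only what is both requested and appropriate for the category."""
    return set(requested) & CATEGORY_ENDOWMENTS[category]

# Example: an app installed as a document processor that also asks for
# network access ends up with no network authority at all.
granted = installation_endowment(
    {"preferences_folder", "connect_to_any_site"}, "document processing")
assert granted == {"preferences_folder"}
</pre>
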
====Less Safe Categories====

The following categories embody enough authority that they can be used in dangerous ways. For the youngest olpc owners, we recommend that the categories below be shut off. It is possible within the threat model to simply allow, without discussion, only the above categories for the youngest users, i.e., the youngest user would never see a request to specify an application category, and all apps would install with either unknown or document processing authority. This is not actually the recommendation for the youngest users, however. The exact recommendation is made later in the discussion of trusted path.

* Multi-player Game: This is for traditional multiplayer games like Diablo and EverQuest, which always connect to a central server. A multiplayer game may be authorized the document processing endowment plus the authority to connect to a single server site specified by the application at installation time. Bitfrost would support a trusted dialog box that allows the application to request that the user switch its network authority from one site to another.

* Web Browser: Web browsers inherently need a dangerous authority, namely, the authority to connect to any arbitrary web site. Due to the poor architecture of traditional browsers, a browser also needs a larger, probably unbounded, editable folder to store its preferences (which today include bookmarks, cookies, plugins, and other oddments that can grow without bound).

Access to the clipboard is a problematic authority. For the key threats identified in T&M, endowing all applications with ubiquitous access to the clipboard is tolerable. Better solutions, required to meet the enhanced threat model of societies that have become sophisticated enough to engage in financial transactions online, are probably too costly to be implemented in the first olpc release. In the absence of a simplifying insight, this is probably a bridge that must be burned so that olpc can be successful enough early on, leaving later developers to rue the early decision to make clipboard access ambient. Alas. Would it be possible to eschew traditional invisible clipboards in Sugar, using a part of the Sugar border as a visible clipboard, and allow only drag/drop to/from the border clipboard? Then the drag/drop operation could be the explicit human action that indicates that a one-time-only clipboard access is now authorized. The code in CapDesk demonstrates one way of doing this, though the Sugar infrastructure may not be adequate to support the CapDesk approach.

For camera and microphone access, it would meet the T&M threat model to allow applications ambient access. However, Bitfrost is already proposing to do much better than this, and this document merely suggests tweaks to the Bitfrost proposal. Bitfrost seems to propose that applications must request, at install time, an endowment for the authority to request mike and camera access. Since all application developers can request the endowment, and since a dynamic grant requirement is already in the spec as well, there is little security benefit to requiring an endowment merely for making the request. This document proposes that all applications be granted, at launch, the authority to request mike/camera access. Camera/mike accesses then become dynamic grants comparable to the network and file grants. Earlier, this document suggested that dynamic grants generally be transient. However, having applications like the tape recorder program always, reliably, immediately request the obvious authority seems more hardship for the user than impediment to the attacker. Putting a check box on the dynamic authority-granting dialog box that says "Grant (microphone or camera) access every time this application launches", and then adding it to the endowment, seems an adequate and appropriate solution.

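A minimal sketch of this "promote a dynamic grant to an endowment" idea follows (illustrative Python; the app, session, and dialog objects and their methods are invented placeholders, not Bitfrost APIs):

<pre>
def request_device_access(app, session, device, trusted_dialog):
    """Ask the user, via a trusted Bitfrost-style dialog, for camera or mike access."""
    assert device in ("camera", "microphone")
    answer = trusted_dialog.ask(
        "Allow %s to use the %s?" % (app.name, device),
        checkbox="Grant %s access every time this application launches" % device)
    if not answer.allowed:
        return False
    if answer.checkbox_ticked:
        app.endowment.add(device)           # persists across launches
    else:
        session.dynamic_grants.add(device)  # revoked when this instance shuts down
    return True
</pre>
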
The 30-minute timeout required in the spec also seems more an inconvenience to the user than an inhibitor of the malicious application: after all, the user knows what he granted; under what circumstances would he not grant continued use of the authority for the 31st minute (especially assuming that applications are shut down when they make themselves invisible, as discussed earlier), and what crucial threat is being addressed to compensate for this annoyance? This document recommends eliminating the 30-minute timeout.

Regardless of what decisions are made about the camera/mike, there must be a system application (perhaps the installer itself) that the user can launch that will show which category an application was installed under and a list of its current endowments, including camera/mike endowments, and enable the user to change them. The technique of allowing the user to choose to transform a dynamic grant into an endowment can be applied to many other authorities with limited risk, such as the background sound permission explicitly called out in the spec.

The second recommendation, if time and resources permit, is to set aside an element of the Sugar interface as the trusted path, which will assist the user (both subliminally and explicitly) in achieving certain knowledge about which application is currently active. For such a set-aside to achieve its goals, it must be strictly impossible for any application to achieve direct access to all the pixels on the screen: applications that write outside their windows will be clipped, and no application window can consume the entire screen. The trusted area can be quite small, but it must be inviolate; applications like Doom can be given virtualized device drivers that give the program full access to a screen that is ever so slightly smaller than the real one.

* '''Comment''': As long as apps can't disable the frame, there is a trusted path. The user just moves the pointer to the corner of the screen to make the Sugar frame show itself. The frame is then a very large trusted area.

As an example of how trusted path can be implemented, examine the CapDesk prototype of a secure desktop, in which the application does not have the authority to modify either the application icon or the application name presented in the top left corner of each window. The icon and name are endowments, selected during installation as part of the installation negotiation: the application suggests a name and an icon, which the user typically accepts as the default -- but the user can modify either the icon or the name if they are confusingly similar to the name or icon of another application (such a judgement can only be made by the user). Applications that are launched without an installation process ("unknown" applications) are given icons and names that are intentionally slightly uncomfortable, as a warning to the user.

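A sketch of that install-time petname negotiation follows (illustrative Python; the names are invented and are not drawn from CapDesk or Bitfrost code):

<pre>
import uuid

def negotiate_petname(app_manifest, existing_petnames, ask_user):
    """Pick the name and icon shown in the trusted area for this application."""
    if app_manifest.get("category", "unknown") == "unknown":
        # Unknown apps get a system-assigned, deliberately awkward name and icon.
        return "untrusted-%s" % uuid.uuid4().hex[:6], "warning-icon"
    suggested = (app_manifest["name"], app_manifest["icon"])
    # The user may accept the suggestion or override it, e.g. if it is
    # confusingly similar to an application already installed.
    return ask_user(suggested, existing_petnames)

# After installation, only the window system -- never the application itself --
# may draw the petname and icon into the trusted area of the screen.
</pre>
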
==="P_Document_RO"===

Based on experience with the Polaris pilot program, the author asserts that, given adequate acts of designation (such as both file and folder open dialog boxes, as discussed in both Bitfrost and this document), photo viewing programs do not need any special type-specific authority. The user would designate the folder containing the slide show he wishes to view. This is more flexible, in addition to requiring less mechanism and being more POLA-oriented. Remember that the slide show program needs access not only to the jpeg and png and gif files (which already violates the rule that only one type can be specified), but such programs also need to read/edit the type of file that describes the sequencing of the slides. This document recommends eliminating the P_Document_RO authority. Such programs are document processor programs, and would therefore by default not get network access, as properly demanded by the Bitfrost spec.

* '''Comment:''' One 'type' does not necessarily mean one file extension. Images (or the comprehensive list of mimetypes that constitute an image {png|jpg|jpeg|gif|...}) are one 'type' of file. This is the opening sentence of P_Document_RO. [[User:Mburns]]

* '''Comment:''' We are not using files and folders per se, and relying on them negates the use of the datastore and puts unwanted organizational and technical responsibilities on the child. This is a core OLPC decision. As mentioned, a 'slide show' program can use P_Document_RO to request all, or a wide subset, of images to present them to the user. [[User:Mburns]]

==="File store rate limiting"=== |
==="File store rate limiting"=== |
||
It is not clear what important threat this mitigates. If the system slows down abysmally every time a particular app is running, regardless of reason, the user has enough information to deduce the culprit and stop running that application. This document recommends eliminating the file store rate limiting. |
It is not clear what important threat this mitigates. If the system slows down abysmally every time a particular app is running, regardless of reason, the user has enough information to deduce the culprit and stop running that application. This document recommends eliminating the file store rate limiting. |
||
* '''Comment''': This prevents malicious apps from wearing out the flash. |
|||
==="Antitheft protection"=== |
==="Antitheft protection"=== |
||
In the presence of "very strong requests from certain countries that a powerful anti-theft service be provided", here is a decentralized alternative. The spec hints that each school will have a server (used in the spec for laptop backup). Rather than tying each laptop to a great national central database in the sky, tie it to the local school server. No identification snapshots of the children need to be taken; the child is well known in the community; the child's name is enough. Local communities have informal, but very strong, understandings of property ownership (read Hernando de Soto's works for extended discussion of this assertion). If a child loses his laptop, the local school olpc administrator can enter the name of the student (the unique name by which the student is locally known, it does not even have to be a government-acknowledged name) and issue a shutdown demand. This document recommends that the lease between server pings be substantially longer, on the order of at least a month, in case the student travels. As the spec notes, a longer lease would still provide excellent theft protection. Indeed, this document recommends going further: allow the local school olpc administrator to specify the duration of the lease, enabling these people on-the-ground, armed with the best information for a specific community, to trade off shorter lease duration (to enhance anti-theft) versus longer lease duration (to minimize the frequency with which students get locked out because of a glitch. Let us enter into this leasing business with a clear understanding of what will happen: with millions of systems in the field, with the unforeseeable consequences of living that attend to all human activity, there will be students egregiously locked out of their systems. Please allow the local administrator to minimize this usability nightmare whenever possible). |
In the presence of "very strong requests from certain countries that a powerful anti-theft service be provided", here is a decentralized alternative. The spec hints that each school will have a server (used in the spec for laptop backup). Rather than tying each laptop to a great national central database in the sky, tie it to the local school server. No identification snapshots of the children need to be taken; the child is well known in the community; the child's name is enough. Local communities have informal, but very strong, understandings of property ownership (read Hernando de Soto's works for extended discussion of this assertion). If a child loses his laptop, the local school olpc administrator can enter the name of the student (the unique name by which the student is locally known, it does not even have to be a government-acknowledged name) and issue a shutdown demand. This document recommends that the lease between server pings be substantially longer, on the order of at least a month, in case the student travels. As the spec notes, a longer lease would still provide excellent theft protection. Indeed, this document recommends going further: allow the local school olpc administrator to specify the duration of the lease, enabling these people on-the-ground, armed with the best information for a specific community, to trade off shorter lease duration (to enhance anti-theft) versus longer lease duration (to minimize the frequency with which students get locked out because of a glitch. Let us enter into this leasing business with a clear understanding of what will happen: with millions of systems in the field, with the unforeseeable consequences of living that attend to all human activity, there will be students egregiously locked out of their systems. The stories of such system lockouts will travel at lightspeed through the mesh. Everyone will know stories, both true and exaggerated, of student lockout problems. The whole system will work to the detriment of the olpc initiative's reputation. Please allow the local administrator to minimize this usability nightmare whenever possible. This will minimize the harm done to olpc's reputation). |
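A minimal sketch of such a school-server lease follows (illustrative Python with invented names; the month-long default and the administrator-set duration are the recommendations made above, not anything in the Bitfrost spec):

<pre>
import time

MONTH = 30 * 24 * 3600   # a deliberately long default lease, in seconds

class SchoolServerLeases:
    def __init__(self, lease_duration=MONTH):
        self.lease_duration = lease_duration     # set by the local administrator
        self.shutdown_demands = set()            # locally-known names of lost laptops

    def report_lost(self, student_name):
        self.shutdown_demands.add(student_name)  # administrator enters the child's name

    def renew(self, student_name):
        """Answer a laptop's request to extend its anti-theft lease."""
        if student_name in self.shutdown_demands:
            return {"action": "shutdown"}
        return {"action": "renew", "expires_at": time.time() + self.lease_duration}

class Laptop:
    def __init__(self, student_name, lease_expires_at):
        self.student_name = student_name
        self.lease_expires_at = lease_expires_at
        self.locked = False

    def check_in(self, school_server):
        reply = school_server.renew(self.student_name)
        if reply["action"] == "shutdown":
            self.locked = True                   # antitheft lockout
        else:
            self.lease_expires_at = reply["expires_at"]

    def is_activated(self):
        # A long lease keeps a travelling student's laptop working even when
        # the school server is unreachable for weeks at a time.
        return not self.locked and time.time() < self.lease_expires_at
</pre>
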
==="Doing bad things to other people"===

==="core BIOS protection"===

T&M recommends that the bootware allowing selection of a boot device be put in hardware, not firmware, to ensure that there exists a method of reliably recovering virus-corrupted laptops. It is not clear from the spec whether boot device selection is even possible, much less whether it is in hardware. Here we assume that the boot selection is in firmware. If so, then the integrity of the boot selection process depends on the smallest details of how the core BIOS protection works. As one obvious example, the signature verification software must itself reside in firmware that cannot be changed without a verified signature. Is it? Presumably this one is so obvious that the answer is, "of course". Is the signature verification software off-the-shelf, or is someone writing their own (not relentlessly inspected) signature software? The point is, this whole mechanism must be scrutinized in detail for confidence. This document recommends such a detailed scrutiny, performed in conjunction with people not involved in olpc design and implementation, who will not suffer from any dangerous forms of group-think that might lead to a failure to ask critical questions.

==="Laptop disposal and transfer security"===

The laptop re-initialization application is necessary in order to support sensible exchange among human beings, as discussed earlier. Indeed, the software needs to be widely available: if an owner must seek authorization from outside his local community to transfer ownership, it will present an egregious impediment to human action. At the same time, the reinitialization application is of course even more valuable to thieves than it is to individual owners, since it is the Achilles heel of the anti-theft system. Since this program would be so valuable to burglars, the naive inclination would be to make the program very closely held, i.e., only the Deputy Secretary of Education should have the program (yes, the same Deputy Secretary identified earlier as the most likely threat for batch theft of laptops). Such close holding will only produce disappointment. Surely, once there are a million or so of these machines in the world, someone somewhere will post the application on the Web. To make a sensible analysis, one must start by assuming that everyone has the program readily available.

If the antitheft machinery cannot be abandoned, a possible strategy is to have the reinitialization program run on the same school server that controls the anti-theft lease. Only this server can re-initialize the laptop, and only when the laptop makes a request to extend its anti-theft lease.

by Marc Stiegler

[[Category:Security]]
Latest revision as of 08:14, 7 March 2008
Executive Summary
Shortly after the page Threats and Mitigation (hereafter referred to as "T&M") was posted on the OLPC wiki, the Bitfrost specification for the OLPC security architecture was published. Those two documents developed independently. Unsurprisingly, they are only weakly correlated. This document, written by the author of T&M, attempts to correlate the security with the threat.
The conclusions are as follows: the Bitfrost architecture is both stronger and easier to use than the traditional model of security. It exceeds in many ways both expectations and hopes of the author of T&M. However, Bitfrost only weakly addresses the two threats considered, in T&M, to be the greatest risks. These risks are, the use of the nigerian hoax for fraud, and the transformation of olpc computers into spambots by use of email and chat attachments. Key recommendations include:
- The installation of applications under Bitfrost should be tweaked so that, in addition to asking the application for a list of requested endowments, the user is asked what kind of application is being installed ("category-based installation"). The installation endowment becomes the intersection of those endowments requested, and those endowments appropriate for the application type.
- A computer-based training system that makes olpc owners resistant to nigerian hoaxes should be explicitly included in the security specification.
- The Bitfrost mechanism for updating firmware should be given a detailed end-to-end security review to ensure attackers cannot breach the system and render olpc computers unrecoverable.
- Resources for these additional development efforts can be acquired by postponing development of the centralized "anti-theft" user identification system and the centralized backup system until a later release.
- If olpc is unable to abandon these centralized systems, decentralized architectures are proposed that achieve the same goals while reducing single point of failure risk and privacy risk.
Introduction
The approach taken in this paper is to walk linearly through the Bitfrost paper and comment on items of interest. While many of the individual comments are based on a comparison with T&M, the Bitfrost spec opens some additional questions, so this document is broader than what might be suggested just by comparing the two original documents. Moreover, this document suffers from common problems shared by documents intended to critique other documents. Some of the criticisms here are certainly based on misunderstandings of the spec, for which the author apologizes even before specific examples are identified. A possibly worse flaw is, this document is fulsome in highlighting criticisms, while rarely even mentioning the good and excellent parts of the specification. Anyone interested in hearing this author wax poetic on the many merits of the Bitfrost spec is welcome to email him directly.
This document correlates snapshots of Threats and Mitigation and the full Bitfrost spec (at http://dev.laptop.org/git.do?p=security;a=blob;hb=HEAD;f=bitfrost.txt) taken on Feb 19, 2007. Later versions of either document may be changed in ways that make this correlation confusing, erroneous, or irrelevant. Indeed, the author hopes that later versions of the Bitfrost spec will render this document obsolete.
Detailed Comments
"Limited Institutional PKI"
The Bitfrost spec leaves open the question of how long keys last before they expire. Traditional expiration rules, such as Verisign's 1-year expiration, disregard the human costs and vulnerabilities associated with key rollover. This document recommends that keys never expire, or that they last at least as long as Verisign's own root key, i.e., 30 years. If a key is breached, the consequences can be handled through social mechanisms (i.e., telling all your friends that there is a problem, and telling them to tell everyone they know who might care).
- Comment: PKI reliance should be kept to a minimum. Jumping from a 1-year minimum (likely too much hassle) to an infinite maximum (a bad, bad idea), with a recommended 30 years (terribly excessive given even the most optimistic use cases), seems outside the realm of our needs. Given that the expected lifetime of these machines is 5 years, if we were to set a time, that would be a reasonable place to start. Handling key migration is going to be a known and manageable process (this ignores some of the more advanced niche cases, mind you). Mburns
"No permanent data loss"
The Bitfrost architecture calls for replication to a centralized backup location. First, considering that the goal of olpc is to support education, and since the value of unique educational material (i.e., homework) decays rapidly with time, and since the boxes are rugged, and since the boxes are running reliable Linux systems, one would expect data loss to be quite rare anyway, making this sophisticated scheme a low priority. Furthermore, centralizing backups seems like a questionable element of an olpc network. "Centralized" means that a central authority can more easily obtain control, since there are fewer sites against which force must be exerted. A centralized database seems better suited to the needs of an oppressive government than to the needs of the individual student. Email letters, chat transcripts, and private documents, any of which might intentionally or accidentally question tyrannical strictures ("the theory of evolution sounds right, even if it is heresy", or "my Dad thinks the emperor is an idiot, not the divine representative of God on Earth", or "the government is at fault for the pollution in our river that is making us all sick", a criticism that recently caused many people to be carted off for "re-education" in one dictatorship), can be identified easily by central authorities and used to identify "hotbeds of counter-revolutionary thinking". It must be remembered that, while the first governments to embrace olpc may be models of virtue, governments change over time, not always for the better: to steal a phrase, "so this is how democracy dies, with thunderous applause".
This document recommends discarding the planned central database as being at best a low priority. This hopefully would free up resources to be invested in ensuring that the higher-priority elements of the plan are implemented with the needed speed and quality.
It seems more in tune with the goals of olpc to implement, in a second phase, a buddy system in which student/owners pair off, and any time both buddies are meshed, their computers back each other up.
If the olpc team is unable to discard the centralized backup scheme, this document proposes that all backups, including the backup to the primary server, be encrypted, and that the decryption key be stored only on a buddy's olpc laptop, not on the primary server. Using a buddy for this simple purpose should be easy, since the data is static, i.e., the copy only needs to be made once.
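To make the shape of this proposal concrete, here is a minimal sketch of the encrypted buddy backup, written in Python using the Fernet cipher from the "cryptography" package; the transport functions (send_to_buddy, send_to_server) are hypothetical placeholders, not any existing OLPC API.

 # Minimal sketch of the proposed encrypted-backup arrangement.
 # Assumes the 'cryptography' package; send_to_buddy() and
 # send_to_server() are hypothetical transport callbacks.
 from cryptography.fernet import Fernet

 def backup_with_buddy(backup_bytes, send_to_buddy, send_to_server):
     key = Fernet.generate_key()            # static key, generated once
     ciphertext = Fernet(key).encrypt(backup_bytes)
     send_to_buddy(key)                     # only the buddy's laptop holds the key
     send_to_server(ciphertext)             # the school server sees only ciphertext
     return key, ciphertext

Because the key is generated once and never changes, the copy to the buddy is a one-time transfer, while the ciphertext can be re-sent to the server as often as desired.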
- Comment: OLPC experience in the field has suggested that backup is a key deployment requirement (whether central or not), forcing a deployment using straight rsync during the field trials. Agreed on the need to encrypt the backups. Where to store the backup of the decryption key is another matter, and while "buddies" might suit for some data, it is rather a lot of power to grant even to a buddy. As for backing up the actual data to a buddy: the volume of created material is going to be large when children start using the machines to create video and audio, and the 1GB of flash just won't support all that material IMO. --User:mcfletch
Sections from "Factory Production" through "arrival at school site"
This author finds the discussion of antitheft machinery in these sections so complex that he is unable to assess countermeasures that an attacker might take to defeat the scheme. It is also unclear whether this proposal can meet the threat that this author considers most likely, namely, theft not by the mafia or random burglars, but rather theft by high-ranking government officials. How robust is this scheme if the Deputy Secretary of Education is a member of the gang?
History suggests that complex security schemes favor the attacker: the greater the complexity, the larger the number of countermeasure opportunities afforded the attacker. At a minimum, the "one hour before retrying activation" rule seems more hostile to the intended users than to the attacker: the intended users will probably have limited skills at handling the needed security machinery, and a one-hour retry period may cause them to give up before succeeding. This author suggests development of a second description of this system, from the human perspective, i.e., examining the user interface at each step (not just the software user interface, but the whole system's user interface) and assessing whether the skill levels of the people responsible are adequate to the tasks. If the user interfaces are too complex, not only will they open up sociologically-based attacks, but they may make the whole olpc program fail out-of-the-box, as it is discovered during deployment that few people can actually bring the laptops all the way up through activation. If analysis of the system user interface suggests complexity risk, perhaps it would be better to adopt the strategies used by shippers of bulk quantities of cell phones, which is perhaps the best available analogy.
"First boot"
The spec seems to propose that the system enforce a requirement for each child to be digitally identified and registered in a massive government-controlled central database. The justification is to prevent theft. Yet these laptops are worth less than $50 to a thief (the fence will take at least half), and the computer owner will presumably carry it with him much of the time, meaning that a reliable witness (the owner) is likely to be present during the burglary. So stealing an olpc laptop is already both high-risk and low-payoff.
Mitigating theft seems the least important of the consequences of this centralized authentication proposal. Creating a ubiquitous citizen identification system for the government to exploit in unanticipated ways seems out of scope for the olpc effort.
The proposal also seems to jeopardize all the laptops, since it is a central point of failure. How many laptops would expire their leases, producing an olpc lockout fiasco, if the central server stopped renewing leases for a week? It appears that the current proposal assumes that the central server will be reliably maintained, online, and fully operational, with many nines of reliability. Is this interpretation correct? These levels of reliability are not met by systems deployed by high-tech governments. Should olpc require such reliability for a system run by a third world government?
The proposal also seems inflexible. There are numerous legitimate reasons why ownership of an olpc might change. In practice, the sensible patterns of voluntary exchange among individuals cannot be predicted. Making these exchanges lightweight empowers the individual. Centralized command and control of the style proposed here generally leads to oppression of the human spirit (as discovered by anyone who has ever tried, behind one organizational firewall, to interoperate with a project team member behind a different organizational firewall). OLPC should empower, not discount, the individual in making surprising yet sensible decisions. This document recommends discarding this whole part of the Bitfrost proposal.
If elimination of anti-theft would jeopardize olpc (as suggested later in the spec), a decentralized alternative is proposed later in this document that would work at least as well against theft and requires neither a national identification system nor a single point of failure capable of causing nationwide olpc shutdown.
- Comment: While there may be "legitimate reasons why ownership of an olpc might change" it's worth remembering that in most cases the laptops will be given to students for educational purposes, not bought. If a black market in these laptops develops, there may be incentive for the children (or more likely, their parents) to try to sell the laptops, defeating the educational goals of the project that gave them the laptops. You can argue about whether preventing this type of sale (some might even call it theft, although I wouldn't) is oppressive or protects children from pressure to make a short-sighted and perhaps foolish decision, but it is certainly a significant reason to have the central control of a lease system. --@alex 03:05, 7 March 2008 (EST)
"Software Installation"
A Category-Based Approach to Installation
The paragraph starting with "It must be noted here that this system _only_ protects benign software" is important. T&M suggests that a key attack vector will be executable attachments on email and chat. These will be intentionally malicious programs, rendering the benign-application machinery irrelevant. If the mutually exclusive, disallowed combinations of endowments (the CapDesk/Polaris term for "initially requestable permissions") intended to hinder malicious code are stringent enough to effectively prevent malice, they will prohibit the installation of benign applications that need those same combinations. This violates the Bitfrost Unobtrusive Security Principle.
As it happens, only a small enhancement to the current Bitfrost spec is required to largely discard the distinction made between benign and malicious software, and to give the user greater control. This enhancement is the presentation to the user, at the beginning of the installation process, of a request that the user describe what kind/category of application he is installing. Each category of application has a different endowment requirement; most categories are benign, and less benign categories can be excluded from the list of choices for the youngest users. One might object, "but then the user needs to know what kind of application he is installing", to which one answer is, "Yes! Absolutely. Even putting security aside, why would we want to install software whose purpose we do not at least think we know?" The other answer, probably the more compelling answer, is, "Unknown application" is one of the supported categories, the category that receives the most restrictive endowment.
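To illustrate the intersection rule, here is a minimal sketch in Python; the category names and endowment labels are invented for the example and are not the permission names used in the Bitfrost spec.

 # Illustrative sketch of category-based installation: the granted
 # endowment is the intersection of what the application requests and
 # what the user-chosen category permits.  Category and permission
 # names are hypothetical, not taken from the Bitfrost spec.
 CATEGORY_ENDOWMENTS = {
     "unknown":             set(),
     "document processing": {"suffix_association", "preference_folder"},
     "mail client":         {"connect_mail_servers", "mail_folder",
                             "read_address_book", "preference_folder"},
 }

 def install_endowment(requested, user_chosen_category):
     allowed = CATEGORY_ENDOWMENTS.get(user_chosen_category, set())
     return set(requested) & allowed

 # Example: an app that sneaks in a mail-sending request gets nothing
 # beyond what its declared category allows.
 print(install_endowment({"connect_mail_servers", "preference_folder"},
                         "document processing"))
 # -> {'preference_folder'}

The point of the sketch is that the application's wish list never widens the grant; only the user's statement of what the application is for can do that.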
Here we insert a concrete proposal for a set of application categories and their endowments. This author credits Ken Kahn with the original insight that this is the sensible approach for an object-capability desktop to use in negotiating with its user on behalf of the application for its installation endowment. This concrete set of categories should be considered an exemplar, not a final spec, i.e., there are probably other sensible categories to include, and certainly better naming conventions and user-oriented explanations to be adopted.
In the following discussion, the user decision to launch an application is the act of designation used to grant the following authorities: all launched applications are allowed to consume CPU and RAM resources, and to open windows to communicate with the user. They are also allowed to read their own resources, i.e., their own icons, strings, images, dynamic libraries, etc. They are allowed to use the speakers when they are in the foreground. They are allowed to read the virtualized clock described elsewhere in the Bitfrost spec. They are allowed to invoke the Bitfrost api that will request dynamic authorities from the user on behalf of the application. Authorities granted through dynamic negotiation are generally transient, i.e., they belong to the running instance of the application and are revoked when the instance shuts down (exceptions noted later).
Alas, the details of this description are tied to traditional desktops such as Windows and KDE, and need to be recrafted for the Sugar environment. This author has had insufficient experience with Sugar to confidently suggest translations of these ideas to specific Sugar user interface elements.
The following discussion also skips over issues of trusted path and window forgery; trusted path will be discussed in a separate later section.
Safest Categories for Application Installation
- Unknown: Applications that identify themselves as "unknown" receive only the default launch endowment. An application that identifies itself to the installation system as "unknown" can be installed without asking the user to pick a category, making the installation fully automatic once the user has requested installation. However, the user can also specify "unknown" explicitly from the list of possible categories. Unknown applications can still make dynamic requests to the user with Bitfrost dialogs for access to specific files, sites, and other authorities noted later.
- Document processing: Applications like MS-Word, Excel, PowerPoint, and Photoshop enable the user to transform documents. Such applications need only the following endowment beyond the standard launch endowments: the ability to designate a document type (a "file suffix") which should be associated with this application by default. Other than this, document processing applications get their authority to interact with specific documents via individual user acts of designation. Such acts of designation might include drag-dropping a file from a file explorer window to an application window, selecting a file through the file dialog box, and double-clicking on a document that is of the default file suffix type associated with the application. In the olpc context, the author recommends that the first application to request a file suffix association receive that association, and that subsequent requestors of the same suffix be silently denied the request (a sketch of such a first-come-first-served registry appears after this group of categories); file suffixes could of course be reassigned by a more sophisticated user, and file suffixes must not be limited to 3 characters. Within the threat model it would also be legal to endow the application with a tiny editable folder where the application can store preferences. We reluctantly recommend including such a folder in the document processing endowment.
Some document processing applications need to work on a whole folder at the same time, not a document at a time. The obvious example is the html editor. The Bitfrost file dialog must allow the user to designate either a single file or a folder for the document processor. The dialog box should probably have a "read-only" check box, though the value of such a check box is doubtful, for reasons too complicated for this writeup.
- Comment: I would recommend omitting the entire set of interactions related to default file type associations. The invisible battle of applications fighting over file extensions is tricky to manage, and makes the user experience more complicated and less predictable. I'd recommend either sticking to a single way of opening files (always start the activity first, then use its file chooser to select a file), or using file metadata to record which activity last saved the file. I believe the OLPC doesn't use file extensions, and that a preference folder ("conf/") is already part of the default storage endowment. Thus, this category and "unknown" can be folded into single category. --Ping 14:00, 2 March 2007 (EST)
- Comment on Comment: While I agree that file extension management is tricky, and while I would like to get rid of it, I am not thrilled by the alternatives so far identified. My experiences teaching novices how to use computers in community college (one of the odd things I've done in my dark past :-) strongly demonstrated to me that the file dialog box is the most user-hostile file selection mechanism since the advent of the command line. I.e., if I could get rid of one user-designation mechanism, the one I would discard is the file dialog. I would favor drag/dropping documents onto application shortcuts, except Sugar's one-window-only design seems to preclude extensive use of the drag/drop metaphor (I would love to learn that I am wrong). Hiding the launch-app declaration in metadata seems even more surprising for the user than the file suffix proposal: if the first-come-first-served policy proposed here were implemented, the file suffix would enable the user to predict, with full reliability, which app would be launched just by looking at the filename, rather than trying to remember what app launched it 4 months ago. Alas, I predict that, if we do not supply a mechanism for recognizing at a glance which app should be launched with which document, the users will wind up inventing naming conventions for themselves, conventions that will be much less reliable than just about anything we might cook up. Having said all of that, any of the alternatives we have discussed would be better than the traditional desktop, where the most determined abuser of the authority to set file suffixes wins. Meanwhile, if olpc can implement a trusted path as suggested later in the document -- a big if -- the most important difference between Unknown and Doc Processing is the selection of a petname: Unknowns have the lightest-weight mechanism, a system-assigned name and icon that requires no user interaction, while doc processors may offer a default name/icon, but the user is presented a choice. --marcs, 18:00 2 March 2007 (EST).
- Single-player Game: Games like Solitaire and MineSweeper. A single player game has the same endowment needs as a document processing program. One can save game states in files with game-associated suffixes.
- Mesh Gaming and Communication: These would be peer-to-peer applications, for which each connection is dynamically chosen by the user. In one implementation strategy, each connection authority could be embodied as a file, and the user would designate a desired connection by using a dialog similar to the file dialog (but different: a file dialog grants read/write authority on the file, while a network connection dialog grants authority to communicate with the site specified in the file in a Bitfrost-specified format). See the echat peer-to-peer sample application in "E in a Walnut" for an example of such a system. For mesh applications, since the authorities are embodied in dynamically selected files, these applications need only the same endowment as a document processing application.
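As referenced in the document processing item above, here is a minimal sketch of a first-come-first-served suffix registry; the function and variable names are hypothetical, and the reassignment path for more sophisticated users is omitted.

 # Hypothetical first-come-first-served suffix registry: the first
 # application to claim a suffix keeps it; later claims are silently
 # denied.  A separate, more advanced tool could reassign entries.
 suffix_owners = {}

 def request_suffix(app_id, suffix):
     """Return True if the association was granted."""
     if suffix in suffix_owners:
         return False              # silently denied; no dialog is shown
     suffix_owners[suffix] = app_id
     return True

 request_suffix("write-activity", ".story")    # True: first claimant wins
 request_suffix("evil-activity", ".story")     # False: silently denied

The virtue of first-come-first-served is predictability: the user can tell, from the filename alone, which installed application will open a document.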
Less Safe Categories
The following categories embody enough authority that they can be used in dangerous ways. For the youngest olpc owners, we recommend the categories below be shut off. It would be acceptable within the threat model to simply allow, for the youngest users, only the above categories without discussion, i.e., the youngest user would never see a request to specify an application category, and all apps would install with either unknown or document processing authority. This is not actually the recommendation for the youngest users, however. The exact recommendation is made later in the discussion of trusted path.
- Multi-player Game: This is for traditional multiplayer games like Diablo and EverQuest, which always connect to a central server. A multiplayer game may be authorized the document processing endowment plus the authority to connect to a single server site specified by the application at installation time. Bitfrost would support a trusted dialog box that allowed the application to request that the user switch its network authority from one site to another.
- Web Browser: Web browsers inherently need a dangerous authority, namely, the authority to connect to any arbitrary web site. Due to the poor architecture of traditional browsers, they also need a larger, probably unbounded, editable folder to store their preferences (which today include bookmarks and cookies and plugins and other oddments that can grow without bound).
- Mail Client: The mail client is, by definition, capable of acting as a spambot. It is therefore the most desirable category for an attacker to request. For its endowment it needs authority to connect to 2 sites specified during installation by the user (the mail reading site and the mail sending site). If there are general-purpose address books or mail server sites and passwords in the Sugar environment, the mail client needs read (and probably write) access to all of them. It also needs an unboundedly large editable folder for storing mail.
- System Tool: This is an application for repairing/upgrading the system. It would receive unbounded authority -- but only after passing the verification checks showing it had been signed by OLPC, as described in other parts of the Bitfrost documentation. System tools are so hazardous, this document proposes that an intentionally egregious warning dialog box be imposed upon the user before installation proceeds. System tools should never be allowed to run without an explicit act of authorization by the user. If both OLPC and the government are allowed to produce system tools, the warning dialog should clearly indicate which one is the author of the tool currently asking to be installed. The youngest users should not be able to allow system tools to run by themselves, i.e., a more sophisticated user should be required to turn on the authority to recognize system tools and answer the question of whether to run them or not. Explicit human intervention is necessary because a single erroneous distribution of a system upgrade that introduced a vulnerability could enable an MS-Blaster-style compromise of all olpcs globally if there were no human step involved to slow down the spread.
Other Authorities and Endowments
If an application needs more or different authorities than those offered by any of the categories, the slightly advanced user can custom-craft an endowment. Those who have not studied the matter of endowment in depth generally believe that handcrafting an endowment requires a most sophisticated user to avert social attacks. This both overestimates the difficulty of the problem and underestimates the skill of the ordinary person in assessing risk -- when the risk is presented in human-meaningful terms at a human-relevant level of abstraction. The actual, limited skills required are suggested by Granma's Rules of POLA, at http://www.skyhunter.com/marcs/granmaRulesPola.html.
An important authority that should not be available as part of the installation system is the always-launch-at-boot authority. The user should enable launch-at-boot, not through a dialog or automated action, but rather by the act of designation of drag/dropping a shortcut into a launch-at-boot folder. Users too young or too inexperienced should not be able to grant this powerful authority without assistance, until they know how to place things into the appropriate folder.
Another important authority is the right to run invisibly in the background. Applications that shut down all their windows and become effectively invisible to the user (i.e., that can only be identified by users capable of interpreting the output of "ps -eaf") should be shut down. It is arguable that applications placed in the startup folder can be allowed to run invisibly (i.e., invisible operation could be an authority granted by the act of designation of placing the application shortcut in the folder; the user knows which apps they are because he put them there, and can always review his choices by looking inside the startup folder).
Access to the clipboard is a problematic authority. For the key threats identified in T&M, endowing all applications with ubiquitous access to the clipboard is tolerable. Better solutions, required to meet the enhanced threat model of societies that have become sophisticated enough to engage in financial transactions online, are probably too costly to be implemented in the first olpc release. In the absence of a simplifying insight, this is probably a bridge that must be burned so that olpc can be successful enough early, so that later developers will be able to rue the early decision to make clipboard access ambient. Alas. Would it be possible to eschew traditional invisible clipboards in Sugar, using a part of the Sugar border as a visible clipboard, and allow only drag/drop to/from the border clipboard? Then the drag/drop operation could be the explicit human action that indicates that a one-time-only clipboard access is now authorized. The code in CapDesk demonstrates one way of doing this, though the Sugar infrastructure may not be adequate to support the CapDesk approach.
For camera and microphone access, it would meet the T&M threat model to allow applications ambient access. However, Bitfrost is already proposing to do much better than this, and this document merely suggests tweaks to the Bitfrost proposal. Bitfrost seems to propose that applications must request at install time an endowment for the authority to request mike and camera access. Since all application developers can request the endowment, and since a dynamic grant requirement is already in the spec, there is little security benefit to requiring an endowment merely for making the request. This document proposes that all applications be granted, at launch, the authority to request mike/camera access. Camera/mike accesses then become dynamic grants comparable to the network and file grants. Earlier, this document suggested that dynamic grants generally be transient. However, having applications like the tape recorder program always, reliably, immediately request the obvious authority seems more hardship for the user than impediment to the attacker. Putting a check box on the dynamic authority-granting dialog box that says "Grant (microphone or camera) access every time this application launches", and then adding the granted access to the endowment, seems an adequate and appropriate solution.
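The following is a minimal sketch of how a dynamic camera/mike grant could be promoted into a per-application endowment via the proposed check box; all the names here (ask_user, the endowments table) are invented for illustration.

 # Sketch of the proposed tweak: camera/microphone access is a dynamic
 # grant, but the grant dialog offers a checkbox that promotes it into
 # a per-application endowment for future launches.  Every name here
 # is hypothetical, not part of any Bitfrost or Sugar API.
 def request_device(app, device, endowments, ask_user):
     if device in endowments.get(app, set()):
         return True                      # previously promoted to an endowment
     granted, every_launch = ask_user(
         "%s asks to use the %s" % (app, device),
         checkbox="Grant %s access every time this application launches" % device)
     if granted and every_launch:
         endowments.setdefault(app, set()).add(device)
     return granted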
The 30-minute timeout required in the spec also seems more an inconvenience to the user than an inhibitor for the malicious application: after all, the user knows what he granted; under what circumstances would he not grant continued use of the authority for the 31st minute (especially assuming that applications are shut down when they make themselves invisible, as discussed earlier), and what crucial threat is being addressed to compensate for this annoyance? This document recommends eliminating the 30-minute timeout.
Regardless of what decisions are made about the camera/mike, there must be a system application (perhaps the installer itself) that the user can launch that will show which category an application was installed under, along with a list of its current endowments, including camera/mike endowments, and enable the user to change them. The technique of allowing the user to choose to transform a dynamic grant into an endowment can be applied to many other authorities with limited risk, such as the background sound permission explicitly called out in the spec.
Trusted Paths and Petname Systems
The above discussion meticulously avoided discussion of trusted paths and window forgery. For the threats identified in T&M, it would be tolerable to allow window forgery. The excellence of the basic Bitfrost specification lures one to hope to do better. Remember that, the moment the owners rise to a level of sophistication that they start dealing with intrinsically valuable resources over the web, phishing and similar forms of fraud will cripple further advancement. Furthermore, since olpc is an educational enterprise, it seems sensible to use it as a vehicle to teach people, at least subliminally, about the crucial, inescapable, fundamental principle of trusted path, i.e., identifying the key elements of the user interface that can and cannot be trusted.
This document has 2 recommendations for implementing trusted path. The first is to ship olpc with the Waterken petname tool (at http://www.waterken.com/user/PetnameTool/) pre-installed into Firefox. The petname tool is an anti-phishing mechanism that does not require the user to understand anything at all about certificates, public keys, or certificate authorities; it only requires that the user assign a personal name to each site that manipulates valuable resources (the sites must have SSL keys, but they do not need certificate authorities, i.e., they can be self-certified). Only the more advanced students will learn how to use the petname tool. That is fine. Only the more advanced students will engage valuable web-based resources.
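To show how little machinery a petname system requires of the user, here is an illustrative sketch of its core data structure; this is not the Waterken tool's actual code, and the key fingerprints are placeholders.

 # Illustration of a petname system's core idea (not the actual
 # PetnameTool code): the user binds a personal name to a site's key
 # fingerprint; an unrecognized fingerprint gets no name, which is the
 # anti-phishing signal.
 petnames = {}   # key fingerprint -> user-chosen name

 def assign_petname(key_fingerprint, name):
     petnames[key_fingerprint] = name

 def display_name(key_fingerprint):
     # An attacker can imitate a page, but not the key behind the SSL
     # session, so a look-alike site shows up as unrecognized.
     return petnames.get(key_fingerprint, "untrusted site")

 assign_petname("a3:9f:71", "my bank")          # placeholder fingerprint
 print(display_name("a3:9f:71"))                # "my bank"
 print(display_name("de:ad:ef"))                # "untrusted site" -- likely phishing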
The second recommendation, if time and resources permit, is to set aside an element of the Sugar interface as the trusted path, which will assist the user (both subliminally and explicitly) in achieving certain knowledge of which application is currently active. For such a set-aside area to achieve its goals, it must be strictly impossible for any application to achieve direct access to all the pixels on the screen: applications that write outside their windows will be clipped, and no application window can consume the entire screen. The trusted area can be quite small, but it must be inviolate; applications like Doom can be given virtualized device drivers that give the program full access to a screen that is ever so slightly smaller than the real one.
- Comment: As long as apps can't disable the frame, there is a trusted path. The user just moves the pointer to the corner of the screen to make the sugar frame show itself. The frame is then a very large trusted area.
As an example of how trusted path can be implemented, examine the CapDesk prototype of a secure desktop, in which the application does not have authority to modify either the application icon or the application name presented in the top left corner of each window. The icon and name are endowments, selected during the installation as part of the installation negotiation: the application suggests a name and an icon, which the user typically accepts as the default -- but the user can modify either the icon or the name if they are confusingly similar to either the name or icon of another application (such judgement can only be made by the user). Applications that are launched without an installation process ("unknown" applications) are given icons and names that are intentionally slightly uncomfortable as a warning to the user.
If such a trusted path can be implemented then it should be supported by the installation system, in a manner similar to CapDesk. Earlier, this document stated that it would make a proposal for what to present to the youngest olpc users during application installation. For all applications that are not in the Unknown category, even the youngest user can be asked to pick an icon for the application that is distinct from the icons for all the other applications the child has installed. As in CapDesk, the application can propose an icon from its own resources, and the installation system can present that icon as the default that the child will get if the child simply clicks "ok". Note that we do not describe this as a security matter when explaining it to the user. Rather, we describe it, simply and correctly, as a mechanism to help the user avoid being confused.
Both the petname tool and the CapDesk desktop implement petname systems, which are described at http://www.skyhunter.com/marcs/petnames/IntroPetNames.html.
We now return to our previously scheduled program, namely, going through the sections of the Bitfrost document in linear order.
"P_Document_RO"
Based on experience with the Polaris pilot program, the author asserts that, given adequate acts of designation (such as both file and folder open dialog boxes as discussed in both Bitfrost and this document), photo viewing programs do not need any special type-specific authority. The user would designate the folder containing the slide show he wishes to view. This is more flexible in addition to requiring less mechanism and being more POLA-oriented. Remember that the slide show program needs access not only to the jpeg and png and gif files (which already violates the rule that only one type can be specified), but such programs also need to read/edit the type of file that describes the sequencing of the slides. This document recommends eliminating the P_Document_RO authority. Such programs are document processor programs, and would therefore by default not get network access, as properly demanded by the Bitfrost spec.
- Comment: One 'type' does not necessarily mean one file extension. Images (or the comprehensive list of mimetypes that constitute an image {png|jpg|jpeg|gif|...}) is one 'type' of file. This is the opening sentence of P_Document_RO. User:Mburns
- Comment: We are not using files and folders per se, and relying on them negates the use of the datastore and puts unwanted organizational and technical responsibilities on the child. This is a core OLPC decision. As mentioned, a 'slide show' program can use P_Document_RO to request all, or a wide subset, of images to present them to the user. User:Mburns
"File store rate limiting"
It is not clear what important threat this mitigates. If the system slows down abysmally every time a particular app is running, regardless of reason, the user has enough information to deduce the culprit and stop running that application. This document recommends eliminating the file store rate limiting.
- Comment: This prevents malicious apps from wearing out the flash.
"Antitheft protection"
In the presence of "very strong requests from certain countries that a powerful anti-theft service be provided", here is a decentralized alternative. The spec hints that each school will have a server (used in the spec for laptop backup). Rather than tying each laptop to a great national central database in the sky, tie it to the local school server. No identification snapshots of the children need to be taken; the child is well known in the community; the child's name is enough. Local communities have informal, but very strong, understandings of property ownership (read Hernando de Soto's works for extended discussion of this assertion). If a child loses his laptop, the local school olpc administrator can enter the name of the student (the unique name by which the student is locally known; it does not even have to be a government-acknowledged name) and issue a shutdown demand. This document recommends that the lease between server pings be substantially longer, on the order of at least a month, in case the student travels. As the spec notes, a longer lease would still provide excellent theft protection. Indeed, this document recommends going further: allow the local school olpc administrator to specify the duration of the lease, enabling these people on the ground, armed with the best information for a specific community, to trade off shorter lease duration (to enhance anti-theft) against longer lease duration (to minimize the frequency with which students get locked out because of a glitch).

Let us enter into this leasing business with a clear understanding of what will happen: with millions of systems in the field, with the unforeseeable consequences of living that attend to all human activity, there will be students egregiously locked out of their systems. The stories of such system lockouts will travel at lightspeed through the mesh. Everyone will know stories, both true and exaggerated, of student lockout problems. The whole system will work to the detriment of the olpc initiative's reputation. Please allow the local administrator to minimize this usability nightmare whenever possible; this will minimize the harm done to olpc's reputation.
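Here is a minimal sketch of the server-side lease check in this decentralized alternative; the data structures, default lease length, and function names are all invented for illustration, not drawn from the Bitfrost spec.

 # Sketch of the decentralized alternative: each laptop leases against
 # its local school server; the administrator sets the lease length and
 # can flag a laptop stolen by the student's locally-known name.  All
 # names and values here are hypothetical.
 import time

 LEASE_SECONDS = 30 * 24 * 3600     # admin-configurable; a month by default
 stolen_students = set()            # names entered by the school administrator

 def renew_lease(student_name, now=None):
     """Return the new lease expiry time, or None to order a shutdown."""
     now = now if now is not None else time.time()
     if student_name in stolen_students:
         return None                # laptop reported stolen: deny the lease
     return now + LEASE_SECONDS

The administrator, knowing the community, can shorten LEASE_SECONDS where theft is common or lengthen it where connectivity is poor, which is exactly the local trade-off argued for above.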
"Doing bad things to other people"
The Bitfrost spec seems to claim that the olpc will be unattractive as a spambot because of cpu and bandwidth quotas. Alas. Due to the economics of the internet, limited-quota spambots are still financially attractive. It takes little cpu or bandwidth to send out a profitable stream of spam that may include nigerian hoaxes. The cost of building the attack virus is amortized over millions of target computers, the cost of distribution is approximately zero, and so the yield can be low while still being profitable. The Bitfrost spec, if augmented with the category-based installation strategy described here, can indeed strongly mitigate the threat that a particular machine will be turned into a spambot. This is an excellent first step, implementing half of the proposals that T&M thought would be impossibly expensive for the first release. However, as noted in T&M, even a tiny penetration rate would be enough, at large scale, to enable the reliable transmission of nigerian hoaxes to everyone in the mesh. Therefore, even with augmented Bitfrost, this author predicts that nigerian hoaxes will abound. This document therefore renews the recommendation made in T&M, that the security spec (i.e., the Bitfrost spec, unless there is a higher-level spec that considers not just the technology on the computer but also the holistic needs of the computer/user symbiote) explicitly include a requirement for a computer-based training system that will teach students how to create nigerian hoaxes, and lead them through mesh-gaming exercises in which they try to hoax one another, as a technique to make the owners more resistant to the hoax threat.
"core BIOS protection"
T&M recommends that the bootware allowing selection of a boot device be put in hardware, not firmware, to ensure that there exists a method of reliably recovering virus-corrupted laptops. It is not clear from the spec whether boot device selection is even possible, much less whether it is in hardware. Here we assume that the boot selection is in firmware. If so, then the integrity of the boot selection process depends on the smallest details of how the core BIOS protection works. As one obvious example, the signature verification software must itself be in firmware that cannot be changed without a verified signature. Is it? Presumably this one is so obvious, the answer is, "of course". Is the signature verification software off-the-shelf, or is someone writing their own (not relentlessly inspected) signature software? The point is, this whole mechanism must be scrutinized in detail for confidence. This document recommends such a detailed scrutiny, performed in conjunction with people not involved in olpc design and implementation, who will not suffer any dangerous forms of group-think that might lead to a failure to ask critical questions.
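To make the review target concrete, here is an illustrative sketch of a verify-before-flash step, using Ed25519 from the "cryptography" package; this is not the actual OLPC firmware code, and write_flash is a hypothetical routine. The property the recommended review must establish is that both the verifying key and this check itself live in storage that cannot be rewritten without passing the same check.

 # Illustration only (not OLPC firmware code): refuse to flash a new
 # firmware image unless its signature verifies against the embedded
 # public key.  write_flash() is a hypothetical flashing routine.
 from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
 from cryptography.exceptions import InvalidSignature

 def flash_if_signed(image, signature, olpc_public_key_bytes, write_flash):
     verifier = Ed25519PublicKey.from_public_bytes(olpc_public_key_bytes)
     try:
         verifier.verify(signature, image)   # raises InvalidSignature on failure
     except InvalidSignature:
         return False                        # refuse to flash an unsigned image
     write_flash(image)
     return True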
"Laptop disposal and transfer security"
The laptop re-initialization application is necessary in order to support sensible exchange among human beings, as discussed earlier. Indeed, the software needs to be widely available: if an owner must seek authorization from outside his local community to transfer ownership, it will present an egregious impediment to human action. At the same time, the reinitialization application is of course even more valuable to thieves than it is to individual owners, since it is the Achilles' heel of the anti-theft system. Since this program would be so valuable to burglars, the naive inclination would be to make the program very closely held, i.e., only the Deputy Secretary of Education should have the program (yes, the same Deputy Secretary identified earlier as the most likely threat to the batch theft of laptops). Such close holding will only produce disappointment. Surely, once there are a million or so of these machines in the world, someone somewhere will post the application on the Web. To make a sensible analysis, one must start by assuming that everyone has the program readily available.
If the antitheft machinery cannot be abandoned, a possible strategy is to have the reinitialization program run on the same school server that controls the anti-theft lease. Only this server can re-initialize the laptop, and only when the laptop makes a request to extend its anti-theft lease.
by Marc Stiegler