Saturday, June 26, 2010

White House Seeks Comment on Trusted ID Plan

It's important to protect privacy! This is a comment on a federal draft plan calling for the U.S. government to work with private companies to create what the White House Blog calls The Identity Ecosystem — an online environment "where individuals, organizations, services, and devices can trust each other because authoritative sources establish and authenticate their digital identities," also called "The Trusted ID Plan".

Introduction

Some have said that The Identity Ecosystem proposal is a call for a mandated Internet "driver's license". Others believe that The Identity Ecosystem would allow Internet users to complete transactions with confidence.

Our opinion is that while it could be the former, it cannot be the latter as it stands.

We are, thus, motivated to present this proposal, which supports the objectives of The Identity Ecosystem without adding any form of central control (or even the fear of it), while allaying privacy concerns, especially online. Further, it can easily be commercialized using an ecosystem with open participation, and internationalized, without any changes to current laws, which many privacy advocates argue do not sufficiently protect privacy online in the US and elsewhere.

How Did We Get Into This?

Before we present our contribution, we would like to invite everyone to step back and ask: "How did we get into this?"

Accounts may vary somewhat on how the Internet came to be what it is today. However, our exposition will not start controversially. We will stay with undisputed facts about authority and authoritative sources on the Internet, a topic that is central to The Identity Ecosystem proposal.

After about 1988, responsibility and authority for Internet policy were increasingly transferred from DARPA to the NSF and to the Federal Networking Council, and the NSF's role grew substantially with the creation of the NSFNET.

At that time, central control of the Internet as initially provided by DARPA was effective, as evidenced by the fact that spam was famously not allowed and did not exist.

But as the transfer process from DARPA continued, the central control paradigm started to change and weaken. Other points can be mentioned as well, such as the role played by BBN in showing that it could work, and the gradual decrease of the NSF's role to the point where the NSF was prohibited from spending money on it. And, on October 24, 1995, the Federal Networking Council famously passed a resolution defining the term "Internet".

Of course, control did not evaporate immediately after 1988, or even 1995. As control was relaxed, fear of control remained for a long while, as shown by Dr. Jon Postel's famous failed attempt to relax control by redirecting the Root to the IANA (Internet Assigned Numbers Authority) on Jan. 28, 1998.

This is the big picture that matters now, in 2010. From 1988 onwards, the US Government has deliberately relaxed its control over policy as the Internet has become increasingly commercialized and internationalized. Hence, confidence has gradually evaporated, because it was based on that now-missing central control.

This is how we got to be where we are, by the very nature of growing into and being a network of networks, where no single party can control both ends of a general communication channel (neither the sending nor the receiving end).

The Internet, Confidence, and Trust

The Internet was born within a strict central control system (DARPA), but then it grew and now (as we saw above) abhors central control. This development was enabled by technology, allowing networks of networks to be easily built and expand. However, it is not unlike what we find elsewhere. It seems to be a natural process, and not one that we can reverse (or should try to reverse).

If central control cannot be restored, how can confidence be restored? How can rogue operators be detected and prevented from using protected resources?

The word confidence means "with faith". It is a good concept to use when we can have an authoritative source (e.g., DARPA), but it fails miserably when such a source does not, or cannot, exist.

In such cases, we can use a concept called trust that has been intuitively known for thousands of years. Trust has been formally defined (Gerck, 1998) in terms of Information Theory, as "trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel".

This is an implicit definition, allowing us to derive many equivalent positive and negative statements and use them as "domain definitions", so that we can use the concept of trust coherently across different domains, including with people and machines. See http://bit.ly/TRUST and http://bit.ly/IT-TRUST

For example, the definition implies that a decision to trust someone, the source of a communication, the name on a certificate, or a record must be based on factors outside the assertion of trustworthiness that the entity makes for itself.
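
For illustration, here is a minimal sketch in Python of what that rule looks like for a verifier; the function, channel names, and data layout are hypothetical assumptions made for this example only, not part of any cited work.

# Minimal illustrative sketch: a claim received on one channel is never
# accepted on its own say-so; it must be corroborated by evidence that
# arrived outside that channel. All names here are hypothetical.

def trust_decision(claim, corroborations):
    """Accept `claim` only if at least one independent channel corroborates it.

    claim          -- dict with 'channel' (where it arrived) and 'assertion'
    corroborations -- list of dicts with 'channel' and 'assertion'
    """
    independent = [
        c for c in corroborations
        if c["channel"] != claim["channel"] and c["assertion"] == claim["assertion"]
    ]
    # Self-assertion alone ("trust me") never suffices.
    return len(independent) >= 1

# Example: an email's claimed sender is corroborated out-of-band by SMS.
claim = {"channel": "email", "assertion": "sender is alice@example.com"}
evidence = [{"channel": "sms", "assertion": "sender is alice@example.com"}]
print(trust_decision(claim, evidence))  # True
print(trust_decision(claim, []))        # False: only the self-assertion exists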

As another example, in terms of a conversation, we can use the equivalent domain definition (see references) "trust is that which provides meaning to information". Indeed, depending on the trust we have in the information source, the same words may have quite different meanings -- as can easily be demonstrated by comparing the different meanings given in newscasts by FoxNews and MSNBC to the very same information.

The important point here is that trust can be based on other factors, in addition to control or even fear of control.

Solution Considerations

A solution to the perceived identity problem in the Internet should, thus, investigate what other factors must now be introduced in order for trust to be induced without trying --and failing-- to re-introduce control, or fear of control.

In other words, there is no "Doomsday" scenario of control vs. anarchy either way. Rather, there are many reasons to abandon the mindset of recourse to control (or the fear of it) as the only solution.

For example, we need to take into account that the Internet is essentially open-ended. Anyone can join some network connected to the networks comprising the Internet. The Internet is a network of networks, with no common reporting of any kind that could allow objective authorizations to be authoritatively defined and enforced for all the different networks in the Internet.

We also share the same network as the attackers. Even if all identities were somehow centrally controlled, the false use of identities would not be preventable (as we know well from the brick-and-mortar world, for example). This further indicates that we should not hope to be able to confine what we cannot control.

Users, furthermore, are bound in their defenses by their own usability and resource limitations, and service providers can only deploy defenses that users will tolerate. This creates an "arms race" scenario with a large information and power asymmetry against users. Attackers know more and are much more powerful than users and even service providers, and can rather easily mount massive campaigns with massive resources (e.g., a phishing attack using thousands of bots), while defenders are frequently one step behind with the defenses they can deploy.

It would, therefore, be better for users if we avoid an "arms race" scenario, where defenders (users) will lag behind. Instead, we motivate the need to find "methods of peace" for achieving Internet security, where security is primarily viewed as a form of understanding, not of confinement or subjugation. For example, it makes little sense to confine use to X if one has no justification to trust the very means used to confine use to X, or if one does not know what one is confining in or out.

Thus, unless we are conditioned to think that the only way to effect security is by subjugation and fighting the "thing" out to the bitter end, which would mean the defeat of users and their rights, we should not pursue control (or the fear of it) as the only solution. It does not fit the problem and does not help users.

And the main point visible to users should be how to make non-conformance public, rather than certifying conformance.

Not only would there then be much less liability for the service, but the user is kept in the verification loop --as the user should be-- rather than blindly relying on some sort of oracle. Also, in security terms, not only are fewer attacks possible, but attacks are less direct in creating an error condition. This does not mean that the user would have to make the determination manually in every case and be burdened by it. Non-conformance detection can be automated, classified by risk, and cached, and previous results can be used until they expire according to their respective risk class.
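
As a rough sketch of how such automation might look (the risk classes, expiry times, and the simplistic detector below are illustrative assumptions, not a specification of any particular system):

import time

# Illustrative sketch: automate non-conformance detection, classify results
# by risk, cache them, and reuse a cached result until it expires according
# to its risk class. The classes, expiry times, and the simplistic check
# below are hypothetical placeholders.

EXPIRY_BY_RISK = {"low": 24 * 3600, "medium": 3600, "high": 60}  # seconds

KNOWN_NON_CONFORMANT = {"badsite.example"}   # stand-in for a real detector

def check_non_conformance(entity):
    """Placeholder detector: True means non-conformance was found."""
    return entity in KNOWN_NON_CONFORMANT

_cache = {}  # entity -> (result, risk_class, timestamp)

def non_conformant(entity, risk_class="medium"):
    entry = _cache.get(entity)
    if entry:
        result, cls, ts = entry
        if time.time() - ts < EXPIRY_BY_RISK[cls]:
            return result                      # reuse until it expires
    result = check_non_conformance(entity)     # automated detection
    _cache[entity] = (result, risk_class, time.time())
    return result

print(non_conformant("badsite.example", "high"))   # True, cached for 60 s
print(non_conformant("goodsite.example", "low"))   # False, cached for a day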

Along these lines, identity should not refer only to an attribute (name) or some aggregation of attributes (name, address, SSN) but, even more strongly and effectively, to relationships ("Is there an identity relationship between the true owner of this Mastercard and the person presenting it to me?"), connections, the absence of connections, and other connection properties. Nor should it rest on one authoritative channel vouching for the identity; rather, multiple factors should be used to prevent a single point of failure.
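
A minimal sketch of such a multi-channel check, assuming hypothetical channel names and an arbitrary threshold of two independent confirmations, could look like this:

# Illustrative sketch: score an identity assertion from several independent
# channels (relationships, prior contact, card issuer, etc.) and require
# agreement from more than one, so no single channel is a point of failure.
# All channel names and the threshold are hypothetical.

def verify_identity(assertion, channels, threshold=2):
    """channels: mapping of channel name -> callable returning True/False."""
    confirmations = [name for name, check in channels.items() if check(assertion)]
    return len(confirmations) >= threshold, confirmations

channels = {
    "card_issuer_relationship": lambda a: a.get("issuer_confirms", False),
    "prior_communication":      lambda a: a.get("known_contact", False),
    "second_channel_challenge": lambda a: a.get("sms_challenge_ok", False),
}

ok, used = verify_identity(
    {"issuer_confirms": True, "sms_challenge_ok": True}, channels
)
print(ok, used)  # True ['card_issuer_relationship', 'second_channel_challenge']

The point of the threshold is simply that no single channel, however authoritative, decides the outcome by itself.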

Privacy Rights

Let us think about the US Government getting involved in assuring identity and protecting privacy at the same time. I'd like to mention Justice Scalia's private (but not secret) comment to a former team member, who had an opportunity to speak with Justice Scalia in 1999 and asked him whether privacy warranted strong legal protection in the US. This is at least tangentially relevant to any US Government identity initiative such as the Trusted ID Plan.

"Not based on the constitution", Scalia said. "Nowhere does the constitution address the right to privacy. If people want the Right for Privacy, they will have to change the constitution."

More importantly, Scalia added that "Free Speech is defended by the constitution, and that makes the Right for Privacy very difficult to defend."

The Scalia quotes are as verbatim as memory allows, are not jurisprudence, and other Justices may disagree, but the idea is quite easy to verify: the US Constitution does not provide a basis for the Supreme Court to rule in favor of privacy rights purely on constitutional grounds, and the Court could deny privacy rights in the event of a clash with the well-established constitutional right to freedom of expression (by a person or a company).

In other words, it is possible that a company's right to freedom of expression (as in the recent election spending Supreme Court decision) would trump an individual's right to privacy.

It is further well-known that users' online privacy rights are reduced in the US by the 1986 Electronic Communications Privacy Act (ECPA), which is complex and often unclear. What is clear is that the ECPA does not consistently require a search warrant for access to online private communications or to the locations of mobile devices. Data or files stored in the Internet "cloud," such as webmail or identity data, are subject to an often lower privacy standard than the one that applies when the same data is stored on an individual's hard drive at home or in the office.

Considering also the lack of privacy protection in most US states' constitutions (California and Washington being examples where privacy is strongly protected by the state constitution) and in the US Constitution, the well-known information and power asymmetries behind the increasing market failure of private companies to protect users' privacy, and the lack of international harmonization in this regard, the conclusion is clear:

Currently, we should be careful about relying on government, the market (private companies), or legal recourse to protect users' identities.

Thus, a missing yet important part of The Identity Ecosystem proposal should be helping to improve laws regarding identity and privacy, which would facilitate the adoption of useful technologies.

How about The Identity Ecosystem?

Based on the discussion above, both technically and in terms of privacy protection, it will not work to have the Internet fall back on the idea of confidence supported by authoritative sources that would purportedly "establish and authenticate users' digital identities". Central control, and even the fear of it, is neither possible nor privacy-preserving.

Thus, if we think of it as a "Trusted ID Plan" supported by authoritative sources, its use will be restricted to those authorities each user can trust (and, likely, pay) to the extent that user desires, which varies for each user, each case, and also over time and location. This will likely lead to a number of disjoint local systems that cannot talk to each other or work internationally.

However, The Identity Ecosystem may use a more comprehensive approach.

Many real solutions are possible, but they should all be founded on the idea that trust can be based on other factors, in addition to control or even the fear of control. For example, a user should be able to effectively refer to relationships, or even just to prior communication established online with other entities, in defining an assertion of identity. A solution should, thus, investigate what other factors must be introduced in order for trust to be induced.

In so doing, we point out that self-assertions cannot induce trust (http://bit.ly/IT-TRUST). Saying "trust me" should not make you trust me. The verifier needs access to multiple and varied trust channels.

Further, to be usable, the solution should reduce the frustration of having to use a different tool whenever one needs security and regulatory compliance. This would also help shift the focus away from security itself, so that people can at long last focus on what they want to do, not on how they have to do it.

A Definite Proposal

In addition to the considerations and suggested guidelines above, I also submit a definite proposal that has been field-tested in millions of cases in the US and worldwide since 2004, and that can be demonstrated today, to anyone, at zero cost.

The author, based on the ideas of trust developed earlier (Gerck, 1998, op. cit.), has developed a protocol that fits the above considerations, provides multiple and varied trust channels, does not rely on "trust me", and yet provides a "no target" approach to eliminate concerns about attacks stealing identity credentials from online servers. It is the "identify & authorize" engine, as used by ZSentry.

ZSentry is available without cost for personal use (Free Basic) and can be used with mail, webmail, file transfer & storage, IM, SMS, HTTP, fax and other communication protocols. ZSentry supports the ZS authentication mode, as well as PKI/X.509 and PGP. ZSentry also extends these standards in significant ways. An important issue solved, of course, is the problem of initial contact. ZSentry allows secure first contact and reply without previous interaction (e.g., exchanging passwords, requiring registration) or work (e.g., searching a directory, solving puzzles), and provides a number of life-cycle control functions, including release, expiration, and delivery verification.

ZSentry also supports SAML and SSO, so that it can be part of a federated-identity ecosystem. And you do not have to worry about your identity being stolen online at the ZSentry site, or at other sites you visit: ZSentry uses its "no target" technology to protect your login credentials and keys at ZSentry, and the SAML identity authorization does not carry them either.

False logins (e.g., using credentials stolen with a key-logger) and duplicate use of the same account may be a threat in some cases, especially with SSO. To help prevent them, ZSentryID can be used to introduce a fresh second-channel challenge that changes for every authentication, for example by cell phone SMS. ZSentry also uses Adaptive Security, and other techniques, to help allay such concerns.
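
For illustration only, a generic fresh second-channel challenge could look like the sketch below; this is not ZSentry's actual implementation, and send_sms() is a hypothetical placeholder for an SMS gateway.

import hmac, hashlib, secrets

# Generic sketch of a fresh second-channel challenge (not ZSentry's actual
# implementation): a one-time code is generated per authentication attempt,
# sent over a separate channel (e.g., SMS), and compared in constant time.

def send_sms(phone_number, text):
    print(f"[SMS to {phone_number}] {text}")  # hypothetical gateway placeholder

def issue_challenge(phone_number):
    code = f"{secrets.randbelow(10**6):06d}"   # fresh 6-digit code each time
    send_sms(phone_number, f"Your one-time code is {code}")
    return hashlib.sha256(code.encode()).hexdigest()  # store only a digest

def verify_challenge(stored_digest, submitted_code):
    digest = hashlib.sha256(submitted_code.encode()).hexdigest()
    return hmac.compare_digest(stored_digest, digest)  # constant-time compare

stored = issue_challenge("+1-555-0100")
print(verify_challenge(stored, "000000"))  # almost certainly False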

The trusted introducer function provided by ZSentry does not need to be carried over forever. This is not a single provider, lock-in proposal.

Much like a booster rocket, once the transaction starts, other sources of trust are introduced (e.g., who do you know that I trust and can verify you by? What is your signed library key?) to the point that the ZSentry introducer function can be jettisoned without prejudice.
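
As a generic illustration of such a handoff (this is not ZSentry's actual protocol, just a simple key-pinning sketch under assumed names), once the introducer has vouched for a peer's key, later contacts can be verified directly and the introducer drops out of the loop:

# Generic sketch of a trusted-introducer handoff (not ZSentry's actual
# protocol): after first contact brokered by an introducer, the peers pin
# each other's public keys and verify later messages directly, so the
# introducer can be "jettisoned" without loss.

pinned_keys = {}  # peer id -> public key, learned at introduction time

def first_contact(peer_id, peer_pubkey_from_introducer):
    """Record the key vouched for by the introducer (used only once)."""
    pinned_keys[peer_id] = peer_pubkey_from_introducer

def verify_later(peer_id, presented_pubkey):
    """Check subsequent contacts against the pinned key directly;
    the introducer is no longer in the loop."""
    return pinned_keys.get(peer_id) == presented_pubkey

first_contact("alice", "PUBKEY_A")        # introducer vouches once
print(verify_later("alice", "PUBKEY_A"))  # True, no introducer needed
print(verify_later("alice", "PUBKEY_X"))  # False: mismatch detected locally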

With our proposal, there is no "trusted ID" that will suddenly lose all its evidence value if not renewed.

Technical information on ZSentry identity verification is available at http://zsentry.com/identity.htm

I submit that ZSentry supports the objectives of The Identity Ecosystem without adding privacy concerns, especially online, where it is not a matter of if but of when and how often information on servers (even servers hosted at the Pentagon or the FBI) will be disclosed.

Best regards,
Ed Gerck

It's important to protect privacy! You can vote on our proposal if you wish at: http://www.nstic.ideascale.com/a/dtd/The-ZSentry-Proposal/45785-9351