It's important to protect privacy! This is a comment on a federal draft plan calling for the U.S. government to work with private companies to create what the White House Blog calls The Identity Ecosystem — an online environment "where individuals, organizations, services, and devices can trust each other because authoritative sources establish and authenticate their digital identities," also called "The Trusted ID Plan".
Introduction
Some have said that The Identity Ecosystem proposal is a call for a mandated Internet "driver's license". Others believe that The Identity Ecosystem would allow Internet users to complete transactions with confidence.
Our opinion is that while it could be the former, it cannot be the latter as it stands.
We are, thus, motivated to present this proposal, which supports the objectives of The Identity Ecosystem without adding any form of central control (or even fear of it), while allaying privacy concerns, especially online. Further, it can be easily commercialized using an ecosystem with open participation, and internationalized, without any changes to current laws, which many privacy advocates argue do not sufficiently protect privacy online in the US and elsewhere.
How Did We Get Into This?
Before we present our contribution, we would like to invite everyone to step back and ask "How did we get into this?"
Accounts may vary somewhat on how the Internet came to be what it is today. However, our exposition will not start controversially. We will stay on undisputed facts in terms of authority and authoritative sources in the Internet, which are quite central to The Identity Ecosystem proposal.
Responsibility and authority for Internet policy transferred increasingly from DARPA to the NSF and to the Federal Networking Council after about 1988, and NSF's role increased substantially with the creation of the NSFNET.
At that time, central control of the Internet as initially provided by DARPA was effective, as evidenced by the fact that spam was famously not allowed and did not exist.
But as the transfer process from DARPA continued, the central control paradigm started to change and weaken. Other points can also be mentioned, such as the role played by BBN in showing that it could work, and the gradual decrease of NSF's role to the point where the NSF was prohibited from spending money on it. And, on October 24, 1995, the Federal Networking Council famously passed a resolution defining the term Internet.
Of course, control did not evaporate immediately after 1988 or even 1995. As control was relaxed, fear of control remained for a long while, as shown by Dr. Jon Postel's famous failed attempt to relax control by redirecting the Root to the IANA (Internet Assigned Numbers Authority) on Jan. 28, 1998.
This is the big picture that matters now, in 2010. From 1988 onwards, the US Government has deliberately relaxed its control over policy as the Internet has become increasingly commercialized and internationalized. Hence, confidence has gradually evaporated, because it was based on that now-missing central control.
This is how we got to be where we are, by the very nature of growing into and being a network of networks, where no one can control both ends of a general communication channel (neither sending nor receiving).
The Internet, Confidence, and Trust
The Internet was born within a strict central control system (DARPA), but then it grew and now (as we saw above) abhors central control. This development was enabled by technology, allowing networks of networks to be easily built and expand. However, it is not unlike what we find elsewhere. It seems to be a natural process, and not one that we can reverse (or should try to reverse).
If central control cannot be restored, how can confidence be restored? How can rogue operators be detected and prevented from using protected resources?
The word confidence means "with faith". It is a good concept to use when we can have an authoritative source (e.g., DARPA), but it fails miserably when such a source does not, or cannot, exist.
In such cases, we can use a concept called trust that has been intuitively known for thousands of years. Trust has been formally defined (Gerck, 1998) in terms of Information Theory, as "trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel".
This is an implicit definition, allowing us to derive many equivalent positive and negative statements and use them as "domain definitions", so that we can use the concept of trust coherently across different domains, including with people and machines. See http://bit.ly/TRUST and http://bit.ly/IT-TRUST
For example, the definition implies that a decision to trust someone, the source of a communication, the name on a certificate, or a record must be based on factors outside the assertion of trustworthiness that the entity makes for itself.
As another example, in terms of a conversation, we can use the equivalent domain definition (see references) "trust is that which provides meaning to information". Indeed, according to the trust we have in the information source, the same words may have quite different meanings -- as can be easily demonstrated by watching the different meanings presented in newscasts by FoxNews and MSNBC for the very same information.
The important point here is that trust can be based on other factors, in addition to control or even fear of control.
Solution Considerations
A solution to the perceived identity problem in the Internet should, thus, investigate what other factors must now be introduced in order for trust to be induced without trying --and failing-- to re-introduce control, or fear of control.
In other words, there is no "Doomsday" scenario of control vs anarchy either way. Rather, there are many reasons to abandon the mindset of recourse to control (or fear of it) as the only solution.
For example, we need to take into account that the Internet is essentially open-ended. Anyone can join some network connected to networks comprising the Internet. The Internet is a network of networks, with no common reporting of any kind that could allow objective authorizations to be authoritatively defined and enforced, for all the different networks in the Internet.
We also share the same network as the attackers. Even if all identities were somehow centrally controlled, the false use of identities would not be preventable (as we know well from the brick-and-mortar world, for example). This further indicates that we should not hope to be able to confine what we cannot control.
Users, furthermore, are bound in their defenses by their own usability and resource limitations, and service providers can only deploy defenses that users will tolerate. This creates an "arms race" scenario with a large information and power asymmetry against users. Attackers know more and are much more powerful than users and even service providers, and can rather easily mount massive campaigns with massive resources (e.g., a phishing attack using thousands of bots), while defenders are frequently one step behind with the defenses they can deploy.
It would, therefore, be better for users that we avoid an "arms race" scenario, where defenders (users) will lag behind. Instead, we motivate the need to find "methods of peace" for achieving Internet security, where security must be primarily viewed as a form of understanding, not of confinement or subjugation. For example, it makes little sense to confine use to X if one has no justification to trust that which one is using in order to confine use to X, or if one does not know what one is confining in or out.
Thus, unless we are conditioned to think that the only way to effect security is by subjugation and fighting the "thing" out to the bitter end, which would mean the defeat of users and their rights, we should not pursue control (or fear of it) as the only solution. It does not fit the problem and does not help users.
And the main visible point to users should be about how to make non-conformance public rather than certifying conformance.
Not only would there then be much less liability for the service, but the user is kept in the verification loop --as the user should be-- rather than blindly relying on some sort of oracle. Also, in security terms, not only are fewer attacks possible, but attacks are also less direct in creating an error condition. This does not mean that the user would manually have to make the determination in every case and be burdened by it. The non-conformance detection can be automated, classified by risk, and cached, and previous results can be used until they expire according to their respective risk class.
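As a rough illustration of that last point, here is a minimal sketch, in Python, of risk-classified caching of automated non-conformance checks. The cache lifetimes, the risk classes, and the detect/classify_risk callables are hypothetical placeholders, not part of any existing system; the only idea taken from the text is that previous results can be reused until they expire according to their risk class.

from dataclasses import dataclass
from time import time

# Hypothetical cache lifetimes (in seconds) per risk class; the article does not
# specify concrete values, so these are illustrative assumptions only.
TTL_BY_RISK = {"low": 24 * 3600, "medium": 3600, "high": 60}

@dataclass
class CachedResult:
    non_conformant: bool   # True if a non-conformance was detected
    risk: str              # risk class used to pick the expiry time
    checked_at: float      # timestamp of the last actual check

_cache: dict[str, CachedResult] = {}

def check_non_conformance(entity_id: str, detect, classify_risk) -> bool:
    """Return True if `entity_id` is non-conformant.

    `detect` and `classify_risk` stand in for whatever automated checks and
    risk classification a real service would use; cached results are reused
    until they expire according to their risk class.
    """
    cached = _cache.get(entity_id)
    if cached and time() - cached.checked_at < TTL_BY_RISK[cached.risk]:
        return cached.non_conformant            # still fresh: reuse, no re-check
    result = detect(entity_id)                  # automated detection
    risk = classify_risk(entity_id, result)     # classify by risk
    _cache[entity_id] = CachedResult(result, risk, time())
    return result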
Along these lines, identity should not only refer to an attribute (name) or some aggregation of attributes (name, address, SSN) but, even more strongly and effectively, refer to relationships ("Is there an identity relationship between the true owner of this Mastercard and the person presenting it to me?"), connections, absence of connections, and other connection properties. Nor is it about one authoritative channel vouching for the identity; rather, multiple factors should be used to prevent a single point of failure.
Privacy Rights
Let us think about the US Government getting involved in assuring identity and protecting privacy at the same time. I'd like to mention Justice Scalia's private (but not secret) comment to a previous team member, who had an opportunity to speak with Justice Scalia in 1999 and asked him whether privacy warranted strong legal protection in the US. Of course, this is at least tangentially relevant to any US Government identity initiative such as the Trusted ID Plan.
"Not based on the constitution", Scalia said. "Nowhere does the constitution address the right to privacy. If people want the Right for Privacy, they will have to change the constitution."
More importantly, Scalia added that "Free Speech is defended by the constitution, and that makes the Right for Privacy very difficult to defend."
The Scalia quotes are as verbatim as memory can allow, are not jurisprudence, and other Justices may disagree, but the idea is quite easy to verify: the US Constitution does not explicitly address privacy, so the Supreme Court might not rule in favor of privacy rights purely on constitutional grounds and could deny privacy rights in the event of a clash with the well-established constitutional right to freedom of expression (by a person or a company).
In other words, it is possible that a company's right to freedom of expression (as in the recent election spending Supreme Court decision) would trump an individual's right to privacy.
It is further well-known that users' online privacy rights are reduced in the US by the 1986 Electronic Communications Privacy Act (ECPA), which is complex and often unclear. What is clear is that the ECPA does not mandate a search warrant to access online private communications and the locations of mobile devices. Data or files stored in the Internet "cloud," such as webmail or identity data, are subject to an often lower privacy standard than the standard that applies when the same data is stored on an individual's hard drive at home or in the office.
Consider also the lack of privacy protection in many US States' constitutions (California and Washington are examples where privacy is strongly protected by the state constitution) and in the US Constitution; the well-known information asymmetry and other problems (e.g., power asymmetry) behind the increasing failure of the market of private companies to protect users' privacy; and the lack of international harmonization in this regard. The conclusion is clear:
Currently, we should be careful in relying on government, market (private companies), or legal recourse to protect users' identities.
Thus, a missing and yet important part of The Identity Ecosystem proposal in this matter should be helping to improve laws regarding identity and privacy, which would facilitate the adoption of useful technologies.
How about The Identity Ecosystem?
Based on the discussion above, both technically and in terms of privacy protection, it will not work to have the Internet fall back on the idea of confidence supported by authoritative sources that would purportedly "establish and authenticate users' digital identities". Central control, and fear of it, is neither possible nor privacy-preserving.
Thus, if we think of it in terms of a "Trusted ID Plan" supported by authoritative sources, it will be restricted in use to those authorities each user can trust (and, likely, pay) within their own desired extent, which varies for each user, each case, and also over time and location. This will likely lead to a number of disjoint local systems that will not be able to talk to each other or internationally.
However, The Identity Ecosystem may use a more comprehensive approach.
Many real solutions are possible, but they should all be founded on the idea that trust can be based on other factors, in addition to control or even fear of control. For example, a user should be able to effectively refer to relationships, or even just prior communication established online with other entities, in defining an assertion of identity. A solution should, thus, investigate what other factors must be introduced in order for trust to be induced.
In so doing, we point out that self-assertions cannot induce trust (http://bit.ly/IT-TRUST). Saying "trust me" should not make you trust me. The verifier needs to have access to multiple and varied sources of trust channels.
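To make that concrete, here is a minimal Python sketch of a verifier combining evidence from several independent trust channels. The channel names and weights are assumptions made up for illustration, not part of the article or of any ZSentry interface; the one rule taken from the text is that a self-assertion, by itself, contributes nothing.

# Illustrative only: channel names and weights are assumptions.
CHANNEL_WEIGHTS = {
    "prior_communication": 0.4,   # earlier successful exchanges with the verifier
    "introducer": 0.3,            # vouching by a party the verifier already trusts
    "second_channel": 0.3,        # e.g., a code confirmed over SMS or phone
    "self_assertion": 0.0,        # saying "trust me" should not make you trust me
}

def trust_score(evidence: dict[str, float]) -> float:
    """Combine evidence (0..1 per channel) from multiple independent channels.

    No single channel is decisive on its own, so there is no single point of
    failure, and a self-assertion alone always scores zero.
    """
    return sum(CHANNEL_WEIGHTS.get(ch, 0.0) * min(max(v, 0.0), 1.0)
               for ch, v in evidence.items())

# Example: an entity offering only its own assertion is not trusted.
assert trust_score({"self_assertion": 1.0}) == 0.0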
Further, to be usable, we want to reduce user frustration at having to use a different tool when one needs security and regulation compliance. This would also help reduce the focus on security, so that people can at long last focus on what they want to do, not how they have to do it.
A Definite Proposal
In addition to the considerations and suggested guidelines above, I also submit a definite proposal that has been field-tested in millions of cases in the US and worldwide since 2004, and can be demonstrated today, to anyone, at zero cost.
The author, based on the ideas of trust developed earlier (Gerck, 1998, op. cit.), has developed a protocol that fits the above considerations, provides multiple and varied sources of trust channels, does not rely on "trust me", and yet provides a "no target" approach to eliminate concerns about attacks stealing identity credentials on online servers. It is the "identify & authorize" engine, as used by ZSentry.
ZSentry is available without cost for personal use (Free Basic) and can be used with mail, webmail, file transfer & storage, IM, SMS, HTTP, fax and other communication protocols. ZSentry supports the ZS authentication mode, as well as PKI/X.509 and PGP. ZSentry also extends these standards in significant ways. An important issue solved, of course, is the problem of initial contact. ZSentry allows secure first contact and reply without previous interaction (e.g., exchanging passwords, requiring registration) or work (e.g., searching a directory, solving puzzles), and provides a number of life-cycle control functions, including release, expiration, and delivery verification.
ZSentry also supports SAML and SSO, so that it can be part of a federated-identity ecosystem. And you do not have to worry about your identity being stolen online at the ZSentry site, or at other sites you visit. ZSentry uses its "no target" technology to protect your login credentials and keys at ZSentry, and the SAML identity authorization does not carry them either.
False login (e.g., by stealing a user's credentials with a key-logger) and duplicate use of the same account may be a threat in some cases, especially with SSO. ZSentryID can be used to introduce a fresh second-channel challenge that changes for every authentication, for example by cell phone SMS. ZSentry also uses Adaptive Security and other techniques to help allay such concerns.
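As an illustration of a fresh second-channel challenge, here is a minimal Python sketch. This is not ZSentry's actual protocol or API; send_sms() is a hypothetical placeholder transport, and the code only shows the general pattern of a single-use code that changes for every authentication.

import hmac
import secrets

_pending: dict[str, str] = {}   # username -> one-time code awaiting confirmation

def send_sms(phone: str, message: str) -> None:
    # Placeholder transport; a real deployment would use an SMS gateway.
    print(f"(would send to {phone}): {message}")

def start_login(username: str, phone: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"       # new code for every authentication
    _pending[username] = code
    send_sms(phone, f"Your one-time login code is {code}")

def finish_login(username: str, submitted_code: str) -> bool:
    expected = _pending.pop(username, None)        # single use: removed on first attempt
    if expected is None:
        return False
    return hmac.compare_digest(expected, submitted_code)  # constant-time comparison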
The trusted introducer function provided by ZSentry does not need to be carried over forever. This is not a single provider, lock-in proposal.
Much like a booster rocket, once the transaction starts, other sources of trust are introduced (e.g., who do you know that I trust and can verify you by? What is your signed PGP key?) to the point that the ZSentry introducer function can be jettisoned without prejudice.
With our proposal, there is no "trusted ID" that will suddenly lose all its evidence value if not renewed.
Technical information on ZSentry identity verification is available at http://zsentry.com/identity.htm
I submit that ZSentry supports the objectives of The Identity Ecosystem without adding privacy concerns, especially online where it's not a matter of if but when and how often information on servers (even hosted at the Pentagon or FBI) will be disclosed.
Best regards,
Ed Gerck
It's important to protect privacy! You can vote on our proposal if you wish at: http://www.nstic.ideascale.com/a/dtd/The-ZSentry-Proposal/45785-9351
10 comments:
Ed,
Thanks.
It's all a difficult problem set to be sure, particularly since there are so many intangibles involved -- not to mention so much politics. But I think we're in agreement that the approach being promulgated by the White House this round is not a reasonable way forward.
L
[as received by email]
Hi Ed,
It's been a while. How is everything going?
Nice article, as always. Your take on making non-conformance public is interesting and thinking out of the box. It reminds me of a story I heard years ago. A group of elementary students was given a weekend homework assignment to prepare to recite the alphabet backwards on Monday. All weekend the kids memorized - z, y, x, etc. Except one girl that just played. Monday arrives and the kids in turn tried to recite the alphabet backwards. The little girl that played went last and she was very successful. She got up in front of the class and turned around so she was backwards and recited - a, b, c, etc.
One of the biggest issues I see is places like IRC - where the hard-core (and otherwise) hackers hang out. Most people don't go there - IRC is too murky.
Another issue - I assume (maybe incorrectly) non-compliance would be on a list of some type similar to a black hole list for spam. How many would actually look? If a user name - or other identifier - was listed, that person could just adopt another. The list can be endless. Some may find some useful info, but I think most would not even look.
I also think this is a much better idea than the present one discussed on IP.
I was cracking up at your line: Saying "trust me" should not make you trust me. It reminds me of a teenage boy trying to get somewhere with a girl - trust me was their line :)
And: information on servers (even hosted at the Pentagon or FBI) - they get hacked too. The only really safe servers/workstations are those not connected to anything else. But don't get me started on that or other security issues.
A few years ago I read 'The Cuckoo's Egg'. I think it was published in the 70s and is a true story. The same stupid issues exist today. Doesn't anyone learn?
Lynn
[as received by email]
I said something on the aba list: that natural fears (of control etc) had not been admitted, qualified or addressed. Thus trust is not possible (since one has not provided a framework for the act of qualification).
Peter
Thanks all for the interest. This is a reply to all previous comments.
Yes, it is critical that the main visible point to users should be about how to make non-conformance public rather than certifying conformance.
Not only is there then much less liability for the service, but the user is kept in the verification loop --as the user should be-- rather than blindly relying on some sort of oracle. Also, in security terms, not only are fewer attacks possible, but attacks are also less direct in creating an error condition.
Of course, I am simplifying, but you can go and try it yourself for free. It can work directly from Gmail, Outlook, or Apple Mail, or from a web browser doing SSL SMTP through HTTPS by way of ZSentry. There is no plugin or installation.
And, once you have your identity through ZSentry, you can use it at another place through the ZSentry-SAML interface, and you do not have to worry about your identity being stolen online. ZSentry uses its "no target" technology to protect your login credentials and keys, and the SAML-ized identity authorization does not carry them either.
An important issue to solve, of course, is the problem of initial contact.
The main point is that, try as you may, the initial contact does not happen in a vacuum. One of the endpoints, most likely the initiator (sender), must have had previous contact with a service (e.g., the Gmail account from which the ZSentry or ZSentry-PGP mail is purportedly sent). That service may or may not have the full extent of trust needed to be a trusted introducer for the needs of the recipient, but it is a point of trust that can be evaluated and used to contribute to a final measure of trust.
Furthermore, the trusted introducer function provided by ZSentry does not need to be carried over forever. Much like a booster rocket, once the transaction starts, other sources of trust are introduced (e.g., who do you know that I trust and can verify you by? What is your signed PGP key?) to the point that the ZSentry introducer function can be jettisoned without prejudice.
The http://bit.ly/TRUST reference in the article has more.
Best regards,
Ed Gerck
[received by email]
Thanks, I think I am getting a bit closer. I see elements of Web of Trust in your proposal, and also some ideas from social networking, because it seems like a person uses relationships established online with other people to bolster his or her assertion of identity. Is that right?
A.
[in reply to A.]
Yes, and this is all automated.
And it can extend beyond ZSentry, where it can become more useful. For example, a person who has N>>1 address book entries created (in ZSentry, not disclosed) by successfully communicating over time with N diverse people (e.g., as evidenced by IP and browser diversity) could be evaluated differently from someone else with just a few recent contacts.
We also note that trust is a "slow" process. It must be earned. You see a counter-example in scams, where criminals like to add an element of urgency to win over the expected time factor that the victim may intuitively require.
That's why in "successfully communicating over time with N diverse people" one of the non-conformance requirements is evident: if it all happens too quickly. These requirements are not willy-nilly but follow from the extensive work on trust reported in the reference cited, and others such as http://nma.com/papers/it-trust-part1.pdf
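To illustrate, here is a toy Python sketch of that kind of evaluation: diversity of peers, IPs, and browsers raises the score, while a contact history accumulated too quickly is flagged as non-conformant. The thresholds and the weighting are illustrative assumptions only, not values taken from the cited work or from ZSentry.

from datetime import timedelta

MIN_HISTORY = timedelta(days=30)   # assumed minimum span for a large contact history

def evaluate_contacts(contacts: list[dict]) -> dict:
    """Each contact is a dict with 'peer', 'ip', 'browser', and 'when' (a datetime)."""
    if not contacts:
        return {"score": 0.0, "non_conformant": False}
    peers = {c["peer"] for c in contacts}
    ips = {c["ip"] for c in contacts}
    browsers = {c["browser"] for c in contacts}
    span = max(c["when"] for c in contacts) - min(c["when"] for c in contacts)
    too_fast = len(peers) > 10 and span < MIN_HISTORY     # history earned too quickly
    diversity = (len(peers) + len(ips) + len(browsers)) / (3 * len(contacts))
    return {"score": 0.0 if too_fast else diversity, "non_conformant": too_fast}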
Best regards,
Ed Gerck
[received by email]
Hi Ed,
ZSentry is only part of a solution - it just does mail. There are other solutions available that do more than mail, both free and paid.
Just like security theater though, how much does this accomplish? In one solution I know, the sender must receive permission from the recipient before mail will be received.
Kynn
[reply to Kynn]
>ZSentry is only part of a solution - it just does mail.
It already does mail, webmail, IM, SMS, and secure storage (see site). And, remember those mail-fax gateways? Mail is just an SMTP protocol -- at the end of it you can have anything you want: fax, even HTTPS in an Ajax window. I bet you could make it do FTP or anything you want. Maybe even IRC.
It can also include elements in upper tiers, such as billing and ecommerce -- using mail, webmail, IM, SMS, fax, HTTPS, FTP,...
so, where's the limit?
> There are other solutions available that do more than mail, both free and paid.
But there is no other solution available with the "no target" property, and others that make a real difference to reduce risk online and improve usability.
> Just like security theater though, how much does this accomplish? In one solution I know, the sender must receive permission from the recipient before mail will be received.
PKI does that -- unless you have the public-key cert for the recipient and can verify the CA sig, no deal. It's not that useful and is often cited as one of the shortcomings of PKI ("where's your new cert?" and "I cannot validate your CA").
The post office seems to me to offer a more natural paradigm to follow. It allows that, but does it post-sending. So the recipient can decide at a later time, and it does not impact the sender. This is the method used by ZSentry.
Best regards,
Ed Gerck
[received online]
so you are saying trust Zme? Single technology solutions make me uncomfortable.
I read through your site and can see how ZSentry uses middleware to create a persistent connection between parties and encrypt data transfer, but I don't see how the identity problem (reliably tying an individual to a connection endpoint) is solved.
(anonymous)
(reply to anonymous)
Single technology is not single provider. But this is not even single technology. We say "Many real solutions are possible, but they should all be founded on the idea that trust can be based on other factors, in addition to control or even fear of control."
And we also want to get specific, with a definite proposal, and that's why we propose ZSentry, which complies with the privacy/control considerations and adds some unique benefits including the "no target" property.
About how ZSentry solves the identity problem, including for first-contact and first-reply, please see ZSentry Identity Verification at http://zsentry.com/identity.htm