Thursday, December 30, 2010

Prevent Attacks and Improve Usability By Understanding Asymmetric Threats

How asymmetric threats can blindside your organization, and how to prevent attacks while supporting productivity.

Users are typically blamed as the weak link in security. Yet we also need to take into account that users are often just naively reacting to asymmetric threats, which are difficult to evaluate even for experts.

Asymmetric Threat: a low-probability event with high consequences.

The difficulty with preventing this type of threat is that even though the consequences could be disastrous (including collateral damage, information leaks, and reputation loss), users may mistakenly accept the risk based solely on the event's low probability. This is well known in the medical field for an elective procedure called VBAC: many patients choose to undergo the procedure because the risk is low (say, 1%), even though the consequence can be death or serious, lasting harm to both mother and child.

For example, in spite of all the attacks we hear about on the Internet, the probability that any specific organization will be successfully attacked is likely to be low. Consequently, users commonly see and rate online attacks as low-probability events ("Attackers will not guess my password.") and are not usually as concerned as they should be about the high-consequence side.

As another example, consider phishing. It is a typical asymmetric threat, where the user is asked to do something that seems safe (low probability of failure) without realizing the high consequences.

Users are unreliable security partners when evaluating asymmetric threats.

Users tend to dismiss potential problems that they perceive as low-probability events. Organizations, on the other hand, must also look carefully at what is at stake and at the expected loss per event, which captures the high-consequence risk and can motivate adequate countermeasures.
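
To see why the two views diverge, here is a minimal sketch with hypothetical numbers: the per-event probability looks negligible to a user, while the expected loss tells the organization what is actually at stake.

```python
# Hypothetical figures, for illustration only: a 1% annual chance of a
# successful attack, with a multi-million-dollar consequence if it happens.
p_attack = 0.01              # assumed probability of a successful attack per year
loss_per_event = 2_500_000   # assumed consequence: fines, notification, reputation ($)

expected_loss = p_attack * loss_per_event
print(f"Probability users see:  {p_attack:.0%}")         # "only 1%" -- easy to dismiss
print(f"Expected loss per year: ${expected_loss:,.0f}")  # $25,000 -- hard to ignore
```

Even at a 1% probability, the expected loss can justify countermeasures that a probability-only view would reject.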

Also important to many organizations today, for regulatory compliance with HIPAA and other privacy regulations, are the potential high fines and breach notification duties that can be imposed, beyond direct losses.

Usability and Security Aspects

Adding a conventional security system to better counter asymmetric threats and provide regulatory compliance will most likely require users to change.

For example, if you must send each recipient a password to allow reading the email, the result is hard to use and unnatural. If you must require recipients to register and select a password just to read your email, you force them onto yet another service they did not choose. And if the solution requires going through a particular webmail interface and/or installing plugins for a desktop mail client, users have to change their work environment.

Further, requiring users to change burdens them with new procedures and disrupts their work when desktop updates and plugins clash, reducing productivity. It can also end up blocking cost-saving, desirable options, such as Google Apps and phones, if those are not protected.

Thus, notwithstanding the security and privacy needs, usability is also a critical consideration. Making a system harder to use in order to be secure is not secure: it will either not be used or be used poorly. Moreover, organizations have various non-compliant desktop, cloud, and phone systems already in service that people know how to use. There are other significant limitations in practice, such as non-compliant systems used by partners and customers, and customer support for new applications.

Users Do Not Want Change

If there is any unanimity in what users want, it is that, first, they want to use their systems without change!

Second, users want to be able to switch to cloud or phone when they are not in the office, or when the office system is down. Third, users want to communicate with their customers and partners without asking them to change. Fourth, users view security as "that which reduces productivity" and so will try to bypass security rather than follow a security process; for example, users will likely close a warning notice without even reading it.

The IT challenge today is how to provide the functionality that users want with the security that the organization needs.

The ZSentry Solution

To maximize return on investment, organizations should invest first in areas that support multiple objectives within a diverse spectrum of asymmetric threats. The goal is to effectively enable security and privacy compliance while assuring usability and versatility anywhere, for all systems that people can use or already know how to use.

ZSentry is unique in providing these capabilities through its various Use Options >>, which can be used separately or concurrently, including ZSentry On-Site (desktop Mail clients), ZSentry Cloud (FISMA compliant), ZSentry App (web browsers), ZSentry API (custom services), and ZSentry SMS (secure text messages for phones).

And given that no one is likely to successfully deter, prevent, or preempt every attack, organizations must invest in capabilities to help eliminate or at least mitigate the effects of a potentially catastrophic attack.

ZSentry is also unique in allaying access and data security concerns locally, in the infrastructure, and in the cloud. Each customer's login and data are protected separately by the Sans Target ZSentry technology, with configurable, encrypted metadata (keys are also protected by the Sans Target technology) providing a protected, standards-compliant, unique user experience and feature set for each customer.

This is even more important in the context of "cloud computing" and SaaS (Software as a Service), where user data may be stored in the "cloud". With ZSentry, customer access audit trails and customer data storage can be securely maintained in the "cloud" as encrypted, de-identified numbers, whose access keys are provided and secured by the ZSentry technology.
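
As a rough illustration of the storage model described above (not ZSentry's actual implementation), the cloud can hold only a random record number and ciphertext, while the key that unlocks the record is kept elsewhere:

```python
# A minimal sketch, assuming the 'cryptography' package (pip install cryptography).
# The cloud store maps de-identified random numbers to ciphertext; the key never
# enters the cloud, so a breach of the store alone yields nothing of value.
import secrets
from cryptography.fernet import Fernet

cloud_store = {}  # stands in for cloud storage

def put_record(plaintext: bytes):
    key = Fernet.generate_key()        # secured outside the cloud
    record_id = secrets.token_hex(16)  # de-identified: linkable to no person
    cloud_store[record_id] = Fernet(key).encrypt(plaintext)
    return record_id, key              # both are needed to recover the data

def get_record(record_id: str, key: bytes) -> bytes:
    return Fernet(key).decrypt(cloud_store[record_id])

rid, k = put_record(b"audit entry: user 1a2b logged in")
assert get_record(rid, k) == b"audit entry: user 1a2b logged in"
```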

Read more at Use Options >>

Friday, November 26, 2010

Inclusive Specialization with HIPAA for Desktop and Cloud

With or without HIPAA compliance needs, your organization is likely facing two clear choices today: Desktop or Cloud.

The Desktop choice is interesting for business users, who commonly prefer to keep their data local for privacy and control. In addition, Desktop systems such as Outlook are much easier for corporate setup and for handling moderate to high mail volume, incoming or outgoing. And on the Desktop you can integrate data from different applications and sources in ways that you cannot with Cloud-based solutions, such as sending secure personalized messages that merge each recipient's name and records.
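
For instance, a Desktop-side merge of local data into individual messages can be sketched in a few lines (hypothetical data and field names; each personalized body would then be secured and sent separately):

```python
# A minimal sketch of a personalized secure-message merge done on the Desktop,
# where the recipient records never leave the local machine before encryption.
from string import Template

template = Template("Dear $name,\n\nYour record $record is ready for review.\n")
recipients = [
    {"email": "a@example.com", "name": "Alice", "record": "R-1001"},
    {"email": "b@example.com", "name": "Bob",   "record": "R-1002"},
]

for r in recipients:
    body = template.substitute(name=r["name"], record=r["record"])
    print(f"To: {r['email']}\n{body}")  # in practice: encrypt and send this body
```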

On the other hand, with the Cloud choice, well-known systems such as Google Apps, Gmail, and Yahoo, offer easy access from anywhere, much lower cost (even free), 24/7 maintenance, and other benefits. But Google Apps, Gmail, Yahoo, and other Cloud systems are not HIPAA-compliant.

The Cloud choice privacy problem is solved by ZSentry, which enables Google Apps, Gmail, Yahoo, and other Cloud systems to be HIPAA-compliant. This allows the Cloud to be a good choice for Desktop replacement also in terms of HIPAA and privacy regulation compliance.

However, because each choice has good points (otherwise, it would not be a choice), choosing also means losing.

This problem is also solved by ZSentry, which offers the On-Site setup, an inclusive specialization approach that works and is HIPAA-compliant for both Desktop and Cloud systems, as well as Web and Mobile systems.

With ZSentry there is no need to choose, and lose. Based on metrics that are important to your case, each choice can be specialized to areas where it performs best, such as in terms of cost, usability, and security. The same applies to email, webmail, IM, SMS, storage and other communication choices.

Rather than exclude valuable choices, the ZSentry Setup choices allow you to use each one where it performs best according to your metrics, and to enjoy the benefits of the Cloud platform with HIPAA-compliant messages as well.

HIPAA-compliant Desktop and Cloud use is further discussed in the ZSentry On-Site use option, and you can try it without cost to see how it works for you. Other use options are available, including a HIPAA-compliant Cloud setup for Google Apps that only uses a Web Browser.

Comments are welcome.

Thursday, October 21, 2010

Identity Theft vs Impersonation Fraud

If identity theft is a crime, mustn't identity be a property?

No. And, clearly, this should show that there is no such thing as "identity theft".

But why do people, and even laws passed by Congress, use Identity Theft instead of the long-established legal term Impersonation Fraud? What's happening here? Was this just a change of fashion?

It's because language is being used to shift the burden to the consumer. Using the soundbite "identity theft" instead of "impersonation fraud" redirects the loss to the victim and frees those actually responsible from any legally relevant questions that could be asked about their loose security of customers' accounts.

"Impersonation fraud" as a term focuses attention on the fact that the criminal is deceiving someone in order to gain advantage by claiming to have some valuable characteristics or authorizations in fact belonging not to the criminal but to some other person. The person deceived is the primary victim in contemplation when this terminology is used.

"Identity theft", by contrast, suggests that the victim is the person impersonated, because his or her "identity" has been "stolen".

This way of looking at things implies that the losses which arise out of the impersonation fall on the person impersonated, rather than on the person deceived by the impersonation.

"Identity theft" as a label is attractive to, for example, some banks who may wish to suggest that losses must be carried by their customers because they failed to take proper care of their "identity".

Indeed, there is no such thing as an "identity theft", and you should bear no responsibility if someone else uses your name without your knowledge, in the same way that you bear no responsibility if someone sends spam or mails a letter in your name.

The burden for preventing the real crime of impersonation fraud should fall on those who accept the false identity and also on those who do not protect private records of their customers. Certainly not the consumer.

If you feel that it is time to see real solutions to this problem, rather than let the discussion continue to be dominated by a special choice of language, what can you do?

Let's look at five key points that you should think of:

1. Think it through a bit more before considering "buying protection". In fact, as discussed here, we should refuse to buy such "protection" and instead request that those holding or using our identifiers follow the law and be held responsible for their misuse (as now enforced in the US with medical records) -- not the victims.

2. Note that the use of the term "identity theft" by an organization trying to sell "protection" against it to the very victims should be illegal. At the very least, it should alert us that the organization is selling a service and may then, in case of loss caused by that service, try to pass both the blame and the loss to the victims.

3. The fact that people have become used to talking in dumbed-down soundbites such as "identity theft", instead of using well-established terms like "impersonation fraud", does not mean that any legally relevant conclusions can be drawn from the misuse of technical terms like "theft" in the soundbite (as might be wrongly asked: if identity theft is a crime, mustn't identity be a property?).

4. Keep in mind that "identity theft" is not a theft, and the victim may not have been even remotely at fault, for example in terms of negligence.

5. Do not allow loose language to redirect the focus elsewhere but where the problem really lies.

Now, how can you protect yourself against "identity theft"? What if your information falls into the wrong hands?

We increasingly communicate online instead of by telephone. Email, SMS and IM are the psychological equivalent of a voice conversation. People often forget, however, that a text message or an email conversation is stored online in many different servers and can be replayed many years later, often without the authorization of any of the parties, and often with undesired consequences.

According to a 2010 survey commissioned by Microsoft, 90 percent of the general population and senior business leaders are concerned about the privacy and security of their personal data in the cloud.

With ZSentry, you can actually eliminate the disclosure risk of email, webmail, SMS and IM by properly setting your message to self-destruct. This includes a combination of technical and legal measures, as done by the secure email Zmail provided by ZSentry, and it works with all editions.

Read more at http://zsentry.com/encrypt-and-self-destruct.htm

Monday, September 27, 2010

ZSentry Premium launch

Anyone who was an early adopter of PKI (Public Key Infrastructure), PGP (Pretty Good Privacy) and other email security solutions will recall how difficult it was to explain how to send and receive secure email.

Secure email was one of those things that you couldn't really explain to people. It was something that senders and recipients had to see in action, something that they both had to learn and experience to really appreciate the way the technology would make their email secure.

Sending and receiving email is also one of those experiences that people just don't want to disrupt, not even to make it secure.

However, people more and more want to use email, webmail, SMS, and IM, and store documents online for easier access, while they also need to comply with HIPAA and other privacy regulations.

That's all solved with the commercial launch of ZSentry Premium in Oct/2010.

Since 2004, ZSentry has been in continuous use as a beta service and has served millions of secure email messages worldwide. ZSentry provides the needed security but does not change the way people use email or anything else, and works with desktop solutions such as Outlook as well as Gmail and Google Apps without plugins.

A fully-functional free version, called ZSentry Basic, will also continue to be available for personal use.

To see how it works, please explore the ZSentry How-To

While ZSentry can be provided on-site, it is also offered as a convenient "pay-as-you-go" cloud service, where it innovates by eliminating critical security and privacy risks associated with cloud computing. Each customer's data is protected separately by ZSentry's "Sans Target" technology, with configurable, encrypted metadata (keys also protected by the "Sans Target" technology) providing a protected, standards-compliant, unique user experience and feature set for each customer.

With its innovative design, ZSentry helps allay data storage security concerns, both locally and in the infrastructure. This is even more important in the context of "cloud computing" and SaaS (Software-as-a-Service), where user data may be stored in the "cloud". With ZSentry, customer access audit trails and customer data storage can be securely maintained in the "cloud" as encrypted, de-identified numbers, whose access keys are provided and secured by the ZSentry technology.

Free Trial and other options are available.

Saturday, August 21, 2010

Why you need to self-destruct your email

Email, SMS and IM are the psychological equivalent of a voice conversation.

People often forget, however, that a text message or an email conversation is stored online in many different servers and can be replayed many years later, even without the authorization of any of the parties, and likely with undesired consequences.

The Internet "cloud", with webmail and more online processing services, just increases the privacy risks. According to a 2010 survey commissioned by Microsoft, 90 percent of the general population and senior business leaders are concerned about the privacy and security of their personal data in the cloud.

Your data privacy rights also vary in online versus local use (e.g., on an individual's USB token, or hard drive in the home or office), as well as "in transit" versus stored. Anyone familiar with the privacy limitations of current laws written in the pre-Internet era, such as the 1986 U.S. Electronic Communications Privacy Act, knows that the individual's legal rights to privacy and protection from unreasonable search are starkly reduced for information stored online. Data that individuals store online, in cloud computing services such as webmail, receives a much lower level of privacy protection than data "in transit" or stored locally.

Therefore, even though email may be the legal equivalent of a written conversation, email is mostly not legally protected online. That's one important reason to limit email liability. Other reasons include preventing harassment, coercion, blackmail, or just plain embarrassment.

But, how? An email message may live in many clients, servers, and repositories, some of which may be covert and unreachable by you and your ISP.

Surprising as it may seem, you can actually eliminate the disclosure risk of email by properly setting your email to self-destruct. This includes a combination of technical and legal measures, as done by the secure email Zmail provided by ZSentry.

This is how it works.

ZSentry encrypts and self-destructs (expires, with no action on your part) your webmail, email, SMS or IM message, so that the intended recipients are allowed to read it only within its retention time, and neither you nor the recipients are responsible or liable for destroying it after it expires.

The command to expire is clearly noted and is also given US and international legal support by notifying the reader that the message is copyrighted and that the sender allows reading only during its retention time. Therefore, if someone wants, for example, to take a picture of the message, using that picture after the expiration could be considered a breach of copyright and illegal circumvention of protection. While the first motivation is to reduce exposure, the second is to provide legal recourse in case of exposure.

After expiration there is no risk, as the keys are deleted and any physical copy is therefore effectively erased and not recoverable. Because self-destruction happens at a point in time known beforehand by all parties, claims of intentional destruction should be void.
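
In general terms (a sketch of the technique, not ZSentry's exact protocol), expiry by key deletion works because every copy of the ciphertext, wherever it was stored, becomes unreadable the moment the key is destroyed:

```python
# A minimal sketch of self-destruction by key deletion ("crypto-shredding"),
# assuming the 'cryptography' package and a key server that enforces retention.
import time
from cryptography.fernet import Fernet, InvalidToken

key_server = {}  # message-id -> (key, expiry time); the only place the key lives

def send(message_id: str, plaintext: bytes, retention_seconds: int) -> bytes:
    key = Fernet.generate_key()
    key_server[message_id] = (key, time.time() + retention_seconds)
    return Fernet(key).encrypt(plaintext)  # ciphertext may be copied anywhere

def read(message_id: str, ciphertext: bytes) -> bytes:
    key, expiry = key_server[message_id]
    if time.time() > expiry:
        del key_server[message_id]  # key is gone; all copies are now noise
        raise InvalidToken("message expired")
    return Fernet(key).decrypt(ciphertext)
```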

What about before expiration? With ZSentry, you can request a Return Receipt, so that any attempt to breach access security of your Zmail is immediately logged and traced, and you are notified with full access information, including Who, Where, When, and How.

More information at How Zmail Works

Wednesday, July 21, 2010

Spam, Spoofing, Phishing, Pharming

Are you concerned about email fraud, how to keep your message private, and avoiding identity theft? Do you think that people receiving an email attachment that was clearly sent from you (as stated in the From line) may fear opening it?

I frequently receive email from myself, that is, with my name in the From line, that I never sent. It's well known that anyone can send a regular email using your name and email address. This is used in spam attacks.

But, why should you care? "After all, it's just an email and I can simply delete it."

However, hidden persuaders in a message (or just in its Subject) may trigger an inadvertent reply, such as hearing that you are now listed in "The Internet Harvard Who's Who" (a fake honor), that your site can get a free evaluation (a scam offer), or an "alert" that someone is registering the left-most label of your .COM domain name in the Taiwan registry under .TW (which is legal and should not affect you). Even more effective for the attacker is a message that triggers an irrational fear factor, where you fear things you shouldn't and then put yourself in greater danger; for example, a message from your bank, looking like the "real thing" and asking you to log in to change your password.

So, as in many other cases, an initial, simple attack that is not checked can lead to secondary attacks with far more damaging consequences than originally thought possible.

What's gaining in importance in this space, although much less well-known than spam, is that attackers often use spam as a vector for delivering secondary attacks.

The most relevant secondary attacks today include not only spoofing, phishing, and pharming (see GLOSSARY), but also installing a "trojan horse", a computer virus, or crippling your computer with a ransom request to fix it.

Of these attacks, the most insidious and prevalent today is phishing, which can lure recipients into disclosing their private data, which criminals worldwide (not necessarily just the attacker) can use to mount a real-world threat to your identity, bank account, and business, leading to potentially large losses. And criminals can do it all from an Internet cafe, somewhere in a far, far away place, or even next to you, and you (or the FBI) will likely never be able to hold anyone accountable for it.

How did we get to this point? Some think that the reason is that your email address is global and even searchable; that's why your mailbox is overflowing with spam. In short, one is led to think that there is no way to prevent it. You must fight it by purchasing solution X (a spam/spoofing/phishing filter), which will require frequent paid updates to be effective. But such solutions can only protect you against yesterday's attacks, and even that may fail as attacks have many variants.
Do I need to change my email address, Mail app, or provider, to be secure? No, the problems are not due to your email address or Mail app, but to how you send and receive email. You should be able to use any email address, Mail app, and provider you want, including webmail.

How about if I add a firewall and always use SSL? Still, solution X (a spam/spoofing/phishing filter) will be killing good emails, and you really shouldn't open attachments even if you know the sender.
In addition, anyone can read any email that you send and receive, so you cannot use regular email for HIPAA and regulatory privacy compliance. Regular email communication is simply not secure: sending an email is similar to sending a postcard. Any regular email sent by you or to you may be copied, held, and changed by the various computers it passes through. Persons not participating in your email communications may intercept them by improperly accessing your computer or other computers, even computers unconnected to any party in the communication, through which the email passed or was made to pass. That's also at the root of the spam problem: in the same way that anyone can send a postcard in your name, anyone can send a regular email using your name and email address.

NMA has developed ZSentry to protect your email against spam, spoofing and phishing emails, while adding a number of security and usability features that are missing in email, without changing your email address, Mail app, or provider. ZSentry also encrypts your email per-message, providing HIPAA and HITECH Safe Harbor compliance, as well as compliance with other privacy rules, with no Business Associate Agreement to be signed.

How does this work against spam, spoofing, phishing, and pharming? Rather than fight them, ZSentry prevents them by (among other features):
  1. authenticating the source (including the sender's location) of a message; and
  2. authenticating the name and email address of senders and recipients.
For example, if a Zmail (ZSentry Mail) comes to you from the email address <friend@isp.com>, and you can decrypt it using the ZSentry service, then you have strong cryptographic evidence that it did come from that address as authenticated during signup, with the original subject, date, body and attachment intact.
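
The general mechanism can be sketched as a message authentication code keyed by a secret established at signup (illustrative only; ZSentry's actual protocol is not reproduced here):

```python
# A minimal sketch: a keyed tag binds sender, recipient, subject and body,
# so a spoofed From line or altered content fails verification.
import hashlib
import hmac

signup_key = b"secret established with the service at signup"  # hypothetical

def tag(sender: str, recipient: str, subject: str, body: str) -> str:
    msg = "\x00".join([sender, recipient, subject, body]).encode()
    return hmac.new(signup_key, msg, hashlib.sha256).hexdigest()

def verify(sender, recipient, subject, body, received_tag) -> bool:
    # constant-time comparison avoids leaking tag information via timing
    return hmac.compare_digest(tag(sender, recipient, subject, body), received_tag)

t = tag("friend@isp.com", "you@isp.com", "Hello", "See attachment.")
assert verify("friend@isp.com", "you@isp.com", "Hello", "See attachment.", t)
assert not verify("spoofer@evil.com", "you@isp.com", "Hello", "See attachment.", t)
```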

To read a Zmail, ZSentry also reminds users that they can copy-and-paste the ZSentry link (when the sender selects the encrypted-link option) rather than just click on it. This simple procedure prevents users from landing at a destination that was encoded in the email to be different from what they can read on the screen before they click.
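
The reason copy-and-paste helps is that the clickable destination of a link can differ from the URL the user reads on screen. A minimal check for that mismatch (an illustration of the attack, not a ZSentry feature):

```python
# Flags anchor tags whose visible text is a URL different from the real href.
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href, self.text, self.mismatches = None, "", []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href, self.text = dict(attrs).get("href", ""), ""
    def handle_data(self, data):
        if self.href is not None:
            self.text += data
    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            if shown.startswith("http") and shown != self.href:
                self.mismatches.append((shown, self.href))
            self.href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">https://bank.example/login</a>')
print(checker.mismatches)  # the user reads the bank's URL; the click goes elsewhere
```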

Users can also beneficially apply a spam/spoofing/phishing filter before reading the Zmail, to reduce incoming email volume. However, the filter no longer has a critical, final function. This also means that the filter does not have to be set so tight that it greatly increases the number of false positives (good email rejected), nor need one fear that the filter is letting through too many false negatives (bad email accepted).

A further benefit of the ZSentry approach is that it does not require the customer to update anything in order to remain protected.

Other approaches (solution X, a spam/spoofing/phishing filter), such as those based on email headers, reputation, non-verifiable metrics (e.g., community detection), blacklists, pattern detection, heuristics, zombie detection, and message scanning, can break privacy and may easily fail. One reason they fail is that spam and phishing emails are created in an arms-race scenario, where defenders lag behind with less knowledge and fewer resources, often fighting (perhaps even well) the last exploit but not the next. Exploits are also hard to filter because they have many variants, spoof various parts of email headers and body, and come in the name of people or organizations you trust: your friends and business contacts. You probably receive several emails from yourself (surely a valid email address, and one that belongs to a real, reputable person) that you never sent.

Training users to detect spam, spoofing and phishing adds costs and also frequently fails, as users cannot be trusted to follow procedures, are easily distracted, and may not understand the instructions in the first place. ZSentry does not depend on users learning ever-changing patterns, as this is one of the few things that is actually proven not to work, or to work only poorly.

How about spam? ZSentry also has a zero-tolerance spam policy. Several mechanisms are in place to prevent any ZSentry user from abusing the system and sending Zmail spam. For example, ZSentry BASIC users can send a limited number of secure Zmail messages a day, while ZSentry PREMIUM users, who must provide valid payment information and a physical address in order to use the service, are allowed to send larger amounts.
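
A per-user daily quota, one of several possible anti-abuse mechanisms, can be sketched as follows (the limits here are illustrative, not ZSentry's actual numbers):

```python
# A minimal sketch of a daily send quota keyed by user, plan, and date.
from collections import defaultdict
from datetime import date

DAILY_LIMIT = {"BASIC": 20, "PREMIUM": 2000}  # hypothetical quotas
sent_today = defaultdict(int)

def may_send(user: str, plan: str) -> bool:
    key = (user, plan, date.today())
    if sent_today[key] >= DAILY_LIMIT[plan]:
        return False  # over quota: reject, preventing bulk Zmail spam
    sent_today[key] += 1
    return True
```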

GLOSSARY

What is a "spoof web site"?

A spoof website is one that mimics another website to lure you into disclosing confidential information. This can be done even with SSL (Secure Sockets Layer) using 128-bit encryption. To make spoof sites seem legitimate, spoof web sites use the names, logos, graphics and even code of the real company's site. They can even fake the https web address that appears in the address field at the top of your browser window and the "SSL padlock" that appears in the lower right corner of your browser.

What is a "spoof email"?

A spoof email has the "From:" header, and possibly other headers as well, set to the email address of a different sender, to lure the recipient into reading and acting on the email -- for example, using the email address of a friend, a legitimate company, a bank or a government agency. This is very easy to do with regular email. To make spoof emails seem legitimate, the email body uses names, logos, graphics and even legitimate web addresses and email addresses in some fields. The action links in spoof emails almost always take you to a spoof web site. Spoof emails can also be sent as an attack against you or your organization, with fraudulent offers, bogus announcements or malicious content.

What is a "phishing email"?

Phishing (or hoax) emails appear to be from a well-known company but can put you at risk. Although they can be difficult to spot, they generally ask you to click a link back to a spoof web site and provide, update or confirm sensitive personal information. To bait you, they may allude to an urgent or threatening condition concerning your account. Even if you don't provide what they ask for, simply clicking the link could subject you to background installations of key logging software or viruses. Every business on the Internet is a potential victim of phishing email attacks, eroding the trust of their customers in the company's communications.

What is "pharming"?

A pharming attack redirects as many users as possible from the legitimate website they intend to visit and leads them to a malicious one, without the users' knowledge or consent. A malicious site can look exactly the same as the genuine site, but when users enter their login name and password, the information is captured. Emailed viruses that rewrite local host files on individual PCs, and DNS poisoning, have been used to conduct pharming attacks. Even if the user types the correct web address, the user can be directed to the false, malicious site.

What is "spam"?

All Internet users should by now know about spam. The word spam as applied to email means Unsolicited Bulk Email. Unsolicited means that the recipient has not granted verifiable permission for the message to be sent. Bulk means that the message is sent as part of a larger collection of messages, all having substantially identical content. Usually, a message is spam if it is both Unsolicited and Bulk. Unsolicited email is usually normal email (examples include first contact inquiries, job inquiries, and sales inquiries). Bulk email is usually normal email (examples include subscriber newsletters, discussion lists, information lists, and update announcements).

Comments are welcome.

Saturday, June 26, 2010

White House Seeks Comment on Trusted ID Plan

It's important to protect privacy! This is a comment on a federal draft plan calling for the U.S. government to work with private companies to create what the White House Blog calls The Identity Ecosystem — an online environment "where individuals, organizations, services, and devices can trust each other because authoritative sources establish and authenticate their digital identities," also called "The Trusted ID Plan".

Introduction

Some have said that The Identity Ecosystem proposal is a call for a mandated Internet "driver's license". Others believe that The Identity Ecosystem would allow Internet users to complete transactions with confidence.

Our opinion is that while it could be the former, it cannot be the latter as it stands.

We are, thus, motivated to present this proposal, which supports the objectives of The Identity Ecosystem without adding any form of central control (or even fear of it), while allaying privacy concerns, especially online. Further, it can be easily commercialized using an ecosystem with open participation, and internationalized, without any changes in current laws, which many privacy advocates argue do not sufficiently protect privacy online in the US and elsewhere.

How Did We Get Into This?

Before we present our contribution, we would like to invite us all to step back and ask "How did we get into this?"

Accounts may vary somewhat on how the Internet came to be what it is today. However, our exposition will not start controversially. We will stay with undisputed facts regarding authority and authoritative sources on the Internet, which are quite central to The Identity Ecosystem proposal.

With the transfer of responsibility and authority for Internet policy from DARPA to NSF and to the Federal Networking Council after about 1988, NSF's role increased substantially with the creation of the NSFNET.

At that time, central control of the Internet as initially provided by DARPA was effective, as evidenced by the fact that spam was famously not allowed and did not exist.

But as the transfer process from DARPA continued, the central-control paradigm started to change and weaken. Other points can also be mentioned, such as the role played by BBN in showing that it could work, and the gradual decrease of NSF's role to the point where the NSF was prohibited from spending money on it. And, on October 24, 1995, the Federal Networking Council famously passed a resolution defining the term Internet.

Of course, control did not evaporate immediately after 1988 or even 1995. As control was relaxed, fear of control remained for a long while, as shown in Dr. Jon Postel's famous failed attempt to relax control by redirecting Root to the IANA (Internet Assigned Numbers Authority) on Jan. 28, 1998.

This is the big picture that matters now, in 2010. From 1988 onwards, the US Government has deliberately relaxed its control over policy as the Internet has become increasingly commercialized and internationalized. Hence, gradually, confidence has evaporated because it was based on that, now missing, central control.

This is how we got to be where we are, by the very nature of growing into and being a network of networks, where no one can control both ends of a general communication channel (neither sending nor receiving).

The Internet, Confidence, and Trust

The Internet was born within a strict central control system (DARPA), but then it grew and now (as we saw above) abhors central control. This development was enabled by technology, allowing networks of networks to be easily built and expand. However, it is not unlike what we find elsewhere. It seems to be a natural process, and not one that we can reverse (or should try to reverse).

If central control cannot be restored, how can confidence be restored? How can rogue operators be detected and prevented from using protected resources?

The word confidence means "with faith". It is a good concept to use when we can have an authoritative source (e.g., DARPA), but it fails miserably when such a source does not, or cannot, exist.

In such cases, we can use a concept called trust that has been intuitively known for thousands of years. Trust has been formally defined (Gerck, 1998) in terms of Information Theory, as "trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel".

This is an implicit definition, allowing us to derive many equivalent positive and negative statements and use them as "domain definitions", so that we can use the concept of trust coherently through different domains including with people and machines. See http://bit.ly/TRUST and http://bit.ly/IT-TRUST

For example, the definition implies that a decision to trust someone, the source of a communication, the name on a certificate, or a record must be based on factors outside the assertion of trustworthiness that the entity makes for itself.

As another example, in terms of a conversation, we can use the equivalent domain definition (see references) "trust is that which provides meaning to information". Indeed, according to the trust we have in the information source, the same words may have quite different meanings -- as can be easily demonstrated by watching the different meanings presented in newscasts by FoxNews and MSNBC for the very same information.

The important point here is that trust can be based on other factors, in addition to control or even fear of control.

Solution Considerations

A solution to the perceived identity problem in the Internet should, thus, investigate what other factors must now be introduced in order for trust to be induced without trying --and failing-- to re-introduce control, or fear of control.

In other words, there's no "Doomsday" scenario of control vs anarchy either way. Rather, there are many reasons to abandon the mindset of recourse to control (or fear of) as the only solution.

For example, we need to take into account that the Internet is essentially open-ended. Anyone can join some network connected to networks comprising the Internet. The Internet is a network of networks, with no common reporting of any kind that could allow objective authorizations to be authoritatively defined and enforced, for all the different networks in the Internet.

We also share the same network as the attackers. Even if all identities were somehow centrally controlled, the false use of identities would not be preventable (as we know well from the brick-and-mortar world, for example). This further indicates that we shall not hope to be able to confine what we cannot control.

Users, furthermore, are bound in their defenses by their own usability and resource limitations, and service providers must only deploy defenses that users may tolerate. This creates an "arms race" scenario with a large information and power asymmetry against users. Attackers know more and are much more powerful than users and even service providers, and can rather easily mount massive campaigns with massive resources (e.g., a phishing attack using thousands of bots), while defenders are frequently one step behind with the defenses they can deploy.

It would, therefore, be better for users that we avoid an "arms race" scenario, where defenders (users) will lag behind. Instead, we motivate the need to find ways and "methods of peace" in achieving Internet security, where security must be primarily viewed as a form of understanding, not of confinement or subjugation. For example, it makes little sense to confine use to X, if one has no justification to trust that which one is using in order to confine use to X or, if one does not know what one is confining in or out.

Thus, unless we are conditioned to think that the only way to effect security shall be by subjugation and fighting the "thing" out to the bitter end, which will mean the defeat of users and their rights, we should not pursue control (or fear of) as the only solution. It does not fit the problem and does not help users.

And the main point visible to users should be how to make non-conformance public, rather than certifying conformance.

Not only would there then be much less liability for the service, but the user is kept in the verification loop --as the user should be-- rather than blindly relying on some sort of oracle. Also, in security terms, not only are fewer attacks possible, but attacks are less direct in creating an error condition. This does not mean that the user would have to make the determination manually in every case and be burdened by it. The non-conformance detection can be automated, classified by risk, and cached, and previous results can be used until they expire according to their respective risk class.

Along these lines, identity should not only refer to an attribute (name) or some aggregation of attributes (name, address, SSN) but, even more strongly and effectively, refer to relationships ("Is there an identity relationship between the true owner of this Mastercard and the person presenting it to me?"), connections, absence of connections, and other connection properties. Nor is it about one authoritative channel to vouch for the identity, but about using multiple factors to prevent a single point of failure.

Privacy Rights

Let us think about the US Government getting involved in assuring identity and protecting privacy at the same time. I'd like to mention Justice Scalia's private (but not secret) comment to a previous team member, who had an opportunity to speak with him in 1999 and asked whether privacy warranted strong legal protection in the US. This is at least tangentially relevant to any US Government identity initiative such as the Trusted ID Plan.

"Not based on the constitution", Scalia said. "Nowhere does the constitution address the right to privacy. If people want the Right for Privacy, they will have to change the constitution."

More importantly, Scalia added that "Free Speech is defended by the constitution, and that makes the Right for Privacy very difficult to defend."

The Scalia quotes are as verbatim as memory allows, are not jurisprudence, and other Justices may disagree, but the idea is easy to verify: the US Constitution does not support the Supreme Court in ruling in favor of privacy rights purely on constitutional grounds, and the Court could deny privacy rights in the event of a clash with the well-established constitutional right to freedom of expression (by a person or company).

In other words, it is possible that a company's right to freedom of expression (as in the recent election spending Supreme Court decision) would trump an individual's right to privacy.

It is further well-known that users' online privacy rights are reduced in the US by the 1986 Electronic Communications Privacy Act (ECPA), which is complex and often unclear. What is clear is that the ECPA does not mandate a search warrant to access online private communications and the locations of mobile devices. Data or files stored in the Internet "cloud," such as webmail or identity data, are subject to an often lower privacy standard than the standard that applies when the same data is stored on an individual's hard drive at home or in the office.

Considering also the lack of privacy protection in many US States' constitutions (California and Washington being examples where privacy is strongly protected by the state constitution) and in the US Constitution; the well-known information-asymmetry and power-asymmetry problems behind the increasing market failure to protect users' privacy in the realm of private companies; and the lack of international harmonization in this regard, the conclusion is clear:

Currently, we should be careful in relying on government, market (private companies), or legal recourse to protect users' identities.

Thus, a missing and yet important part of The Identity Ecosystem proposal in this matter should be on helping to improve laws regarding identity and privacy, which will facilitate the adoption of useful technologies.

How about The Identity Ecosystem?

Based on the discussion above, both technically and in terms of privacy protection, it will not work to have the Internet fall back on the idea of confidence supported by authoritative sources that would purportedly "establish and authenticate users' digital identities". Central control, and fear of it, is neither possible nor privacy-preserving.

Thus, if we think of it in terms of a "Trusted ID Plan" supported by authoritative sources, it will be restricted in use to those authorities each user can trust (and, likely, pay) within their own desired extent, which varies for each user, each case, and also over time and location. This will likely lead to a number of disjoint local systems that will not be able to talk to each other or internationally.

However, The Identity Ecosystem may use a more comprehensive approach.

Many real solutions are possible, but they should all be founded on the idea that trust can be based on other factors, in addition to control or even fear of control. For example, a user should be able to effectively refer to relationships or just even prior communication established online with other entities, in defining an assertion of identity. A solution should, thus, investigate what other factors must be introduced in order for trust to be induced.

In so doing, we point out that self-assertions cannot induce trust (http://bit.ly/IT-TRUST). Saying "trust me" should not make you trust me. The verifier needs to have access to multiple and varied sources of trust channels.

Further, to be usable, we want to reduce users' frustration at having to use a different tool when they need security and regulation compliance. This would also help reduce the focus on security, so that people can at long last focus on what they want to do, not how they have to do it.

A Definite Proposal

In addition to the considerations and suggested guidelines above, I also submit a definite proposal, that has been field-tested in millions of cases in the US and worldwide since 2004, and can be demonstrated today, to anyone, with zero cost.

The author, based on the ideas of trust developed earlier (Gerck, 1998, op. cit), has developed a protocol that fits the above considerations, provides multiple and varied sources of trust channels, does not rely on "trust me", and yet provides a "no target" approach to eliminate concerns of attacks stealing identity credentials in online servers. It is the "identify & authorize" engine, as used by ZSentry.

ZSentry is available without cost for personal use (Free Basic) and can be used with mail, webmail, file transfer & storage, IM, SMS, HTTP, fax and other communication protocols. ZSentry supports the ZS authentication mode, as well as PKI/X.509 and PGP. ZSentry also extends these standards in significant ways. An important issue solved, of course, is the problem of initial contact. ZSentry allows secure first contact and reply without previous interaction (e.g., exchanging passwords, requiring registration) or work (e.g., searching a directory, solving puzzles), and provides a number of life-cycle control functions, including release, expiration, and delivery verification.

ZSentry also supports SAML and SSO, so that it can be part of a federated-identity ecosystem. And you do not have to worry about your identity being stolen online at the ZSentry site, or at other sites you visit: ZSentry uses its "no target" technology to protect your login credentials and keys at ZSentry, and the SAML identity authorization does not carry them either.

False login (e.g., via a user's credentials stolen with a key-logger) and duplicate use of the same account may be a threat in some cases, especially with SSO. ZSentryID can be used to introduce a fresh second-channel challenge that changes for every authentication, for example by cell phone SMS. ZSentry also uses Adaptive Security and other techniques to help allay such concerns.
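
A fresh second-channel challenge can be sketched as follows (delivery is stubbed out; this illustrates the general technique, not ZSentryID's internals):

```python
# A minimal sketch: a single-use random code per login attempt, delivered
# out-of-band (e.g., by SMS), so a key-logged password alone is not enough.
import secrets

pending = {}  # user -> one-time code awaiting confirmation

def start_login(user: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"  # fresh 6-digit code per attempt
    pending[user] = code
    send_sms(user, code)                      # out-of-band channel (stub below)

def confirm_login(user: str, code: str) -> bool:
    expected = pending.pop(user, None)        # single use: cannot be replayed
    return expected is not None and secrets.compare_digest(expected, code)

def send_sms(user: str, code: str) -> None:
    print(f"SMS to {user}: your login code is {code}")  # stand-in for an SMS gateway
```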

The trusted introducer function provided by ZSentry does not need to be carried over forever. This is not a single provider, lock-in proposal.

Much like a booster rocket, once the transaction starts, other sources of trust are introduced (e.g., who do you know that I trust and can verify you by? What is your signed library key?) to the point that the ZSentry introducer function can be jettisoned without prejudice.

With our proposal, there is no "trusted ID" that will suddenly lose all its evidence value if not renewed.

Technical information on ZSentry identity verification is available at http://zsentry.com/identity.htm

I submit that ZSentry supports the objectives of The Identity Ecosystem without adding privacy concerns, especially online where it's not a matter of if but when and how often information on servers (even hosted at the Pentagon or FBI) will be disclosed.

Best regards,
Ed Gerck

It's important to protect privacy! You can vote on our proposal if you wish at: http://www.nstic.ideascale.com/a/dtd/The-ZSentry-Proposal/45785-9351

Friday, May 14, 2010

Cloud security

Some people say that cloud security comes down to this simple question:

Do you have 24/7 unrestricted access to the PHYSICAL machine where your data is stored?

If no, then your DATA IS NOT SECURE.
If so, YOUR DATA MIGHT BE SECURE.


This line of questioning goes to the heart of most security problems, and not just in cloud security terms.

To give an example: rather than requiring one key to open a door (metaphorically, to access a data file), more secure systems would require 2, 3 or N keys, such that a subset of M keys (for example, 3 out of 5, where N=5 and M=3) would be required to open the door. So, even if you lose the key in your pocket, or someone takes it from you at gunpoint, you still have security within M of N. This is usually called a "security quorum" or "threshold system".
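
The classic construction behind such a quorum is Shamir secret sharing: the secret becomes the constant term of a random polynomial of degree M-1, each key holder gets one point on the polynomial, and any M points reconstruct it. A minimal sketch (real deployments should use a vetted library):

```python
# Shamir M-of-N secret sharing over a prime field, for illustration only.
import secrets

P = 2**127 - 1  # a prime large enough for a small secret

def split(secret: int, n: int, m: int):
    # random polynomial of degree m-1 with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=42, n=5, m=3)  # issue 5 keys
assert combine(shares[:3]) == 42     # any 3 of the 5 open the "door"
assert combine(shares[1:4]) == 42    # a different 3 work equally well
```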

Useful as a threshold system may be for cloud security, is there still room for improvement?

A major improvement would be to eliminate the root of trust (i.e., that which you rely upon to assure that the threshold system and keys work as intended) [1] so that there is no target to attack. The principle is that no one can attack what does not exist. Not only would a security quorum not be needed (which can improve usability and prevent a denial-of-service attack), but complexity would also be reduced.

This solution, which is implemented in NMA ZSentry [2], effectively shifts the information security solution space from the yet-unsolved security problem of protecting servers and clients against penetration attacks to a connection reliability problem that is easily solved today.

Thus, in terms of cloud security, you can set up layers upon layers of security, but this is all for naught if the root of trust lies within the cloud itself. You solve this problem by eliminating the root of trust inside the cloud, as done by ZSentry, which is available free for tests and personal use [2].

This is important not only for health care records for HIPAA compliance, and privacy regulatory compliance in general, but also for individuals. Email, SMS and IM are the psychological equivalent of a voice conversation. People often forget, however, that a text message or an email conversation in cloud storage (such as in gmail, yahoo, hotmail) can be replayed even many years later, often without the authorization of any of the parties, and often with undesired consequences.

In perspective, while it may not be reasonable (or cost-effective) to assure that you have 24/7 unrestricted access to the PHYSICAL machine where your data is stored, you can use methods that deny the existence of a root of trust within the cloud itself. The cloud should only be used to store de-identified, encrypted data, without the keys that would allow the data to be unraveled.


[1] For the formal definition of trust, that applies to both computers and humans, see http://bit.ly/TRUST

[2] http://zsentry.com

Monday, April 26, 2010

Bank of America's SafePass Security: strike out

(Posted online by Joel M Snyder. The problem and description are particularly relevant in our security and usability studies of access control systems.)

Bank of America lets their patrons sign up for "SafePass," which is a credit-card-sized one-time password device. You MUST sign up for SafePass for certain transactions (like large transfers) but it is optional for most customers.

I signed up. Much to my woe.

The sign-up fee for the card is $20.

I got my card, and it's physically defective: no number shows up when you push the button.

To get this problem remedied, Bank of America has me in an infinite loop. The phone people cannot send me a new card (why? I don't know). So I am supposed to do this through the web site.

However, once you have signed up for SafePass, you must use your SafePass to make any changes to SafePass (including getting a replacement card, and let's not even start on whether or not it's going to cost me another $20 to get a replacement for the first defective card).

So I was transferred from customer service agent to customer service agent, and each one assured me that they need to get this card activated. Some thought that if they activate it on their end, through the miracle of the ether, this will suddenly cause numbers to show up on my display here. (Attempting to explain that this could not work turned out to be a losing battle). I went through supervisors. I went through supervisors to supervisors.

In the end, the "solution" that they came up with--after 55 minutes on the phone, mind you--is that I should order a new card (not a replacement, which I cannot do, because the current card cannot be activated, but a new card, as if I need two of these things). They agreed to make a $20 credit on my account, and this will then add up to the solution required.

And all this because I was trying to do "the right thing..."

jms

Saturday, March 27, 2010

Red Flags In Email Security

REQUEST FOR COMMENTS: This article is based on a work draft submission by Ed Gerck, Ph.D. and Vernon Neppe, MD, Ph.D. -- your comments are welcome.

Solutions for email security are often plagued by hidden costs, hidden liability, and lack of both security and usability. Even though the choice of a security solution involves many aspects, what is clear is that a selection process should not overlook some obvious red flags.

Red Flag #1: Sign a HIPAA Business Associate Agreement. This means that the security solution is exposing you to unnecessary technical, business, and legal risks.

Signing a HIPAA Business Associate Agreement is mandatory if the solution does NOT take advantage of the HIPAA "Safe Harbor", which requires encrypted and de-identified data. In addition to legal model and harmonization problems for you in signing another contract, absence of the "Safe Harbor" means that if a breach of unsecured protected health information (PHI) is discovered, fines may be levied and notices to each person affected are required. In addition, contracts with a "Covered Entity" may be at risk and the reputation of all business involved will almost certainly be harmed.

Red Flag #2: Centralized directory for email encryption. Someone other than yourself has your business partners' and customers' contact information, with names, email addresses and other data.

In addition to being a single point of failure, this means that your data may become available or be sold to solution providers and other businesses. This also means that in the event data is lost or stolen, it is linkable to a particular person, at risk of violating HIPAA, HITECH, and other regulations.

Red Flag #3: It Just Works. Beware of hidden liabilities. For example, make sure that your keys are not escrowed in the servers providing the solution, as with IBE (Voltage). Nothing is safe in servers, not only from attackers but also from service providers and employees.

Red Flag #4: Key Management "in the cloud". Beware that nothing is safe "in the cloud".

Customer data storage, including keys, can only be securely maintained in the "cloud" as encrypted, de-identified numbers, where the access keys are provided and secured by yourself. See also the comments to Red Flag #16.

Red Flag #5: Service automatically detects and encrypts messages that contain personally identifiable information. This means that your service provider actively stores and scans all your communications.

Also sometimes marketed as a "hosted solution that automatically encrypts email based on your policy definitions". This not only falls within Red Flag #1 and excludes a "Safe Harbor", but potentially exposes your business to covert surveillance, as well as to legal loopholes allowing service providers to use and sell your information because it is not "data in motion".

Red Flag #6: Ensure Business Associates understand HIPAA implications. This means that your business risk and reputation depend on others, whom you cannot control. Instead, your security solution should limit the risk posed by others.

Red Flag #7: Keys provided by "Web-of-Trust". This means that there is no entity responsible for revocation of keys, or for assuring that the keys are up-to-date, or valid for the intended purpose. This may work with a group of friends, but not when your risk and reputation depend on it.

Red Flag #8: Require Users to Buy a X.509/PKI or S/MIME Certificate. For a variety of reasons, and not just cost, this has not worked since it was first proposed more than twenty years ago.

Red Flag #9: Protected with passwords. Passwords are notoriously insecure and their management is a nightmare even for a few dozen users.

Passwords are not considered secure by HIPAA, HITECH, the Federal Financial Institutions Examination Council (FFIEC), and other regulatory regimes. Compliant authentication solutions should use two-factor authentication, present familiar user interfaces (to prevent user errors), prevent dictionary attacks, and not store authentication credentials (such as passwords and usernames) online or, more desirably, anywhere.
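
One way to avoid storing credentials at all (a sketch of the general "no target" idea, with illustrative parameters, not any vendor's actual key-derivation scheme) is to derive the data key from the credentials themselves, so there is no password database to steal:

```python
# A minimal sketch, assuming the 'cryptography' package: the key is derived from
# username+password at login; the server keeps only ciphertext, never credentials.
import base64
import hashlib
from cryptography.fernet import Fernet

def derive_key(username: str, password: str) -> bytes:
    # slow KDF with a per-user salt resists dictionary attacks
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(),
                              ("salt:" + username).encode(), 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a 32-byte url-safe key

# Enrollment: encrypt the user's data under the derived key; store ciphertext only.
token = Fernet(derive_key("alice", "correct horse")).encrypt(b"mailbox keys")

# Login: the right credentials decrypt; wrong ones raise an error.
assert Fernet(derive_key("alice", "correct horse")).decrypt(token) == b"mailbox keys"
```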

Red Flag #10: Use the fax. It is a privacy and HIPAA violation to allow faxes with personal information to be available in an office area that can be accessed by anyone other than the intended recipient, which is also notoriously difficult to ensure.

Red Flag #11: Our employees are trusted. Your business data security should not depend on trusting the service provider's employees ("we promise we won't look"), whom you cannot select, verify, or control. It is known that more than 70% of all security breaches are due to internal causes. Not even the FBI could prevent a national-security traitor from working for many years within its own senior ranks.

Red Flag #12: We train your employees. This means that your employees will need to be trusted to follow security procedures, and any failure to do so will fall on your business; such failures are at the root of more than 70% of all breaches. See Red Flag #11.

Red Flag #13: Just install a plug-in for your Mail client or Web browser. Also advertised as "downloaded and installed in minutes". This means that you have to download an untrusted new component that opens your system to new zero-day exploits, must be kept constantly updated, and may cease to work when you update your application or other plugins.

Red Flag #14: Delivered using a secure SSL/TLS encrypted connection. This delivery process protects the message only while in transit, not at rest, and falls short of basic security requirements for email messages (see references below). See also Red Flags #1 and #5.

Red Flag #15: If the recipient is not registered, the message is sent to a secure portal. This means that the recipient must register anyway, which reduces usability.

Red Flag #16: We can use your information for purposes including... This statement may sound reassuring, but it is open-ended and does not exclude any purpose or person.

This Red Flag is actually worse than the notorious "opt-out" policy, where sharing your protected information is the default mode: here there is not even a limit that you can request (by opting out) on the use of your protected information.

Red Flag #17: Beware of conflicts of interest between the distinct roles that may be played by the same service provider, which may increase both your risk and liability. Also called the "fox taking care of the hens" scenario.

For example, if (1) a service provider for secure email is also (2) a provider of an anti-virus scanning service, then the provider is potentially made aware of protected information and the "Safe Harbor" provision may not apply (see Red Flag #1).

Red Flag #18: Impossible to break. Our servers are in a secure facility. It is not a matter of if, but when.

Regardless of how safe and secure people claim something online is, there always seems to be someone who can eventually crack access to it. Not even US Department of Defense and Pentagon servers are secure.

Security claims should not depend on being "impossible to break" (the Fort Knox approach), but on the impossibility of finding anything of value when the break happens. This stronger concept of security does not assume any limitation on the attacker; such assumed limitations are often found to be unrealistic, but only after an attack succeeds.

Regulatory Compliance and Fines

These and other Red Flags are important because security and email encryption are gaining importance, for some very clear reasons:

  • Email phishing, spam, email disclosure, and identity theft are major sources of fraud losses today.
  • Protecting consumer privacy is becoming a duty for organizations worldwide.
  • Organizations in regulatory privacy and security compliance regimes (e.g., HIPAA and FFIEC) need to communicate in a way that is compliant.
  • Since February 2009, fines for non-compliance under the HITECH Act have increased sharply, to as much as $1.5 million.
For example, in the health sector, service providers must comply with the US HIPAA (Health Insurance Portability and Accountability Act), the HITECH Act (Health Information Technology for Economic and Clinical Health Act of 2009), and other regulations.

Conclusions

The choice of a security solution involves many aspects, including not overlooking the 18 red flags discussed in this article. We also observe that privacy and security compliance is not enough: any security solution must be, first and foremost, usable. Otherwise, it will either not be used or be used incorrectly, defeating the security objective.

References

Gerck, E. (2007). Secure email technologies X.509/PKI, PGP, IBE and Zmail. In Corporate Email Management, Chapter 12, Krishna, S. J. and Raju, E. (Eds.), pp. 171-196, Hyderabad, India: ICFAI University Press. Available online at http://email-security.net/papers/pki-pgp-ibe-zmail.pdf.

Neppe, V. M. (2008). The email security-usability dichotomy: Necessary antinomy or potential synergism? Telicom, 21:3, May-June, pp. 15-31. Available online at http://email-security.net/papers/usable-secure-email.pdf.

Whitten, A. and Tygar, J. D. (1999). Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium. Available online at http://www.gaudior.net/alma/johnny.pdf.

Feghhi, J., Feghhi, J., and Williams, P. (1998). Digital Certificates: Applied Internet Security. Addison-Wesley, ISBN 0-201-30980-7.

Wednesday, February 24, 2010

Large EMR privacy breach notification, two years later -- an exception or a symptom?

NOTE: A colleague and I are working on a paper discussing a number of privacy and security red flags that can help call attention to these and other issues, especially in the context of secure email. A draft is available by private email request to those interested in commenting before publication.
 
Electronic medical records (EMRs) are at the heart of health care reform, and there is both a personal and a legal expectation of privacy for EMRs.

Promptly notifying users of privacy breaches can help bring accountability to the system, and help users.

In February 2010, RelayHealth (also known as NDCHealth Corporation), acting as a claims-processing company, notified prescription holders that two-year-old EMRs, dating from February 2008 to December 2008 and containing full name, date of birth, prescription number, insurance cardholder ID, and drug name, for prescriptions dispensed at Rite Aid and other retail chain and independent pharmacies in the State of California, had been sent to an unauthorized pharmacy.

After I mentioned this case online, RelayHealth contacted us in March 2010 and stated that the data was sent in error only in November 2009, so that the delay in informing consumers was not two years but three months.

However, we note that this information was not provided to RelayHealth's consumers when the privacy breach was disclosed in February 2010. RelayHealth may want to review that communication and verify why the three-month delay was not disclosed.

Further, what matters to our analysis here is the consumer privacy risk, which includes the two-year delay. If, for example, three-year-old files are wrongly disclosed today and the EMR processor informs the patient tomorrow, this is not a lesser problem for the patient (as the Fortis case mentioned below exemplifies).

The 2010 breach notification did not disclose why the information was sent (Who requested it? Under what authorization? Who approved it?), who incorrectly received the EMRs, or who was responsible for the breach, nor what compensation or recourse users may have.

In a recent court case, Fortis (a US health insurance company) was found to have a practice of targeting policyholders with HIV. A computer program and algorithm targeted every policyholder recently diagnosed with HIV for an automatic fraud investigation, as the company searched for any pretext to revoke their policy.

Companies such as Fortis can find out about anyone's diagnosed HIV, or other illness, through pharmacies and claim processors, for example.

This situation underscores the underlying conflicts of interest between at least three distinct roles that RelayHealth plays. They are:
  1. claims processor;
  2. provider of patient EMR to their pharmacies and doctors;
  3. provider/seller of EMR to providers other than the patient's.
This last activity presents the greatest potential conflict, as patients are included in a no-opt-out policy at www.RelayHealth.com that says (words in square brackets are comments, not from RelayHealth):

"Your Provider, a Provider-Designated User [pretty much anyone] or  authorized member of a Provider Group  [anyone] can use contact and/or health information about you stored by RelayHealth for many purposes including [ie, this  says that it does not exclude anyone or anything]:
..."

and

"RelayHealth may use the contact, billing and/or health information [EMR] provided by you in our service to provide your physician or other healthcare provider [ie, anyone they want]  with updated and/or supplemental information for their files or systems." 
 
A pattern that seems to emerge here is that because EMRs also have a market value (for example, to insurance companies, pharmacies, etc.), health care service companies can build automated information exchanges where they make collected EMRs available to other entities, and build a business on this activity.

That the same health care service companies (wearing different hats) also serve on behalf of the patients to protect the EMR from disclosure is where the fox is taking care of the hens, and where the conflicts among roles 1-3 may play a role.

What this means is that the expansion of health care into larger use of EMRs ought to call for a much broader review of procedures and conflicts of interest than is currently available. And, obviously, it should also include stricter rules for information security and handling of EMRs than those currently used.

Your comments are welcome.

Best regards,
Ed Gerck

Sunday, January 17, 2010

SSL would prevent it, Re: Internet security flaw exposes private data

We all read about the most recent Internet security flaw that exposes private data [1]. Without any action on their part, a number of AT&T smartphone users found themselves logged into Facebook, a popular social networking site, under user accounts other than their own. The problem was quickly attributed to "misrouting," a term that suggests that information took a wrong turn somewhere in the network.

It looks like AT&T did something wrong, but in terms of proxy server setup, not routing, and the company is in the process of fixing it. But Facebook should also share some concern with this, as it didn't consider the information tied to a user's account authentication to be important enough to protect with something stronger than clear text.

So, when we look closer, the sky is not falling.

With at least one authenticated end-point (the end-point server, as usual), SSL would NOT allow an AT&T caching server that sits in between to have a "correct" SSL session between the wrong two end-points. See A. Menezes et al., /Handbook of Applied Cryptography/, CRC Press, New York, 1997.

This statement is true even if the AT&T caching server is able not only to modify packets but also to drop incoming packets and insert its own packets at will into the traffic, and do so in two-way communication without delay, becoming an active man-in-the-middle (which a passive, caching server is not).

This should not be confused with a phishing attack, where the end-user is tricked into going to (for example) paypal.fraud.com instead of paypal.com -- so that the server end-point is already wrong -- or with a "bridge" attack, where (without noticing it) the user goes to the wrong server before SSL starts and stays there after SSL starts. The second user did reach facebook.com, and this would have been enough under one-point authenticated SSL to prevent the first user's session from being hijacked (which is what happened), as the second user would not have the correct session keys.

This should also not be confused with other attacks on privacy and security that work in spite of SSL. For example, even with SSL you may still be connected to the wrong site (e.g., one using a rogue certificate), and your traffic may still be read by others behind an SSL proxy, but you would not be connected to someone else's previous SSL session, nor would your SSL session be usable by someone else.

These attacks, despite much confusion in online discussion lists [for example, see refs. 2 and 3], do not break SSL, nor do they have any bearing on SSL's ability to prevent the reported Facebook case ("Internet security flaw exposes private data").

The AT&T problem that originated this security issue -- which, by all evidence, was not an attack -- would have been prevented by facebook requiring SSL. This may be particularly important for mobile devices,
where users may have less connection information available to verify.
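
For the technically inclined, here is a minimal sketch (in modern Python, merely to illustrate the principle) of a client enforcing the one authenticated end-point discussed above; the hostname is illustrative.

    # Minimal sketch: TLS client with one authenticated end-point (the server).
    import socket, ssl

    HOST = "www.facebook.com"  # illustrative end-point

    context = ssl.create_default_context()  # verifies certificate chain and hostname
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            # A caching proxy in the middle cannot complete this handshake
            # between the wrong end-points: it lacks the server's private key,
            # so it cannot produce the correct session keys.
            print(tls.version())

If the handshake cannot be verified, the connection fails closed instead of silently serving someone else's session.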

And Google did the right thing now by making SSL the default for Gmail.

REFERENCES:
[1] http://arstechnica.com/web/news/2010/01/facebook-att-play-fast-and-loose-with-user-authentication.ars

[2] http://www.listbox.com/member/archive/247/2010/01/sort/time_rev/page/4/entry/13:270/20100117153354:9FD73A7C-03A7-11DF-8606-4E77A52306B0/

[3] http://www.listbox.com/member/archive/247/2010/01/sort/time_rev/page/4/entry/11:270/20100118113841:EE029CD4-044F-11DF-868D-868DA52306B0/