Nice paper! I especially liked the cited 'Johnny' paper.
What about denial of service attacks where very large RSA keys are used
or faked to either crash the server or tie it up with decryption and
verification of bogus messages? Even a flood of normal encrypted/signed
messages has to be decrypted before the signatures can be checked. I've
seen some messaging systems where the client encrypts first, then signs
(the opposite of S/MIME) so the server can avoid decrypting invalid
messages.
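The encrypt-then-sign ordering can be sketched as follows. This is a toy illustration only: an HMAC stands in for the sender's public-key signature, and `expensive_decrypt` is a placeholder for a costly RSA/hybrid decryption -- all names here are hypothetical, not from any real system. The point is purely the ordering: the cheap check runs first, so bogus traffic is dropped before any decryption work is done.

```python
import hashlib
import hmac

SIGN_KEY = b"shared-signing-key"  # stand-in for the sender's verification key


def expensive_decrypt(ciphertext):
    # Placeholder for a costly public-key/hybrid decryption step.
    return bytes(b ^ 0x42 for b in ciphertext)


def handle_message(ciphertext, signature):
    # Encrypt-then-sign: the signature covers the ciphertext, so the
    # server can reject bogus messages without decrypting anything.
    expected = hmac.new(SIGN_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return None  # drop: no decryption work wasted on a forgery
    return expensive_decrypt(ciphertext)


ct = bytes(b ^ 0x42 for b in b"hello")
good = hmac.new(SIGN_KEY, ct, hashlib.sha256).digest()
print(handle_message(ct, good))          # valid: decrypted plaintext
print(handle_message(ct, b"\x00" * 32))  # forged: dropped before decryption
```

With sign-then-encrypt (as in S/MIME) the `expensive_decrypt` call would have to come first, which is exactly the flooding exposure described above.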
You make a good point -- if encryption is made simpler and more people
use it, it will be attacked. Reader "dgustafson1" (see the blog entry
above) also comments that users should not be forced to spend time
handling mass quantities of unwanted secure messages (e.g., encrypted
spam). Providing an additional signature that can be verified before
decryption could help, as you point out for S/MIME.
The paper does take this problem into account, but it does so in a
technologically neutral way, using F10 (Meets Digital Signature
Requirement), F11 (Authenticates Sender and Recipient), and the absence
of several P#s, especially P6 (Unverified Sender's Email Address).
Please check the blog entry above for my comments on this.
P6 is important here because if you can trace the sender (there is no
spoofed sender email address), not only does the sender become liable,
but you also have a fixed, verified target to accept or block beforehand.
I used to work with HSMs (hardware crypto and key storage), and we used
to get many requests for accelerated crypto and strong key storage. A
standard interface to the crypto subsystem was the feature we required.
Some corporate users can justify the cost of this feature.
But even with an HSM, root operators can do a lot to undermine user
security, including reading email. Banks routinely separate development
from operations, even with an HSM, so that operations personnel do not
become aware of vulnerabilities. Another example is the "no lone
operator" policy for handling sensitive data, even with an HSM.
It is not only conflicting business interests that are of concern here,
but also unnoticed data mining, blackmail, disclosure of trade secrets,
and many other activities (even legal ones) that break users' privacy,
HSM or not, to benefit someone. The market knows this, as we saw in the
rejection of MS Passport, which proposed to hold other businesses' user
data in secure storage at Microsoft.
That's why the lists of features and problems/attacks in the paper have
several items to protect users' privacy. What's the point of email
encryption if the supposedly secure channel is actually open (even if
just to one eavesdropper)? It may be even worse than no encryption at
all, because there would be a presumption of confidentiality by the
communicating parties.
Key escrow (of the encryption key only) allows content filtering and law
enforcement agencies to function. It sounds like IBE has that capability.
IBE has that capability by construction. Some papers suggest that the
effects can be constrained by reducing each key's usage to a few
messages, or even to one. However, even if the security parameters of
the IBE changed for every user's key (which would be a huge burden for
all recipients and senders), a simple data logging device at the Private
Key Server (and/or at the recipient's end) could log all keys and allow
any message to be read.
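The escrow-by-construction property can be modeled with a toy sketch. To be clear about assumptions: this is not real pairing-based IBE math -- HMAC-based derivation stands in for the key extraction, and `PrivateKeyServer` is a hypothetical name. What it does show is the structural point above: every private key is derived from one master secret held by the server, so a simple log at the server captures every key it ever issues.

```python
import hashlib
import hmac


class PrivateKeyServer:
    # Toy stand-in for an IBE Private Key Generator: one master secret,
    # from which the private key of ANY identity can be derived on demand.
    def __init__(self, master_secret):
        self._master = master_secret
        self.log = []  # the "simple data logging device" from the text

    def private_key_for(self, identity):
        key = hmac.new(self._master, identity.encode(), hashlib.sha256).digest()
        self.log.append((identity, key))  # every issued key is recorded
        return key


pks = PrivateKeyServer(b"master-secret")
alice_key = pks.private_key_for("alice@example.com")

# The logged copy is identical to the key Alice received, so anything
# encrypted to her identity can later be read from the log alone.
print(pks.log[0][1] == alice_key)
```

Changing security parameters per key does not help here: whoever operates the server still sees (or can re-derive) each key at extraction time.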
Of course, a country's laws may require key escrow or key discovery.
And businesses, to prevent problems if the proverbial bus hits an
employee, may also require key escrow (that is why a biometric access
system requires a backdoor, which is yet another problem). However,
providing key escrow without a choice, for everyone, might create more
problems, for example if the watchers themselves become targets.
The key escrow debate was very active in the '90s, and it showed more
problems than benefits. Law enforcement can collect and use
communication metadata (i.e., routing, IP numbers, time, frequency of
use, etc.) much more easily than message content, and use it more
effectively too. People can always use a simple one-time code book or
jargon to defeat any key escrow scheme. The secret channel created by
the code talkers in WWII was, reportedly, never broken.
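The code-book point can be made concrete with a trivial sketch (the phrases below are invented for illustration). The mapping is agreed offline, so an escrowed encryption key recovers only the literal coded sentence, never its meaning:

```python
# Pre-arranged code book, shared out of band -- never transmitted.
CODE_BOOK = {
    "the package": "meet at noon",
    "rain": "abort",
}


def decode(message):
    # Substitute each agreed code phrase with its private meaning.
    for code, meaning in CODE_BOOK.items():
        message = message.replace(code, meaning)
    return message


# An eavesdropper with the escrowed key sees only the coded text;
# without the code book, "rain" reads as innocuous weather talk.
print(decode("rain"))
```

Key escrow reveals the channel contents, but it cannot reveal a private vocabulary layered on top of them.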
Also, these email systems are only discretionary -- the user has to
choose to send/receive securely. None of them (AFAIK) supports
mandatory security, where security is always on and can't be disabled
or ignored by the user.
Some do, by enforcing a security policy -- e.g., "email to
email@example.com must be sent encrypted". Receiving securely can also
be defined by the sender: if I sent it encrypted, you have no choice.
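Such a policy check is simple to sketch. This is a hypothetical client-side rule table, not any real product's API; it only illustrates how a mandatory "must be sent encrypted" rule removes the user's discretion:

```python
# Hypothetical outbound policy: recipients mapped to mandatory rules.
POLICY = {
    "email@example.com": "must_encrypt",
}


def may_send(recipient, is_encrypted):
    # The client refuses to send in the clear when policy demands
    # encryption -- the user cannot disable or ignore the rule.
    rule = POLICY.get(recipient)
    if rule == "must_encrypt" and not is_encrypted:
        return False
    return True


print(may_send("email@example.com", is_encrypted=True))   # allowed
print(may_send("email@example.com", is_encrypted=False))  # refused
```

The enforcement lives in the client (or gateway), which is why it works regardless of the underlying crypto technology.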
At least some clients can be set to default to secure, which is a bit
better. I guess that is a feature of the email client regardless of the
crypto technology, though.
Yes, and that's why I did not include it in the paper.
Thanks for the good comments!