Decentralizing trust on the web

Update 2013-10-30: Yes, I am an idiot. This is all moot, as SSL's single point of failure is mitigated by cipher suites using perfect forward secrecy. Carry on, nothing to see here.


I'd like to sketch out an idea for a practical, improved encryption mechanism for the web. As it stands, HTTPS relies on SSL certificates, which are "endowed" with trustworthiness by certificate authorities. There are relatively few certificate authorities on the whole of the Internet. Because a compromise of any one of those authorities lets a nefarious agent sign their own authentic-looking certificates and then perpetrate a man-in-the-middle attack on any so-protected web server, I contend that the current state of web encryption concentrates trust in too few points of failure.

I propose replacing or augmenting this centralized trust model with the decentralized one of asymmetric public-key cryptography. Rather than one key serving all, in public-key cryptography each communicating party has its own key pair. As a practical requirement, I propose relying on HTTP or HTTPS as the transport, but encrypting all request and response bodies with the parties' individual public keys. Ideally, support would be built into the browser, but short of that (or in the interim) we could use browser extensions/add-ons to hook into request/response events and perform the encryption and decryption.
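
To make that concrete, here is a minimal sketch of such a hook in TypeScript. The pgpEncrypt/pgpDecrypt helpers are hypothetical stand-ins for a real OpenPGP implementation (e.g. openpgp.js), and the keys are assumed to have been exchanged already:

    // Hypothetical OpenPGP helpers; a real extension would delegate to a
    // library such as openpgp.js.
    declare function pgpEncrypt(plaintext: string, recipientPublicKey: string): Promise<string>;
    declare function pgpDecrypt(ciphertext: string, ownPrivateKey: string): Promise<string>;

    // Wrap fetch: encrypt the outgoing body with the server's public key and
    // decrypt the incoming body with the user's private key.
    async function encryptedFetch(
      url: string,
      body: string,
      serverPublicKey: string,
      userPrivateKey: string
    ): Promise<string> {
      const response = await fetch(url, {
        method: 'POST',
        body: await pgpEncrypt(body, serverPublicKey),
      });
      return pgpDecrypt(await response.text(), userPrivateKey);
    }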

Browsers that support this would notify the web server with an HTTP request header, perhaps X-Accepts-Key, whose value is a supported public key's fingerprint. This would allow the server to look up the supported public key by fingerprint. Such browsers could also send request bodies encrypted with the server's public key and indicate this in the request with the header X-Content-Key, specifying the server's key fingerprint. Likewise, servers would include X-Content-Key in their responses to indicate that the body is encrypted with the user's public key. These headers should be considered alongside other HTTP content negotiation parameters (influenced by request Accept* headers and specified in response Content* headers) in determining HTTP cacheability.
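
A sketch of this negotiation from the user agent's side (the fingerprint values are assumed known from a prior exchange, and pgpDecrypt is the same hypothetical helper as above):

    declare function pgpDecrypt(ciphertext: string, ownPrivateKey: string): Promise<string>;

    async function negotiatedFetch(
      url: string,
      encryptedBody: string,
      userKeyFingerprint: string,
      serverKeyFingerprint: string,
      userPrivateKey: string
    ): Promise<string> {
      const response = await fetch(url, {
        method: 'POST',
        headers: {
          'X-Accepts-Key': userKeyFingerprint,   // "I can read responses encrypted to this key"
          'X-Content-Key': serverKeyFingerprint, // "this body is encrypted to your key"
        },
        body: encryptedBody,
      });
      // A response encrypted to the user names the user's key fingerprint.
      const body = await response.text();
      return response.headers.get('X-Content-Key') === userKeyFingerprint
        ? pgpDecrypt(body, userPrivateKey)
        : body; // the server answered in plaintext
    }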

Web servers will have to retrieve the public key specified in the request headers. I do not propose an exact mechanism for this, but a simple approach would be to allow users to associate a public key with their "user account" (i.e. their unique security identity), either by POST-ing a web form over plain-old HTTPS, or perhaps in person at a corporate or field office! (I imagine a market for physical key delivery could crop up if the public demanded it… think armored trucks and bankers boxes.) Likewise, the server will provide a public key to the user/user-agent; this key pair should be unique to the user, to provide enhanced security. (Users can check this among themselves by comparing the keys used in their individual communications with the server.)
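
On the server side, registration and lookup could be as simple as the following Node/Express sketch (pgpFingerprint is a hypothetical helper, and account authentication is omitted):

    import express from 'express';

    // Hypothetical helper: compute the fingerprint of an armored public key.
    declare function pgpFingerprint(armoredKey: string): string;

    const app = express();
    app.use(express.text({ type: '*/*' }));

    // In-memory store for illustration only: fingerprint -> armored public key.
    const keysByFingerprint = new Map<string, string>();

    // Users associate a public key with their account over plain-old HTTPS.
    app.post('/account/key', (req, res) => {
      const armoredKey = req.body as string;
      keysByFingerprint.set(pgpFingerprint(armoredKey), armoredKey);
      res.sendStatus(204);
    });

    // Later requests name a key by fingerprint via X-Accepts-Key.
    app.use((req, res, next) => {
      const fingerprint = req.header('X-Accepts-Key');
      res.locals.userKey = fingerprint ? keysByFingerprint.get(fingerprint) : undefined;
      next();
    });

    app.listen(8080); // plain HTTP here; a deployment would sit behind HTTPS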

Servers should also support the OPTIONS request and include something like X-Allows-Encryption: rfc4880 in the response. Both server and user agent should dynamically fall back to "plaintext" HTTPS when either side lacks support. In particular, because certain HTTP methods are not idempotent, the URLs of encrypted requests should first be OPTIONS-ed. Unfortunately, OPTIONS is not cacheable, but this overhead is a small price to pay when security is paramount. It would be nice to simply attempt the encrypted request and rely on the server to properly reject it with a 400 (which would indicate the need to retry without encryption), but it is conceivable that the semantics of certain resources do not allow differentiating between plain and cipher text (e.g. PUT-ing or POST-ing binary data).
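
The probe-and-fall-back logic might look like this (again a sketch, assuming the X-Allows-Encryption header described above):

    // Probe a URL with OPTIONS before sending an encrypted, non-idempotent
    // request; fall back to "plaintext" HTTPS when the server lacks support.
    async function allowsEncryption(url: string): Promise<boolean> {
      const probe = await fetch(url, { method: 'OPTIONS' });
      return probe.headers.get('X-Allows-Encryption') === 'rfc4880';
    }

    async function send(
      url: string,
      plaintext: string,
      encrypt: (s: string) => Promise<string>
    ): Promise<Response> {
      const body = (await allowsEncryption(url)) ? await encrypt(plaintext) : plaintext;
      return fetch(url, { method: 'POST', body });
    }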

Ultimately, while not the be-all and end-all of web security, this seems to me to add a "pretty good" layer of difficulty for attackers on top of existing security conventions. Of course, I'm no security or cryptography expert, so I can't assert the merit of this idea. But that doesn't stop me from discussing and thinking about this important issue.


Update 2013-10-03: Perhaps another alternative would be to make it easier for users to act as certificate authorities, and for websites to create a unique SSL certificate per user, which the user can then sign. For example, a new user will log into a website protected by an SSL certificate signed by a third-party "trusted" authority. The website will then, perhaps in automated fashion, create a unique SSL certificate for that user and request that the user, acting as a certificate authority, sign it. Thereafter, the user will access the website via a unique subdomain, perhaps of the form https://<username>.example.com. While this leaves the initial certificate-signing stage protected by only a common (single-point-of-failure) SSL certificate, it creates a proliferation of SSL certificates thereafter, and entities that crack or coerce certificates would have significant difficulty conducting mass surveillance.
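
A sketch of that provisioning flow, using entirely hypothetical helper names (a real implementation would wrap a toolkit like OpenSSL):

    // Entirely hypothetical helpers and types; none of these names come from
    // a real library.
    interface CertRequest {}
    interface Certificate {}
    interface PrivateKey {}
    declare function createCertificateRequest(subjectHost: string): CertRequest;
    declare function signAsAuthority(request: CertRequest, caKey: PrivateKey): Certificate;
    declare function installCertificate(host: string, cert: Certificate): void;

    // 1. The site generates a certificate request for the user's unique subdomain.
    // 2. The user, acting as a certificate authority, signs it with their own CA
    //    key (in practice this step happens on the user's machine).
    // 3. The site then serves https://<username>.example.com with that certificate.
    function provisionUserCertificate(username: string, userCaKey: PrivateKey): void {
      const host = `${username}.example.com`;
      const request = createCertificateRequest(host);
      const certificate = signAsAuthority(request, userCaKey);
      installCertificate(host, certificate);
    }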



