Vulnerable Web-mail the tip of the iceberg

By Jon Tullett, Editor: News analysis
Johannesburg, 07 Jan 2016

This week we learned that a lot of Web-mail providers still support vulnerable SSLv3 encryption, potentially leaving their users open to attack.

SSLv3 is ancient, and has been broken thoroughly enough for the IETF to formally deprecate it, meaning any service claiming to be standards compliant is required not to support SSLv3 at all, not merely to avoid it by preference. The Poodle attack in 2014 demonstrated how SSLv3 could be successfully attacked with relative ease.

So, why the tardiness in updating to more secure options? Unfortunately, there's an easy answer: it is really hard to update core protocols without breaking the Internet, and security has always been considered less of a priority than interoperability and stability.

SSL, like many other encryption technologies, is not a core protocol. It has been layered on top of existing protocols. In the case of the Web, that's HTTP; in the case of e-mail, that's SMTP; and so on. Crypto like SSL is agnostic - it's not a Web technology at all and is applied to all sorts of services.
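To see how loose that layering is, here is a minimal Python sketch (the host name is a placeholder): the TLS wrapper is applied over an ordinary TCP socket, and whatever application protocol is spoken inside it - HTTP in this case - is none of the crypto layer's business.

    import socket
    import ssl

    # Build a TLS context with modern defaults (SSLv3 is not offered at all).
    context = ssl.create_default_context()

    # Open a plain TCP connection, then layer TLS on top of it.
    with socket.create_connection(("www.example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
            # From here on, the application protocol rides over the encrypted
            # channel exactly as it would over plain TCP.
            tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(200))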

Some of those existing services, like HTTP and SMTP, are practically universal. When you send e-mail from A to B, it doesn't matter which vendors' implementations are sitting at each end or how many hops the message traverses en route, it will use SMTP - often many times - to get there. Because of that ubiquity, it is highly risky to meddle with the core protocol. If one hop demands a different protocol, large parts of the Internet could simply cease to function. Engineers at the IETF, therefore, prioritise stability and availability, leaving security to be layered on top.

Unfortunately, those security layers are optional, not mandatory, and in many cases they are simply outright kludges - workarounds which, while clever, make the system even more complicated. And since those optional mechanisms are layered on top of the e-mail standards, they are not universally supported, used, or even considered.

Look at e-mail, for example, which is particularly relevant since High-Tech Bridge picked on e-mail providers.

SPF and DKIM are other examples of optional security layers - Sender Policy Framework and DomainKeys Identified Mail use DNS records to publish, respectively, the IP addresses and cryptographic keys of valid mail exchanges, so spoofed mail can be identified and dealt with accordingly.
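To make that concrete, here is a rough sketch of the lookups a receiving mail server performs, using the third-party dnspython library; the domain name and DKIM selector are placeholders, and the record values shown in the comments are illustrative only.

    import dns.resolver

    def txt_records(name):
        """Return all TXT strings published at a DNS name, or [] if there are none."""
        try:
            return ["".join(part.decode() for part in rdata.strings)
                    for rdata in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    # SPF: which hosts are allowed to send mail for this domain?
    print(txt_records("example.com"))                        # e.g. ['v=spf1 -all']

    # DKIM: the public key used to verify the signature in the message headers.
    print(txt_records("selector1._domainkey.example.com"))   # e.g. ['v=DKIM1; k=rsa; p=...']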

This highlights the difficulties in adding security to a pervasive protocol like SMTP. For a start, those are two of the surviving schemes that emerged from a flurry of competing proposals in the early part of the millennium - many failed to gain traction, and even now DKIM and SPF are not universally supported, much less mandated. So acceptance and adoption are a problem. If you demand DKIM, you won't be able to exchange e-mail with an awful lot of people.

Secondly, they introduce complexity and secondary vulnerabilities - now you have to manage DNS as well as e-mail, and DNS is a common target for attack, whether through direct tampering, hijacking or spoofing. That just moves the problem, rather than solving it: When you receive a DNS-delivered DKIM key, can you trust it? There are DNS security extensions, like DNSSEC, but they suffer from the same challenge: a patchily adopted and therefore optional add-on to an established protocol, and so the cycle perpetuates.
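One way to tell whether DNSSEC actually protected a given answer is to check the AD (authenticated data) flag that a validating resolver sets. A hedged sketch using the third-party dnspython library follows; the resolver address is assumed to be a DNSSEC-validating one, and the record name is a placeholder.

    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["9.9.9.9"]        # assumed: a DNSSEC-validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # ask the resolver for DNSSEC processing

    answer = resolver.resolve("example.com", "TXT")
    if answer.response.flags & dns.flags.AD:
        print("Resolver validated this answer with DNSSEC.")
    else:
        print("No DNSSEC validation - the record could have been altered in transit.")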

In short, mandating new protocols risks breaking the Internet, so we don't. But because we don't, security suffers. Since SMTP's prime directive is universal interoperability, that leaves security to the recipient, and as a result we've had a very expensive and destructive generation of e-mail woes - spam, malware, fraud and the rest. Should the RFC authors have done things differently? Maybe, but then again, maybe not. Look at the abject failure of the technology industry to establish a universal instant messaging framework - that was where e-mail could have gone, before SMTP took over.

That brings me to SSL, which is layered on top of plain-text protocols to protect their contents.

SSL: old and entrenched

The same protocol wrangling plays out in the SSL space, and the periodic breakage of old standards is a well-sung melody. The most visible example of this is in the browser space, because SSL is so prominently used to secure Web sites - online banking, shopping, e-mail and so on.

A server and client, when establishing an SSL connection, exchange a list of protocols they support, and then agree on whatever the best mutually-supported scheme may be. Some attacks leverage this by posing as a man in the middle and modifying this negotiation so the agreed protocol is one which is open to attack.
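On the client side, the usual defence is to refuse to fall back to weak protocols at all. A minimal Python sketch, with a placeholder host name: the context pins the minimum acceptable version, so a downgraded handshake simply fails rather than quietly agreeing on broken crypto.

    import socket
    import ssl

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # never negotiate SSLv3 or early TLS

    with socket.create_connection(("mail.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="mail.example.com") as tls:
            print("Negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'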

Sometimes they may not be able to agree on a protocol. Maybe the browser is old, and can only speak broken old crypto like SSLv2 and v3. In a UX space like online banking, that's okay - you just redirect the user to a holding page asking him to update his browser. And this is getting even easier as browser vendors move to rapid updates to resolve security issues, so the base of users without access to modern crypto is relatively small and manageable. The main vulnerable audience for SSLv3 is IE6 on Windows XP, a vanishingly small community of warm-blooded users, but unfortunately that's not where the problem ends.

In an automated machine-to-machine space like the global mesh of e-mail exchanges or industrial telemetry, this is much more difficult. There are a vast number of outdated servers, embedded systems, legacy software and home-grown components which may well just break. Embedded systems in particular tend to be stuck on old versions of software, so vulnerabilities can be very difficult, if not outright impossible, to patch.

That leaves us in a quandary: Are we okay with breaking a lot of embedded systems, or segregating the Internet by protocol support?

In the SSL space, you might remember this was a major concern when the Heartbleed vulnerability in OpenSSL came to light. And Heartbleed, a flaw in a widespread SSL implementation, didn't just affect Web servers, it affected many other services which used SSL crypto, including e-mail. (When I e-mailed Dominic White, CTO of SensePost, for insight about Heartbleed, he told me he'd been testing his own mail server for vulnerability, and had seen my e-mail arrive via his attack before it got to his inbox.)

With OpenSSL baked into embedded systems, there still remain a number of services which will be exposed to that flaw for the foreseeable future. The same goes for Poodle in cases where SSLv3 is baked in, and for many other vulnerabilities.

And that brings me to the crux of it. Public services, like Web-mail, tend to support old protocols because they're still in use, and turning them off may break stuff. SSLv3 should be removed from user software like e-mail clients and Web browsers, but back-end servers are likely to continue to offer it. A compromise may be to specifically ring-fence older protocols, so that by default accounts cannot be accessed insecurely unless an admin explicitly requests it.
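On the server side, that ring-fencing ultimately comes down to what the TLS stack is allowed to negotiate. A hedged sketch using Python's ssl module, with placeholder certificate paths: the context refuses anything older than TLS 1.2, so legacy clients fail the handshake instead of silently getting weak crypto.

    import ssl

    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2      # raise the floor outright
    server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

    # Deployments that cannot raise the floor for everyone sometimes disable
    # individual protocols instead:
    # server_ctx.options |= ssl.OP_NO_SSLv3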

Browser vendors have been taking a more aggressive stance in expiring old crypto in their software, with plans to remove support for the elderly SHA-1 cryptographic hash. We may see momentum shifting to more rapid adoption of new protocols, but don't count on it. That's just not how the Internet was built.

As users, we're just as guilty. After Heartbleed, how many people audited all their networked services to identify vulnerable instances of OpenSSL - not just e-mail and Web, but everything network-facing? After Poodle, how many scoured their networks for systems which could be affected? It's easy to patch the obvious servers, but a lot of legacy systems were either not found or proved too difficult to fix. Until the entire Internet engineering community stops seeing security as an optional layer, this won't go away.
