Charlie Miller: Serious lack of consequences for insecurity
Are we better off than we were seven years ago? In reality, infosecurity is in the same place. There were breaches then, and there are breaches now. We are still patching, and we still haven't learned to write secure code. It's hard to say we have improved.
So says Charlie Miller, security engineer at Twitter, speaking at the ITWeb Security Summit 2014 in Sandton this morning. "Seven years ago I was optimistic I could make a difference. We've been trying so hard for so long and haven't really made a difference. Everything can be hacked, whether you try to be secure or not, even if you try really hard to be secure."
Moreover, he says all victims that Mandiant responded to in 2013, that is, organisations that had been breached, operated in a PCI-compliant environment, meaning compliance is no guarantee against being hacked. Over and above this sort of attack, he says hackers still hack for fun.
Enterprises are spending fortunes on security; they have huge budgets and think they are secure, yet they still suffer breaches. "Even when you do everything right, you still lose; your security isn't necessarily reliant on the security solutions and tools you have. Your users are using apps and hardware that aren't secure. Unfortunately, companies still rely on products not having vulnerabilities."
He asks why the products we use are insecure. "With all the resources thrown at them, Internet Explorer for example, why are they still not safe?"
Firstly, he says it is difficult to produce code without vulnerabilities; it is costly and takes time. There is also a rush to push new products out the door. It's easy to release a product and worry about security later.
Miller adds that the underlying problem is we cannot measure how secure a product is. "We can't see security. Everyone says they're secure, but there are no reliable measures to prove this. There is also a lack of consequences for insecurity: when a product has a vulnerability in it, there are almost no financial consequences for the vendor. They can just push out a patch or ignore it, and typical consumers don't care; they want shiny new features. Companies that suffer breaches due to insecure products they were using don't, or can't, sue the vendors."
A solution, he says, is to drive up the cost of finding and exploiting a bug, in an attempt to make it high enough to keep out attackers. However, if the attacker is a government entity, it will always be able to outspend you. "If an attacker has a billion dollars, they will be able to get into your system.
"You need to think about who your attackers are, and target your defence that way," says Miller.
Unfortunately, we can't rely on vendors to make secure products, he says. There are no laws that require the development of secure products, and lobbying isn't going to effect this change easily. There are also no bodies that certify secure software and hardware.
Vulnerability information can be used in two ways: defensively, by notifying the vendor and rolling out patches; or offensively, by creating cyber weapons. "It cannot be used for both."
On outlawing exploit sales, Miller asks: "Firstly, what is an exploit? A file? Program? Document? White paper? What else becomes illegal? Network scanners, bug reports, jailbreaks, fuzzers? It is already illegal to use an exploit; that should be enough."
He says not to look to the media to help either, as they want to sell newspapers and get page views. "Moreover, talking about compliance failures, application sandboxes and secure coding is boring; the press prefers to talk about thrilling theoretical attacks or scary new attacks, which may be exciting but not necessarily helpful."
In terms of researchers, he says they used to trade vulnerabilities for street cred. "Then they started reporting vulnerabilities, either to the vendor or to Bugtraq, for the same reason. Not many researchers looked to sell this information."
What has changed? "Finding bugs and exploiting them in significant software is becoming harder. Supply and demand dictates that the remaining bugs are more valuable. Researchers who find bugs today have three options: notify the vendor and get your name in a security bulletin; sell with vendor notification for $5k; or sell without vendor notification and get $100k. What choice would you make? What choice would a 17-year-old Romanian make? Should Internet security depend on these choices?"
On the plus side, Miller says there was good coverage of the recent Heartbleed vulnerability. "It had mass media exposure and the coverage discussed relevant issues, such as the reliance on open source software and its implications."
In addition, he says the security of products is improving, and there are fewer bugs. "Application sandboxes, code signing and anti-exploitation technologies are also helping, as is the fact that the number of people with the ability to breach you is shrinking. Vendors are also getting smarter, and there is paid community involvement: pay for bugs, pay for research. Bug bounties and similar initiatives are really working, as are crowdsourced audits of the code we all rely on."
Ultimately, however, Miller says we're in bad shape. "We spend money but are ultimately only as safe as the products we rely on, and these are not going to be secure anytime soon."