Want good security? Think like an attacker
In the security world, it helps to understand the way cyber criminals think. "It takes a thief to catch a thief," goes the maxim, and top security researchers frequently hunt for the same types of flaws that cyber criminals hope to exploit.
"People often think of cyber crime as one person trying to break in, then someone sees it happen on the other side and tries to intervene," says Lavi Lazarovitz, Senior Director of Cyber Research, CyberArk. "That's how the movies portray it, but it's not how things work. Most good security depends on prevention and finding those attack opportunities before the bad guys do. That means that the best security experts think like attackers."
Following the movie theme, Hollywood has us believing that a hacker breaks in, deploys some advanced software, and grabs what they can before disappearing. In the real world, things are more prosaic, yet no less aggressive.
A typical breach can last for months, and on average companies take almost a year to identify that they have been hacked. In all that time, the criminals can sit inside the system, stealing data while avoiding attention. In the infamous hack of Sony Pictures, the attackers even intercepted and altered security-related e-mails to help cover their ongoing presence.
Attacks can also be swift and brutal. Ransomware, in particular, becomes visible very soon after a breach: it encrypts data and then publicly demands a ransom. But even in such highly visible attacks, a lot of preparation goes in beforehand. Criminals identify soft targets, such as employees they can fool with a phishing attack or even blackmail into doing their bidding. Many criminals will lurk in a system to steal information first, then release ransomware to make additional gains. If companies don't pay up, the stolen data is sold to other criminal groups.
Finding the flaws
Criminals need to find flaws in a system to launch a successful attack. Flaws come in many forms, and they are the number one reason why company systems should always be kept up to date with patches.
It helps to know the types of vulnerabilities they aim to exploit. According to Lazarovitz, two of the most common examples are design flaws and insecure coding.
"Design flaws are small mistakes that an attacker can exploit. An example can be found in some credit card chips. When the consumer inserts the card to pay, the first thing that happens is an unencrypted exchange between the card and the terminal. The terminal asks for a personal identification number (PIN), receives it, then checks with the chip on the card that it is valid. If it is, the transaction goes forward. Finally, the credit card sends some encrypted data to the terminal, which in turn is sent to the bank to validate the transaction so the vendor can get paid."
Yet in some credit card chips, the PIN verification generates an unencrypted yes/no response. Not all credit cards and terminals work like this, adds Lazarovitz, "but some do. And if criminals understand that the only check is the credit card saying yes or no, they can exploit the design flaw and create credit cards that always say yes. It doesn't matter what PIN is entered, because the next data transmission in the process is encrypted – so the terminal can't check it."
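The flawed exchange can be sketched in a few lines. This is a toy simulation, not a real payment protocol: the class and function names are illustrative, and the point is simply that a terminal trusting the card's unencrypted yes/no answer can be defeated by a forged card that always answers yes.

```python
# Toy sketch of the design flaw described above (illustrative names,
# not a real payment API).

class HonestCard:
    def __init__(self, pin):
        self._pin = pin

    def verify_pin(self, entered_pin):
        # The card compares the PIN and answers with a plain,
        # unencrypted yes/no -- this response is the design flaw.
        return entered_pin == self._pin


class MaliciousCard:
    def verify_pin(self, entered_pin):
        # A forged card simply says "yes" to any PIN.
        return True


def terminal_authorise(card, entered_pin):
    # The terminal trusts the card's unencrypted answer; the later
    # encrypted exchange never re-checks the PIN, so a forged "yes"
    # is enough to move the transaction forward.
    return card.verify_pin(entered_pin)


print(terminal_authorise(HonestCard("4912"), "0000"))  # False: wrong PIN rejected
print(terminal_authorise(MaliciousCard(), "0000"))     # True: forged card accepted
```

The fix, as the article implies, is to make the PIN check part of the encrypted, bank-verified exchange rather than a plaintext answer the terminal takes on trust.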
Simple yet effective – this is the nature of security flaws. Such flaws can creep in through mistakes at the programming level. Another category, insecure coding vulnerabilities, arises when programmers don't follow the rules of secure programming. These are – unfortunately – very common in the software world.
"These vulns come in many forms – including memory-based bugs, which allow attackers to write code into parts of memory where they shouldn't be able to, as well as credential management vulns, where attackers gain access to credentials they're not supposed to see. Sometimes programs simply expose debugging information that gives adversaries something further to exploit."
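Two of the patterns mentioned – credential exposure and leaky debugging output – are easy to demonstrate. The snippet below is a hypothetical sketch (the function names and user store are invented for illustration): the insecure version hard-codes a password in source and raises an error that reveals which usernames exist, while the safer version fails without giving anything away.

```python
import hmac

DB_PASSWORD = "s3cret"  # credential management vuln: a secret hard-coded
                        # in source is visible to anyone who reads the code


def login_insecure(user, password, users):
    try:
        return users[user] == password
    except KeyError as exc:
        # Debug-information leak: the error tells an attacker which
        # usernames exist and hints at the program's internals.
        raise RuntimeError(f"lookup failed for {user!r}: {exc!r}")


def login_safer(user, password, users):
    # Uninformative failure: no hint whether the user exists, and a
    # constant-time comparison avoids leaking data through timing.
    stored = users.get(user, "")
    return hmac.compare_digest(stored, password)
```

In practice, secrets belong in a vault or environment configuration, and error messages shown to callers should stay generic while the detail goes to internal logs.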
Insecure coding can surface even in places thought to be highly secure. Recently a bug was discovered in the powerful sudo administration command on Linux systems that could give an attacker elevated privileges. In the cyber security world, such flaws are very valuable, and some groups keep a flaw secret so they can continue exploiting it – a flaw unknown to the vendor and its defenders is called a zero-day vulnerability.
Fighting fire with fire
How do attackers find these vulnerabilities? One common technique is fuzzing: an automated software-testing method that hunts for exploitable bugs by feeding invalid, unexpected, and random inputs into a program and watching for coding errors and security loopholes.
"The attacker assumes that somewhere in the program there is a hidden bug, and now has only two problems to solve. The first is finding out exactly where the bug is, and the second is figuring out what input must be passed through the program in order to trigger the bug."
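A minimal fuzzer can be written in a few lines. The sketch below is illustrative: `check_tag` is an invented target with a planted bug (it reads the fourth character of any input starting with "P" without checking the length), and `fuzz` simply throws random strings at it and records every input that makes it crash – solving both of the problems described above, where the bug is and what input triggers it.

```python
import random
import string


def check_tag(data: str) -> bool:
    # Invented target with a hidden bug: for inputs starting with "P"
    # it reads the fourth character without checking the length, so
    # any short "P..." input raises an IndexError.
    if data.startswith("P"):
        return data[3] == "!"
    return False


def fuzz(target, rounds=10_000, seed=1):
    # Minimal random fuzzer: feed random strings to the target and
    # keep every input that makes it crash.
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = "".join(rng.choices(string.ascii_uppercase, k=rng.randint(0, 6)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes


crashes = fuzz(check_tag)
# Every crashing input starts with "P" and is shorter than four
# characters -- exactly the trigger condition for the hidden bug.
```

Real fuzzers such as AFL or libFuzzer are far more sophisticated – they mutate known-good inputs and use coverage feedback to steer generation – but the principle is the same: automate the search for inputs that make the program misbehave.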
Digital security experts deploy the same tricks, skills and experience as cyber criminals – only they use their abilities for defence and security, not criminal exploitation. Cyber criminals often think of themselves as something other than common criminals, but in a digital world they are no different from a street-side mugger or a home burglar. Fortunately, just as good detectives and police constables do, security experts learn to think like cyber criminals and anticipate their plans.
"The key takeaway is that the best weapon a researcher has is learning to think like an attacker; to develop a similar mindset. By analysing the same flaws attackers are looking for, researchers can find the exploitable gaps and enhance the security posture of their organisations," says Lazarovitz. "This is useful even for non-security experts. Look at your company's digital assets. What is valuable? Who has access? How could they be compromised? Thinking like an attacker can tell you a lot about the risks your business might face."