There are plenty of articles detailing the uses of patience, creativity, and above all, learning from your failures. Those are all requirements for excelling in any technical field. Instead of rehashing that baseline, let’s take a look at how a threat actor might stage attacks to cause the most extensive damage, regardless of the target.
How does a malicious actor find the greatest harm?
You may find it uncomfortable or distressing to contemplate possibilities that can cause loss of life or severe harm to vulnerable people. I will suggest some means of handling these difficulties, but if they arise in your work, please take a step back and steady yourself before continuing. Creating a good threat model relies on clear thought; it’s hard to do well when overcome by anxiety.
You can optimize any threat modeling exercise by understanding the system under test on a technical level. However, to understand what harm a system can do, we must also understand its physical and social context.
Fundamental checks and balances
Understanding on a technical level begins with knowing what the system should do and what it should be prevented from doing. Begin with the stated intent of the system, and branch out. You are probably already familiar with these steps. Some of the fundamental questions are (one way to track the answers is sketched after this list):
- What prevents the system from providing critical data to the wrong person or system?
- Does anything prevent users from giving the system bad information, including updates, signatures, or authentication tokens?
- Can any such precautions be bypassed?
- What’s the worst thing that happens if the system reveals all its data, operates on bad data, or becomes unavailable?
- Can you force the system to halt, leak data, or make bad decisions?
- Which of those decisions will have the greatest impact on the group of users you wish to target?
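To make those questions concrete, here is a minimal sketch of one way to track the answers during an engagement. Everything in it is an assumption for illustration: the component names, the trust-boundary flag, and the worksheet format are not part of any standard tool.

```python
from dataclasses import dataclass, field

# The fundamental questions, condensed. An "UNKNOWN" answer for any component
# that crosses a trust boundary is a lead worth chasing.
QUESTIONS = [
    "What prevents critical data from reaching the wrong person or system?",
    "What prevents acceptance of bad updates, signatures, or tokens?",
    "Can those precautions be bypassed?",
    "What is the worst case if data leaks, is corrupted, or is unavailable?",
]

@dataclass
class Component:
    name: str
    crosses_trust_boundary: bool
    # Maps a question to the claimed mitigation, if any.
    answers: dict = field(default_factory=dict)

def worksheet(components: list) -> None:
    """Print every question against every boundary-crossing component."""
    for component in components:
        if not component.crosses_trust_boundary:
            continue
        for question in QUESTIONS:
            answer = component.answers.get(question, "UNKNOWN")
            print(f"{component.name}: {question} -> {answer}")

# Hypothetical components for illustration.
worksheet([
    Component("update-service", True,
              {QUESTIONS[1]: "code signing with a pinned key"}),
    Component("telemetry-db", True),
])
```

The value isn’t the code itself; it’s that a structured worksheet makes the unanswered questions, which are the attacker’s most likely entry points, impossible to overlook.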
Motivation behind professional evil
When thinking about physical context, it’s necessary to consider what kind of machine is running the software and where it is. Physical location is sensitive information that can be used to cause harm, but the potential is very different for a server in a data center, a satellite in orbit, or a vehicle moving down a crowded highway. Here, the attacker asks: how can the harm be physical and irrevocable? (A sketch of the kind of software interlock these questions probe follows the list.)
- Can this machine move near humans or other machines? What prevents collision or destruction of this machine or anything around it?
- What prevents the system from moving in unexpected ways?
- Can this machine move other objects? What keeps those objects moving in a controlled manner?
- Does the system have dangerous physical capabilities, such as intense heat production, equipment with cutting instruments, or significant weight? How are those capabilities controlled?
- Can the system cause another linked or related system to malfunction in a physically dangerous way? Can you move from one system to another that has increased capabilities for physical harm?
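As one concrete illustration of what these questions probe, here is a minimal sketch of a software safety interlock. The limits, function names, and the `dispatch` stub are all hypothetical; real systems put these checks in independent hardware or a safety-rated controller, not only in application code. From the attacker’s seat, each `return False` below is a guard to try to bypass or race.

```python
# Hypothetical limits for illustration only.
MAX_SPEED_M_S = 2.0      # assumed safe travel speed
MAX_PAYLOAD_KG = 50.0    # assumed lift limit

def dispatch(speed_m_s: float, payload_kg: float) -> None:
    # Stand-in for the real actuator interface.
    print(f"moving at {speed_m_s} m/s carrying {payload_kg} kg")

def command_motion(speed_m_s: float, payload_kg: float,
                   estop_clear: bool) -> bool:
    """Refuse any command outside the safety envelope.

    Returns True only if the command was dispatched.
    """
    if not estop_clear:
        return False                        # e-stop latched: no motion at all
    if not 0.0 <= speed_m_s <= MAX_SPEED_M_S:
        return False                        # unexpected motion is clamped
    if not 0.0 <= payload_kg <= MAX_PAYLOAD_KG:
        return False                        # carried objects stay controlled
    dispatch(speed_m_s, payload_kg)
    return True

assert command_motion(1.0, 10.0, estop_clear=True)
assert not command_motion(9.0, 10.0, estop_clear=True)  # over speed: refused
```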
Consider what you could do by using the machine in ways it was never intended to be used. Stuxnet is a good example, but physical attacks are not limited to highly specialized infrastructure. For instance, a 2011 study found that many common office printers could be hacked to execute instructions that could cause them to overheat and potentially catch fire.
Examining patterns of harm
When considering social context, it’s important to think about who is intended to use the system and who actually uses it. (A sketch of one way to audit what a system collects follows the list.)
- Who can be scammed or abused via the system under test?
- Who can be most easily hurt, and where can the largest impact fall?
- Does the system collect large amounts of data? How can that data be used against the people who originated it?
- Are there any precautions to prevent the data from being abused?
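One way to ground these questions is to inventory what the system collects and force every field to justify itself. The field names, purposes, and retention windows below are assumptions for illustration; the pattern is what matters: data with no stated purpose or no expiry is data waiting to be used against the people who originated it.

```python
# Hypothetical inventory of collected data. In a real engagement this would
# come from the schema, the privacy policy, and observed traffic.
COLLECTED = {
    "email":       {"purpose": "account login", "retention_days": 365},
    "geolocation": {"purpose": None,            "retention_days": None},
    "messages":    {"purpose": "core service",  "retention_days": None},
}

def audit(fields: dict) -> list:
    """Flag fields lacking a stated purpose or a retention limit."""
    findings = []
    for name, meta in fields.items():
        if meta["purpose"] is None:
            findings.append(f"{name}: collected with no stated purpose")
        if meta["retention_days"] is None:
            findings.append(f"{name}: retained indefinitely")
    return findings

for finding in audit(COLLECTED):
    print(finding)
```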
The patterns of harm already present in society do not stop existing in the technical realm. Every pattern that can hurt people in the “real world” can also be found on the internet and in the systems we use daily. The more you seek the perspective of vulnerable people, including groups you are not a part of, the better you will understand the inherent risks in the system under test.
When you understand the technical, physical, and social structures of the system under test, you can extrapolate ways to cause harm in all three domains. Nothing prevents a given system from having ways to affect more than one domain simultaneously. In the absence of a major design flaw, you'll need a technical exploit to perform a social or physical attack.
As professionals and decent human beings, we should aim to do the least possible harm and mitigate harm where possible. Once you've found something with potentially catastrophic consequences, look for the most ethical thing to do. In testing, it may be to write up a report, including the attack path, full effects, and potential fixes; it may be to fix it yourself if the system under test is open source. Your role will determine what you do with the information.
Do you need help securing your critical operations? Email Team GRIMM at [email protected].