Test Mindset
It’s a bit of a cliché that security testing is just really aggressive QA. However, there’s enough truth in it to make it worthwhile for security engineers to examine the mindset of a good tester, and for testers to look at the methods and rigor of security testing for ways to improve their own practice.
This article examines some critical traits of a tester’s mindset and shows how you can apply them in your own work. It is mainly geared toward the examination of software or hardware (the system under test), but the same traits and methodology can be applied to everything from a single document needing review to an entire system’s architecture.
A tester pays attention.
The best thing you can do as a tester is to pay full attention to the system under test. Paying attention to the system and to everyone using it in the test environment - including yourself - is the foundation for everything else here. Is the system easy to use? Does one function always cause frustration? Is anything unusual going on in the timing of, or responses to, a given action?
Whether the tester writes notes, makes a mind map, or enters data into a centralized system, it’s best to record these observations. Even the most skillful recall can falter under the sheer volume of information generated by a relatively simple test.
A tester seeks context.
The better the tester understands the system under test, the more quickly they can respond to its outputs. The more the tester knows about the goals of that system, the better they can assess whether those goals are being met. Seek to understand the social and physical effects of the system under test as well as its technological impacts.
Relatedly, the tester must prioritize. The system's goals and the test results may be confusing or appear to conflict with one another. A good tester will talk to the entire team to understand and resolve such disputes. A great tester will use the organization's goals as a guideline to help create solutions.
A tester is curious.
When you encounter something new during a test, it's worth spending a little time trying to understand why it behaves the way it does.
- Why is a particular technology in use?
- Why was that chosen instead of another similar technology?
- Why is the function it's performing required?
This principle sounds obvious, but it is what separates running a good test from merely executing a script. Curiosity builds context, and the patterns found by examining many applications over time can lead to insights that make future testing faster and more likely to generate valuable results.
A tester keeps trying.
Anyone interacting with software is already familiar with at least a few ways it can fail. Persistence is part of a tester's mindset: not just observing a failure but recreating it and interpreting the results. Figuring out why a system did something is often more valuable than just fixing the bug. In addition, it builds the foundation for greater understanding and increased technical skills.
In software testing as a whole, bugs are often seen as the output of testing. However, in security work, bugs and errors are not the end of testing but rather a foothold to deepen the examination.
Don't stop when the first round of bugs appears. Even when there's only a little information available, look at the problem from different angles, and there's bound to be something to learn.
The role of automation
This article assumes you have some level of automated tooling. It’s good to use whatever automation you have, but beware: when you record a formerly manual test into an automated harness, it becomes a change detector, not a statement of whether the software works as intended. If the intended flow changes, your tests will report failures even though the change is a wanted one.
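For instance, here is a minimal sketch of that trap, written in pytest style; the `checkout_total` function and its figures are invented stand-ins, not any real system's API.

```python
# A hypothetical stand-in for the system under test, so the example runs on its own.
def checkout_total(items, discount_code=None):
    total = sum(items)
    if discount_code == "SAVE10":
        total *= 0.9
    return round(total, 2)


def test_checkout_total_matches_recorded_value():
    # Value recorded from a manual run. If the pricing rules change on purpose
    # (say, SAVE10 becomes 15% off), this still fails: the test reports
    # "something changed", not "the software is wrong".
    assert checkout_total([10.00, 5.00], "SAVE10") == 13.50
```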
Security-specific tooling, such as static application security testing (SAST) scanners, has similar quirks. While it is not a substitute for a manual test, it is widely regarded as useful once the false positives have been triaged away. Testing as a discipline could benefit from a similar attitude shift: automated tests should be viewed as an assist, not a substitute for a live tester.
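To make the triage point concrete, here is a toy sketch - not a real SAST engine, and the file names and contents are invented - showing how the same pattern match can be a genuine finding in one place and a likely false positive in another.

```python
# Toy illustration of SAST-style pattern matching; a human still has to decide
# which hits matter.
import re

HARDCODED_SECRET = re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE)

sources = {
    "settings.py": 'password = "hunter2"',              # likely a genuine finding
    "test_login.py": 'password = "not-a-real-secret"',  # test fixture: probably noise
}

for filename, line in sources.items():
    if HARDCODED_SECRET.search(line):
        print(f"{filename}: possible hardcoded credential: {line}")
```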
Sample Testing Flow
Start by running your automated tests. If you are testing a document, this may be a spelling and grammar checker. If you are testing a software system, it may be a SAST tool run against the source code. In either case, it should be pretty quick. Write down anything of interest.
- Does the system under test do what it's supposed to do?
- Does it do all of what it's supposed to do or only part of it?
- Is there any part of the system you cannot access that you should be able to?
Anything you cannot do - that you should be able to do - is a bug.
Walk through each intended path in the system. Write down everything you notice, whether good or bad. You'll want to analyze it later.
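If it helps to keep those notes structured for later analysis, here is one possible sketch in plain Python; no particular note-taking tool is assumed, and the observations are invented.

```python
# Record each observation against the path that produced it, so it can be
# filtered and grouped later.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Observation:
    path: str                # which intended path was being exercised
    note: str                # what was noticed, good or bad
    severity: str = "info"   # "info", "bug", or "question"
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


notes = [
    Observation("password reset", "reset email arrives twice", "bug"),
    Observation("password reset", "no rate limit on repeated requests?", "question"),
    Observation("checkout", "error page leaks a stack trace", "bug"),
]

# Later analysis can slice these however you like, e.g. all open questions:
print([n.note for n in notes if n.severity == "question"])
```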
Does the system do anything it isn't supposed to do? How hard is it to make the bad things happen, and how bad is it if they do? In security parlance, this is basic risk analysis. Not all bugs are equal: both software testers and security analysts provide the greatest value by finding the highest-priority bugs first.
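A minimal sketch of that prioritization, with invented findings and simple 1-to-5 scales, is to score each finding by likelihood and impact and then sort.

```python
# Score each finding by "how hard is it to trigger" (likelihood) and
# "how bad is it if it happens" (impact), then review the largest products first.
findings = [
    {"title": "stack trace shown on error page", "likelihood": 3, "impact": 2},
    {"title": "auth bypass via direct URL", "likelihood": 2, "impact": 5},
    {"title": "server banner reveals version", "likelihood": 4, "impact": 1},
]

for f in sorted(findings, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
    print(f"{f['likelihood'] * f['impact']:>2}  {f['title']}")
```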
Anything you can do that you shouldn't be able to do is a bug. If your developer (or you) starts thinking, "well, but we didn't stop the user from doing x," you may have a design flaw rather than an implementation bug. The distinction matters mainly when deciding what the fix should be - design flaws can lead to hydra-like bugs that keep recurring every time they're patched unless they're dealt with at the level of the actual problem.
What else has come up during the test? As a curious, alert person interacting with an unfamiliar system, a tester has probably noticed and written down a lot of little curiosities. The analysis starts with finding patterns and examining them to see if they match actual system behavior.
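One simple way to start that pattern-finding - again a sketch with invented observations - is to group the notes by the area of the system they came from and see which areas keep coming up.

```python
# Cluster the little curiosities by area; recurring areas suggest where to dig next.
from collections import Counter

observations = [
    ("login", "slow response after third failed attempt"),
    ("login", "error message differs for unknown user vs bad password"),
    ("export", "CSV includes a column the UI never shows"),
    ("login", "session cookie not marked Secure"),
]

by_area = Counter(area for area, _ in observations)
for area, count in by_area.most_common():
    print(f"{area}: {count} notes")
```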
Keep in mind that the process of analysis is a loop. As you explore system behavior, you’ll find novel components. Analyzing those components can show a need for more context, even if it’s just asking why that particular thing was chosen. Learning more about the context can reframe other decisions. There is always a need to balance depth of analysis with time taken, but don’t be afraid to repeat earlier steps.