Five questions to ask about your dynamic application security testing (DAST) programme

By Karl Gonzi, General Manager Malta, Invicti.


Automated black-box testing has become a must-have for modern DevSecOps, but dynamic application security testing (DAST) systems are, by their very nature, complex, and there are as many variations in functionality as there are tools on the market.

Key considerations when choosing a new DAST tool or evaluating your current one are the degree of automation and how well it integrates into your workflow. Some tools are essentially vulnerability scanners, designed for ad hoc manual scans that must be triggered by security engineers or developers and that simply produce a list of potential vulnerabilities still requiring verification. This typically means security scans run only at limited points in development and staging.

However, taking vulnerability scans a step further gives you continuous, automated black-box security scanning that runs in the background throughout the software development life cycle (SDLC). When integrated with automation servers, scans are triggered at all stages in development, testing and production, with results delivered as tickets in developers’ issue trackers.
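As an illustration of that last point, the sketch below files a single verified finding as a ticket. It assumes GitHub Issues as the tracker and uses a hypothetical finding structure; the exact integration will depend on your DAST tool and your issue tracker.

# Minimal sketch: filing one verified DAST finding as a GitHub issue.
# The `finding` structure and repository name are hypothetical; the
# GitHub Issues REST API call itself is standard.
import os
import requests

def file_finding_as_issue(finding: dict, repo: str = "example-org/example-app") -> int:
    """Create an issue in the developers' tracker for a single scan finding."""
    body = (
        f"Vulnerability: {finding['type']}\n"
        f"URL: {finding['url']}\n"
        f"Parameter: {finding['parameter']}\n"
        f"Proof of exploitation: {finding['proof']}\n"
    )
    response = requests.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"title": f"[DAST] {finding['type']} in {finding['parameter']}", "body": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["number"]

if __name__ == "__main__":
    issue_number = file_finding_as_issue({
        "type": "Reflected XSS",
        "url": "https://staging.example.com/search",
        "parameter": "q",
        "proof": "Injected script executed in the response page.",
    })
    print(f"Filed issue #{issue_number}")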

While you may have DAST already, not all tools are created equal, and it’s worth reassessing your chosen solution from time to time to ensure it is keeping up with your needs as well as the latest developments in the technology.

Here are five things to consider in assessing your DAST solution.

1. How certain can you be that DAST has identified a vulnerability?

Selecting the right tool for your organisation's development environment can reduce your dependence on expensive security specialists and streamline your workflow, but only if you can rely on its reports to be as good as those of a trained security analyst.

Reliability can be measured as the probability that a reported vulnerability is genuine rather than a false positive. Because false positives waste developers' precious time and undermine confidence in testing tools, it pays to minimise them. Systems that automatically deliver vulnerability reports with at least 99% reliability allow your teams to function more efficiently, at pace, and with fewer specialists required to run and verify security tests.
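As a back-of-the-envelope illustration (the numbers below are invented for the example), reliability is simply the share of reported findings that turn out to be genuine:

# Illustrative arithmetic only, with invented numbers: reliability here is the
# share of reported vulnerabilities that turn out to be genuine, i.e. one minus
# the false positive rate.
def reliability(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# 198 confirmed issues out of 200 reports gives 99% reliability.
print(f"{reliability(198, 2):.0%}")  # 99%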

Tools that only send a request, receive a response, and classify the response as a potential vulnerability run the risk of high false positive rates. The consequence for your application development programme is that development and security teams are swamped with meaningless alarms, which can lead to vulnerability reports being ignored altogether.

The surest way to achieve 99%+ reliability is through proof of exploitation: elegant, incontrovertible evidence that your DAST has discovered a vulnerability. But what constitutes proof? In simple terms, it is evidence that a test attack payload has been accepted and processed by the target application, triggering a unique reaction.

2. How naive is your testing tool?

Vulnerability scanning tools have traditionally performed testing by inputting known strings into web forms and API parameters and then looking for those strings in the response. However, simply searching for a string in the output can generate false positives, for example because the output coincidentally mimics the input string or repeats it in an error message.

The simple implication is that echoing a string is not enough: it does not prove that you have exploited a vulnerability.

For an elegant proof of compromise, you need to demonstrate that the vulnerability is exploitable by tricking the application into performing a transformation that cannot be accidental; a simple example is a calculation.

If you get the predicted result from a test, you have conclusive proof of a vulnerability, with 100% reliability.
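As a rough illustration of the idea, here is a minimal Python sketch of a calculation-based probe. The target URL, parameter name and template-style payload are hypothetical placeholders, and real DAST engines use far more robust checks, but the principle is the same: if the response contains the computed result rather than the raw payload, the input was processed, not merely echoed.

# Minimal sketch of a calculation-based check against a hypothetical endpoint.
import requests

TARGET = "https://staging.example.com/search"  # hypothetical endpoint
PARAM = "q"                                    # hypothetical parameter
PAYLOAD = "{{7*191}}"                          # template-style calculation probe
EXPECTED = "1337"                              # 7 * 191, a value unlikely to appear by chance

def calculation_probe() -> bool:
    response = requests.get(TARGET, params={PARAM: PAYLOAD}, timeout=30)
    echoed = PAYLOAD in response.text        # mere reflection of the string: not proof
    transformed = EXPECTED in response.text  # the calculation was performed: strong evidence
    return transformed and not echoed

if __name__ == "__main__":
    print("Likely exploitable" if calculation_probe() else "No proof of exploitation")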

3. How are you selecting what to test?

Modern DAST tools are the mature descendants of simple vulnerability scanners. In the past, it was thought that working from the outside in was the Achilles’ heel of vulnerability scanning.

However, in recent years, it has been recognised that the black-box approach to scanning is a strength rather than a weakness because it is technology-agnostic: it works no matter what server or development language you use.

It looks at your system as an attacker would and methodically tests everything for known vulnerabilities. While you can assist it by pointing it at specific injection points, or even engaging in interactive testing, the ideal system automatically tests every input location that it finds, including form fields and API parameters, and delivers precise details of successful exploitation attempts, complete with proof.
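To make "testing every input location" concrete, the sketch below (standard-library Python against a hypothetical URL) enumerates the form fields on a single page. A real DAST crawler would also follow links, execute JavaScript and read API definitions to find every parameter worth attacking.

# Minimal sketch: enumerating candidate injection points (form fields) on one page.
from html.parser import HTMLParser
from urllib.request import urlopen

class FormFieldCollector(HTMLParser):
    """Collect the names of form inputs, i.e. candidate injection points."""
    def __init__(self):
        super().__init__()
        self.fields: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "textarea", "select"):
            name = dict(attrs).get("name")
            if name:
                self.fields.append(name)

def enumerate_fields(url: str) -> list[str]:
    html = urlopen(url, timeout=30).read().decode("utf-8", errors="replace")
    collector = FormFieldCollector()
    collector.feed(html)
    return collector.fields

if __name__ == "__main__":
    # Hypothetical staging URL; a real scan would repeat this across the whole application.
    for field in enumerate_fields("https://staging.example.com/login"):
        print(f"Candidate injection point: {field}")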

If your DAST tool is forcing you to select what to test, or worse, limiting what you actually can test, then it’s no good because attackers are certainly not going to limit what they ‘test’.

The rule of thumb in effective DAST is don’t choose what to test – test everything all the time.

4. At what stage in the development process do you test?

If you are limiting your testing to one stage, such as development, staging, or production – either because your DAST doesn't support testing at other stages or because it takes too long to test – then you are storing up trouble. Any security vulnerabilities being built into your system in the early stages will need to be found during late-stage security testing. If they are not, you're likely to learn of them only once the application is in production and malicious hackers are having a go at it.

Some DAST tool providers will urge you to shift left or shift right in the SDLC, but we believe the rule of thumb for any application security testing (AST) programme is to shift both left and right and make security testing part of the entire CI/CD pipeline, just like functional testing.

A comprehensive AST programme involving DAST will test at all stages of development and deployment, triggering the first automated scans as soon as new code is committed.
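As a rough sketch of what a commit-triggered scan step might look like, the Python script below could run as a CI job. The dast-scan command, its flags and the findings format are hypothetical placeholders for whatever CLI or API your DAST tool actually provides; the pattern is simply to scan the freshly deployed build and fail the pipeline on confirmed findings.

# Hypothetical CI step: run a scan after each commit and block the pipeline
# if any confirmed (proof-of-exploit) findings are reported.
import json
import subprocess
import sys

def run_scan(target_url: str) -> list[dict]:
    # "dast-scan" and its flags are placeholders for your tool's real CLI.
    subprocess.run(
        ["dast-scan", "--target", target_url, "--output", "findings.json"],
        check=True,
    )
    with open("findings.json") as fh:
        return json.load(fh)

if __name__ == "__main__":
    findings = run_scan("https://staging.example.com")  # hypothetical staging URL
    confirmed = [f for f in findings if f.get("proof_of_exploit")]
    for finding in confirmed:
        print(f"CONFIRMED: {finding['type']} at {finding['url']}")
    # A non-zero exit code fails the CI job, blocking the merge or deployment.
    sys.exit(1 if confirmed else 0)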

This ensures you are matching or even exceeding the coverage of traditional security testing regimes, which typically involve static testing during development, vulnerability scanning during staging, and pentesting in production. Proof of exploit ensures that alerts are true positives, which, together with a comprehensive vulnerability report, empowers developers to address security issues as early as possible across the SDLC.

5. Are your vulnerability reports developer-friendly?

Developers aren’t security engineers and don’t have the time to research the solution to every security issue.

If your system generates reports that need to be verified and interpreted by a security specialist, this adds friction to your development cycle in both cost and time.

With modern DAST systems, you should expect vulnerability reports to be automatically verified with proof of exploitability and be both comprehensive and comprehensible for a developer. This enables your dev teams to take the report and fix the issue as an integral, not separate, part of their development tasks.

Comprehensive reports contain details of the vulnerability, including proof of exploitability, where it was found, what parameters were used to exploit it, background information on the vulnerability, how it is exploited in the wild, examples of consequences, guidance on how to fix it, and links to external resources.
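To picture what that looks like in practice, here is an illustrative, non-vendor-specific sketch of the fields such a report might carry:

# Illustrative structure only: the kind of fields a developer-friendly
# vulnerability report might contain. Field names are invented, not any
# particular vendor's schema.
from dataclasses import dataclass, field

@dataclass
class VulnerabilityReport:
    vulnerability_type: str   # e.g. "SQL injection"
    url: str                  # where the vulnerability was found
    parameter: str            # the input used to exploit it
    proof_of_exploit: str     # evidence the payload was processed
    description: str          # background on the vulnerability class
    impact: str               # how it is exploited in the wild and example consequences
    remediation: str          # guidance on how to fix it
    references: list[str] = field(default_factory=list)  # links to external resources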

It’s a lot to ask of a security tool, but it’s what you should expect from a comprehensive DAST solution.

Conclusion

It’s clear that dynamic application security testing has come a long way from the first vulnerability scanners. Modern systems automate much of the testing process, reducing the reliance on expensive and scarce cybersecurity talent. They provide evidence of exploitability to ensure they aren’t wasting your time. And they go that extra mile, giving developers the information to understand and fix a flaw quickly and with minimal external assistance.

Taken together, these capabilities make an effective DAST programme invaluable for resolving security issues in your code while the application is still being developed, rather than waiting until staging or even production to identify and resolve underlying vulnerabilities.
