Measuring the Effectiveness of Your Vulnerability Discovery Strategies
The rise of bug bounty programs and crowdsourced security testing not only reveals the direct cost of security, but also demonstrates that all organizations struggle with application security, regardless of their size and maturity.
DevOps teams already invest in processes like deployment pipelines and feedback loops in order to build quality apps. Ideally, they’ll also invest time and money to discover the security flaws that weaken the quality of their apps.
But DevOps teams don’t have infinite time or money. Trying to prove an app has no vulnerabilities is fraught with challenges, so teams need to choose strategies for securing apps and ways of measuring whether the time and money spent searching for vulnerabilities is effective.
One strategy is to shift the time investment to crowds and pay only for the vulnerabilities the crowd discovers. Other strategies might involve more focused manual testing, which may come at a higher cost, or automated scanners, which may come at a higher time investment. It’s no good to pour the entire DevOps team’s time and budget into finding vulnerabilities if that doesn’t leave them any resources to fix them.
It’s important to monitor these strategies for hidden or unexpected costs. For example, managing a bounty program requires filtering out the noise of invalid submissions. While bounty programs can build positive social engagement with the security community, they also require investments in social skills in order to navigate challenges that arise from disagreements or misunderstandings.
Focused manual testing, such as penetration testing and code reviews, can complement bounty programs, but it doesn’t have the benefits of continuous monitoring or scale. Automated scanners don’t completely remove people from the equation; they can be efficient, but they can’t replace the skills of a focused manual test.
The balance among these strategies will change over time and in the context of the app. That’s why it’s helpful to define metrics that reflect their effectiveness. For example, how often does each strategy discover vulnerabilities? How meaningful are the vulnerabilities they identify? Is there a time window during which they’re more efficient? How does that window relate to the app’s release cycle? Then add budget-related metrics, like how cost relates to vulnerabilities. For example, are you paying to discover risk or for the risk that’s discovered?
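As an illustration, the metrics above can be tallied with very little machinery. The sketch below is hypothetical, not a prescribed tool: the `StrategyStats` fields and the sample numbers are invented for the example, and real programs would pull these figures from their triage and payment records.

```python
from dataclasses import dataclass

@dataclass
class StrategyStats:
    """Hypothetical tallies for one discovery strategy over a reporting period."""
    name: str
    submissions: int   # total reports received (including invalid ones)
    valid_vulns: int   # reports confirmed as real vulnerabilities
    spend: float       # money spent: bounty payouts, vendor fees, or license costs
    hours: float       # internal time spent triaging and managing the strategy

def cost_per_vuln(s: StrategyStats) -> float:
    """Money spent per confirmed vulnerability; infinite if nothing was found."""
    return s.spend / s.valid_vulns if s.valid_vulns else float("inf")

def signal_ratio(s: StrategyStats) -> float:
    """Fraction of submissions that were real vulnerabilities (triage noise gauge)."""
    return s.valid_vulns / s.submissions if s.submissions else 0.0

# Invented sample numbers for two strategies over the same quarter.
bounty = StrategyStats("bug bounty", submissions=200, valid_vulns=25,
                       spend=30000.0, hours=120.0)
pentest = StrategyStats("pen test", submissions=18, valid_vulns=12,
                        spend=24000.0, hours=20.0)

for s in (bounty, pentest):
    print(f"{s.name}: ${cost_per_vuln(s):,.0f}/vuln, signal {signal_ratio(s):.0%}")
```

Tracked per release cycle, even simple ratios like these show when a strategy’s cost per finding starts to climb or its signal ratio drops, which is exactly the point at which the balance among strategies deserves another look.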
Once we start asking questions, it’s important to pause and review whether those questions will lead us to make better decisions. When we’re confident that’s the case, then we need to start looking for answers.
By paying attention to basic metrics around time and money, we can build frameworks to help decide when to deploy a particular strategy—and when it may no longer be the most efficient or cost-effective choice. In order to do this well, we need to pay attention to how these choices relate to the context of our own apps. This means understanding how metrics apply to your specific environment as opposed to carelessly spending money or wasting time on counterproductive efforts.
Mike Shema is presenting the session Measuring and Maximizing Crowdsourced Vulnerability Discovery at STARWEST 2018, September 30–October 5 in Anaheim, California.