What parallels can we draw from the NFL to take a risk-based approach for AppSec?

Aruneesh Salhotra
February 1, 2023

Super Bowl LVII is less than two weeks away and the excitement is growing as we inch closer to the kickoff day on Feb 12th. With new developments and advancements happening across the National Football League (NFL) and cybersecurity, it is imperative for both teams and organizations to stay ahead of the curve. Like the NFL, which is constantly evolving with new rules and technology, the field of cybersecurity sees new threats and technologies emerging continuously. Cybersecurity has become critical for organizations to protect their business from these threats and ensure compliance with regulations and standards.

The Super Bowl is the annual championship game of the NFL, played each year between the winners of the AFC and NFC conferences. The game is the culmination of the NFL season and is one of the biggest events in American sports, attracting millions of viewers and often featuring performances by popular musicians at halftime shows.

With the NFL season in full swing at the time of writing and the Super Bowl only a few weeks away, one can only imagine the tremendous effort that teams have put in throughout the year to win as many games as possible and make it to the playoffs, and eventually the Super Bowl. All of this is done while efficiently managing the associated risks to players and teams at large, both on and off the field. Team managers, coaches, players, medical staff, and other personnel plan extensively to prepare for the season.

Often, there is a difference of opinion when identifying the most critical position on the field. However, the quarterback (QB) is usually considered the most critical player. A lot of focus and planning goes towards “protecting” the QB, as well as other players, from both known and unknown risks. Teams adopt different strategies through the season to ensure the QB is protected from potential threats, such as:

  • Having the offensive line provide cover to the QB during the game.
  • Resting the QB during less critical games and playing the backup QB instead.
  • Assessing the strengths, tactics, and known strategies of the opposing team.
  • Seeking expert opinions from those with first-hand experience of the game.
  • Instituting a targeted training regimen, such as strength and speed training.
  • Offering injury and disability insurance.

In this manner, the NFL has a comprehensive risk management strategy in place to protect the league from a wide range of risks. By identifying and assessing these risks, the NFL can mitigate and manage them, thereby ensuring the safety and stability of the league.

What is evident from the above is that "context" is key for managing risks. Context helps teams plan and allocate time, money, and resources accordingly. Managing risks within budget and timeline constraints often raises the dilemma: should all risks be eliminated? Teams have to operate within these constraints while still managing risk effectively. The result is a practical risk-based approach that maximizes return on investment while focusing on minimizing risk.

This risk-based approach can be easily extended to enterprises, particularly with regard to how they work towards securing their businesses. Given the current climate where breaches have taken center stage, security has become one of the top priorities for organizations and is quickly becoming a board level discussion topic. With more and more sensitive information being stored and transmitted electronically, the risk of cyber-attacks and data breaches is higher than ever.

Organizations of all sizes, from large enterprises to SMBs, typically carry sizable security-related technical debt. Due to audit and compliance obligations, security and IT teams are tasked with managing risk by addressing these vulnerabilities, usually within a defined SLA. As organizations grow, wider adoption of cloud and containers, along with expanded security scanning coverage, has increased the size and complexity of remediation efforts.

Before we talk about factors that organizations can leverage to prioritize their vulnerabilities, it is imperative to understand how we got here in the first place.

Not too long ago, the general vulnerability management scope was, relatively speaking, manageable. Most assets were known and accounted for, computing resources (including data) were stored on-prem, and developers and operations sat within the confines of established boundaries.

However, the scenario has shifted now with ever-growing market pressures giving rise to digital transformation. Agile development methodologies allow enterprises to deliver products and services faster, which can help them stay competitive in the market and respond more quickly to changing customer needs.

Agile and DevOps methodologies can have both positive and negative implications on software security. On the one hand, continuous integration and testing can help to identify and resolve security issues more quickly. The increased collaboration and communication between development and security teams can lead to more efficient approaches to secure infrastructure and software.

On the other hand, the fast-paced and iterative nature of Agile can create new security risks. Frequent releases and deployments may make it more difficult to properly test and secure software, and the emphasis on speed may lead to neglect of security best practices. As seen with the Codecov and SolarWinds breaches, the growing use of automation and DevOps tooling exposes organizations to new attack vectors that they now need to consider.

A generally established practice from a few years ago was to address all known vulnerabilities as soon as possible to minimize the risk of attack. However, it is not always necessary to fix all vulnerabilities in a software program or system, as some may be considered low-risk or may not be exploitable in a particular environment or configuration (think context).

When deciding which vulnerabilities to fix, organizations would typically consider several factors, such as the severity of the vulnerability, the likelihood of exploitation, and the potential impact of an attack. For example, a vulnerability that allows an attacker to gain complete control of a system would be considered more critical than one that only allows an attacker to read sensitive data. It's also important to note that even if a vulnerability is considered low risk, it may still be exploited by attackers in the future. So, it's recommended to always keep systems and software updated and patched. Organizations should also establish a vulnerability management program to help prioritize vulnerabilities based on risk and provide a clear plan of action for addressing them.
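To make the factors above concrete, here is a minimal sketch of how severity, likelihood of exploitation, and business impact could be folded into a single ranking. The `Vulnerability` fields, the example CVE labels, and the multiplicative weighting are all hypothetical, chosen only for illustration; real programs would calibrate against their own data.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_base: float           # severity, 0.0-10.0 (e.g. a CVSS base score)
    exploit_likelihood: float  # 0.0-1.0 (e.g. informed by threat intel)
    business_impact: float     # 0.0-1.0 (criticality of the affected asset)

def risk_score(v: Vulnerability) -> float:
    # Illustrative weighting: severity scaled to 0-1, then multiplied by
    # how likely exploitation is and how much the business would be hurt.
    return (v.cvss_base / 10.0) * v.exploit_likelihood * v.business_impact

vulns = [
    # Complete system takeover, likely to be exploited, critical asset:
    Vulnerability("CVE-EXAMPLE-A", 9.8, exploit_likelihood=0.9, business_impact=1.0),
    # Sensitive-data read, unlikely exploit, low-criticality asset:
    Vulnerability("CVE-EXAMPLE-B", 7.5, exploit_likelihood=0.1, business_impact=0.3),
]

# Remediate the highest-scoring vulnerabilities first.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, risk_score(v))
```

Note how the lower-CVSS finding can still outrank a higher one once likelihood and impact are considered, which is exactly the "context" argument above.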

[Figure: risk-based vulnerability management prioritization. Reference: Gartner]

Given the sheer number of vulnerabilities that organizations must deal with, manually assessing the risk of every vulnerability is not only inefficient but also not scalable. Assessments must be based on data, which makes it imperative to establish a data-driven strategy. Employing such approaches can help IT/security organizations to sift through vulnerabilities faster and thus focus on remediating the key risks first.

Here are a few questions that can help organizations with prioritization:

  • Application and business context:
      ◦ Where does the application run? Is it client facing?
      ◦ Who are the end users of the application?
      ◦ How critical is the application to the business?
      ◦ Does the application serve, process, or use PII data?
  • Threat intel:
      ◦ Is there chatter on the dark web about the CVE? Such chatter is a good indicator of exploitability. What are the attack complexity and impact?
      ◦ Are PoC exploit kits available?
      ◦ What is the expert opinion? Does the CVE appear on the CISA Known Exploited Vulnerabilities (KEV) catalog?
  • Detective and protective controls:
      ◦ Is there an inline WAF in front of web applications?
      ◦ Is the data encrypted at rest and in transit?
Security is not a one-time task. It is an ongoing process that requires continuous monitoring, testing, and improvement. It is therefore important to regularly perform vulnerability scanning, security assessments, and penetration testing to identify new vulnerabilities. Organizations can then apply the appropriate risk treatments to newly identified vulnerabilities and fold them into remediation efforts.

As in the NFL, not all risks can be eliminated. It is only by contextualizing risks that organizations can move towards effective risk management.

Remember: Security talent is expensive and limited, so choose your investment wisely.

Disclaimer: Opinions are my own and not the views of any of my employers.

Global Head of Application Security, Nomura