Randomness is the New Wild Card in Cyber Defense
By James Bone, Skytop Contributor / September 7th, 2021
James Bone is an industry leader and practitioner with more than 20 years of senior risk leadership experience and a thought leader in cybersecurity and enterprise risk management. Author of Cognitive Hack: The New Battleground in Cybersecurity…the Human Mind and founder of innovative solutions for the human element of risk management. Frequent speaker and contributing writer for the Institute of Internal Auditors’ magazine, Corporate Compliance Insights, Compliance Week, Forbes and others.
Founder of Global Compliance Associates, LLC and TheGRCBlueBook, a risk advisory service that has served the GRC technology community for 11 years, providing strategic product development services including market positioning and insights into the integration of effective risk solutions. Lecturer-in-Discipline in ERM/Risk Management at Columbia University. Founder and creator of several courses in the School of Professional Studies ERM program, including Traditional Risk & ERM Practices, Company Failure, and Behavioral Science and Cognitive Bias.
Led the development of industry-leading enterprise risk management, cybersecurity, operational risk, and risk advisory programs for organizations including Fidelity Investments, the Department of the Treasury (Making Home Affordable), Liberty Mutual, Freddie Mac, and private equity, public accounting, and public consulting firms.
Graduate of Drury University (BA, Business Administration); Boston University (MEd, Organizational Design); and Harvard University Extension School (MA, Management).
Discussions of risk have captured the public imagination recently and exposed the public to confusion and uncertainty. News stories about aggressive variants of the coronavirus, ransomware attacks on US industry, social unrest, and climate change tend to amplify events, producing public anxiety but few answers.
The importance of good risk communication and the framing of risks on social media platforms demonstrate the need for new ways of thinking about uncertainty in public forums. Cyber risks have also taken center stage, leading to new ways of looking at cybersecurity. One of the important questions security executives must ask is: what is the role of cyber risk management, and how does it differ from cybersecurity?
Balancing Risk and Security
The recent adoption of risk management approaches in cybersecurity is interesting to watch evolve as a risk-based approach to security gathers steam. There is little consensus on the right approach, nor on what it really means to take a “risk-based approach.” Different camps have begun to form around risk-based approaches. Some believe AI computing power is the way. Others believe a risk-based approach requires a focus on risk reduction. Yet another view holds that a focus on tactical risk-based approaches is needed, such as better controls, monitoring, and data sharing, combined with key performance indicators shared between industry and government. Each of these viewpoints has merit, yet attempts to accomplish these divergent objectives have stalled because of differences in strategy.
The first step in finding balance is agreeing on shared goals and objectives for any enterprise risk program. Those goals and objectives must also be achievable. Simply stating that risk reduction is an objective is not enough. To reduce risks, accurate methods must exist to measure both risk and uncertainty. Uncertainty is what is left after risks are known. Too often, risk professionals suffer from the fallacy of assuming that the risks we know about are the only ones that need to be addressed. Uncertainty, or randomness, lives on the tail end of the risk curve and requires a spectrum of methods to measure what is known and unknown. This is where cyber risks thrive, and it is the main reason most risk programs fail to capture them.
Balance also requires agreement on an operating model that will meet those objectives. One of the objectives in finding balance is agreeing on methods to understand the randomness of cyber risk. Randomness is measured in degrees of confidence, and defining a degree of confidence requires robust statistical analysis and data.
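To make a “degree of confidence” concrete, here is a minimal sketch in Python: given a hypothetical history of annual incident counts (the numbers are invented for illustration only), it estimates the mean incident frequency and a bootstrap confidence interval around it.

```python
# A minimal sketch of expressing randomness as a degree of confidence:
# estimate the mean incident frequency and a bootstrap confidence
# interval from hypothetical annual incident counts.
import random
import statistics

incidents_per_year = [3, 7, 2, 9, 4, 6, 11, 5]  # hypothetical history

def bootstrap_ci(data, n_resamples=10_000, confidence=0.95):
    """Percentile bootstrap confidence interval for the mean."""
    means = sorted(
        statistics.mean(random.choices(data, k=len(data)))
        for _ in range(n_resamples)
    )
    lo = means[int((1 - confidence) / 2 * n_resamples)]
    hi = means[int((1 + confidence) / 2 * n_resamples)]
    return lo, hi

low, high = bootstrap_ci(incidents_per_year)
print(f"Mean frequency: {statistics.mean(incidents_per_year):.1f}/year")
print(f"95% confidence interval: {low:.1f} to {high:.1f} incidents/year")
```

The wider the interval, the more randomness the data leave unexplained; that residual is precisely the tail-end uncertainty described above.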
Lack of Cyber Data
One of the conundrums in building a robust cyber risk program is the lack of high-quality, contextual stochastic databases of cyber data.
These stores of data exist today in silos across industry, law enforcement, government agencies, the military, and social media/technology platforms. Moving forward, solving the cyber risk conundrum will require more than trust and tactical initiatives between government and industry. It will require the ability to develop a robust, anonymized stochastic database that is actionable and shared by industry and government agencies. While this may appear a herculean task, the consequences of not pursuing this goal will be to leave both industry and government fighting the cyber battle blindfolded and with one hand tied behind their backs.
Even if successful, data alone won’t be sufficient. A backward-looking approach provides only a snapshot of what has happened, not what may happen going forward. In addition, new risk analysis tools and skills will be needed to mine stores of data and apply machine learning to glean insights yet to be discovered, as sketched below. More advanced frameworks will be needed that incorporate human risk factors and decision support to identify exposures not visible today.
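As one hedged illustration of what “mining stored data with machine learning” might look like, the sketch below trains an isolation forest on synthetic login events and flags an outlier. The features and numbers are hypothetical placeholders, not a recommended production design.

```python
# A sketch of anomaly detection over stored event data: an isolation
# forest learns what "normal" login activity looks like and flags
# departures from it. All data here are synthetic illustrations.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)
# Synthetic features per event: [login hour, MB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 500),   # business-hours logins
    rng.normal(50, 15, 500),  # typical transfer volume
    rng.poisson(0.2, 500),    # rare failed attempts
])
suspicious = np.array([[3.0, 900.0, 8.0]])  # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 means flagged as anomalous
```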
To date, real innovation in cyber tools has gone to cybercriminals. In one sense, the Dark Web has democratized organized crime in cyberspace. It has a hierarchy, it is performance-based, and it is efficient. This gives cybercriminals the upper hand, largely because defenders have not figured out how to work out their differences. The adversary has a target-rich environment that grows with each new digital product innovation. Randomness is simply a byproduct of how efficient cybercrime has become through product iteration and testing in an active marketplace.
Malware Infection Rates Explode
Recently, Microsoft took a stab at quantifying malware infection rates and used them to rank cybersecurity performance by country. Microsoft then grouped performers into three categories: (1) Maximizers – countries that outperformed their model; (2) Aspirants – countries that performed on par with their model; and (3) Seekers – countries that underperformed against the model.
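Read as pseudocode, the grouping amounts to comparing each country’s observed infection rate with its model-predicted rate. The sketch below is a hypothetical rendering of that three-way labeling; the rates, predictions, and the ten percent tolerance band are placeholders, not Microsoft’s actual methodology.

```python
# A hypothetical rendering of the three-way grouping described above:
# compare observed infection rates to model predictions within a band.
def classify(observed_rate: float, predicted_rate: float, band: float = 0.10) -> str:
    """Label performance relative to the model within a +/- band."""
    if observed_rate < predicted_rate * (1 - band):
        return "Maximizer"   # outperformed the model (fewer infections)
    if observed_rate > predicted_rate * (1 + band):
        return "Seeker"      # underperformed against the model
    return "Aspirant"        # roughly on par with the model

countries = {"A": (4.2, 6.0), "B": (5.9, 6.0), "C": (9.5, 6.0)}  # (observed, predicted)
for name, (obs, pred) in countries.items():
    print(name, classify(obs, pred))
```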
Microsoft acknowledged that malware infection rates were a crude measure, and some may argue that other factors lead to higher or lower malware infection rates beyond the performance of cybersecurity programs, but the picture Microsoft paints is instructive.
Microsoft went a step further and compared socioeconomic factors and policy choices at the country level. Microsoft’s findings: “developed countries with declining malware experienced lower malware infection rates, even among countries deemed to have relatively poor cybersecurity practice.” Not surprisingly, developing countries showed higher overall malware infection rates (there were only 27 countries defined as Seekers).
Although the data showed that malware infection rates correlate with national socioeconomic factors, the results were inconclusive. As developing countries increase their use of the internet, malware infections do grow. By the same token, developed countries that had more experience dealing with malware showed declines in malware infection rates. However, it may simply be that cybercriminals found more effective and less costly approaches, such as phishing attacks and ransomware.
Finally, Microsoft identified three national socioeconomic predictors of malware: digital access, institutional stability, and economic development all contributed to reductions in malware infection rates. The hypothesis of the study was that a maturing national cybersecurity posture benefits from socioeconomic factors in countries that choose the right policies. The study suggests that countries will learn by trial and error, which is costly. But is that a valid observation about the underlying risks?
So, what is the point? The point is that risks that can be measured may not inform us about the probability or randomness of a cyberattack. We know that malware and weak internal controls are risks, but we don’t know the probability that an organization will become a victim, or what the impact will be in the event of an attack. We need better tools than the guesswork that goes on now.
This is the Conundrum in Cyber Risk!
Some risk observers have attempted to develop models to quantify cyber risks. A new trend in cybersecurity is to develop a risk-based approach, advocated by the FAIR Institute. FAIR is an acronym for Factor Analysis of Information Risk. The FAIR model has become very popular as a tool to help communicate cyber risks to the board. FAIR has introduced a new model called FAIR-CAM, the FAIR Controls Analytics Model.
The FAIR-CAM model proposes “that organizing controls into functional categories by frequency and magnitude and assigning a unit of measure (%, $, time, etc.) to each control type will help demonstrate the value of controls.” What it doesn’t do is predict which controls effectively mitigate cyber risk and which do not.
There is a simple math problem with FAIR/FAIR-CAM: a reasonably conservative estimate of daily malware variants is 350,000. (Newer data show that malware variants are on the decline while phishing and ransomware attacks are on the increase.) For illustrative purposes, that represents 127,750,000 new malware variants annually. Now assume an organization with 10,000 employees, with known and unknown vulnerabilities, including third-party vendors, representing thousands of endpoints. If humans are included as control points, FAIR doesn’t work. The number of variations of a potential cyber breach easily becomes incalculable, rendering predictions in the category of simple guesses, as the back-of-the-envelope arithmetic below illustrates. And malware is only one of thousands of ways to successfully launch an attack!
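A quick calculation shows how fast the numbers run away. The daily variant estimate and the 10,000-employee organization come from the example above; the third-party endpoint figure is a hypothetical placeholder.

```python
# Back-of-the-envelope arithmetic for the variant explosion above.
daily_variants = 350_000
annual_variants = daily_variants * 365
print(f"{annual_variants:,} variants/year")    # 127,750,000

employees = 10_000                              # from the example above
third_party_endpoints = 5_000                   # hypothetical placeholder
attack_surface = employees + third_party_endpoints
pairings = annual_variants * attack_surface     # variant-endpoint combinations
print(f"{pairings:,} possible variant-endpoint pairings per year")
```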
Given this reality, organizations have begun to adopt a Zero Trust methodology. “Zero Trust is a security framework requiring all users, whether in or outside the organization’s network, to be authenticated, authorized, and continuously validated for security configuration and posture before being granted or keeping access to applications and data. Zero Trust assumes that there is no traditional network edge; networks can be local, in the cloud, or a combination or hybrid with resources anywhere as well as workers in any location.” Zero Trust is the first cybersecurity posture to recognize randomness as a risk factor, and that recognition is a good first step toward a better understanding of the randomness of cyber risks.
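In code, the core Zero Trust decision looks something like the simplified sketch below: every request is evaluated on identity, device posture, and contextual risk, and network location confers no trust. The fields and the risk threshold are hypothetical, not drawn from any particular product.

```python
# A simplified sketch of a Zero Trust access decision: no implicit
# trust for "inside" traffic; every signal must pass on every request.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool    # e.g., MFA completed
    device_compliant: bool      # posture check passed
    risk_score: float           # 0.0 (low) to 1.0 (high), from context signals
    on_corporate_network: bool  # deliberately ignored: no network-edge trust

def authorize(req: AccessRequest, max_risk: float = 0.3) -> bool:
    """Grant access only when every signal passes; re-evaluate continuously."""
    return req.user_authenticated and req.device_compliant and req.risk_score <= max_risk

print(authorize(AccessRequest(True, True, 0.1, False)))  # True
print(authorize(AccessRequest(True, False, 0.1, True)))  # False: posture failed
```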
Zero Trust recognizes the need for continuous monitoring and validation of people, process, and technology. According to CrowdStrike, “over 80% of all attacks involve credentials use or misuse in the network.” CrowdStrike’s attack data are consistent with intelligence from other technology providers. The human is the target and the weakest link. None of the existing regulatory mandates or risk frameworks provides a comprehensive approach to addressing human risk factors, the greatest source of vulnerability in cyberspace.
What Can be Done Today?
Zero Trust is a good place to start. Instead of assuming we know the risks and have the answers, extensive analysis is needed to better define what is not known. Start with an understanding of what is Robust in your cybersecurity program and what is Fragile.
“A software application is robust, if any exception raised during its execution, in any architecture and with any initial state, is caught by some exception handler.” The internet is robust in its ability to make individual and business connections around the world 24/7/365. However, the internet is also fragile at various nodes in the network, such as when an Internet Service Provider (ISP) is disrupted by accident or attack.
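The quoted definition translates directly into code. In the minimal Python sketch below, a catch-all exception handler lets a health check degrade gracefully when a node fails (here, a deliberately unreachable hypothetical endpoint), rather than crashing.

```python
# A small illustration of the robustness definition quoted above:
# every exception raised during execution is caught by a handler,
# so one fragile node degrades service instead of taking it down.
import urllib.request

def fetch_status(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return f"up ({resp.status})"
    except Exception as exc:  # every exception is caught: "robust"
        return f"degraded ({type(exc).__name__})"

# A fragile node (hypothetical unreachable host) no longer crashes the check.
print(fetch_status("https://example.invalid/health"))
```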
The current fast-paced transition to a digital operating environment is vastly expanding the target environment for cybercriminals to weaponize randomness. By anticipating what is fragile and enhancing cybersecurity robustness, cyber risk professionals should be able to thrive in this new dynamic environment.
The next series of articles will discuss robust cybersecurity programs and ways to identify fragility in your risk program.