Next-generation technologies such as artificial intelligence (AI), machine learning (ML), the cloud, robotic process automation (RPA), and the internet of things (IoT) are empowering enterprises to offer a range of differentiated business experiences to their customers. Yet, for cyber risk management professionals, these emerging technologies pose multiple challenges. First, they expand the attack surface, requiring new mitigating controls to protect data assets while still allowing for an elevated customer experience. Second, they introduce new cyber compliance and governance requirements that demand extremely agile responses from organizations.
Meanwhile, as analytics are adopted to strengthen business decisions, critical and sensitive customer information is moving into data lakes, thus increasing security risks. As organizations demand a bigger API ecosystem to interact with third-party vendors, risks in the extended enterprise are growing. As mobile and cloud technologies are constantly being upgraded, new vulnerabilities are emerging. All these challenges are fundamentally altering the cyber risk landscape and rendering many traditional cybersecurity methodologies ineffective.
To build cyber resilience today, enterprises need a proactive and continuous approach to cyber risk management. This means embedding risk management across business processes and the extended organization, so that customers, partners, and third-party vendors become full-time stakeholders in cyber resilience, and the business is fully aware of the cyber risks in its decisions. Such a paradigm demands a shift in existing cyber risk management processes and perspectives.
In the following sections, we discuss four key trends that can strengthen an organization’s cyber resilience: automating risk assessment, quantifying cyber risk, building security by design, and adopting a zero trust model.
With increasing attack surfaces and new threat actors emerging, manual approaches to cyber risk management are no longer sufficient. Many enterprises are adopting AI and ML technologies to automate their assessments of risks, threats, and vulnerabilities across business-critical assets.
Meanwhile, behavioral analytics are helping stakeholders notice and flag unusual user patterns quickly. Incident response playbooks are accelerating risk responses to cyber incidents, while helping ensure that they are remediated in line with industry best practices. Standard IT control testing processes are being automated, so that organizations can continuously monitor controls and ensure that evidence of their effectiveness is always current.
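The behavioral analytics described above can be sketched very simply: compare a user's activity today against their own historical baseline and flag sharp deviations. This is an illustrative toy example (a z-score check); the data, threshold, and metric are assumptions, and real deployments use far richer models.

```python
import statistics

# Illustrative sketch of behavioral analytics: flag a user whose daily
# activity deviates sharply from their own baseline. The metric (file
# downloads per day), data, and threshold are all hypothetical.
def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it lies > z_threshold std devs above the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and (today - mean) / stdev > z_threshold

downloads = [12, 9, 15, 11, 13, 10, 14]       # files downloaded per day
print(is_anomalous(downloads, today=260))     # True: worth flagging for review
print(is_anomalous(downloads, today=14))      # False: within normal range
```

The value of baselining per user, rather than applying one global threshold, is that "unusual" is defined relative to each individual's normal behavior.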
All these emerging techniques are optimizing the efficiency, productivity, and accuracy of risk assessment processes, while also enabling risk intelligence to be rolled up faster to management teams for analysis and decision-making.
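As one concrete illustration of the automated control monitoring mentioned above, a continuous check can flag any control whose latest effectiveness evidence has aged past its review window. The control names, windows, and timestamps below are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical continuous control-monitoring check: flag IT controls whose
# most recent effectiveness evidence is older than its review window.
# Control names and review windows are illustrative only.
MAX_EVIDENCE_AGE = {
    "access-review": timedelta(days=90),
    "backup-restore-test": timedelta(days=30),
    "patch-compliance-scan": timedelta(days=7),
}

def stale_controls(evidence_log: dict, now: datetime) -> list:
    """Return controls whose evidence is missing or out of date."""
    stale = []
    for control, max_age in MAX_EVIDENCE_AGE.items():
        last_run = evidence_log.get(control)
        if last_run is None or now - last_run > max_age:
            stale.append(control)
    return stale

now = datetime(2024, 6, 1)
log = {
    "access-review": datetime(2024, 5, 20),       # fresh (12 days old)
    "backup-restore-test": datetime(2024, 4, 1),  # stale (61 days old)
}                                                  # patch scan: never run
print(stale_controls(log, now))  # ['backup-restore-test', 'patch-compliance-scan']
```

Running a check like this on a schedule is what turns periodic, manual control testing into continuous monitoring.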
Today, the number and impact of cyberattacks continues to increase. A single major data breach can cost an enterprise millions of dollars, not just in financial losses but also in reputational damage. That, in turn, raises questions about the efficacy of traditional approaches to cyber risk management.
For years, many enterprises have evaluated their cyber risks using qualitative risk assessments based on likelihood and magnitude. Yet, these assessments only provide subjective information on the most likely and severe risks. They do not usually provide quantitative data on the financial impact of these risks.
A quantified approach, on the other hand, helps assign a specific financial amount to each identified cyber risk, representing the actual loss that an enterprise could face. This approach helps organizations align cybersecurity with business objectives, while also optimizing cyber investments. It facilitates collaboration by requiring cyber risk teams to sit across the table from business teams and discuss cyber risk mitigation in line with quantified risk exposures, as well as business priorities. If done effectively, it can help top management allocate cyber risk management budgets in a proactive and transparent manner.
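A minimal sketch of this quantified approach is the classic annual loss expectancy calculation, ALE = annualized rate of occurrence (ARO) × single loss expectancy (SLE), which ranks risks by expected yearly financial impact. The risks and figures below are purely illustrative, and real programs (e.g., FAIR-based ones) model full loss distributions rather than point estimates.

```python
# Illustrative cyber risk quantification using annual loss expectancy:
#   ALE = ARO (expected events/year) x SLE (expected loss per event, $).
# All risks and figures below are hypothetical examples.
risks = {
    "ransomware on file servers":    (0.10, 2_500_000),
    "phishing-led credential theft": (1.50,    80_000),
    "third-party API data leak":     (0.25,   600_000),
}

def annual_loss_expectancy(aro: float, sle: float) -> float:
    return aro * sle

# Rank risks by quantified exposure to guide budget allocation.
ranked = sorted(risks.items(),
                key=lambda kv: annual_loss_expectancy(*kv[1]),
                reverse=True)
for name, (aro, sle) in ranked:
    print(f"{name}: ALE = ${annual_loss_expectancy(aro, sle):,.0f}/year")
```

Note how the ranking can differ from intuition: the rare, high-severity ransomware scenario tops the list, while the frequent but cheaper phishing scenario ranks last, which is exactly the kind of insight a purely qualitative heat map can obscure.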
As businesses strive to meet demands for agility, digitization is a must. Yet, it cannot come at the cost of security, because the consequences, in terms of regulatory fines, financial losses, and reputational damage, can be catastrophic.
Today, many cyber teams are working hard to embed cybersecurity best practices and standards into software development processes as non-intrusively and efficiently as possible. The focus is on “security by design,” which helps ensure that application security is not just one step in the software development lifecycle, but a checkpoint at every stage. It strengthens confidence in the inherent security of applications, so that cyber teams need to conduct in-depth assessments only for high-risk, high-impact use cases.
Today, SecDevOps methodologies are empowering application teams with APIs, test use cases, best practice guides, and automated tools to help ensure that the agility of software development is maintained, while at the same time, the application is secure by design.
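One example of the kind of automated tool such pipelines rely on is a pre-merge secret scan that blocks hard-coded credentials before they ship. The sketch below is a hypothetical, simplified gate; the patterns are illustrative and far from an exhaustive ruleset (production teams typically use dedicated scanners).

```python
import re

# Hypothetical "security by design" gate that could run in a CI pipeline:
# scan source text for hard-coded secrets before merge. The patterns are
# illustrative only, not an exhaustive ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_source(text: str) -> list:
    """Return (line number, line) pairs that look like leaked secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_host = "10.0.0.5"\npassword = "hunter2"\n'
print(scan_source(sample))  # [(2, 'password = "hunter2"')]
```

Because the check is automated and runs on every change, it adds security scrutiny without slowing developers down, which is the core promise of SecDevOps.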
As both cyber risks and insider threats to data become too big to ignore, the zero trust model is being adopted as an important component of cyber resilience. The underlying principle is that enterprises cannot enforce less rigorous authentication and authorization protocols for insiders, compared to outsiders. If they do, they could be exposing themselves to a high degree of risk and vulnerability because insiders cannot be trusted any more than outsiders.
The zero trust model focuses on establishing user credentials, motives, and other metadata such as location, security perimeters, and endpoints to determine if users can be trusted with data access. It calls for a strong governance and compliance framework to determine user access rights and authorization matrices. Privileged access rights are minimized with multi-factor authentication, identity and access management, encryption, and behavioral analytics.
With the increasing adoption of the cloud and mobile technologies, organizations have realized that information protection needs a different way of thinking. Restricting access to data based on a “never trust, always verify” approach is an effective way of protecting valuable assets.
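The "never trust, always verify" principle can be sketched as a policy function that evaluates every access request on identity, device, and context signals, with no shortcut for requests originating "inside" the network. The signals and policy rules below are hypothetical simplifications of a real zero trust policy engine.

```python
from dataclasses import dataclass

# Simplified, illustrative zero trust access decision: every request is
# evaluated on identity, device, and context signals; being on the
# corporate network earns no implicit trust.
@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified (e.g., via SSO)
    mfa_passed: bool           # second factor completed
    device_compliant: bool     # managed, patched endpoint
    network_zone: str          # "corporate", "vpn", "public", ...
    resource_sensitivity: str  # "low", "medium", "high"

def authorize(req: AccessRequest) -> bool:
    """Never trust, always verify: all checks apply regardless of origin."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    # Sensitive data always requires MFA -- even on the corporate network.
    if req.resource_sensitivity == "high" and not req.mfa_passed:
        return False
    return True

insider = AccessRequest(True, False, True, "corporate", "high")
print(authorize(insider))  # False: being "inside" earns no shortcut
```

Note that `network_zone` never relaxes a requirement in this policy; in a zero trust design, context can tighten controls but never substitute for verification.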
The cost of cyber incidents increases when businesses are focused only on reactive risk detection and response. To change the game, they must think ahead of their adversaries by proactively identifying vulnerabilities, and then patching them permanently. This approach will require newer technology paradigms, models, and methodologies. The cybersecurity team will need to connect with business teams more regularly to understand not just technology perspectives, but business perspectives as well.
Emerging focus areas such as automation, security by design, and a zero trust model can help ensure that enterprises have the basic controls, policies, and remediation actions working every time, on time. Meanwhile, risk quantification measures can help cybersecurity teams strengthen alignment with business priorities.
However, all these measures also require a shift in human and organizational behavior. It means giving up discretionary privileges, ensuring that decisions are driven by data, and implementing additional front-end effort and scrutiny during software design. Such shifts always need a significant amount of change management and hand-holding. The ability to effect these changes and institutionalize cybersecurity measures remains the biggest challenge in the quest for cyber resilience.