I recently had the privilege of hosting a webinar on one of the most important conversations in GRC right now: the shift from reactive oversight to proactive, AI-powered insight. Joining me were two exceptional GRC practitioners: Patrick of Bank of China and Thom of Signify.
Together, we explored how organizations can move beyond periodic audits and siloed risk functions toward what we're calling Intelligent GRC. The conversation was rich, honest, and practical. Here are the six takeaways that I think every risk and compliance leader should sit with.
| Question | Key Takeaway |
| --- | --- |
| What is the real cost of reactive GRC? | The real cost of reactive GRC is undetected risk until it becomes an incident. Risks converge across cyber, logistics, regulatory, and financial exposure simultaneously, and annual audit cycles can't catch this in time. |
| Where should organizations start with Intelligent GRC? | Organizations should start with the data they already have. Connect what exists across ERP, insurance, and ops platforms before investing in new tools. Break down silos between security, legal, IT, and privacy teams first. |
| What are the most immediate AI use cases in GRC? | The most immediate AI use cases in GRC are efficiency gains. For example, automating compliance questionnaires or eliminating manual evidence collection are wins that are measurable and available now. |
| How do you build board trust in AI-driven risk models? | Organizations can use the "hero-proof" approach — establish the board's goal, show your data sources, demonstrate added value, and prove the risk conclusion. Transparency matters more than accuracy alone. |
| How is the role of the internal auditor changing? | The auditor's role is evolving from accountant to risk manager to engineer. Boards now expect forward-looking risk scenarios, not just reports on what already happened. |
| Why is AI governance non-negotiable in GRC? | AI governance is non-negotiable because AI doesn't govern itself. Clean data, human oversight at every decision layer, explainability, and ethical guardrails are the foundation for sustainable AI adoption in GRC. |
When we asked our webinar audience to name the biggest cost of reactive GRC, the top answer — by a significant margin — was undetected risk until it becomes an incident. Patrick put it well: reactive oversight means solving today's problems with tools designed for yesterday's environment. The risks we face now don't sit in silos. They converge. A disruption in the Red Sea is more than a logistics problem. It triggers sanctions exposure, delivery failures, and cost spikes simultaneously. GRC programs that still operate in annual cycles simply can't keep up.
One of the most grounded pieces of advice came from Thom: before you chase AI tools, find out what data your organization is already sitting on. Many GRC teams don't realize how much structured, usable information already exists in their ERP systems, insurance databases, and operational platforms. There is an immense opportunity to connect what's already there. As Thom put it, collaborating across security, legal, IT governance, and privacy teams is the only way to get a complete picture of risk.
It's tempting to jump straight to predictive analytics, but both Patrick and Thom were clear: some of the biggest early wins from AI adoption come from eliminating manual, repetitive work. At Signify, Thom's team built an AI-powered tool that handles security and compliance questionnaires for the sales team. This has freed both the sales and GRC teams to focus on higher-value work. Patrick's team moved internal audit sample coverage from 20–40% to 100%, in a fraction of the time. These aren't flashy use cases, but they're real, measurable, and happening now. Both also stressed the importance of human oversight whenever AI automates even the most mundane, data-driven tasks.
A question I put to both of them — how do we avoid creating black-box risk models that boards don't trust? — generated one of the most useful frameworks of the session. Thom described an approach he calls "hero-proof": before presenting anything to your board or supervisory committee, establish their goal, show your data sources, demonstrate the added value, and prove the risk conclusion is warranted. The model doesn't change; the communication does. He also made a point I found particularly valuable: get an external view to validate what you're sharing. It costs something, but it builds credibility fast.
Patrick described a shift I found both accurate and striking: fifteen years ago, the ideal auditor was an accountant. Then they became risk managers. Now, they're expected to be engineers — people who can work with data models, AI systems, and predictive tools. The audit report, once a static snapshot, is becoming something closer to a live film. Boards and regulators no longer just want to know what happened; they want to know what could happen next. Patrick's team at Bank of China built a dedicated Advanced Analytics for Audit (AAA) group specifically to develop this capacity. This is the kind of structural investment more organizations will need to make.
Perhaps the most important thread running through our entire conversation: AI doesn't govern itself. Thom described conducting AI training sessions with Signify's board of management, not just to explain what's possible but to make them aware of the risks, including sophisticated phishing attacks and hallucinations in AI-generated outputs. Human oversight at every decision layer isn't a constraint on AI adoption; it's what makes adoption sustainable. Clean data, explainability, ethical guardrails, and proper access controls aren't optional extras. They're the foundation.
The message from Patrick and Thom was consistent: the organizations that will lead in GRC over the next decade are the ones investing now in data quality, in connected teams, in human-AI collaboration, and in governance frameworks that can keep pace with the technology. We're still early. As Patrick reminded us, the majority of organizations are still in the exploration or transition phase. That means the window to gain a meaningful edge is open, but it won't stay open forever.
If you'd like to see how MetricStream's AI-first Connected GRC platform puts these principles into practice, we'd love to show you. Request a demo now.
Watch the full webinar.
Intelligent GRC is an approach to governance, risk, and compliance that uses AI, automation, and connected data to move from periodic, reactive oversight to continuous, proactive risk management. It replaces manual evidence collection and annual audit cycles with real-time monitoring, predictive risk indicators, and cross-functional data sharing.
AI is being used in GRC for continuous control monitoring, automated regulatory change tracking, 100% audit sampling (replacing statistical samples), compliance questionnaire automation, anomaly detection in transactions, and policy summarization. The most common starting points are high-volume, data-intensive tasks that currently require significant manual effort.
The biggest risk of reactive GRC is that threats go undetected until they become incidents. Because risks now converge across functions, including cyber, operational, regulatory, and financial, a siloed or periodic approach means organizations are always responding to what already happened, rather than anticipating what's coming.
Board buy-in for AI in risk management depends on transparency more than accuracy alone. Present the goal the board is trying to achieve, show clearly where your data comes from, demonstrate the added value, and prove your risk conclusion is warranted. Avoid black-box outputs, and always be able to explain how you reached your risk level assessment.
Modern risk leaders need a blend of technical and strategic skills. These include AI and data literacy to interpret risk signals at scale, scenario planning and predictive thinking to anticipate threats before they materialize, and strong business acumen to translate risk exposure into language the board understands. Communication and influence are equally critical: risk leaders must advocate for proactive action, not just report on what went wrong.
Internal audit has shifted from a backward-looking assurance function to a forward-looking advisory one. Modern internal auditors are expected to go beyond testing controls and provide insights on emerging risks, process inefficiencies, and strategic blind spots. This requires data analytics skills, comfort with continuous auditing tools, and the ability to engage with senior leadership as a trusted adviser rather than a compliance checker.
AI literacy is now a baseline expectation for GRC leaders, not a specialist skill. Leaders need to understand how AI models make decisions, where they introduce new risks (bias, hallucination, data privacy), and how to govern AI use within their organizations. They don't need to write code, but they do need to ask the right questions of their technology teams and vendors.