Do the risks of AI outweigh the benefits?
Advancements in technology, especially in artificial intelligence (AI), are transforming GRC, leading to analytics-driven business tools with an emphasis on tackling future risk scenarios. Yet we do not have enough control over AI to let it operate unchecked. Here are the technological developments in October – through the GRC lens.
The mainstreaming of artificial intelligence is radically transforming how organizations approach digital transformation. AI is set to dominate enterprise agendas by augmenting decisions. Yet, practical concerns persist. According to an article in Forbes, “…while people will increasingly become used to working alongside AIs, designing and deploying our own AI-based systems will remain an expensive proposition for most businesses.”
Interestingly, recent global research by Oracle highlighted how AI is changing the relationship between people and technology at work, stating that "64% of people trust a robot more than their manager."
The need for AI is also accelerating inside the GRC ecosystem. According to research by Capgemini, 69% of organizations believe they will not be able to respond to security threats without AI.
The Big Question
While we have compelling arguments to prove that AI is a boon for the digital age – is it really foolproof? An AI-driven world introduces multiple legal implications. Flawed facial recognition, deepfake voice attacks, gender-skewed credit, and biased recruitment tools are just some of the AI-related risks that are emerging.
In the healthcare industry, a recent article published in Harvard Business Review says, “Besides current regulatory ambiguity, another key issue that poses challenges to the adoption of AI applications in the clinical setting is their black-box nature and the resulting trust issues.”
New global research by Futurum Research, sponsored by analytics firm SAS, finds that technology and trust will be the major drivers behind the reimagined customer experience in the next 10 years. This means that for AI to be a successful growth catalyst, it needs to be trusted.
So, the big question is – how do we leverage the capabilities of AI while making sure it does not introduce new discrepancies or aggravate existing inequalities and biases?
The Need of the Hour
With regard to financial services, an article published in BRINK warned against new risks and regulatory breaches that could be created by AI and machine learning (ML). According to the article, “…for the next three to five years, financial institutions must approach the digitization of risk and compliance with a healthy dose of human supervision, governance and monitoring to ensure that automation is still within the perimeters of auditability and traceability. In short, digitization must not become a new emerging risk in itself.”
Mitigating new risks is imperative in an age of technological innovations where the pace of change is faster than ever. Before deploying AI and ML, businesses need to make sure that these technologies are surrounded by good governance controls to prevent ethical violations and expensive regulatory breaches. Organizations also need to be proactive about identifying new risks, and continually evaluating whether an AI system is operating within acceptable performance levels.
Effective regulation is important. In fact, it is no longer a question of whether regulation is needed in AI, but how best to implement it.
All of these arguments boil down to a single need: GRC for AI.
While businesses explore the possibilities of AI and big data, they must ensure that the development and deployment of algorithms, data, and AI are based on an ethical approach.
The MetricStream GRC Summit 2019 in Baltimore offered some thought-provoking arguments on the governance of AI. Some of the important questions discussed were: How should GRC steer the narrative towards creating a more socially conscious, ethical form of AI? How do we ensure that humans lead AI, not the other way around? How can regulation keep pace with new AI innovations?
According to the WEF: “Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain…The Forum offers a solution: evolve past ‘one-size-fits-all’ governance ideas to specific transparency requirements that consider the AI use case in question.”
In its report, the WEF also proposes frameworks to help financial institutions and regulators explain AI decisions, understand emerging risks from the use of AI, and identify how those risks might be addressed.
At the GRC Summit 2019, Anna Felländer, Co-founder, AI Sustainability Center, pointed out, “We shouldn’t be asking ‘What can AI do?’ We should be asking ‘What should AI do?’”
“Organizations who want to succeed in an AI world must embed a risk-optimization mindset across the AI lifecycle. They do this by elevating risk from a mere responsive function to a powerful, dynamic and future-facing enabler for building trust,” suggests EY.