
Through the GRC Lens – February 2020


Building a Future of Trustworthy AI

The European Commission recently unveiled its long-awaited proposal to regulate artificial intelligence (AI). But will the new proposal stifle innovation? Find out more through the GRC Lens – February 2020 edition. 
_____________________________________________

On February 19, European Commission (EC) President Ursula von der Leyen, Executive Vice-President Margrethe Vestager, and EU Commissioner for Internal Market Thierry Breton held a press conference at the Commission’s headquarters in Brussels, unveiling their ideas and actions to regulate AI.

Keen on building “a digital Europe that reflects the best of Europe,” the EC released a white paper on AI that defines an extensive framework under which AI can be developed and deployed across the EU. The paper includes considerations for governing high-risk uses of AI, such as facial recognition in public spaces, with an overall ambition to “shape Europe’s digital future.”

The proposal still has a long way to go. For now, the EC plans to gather opinions and reactions from companies, countries, and other interested parties before it begins drafting legislation. And although the AI white paper is open for comments until May 19, lobbying has already begun.

Worried AI Vendors: Will Regulation Stifle Innovation?

Although many AI experts agree that regulating AI is necessary, especially given the ethical concerns, there is considerable worry about the consequences of regulation. Europe’s new proposal already has far-reaching implications for the big tech brands that have invested in AI. Since the EC announced the 12-week consultation period, several tech leaders from large organizations have traveled to Brussels to meet with EU officials.

Their major concern: will tough laws hinder innovation?

AI vendors worry that regulation, a slow process that can be subject to interference and distortion, is ill-suited to a fast-moving field like AI, and that applying it could stifle innovation and delay the technology’s enormous potential benefits.

To illustrate this concern, a recent article in Analytics India Magazine used the example of neural networks to explain how the regulation of AI could hamper innovation. Neural networks work by finding patterns in training data and applying those patterns to new data, enabling researchers to solve problems they couldn’t solve before.

For instance, CheXNet, an AI algorithm from Stanford, can detect pneumonia from chest X-rays with remarkable accuracy. But for technologies like these to work, they need a certain amount of creative and scientific freedom (within ethical boundaries, of course). If “black box” AI systems that humans can’t interpret were banned, could AI innovation suffer?
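To make the pattern-finding idea concrete, here is a minimal sketch in Python, assuming nothing about CheXNet’s actual architecture: a tiny NumPy neural network that learns the XOR pattern from four training examples and then applies it to an input it never saw. Even at this toy scale, what the network “knows” is stored only as arrays of weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network with randomly initialized weights
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute predictions from the current weights
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: nudge the weights to reduce the prediction error
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(out.round(2))                       # typically close to [[0], [1], [1], [0]]

# Apply the learned pattern to an input the network never saw in training
new_x = np.array([[0.9, 0.1]])
print(sigmoid(sigmoid(new_x @ W1) @ W2))  # typically close to 1
```

Scaled up to the millions of weights in a model like CheXNet, inspecting those numbers tells a human essentially nothing about why a particular X-ray was flagged, which is exactly the “black box” interpretability problem regulators are wrestling with.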

Another area of confusion is the definition of “high-risk” applications of AI. The report is unclear about high-risk applications in low-risk sectors, leaving companies uncertain about how to approach the issue.

The Need for AI Regulation: Consumer Protection

There is no doubt that AI has enormous potential to be used for good. But its accelerating adoption across industries comes with multiple ethical concerns.

According to a survey by KPMG, 80% of risk professionals are not confident about the governance in place around AI.

What happens when AI makes decisions without human oversight? Recent instances have shown that automated decision-making can perpetuate social biases. Deepfakes, surveillance technology, autonomous weapons, and discriminatory HR recruiting tools also carry serious risks. Regulatory authorities are therefore focused on developing frameworks to govern AI.

As Anna Fellander, Co-founder of the AI Sustainability Center, said at the GRC Summit in London, “It’s no longer just about what AI can do, but what it should do.” In a similar vein, Andreas Diggelmann, Interim CEO and CTO at MetricStream, said, “We need technology that serves humanity, not the other way around.”

Looking Forward to Trusted AI

AI expert Ivana Bartoletti, Technical Director in Deloitte’s Cybersecurity and Privacy division, speaking at the Impact 2020 conference, said: “The reason why we’re talking so much about ethics in AI is over the last few years we have seen the best of technology – but also the worst.”

With its novel approach to AI regulation, the EC wants to promote the development of AI while respecting fundamental human rights and addressing the potential risks that come with the technology. The EC wants a digital transformation that works for all, reflecting the best of Europe: open, fair, diverse, democratic, and confident.

The new AI proposal has already begun to win acceptance in some industries. Ted Kwartler, Vice President at DataRobot, said the vendor welcomes calls for regulatory approaches that don’t stifle innovation. Christopher Padilla, VP of Government and Regulatory Affairs at IBM, was quoted in Protocol as saying, “By focusing on precision regulation — applying different rules for different levels of risk — Europe can ensure its businesses and consumers have trust in technology.”

It appears that big tech companies that want to tap into Europe’s market will have to play by whatever rules come into force. Will the new AI proposal, like the GDPR in 2018, inspire similarly tough regulatory action in other parts of the world? Read the MetricStream Blog to stay updated on more news.


BLOG ADMIN

Read more about the latest happenings in the GRC universe. MetricStream experts share their valuable insights on how organizations can turn risk into a strategic advantage and thrive on risk.

 