Reducing AI system risk: CEOs under pressure to adjust

With over 5,930 artificial intelligence (“AI”) players and 5,570 AI firms across the European Union’s 27 Member States (“EU-27”), according to AI Watch, the scale of AI adoption has reached unprecedented levels. We need to talk frankly about AI system risk.

Every day a fresh news article reveals a new development in the AI sector, and that pace of change makes it ripe for exploitation by malicious actors.

Can widespread AI adoption optimise the current economic environment without causing irreparable harm to society?

Board membership carries a great deal of responsibility, and in the recent past executives have had to absorb significant increases in accountability for the operations of their businesses, particularly where data protection is concerned. CEOs, COOs, and CLOs no longer enjoy organisational or procedural safeguards to shield them from transgressions. European lawmakers are now weighing ground-breaking legislation to regulate the development and use of trustworthy AI in the EU: the European Artificial Intelligence Act (“EU AI Act”). It requires actors across the AI value chain to pay the price if a business calamity, caused by their AI systems malfunctioning, materially impacts their clients or markets.

Compliance around AI is becoming a core topic of conversation in boardrooms as adoption of AI systems becomes increasingly commonplace, and it is not just the smaller players that are subject to regulatory obligations.

According to research from the European Commission, the successful uptake of AI technologies has the potential to accelerate Europe’s economic growth and global competitiveness. The McKinsey Global Institute estimated that by 2030 AI technologies could contribute about 16% higher cumulative global gross domestic product (“GDP”) compared with 2018, or about 1.2% additional GDP growth per year. At the same time, strict liability for AI operators, combined with mandatory insurance for AI applications with a specific risk profile, are examples of the risk management safeguards deemed necessary. In short, Europe’s policymakers are entirely realistic about what is required to protect society from the risks arising from AI systems.
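As a rough check on those figures, compounding 1.2% of additional growth per year over the 12 years from 2018 to 2030 does indeed land close to the cited cumulative uplift (a back-of-the-envelope calculation, assuming simple annual compounding):

```latex
% Assumption: 1.2% extra GDP growth per year, compounded 2018-2030 (12 years).
\[
(1 + 0.012)^{12} \approx 1.154
\]
% i.e. roughly 15-16% higher cumulative GDP, consistent with the
% "about 16%" figure attributed to the McKinsey Global Institute.
```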

Similarly, the Artificial Intelligence Index Report 2023 from the Stanford Institute for Human-Centered Artificial Intelligence reported that, among significant machine learning systems released in 2022, language models (23) heavily outnumbered multimodal systems (4). Large language models (“LLMs”) are of particular concern, given that their rate of adoption, which has thus far outstripped that of other technologies, is running far ahead of the ability to enforce robust regulation. Is the AI sector expanding too quickly, or are CTOs right to urge their CEOs onward? Many executives do not realise that, under the new EU AI Act, they are directly liable for their AI systems, just like a principal in the traditional finance world.

There are several reasons why companies of all sizes are potentially vulnerable. Firstly, many companies are using AI systems without adequate security hygiene. General research indicates that a large number of companies run AI systems without mapping the full lifecycle and types of data used to train their models, and often no single person in these companies understands the entire AI system lifecycle. Failing to understand the data ingestion process of an AI system is not only costly but also risky. The stakes are high and, if the model training process is not clearly understood, failures across the board become inevitable.
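By way of illustration, here is a minimal sketch, in Python, of the kind of data-lineage register that would let a firm map that lifecycle; all field names and values are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingDataRecord:
    """One entry in an AI system's data-lineage register (illustrative only)."""
    dataset_name: str                 # e.g. "customer_transactions_2022"
    source: str                       # where the data came from
    contains_personal_data: bool      # flags potential GDPR exposure
    collected_on: date
    retention_until: date
    processing_steps: list[str] = field(default_factory=list)
    owner: str = "unassigned"         # the person accountable for this dataset

register = [
    TrainingDataRecord(
        dataset_name="customer_transactions_2022",
        source="internal CRM export",
        contains_personal_data=True,
        collected_on=date(2022, 1, 31),
        retention_until=date(2027, 1, 31),
        processing_steps=["de-duplication", "pseudonymisation", "labelling"],
    ),
]

# A simple governance check: personal data with no accountable owner.
for record in register:
    if record.contains_personal_data and record.owner == "unassigned":
        print(f"Governance gap: {record.dataset_name} has no accountable owner")
```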

Another reason we are seeing potential risks on the horizon is that neither training models nor the governance structures around AI systems are maintained to robust standards, creating deficiencies and performance issues. Going into battle against malicious actors with such technology is akin to running a marathon with little or no training: there will be a clear loser.

Risks from AI are becoming more apparent. Of the key technological global risks identified by the World Economic Forum (“WEF”) in The Global Risks Report 2023 (18th Edition), those resulting from adverse outcomes of frontier technologies are likely to relate to AI. Of note:

  • Risk preparedness: a majority of respondents rated current risk management as of ‘indeterminate effectiveness’, taking into account the mechanisms in place to prevent the risk from occurring or to mitigate its impact.
  • Risk governance: a majority of respondents identified ‘international organisations’ as the actors best placed to manage the risk effectively.

The message is clear: it is large, international organisations that are seen as best placed to mitigate the risks from AI systems.

So why is this happening?

There is a wide range of potential use cases for AI systems because they allow businesses both to optimise existing processes and to capitalise on a greater number of commercial opportunities. For example, more sophisticated forms of credit scoring allow businesses to reduce the default rates they incur from transacting with sub-standard borrowers. Previously, technological innovation arrived in staggered steps, but with the emergence of large language models such as ChatGPT, AI systems are becoming increasingly prominent in everyday economic transactions. The interaction between humans and AI is becoming ever more intertwined, and this proliferation has become too big to ignore, such that calls for the EU AI Act are growing louder.
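To make the credit-scoring example concrete, here is a minimal sketch in Python using scikit-learn on synthetic data; the features, thresholds and figures are invented purely for illustration:

```python
# Minimal credit-scoring sketch (illustrative only): a logistic regression
# estimating probability of default from two invented borrower features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)

# Synthetic applicants: [annual income in kEUR, existing debt in kEUR]
X = rng.uniform(low=[20, 0], high=[120, 80], size=(500, 2))
# Toy labelling rule: default when debt exceeds 60% of income
y = (X[:, 1] / X[:, 0] > 0.6).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[45.0, 35.0]])  # 45k income, 35k existing debt
p_default = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of default: {p_default:.1%}")

# Note: under the proposed EU AI Act, creditworthiness assessment of
# natural persons is listed among the high-risk use cases, which would
# trigger conformity assessment obligations for a system like this.
```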

Modernity is not always ‘risk-free’.

Industries have not been slow to adopt AI, given the benefits it enables. Yet the combination of sub-optimal technology, the desire for productivity, and uncertain organisational gatekeepers reluctant to risk their credibility makes for a cacophony of ambiguity that may seriously impact firms’ bottom lines. Given that a business can set up a basic AI system and deploy it within a matter of hours, it is not advisable to forego risk concerns when it comes to AI system deployment.

Is there a solution?

There is a new way to mitigate AI system risk: an AI risk-based approach. Leading think tanks consider AI a primary trend for 2023, yet they stop short of examining a risk-based approach, given that the road ahead has not yet been paved. The proposed EU AI Act is leading the charge, transforming the development, use and marketing of AI with its call for trustworthy AI systems that undergo conformity assessments and rigorous quality assurance exercises. Firms operating under the proposed EU AI Act, alongside other regulations, are set to offer regulatory-compliant products and services that can add long-term value.
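To illustrate what ‘risk-based’ means in practice, here is a minimal sketch in Python of the Act’s four-tier risk classification; the tier structure follows the proposal, but the use-case mapping is a simplified illustration, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the proposed EU AI Act (simplified)."""
    UNACCEPTABLE = "prohibited outright, e.g. social scoring by public authorities"
    HIGH = "permitted subject to conformity assessment and quality management"
    LIMITED = "permitted subject to transparency obligations, e.g. chatbots"
    MINIMAL = "permitted with no additional obligations"

# Illustrative mapping only; a real classification would follow the Act's
# annexes and proper legal analysis.
EXAMPLE_CLASSIFICATION = {
    "credit scoring of natural persons": RiskTier.HIGH,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name.lower()} risk ({tier.value})")
```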

We still have some way to go before AI system risks are sufficiently mitigated. That being said, significant traction has been gained. This is not a simple process, since everyone has a responsibility to secure their AI systems. Even a modest amount of education and awareness constitutes a practical risk-mitigation measure. All in all, a mitigation mindset is key if we are to surmount the risks posed by AI. We all have a role to play.

“Big Tech firms get away with inflicting privacy harms on us because of the absence of competition in tech, making it especially important for antitrust analysis to be integrated broadly across tech policy domains. Breaking down the silos between tech policy issues will enable a clearer picture of the larger whole.”
Amba Kak, Executive Director, AI Now Institute, and Dr. Sarah Myers West, Managing Director, AI Now Institute

This is a guest article written by Michael Borrelli, Director, AI & Partners, and therefore does not necessarily represent the views of the Institute of Directors.
