The SITuation, with Dr Erin Young
Agentic AI, Artificial General Intelligence and innovation
Welcome to the first (bumper!) edition of the SITuation – insights into Science, Innovation and Technology trends and policy developments shaping business in the UK and globally.
I’ll be sharing what’s caught my eye recently across this rapidly evolving landscape, drawing connections between macro trends and their implications for UK business leaders.
The latest AI buzz: agentic AI
If 2023-2024 were the years of generative AI, then 2025 is thus far the year of agentic AI. At the core of most definitions of agentic AI is the concept of autonomous and proactive completion of tasks. The UK government’s newly published AI Playbook for the public sector describes agentic AI as “autonomous AI systems that can make decisions and perform actions with minimal human intervention”. Unlike chatbots such as ChatGPT, these novel and disruptive systems operate ‘with agency’ outside of a chat window, navigating multiple applications to execute complex tasks – like shopping online or booking travel – based on simple user commands.
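To make the idea of ‘agency’ concrete, the sketch below shows the basic loop most agentic systems share: a model repeatedly chooses a tool, the tool is executed, and the result feeds the next decision until the model judges the goal complete. It is a minimal illustration only – the search_flights and book_flight tools and the stub_model stand-in for an LLM call are hypothetical, not any vendor’s actual API.

```python
# Minimal, illustrative agent loop (hypothetical names; not a real vendor API).

def search_flights(destination: str) -> str:
    # Hypothetical tool: a real agent would call a travel or airline API here.
    return f"Found 3 flights to {destination}"

def book_flight(option: str) -> str:
    # Hypothetical tool: a real agent would complete a booking transaction here.
    return f"Booked: {option}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def stub_model(goal: str, history: list[str]) -> dict:
    # Stand-in for an LLM call: decides the next action from what has happened so far.
    if not history:
        return {"tool": "search_flights", "arg": goal}
    if len(history) == 1:
        return {"tool": "book_flight", "arg": "cheapest option"}
    return {"tool": None, "arg": None}  # signals the task is finished

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Loop: ask the model for the next action, run the chosen tool, feed the result back."""
    history: list[str] = []
    for _ in range(max_steps):
        decision = stub_model(goal, history)
        if decision["tool"] is None:
            break  # the model judges the goal complete
        result = TOOLS[decision["tool"]](decision["arg"])
        history.append(result)  # the outcome informs the next decision
    return history

if __name__ == "__main__":
    print(run_agent("Lisbon"))
```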
Silicon Valley is bullish on AI agents, predicting they will fundamentally change how we work. OpenAI CEO Sam Altman thinks agents will “join the workforce” this year, while Salesforce CEO Marc Benioff has stated that Salesforce’s goal is to be “the number one provider of digital labour in the world” via the company’s various “agentic” services. A much-hyped general AI agent, Manus – Chinese-built but Singaporean-owned – was also launched recently. Some have drawn comparisons to DeepSeek, an earlier model that rivalled those developed by US AI labs in its capabilities while costing far less to develop.
Yet, as with the term ‘AI’ itself, no one can agree on what an AI agent actually is – and the definition is constantly evolving. An agent from Google differs from an agent from Amazon or any other vendor, leading to evaluation and benchmarking issues, and to challenges for policymakers designing appropriate governance mechanisms. Agentic AI amplifies such governance difficulties: large language models (LLMs) are already unpredictable, prone to errors and lacking in transparency (among other risks), and the more autonomous a system becomes, the more human oversight we cede. Governments and enterprises need a more nuanced understanding, technical and otherwise, to navigate the AI (agent) landscape, mitigate risks and maximise investment returns.
Despite a rapid increase in commercially available AI agent systems, LLM-based agents are in their infancy, with few domain-specific enterprise use cases yet. As of last year, less than 1% of enterprise software applications included some form of agentic AI. However, Gartner predicts that by 2028 that number will rise to 33% – definitely watch this space.
A global race to AGI – or not?
Is AGI – Artificial General Intelligence – rhetoric or reality? Will the AI bubble burst, is it slowing down already, or will models keep scaling with more data and compute? Are we experiencing a societal and economic paradigm shift, or will AI just become more software? With each new advancement, discussion abounds on whether this is another step towards machines that can perform any cognitive task humans can – or even do it better. (Of course, which humans and which tasks we’re talking about makes all the difference in assessing AGI’s feasibility, safety, and societal impact.) Creating AGI is a primary goal for frontier AI labs such as OpenAI and Google DeepMind, and many in the tech industry are leaning in, with Google co-founder Sergey Brin declaring “the final race to AGI is afoot”. Yet, while some think it is imminent, others dismiss it as hype. Even more so than ‘agentic AI’, however, AGI is a hazy, contested and changeable term, typically defined by those building the technology. Nevertheless, the conversation is having tangible geopolitical implications, particularly in the US and China, as perceptions are that AGI could be a national security and economic game-changer. For UK business leaders the AGI debate may seem abstract, but its ripple effects – investment trends, regulatory shifts, and workforce impacts – are worth monitoring.
What has struck me over the last year is the gulf between this incredibly interesting yet broadly conceptual AGI debate – often playing out among the upper echelons of the tech sector – and the on-the-ground pilots, adoption and diffusion of generative AI across the public and private sectors in the UK. The ONS found that AI was adopted by just 9% of UK firms in 2023, and according to Asana’s Work Innovation Lab, 67% of companies are failing to scale AI across their organisations. A recent QuantumBlack (McKinsey) study also found that most organisations globally have yet to see bottom-line impact from generative AI use, with smaller companies in particular struggling to follow best practices for adoption and scaling.
As AI technologies, approaches and systems are implemented across functions, processes and business models, evaluating real impacts versus expected productivity gains will be crucial. Notably, Microsoft CEO Satya Nadella has publicly taken a different view on AGI to many of his Big Tech peers, arguing that “self-claiming some AGI milestone” should not be the goal, but rather that “the real benchmark is the world [economy] growing at 10%”. UK policymakers should focus on mission-driven policies that support innovative businesses, especially SMEs, to deploy small, efficient models responsibly and strategically where they add value – and should encourage more businesses to do the same. Encouragingly, the AI Opportunities Action Plan, an independently authored report published earlier this year that aims to strengthen the UK’s competitiveness in AI development and adoption and make the country a global hub for AI investment and innovation, emphasises the need to address barriers to adoption among private-sector users.
Innovation, Innovation, Innovation
The AI Opportunities Action Plan, which champions being “on the side of the innovators”, is at the heart of the UK’s growth agenda. The Plan lays out 50 wide-ranging recommendations, all of which have been accepted by Government. These span key areas across the AI stack and value chain: laying the foundations to enable AI in the UK (such as increasing public compute capacity, improving data infrastructure and developing AI talent), reforming regulation, and driving AI adoption across the public and private sectors. It reflects the broader global shift in tech policy away from prioritising safety and risk mitigation towards innovation, competitiveness and even sovereignty, as evidenced at the Paris AI Action Summit in February. Amid increasing geopolitical tensions, and driven by the Draghi Report, which identifies “closing the innovation gap with the US and China, especially in advanced technologies” as one of the key challenges in securing the future of European competitiveness, even the EU is softening its stance on tech regulation, seeking a more ‘innovation friendly’ rollout of the EU AI Act.
Last month Peter Kyle MP, Secretary of State for Science, Innovation and Technology, said that, for the Government’s ambitions to deliver on its Plan for Change, “there is no route to long term growth without innovation”. Following previous Government announcements on a newly formed digital centre of government to support public sector reform (targeting £45 billion per year in efficiency savings), Kyle outlined a number of fresh commitments to support innovation. These include £23 million in new telecoms R&D investment, and the appointment of former Science Minister Lord Willetts as the first Chair of the Regulatory Innovation Office (RIO). The RIO aims to accelerate the market introduction of high-growth innovations such as AI in healthcare and engineering biology. I attended the Digital Regulation Cooperation Forum’s (DRCF) inaugural Responsible Generative AI Forum last month, which underscored the Government’s commitment to regulatory reform as a driver of innovation.
While these developments are promising, implementation challenges remain. The UK has world-class research institutions, a history of fostering frontier innovation (including unicorns such as DeepMind), and has recently made additional investment commitments in defence, which historically drives technological innovation. However, structural challenges, many of which are acknowledged by Government, include comparatively risk-averse capital, a small domestic market, outdated digital and data infrastructure, obstacles to scaling startups and commercialising R&D, dependence on US tech (earlier this year the CMA found that AWS and Microsoft each have a share of up to 40% of UK customer spend on cloud services), high energy costs (and sustainability concerns), and brain drain and skills shortfalls. The Government’s approach thus far has also sparked controversy, particularly among Britain’s £124 billion creative sector, which has pushed back against proposed copyright changes that would require creators to opt their work out of AI training datasets.
Bytes of Insight
- AI literacy skills surge: LinkedIn has revealed that AI literacy and LLM utilisation are among the top rising skills in the UK. Data also shows that 45% of UK recruiting teams are now actively integrating or experimenting with AI tools – up from 27% last year.
- Data protection cautionary tale: DNA testing company 23andMe has filed for bankruptcy protection. Following a major data breach in 2023, there are ongoing concerns about the security of sensitive user data.
- UK cybersecurity boom: The UK’s cybersecurity sector generated £13.2 billion over the last year. The sector now employs an estimated 67,300 people, creating 6,600 new jobs in the past year alone.
- ‘The Cybernetic Teammate’, a study which suggests that individuals who use AI tools are as productive as those who work with a team, has been causing ripples.
SIT toolkit
- AI Standards Hub: online training covering the processes of developing and using standards and best practices for AI, and the wider AI governance context, including risk management frameworks.
- Digital Catapult’s Innovate UK BridgeAI Large Language Models Webinar Series: webinars designed to help businesses leverage LLMs. Join any or all of the three webinars, selecting the sessions that best align with your AI expertise and your business’s stage in the LLM adoption journey.
- ICO Regulatory Sandbox: a free service for organisations of all sizes and sectors that are developing, or planning to develop, products and services using personal data under UK data protection law.
