The SITuation, with Dr Erin Young: Data debates, AI adoption in the UK, cyber resilience and tech deals

Welcome to the third edition of the SITuation: insights into Science, Innovation and Technology trends and policy developments shaping business in the UK and globally. It’s been an eventful few weeks in tech, as usual: from data legislation and cyberattacks to AI adoption forecasts and international billion-dollar deals.

Data debates

The Data (Use and Access) Bill is still stuck in limbo, bouncing between the House of Lords and the House of Commons during its final parliamentary stages. The ongoing, high-profile debate over generative AI and copyright remains the sticking point. The Government proposes that AI developers should be allowed to scrape (creative) content unless individual creators actively opt out. Members of the House of Lords, however, argue that AI firms should be required to be transparent about which copyrighted material they use to train their models, so that copyright holders can see when and where their work has been used (with a view to licensing it).

Supporters of this amendment, including the likes of Elton John and Paul McCartney no less, argue that the future of the UK’s creative industries is at stake, as AI-generated content trained on existing works can mimic, and could ultimately replace, human-created content. Others warn that if the companies building these generative models (namely, Big Tech) don’t get access, the UK risks losing out on frontier technology. How can the UK enable innovation while protecting its rich creative assets?

Difficult and important policy questions such as this – around AI, data and IP – are central to some of the tech policy work we do at the IoD. Recently, I spoke at the London School of Economics (watch here!) alongside Dr Chris Wiggins, Chief Data Scientist at the New York Times, and Dr Alison Powell, Associate Professor in Media and Communications at LSE, discussing ‘The power of data: ethics, politics and public interest’. I also led a delegation of IoD members to the UK government’s Intellectual Property Office (IPO) event ‘Global Growth: IP Protection Overseas’, connecting UK businesses with the IPO’s global IP attaché network.

An ‘AI-Ready’ UK? From SMEs to the Civil Service

AI adoption remains an ongoing trend, with fresh figures and forecasts regularly emerging as new capabilities and systems begin to scale across applications and industries. A recent report from Microsoft suggests that increased AI uptake among SMEs could ‘unlock regional growth for the UK economy’ to the tune of £78 billion by 2035. Yet Microsoft estimates that, today, only around 18.6% of UK SMEs are using AI. Many of the barriers to adoption cited in the report, including skills gaps and management understanding, echo those in our own recent member survey, which found major blockers to AI adoption in British business.

LinkedIn’s ‘AI Skills Trends in the UK’ report paints a similar picture, suggesting that 70% of the skills used in jobs today are expected to change by 2030. In the UK, communication ranks as the fastest-growing skill, with AI literacy third. It is important to recognise, however, that building an ‘AI-ready’ UK requires more than tech adoption. It demands not only talent, but also infrastructure, innovative business models, strategic governance and sustained investment – as well as an emphasis on developing the application layer (or, in the words of the AI Opportunities Action Plan, an ecosystem which allows the UK to become an “AI maker, not taker”). Without these foundations in place, even the most advanced AI models will struggle to deliver economic and societal impact (and thus, growth). I discussed many of these issues at events I attended recently, including Google DeepMind’s AI for Science Funding Summit and a breakfast at the House of Commons on ‘Becoming an AI Superpower: Innovating for Economic Growth’, hosted by Dr Allison Gardner MP.

Moving from the private sector to the public sector, the last few weeks have seen a flurry of announcements around AI adoption in the civil service. The Department for Work and Pensions (DWP) has reversed its ban on tools like ChatGPT (with the exception of DeepSeek), and experimental tools such as Consult, part of the UK government’s new Humphrey suite (built on models from OpenAI, Anthropic and Google), are being tested, for example, to speed up consultation analysis. A recent government study suggests that up to 30,000 civil servants could be ‘freed’ from carrying out routine admin if AI is rolled out across Whitehall. This comes after more than 20,000 civil servants took part in a three-month trial using generative AI to help with tasks such as drafting documents, summarising meetings, and handling emails.

The UK government’s recent tech-related announcements are all moving parts of this AI (adoption) puzzle across the private and public sectors. These include an updated Science and Technology Framework; a new UK Quantum Skills Taskforce report; and longer-term R&D commitments.

Cyber resilience urgency

Cybersecurity remains a growing concern. My previous blog post reflected on the launch of the Cyber Governance Code of Practice and the forthcoming Cyber Security and Resilience Bill. Since then, a string of serious cyberattacks has targeted major UK retailers, including M&S, Harrods, Co-op, Adidas, Pearson, Cartier – the list goes on. The financial cost is significant, with M&S alone facing a £300 million hit. Co-op, by contrast, was praised for swiftly shutting down its own systems, mitigating longer-term disruption. It is increasingly clear that cybersecurity should not be an optional expense for organisations and boards, but rather a form of revenue protection. The National Cyber Security Centre (NCSC) has said that this should be a “wake-up call” for retailers, and that it is working alongside the companies to “fully understand the nature of these attacks and to provide expert advice to the wider sector based on the threat picture”.

If the UK is seen as a global leader in cybersecurity, what does this suggest about the overall threat landscape? Threat acceleration is being driven not only by human exploitation (as seen in the National Crime Agency’s investigation into a group known as Scattered Spider), but also by advances in AI capabilities (think: deepfakes, automated phishing, synthetic identities), which lower barriers to entry for prospective threat actors and increase complexity across supply chains and third-party vendors.

Elsewhere in cybersecurity developments, preparations are underway for Ofcom potentially to expand its regulatory remit to include data centres, as the government seeks to harden the “soft points” in the UK’s cyber defences through the forthcoming Cyber Security and Resilience Bill. With data centres dominating much of the global AI policy discussion recently, this also comes as the government launches the formal application process for AI Growth Zones (AIGZs) – among the flagship recommendations of the AI Opportunities Action Plan – which aim to accelerate the build-out of AI data centres.

Doing international tech deals

From data centre discussions in Westminster to the Gulf: during a recent visit to the Middle East, President Donald Trump announced new AI data infrastructure deals reportedly worth over $600 billion with the United Arab Emirates and Saudi Arabia, while rescinding a Biden-era rule that would have limited the sale of US chips globally (as well as seeking to restrict state AI legislation for a 10-year period at the same time…). Many of the US Big Tech companies were also present during the trip, announcing a flood of strategic partnerships. Humain, a newly created AI investment firm owned by Saudi Arabia’s sovereign wealth fund, revealed a partnership with Nvidia to build ‘AI factories’ in the Kingdom. Humain also announced a $5 billion partnership with Amazon, while a new ‘US-UAE AI Acceleration Partnership’ is backed by G42 alongside OpenAI, Oracle, Nvidia and Cisco. These deals mark not only a shift in relations between the regions, but also a broader trend: AI across the stack is becoming a diplomatic tool, with export controls, compute access and talent pipelines increasingly dominant in international relations and geopolitical influence.

Elsewhere in international (tech) deals, reflecting on the UK-US trade agreement, the UK’s Ambassador to the US, Peter Mandelson, said that it “will now open the door to a deeper long-term UK-US technology partnership”. While the UK’s Digital Services Tax (a tax on the revenues of search engines, social media services and online marketplaces which derive value from UK users) survived US trade negotiations, compromises apparently remain on the table for future negotiations.

Firmly back on this side of the pond, the UK Government has announced plans to collaborate with the EU on AI to ‘accelerate breakthroughs in healthcare, clean energy, and other transformative technologies’. Public research organisations can now apply to host the UK’s AI Factory Antenna, a facility that will connect UK researchers to advanced European supercomputers. It’s part of the UK’s Compute Strategy expected later this year.

Adding to the complex geopolitical science, innovation and technology landscape, the EU has also launched the ‘Choose Europe for Science’ programme, which will invest more than half a billion euros through 2027 to recruit researchers and scientists, especially from the US, in response to significant science cuts by the Trump administration. Europe is now explicitly selling itself as a “safe space for science and research” for US scientific and tech talent. Could the UK also benefit? With no language barrier and world-leading universities, it is well positioned to attract at least some American talent, provided it (and Europe) can offer a supportive broader ecosystem for innovation (a topic I discussed in detail on a recent IoD Directors’ Briefing podcast).

About the author

Dr Erin Young

Head of Innovation and Technology Policy at the IoD

Dr. Erin Young leads the IoD’s policy, strategy and thought leadership work on technology, science and innovation.

Before joining the IoD, Erin was Project Co-Lead and Research Fellow in Public Policy at The Alan Turing Institute, the UK’s National Institute for Data Science and AI, where her work influenced the UK’s National AI Strategy and AI Opportunities Action Plan. Previously, Erin held positions at the United Nations (UN IIEP-UNESCO) in Paris, Kantar/WPP in London, MediaX at Stanford University, and Thomson Reuters in New York City, across policy, strategic communications, research, stakeholder engagement and government affairs.

Erin sits on the Strategy Steering Board for the City of London Corporation’s Women Pivoting to Digital Taskforce and advises the Hg Foundation on AI, skills and inequalities. She is frequently featured in high-profile media and presents across business and government including the European Parliament, GCHQ, KKR and the UK Science and Innovation Network (FCDO/DSIT) at British Embassies and Consulates globally.

Erin holds a BA in Classics from the University of Cambridge, an MSc (Distinction) in Education (Learning and Technology) from the University of Oxford, a PGC in International Business Practice, Finance and Organisational Behaviour from St Mary’s University, and a Ph.D. (D.Phil) in Science and Technology Studies (STS) from the University of Oxford.
