Scientists comment on the Government’s AI Opportunities Action Plan, announced by Prime Minister Keir Starmer.
Louis Barson, Director of Science, Innovation and Skills, Institute of Physics, said:
“AI is going to shape our world in the decades to come and it’s welcome that the UK government is raising ambition around how to build on its strong position to be an AI leader of the future. Of course, none of this will be possible without having the right people with the right skills in place to deliver this technological transformation and the economic growth that goes with it. That includes physics skills – as this year’s Nobel Prize recognised, physics has underpinned the development of AI, and is a heavy user of AI in everything from particle physics to astronomy. This needs a strategic approach targeting every educational stage: from addressing the critical shortage of physics specialists in schools, to graduates in our universities, to technical pathways like apprenticeships that can help build an AI-ready workforce for the UK.”
Dr Shweta Singh, Assistant Professor of Information Systems and Management at The University of Warwick, said:
“The UK’s AI Opportunities Action Plan offers significant benefits, including enhanced efficiency in public services, the creation of AI Growth Zones, and improved healthcare through faster diagnoses. It also aims to position the UK as a global leader in AI innovation, potentially adding £47bn annually to the economy.
“However, this action plan faces several major challenges despite its promise. One significant issue is regional inequality; while growth zones aim to spread benefits, historically, tech innovation has often been concentrated in specific regions like London or the Southeast, leaving others behind. Additionally, the UK’s reliance on foreign-owned AI firms, such as Google-owned DeepMind, raises concerns about domestic innovation and intellectual property retention.
“Regulatory and ethical challenges also loom, as the government must balance innovation with safeguarding privacy, ensuring fair data use, and preventing misuse. For instance, the use of AI for monitoring roads or administrative tasks raises questions about surveillance and the misuse of AI-based surveillance. The New York Times, for example, has already highlighted how facial recognition technologies disproportionately misidentify people of colour, resulting in wrongful arrests and false criminal accusations.
“Lastly, the skills gap in the UK’s workforce could hinder the adoption of AI technologies, requiring substantial investment in education and retraining programs to ensure widespread accessibility and equity.”
Professor Anthony G Cohn, FREng. School of Computer Science, University of Leeds, and the Alan Turing Institute, said:
“The government’s announcement of its “blueprint for AI” and of more funding for AI is very welcome – it has very high aspirations which would help retain the UK’s place in the AI sector and allow the country to benefit from the advancements we hear about on a daily basis. Given the certainty of AI becoming more and more prevalent in all aspects of our lives, commerce, health and government, it is vital that the UK has not only the people able to help build a safe and reliable AI-enhanced future, but also the computational infrastructure to do so. The commitment of the government to exploit AI for good, and to help grow the UK economy, is also vital to ensure that the UK can attain its growth targets and remain a leading technological nation. AI should not however be viewed as a “magic panacea” to solve all the nation’s problems, and certainly not to do so in the short term; whilst AI has shown incredible advancements recently (as witnessed by the two recent Nobel prizes) there are many issues remaining to be solved, in particular when complex reasoning is required, and in endowing AI with common sense.
“The challenges facing the government in their worthy ambition are huge. There is already an international shortage of skilled AI researchers and developers, and this workforce is highly mobile, moving both internationally and between sectors, and commands high salaries which will need to be funded. Training “tens of thousands of AI professionals by 2030” will be extraordinarily hard to achieve – the “flagship scholarship programme” to train 100 students is a drop in the ocean in this context. Moreover, the UK’s high visa fees and NHS surcharge act as a major disincentive to recruiting from abroad. Universities have recently seen a large reduction in students applying to their MSc programmes (including many which focus on AI), largely attributed to changes in the visa system. Furthermore, to train such levels of PhD-level talent by 2030 is almost impossible given that a PhD typically takes around four years and we are already in 2025. The success rate in national funding programmes for AI is low because of resource constraints, and much more money will be required to help grow the UK research programme and the associated talent pool.
“Undoubtedly there is much “low hanging fruit” where AI can be relatively easily and safely deployed with relatively little investment, but in general, there are many issues to be solved to ensure AI is effective, safe and reliable. Ensuring public trust will be very important – trust is easily lost and hard to (re)gain. The report mentions the “sovereign data” that the UK owns and which can be exploited by AI developers; this is indeed an excellent opportunity and will be important in their plans. However, care will be needed regarding implicit and hidden biases in the data, which may lead to ethical problems in subsequent deployment. Whilst there are application areas where AI can be safely deployed with little risk, in general, particularly with so-called GenAI, its propensity to hallucinate and our inability to be sure when it will fail mean that humans should be closely involved in all safety-critical and other critical applications of AI.”
Adam Leon Smith, a Fellow of BCS, The Chartered Institute for IT, and an international AI expert, said:
“The AI Opportunities Action Plan is a statement of belief in the UK’s tech sector. We will need, at least, tens of thousands more people to become skilled AI professionals to transform the nation in the way this report envisages. We’ll achieve this by investing not just in university students, but by re-training the over 50s, supporting apprenticeships and winning over the half million women missing from the tech industry.
“Just as importantly, the report recognises that AI safety, proportionate regulation and professional oversight can set the UK apart as a world-leader, rather than hinder our innovators.
“Investing heavily in public sector AI is a very important step, and this must be matched with mechanisms to ensure accountability, measurement of progress, and public trust.
“It’s right that our leading minds can use ‘fail-fast’ strategies to test new ideas cost-effectively, but frontier AI must be overseen by strong technical standards, guardrails and ethical frameworks to avoid rushed roll out and risks to our safety.
“The government’s commitment to the Alan Turing Institute’s AI research is commendable. It will want to support all Royal Charter bodies to make sure the influx of new people working with AI, under this plan, all meet the shared professional standards to build public trust in this generational opportunity.”
Professor Alastair Buckley from the University of Sheffield’s School of Mathematical and Physical Sciences, said:
“Artificial intelligence is already a key tool in wind and solar energy forecasting and in the near future it will be used to help plan upgrades to our energy infrastructure.
“The challenges the sector faces are twofold: skilled people and access to the relevant data. The proposed AI growth zones start to address both of these by providing the critical mass of organisations needed to train people, as well as the trusted relationships needed to share the data.”
Gaia Marcus, Director at the Ada Lovelace Institute, said:
“We agree with the Government that the UK should be shaping AI technologies and their impact rather than accepting such decisions will be made by others. We also welcome the plan’s underpinning principles of shared economic prosperity and improved public services.
“It is particularly encouraging to see the plan’s recognition that the Government’s spending power can be used to shape the development of AI through smarter procurement, and the commitment to better resource regulators via the Spending Review.
“Much of the plan will require careful implementation to succeed. And there will be no bigger roadblock to AI’s transformative potential than a failure in public confidence.
“The Government should therefore be cautious of formally requiring watchdogs to implement growth goals. Regulators’ primary role should be to protect the public, and they could become discredited if they are not seen to be doing so.
“The public also have nuanced and often strong views on the use of their data, particularly in areas such as health. In light of past backlash against medical data sharing, the Government must continue to think carefully about the circumstances under which this kind of sharing will be acceptable to the public. Greater public engagement and deliberation will help in understanding their views.
“The piloting of AI throughout the public sector will have real-world impacts on people. We look forward to hearing more about how departments will be incentivised to implement these systems safely as they move at pace, and what provisions will enable the timely sharing of what has worked and – crucially – what hasn’t.
“Just as the Government is investing heavily in realising the opportunities presented by AI, it must also invest in responding to AI’s negative impacts now and in the future. It is critical that the Government look beyond a narrow subset of extreme risks and bring forward a credible vehicle and roadmap for addressing broader AI harms. This will benefit all people at risk of those harms, and secure their trust so that the positive impacts of these technologies can be felt widely.”
Sir John Lazar CBE FREng, President of the Royal Academy of Engineering, said:
“The Academy welcomes the publication of the AI Opportunities Action Plan as a key step to ensure the UK can seize the opportunities associated with the responsible development and adoption of AI across the public and business sectors alike.
“We welcome the expansion of the UK’s public AI compute provision and the creation of the National Data Library; these are significant additions to the UK’s research and innovation infrastructure. Efforts should be made to ensure these serve the needs of the UK AI SME community and deliver value to the UK public.
“The creation of the AI Energy Council is an important partnership for addressing the immediate clean energy challenge. This also presents a critical opportunity to balance the supply and demand for energy and compute and drive UK leadership in sustainable AI.
“Increasing the availability of leading AI talent in the UK, while feeding the future skills pipeline, is critical for AI development. This must be accompanied by increasing digital skills across the wider workforce so that AI adoption can realise the opportunities of AI.
“The Academy stands ready to mobilise our extraordinary community – regional network, fellows, businesses, and awardees covering all aspects of engineering – to help and support this national mission so that everyone benefits. As we drive AI adoption across government, it is vital that departments collaborate with industry of all sizes, and with the people who will be affected, to ensure positive social, economic and environmental benefits are realised for all citizens rapidly but with a long-term view.”
Declared interests
Professor Anthony G Cohn: I declare no conflicts apart from being a UK based researcher, mostly funded by public money.
For all other experts, no reply to our request for declarations of interest (DOIs) was received.