Continued acceleration in the use and regulation of AI in the UK life sciences industry
Charlotte Tillett
Stevens & Bolton, Guildford
charlotte.tillett@stevens-bolton.com
Kate Hamson-Maguire
Stevens & Bolton, Guildford
kate.hamson-maguire@stevens-bolton.com
The expanding role of AI in life sciences
AI-driven models are increasingly used by pharmaceutical and biotechnology companies to accelerate drug discovery by predicting molecular interactions, optimising clinical trial design and identifying potential safety concerns earlier in the development process. This acceleration is especially promising for addressing unmet medical needs and developing treatments for complex diseases. Generative AI has various applications in the context of drug discovery, including rapid in silico analysis of genomic data and therapeutic candidates.
The UK government is actively supporting innovation in the field and has recently pledged £82m to support UK-based projects, including PharosAI and Bind Research, in using AI to design new treatment models and therapeutics for diseases such as Alzheimer’s disease and cancer.[1]
In healthcare, AI has improved the early detection of disease by using imaging tools trained on large datasets and enabled personalised treatment plans for patients. Large language models (a type of AI) are also increasingly being used to process and interpret electronic health records. Through the National Health Service (NHS) Artificial Intelligence Laboratory initiative, the NHS is actively exploring the ethical and effective adoption of AI technology into programmes intended to improve patient outcomes and optimise healthcare delivery.[2]
The evolving regulatory landscape in the UK
The UK has taken a pro-innovation approach towards AI regulation, balancing the need for oversight with the encouragement of sustained growth in AI-driven industries. In 2023, the Conservative government published an AI White Paper outlining a regulatory framework intended to strengthen the UK’s position as a global leader in AI, support responsible innovation and increase public trust in AI, while mitigating safety risks with proportionate interventions.
The framework proposed by the White Paper is context-specific and seeks to regulate the use of AI rather than the technology itself. This is distinct from the European Union’s more rule-based approach of categorising AI applications into risk tiers, with corresponding legal obligations and financial penalties for misuse.
The White Paper introduced the five key principles underlying the UK’s AI regulatory regime, namely:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
The expectation set out in the White Paper was for the principles to be assessed and implemented by the existing regulatory authorities in the UK, including the Medicines and Healthcare products Regulatory Agency (MHRA). The task of the regulators was to apply the principles to use cases within their remit, and to issue relevant guidance on how the principles interact with existing legislation. In April 2024, the MHRA published a detailed report for industry stakeholders on its implementation of the principles and how the use of AI in medical products will be regulated in the UK.[3]
The MHRA report focuses primarily on the use of software as a medical device (SaMD) and AI as a medical device (AIaMD), both of which currently fall within the remit of the UK Medical Devices Regulations 2002 (the ‘UK MDR 2002’). Given the technological advances since those regulations came into force, the MHRA is currently developing a programme of regulatory reform for medical devices, including specific legislation for SaMD and AIaMD. In the meantime, the MHRA is supplementing the existing regulations with comprehensive guidance and is working with the British Standards Institution on the standards that should be applicable to SaMD and AIaMD.
The MHRA’s current position is that the existing medical devices regulations align well with the five principles in the White Paper. The UK MDR 2002 uses a risk-based classification system for medical devices, with corresponding levels of scrutiny depending on the risk posed by the device, and this system will continue under the reformed regulations. However, many AI products that are currently in the lowest risk classification (meaning that they can be placed on the market without an independent conformity assessment) will be up-classified to ensure their safety and efficacy for patients.
The MHRA report also introduces AI Airlock as a ‘regulatory sandbox for AIaMD’, which launched in May 2024. AI Airlock is a collaborative project which brings together expertise from within the MHRA and key partners, including UK Approved Bodies, the NHS and other regulators, to accelerate solutions for the novel regulatory challenges presented by AIaMD.[4]
The UK is also an active participant in international regulatory collaborations, such as the International Medical Device Regulatory Forum (IMDRF), to harmonise AI oversight on a global scale. The MHRA and the US Food and Drug Administration are currently co-chairs of the IMDRF working group on AI and machine learning-enabled medical devices.[5]
Guidance on key risk areas
While the application of AI is already transforming key aspects of the life sciences sector, businesses should be aware of the risks and consider how to address them as the regulatory framework evolves. These include:
- data privacy and security, particularly given the sensitivity of patient data and the use of AI in clinical decision-making;
- intellectual property protection of AI-generated discoveries, which can raise complex questions about authorship and patentability; and
- bias in training data for AI models, which risks perpetuating systemic social and cultural biases, leading to unfair decision-related outcomes for patients and reinforcing global disparities.
To assist organisations in addressing these challenges, the UK Intellectual Property Office has recently updated its guidelines on examining patent applications relating to AI inventions[6] and the Information Commissioner’s Office has published comprehensive guidance on AI and data protection with a focus on transparency, security and fairness.[7]
Looking ahead, the AI Security Institute (AISI), a directorate of the UK Department of Science, Innovation and Technology, intends to evaluate and prepare for the risks that advanced AI poses to national security and public safety.[8] In addition to conducting research, advising on responsible AI development and testing risk mitigation plans, the AISI is expected to play a critical role in shaping future global AI policymaking and governance.
Conclusion
The UK’s pro-innovation regulatory framework is evolving to ensure that successful applications of AI are proportionately balanced against the risks. Life sciences businesses using AI-driven solutions should regularly review their technical and operational processes and follow up-to-date guidance to remain compliant with the applicable regulations, while making the most of AI’s transformative potential for the industry.
Notes
[1] UK government, ‘UK-backed AI companies to transform British cancer care and spark new drug breakthroughs’, www.gov.uk/government/news/uk-backed-ai-companies-to-transform-british-cancer-care-and-spark-new-drug-breakthroughs, last accessed 16 May 2025.
[2] NHS England, ‘The NHS AI Lab’, https://digital.nhs.uk/services/ai-knowledge-repository, last accessed 16 May 2025.
[3] UK government, ‘Impact of AI on the regulation of medical products’, www.gov.uk/government/publications/impact-of-ai-on-the-regulation-of-medical-products, last accessed 16 May 2025.
[4] UK government, ‘AI Airlock: the regulatory sandbox for AIaMD’, www.gov.uk/government/collections/ai-airlock-the-regulatory-sandbox-for-aiamd, last accessed 16 May 2025.
[5] IMDRF, ‘Artificial intelligence/machine learning-enabled working group’, www.imdrf.org/working-groups/artificial-intelligencemachine-learning-enabled, last accessed 16 May 2025.
[6] UK government, ‘Examining patent applications relating to artificial intelligence (AI) inventions’, www.gov.uk/government/publications/examining-patent-applications-relating-to-artificial-intelligence-ai-inventions, last accessed 16 May 2025.
[7] ICO, ‘Guidance on AI and data protection’, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/, last accessed 16 May 2025.
[8] AISI, ‘The AI Security Institute’, https://www.aisi.gov.uk, last accessed 16 May 2025.