Artificial intelligence and life sciences: a perspective from Singapore

Monday 29 April 2024

Benjamin Gaw
Drew & Napier, Singapore

Current adoption of technology

The marriage of artificial intelligence (AI) and life sciences presents boundless opportunities for innovation, whereby intelligent systems empower researchers, clinicians and patients alike, catalysing shifts in healthcare delivery and outcomes. In Singapore, the integration and adoption of AI and digital technology are making significant strides in the healthcare landscape, thereby reshaping healthcare delivery and patient care.

At the heart of this transformation is Singapore's national health tech agency, Synapxe,[1] which has engaged in several AI-driven diagnostics and treatment initiatives. For instance:[2]

  • In the AI imaging domain, AimSG, which is a collaboration between Synapxe, SingHealth, NTT Data and its partners, is a platform that integrates AI imaging solutions into clinical workflows, thereby enhancing diagnostic precision and efficiency. This platform, which has been piloted at two hospitals in Singapore, supports a wide range of imaging AI models, marking a significant step towards its broader implementation.
  • In the early disease detection and personalised care domain, the Assisted Chronic Disease Explanation using AI ('ACE-AI') is a digital assistant tool made for general practitioners, which leverages neural networks and explainable AI to identify risk factors and assess chronic disease risk, facilitating early illness detection and management.

Beyond diagnostics and treatment, Synapxe, in collaboration with Microsoft, is pioneering Secure GPT for Healthcare Professionals ('Secure GPT').[3] As a generative AI application based on Microsoft's Azure OpenAI Service, Secure GPT seeks to automate routine healthcare tasks. Secure GPT also aims to streamline the generation of patient information summaries and medication changes by leveraging a dedicated healthcare knowledge base and implementing security measures that are aligned with healthcare data requirements. This innovation aims to reduce the time healthcare professionals spend on administrative tasks, allowing them to focus more on patient care.

Further, the national research and development agency, Agency for Science, Technology and Research ('A*STAR') is leading several AI projects that target disease detection, remote patient monitoring, drug discovery and precision medicine.[4] These projects include:

  • developing AI models for detecting eye diseases;
  • using AI for Covid-19 diagnosis and monitoring;
  • improving diabetes care, encompassing risk prediction, ongoing monitoring, and the comprehensive treatment and management of the condition; and
  • leveraging AI for drug discovery efforts and precision medicine.

AI governance in Singapore: a general overview

AI Governance Framework

On the regulatory front, in January 2019, the Infocomm Media Development Authority of Singapore (the 'IMDA') launched the Model AI Governance Framework (the 'Model Framework'). The Model Framework provides detailed and readily implementable guidance to private sector organisations at large to address key ethical and governance issues when deploying traditional AI solutions.[5] The Model Framework was further refined in 2020 to emphasise guiding principles centred on human-centric AI and to ensure that AI decisions are explainable, transparent and fair.

Broadly, the Model Framework delineates the following key governance principles:[6]

  • adapting or establishing internal governance structures and measures to integrate values, assess risks and assign responsibilities related to AI-driven decision-making processes;
  • determining the level of human involvement in AI-augmented decision-making to set the organisation's risk tolerance for AI usage;
  • considering operations management aspects, including data management; and
  • managing stakeholder interaction and communication, with a focus on strategies for engaging stakeholders and handling these relationships effectively.

By explaining how AI systems work, building good data accountability practices, and creating open and transparent communication, the Model Framework aims to promote public understanding and trust in AI technology.

Proposed framework for generative AI

In response to the development of generative AI technology, in January 2024, in a collaborative effort with the AI Verify Foundation, the IMDA undertook a public consultation for a new expanded framework.[7] This initiative, the Proposed Model AI Governance Framework for Generative AI, recognises the development of generative AI and its significant impact and potential risks, and proposes a framework organised into nine strategic dimensions to foster a comprehensive and trustworthy AI ecosystem.

Broadly, these strategic dimensions aim to:[8]

  1. ensure accountability;
  2. enhance data integrity;
  3. promote trusted AI development and deployment;
  4. establish an incident reporting mechanism;
  5. advocate for rigorous testing and assurance practices;
  6. bolster security;
  7. maintain content provenance;
  8. accelerate safety-aligned research and development; and
  9. utilise AI for the public good.

The proposed strategic dimensions also seek to further the core principles of accountability, transparency, fairness, robustness and security, and together underscore the need for policy-makers to work with industry, researchers and like-minded countries in order to develop a trusted AI ecosystem for the public good.

International standard

On the international front, there is also a growing effort to align jurisdiction-specific frameworks and standards, responding to the tension between the jurisdiction-specific nature of AI governance frameworks and the inherently global spread of AI technology. In this regard, in 2024, the Association of Southeast Asian Nations (ASEAN) member states launched the ASEAN Guide on AI Governance and Ethics (the 'ASEAN AI Guide').[9]

The ASEAN AI Guide is built around seven principles aimed at ensuring trust in AI and the design, development and deployment of ethical AI systems:[10]

  1. transparency and explainability;
  2. fairness and equity;
  3. security and safety;
  4. robustness and reliability;
  5. human-centricity;
  6. privacy and data governance; and
  7. accountability and integrity.

The ASEAN AI Guide follows the same governance principles as the Model Framework, and also sets out regional-level recommendations for policy-makers to implement, such as:[11]

  • setting up an ASEAN Working Group on AI governance to spearhead and supervise AI governance initiatives in the region;
  • adapting the ASEAN AI Guide to take into account the governance of generative AI; and
  • compiling a list of use cases showing the practical implementation of the ASEAN AI Guide by organisations operating in ASEAN.

AI governance in Singapore: healthcare-specific guidelines

The Artificial Intelligence in Healthcare Guidelines (the 'AIHGle') were jointly published by the Ministry of Health (MOH), the Health Sciences Authority (HSA) and Integrated Health Information Systems (IHIS) in October 2021.[12] These guidelines act as a resource for healthcare AI developers and implementers, offering direction that aligns with the HSA's existing regulations for AI medical devices ('AI-MDs').

The guidelines are based on principles adapted from the Personal Data Protection Commission and the Monetary Authority of Singapore, which seek to ensure the safe provision of AI services in healthcare. The key principles espoused in the guidelines for healthcare AI developers and implementers include:[13]

  • fairness: AI-MD development and implementation should avoid discriminatory impact across different demographics;
  • responsibility: developers and implementing organisations are accountable for the design, use and outcomes of AI-MDs;
  • transparency: end-users should be informed about their interactions with AI-MDs;
  • explainability: AI-MD decisions should be understandable and reproducible, meeting end-user expectations; and
  • patient-centricity: the design and use of AI-MDs should prioritise patient safety and wellbeing.

Interaction between developers and implementers

The AIHGle recognise that the roles of developers and implementers are interconnected and not always distinct. Indeed, it is common for certain organisations, such as hospitals, to both develop and implement AI-MDs in-house for patient care. This dual role underscores the collaborative nature required in the development and implementation phases of AI-MDs.

The AIHGle note that the effective application of these guidelines requires ongoing and iterative collaboration between developers and implementers. While developers are in charge of designing, building and testing AI technology, implementers are responsible for its practical application, monitoring and review in healthcare settings. This collaborative process ensures that AI-MDs are effectively integrated into healthcare delivery, improving patient care while meeting safety and regulatory requirements.

Further, to address potential gaps, overlaps and responsibilities, particularly in scenarios in which AI-MDs are co-developed, the guidelines recommend entering into service level agreements (SLAs), which outline the specific roles and expectations of both developers and implementers. SLAs ensure clarity and mutual understanding about the deployment, maintenance and accountability of AI-MDs, allowing for a more seamless integration of AI technology in healthcare and mitigating risks associated with their use.

Use of AI in human biomedical research: the public consultation approach

While AI offers substantial advantages, its application in the life sciences sector also presents notable challenges and concerns. Such concerns include ethical considerations surrounding the use of personal health data; the risk of algorithmic bias; and the impact on patient consent and privacy.

In line with Singapore's consultative approach towards regulation, in June 2023, the Bioethics Advisory Committee conducted an extensive public consultation, which sought to address the ethical, legal and social implications inherent in the use of AI in human biomedical research.[14] The key themes espoused in the public consultation include the following:

Responsible data usage

AI systems rely heavily on large datasets for training and operation. The concern is that, without responsible data usage, there is a risk of perpetuating biases, inaccuracies and injustices in research outcomes. AI can inadvertently amplify existing biases in the data it is trained on, leading to discriminatory outcomes or misleading research results.[15]

Data ownership, custodianship and stewardship

The distinctions between data ownership, custodianship and stewardship turn on who has the right to access, use and manage the data. The concern is ensuring that all parties involved in the data lifecycle adhere to ethical standards so that data is used only for its intended and consented purposes. This is difficult in the complex ecosystems of biomedical research, where data may be shared across borders and among institutions.[16]

Data privacy, accessibility and security

The balance between maintaining individual privacy and enabling access to data for research purposes is delicate. The concern is that inadequate measures could lead to the unauthorised access and misuse of sensitive personal health information, potentially harming individuals and eroding public trust.[17]

Data anonymisation, de-identification and re-identification

While anonymisation techniques are used to protect individuals' privacy in biomedical research, the concern arises from the sophisticated capabilities of AI to re-identify individuals from large datasets. This could undermine privacy protections and expose individuals to the risk of privacy breaches.[18]


Consent

As AI is constantly evolving, there are challenges with traditional consent mechanisms, which may not fully encompass the future or secondary uses of data at the time of collection.[19]

Responsibility to the public in data sharing for research

Data sharing is essential for advancing research, but raises concerns about ensuring that such sharing respects privacy, is conducted ethically and benefits all stakeholders, including the research participants.[20]

Use and storage of legacy and posthumous data

The ethical use of data from deceased individuals involves respecting the wishes of the deceased and their families, while also taking into account the implications for privacy and consent. The concern is how to balance these considerations with the potential benefits of using the data for research.[21]

Ethical issues

In addition to the above, there are also ethical issues unique to AI.

Transparency, explainability and justifiability of AI

AI systems, particularly those that operate as 'black box' models, often lack transparency and explainability. This obscurity poses significant challenges in understanding how AI models make decisions, complicating efforts to ensure that these decisions are fair, unbiased and ethically justified.[22]

Responsibility to comply with best standards to ensure the clinical safety of AI models

Biomedical researchers and AI developers must adhere to ethical responsibilities and best practices, ensuring that AI research components, like data imputation, model selection and validation, meet local and international guidelines, such as MOH's AI guidelines and HSA's regulations.[23]

Human agency and oversight in AI

The delegation of decision-making processes to AI systems challenges the principle of human agency and oversight in clinical settings. Determining the appropriate level of human intervention and oversight in AI-driven decisions is an ethical issue.[24]

Equitable access to AI technology in research

Ensuring equitable access to the benefits of AI in healthcare across people of different nationalities, genders, ages, races and languages remains a significant challenge.[25]

Concept of 'AI model security'

Given the risk of data breaches and the manipulation of AI decision-making, protecting AI models from malicious attacks and ensuring the security of the data used to train them are critical considerations.[26]


Conclusion

The development and integration of AI into the life sciences sector present opportunities and new frontiers in advancing medical research and patient care. As discussed above, innovations like AimSG and ACE-AI, developed by Synapxe in collaboration with healthcare institutions and technology companies, enhance diagnostic accuracy and personalise patient care through AI-driven imaging solutions and digital assistants. A*STAR-led projects and partnerships with technology giants, like Microsoft, push the boundaries in disease detection, remote monitoring, drug discovery and routine healthcare task automation.

On the regulatory front, in order to balance the numerous concerns arising in the development of AI technology within the life sciences space, several regulators and healthcare authorities (eg, the IMDA, MOH and HSA) have moved quickly to publish comprehensive guidelines to guide relevant stakeholder action. Indeed, the Model AI Governance Framework and AIHGle, which apply to private sector and healthcare sector entities, respectively, seek to ensure ethical, transparent and fair AI implementation in healthcare. These efforts are supplemented by international collaboration, as seen in the ASEAN Guide on AI Governance and Ethics, aiming for regional ethical AI standards.

Further, as evinced in the public consultation efforts in relation to human biomedical research, the deployment of AI in healthcare also raises ethical, legal and social concerns (eg, data privacy, algorithmic bias, patient consent and other unique challenges posed by AI technology). These necessitate and underscore the importance of continuous dialogue and the development of ethical standards to safeguard against potential risks, and ensure the responsible use of AI in enhancing healthcare delivery.



[1] Previously known as the Integrated Health Information Systems (IHIS).

[2] Adam Ang, 'Behind Singapore's Widespread AI Adoption in Public Health' (Healthcare IT News, 25 January 2024) www.healthcareitnews.com/news/asia/behind-singapores-widespread-ai-adoption-public-health accessed 18 April 2024.

[3] Zhaki Abdullah, 'MOH Agency IHiS, Microsoft to Develop AI Tool to Help Healthcare Workers in Singapore' The Straits Times (Singapore, 8 July 2023) www.straitstimes.com/singapore/health/moh-agency-microsoft-to-develop-ai-tool-for-healthcare-workers-in-s-pore accessed 28 March 2024.

[4] A*STAR, AI in Healthcare www.a-star.edu.sg/htco/ai3/ai-in-healthcare accessed 18 April 2024.

[5] Infocomm Media Development Authority of Singapore, Model Artificial Intelligence Governance Framework www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf accessed 28 March 2024.

[6] Ibid, p 20.

[7] Infocomm Media Development Authority of Singapore, Proposed Model Artificial Intelligence Governance Framework for Generative AI https://aiverifyfoundation.sg/downloads/Proposed_MGF_Gen_AI_2024.pdf accessed 28 March 2024.

[8] Ibid, pp 3–5.

[9] Association of Southeast Asian Nations, ASEAN Guide on AI Governance and Ethics https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf accessed 28 March 2024.

[10] Ibid, p 3.

[11] Ibid, p 5.

[12] Ministry of Health, Artificial Intelligence in Healthcare Guidelines (2021) www.moh.gov.sg/docs/librariesprovider5/eguides/1-0-artificial-in-healthcare-guidelines-(aihgle)_publishedoct21.pdf accessed 28 March 2024.

[13] Ibid, pp 6–7.

[14] Bioethics Advisory Committee, Ethical, Legal and Social Issues Arising from Big Data and Artificial Intelligence Use in Human Biomedical Research – A Consultation Paper (27 June 2023) www.reach.gov.sg/docs/default-source/reach/reach-files/public-consultations/2023/moh/public-consultation-on-big-data-and-artificial-intelligence-in-human-biomedical-research/bdai-public-consultation-paper_27-june.pdf accessed 28 March 2024.

[15] Ibid, p 31.

[16] Ibid, p 41.

[17] Ibid, p 54.

[18] Ibid, p 62.

[19] Ibid, p 72.

[20] Ibid, p 80.

[21] Ibid, p 86.

[22] Ibid, pp 94–98.

[23] Ibid, pp 98–99.

[24] Ibid, pp 99–101.

[25] Ibid, pp 101–102.

[26] Ibid, pp 102–103.