Digital therapeutics and AI-driven health apps: regulatory and intellectual property considerations
Michaela Herron
Mason, Hayes & Curran, Dublin
mherron@mhc.ie
James Gallagher
Mason, Hayes & Curran, Dublin
jamesgallagher@mhc.ie
Hazel McDwyer
Mason, Hayes & Curran, Dublin
hmcdwyer@mhc.ie
Introduction
Determining a product’s classification is arguably one of the most important steps in its initial assessment. Whether a product is considered a medical device, an in vitro diagnostic device (IVD) or a consumer product dictates the specific product safety and compliance legislation that applies.
For products such as artificial intelligence (AI)-driven health apps, establishing whether the product falls within the scope of the European Union’s Medical Device Regulation (MDR)[1] (for the purpose of this article we are not examining the In Vitro Diagnostic Regulation (IVDR)[2]) is often the first key regulatory decision before market placement. Incorrect classification can lead to serious consequences, including regulatory delays, compromised patient safety, increased compliance risks or even forced market withdrawal. When bringing a product to market, there are numerous regulatory frameworks to consider, along with additional factors, such as intellectual property (IP) and data protection requirements, as discussed below.
The growing use of digital therapeutics (DTx) and AI-driven health apps adds further regulatory complexity, requiring careful consideration to ensure regulatory compliance, safe market access and user safety.
Is it a medical device?
The first question when bringing a digital therapeutic or AI-driven health app to market is whether it falls within the scope of the EU’s medical device regime. Addressing this requires a thorough assessment of the product, including its accessories, hardware and software.
The MDR
To determine whether a DTx or AI-driven health app qualifies as a medical device, it must be assessed against the definition of a medical device under Article 2(1) of the MDR. Article 2(1) defines a ‘medical device’ as any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:
- diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease;
- diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability;
- investigation, replacement or modification of the anatomy or of a physiological or pathological process or state;
- providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations,

and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.
As part of this, an assessment must determine whether the product is intended to be used for a medical purpose. The product’s functionality and any claims the manufacturer intends to make about the product must also be carefully considered.
If software satisfies the definition of a medical device, it must comply with the MDR’s requirements to be placed on the EU market, including the general safety and performance requirements set out in Annex I. Those requirements are applied according to the device’s risk classification, determined in accordance with the classification rules set out in Annex VIII. Medical devices are classified according to their level of risk, from Class I (low risk) through Class IIa and Class IIb to Class III (high risk).
Annex VIII, Rule 11 is the primary rule for classifying software as a medical device (SaMD). Software intended to provide information used to take decisions with diagnostic or therapeutic purposes, or to monitor physiological processes, is classified as Class IIa by default; the classification is elevated to Class IIb or Class III depending on the potential impact of the decisions the software informs. Certain Class I devices and all devices classified as Class IIa or higher require a third-party conformity assessment by a notified body before being placed on the market. Because the MDR’s classification rules specifically address SaMD, many apps that qualify as medical devices will require such an assessment.
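By way of illustration only, the core decision logic of Rule 11 can be sketched as follows. This is a simplified paraphrase of the rule, not a substitute for a formal classification exercise under Annex VIII, and the function and parameter names are our own:

```python
from enum import Enum

class MdrClass(Enum):
    I = "Class I"
    IIA = "Class IIa"
    IIB = "Class IIb"
    III = "Class III"

def classify_under_rule_11(
    informs_diagnostic_or_therapeutic_decisions: bool,
    decisions_may_cause_death_or_irreversible_harm: bool,
    decisions_may_cause_serious_harm_or_surgery: bool,
    monitors_physiological_processes: bool,
    monitors_vital_parameters_in_immediate_danger: bool,
) -> MdrClass:
    """Simplified paraphrase of MDR Annex VIII, Rule 11 for software.

    A real classification must be made against the full text of the
    rule (and the other Annex VIII rules), based on the device's
    intended purpose, ideally with regulatory advice.
    """
    if informs_diagnostic_or_therapeutic_decisions:
        if decisions_may_cause_death_or_irreversible_harm:
            return MdrClass.III
        if decisions_may_cause_serious_harm_or_surgery:
            return MdrClass.IIB
        return MdrClass.IIA
    if monitors_physiological_processes:
        if monitors_vital_parameters_in_immediate_danger:
            return MdrClass.IIB
        return MdrClass.IIA
    # All other software falls to be classified as Class I under Rule 11.
    return MdrClass.I
```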
Placing an app on the EU market that is later found to be an unregulated software medical device may result in enforcement action, including possible forced recall and reputational damage.
A device’s intended purpose is particularly relevant in assessing whether it qualifies as a medical device. Developers must therefore clearly articulate the intended purpose of their app with regard to the MDR’s definition of a medical device.
The EU AI Act
Another legislative regime requiring careful consideration is the EU’s AI Act (Regulation (EU) 2024/1689).
The Act adopts a risk-based approach to the regulation of AI systems, imposing a range of obligations on providers and deployers, including requirements relating to transparency, control and risk management, training, support and recordkeeping. Compliance obligations are dictated by the risk category into which a system falls, namely:
- unacceptable risk;
- high risk;
- limited risk; and
- minimal or no risk.
Before high-risk AI systems can be placed on the EU market, they must undergo a stringent conformity assessment to ensure that they meet all of the requirements set out in the Act.
If software qualifies as a medical device under the MDR (or the IVDR), it is likely to be treated as a high-risk AI system, and the requirements set out in Annex IV of the AI Act will apply. Annex IV requires providers of high-risk AI systems to prepare detailed technical documentation containing, among other things, a general description of the AI system and its intended purpose, how it interacts with other systems and the hardware on which it runs. Manufacturers of SaMD and DTx that incorporate AI should ensure compliance with the Act’s requirements for high-risk AI systems.
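As a rough sketch of how a development team might structure that documentation internally, the skeleton below paraphrases a handful of the Annex IV headings. The field names are illustrative assumptions; the Annex itself sets out the full mandatory content:

```python
from dataclasses import dataclass

@dataclass
class AnnexIVTechnicalDocumentation:
    """Illustrative skeleton only: the field names paraphrase some of
    the Annex IV headings and are not exhaustive."""
    general_description: str        # the AI system and its intended purpose
    interaction_with_other_systems: str
    hardware_used: str
    development_process: str        # design specifications and methodologies
    risk_management_summary: str
```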
Even if a product does not qualify as a device under the MDR (or the IVDR), it is important to be aware that obligations under the AI Act may still arise.
Cybersecurity
The revised Network and Information Security Directive (Directive (EU) 2022/2555), known as the NIS2 Directive, is part of a package of measures to improve the cybersecurity of critical organisations. It requires an overhaul of the approach to cybersecurity, with leadership accountability at its core. The NIS2 Directive is currently being transposed into the national law of EU Member States, so its exact application will vary from country to country.
The NIS2 Directive will apply to entities in sectors critical to the EU’s security and economy, including health, food and manufacturing.
Key issues for life sciences businesses
Some of the main challenges facing life sciences firms in complying with the NIS2 Directive are as follows:
- Registration: in-scope entities will need to register with the national competent authority in each Member State in which they are established.
- Risk management measures: each Member State will establish risk management measures (RMMs) that organisations must implement, as appropriate. The management body of each organisation, such as the board of directors, must approve and oversee the implementation of the RMMs within its own organisation.
- Supply chain due diligence: as part of their RMMs, organisations will be required to carry out due diligence of their supply chain security.
- Incident reporting: in-scope life sciences organisations will be obliged to report significant cybersecurity incidents to the relevant competent authority. An early warning must be given within 24 hours of the organisation becoming aware of the incident, a fuller incident notification must follow within 72 hours, and a final report must be issued within one month (the timeline is illustrated in the sketch after this list).
- Training: management bodies must receive training on fulfilling their RMM responsibilities. All staff should also receive cybersecurity training.
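As a practical illustration, the incident-reporting timeline can be sketched as follows, measured from the moment of awareness. The final-report deadline of one month is approximated here as 30 days, and national transpositions may add further detail:

```python
from datetime import datetime, timedelta

def nis2_reporting_deadlines(became_aware_at: datetime) -> dict[str, datetime]:
    """Sketch of the NIS2 significant-incident reporting timeline,
    measured from the moment the organisation becomes aware of the
    incident. The one-month final-report deadline is approximated
    as 30 days."""
    return {
        "early warning": became_aware_at + timedelta(hours=24),
        "incident notification": became_aware_at + timedelta(hours=72),
        "final report": became_aware_at + timedelta(days=30),
    }

# Example: an incident discovered at 09:00 on 1 July 2025.
for report, due in nis2_reporting_deadlines(datetime(2025, 7, 1, 9, 0)).items():
    print(f"{report}: due by {due:%d %b %Y %H:%M}")
```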
The General Data Protection Regulation (GDPR)
The EU GDPR[3] applies to any processing of personal data, that is, any information relating to an identified or identifiable individual. In the context of digital health apps, this includes any information linked to the app’s user.
The GDPR applies to the processing of personal data if the entity processing it is established in the EU and processes personal data in the context of that establishment’s activities, even if the processing itself takes place outside the EU. It also applies to data processing by entities without an EU establishment if they process the personal data of EU data subjects and the processing relates to (1) goods or services offered to EU data subjects or (2) the monitoring of user behaviour in the EU.
Digital health apps will often involve the processing of health data, which falls squarely within the remit of the GDPR. Health data is ‘special category data’ under the GDPR and attracts a higher level of protection: processing it requires both a lawful basis and a separate specific condition for processing. In most cases, this means that the processing of health data must be based on the individual’s explicit consent. The other requirements of the GDPR also apply, including the obligations to be transparent about the processing taking place, to process only the data that is necessary and not to retain data for longer than necessary.
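A minimal sketch of this two-layer test is set out below. The function and parameter names are our own illustrations, and a real compliance assessment is considerably broader:

```python
def may_process_health_data(
    has_lawful_basis: bool,
    has_explicit_consent: bool,
    other_special_category_condition_met: bool = False,
) -> bool:
    """Minimal sketch of the two-layer GDPR test for health data:
    a lawful basis under Article 6 AND a condition under Article 9(2),
    most commonly explicit consent for a consumer health app. A real
    assessment also covers transparency, data minimisation, retention
    and security obligations, among others."""
    special_category_condition = (
        has_explicit_consent or other_special_category_condition_met
    )
    return has_lawful_basis and special_category_condition
```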
Digital health apps commonly collect sensitive user data, including health data; ensuring GDPR compliance is therefore essential.
IP
With DTx and AI-driven health apps, IP protection is increasingly important. AI brings new IP challenges, including the need to protect algorithms and the data used to train models, alongside more traditional IP issues that arise in relation to the development of any new product.
Copyright
For AI-driven health apps and DTx, copyright protects the underlying source code and platform components, such as the graphics, the interface, and platform text and video content. The key test for protection is originality.
Issues can arise with the use of large datasets to train AI models, which may include copyright-protected material. In the EU, the Copyright Directive[4] provides exceptions for the use of copyright works in text and data mining (TDM), defined as ‘any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations’. This is reflected in the Irish Copyright and Related Rights Act 2000 (as amended), which permits TDM for commercial purposes where the author has not expressly reserved their rights. In practice, however, checking whether an author has opted out of the TDM exception can be difficult, and where an author has opted out, the use of their works to train AI models may be restricted.
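One emerging machine-readable opt-out signal is the W3C community group’s draft TDM Reservation Protocol (TDMRep), under which a rights holder can, among other mechanisms, include a meta tag in a web page. The sketch below, using only the Python standard library, checks for that tag; it is a first-pass illustration only, as rights can also be reserved via HTTP headers, a well-known JSON file or contractual terms:

```python
from html.parser import HTMLParser

class TdmReservationParser(HTMLParser):
    """Looks for <meta name="tdm-reservation" content="1"> in a web
    page, one of the opt-out signals proposed by the draft TDM
    Reservation Protocol. An absent tag does not prove that rights
    have not been reserved by other means."""

    def __init__(self) -> None:
        super().__init__()
        self.tdm_reserved = False

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "tdm-reservation":
            self.tdm_reserved = attributes.get("content") == "1"

parser = TdmReservationParser()
parser.feed('<html><head><meta name="tdm-reservation" content="1"></head></html>')
print(parser.tdm_reserved)  # True: TDM rights expressly reserved
```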
Patents
Courts in several jurisdictions have found that an AI system cannot be named as the inventor of an AI-generated invention. If an invention created with the assistance of AI is to be patentable, it is therefore important that there is human direction and that the AI is merely a tool used during the inventive process.
For AI-driven apps, disclosing enough detail about the algorithm or model architecture is essential for a patent to be granted. This can be complex, however, particularly where proprietary training data may include trade secrets. In the race to be first to file patent applications for AI-related inventions, it is important to strike the right balance between patent disclosure and trade secret protection.
Trademarks
A trademark is a sign capable of distinguishing the goods or services of one undertaking from those of another. The names of DTx products, digital health platforms and associated brands can all be protected as trademarks.
As trademark rights are territorial, care should be taken to ensure that a brand name does not infringe third-party trademark rights in each market in which the product will be made available.
IP in contractual arrangements
Digital therapeutics and medical device companies regularly enter into collaboration and research and development agreements with third parties, including academic institutions and business partners. It is essential that those agreements contain watertight confidentiality provisions. Appropriate non-disclosure agreements should be entered into with all third parties from the start of a project.
For digital therapeutics companies using AI, it is important that appropriate data access agreements are negotiated and entered into prior to the use of the technology and commencement of work.
Conclusion
DTx and AI-driven health apps operate within an evolving regulatory landscape. Navigating the various legal frameworks can be challenging, as the regulation of SaMD and DTx often involves overlapping considerations. Each product must be assessed individually to determine the applicable regulatory framework.
Ideally, this assessment should occur in the very early stages of software development. The first step towards an effective assessment involves developing a thorough understanding of the potentially applicable regulatory regimes.
Beyond that, developers should adopt a structured and strategic approach, drawing on legal, regulatory and technical expertise, to ensure that their product is fully compliant with all of the relevant regulatory requirements before its release onto the market.
[1] Medical Device Regulation, Regulation (EU) 2017/745.
[2] In Vitro Diagnostic Regulation, Regulation (EU) 2017/746.
[3] General Data Protection Regulation, Regulation (EU) 2016/679.
[4] Directive (EU) 2019/790.