Left to our devices: connected care and its legal challenges
Shantanu Mukherjee
Founder, Ronin Legal, Dubai
shantanu@roninlegalconsulting.com
Varun Alase
Associate, Ronin Legal, Bengaluru East, Karnataka, India
varun@roninlegalconsulting.com
Wellness vs SaMD
At first glance, you might think that the distinction between a wellness gadget and a regulated medical device is straightforward. Wellness tools are designed to support a healthy lifestyle, whereas medical devices diagnose, treat, or prevent disease. In practice, however, the dividing line can be crossed with a single new feature, a marketing claim, or an AI-generated insight.
Regulators such as the United States Food and Drug Administration (FDA) and European Union authorities focus on intended use, not the form factor. The use of Artificial Intelligence (AI) in healthcare can also trigger classification as Software as a Medical Device (SaMD) or, more recently, AI as a Medical Device (AIaMD).
If a smartwatch counts steps and encourages activity, it typically falls within general wellness, often subject to little or no medical device oversight. Once the same device claims to detect atrial fibrillation, adjust insulin dosage, or predict exacerbations of heart failure, it becomes a medical device or SaMD and must satisfy stringent safety, performance, and quality system requirements.
For digital health products, the user interface and AI outputs can quietly reshape that intended use. Risk-scoring dashboards, colour-coded alerts (‘high risk of stroke in the next 24 hours’), or language implying diagnosis or treatment will all support a regulator’s conclusion that the product is functioning as a SaMD.
Even if marketing avoids diagnosis language, UX patterns that encourage clinicians to rely on algorithmic outputs rather than independent judgement may negate arguments that the product is merely informational or ‘general wellness’.
To remain outside medical device regulation, decision-support and wellness tools must allow clinicians to review the underlying basis of recommendations, avoid claims of substituting for approved devices, and be validated without implying a clinical performance guarantee.
For hospital counsel and product lawyers, design reviews must therefore include not only legal disclaimers but also the tone of notifications, labels in dashboards, and the way AI explanations are presented.
Hospital integration and the puzzle of liability
Once connected devices feed data directly into an electronic medical record (EMR), the allocation of risk between device manufacturers, platform providers, and hospitals becomes a central governance issue rather than a back-end IT detail. Technical integration usually involves gateways, middleware, and application programming interfaces (APIs) that translate device data into HL7 (Health Level Seven) or FHIR (Fast Healthcare Interoperability Resources) formats, map it to the correct patient record, and surface it inside the clinician's workflow. Each hop in that chain creates a potential point of failure and therefore a potential defendant.
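By way of illustration, the Python sketch below shows what one hop in that translation chain can look like: a raw wearable heart-rate reading converted into a minimal FHIR R4 Observation and matched to a patient record. The field names, device identifiers, and `KNOWN_PATIENTS` registry are hypothetical assumptions for illustration, not any particular vendor's interface.

```python
from datetime import datetime, timezone

# Hypothetical registry mapping device serial numbers to EMR patient IDs.
KNOWN_PATIENTS = {"WEARABLE-00123": "Patient/emr-4711"}

def to_fhir_observation(device_id: str, heart_rate_bpm: int, taken_at: datetime) -> dict:
    """Translate a raw wearable reading into a minimal FHIR R4 Observation.

    Raises if the device cannot be mapped to a patient record - the
    'patient-device mismatch' failure mode discussed below.
    """
    patient_ref = KNOWN_PATIENTS.get(device_id)
    if patient_ref is None:
        raise ValueError(f"No patient mapping for device {device_id!r}")
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {  # LOINC 8867-4 is the standard code for heart rate
            "coding": [{"system": "http://loinc.org", "code": "8867-4",
                        "display": "Heart rate"}]
        },
        "subject": {"reference": patient_ref},
        # Normalise to UTC to avoid the time-zone errors discussed below.
        "effectiveDateTime": taken_at.astimezone(timezone.utc).isoformat(),
        "valueQuantity": {"value": heart_rate_bpm, "unit": "beats/min",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
        "device": {"display": device_id},
    }

obs = to_fhir_observation("WEARABLE-00123", 72, datetime.now(timezone.utc))
```

Every line of that mapping is a place where specification-compliant hardware can still produce a wrong entry in the record, which is why the contract should say who validates it.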
From a liability perspective, at least three broad scenarios emerge:
- If the sensor is inaccurate or the algorithm is flawed, the manufacturer (and sometimes the SaMD developer) sits in the primary line of product liability risk. Courts and regulators increasingly scrutinise clinical validation, post-market surveillance, and responses to known defects.
- Misconfigured interfaces, patient–device mismatches, dropped alerts, or time-zone errors may shift responsibility towards the hospital and its IT or integration partners, especially where the device performed within specification, but the hospital system handled the data incorrectly.
- Where an alert is correctly generated and displayed but not acted upon, liability risk frequently moves towards clinicians and institutions under conventional negligence and standard-of-care analysis, although questions of ‘alert fatigue’ and AI over-reliance complicate that assessment.
Well-drafted integration agreements therefore need more than boilerplate. They should articulate data quality responsibilities, specify who is accountable for mapping and validation, address change management (software updates, configuration changes, firmware patches), and define incident response and root-cause analysis processes.
Recommendations by professional bodies stress quality assurance of data transfer, alert design, and clear provenance tracking of patient-generated health data when it enters the clinical record.
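The sketch below suggests, in simplified Python, what such quality assurance and provenance tracking might look like at the point where a device reading enters the clinical record. The plausibility thresholds and field names are illustrative assumptions, not clinical or regulatory standards.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeviceReading:
    device_id: str
    patient_id: str
    metric: str
    value: float
    taken_at: datetime

@dataclass
class ProvenanceRecord:
    # Who/what produced the data and when it entered the record - the
    # 'clear provenance tracking' that professional guidance calls for.
    source_device: str
    received_at: datetime
    checks_passed: list[str] = field(default_factory=list)

def validate_and_tag(reading: DeviceReading) -> ProvenanceRecord:
    """Run basic quality-assurance checks before filing a reading to the EMR."""
    checks = []
    if reading.taken_at.tzinfo is None:
        raise ValueError("Timestamp lacks a time zone; refusing to file")
    checks.append("timezone-aware timestamp")
    # Illustrative plausibility range only, not a clinical standard.
    if reading.metric == "heart_rate" and not (20 <= reading.value <= 300):
        raise ValueError(f"Implausible heart rate: {reading.value}")
    checks.append("plausibility range")
    return ProvenanceRecord(
        source_device=reading.device_id,
        received_at=datetime.now(timezone.utc),
        checks_passed=checks,
    )

prov = validate_and_tag(DeviceReading(
    device_id="WEARABLE-00123", patient_id="emr-4711",
    metric="heart_rate", value=72.0, taken_at=datetime.now(timezone.utc),
))
```

A record of which checks a reading passed, and when, is precisely the evidence parties will reach for when the liability scenarios above are litigated.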
Data governance: interoperability, cybersecurity, and secondary use
Connected care is fundamentally a data project, and its legal challenges are anchored in how that data is collected, moved, secured, and reused. Integrating data from bedside monitors, wearables, home devices, and mobile apps into EMR systems can eliminate manual entry, reduce transcription errors, and enable faster, data-driven decisions. Yet every new data source also enlarges the attack surface and complicates compliance with privacy and cybersecurity regulations.
Interoperability initiatives rely on standards such as HL7 and FHIR, but legal risk turns on governance, not only technology. Integration projects must address:
- Data minimisation and purpose limitation: Under frameworks like the General Data Protection Regulation (GDPR) and similar health data rules, providers and vendors must restrict collection to what is necessary and clearly define primary and secondary purposes.
- Cross-border transfers: Cloud-hosted analytics, global support teams, and distributed data centres raise questions about data export, adequacy regimes, standard contractual clauses, and localisation mandates.
- Cybersecurity and resilience: Continuous streams of patient-generated data demand robust encryption, network segmentation, and vulnerability management. Professional guidelines underscore the need for secure transfer, authentication, and monitoring when connecting mobile health devices to EMRs.
Secondary use of patient-generated health data, whether for algorithm improvement, research, or commercialisation, sits at the centre of many connected-care business models. Legal frameworks typically require transparent, specific consent or a strong alternative legal basis, as well as technical safeguards like anonymisation or pseudonymisation. However, genuine anonymisation is challenging in high-dimensional wearable datasets, which can often be re-identified when combined with other information.
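To make the distinction concrete, the following minimal sketch shows pseudonymisation via a keyed hash of the direct identifier. The key and field names are assumptions for illustration; note that, as discussed above, the wearable measurements themselves remain quasi-identifiers even after the direct identifier is replaced.

```python
import hashlib
import hmac

# The secret key must be held separately from the dataset; anyone holding
# both can re-link pseudonyms to patients, which is why this counts as
# pseudonymisation, not anonymisation, in GDPR terms.
PSEUDONYMISATION_KEY = b"replace-with-a-managed-secret"  # illustrative only

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYMISATION_KEY, patient_id.encode(),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "emr-4711", "steps": 10432, "resting_hr": 58}
shared = {**record, "patient_id": pseudonymise(record["patient_id"])}
# The step counts and heart-rate patterns still form a behavioural
# fingerprint that may permit re-identification when combined with
# other datasets.
```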
Hospitals and vendors need data protection officers, data governance boards, or equivalent structures to vet proposed uses, monitor data sharing arrangements, and ensure that ‘data exhaust’ from devices is not quietly repurposed in ways that undermine patient trust.
AI on wearable data: explainability, bias, and reliance
AI-driven analytics are the engine of many connected-care propositions: predicting decompensation, triaging patients, personalising interventions, or optimising workflows. Continuous data streams from wearables make these models powerful, but also magnify traditional AI concerns around explainability, bias, and over-reliance.
Studies of AI in healthcare and wearable ecosystems have documented material performance disparities across demographic groups, driven by biased training data, unrepresentative cohorts, and proxies that correlate with socio-economic status or race.
In legal terms, deploying such models can create exposure under anti-discrimination laws and professional negligence standards, particularly where systematically worse outcomes are produced for protected groups. For hospitals, this calls for structured model governance: impact assessments, fairness testing, and ongoing performance monitoring across sub-populations.
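A minimal sketch of the kind of per-subpopulation check a fairness-testing programme might run appears below; the record structure is an assumption, and a real programme would track sensitivity, specificity, and calibration across groups, not just accuracy.

```python
from collections import defaultdict

def performance_by_group(records: list[dict]) -> dict[str, float]:
    """Compute per-subgroup accuracy as one input to fairness testing.

    Each record is assumed to carry a demographic 'group' label, the
    model 'prediction', and the ground-truth 'outcome'.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["outcome"])
    return {g: correct[g] / total[g] for g in total}

results = performance_by_group([
    {"group": "A", "prediction": 1, "outcome": 1},
    {"group": "A", "prediction": 0, "outcome": 0},
    {"group": "B", "prediction": 1, "outcome": 0},
    {"group": "B", "prediction": 1, "outcome": 1},
])
# A material gap between groups (here 1.00 vs 0.50) is the kind of
# disparity that should trigger review under a model-governance policy.
```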
Explainability is equally important. The ‘black box’ character of many models makes it difficult for clinicians to understand why a given alert or risk score is generated. Emerging guidance emphasises the need for explainable AI techniques that provide meaningful, clinician-facing rationales, not merely confidence scores. From a liability perspective, design choices that enable independent clinical review of AI recommendations help support the argument that the software is a decision-support tool, not a de facto decision-maker – an argument also reflected in regulatory guidance on clinical decision support.
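One simple way to surface a clinician-facing rationale rather than a bare score, sketched here for a hypothetical linear risk model with illustrative weights and features, is to show each input's signed contribution to the output:

```python
# Illustrative weights for a hypothetical linear risk model - not a
# validated clinical model.
WEIGHTS = {"resting_hr": 0.04, "hr_variability": -0.06, "age": 0.02}

def explain(features: dict[str, float]) -> list[tuple[str, float]]:
    """Rank features by the magnitude of their contribution to the score."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

for feature, contribution in explain(
        {"resting_hr": 85, "hr_variability": 22, "age": 67}):
    print(f"{feature}: {contribution:+.2f}")
```

Presenting the drivers of an alert in this way gives the clinician something to review and, if appropriate, override, which is the essence of the decision-support argument.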
Finally, clinician reliance on AI raises complex questions of standard of care. As AI tools become more common, a failure to use them in specific high-risk contexts might itself be criticised as negligent. At the same time, blind adherence to algorithmic outputs, especially where they conflict with observable clinical signs, may also be negligent.
This makes it important to embed clear policies on when AI outputs must be considered, when they may be overridden, and how overrides should be documented. Professional literature stresses the need for transparency, accountability, and shared liability models between developers and deploying institutions.
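Operationally, such a policy implies an audit trail. The sketch below shows one way an override record might be captured; the field names and log destination are assumptions for illustration only.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class OverrideEvent:
    clinician_id: str
    ai_recommendation: str
    action_taken: str
    rationale: str  # policy: every override must be documented
    recorded_at: str

def log_override(clinician_id: str, recommendation: str,
                 action: str, rationale: str) -> str:
    """Write an append-only audit record of a clinician override."""
    event = OverrideEvent(
        clinician_id=clinician_id,
        ai_recommendation=recommendation,
        action_taken=action,
        rationale=rationale,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    line = json.dumps(asdict(event))
    with open("override_audit.log", "a") as fh:  # illustrative storage only
        fh.write(line + "\n")
    return line
```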
Contractual structuring and commercialisation models
All of these issues crystallise in the contracts that sit behind connected-care deployments. Traditional device procurement frameworks – purchase orders for hardware, standard support, and a warranty – are poorly suited to ecosystems where consumer-grade wearables, cloud platforms, and AI analytics are embedded into institutional pathways. Modern commercialisation models for connected MedTech increasingly blend:
- device-as-a-service or subscription models, where devices are provided alongside software, connectivity, and analytics under multi-year service agreements;
- outcome-linked or risk-sharing arrangements, tying fees to reduced readmissions, improved adherence, or other measurable outcomes, which require careful definition, data access rights, and audit mechanisms;
- data partnerships, in which de-identified or pseudonymised patient-generated data is used for research, model training, or commercial insights under separate licensing or collaboration agreements.
Key clauses in such contracts should address:
- clear allocation of responsibility if a product’s classification shifts (eg, from wellness to SaMD), including who bears re-certification costs and who manages regulatory filings and post-market surveillance;
- tailored indemnities covering device defects, algorithmic errors, integration failures, and data breaches, with caps and exclusions aligned to the parties’ roles and insurance coverage;
- detailed provisions on data controllership, processing roles, permitted uses (including secondary use and AI training), cross-border transfers, and data return or deletion on termination;
- contractual minimum-security standards, audit rights, breach notification timelines, and joint incident-management procedures, particularly where data is flowing through multiple vendors and cloud providers;
- obligations to involve clinical stakeholders in design, testing, and roll-out, as well as processes for handling updates that may change risk profiles or regulatory status.
Conclusion
For lawyers advising hospitals or healthtech firms, the commercial opportunity in connected care is inseparable from the legal architecture. Thoughtful contractual design, combined with strong technical and clinical governance, can transform a fragile, liability-prone integration into a sustainable, scalable connected-care ecosystem. The central task is not to eliminate risk altogether, but to allocate it transparently, manage it actively, and ensure that innovation in connected care remains aligned with patient safety, ethical practice, and public trust.