AI at work: what employers must require, document, and enforce on confidentiality, IP and responsible AI use

Tuesday 21 April 2026

Amrita Tonk
CMS INDUSLAW, New Delhi
amrita.tonk@cms-induslaw.com

Riddhi Jain
CMS INDUSLAW, New Delhi
riddhi.jain@cms-induslaw.com

Shivam Sharma
CMS INDUSLAW, New Delhi
shivam.sharma@cms-induslaw.com

Introduction

Artificial Intelligence (AI) tools have become part of everyday working life across organisations around the world, including India. Employees are using generative AI tools such as ChatGPT, Microsoft Copilot and Google Gemini to draft documents, analyse data, prepare client communications and support a wide range of work tasks – often with genuine productivity gains.

However, while the use of AI has surged, the contractual and policy frameworks governing employees’ use of these tools have not evolved at the same pace. Many employment contracts, confidentiality clauses and HR policies do not specifically address generative AI, even though it has become a workplace reality. This creates potential legal exposure across three critical areas: confidentiality, intellectual property (IP) and responsible use of AI in the workplace.

This article provides practical guidance for Indian employers navigating this new terrain. It covers what to require of employees, what to update in existing contracts and policies, and how to build an enforceable governance framework that keeps pace with the way employees actually work.

Confidentiality: closing the contract gap

The risk of breach of confidentiality at the workplace

In India there is no standalone statute protecting confidential information; the courts commonly rely on contractual frameworks to enforce confidentiality obligations. Confidentiality clauses in employment agreements and non-disclosure agreements (NDAs) have traditionally restricted disclosure of confidential information to identifiable human third parties, and may not contemplate AI platforms.

When an employee inputs client data, unpublished financial information, litigation strategy or personnel details into a public-facing AI tool, they are – in practical and increasingly legal terms – making a disclosure to a third party. This is rarely done with any improper intent, as employees simply do not think of an AI prompt in the same way they would think of sending an external email. This creates a gap between the contractual framework and the practical reality of today’s workplace.

This risk is no longer merely theoretical. In a landmark ruling in United States v Heppner (SDNY, February 2026), a United States federal court held that documents generated by a corporate defendant using a consumer AI tool, Anthropic’s Claude, were not protected by attorney-client privilege or work-product doctrine. The defendant, a former CEO, had independently used the tool to analyse case facts and formulate defence strategies during pre-litigation discussions with the government, without the direct involvement or direction of his legal counsel. The Court found that, by entering privileged information into a consumer AI platform, the defendant had effectively disclosed it to a third party. Critically, the Court reasoned that Anthropic’s consumer terms of service, which permit data retention, review for safety and training purposes, and disclosure to government authorities pursuant to legal process, meant the defendant could not maintain any reasonable expectation of confidentiality in those communications.[1]

While this decision arose in a US litigation context, it serves as a useful reference point for employers worldwide when considering the risks associated with employees’ use of consumer AI tools for sensitive or legally significant work.

What employers must require of employees

Employers may consider requiring employees to:

  1. not input any confidential or privileged information into external or public-facing AI tools – including, but not limited to, client data, internal financial information, HR and personnel details, trade secrets, and unpublished business plans;
  2. use only employer-approved AI platforms when working with any sensitive information;
  3. anonymise or generalise any inputs wherever the use of a public AI tool is unavoidable (an illustrative redaction sketch follows this list); and
  4. report any accidental or suspected disclosure of confidential information via an AI tool promptly to the employer’s designated internal stakeholders.
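For employers that want to operationalise point 3 technically, a minimal redaction step can be applied before any prompt reaches a public AI tool. The Python sketch below is illustrative only: the patterns and the redact helper are assumptions about one possible approach, and a real deployment would typically rely on a dedicated data loss prevention (DLP) or PII-detection layer rather than hand-rolled rules.

```python
import re

# Illustrative patterns only; a production system would use a dedicated
# DLP or PII-detection service rather than hand-rolled regexes.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+91[\s-]?)?\d{10}\b"),
    "[PAN]": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # Indian PAN number format
}

def redact(prompt: str) -> str:
    """Replace obviously identifying strings with neutral placeholders
    before the prompt is sent to an external AI tool."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email priya.shah@client.example re invoice; call 9876543210."))
# -> "Email [EMAIL] re invoice; call [PHONE]."
```

The design point is that redaction happens before any disclosure occurs, mirroring the contractual position that an AI platform is itself a third party.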

What employers must update in contracts and policy

Employers may consider the following steps to close this gap:

  1. updating confidentiality clauses and NDAs to define AI tools and platforms, whether public-facing or otherwise, explicitly as third parties to whom disclosure of confidential information is restricted;
  2. issuing a standalone 'Acceptable AI Use Policy' that identifies categories of information which must never be entered into external AI tools, specifies which tools are approved for work use, and makes clear that breach of confidentiality via an AI tool will be treated as misconduct, subject to disciplinary action; and
  3. classifying AI-related confidentiality breaches as misconduct in standing orders or HR policies, with consequences calibrated to the severity of the breach, up to and including termination where the breach is serious or deliberate.

Intellectual property: ownership, disclosure, and infringement risk

The uncertainty over ownership of AI-generated output

Intellectual property raises distinct yet equally important concerns. Under the Indian Copyright Act, 1957, copyright protection requires human skill and creativity.[2]

Moreover, in the absence of an agreement to the contrary, employers own the intellectual property created by their employees in the course of employment.[3] However, purely AI-generated work produced by employees without meaningful human creative input may not attract copyright protection at all, leaving the work product in a legal grey area over which the employer may not be able to assert a claim.[4]

Risk of infringement of third-party copyrights

Separately, AI models are trained on large third-party datasets and may, in certain circumstances, reproduce content in ways that expose employers to third-party copyright infringement claims: for example, where the reproduction is not authorised, does not fall within the fair dealing exceptions, or fails to credit the original authors. This is a risk that most employers do not yet factor into their vendor or employment frameworks.

What employers must require of employees

Employers may consider requiring employees to:

  1. disclose their use of AI tools in all significant work product, including client deliverables and regulatory submissions or filings (a sketch of a simple disclosure record follows this list);
  2. review, edit and add meaningful human creative input to all AI-generated output before it is used, submitted or filed; and
  3. not represent AI-generated work as entirely their own where originality is a contractual, professional or regulatory requirement.
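To support the disclosure obligation in point 1, some employers may attach a structured record to each significant deliverable. The dataclass below is a hypothetical sketch: the AIUseDisclosure name and its fields are assumptions about what such a record might capture, not a prescribed or standard format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseDisclosure:
    """Hypothetical record filed alongside a significant deliverable,
    capturing how AI tools contributed to it."""
    deliverable: str          # e.g. "draft regulatory submission"
    tools_used: list[str]     # employer-approved tools only
    purpose: str              # drafting, summarising, analysis, etc.
    human_review: bool        # was meaningful human creative input added?
    reviewed_by: str          # person accountable for the final output
    filed_on: date = field(default_factory=date.today)

record = AIUseDisclosure(
    deliverable="Draft regulatory submission",
    tools_used=["Microsoft Copilot"],
    purpose="First-draft generation and summarisation",
    human_review=True,
    reviewed_by="R. Mehta",  # hypothetical reviewer
)
print(record.filed_on)
```

A record of this kind also evidences the 'meaningful human creative input' that point 2 requires, which may matter if ownership of the output is later disputed.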

What employers must update in contracts and policy

Employers may consider the following steps to mitigate risks:

  1. updating IP assignment clauses to explicitly cover AI-assisted and AI-generated work, clarifying that all work produced in the course of employment – regardless of the tools used – belongs to the employer;
  2. including a contractual obligation mandating disclosure of the use of AI tools in significant deliverables, either through employment contracts or the Acceptable AI Use Policy;
  3. reviewing and amending AI vendor agreements to include specific indemnity provisions addressing third-party claims arising from AI output; and
  4. treating it as a disciplinary offence, in standing orders or HR policy, for an employee to pass off AI-generated work as entirely their own, with consequences up to and including termination of employment.

Responsible AI use at work: monitoring, governance and enforcement

The oversight challenge

Monitoring and governance of AI use at work is perhaps the most complicated area to navigate, but also the one in which clear employer action makes the greatest practical difference. Most organisations currently have limited visibility into how employees are using AI tools in their daily work. Without clear policies, approved tool lists and monitoring frameworks, employers are operating with significant blind spots about what information is leaving the organisation and how AI is shaping work products.[5]

What employers must require of employees

Employers should require employees to:

  1. only use employer-approved AI tools for work purposes;
  2. acknowledge in writing the employer’s right to monitor AI tool usage on company devices;
  3. cooperate with any AI-related audit or investigation the employer initiates; and
  4. report any AI-related incidents or any policy breaches promptly to the employer.

What employers must update in contracts and policy

  1. Implement a three-tier tool approval framework, distinguishing between tools that are fully approved, those that are conditionally approved with restrictions, and those that are banned (a simple encoding of such a framework is sketched after this list). Employers should communicate updates to this list as the technology landscape evolves. A defined approved tool list is also the practical foundation for any subsequent disciplinary action; without it, proceedings for unauthorised AI tool use are difficult to sustain.
  2. Notify employees through the employment contract, IT policy, or Acceptable AI Use Policy that usage of AI-related tools on company devices and corporate networks may be monitored.
  3. Include AI-related misconduct in standing orders and/or HR policies, covering unauthorised usage of AI tools, confidentiality breaches via AI tools, misrepresentation of AI output, and failure to report AI-related incidents.
  4. Build a graduated enforcement mechanism, from informal counselling for inadvertent first-time breaches, through formal written warnings for subsequent or careless conduct, to potential termination for deliberate misuse or for causing significant organisational harm.
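As a concrete illustration of points 1 and 4, the sketch below encodes a three-tier tool list and a graduated enforcement ladder as simple lookup tables. The tool names, tier assignments and sanctions are hypothetical examples; in practice the framework would live in policy documents and the employer's IT controls, with code like this serving only to make the structure explicit.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = "fully approved"
    CONDITIONAL = "approved with restrictions"
    BANNED = "banned"

# Hypothetical tool classifications, for illustration only.
TOOL_TIERS = {
    "enterprise-copilot": Tier.APPROVED,     # e.g. enterprise tenant, no training on inputs
    "public-chatbot": Tier.CONDITIONAL,      # anonymised, non-confidential inputs only
    "unvetted-browser-plugin": Tier.BANNED,
}

# Graduated enforcement ladder, mirroring point 4 above.
ENFORCEMENT = [
    "informal counselling (inadvertent first-time breach)",
    "formal written warning (repeat or careless conduct)",
    "disciplinary proceedings up to termination (deliberate misuse or significant harm)",
]

def sanction(prior_breaches: int, deliberate: bool) -> str:
    """Map a breach to a step on the enforcement ladder."""
    if deliberate:
        return ENFORCEMENT[-1]
    return ENFORCEMENT[min(prior_breaches, len(ENFORCEMENT) - 1)]

print(TOOL_TIERS["public-chatbot"].value)            # "approved with restrictions"
print(sanction(prior_breaches=0, deliberate=False))  # informal counselling ...
```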

Conclusion: govern first, adopt confidently

The Indian market is witnessing a transformation in which employers are focusing increasingly on an efficient, productive and cost-effective workforce, paving a clear path for greater human-AI collaboration. The employers best placed to navigate this landscape are not those who restrict AI use most aggressively, but those who govern it most clearly. Organisations which take measured, proactive steps now – updating contracts, issuing well-considered policies and building enforceable frameworks – will be better protected against disputes, better positioned with clients and regulators, and better placed to support their workforce in using AI tools responsibly.

Notes


[1] Kathryn Johnson, Caroline Sweeney and Geoffrey Vance, ‘Urgent Alert: Federal Judge Rules AI-Generated Documents Are Not Privileged – A Game-Changer for Legal Strategy’, Dorsey & Whitney LLP, 23 February 2026, available at www.jdsupra.com/legalnews/urgent-alert-federal-judge-rules-ai-3358570 accessed 16 April 2026.

[2] Eastern Book Company v D B Modak, (2008) 1 SCC 1.

[3] S 17, The Copyright Act, 1957.

[4] Japman Singh Bagga, ‘Legal Accountability for AI-driven Intellectual Property Infringements: An Analysis of International and Indian Laws’, SCC Times, 30 August 2025, available at www.scconline.com/blog/post/2025/08/30/legal-accountability-for-ai-driven-intellectual-property-infringements-an-analysis-of-international-and-indian-laws accessed 16 April 2026.

[5] ‘Uncensored AI is not the threat; Enterprise exposure is’, nasscom community, 21 January 2026, available at https://community.nasscom.in/index.php/communities/ai/uncensored-ai-not-threat-enterprise-exposure accessed 16 April 2026.