Artificial intelligence in Indian workplaces: diversity law issues from hiring to exits

Wednesday 24 September 2025

Ajay Singh Solanki
AZB & Partners, Mumbai, Maharashtra
ajay.solanki@azbpartners.com

Nipasha Mahanta
AZB & Partners, Bengaluru, Karnataka
nipasha.mahanta@azbpartners.com

Ayushi Singh
AZB & Partners, Mumbai, Maharashtra
ayushi.singh@azbpartners.com

Imagine this: a savvy brown woman techie in her late 30s eagerly sets up job alerts with precise keywords on a leading professional network, only to hear her white male colleague, equally qualified and her peer in age, casually brag about landing the very kind of job she has been searching for. She is left questioning: what went wrong? Why did the platform never deliver those coveted updates to her inbox? The culprit was a faulty algorithm operating underneath, which surfaced for its corporate client only those candidates whose profiles looked ‘similar’ to those who had historically occupied the position.

In 2015, Amazon’s artificial intelligence (AI) based recruitment tool was found to penalise the resumes of women because the algorithm had been fed flawed historical data, and it was soon disbanded.[1] Similarly, the iTutor Group’s hiring system was found to automatically reject female job seekers over 55 years of age and male job seekers over 60 years of age.[2] In another example, University of Washington researchers varied the names associated with white and Black men and women across over 550 real-world resumes and found that the large language models (LLMs) favoured white-associated names 85 per cent of the time, favoured female-associated names only 11 per cent of the time, and never favoured Black male-associated names over white male-associated names.[3] While supposedly predictive technology in law enforcement (predicting higher recidivism for certain races over others) and healthcare (favouring those with the ability to bear the costs) has taken those industries by storm, similar, softer waves are brewing in the world of work across the globe.

AI has transitioned from an expensive experimental novelty to an affordable core operational tool in organisations across industries, including in India. AI now plays a role in screening resumes, scheduling interviews, evaluating employee performance, assessing job satisfaction, predicting attrition and even determining redundancy decisions. It is pertinent to note that Indian employment law is a complex matrix of central and state-level laws, constitutional protections and judicial precedents. At the same time, the Indian labour demographic is heavily diverse in social, political and economic identities. Therefore, if algorithms feed on such diverse datasets, it is important for employers to ensure that decisions made on the basis of AI tools fall on the right side of diversity laws.

Hiring and recruitment

When making hiring decisions, employers in India need to comply with certain diversity laws addressing hiring, recruitment and anti-discrimination, which generally emphasise principles of fairness and non-discrimination. These include the Maternity Benefit Act 1961, the Equal Remuneration Act 1976, the Rights of Persons with Disabilities Act 2016, the Transgender Persons (Protection of Rights) Act 2019, the HIV and AIDS (Prevention and Control) Act 2017, the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act 1989, the Mental Healthcare Act 2017 and, finally, the Constitution of India 1950. While these statutes set out broad principles that generally prohibit discrimination at the hiring or post-hiring stage, there is currently no specific legal architecture that holds employers accountable for algorithmic bias or enforces transparency, fairness audits or human oversight in automated hiring systems. India is yet to legislate on the lines of the EU’s AI Act, which classifies AI systems used in recruitment and employee management as ‘high-risk’ and subjects them to strict conformity assessments, transparency obligations and human oversight mechanisms to ensure fairness and prevent discrimination.[4] Additionally, Indian labour authorities are yet to regulate algorithmic decision-making tools that may disparately impact protected groups, as has been done in the US.[5]

Besides initial screening, employers have also begun to use AI-driven behavioural analytics during virtual interviews, assessing factors such as body language, confidence and attentiveness. A candidate who is otherwise well suited to the role may, during a demanding interview, display reduced confidence or diminished focus. This may be attributable to conditions such as anxiety or attention deficit hyperactivity disorder (ADHD), yet an AI system that is not attuned to the sensitivities of neurodiversity or personality types – eg extroversion versus introversion – may misinterpret such traits as indicators of normative unsuitability.[6]

In line with global developments, there is a clear need for comprehensive national legislation and guidance to ensure ethical and accountable AI deployment across various areas of life, including work.

Employee performance management

In contemporary society, the most pragmatic answer to the quintessential question ‘why do we work?’ centres on employment as a mechanism for obtaining exchange value, which gives individuals greater bargaining power in society. It is because of this want that individuals voluntarily contract with employers to perform work duties, agree to limit their liberty in pursuit of such duties and, in return, are rewarded with money and benefits. Employees negotiate their limited bargaining power with employers who, in turn, promise to optimise their skills and provide competitive compensation commensurate with their manifest skills. It therefore becomes paramount for society to agree upon the most objective and fair tools of work assessment.

Traditionally, for work assessment, employers have deployed supervision and monitoring mechanisms heavily reliant on managerial judgment, emanating from fellow humans possessing similar sentience, sensitivities and consciousness. While prone to subjectivity, human managers can accommodate intangibles such as empathy, social context awareness and the anticipation of unrealised potential. This traditional approach is now subject to intense debate as organisations increasingly consider outsourcing these evaluative functions to AI tools in pursuit of greater objectivity and efficiency.

While the hope of a superior, objective assessment model is well-intentioned, relying solely on AI tools can be callous. AI tools are primarily knowledge aggregators trained on historical datasets carrying the same flaws and prejudices that prevail in a particular society. For HR purposes, training AI tools typically means aggregating HR knowledge sets, including performance metrics on hours worked, task completion rates, leave patterns, response times to customer inquiries and feedback from multiple stakeholders,[7] as well as payroll details including wage trends, benefits availed, disciplinary actions and outcomes, restructuring decisions and exits. Such an aggregation of datasets has the potential to create warped outcomes cast with societal prejudices, leading to disregard of protected categories, treatment of reasonable accommodations as mere cost and even dilution of collective employee representation. Actions taken purely on the basis of datasets may be at odds with the substance of welfare labour laws such as the Minimum Wages Act 1948, the Equal Remuneration Act 1976, the Industrial Disputes Act 1947 and others. Empirical studies caution that such AI-based evaluations may inadvertently fail to account for individual circumstances such as health conditions, maternity/parental rights, disabilities, gender identities, social identities or other limitations, leading to potentially inaccurate performance assessments.[8] Therefore, AI systems in employment, including performance evaluation, could lead to ‘disparate impact’ discrimination, where a seemingly neutral policy or practice (eg an AI evaluation metric) disproportionately affects a protected group, even if discrimination was not explicitly intended. Furthermore, concerns have been raised about employee well-being, as AI-driven HR systems may affect perceptions of fairness and job security.

To address this, experts recommend ongoing audits of AI systems and adjustment of algorithms: for instance, frequent checks to ensure that a performance-management AI is not unduly penalising employees who take legally protected leave or accommodations. This ensures that performance assessments are contextualised and conducted on a case-by-case basis, thereby aligning with principles of fairness and compliance with applicable employment and equality laws.
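In practice, such an audit can begin as a simple statistical check. The sketch below is purely a hypothetical illustration (the data, the grouping variable, the rating scale and the 0.8 threshold are assumptions, not requirements of any Indian statute): it compares the rate of favourable performance outcomes between employees who took legally protected leave and those who did not, and flags the tool for human review when the ratio falls below the ‘four-fifths’ benchmark used in US disparate-impact analysis.

```python
# Hypothetical audit sketch: compare favourable-outcome rates between
# employees who took protected leave and those who did not.
# The 0.8 threshold mirrors the US EEOC 'four-fifths' rule of thumb;
# it is illustrative only and is not an Indian legal standard.

def favourable_rate(records, took_protected_leave):
    """Share of a group whose AI-generated rating is 'favourable' (>= 4 of 5)."""
    group = [r for r in records if r["protected_leave"] == took_protected_leave]
    if not group:
        return None
    return sum(1 for r in group if r["rating"] >= 4) / len(group)

def audit_performance_tool(records, threshold=0.8):
    """Flag the tool for human review if the impact ratio drops below threshold."""
    rate_leave = favourable_rate(records, True)
    rate_no_leave = favourable_rate(records, False)
    if rate_leave is None or rate_no_leave is None or rate_no_leave == 0:
        return {"flagged": False, "reason": "insufficient data"}
    ratio = rate_leave / rate_no_leave
    return {"flagged": ratio < threshold, "impact_ratio": round(ratio, 2)}

# Toy data: ratings on a 1-5 scale assumed to come from the AI tool.
records = [
    {"protected_leave": True, "rating": 3},
    {"protected_leave": True, "rating": 2},
    {"protected_leave": True, "rating": 4},
    {"protected_leave": False, "rating": 5},
    {"protected_leave": False, "rating": 4},
    {"protected_leave": False, "rating": 4},
]

result = audit_performance_tool(records)
print(result)  # a ratio well below 0.8 warrants escalation to a human reviewer
```

A check of this kind is a screening device, not a legal conclusion: a low ratio does not itself prove discrimination, but it tells the employer where contextual, case-by-case human review is needed.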

Investigation and employee protection

Indian labour laws and judicial precedents provide a clear and detailed step-by-step process for conducting workplace fact-finding and disciplinary enquiries to prove misconduct in accordance with the principles of natural justice, after which employers may take disciplinary action, including termination of employment. However, this process is perceived to be fact- and labour-intensive, with scope for subjectivity and prejudice.

By leveraging algorithms and machine learning, AI tools claim to predict and detect employee breaches and to substantiate misconduct allegations faster than ever before, analysing vast amounts of data to identify patterns indicative of harassment (including sexual harassment), fraud, data breaches or other forms of workplace misconduct.

Certain AI-based employee monitoring tools can raise alerts when they detect deviations from expected workplace behaviour standards.[9] Another reason employers are pivoting to AI-driven employee investigations is that AI tools can expeditiously track and analyse vast amounts of data, including email communications, financial transactions, computer activity, recordings and other digital trails.[10] This aids in collecting evidence of employee misconduct for proceedings before the employer’s disciplinary committee and the courts.

However, below are a few guardrails that employers should be conscious of:

  • AI’s capability to accurately interpret human emotions and behaviours, including distinguishing between behavioural anomalies that are isolated incidents versus repeated acts of misconduct, remains uncertain. Furthermore, diverse groups of people such as persons with disabilities and neurodivergent individuals (such as those with autism and ADHD) may function and perform tasks in ways that do not align with the generally accepted standards of social and workplace behaviour. Consequently, AI tools may perpetuate existing biases against such individuals if they are neither trained nor capable of understanding behavioural and cultural patterns of data subjects with diverse physical and mental capabilities.
  • Data privacy laws such as the Indian Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 regulate the collection, processing, disclosure and transfer of sensitive personal data or information (SPDI) and require employers to exercise caution while dealing with SPDI such as an individual’s financial information, biometric information, sexual orientation, physical, physiological and mental health conditions and medical records and history. Hence, care must be taken to ensure that the free and informed consent of the employee is obtained prior to the collection and processing of data using AI tools and that no disproportionately invasive data collection methods are implemented.
  • AI-based workplace investigations must not substitute legally mandated processes such as the initiation of a formal inquiry into sexual harassment complaints by an Internal Committee under the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act 2013. Findings of AI tools must be treated as preliminary and not conclusive evidence so as to lay the foundation for a human-led investigation process that adheres to principles of natural justice.

Redundancy and retrenchment

In the past few months, a couple of leading tech companies have announced the phasing out of several jobs owing to the integration of AI into these roles.[11] Using algorithms in the workplace has not only brought about automation in business operations but has also automated employment processes and decision making.

If employers rely on AI for redundancy selection and termination decision-making, they may need to tread carefully.

First, AI may take a more mechanical approach to the selection process and be subject to algorithmic bias, which may lead to arbitrary dismissal and discrimination against legally protected categories of employees. If an AI tool marks a disabled employee’s position as redundant owing to ‘lower output’ compared with other categories of employees, and AI is then trained to perform the relevant role, the disabled employee may be laid off despite being legally protected. Similarly, if an AI screens employees for absenteeism and efficiency and suggests that the jobs of expectant and post-partum mothers be eliminated because they take more leave or avail more ‘work-from-home’ options than other employees, it may invite claims of discrimination from such employees.

Second, AI may not be well versed in the legal processes and compliance requirements for redundancies and terminations. In India, employees who have worked for at least 240 days in a year cannot be terminated without prior notice or salary in lieu thereof, payment of statutory severance and notification to the appropriate labour authorities. Courts generally require employers to demonstrate the rationale behind designating particular roles as redundant, including evidence of attempts to offer re-employment and retraining opportunities to affected employees. In cases of redundancy, employers must maintain a seniority roster and adhere to the last-in-first-out principle, or be able to demonstrate (through adequate documentation) reasons for deviating from it.

All is not ‘fair’ with AI at the workplace: possible solutions

While there are valid concerns with unchecked AI usage at the workplace, AI has a lot to offer when applied with caution.

Recruitment

When deploying AI tools in recruitment, it is essential to feed the system with historical data that reflects not only past outcomes but also the relevant legal parameters and the employer’s ideal standards for hiring. This helps the AI model learn what constitutes a successful candidate beyond mere replication of past biases. However, AI should not be the sole decision-maker; human discretion must be preserved for final hiring decisions.

Performance management

AI tools used in performance management must be informed and educated about the need for accommodations, ensuring that the system does not penalise employees who require adjustments. Key Performance Indicators (KPIs) and goals should be customised for each worker, reflecting their unique skillsets and learning objectives. AI can assist in mapping skills and tracking progress, but human managers should remain responsible for interpreting results and making final decisions, especially in cases where context or nuance is required.

Investigations and disciplinary actions

When using AI in workplace investigations, it is vital to feed the system with relevant cultural context and legal standards. The pool of interviewees and factual sources should be carefully designed to include perspectives from all stakeholders involved. AI-generated questions should be fine-tuned based on an understanding of the histories and circumstances of those involved. As with other applications, AI should serve as an assistant, not the final arbiter, with human decision-makers overseeing outcomes.

Redundancy and restructuring

Employers should ideally disclose to employees when AI is used in redundancy decisions and ensure that there is a process for human review of AI-generated recommendations.

Data protection and bias safeguards

To enhance data protection, organisations should implement mandatory bias audits for AI tools used in employment, as well as provide employees with the right to opt out of purely automated decision-making where feasible. Penalties should be established for processing inaccurate or discriminatory datasets, ensuring that AI systems do not perpetuate or exacerbate existing biases.

Ethical guidelines and audits

Voluntary frameworks, such as NITI Aayog’s AI ethics principles,[12] should be converted into binding regulations. Independent third-party audits of AI systems used in hiring, performance management and retrenchment should be mandated, following models such as New York City’s bias audit law. This ensures accountability and transparency in the deployment of AI tools.

Judicial and regulatory capacity-building

Labour courts, industrial tribunals and labour commissioners must be trained in AI technologies, algorithmic bias and digital evidence handling. This capacity-building is essential for the effective adjudication of AI-related disputes and for maintaining trust in the legal and regulatory framework governing AI in the workplace.

What lies ahead?

AI is no longer a speculative future disruptor; it is an active participant in Indian workplaces today. From automated resume parsing to algorithmic redundancy mapping, AI systems are making decisions that directly affect livelihoods. India’s evolving labour laws therefore need to provide sufficient headroom to embrace the AI revolution. Principles of algorithmic transparency should go hand in hand with legal principles of dignity, justice, fairness and equity.

Notes

[1] ‘Amazon scrapped “sexist AI” tool’ BBC News (London, 10 October 2018) www.bbc.co.uk/news/technology-45809919 accessed 17 September 2025.

[2] Noam Scheiber, ‘Lawsuit Claims Tutoring Company Discriminated Based on Age and Sex’ The New York Times (New York, 17 August 2021).

[3] Kyra Zhang, Niloufar Salehi and Lionel P Robert Jr, ‘Algorithmic bias in hiring: A systematic literature review’ (2022) 6(CSCW2) Proceedings of the ACM on Human-Computer Interaction 1–31.

[4] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.

[5] US Equal Employment Opportunity Commission, ‘The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees’ (12 May 2022).

[6] Indian Institute of Management Ahmedabad, ‘AI and the Future of White-Collar Work in India’ (2024).

[7] IndiaAI, ‘Using AI for Performance Management’.

[8] ‘What algorithmic evaluation fails to deliver’ (2024) Scientific Reports.

[9] Dr Christine Izuakor, ‘Using AI to Detect & Prove HR Related Employee Violations’ (Veriato, 19 September 2020) https://veriato.com/blog/using-ai-to-detect-prove-hr-related-employee-violations/ accessed 17 September 2025.

[10] ‘Identify, Investigate and Close Internal Investigations Faster with Veriato Investigator’ (Veriato) https://veriato.com/products/investigator/ accessed 17 September 2025.

[11] Aman Rashid, ‘IBM replaces 200 HR roles with AI agents as part of automation push’ (India Today, 13 May 2025) www.indiatoday.in/technology/news/story/ibm-replaces-200-hr-roles-with-ai-agents-as-part-of-automation-push-2724043-2025-05-13 accessed 17 September 2025.

[12] ‘Responsible AI: Approach Document for India’ (NITI Aayog, February 2021) www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf accessed 17 September 2025.