Strategies for managing the use of AI in employee disputes
Tuesday 21 April 2026
Lucy Gordon
Walker Morris, London
lucy.gordon@walkermorris.co.uk
The rapid adoption of generative AI tools is reshaping workplace communication. Employers globally are recognising a significant change in how employees draft grievances, respond to investigations and disciplinary allegations, and prepare employment litigation pleadings. Historically, employees’ written submissions were generally short and factual (unless they were legally represented), but this is no longer the case: employees now produce lengthy ‘legalised’ pleadings that are often light on factual detail. HR professionals and litigators can struggle to handle the increasing volume and length of correspondence and to manage cases effectively.
Increased use of AI in the workplace
Most employers facilitate and encourage the use of AI in the workplace, and the adoption of AI tools has increased efficiency and improved productivity for most businesses. Employees are therefore becoming adept at using AI in their everyday work and are increasingly confident using it for personal matters. As such, when an employee is involved in a dispute management process, such as a grievance or disciplinary procedure, or looks to bring a claim against their employer, an ever-increasing amount of communication is being drafted and curated with the help of AI. Arguably, employees use AI in these processes in an attempt to redress the unequal balance of power between the parties, effectively tipping the balance back against their employers, but it can also place an excessive burden on employers to review and respond.
However, employees increasingly over-rely on AI tools, which can lead to issues in the style and content of their submissions. Employees are using AI to produce legally styled and sometimes strategically framed complaints, which often contain little to no factual detail. Such complaints are hard for employers to investigate and respond to, and may include several allegations that do not go to the core of the employee’s key concerns.
The consequences of AI use
The propensity of AI to ‘hallucinate’ and fill in gaps is well known, and this can send HR teams (and lawyers) on fruitless searches for case law or legislation that does not exist. This increases the time and effort involved in dealing with correspondence and, in litigation, can increase the legal costs of dealing with such claims. Employees risk escalating simple matters or submitting inaccurate information if they over-rely on AI to craft submissions without checking them for accuracy or understanding exactly what they are raising.
When used in legal pleadings, AI can increase the amount of tribunal or court time required to identify the issues in the case. In the UK in particular, with an extremely backlogged tribunal system, this can add to existing lengthy delays in hearing cases.
HR practitioners and lawyers also struggle with the speed at which litigants now respond to communications, with lengthy correspondence being turned around in a matter of hours. This places increased strain on HR teams and is likely to erode goodwill at a time when claimants might be looking for settlement offers.
In this respect, the use of AI is also generating unrealistic expectations among claimants as to the merits and value of their claims, making it harder and more expensive to reach settlement. Having been told by an AI tool that their claim is worth ‘x’ amount, a claimant may be less willing to accept what would previously have been regarded as a reasonable commercial settlement. Cases could therefore be less likely to settle, with parties becoming more entrenched in their positions and more likely to end up at trial, again increasing legal costs and adding to the existing tribunal caseload.
Finally, the use of AI by employees may amount to a breach of their terms and conditions of employment. If employees upload confidential information, such as notes of meetings or of disciplinary or grievance hearings, to an AI tool to help them draft complaints, this may amount to the unauthorised disclosure of confidential information, which is likely to be a gross misconduct offence. Employees can therefore inadvertently expose themselves to the risk of disciplinary action, with sanctions potentially up to and including dismissal.
What can HR and legal teams do?
HR teams must develop robust internal processes to identify when a grievance or response may have been AI‑generated and should adjust investigation methods accordingly. There are two main considerations here.
First, identifying submissions that may have been AI-generated. This may be abundantly clear on the face of a submission: where an employee has submitted a lengthy document that references case law or uses overly complicated language, there is a higher chance that the submission has been assisted by AI. Where it is not clear whether AI has been used, there is no issue with simply asking the employee whether their submission was created with the assistance of AI.
Second, it is important to remember that grievances remain valid even if they are drafted with the benefit of AI. Employers should meet with employees to clarify the factual concerns that the employee has, by asking them to explain in their own words what the issues are. The employer can then limit the response to addressing those identified key issues that were confirmed verbally, rather than necessarily addressing every point raised in the written grievance.
Having clarified the key concerns, HR teams should refocus discussions on the underlying issues rather than the language in which they are presented, encouraging employees to articulate their own account.
Where an employee’s submission includes details that are unrelated to the matter under discussion, the HR team should feel empowered to refocus the discussion and ask specific questions that allow the employee to provide relevant details. This is a skill that HR teams need to develop regardless of the use of AI, as employee submissions often do not fully articulate the reasoning or events leading up to the matter being discussed.
Train HR teams to handle overly formalised or legally complicated submissions, ensuring they continue to apply proportionate and fair procedures.
HR teams should be able to quickly identify the specific complaint within a submission and, if possible, summarise the issue so that the correct dispute management procedure can be applied. This skill comes with experience, but suitable training and policies will improve the HR team’s ability to recognise the material details in formalised or overly legally complex submissions from employees.
Set expectations in policies, including guidance on acceptable use of AI in workplace processes and the potential consequences of misuse.
First, employees should be signposted to policies which confirm that the use of AI in dispute management processes is potentially unhelpful and that irrelevant submissions will be treated as such. Policies should make clear that HR teams do not expect to review overly lengthy submissions and that the purpose of employee submissions is to provide short, factual accounts of matters which will then be discussed in more detail at hearings or meetings.
Second, policies should clearly state that uploading confidential information (including meeting notes and employee records) to AI tools is a potential disciplinary offence, and action should be taken to enforce these principles.
Prepare for litigation risks, including increased legal costs and reduced ability to settle for commercial sums.
As with any dispute management process, handling the initial investigation and disciplinary/grievance meeting fairly and fully will put the employer in the best possible position to deal with any employee allegations. If claims are submitted with the assistance of AI, employers should consider how they can most cost-effectively address such claims, whether through dealing with these in-house or with external legal support. Budgets for settlement may need to be increased to counter unrealistic expectations of the values of claims, or employers may need to be prepared to contribute to legal fees to encourage claimants to obtain independent legal advice on the merits of their claims. Alternatively, if settlement is no longer realistic, the costs of longer litigation leading to a final trial will need to be borne in mind.
Conclusion
The use of AI in dispute management processes is only likely to become more prevalent. Access to free AI tools is increasing, and employers often encourage the use of the technology in the workplace generally. As employees become more proficient and AI tools become more sophisticated, the challenges in dealing with AI-assisted submissions will only grow. For now, employers and legal teams should begin adapting to tackle these challenges head-on and continue to deal with cases effectively and proportionately.