AI, employee monitoring and data protection: a difficult balance in the framework of Italian law

Tuesday 20 June 2023

Simone Carrà
Littler, Milan

Laura Corbeddu
Littler, Milan

AI and the labour market

Artificial intelligence (AI) will most likely replace many repetitive and non-creative jobs, such as data entry and customer service, and this is one reason why people think technological progress will lead to the loss of many jobs.

Despite pessimistic points of view that are most likely driven by a fear of the future, it should be noted that technology has been changing labour practices and the workplace for decades without human workers being replaced by machines.

Paradoxically, although the Italian labour market has definitively shifted from being based mostly on blue-collar activities to being based on the services industry, most job vacancies today concern blue-collar positions, which few people want to fill anymore. The fear that there is no future for human workers should therefore be abandoned. Of course, AI will change the labour market, but this does not necessarily mean that it will be a worse place.

First, AI is going to create new types of jobs and new needs in the labour market. Second, and most significantly, AI might be the best chance to finally move from the concept of simply being employed, to a completely different, modern and innovative approach of being employable, thus focusing on continuous learning and reskilling of an individual’s job profile. This will change people’s attitude and approach to work and will also constitute a revolution in promoting employees’ capacity to carry out different tasks and be more versatile during their careers.

The challenges posed by AI

While this article does not allow for a deep dive into the multiple challenges that AI will pose from a labour law perspective, it focuses on two of the most crucial: the need to define limits to employee monitoring and the need to regulate the collection and processing of personal data.

A few examples of how AI can impact different stages of the employment relationship are worth mentioning here, in order to highlight the sensitivity of the personal data that AI (as well as the owners and users of AI systems) may be collecting or using:

  • Hiring – AI can create a database of applicants and automatically draw up shortlists of candidates that fit the profile for a job vacancy; during the recruitment process, AI can also use video-recorded interviews or even video games to evaluate, based on facial expressions, tone of voice and wording used, the suitability of candidates for a certain job;
  • Performance – employee performance can easily be monitored by AI, since algorithms can be implemented to verify that the job is being performed correctly and to pinpoint any infringements that may trigger a disciplinary process; and
  • Monitoring – the monitoring of employees by means of AI may become systematic and cover multiple aspects: for instance, AI may assess how quickly an employee carries out the requested activities, how much empathy they show in interacting with clients, how much time they spend on different activities and how efficient it is to have them involved in multiple tasks, how often they take breaks, whether they are using a company car, and even what they are looking at at a given time; non-working time, too, can be subject to systematic control by AI.

It is therefore evident how crucial a role the regulation of personal data processing will play in the next few years.

The issue goes beyond the mere (although potentially very wide) collection of personal data: AI processes that data, and the outcome can be distorted by many biases, which may be deliberate or, unfortunately, unintended and unpredictable, and sometimes difficult or even impossible to pinpoint.

In this context, the aim of local legislation should be a balanced trade-off between the need not to block technological progress and its multiple advantages (including for the wellbeing of employees) and the need to safeguard individuals' rights, including their personal data protection rights.

AI, employee monitoring and data protection: the Italian perspective

It is evident that in Italy, just like elsewhere, lawmakers have a difficult challenge to deal with in the next few years: AI is going to become a reality in our daily lives, but the law is far from being able to effectively provide answers to the issues that this phenomenon poses.

The first real debate on AI in Italy was generated by the use of algorithms in working environments and, particularly, within digital platforms. The debate started in Italy in 2018 (when the Tribunal of Turin published the first decision in the so-called Foodora case), whereas the issue had already been widely examined by courts and scholars in several jurisdictions, including China, France, the UK and the US. The debate was less focused on the criticisms that AI raises for the Italian labour law system, with the case essentially addressing the requalification of platform economy delivery riders. The criticism (sometimes even the obsession) was (and still is) directed at the fact that, where the algorithm is the 'boss', the 'rider' cannot by definition be considered self-employed.

It should be noted that since 2018 no real discussion has taken place across all cases where employers have implemented algorithms; the debate has remained confined to business models based on subordinate employment (and has not extended to self-employed workers). Nevertheless, the critical issues raised by the use of AI still need to be explored from a labour law perspective regardless of whether those involved are self-employed or subordinate workers (and, indeed, even more so in the case of employees who are undoubtedly subordinate and thus require a higher level of protection).

More recently, another case was dealt with in the Italian jurisdiction, which involved OpenAI’s ChatGPT.

The case was initiated by the Italian data privacy authority, which challenged the following alleged infringements: OpenAI did not provide any notice to users about the data collected; there was no lawful basis for collecting and processing the data; the data processing was not accurate; and there was no way of verifying the age of users, as the service appeared to be available to users younger than 13 years old.

Therefore, the Italian privacy authority claimed that Articles 5, 6, 8, 13 and 25 of the GDPR had been violated and ordered – through a measure dated 30 March 2023 – the immediate suspension of the processing of personal data in Italy.

However, in response to OpenAI's expressed willingness to implement concrete measures to safeguard users' rights, the Italian privacy authority suspended the measure.

The recent ChatGPT case has hopefully ushered in a new age in which AI is finally analysed in concrete legal terms, given the implications it may have for people's rights.

Now that the Italian privacy authority has announced investigations into other AI systems implemented in Italy, the implications that AI will have from an employment perspective (from the high risk of discrimination to continuous monitoring) should also be considered.

The challenge of addressing the impacts of AI has only just begun, and solutions to these issues still appear to be far away.

Italian law seems, in fact, inadequate to face the challenges deriving from such technological advancement, as it was mostly conceived in the 1970s, when all working activities were carried out at the company's premises and technology was very limited.

During the Covid-19 pandemic, the spread of remote working forced both lawmakers and employers to take urgent action to adapt to the new working conditions, and it is likely that, as AI spreads in the workplace, the need for new and better-fitting regulation will arise.

However, as the OpenAI case demonstrates, technical progress should not necessarily be perceived as disruptive and solutions may be found in order to balance different needs, as long as both the authorities and companies are ready to adapt.