lexisnexisip.com

AI: brief global regulatory summary, trends and the Argentine status

Thursday 6 July 2023

Lisandro Frene

Richards Cardinal, Buenos Aires

frene@rctzz.com.ar

Current global regulatory status and trends

In recent months, the development of artificial intelligence (AI) systems – with ChatGPT as their emblematic example – has marked a significant milestone, given advances that seemed unimaginable less than a year ago and the exponential growth in their use. This has prompted global reactions from governments and academics and, as far as lawyers are concerned, a sort of legislative rally to regulate the technology.

Although the law always lags behind the facts (something particularly noticeable in the field of technology, given its rapid evolution), in the case of AI this gap became evident in a very short space of time. The massive proliferation of AI tools, the perceived consequences of their use and, especially, their probable future effects have made this regulatory gap all the more palpable.

To date, almost no jurisdiction has laws which fully regulate the use of AI. There are a few very specific regulations on certain uses of AI systems in particular sectors or activities, as is the case, for example, in the US with the state labour regulations of New York and Illinois, which establish requirements that companies must comply with when using AI systems affecting employees. There are also regulations, or ‘soft law’ agreements, of a non-binding nature, which generally establish ethical principles and recommendations, good practice guides, etc, for the use and development of AI tools. Examples of the latter are: the generic ethical principles of the Organisation for Economic Co-operation and Development (OECD) on the use of AI, signed by more than 40 countries; the ‘AI white papers’ issued by the governments of the EU, UK and US; or the ‘AI Ethics Frameworks’ issued by government entities in Australia, Japan and the US, to name but a few.

Many countries (from Canada to India and even China) have advanced bills regulating AI, moving forward – or trying to do so – rapidly in recent months. The most commented on of these seems to be the proposed EU AI Act (abstract, generic and comprehensive) which, if enacted, would be the first law on AI by a major regulator. Considering ‘AI’ is such a broad term, comprising many different uses and interrelated technologies, it seems difficult to capture all of them in a single piece of legislation. As my colleague Chris Holder states, maybe ‘adopting a sectoral approach to AI regulation under a broad policy direction/guardrails appears to be more sensible as it allows regulators to look at the use case scenarios specific to each sector rather than battle with one’. In any event, 2023 is likely to be the year of AI regulation. It is reasonable to expect that sometime this year one or several jurisdictions will enact some type of AI regulatory framework, most probably in countries of the Western world.

Corporate regulation

In this scenario of unbridled evolution of generative AI tools, their mass use, unpredictable consequences and the regulatory vacuum, some companies have begun to issue their own internal policies on best practices and precautionary measures when using these tools. This is mainly driven by information security concerns and the desire to minimise risk. Given the lack of general rules, they issue their own private rules, valid at least for internal use.

Several ‘big’ companies have restricted or banned ChatGPT in the workplace, including JP Morgan Chase, Wells Fargo, Bank of America, Goldman Sachs, Deutsche Bank, Amazon and Verizon. Other companies are shaping their internal privacy policies to circumscribe the internal use of chatbots. Many of these corporate policies originated after employees pasted confidential data into ChatGPT, thereby sharing (who knows with whom) the personal data of third parties and confidential company information.

Personal data regulations applied to AI

Given the AI regulatory vacuum so far, what laws can be used to adopt concrete measures regarding AI systems? Personal data regulations. This is already happening in countries such as Spain and Italy, where the local data protection authorities have issued temporary blocks on Replika and ChatGPT until these companies proved compliance with certain personal data legal requirements.

AI systems, like most new technologies, are fed with data. They work on the basis of data, achieving results derived from collecting and processing huge amounts of it, much of it personal data. In addition, AI systems make autonomous decisions (without human intervention), but this does not change the fact that data is their ‘fuel’. It is therefore logical that authorities check that AI systems comply with personal data regulations.

Which personal data requirements are we referring to? Apart from verifying compliance with the basic personal data principles found in almost all personal data laws (ie, consent, adequacy, purpose, data security, etc), the most advanced regulations include more specific requirements applicable to AI systems.

Let’s take a quick look at the General Data Protection Regulation (GDPR), a kind of data privacy ‘bible’ which several countries have taken as an inspiring model for their laws or bills. Through the principle of ‘data protection by design and by default’ (Article 25), the GDPR requires the adoption of certain ‘technical and organisational measures’ ex ante, from the very development of massive data processing systems, as well as measures ensuring that, by default, only the personal data necessary for each specific purpose of the processing are processed. In addition, the GDPR requires a data protection impact assessment (Article 35), whose minimum requirements apply where a type of processing, in particular one using new technologies, is likely – taking into account its nature, scope, context and purposes – to result in a high risk to the rights and freedoms of natural persons. This assessment must be carried out before the data processing begins, with the GDPR foreseeing specific situations in this regard (especially systematic, automated and large-scale data processing) and the need for prior consultation with the supervisory authority in certain cases of potential high risk (Article 36).

In short, the lack of AI-specific legislation in no way prevents authorities from checking that the use of AI systems complies with personal data regulations. This is precisely the control that the data protection authorities of many EU countries are currently exercising.

The Argentine legal situation

Argentina is unfortunately far removed, legally speaking, from the regulatory debates prompted by new technologies and AI in particular. Despite being the first country in the Americas to have a personal data protection law, one deemed ‘adequate’ by the EU authorities, and despite being a South American cradle of start-ups, Argentina today has an archaic legislative regime regarding new technologies. Legislative attempts to update the current personal data protection law (enacted more than 20 years ago) and adapt it to GDPR standards have repeatedly failed. Even in this context, the Argentine personal data regime contains regulations (scarce and not very specific, but still in force) under which AI systems can be legally reviewed and measures eventually adopted.

At a very generic level, in June 2023 the Argentine Federal IT Bureau published Regulation 2/2023, containing ‘Recommendations for a Reliable AI’. Aimed at the public sector, these consist mainly of generic ethical principles and highly abstract conceptual guidelines. However, they might form the basis for more specific future regulation of AI, especially considering that, apart from Argentina’s adherence to the ‘Recommendation on the Ethics of Artificial Intelligence’ issued by the UN Educational, Scientific and Cultural Organisation (UNESCO), Argentina, like most countries, lacks specific AI legislation.

Turning to more practical rules, the current Personal Data Act establishes basic principles similar to those of the GDPR, such as informed consent, purpose, confidentiality and data security, which are mandatory for any collection and processing of personal data. In addition, several regulations of the local data protection agency may be applicable to AI systems, such as those on security measures for processing personal data, or the ‘Guide to Good Practices in Privacy for the Development of Applications’. The latter includes the concepts of ‘privacy by design and by default’, substantially similar to the GDPR, although in less detail.

Furthermore, the obligation to implement ‘appropriate technological and organisational measures to protect privacy by design and by default’ is specifically included in the Amending Protocol to the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data signed in Strasbourg in 2018. Also known as ‘Modernised Convention 108’ or simply the ‘108+ Convention’, this Protocol was recently approved in Argentina by Act 27,699. The accountability principle for data controllers, and the duty to carry out an examination of the probable impact of data processing on data subjects’ rights are also part of the obligations assumed by Argentina under the 108+ Convention.

As can be seen, despite its regulatory delay in the technological field, Argentina has a legislative background through which AI tools can be legally evaluated.

Conclusion

The need and urgency to regulate the development of AI systems is proportional to the speed with which they evolve.

According to Yuval Harari, ‘we can still regulate AI, but we need to move quickly […] before it regulates us’. This seems to me somewhat exaggerated, yet from a legal perspective, ignoring AI is like trying to cover the sun with one hand: it means governments disregard a decisive element in people’s lives and decisions. The ‘innovation versus regulation’ dichotomy is a false polarisation, often fostered by vested interests around the use and commercialisation of technology.

In Argentina, it seems unrealistic to demand AI regulation when we lack much more basic rules on the use of technology. In the meantime, we can – and must – legally analyse AI tools on the basis of personal data regulations (as major EU countries are doing) and other rules that are less specific but still valid and applicable to AI.