AI regulations – a rising global issue: an Australian perspective

Sunday 18 August 2024

Katarina Klaric[1]

Stephens Lawyers & Consultants, Melbourne

katarina.klaric@stephens.com.au

Artificial intelligence (AI) technologies deployed in Australia largely originate from China, Europe, Japan and the US, the leading innovators in the AI field with the highest patent filings globally.[2] The Australian Government has recognised that, to take advantage of globally supplied AI technologies and to support safe AI development and adoption, regulatory and governance frameworks are required that are consistent with global regulatory approaches.[3][4] Australia participates in a number of global forums on AI regulation and governance.[5]

While the Australian Government continues its consultative processes into the reforms required to regulate AI technologies, on 13 March 2024 the European Parliament approved the Artificial Intelligence Act. The new laws prohibit certain AI systems that are considered to contravene the values of the European Union and violate the fundamental rights of its citizens. Prohibited AI systems include those that:

  • deploy subliminal, manipulative or deceptive techniques that distort the behaviour of people, impairing their ability to make informed decisions;
  • exploit the vulnerabilities of people due to their age, disabilities or economic situation;
  • create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage;
  • categorise people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, subject to certain law enforcement exceptions; or
  • use ‘real-time’ remote biometric identification systems in public places, subject to specific law enforcement exceptions.[6]

The laws also regulate high-risk AI systems, which will be subject to evaluation, conformity assessment and reporting requirements. Generative AI, such as ChatGPT, is not classified as high-risk but will have to comply with transparency requirements, including compliance with EU copyright law.[7]

This article provides an overview of Australia’s existing regulatory framework used to regulate AI technologies, the ongoing government consultations and proposed reforms to the laws.

The failed Robodebt automated decision-making system is used as a case study in this article. It provides a warning to government agencies and organisations globally of the risks associated with failed AI systems, including compensation claims in the billions of dollars and the reputational damage that follows.

Overview of Australia’s regulatory framework

Existing legal framework

Australia’s existing legal framework is used to regulate AI technologies across all industries. The laws include:

  • Australian competition and consumer laws: administered by the Australian Competition and Consumer Commission (ACCC), regulate competition, anti-competitive conduct and unfair trade practices;
  • corporations laws: administered by the Australian Securities and Investments Commission (ASIC), regulate companies and the financial services market sector;
  • data protection and privacy laws: administered by the Office of the Australian Information Commissioner (OAIC) and state privacy commissioners;
  • online safety laws:[8] Australia established the world’s first eSafety Commissioner, who administers laws that include mechanisms to address online safety issues from cyberbullying to image-based abuse (including fake images, deepfake pornography and child exploitation material) and other material affecting online safety, some of which may be generated using AI; the eSafety Commissioner has extensive powers to have illegal and harmful online material removed from online platforms;
  • media and communications laws: administered by the Australian Communications and Media Authority (ACMA);
  • criminal laws;
  • discrimination laws; and
  • copyright laws.

Australia also has industry-specific laws covering the use of AI technology and its potential risks in the health, road transport[9] and aviation industries. In the high-risk area of health, the Therapeutic Goods Act and regulations were amended in 2021 to cover software (including AI technology) that is used for medical purposes, comes within the definition of ‘medical device’ and is not exempt from the regulations. However, these regulations do not extend to all software used in the health sector.[10]

AI ethics principles and standards

Australia is a signatory to the Organisation for Economic Co-operation and Development (OECD) AI Principles, which were designed to encourage organisations to have ethical practices and good governance when developing and using AI. Australia has also adopted international standards for the management and governance of AI systems.[11]

To complement the existing regulatory framework, government departments and agencies have adopted a voluntary AI ethics framework, Australia’s Artificial Intelligence (AI) Ethics Principles, which is designed to ensure that AI systems benefit humans and the environment, uphold privacy rights, and are fair, non-discriminatory, safe, secure, reliable and transparent.[12]

Case example: Robodebt scheme

Generally, Australians’ trust and confidence in AI technologies and systems is low, with concerns about privacy, safety, bias, fairness, integrity, and a lack of transparency and accountability.[13]

The failed Robodebt automated decision-making system, developed and implemented by the Australian Department of Human Services (DHS), amplifies that mistrust and illustrates the human and economic costs that result when appropriate legal and governance frameworks are not followed.[14]

The Robodebt scheme began as a pilot in 2015 and continued until June 2020. The scheme was designed to recover overpayments made to welfare recipients going back to the 2010–2011 financial year. Robodebt was an automated system that data matched income earned by welfare recipients, as reported by their employers to the Australian Taxation Office (ATO), against income the recipients had declared to DHS. If there was a discrepancy, the system would issue a notice requesting the recipient to explain the discrepancy using an online system. If the recipient did not respond or provide details, or agreed with the ATO income data, the system used a process of ‘income averaging’ to calculate overpayments, rather than looking at the actual income earned and welfare payment received in the relevant fortnight as the law required. The system issued debt notices and debt collectors were engaged.
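The defect at the heart of the scheme is arithmetic: annual income averaged evenly across fortnights can manufacture a ‘debt’ for a recipient whose fortnight-by-fortnight record shows none. A minimal sketch, using hypothetical means-test figures (not the actual Centrelink rates), illustrates the point:

```python
# Illustrative sketch of the 'income averaging' defect. All rates and
# figures below are hypothetical and simplified; they are NOT the actual
# Centrelink means test, only a demonstration of the arithmetic flaw.

FORTNIGHTS = 26  # fortnights in a financial year

def entitlement(fortnightly_income):
    """Hypothetical means test: a full payment of $500 per fortnight,
    reduced by 50 cents per dollar of income above a $100 free area."""
    return max(0.0, 500.0 - 0.5 * max(0.0, fortnightly_income - 100.0))

# A recipient who worked half the year ($1,400 per fortnight) and was
# unemployed for the other half, lawfully receiving payments only while
# unemployed.
actual_income = [1400.0] * 13 + [0.0] * 13
amount_paid = [0.0] * 13 + [500.0] * 13

# Lawful approach: assess each fortnight against actual income earned.
lawful_debt = sum(amount_paid) - sum(entitlement(i) for i in actual_income)

# Robodebt approach: average annual ATO income across every fortnight.
averaged = sum(actual_income) / FORTNIGHTS  # $700 per fortnight
averaged_debt = sum(amount_paid) - FORTNIGHTS * entitlement(averaged)

print(f"Debt under fortnightly assessment: ${lawful_debt:,.2f}")    # $0.00
print(f"Debt under income averaging:       ${averaged_debt:,.2f}")  # $1,300.00
```

Under this simplified model the error runs systematically against the recipient: because the entitlement taper cannot fall below zero, averaging income across fortnights can only understate total entitlement, never overstate it.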

The Robodebt system was implemented without appropriate design, human welfare and fairness considerations, or testing, including user testing. This resulted in errors, with debt notices being illegally and unfairly issued. Before automation, the process had been undertaken by compliance officers, who reviewed each file and had personal contact with recipients. The scheme was implemented even though internal lawyers had advised DHS in 2014 that ‘income averaging’ could not be used, and that ‘actual benefits’ and ‘actual income’ received during the relevant fortnight had to be used to calculate whether there had been any overpayment.

The system came under significant media criticism and was investigated by the Ombudsman. To support the legality of the Robodebt system, DHS obtained second legal advice from an in-house lawyer in 2017, who expressed the view that it was open to DHS ‘as a last resort’ to act on average income to raise and recover debts from welfare recipients. DHS covered up the first legal advice on the illegality of ‘income averaging’ and the scheme, disclosing only the second advice. Class actions followed.[15]

Robodebt scheme class actions

In 2020, the government agreed to settle a class action brought on behalf of 400,000 victims, paying AU$112m in compensation, in addition to making repayments to individuals who had paid the debts demanded.[16]

In another class action, the Federal Court of Australia found the scheme was unlawful and that there was no way Centrelink could have been satisfied that the debts were correct when it issued debt collection notices to welfare recipients. In June 2021, Justice Murphy approved a settlement sum of AU$1.8bn and described the Robodebt scheme as a ‘shameful chapter’ in Australia’s social security scheme.[17]

Robodebt Royal Commission

The Robodebt Royal Commission was established on 18 August 2022 to inquire into the establishment, design and implementation of the Robodebt scheme. On 7 July 2023, Royal Commissioner Catherine Holmes (the ‘Commissioner’) handed down a 990-page report containing 57 recommendations, including the introduction of a legal framework to deal with automated decision-making, and criminal and civil charges against those involved. The Commissioner described the Robodebt scheme as a ‘crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals’. Many people were traumatised, with reported cases of self-harm and suicide. The Commissioner concluded that the scheme was a ‘costly failure of public administration in both human and economic terms’.[18]

Automated decision recommendations

The Royal Commission recommended the introduction of a consistent legal framework in which automation in government services can operate. Where automated decision-making is implemented:

  • there should be a clear path for those affected by decisions to seek a review;
  • departmental websites should contain information advising that automated decision-making is used and explaining in plain language how the process works; and
  • business rules and algorithms should be made available to enable independent expert scrutiny.[19]

The Royal Commission also recommended the establishment of a body or the expanding of the powers of an existing body to monitor and audit automated decision-making by government, including the technical aspects of systems and their impact in respect of fairness, avoiding bias and client usability.[20]

Government inquiries, consultations and proposed regulatory reforms

Digital platform enquiries

Since 2017, the ACCC, at the direction of the Australian Government, has been conducting inquiries into the competition and consumer impacts of digital platforms; digital platform services and digital markets; and the regulatory reforms required. These inquiries have produced eight reports and recommendations for regulatory reform to deal with anti-competitive and unfair trade practices, including those arising from the use of AI systems. The recommendations include:

  1. reforms to address the prevalence of scams, fake reviews and harmful applications (some of which originate from the use of AI technology);
  2. the establishment of a new independent Ombudsman Scheme to resolve disputes between digital platforms, consumers and small businesses;
  3. amendments to the Australian Consumer Law to prohibit economy-wide unfair trading practices, including those occurring on digital platforms or arising from the use of AI systems;[21] and
  4. the introduction of service-specific codes of conduct to address anti-competitive conduct engaged in by digital platforms through the use of AI algorithms, including self-preferencing, tying, price setting, bid determination or market sharing resulting in harmful algorithmic collusion.[22]

Reforms to Australian privacy and data protection laws

Australia has also undertaken a review of its privacy and data protection laws, with the Privacy Act Review Report released in February 2023 recommending reforms. In September 2023, the government released its Response to the Privacy Act Review Report, accepting the reform recommendations. The response specifically addresses automated decision-making, acknowledging that the safe and responsible development and deployment of automated decision-making technologies ‘presents significant opportunities for enhancing productivity and facilitating economic growth and improving outcomes across health, environment, defence and national security’. The government also acknowledged the Robodebt Royal Commission’s recommendations on automated decision-making and is considering how best to implement them, having regard to the ongoing consultations into safe and responsible AI.[23] The government has agreed to amend the privacy laws to provide for transparency in the use of automated decision-making technologies and to ensure the integrity of the decisions made, with individuals to have a right to request meaningful information about how automated decisions that affect them are made.

Government consultation: safe and responsible AI in Australia

In June 2023, coinciding with the release of the Royal Commission Report into the Robodebt Scheme, the Australian Government released the discussion paper Safe and Responsible AI in Australia (the ‘Discussion Paper’) and commenced a public consultation process that considered the adequacy of the existing legal and governance framework to address the potential risks associated with AI technologies and the safeguards required having regard to global regulatory developments.

Public submissions raised concerns about the use of AI in legitimate but high-risk contexts, where harm may be difficult or impossible to reverse, and the need for mandatory guardrails.[24] The joint submission of the Digital Platform Regulators Forum[25] favoured reforms to existing laws to address identified gaps, together with mandatory ‘codes’ imposing specific obligations to ensure AI is used in an ethical, safe and transparent manner and to address potential harm resulting from its use.[26] Codes are favoured by Australian regulators because they can be readily adapted as new issues emerge and can be enforced through existing legislative instruments.

In response to submissions to the Discussion Paper, in January 2024, the Australian Government established an AI Expert Group (the ‘Group’) to provide advice by the end of June 2024 on options for the development of ‘mandatory guardrails to ensure the design, development and deployment of AI systems in high-risk settings is safe’. The Group is also to advise on testing, transparency and accountability measures for such systems.

Senate Select Committee on Adopting Artificial Intelligence (AI)

On 26 March 2024, the Australian Senate established the Senate Select Committee on Adopting Artificial Intelligence (AI) ‘to inquire into and report on the opportunities and impacts for Australia arising out of the uptake of AI technologies in Australia’. The committee is expected to report to Parliament by 19 September 2024.[27]

Copyright and AI Reference Group

To complement the government consultative process on ‘safe and responsible use of AI’ and regulatory reform, in December 2023, the Australian Government established a copyright and AI reference group. The Reference Group is to have ongoing engagement and consultation with stakeholders across sectors, including the creative arts, media, film, education, research and technology, to enable the government to respond to existing and future challenges to copyright from AI. The government has recognised that AI has given rise to a number of copyright issues including:

  • the use of copyright material to train AI models, and whether this should be permissible and, if so, the licensing models required to compensate rights holders;
  • the mining of websites for text, images and data, and whether this should be permissible and, if so, how rights holders are to be protected and compensated;
  • transparency, disclosure and attribution where content has been created by using AI generative tools or where existing copyright material has been used to train AI models;
  • the use of AI to create imitations of existing copyright works; and
  • whether AI-generated works should be given copyright protection.[28]

What next?

The dynamic digital environment and the development and use of AI technologies will continue to outpace regulatory reform in Australia. Any reforms must be agile and flexible enough to adapt to the evolution of existing and emerging technologies while providing adequate safeguards against potential harm and ensuring transparency. For the laws to be effective, they must be capable of quick and cost-efficient enforcement, with appropriate mechanisms for complaint resolution.

 

[1] This article is based on a presentation made by Katarina Klaric at the IBA Annual Conference in Paris, at the panel session ‘AI regulations – a rising global issue’, on 31 October 2023.

[2] World Intellectual Property Organization, 2019, Technology Trends 2019 – Artificial Intelligence.

[3] Australian Government, Department of Industry, Science and Resources, Safe and Responsible AI in Australia, discussion paper (June 2023) p 3.

[5] In November 2023, Australia together with the EU and 27 countries, including the US, United Kingdom, Japan, China, Brazil and Chile, signed the Bletchley Declaration affirming that, ‘AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible’.

[6] Artificial Intelligence Act, c II, Prohibited Artificial Intelligence Practices, Art 5.

[7] Artificial Intelligence Act, cc III and IV. Also see European Parliament Press Release www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law accessed 10 June 2024.

[8] Online Safety Act 2021 (Cth).

[9] Before road vehicles can be supplied into the Australian market, they must meet the Road Vehicles Standard Act 2018 and Road Vehicle Standards Rules 2019. This regulated framework was implemented on 1 July 2021.

[10] For examples of regulated and unregulated (excluded) software-based medical devices, see Australian Government, Department of Health, Therapeutic Goods Administration (October 2021) pp 4–6.

[11] ISO/IEC 5339:2024: Information technology – Artificial intelligence – Guidance for AI applications;

ISO/IEC 5392:2024: Information technology – Artificial intelligence – Reference architecture of knowledge engineering; ISO/IEC 5338:2023: Information technology – Artificial intelligence – AI system life cycle processes; AS ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management systems, December 2023; AS ISO/IEC 23894:2023: Information technology – Artificial intelligence – Guidance on risk management; ISO/IEC 8183:2023: Information technology – Artificial intelligence – Data life cycle framework; AS ISO/IEC 23053:2023: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML); ISO/IEC 24668:2022: Information technology – Artificial intelligence – Process management framework for big data analytics; ISO/IEC 22989:2022: Information technology – Artificial intelligence – Artificial intelligence concepts and terminology; and AS ISO/IEC 38507:2022: Information technology – Governance of IT – Governance implications of the use of artificial intelligence by organisations.

[12] Australian Government, Department of Industry, Science and Resources, Australia’s Artificial Intelligence Ethics Framework – Australia’s AI Ethics Principles www.industry.gov.au/public accessed 10 June 2024.

[13] See n 3 above; and Nicole Gillespie, Steve Lockey, Caitlin Curtis, Javad Pool and Ali Akbari, ‘Trust in Artificial Intelligence: A Global Study’ (2023) The University of Queensland and KPMG Australia.

[14] Royal Commission into Robodebt Scheme Report.

[15] Ibid.

[16] Australian Government, Services Australia, Information for Customers, Explaining Class Action Settlement Payments (VID1252/2019) www.servicesaustralia.gov.au/sites/default/files/2022-09/explaining-class-action-settlement-payments.pdf accessed 10 June 2024.

[17] Prygodicz v Commonwealth of Australia (No 2) [2021] FCA 634 at [5]; see the court orders attached to the decision.

[18] See n 14 above.

[19] Ibid, Automated decision-making Recommendation 17.1.

[20] Ibid.

[21] ACCC, Digital Platform Services Inquiry – September 2022 Interim Report – Regulatory Reform (2022), c 4; and DP-REG Joint Submission to Department of Industry, Science and Resources, Safe and Responsible AI in Australia, discussion paper (July 2023).

[22] Ibid.

[23] Australian Government, Government Response – Privacy Act Review Report (September 2023) p 11.

[24] Australian Government, AI Expert Group Terms of Reference www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence/ai-expert-group-terms-reference and www.industry.gov.au/news/new-expert-group-will-help-guide-future-safe-and-responsible-ai-australia accessed 10 June 2024.

[25] The forum members are the ACCC, the Australian Communications and Media Authority (ACMA), the eSafety Commissioner (eSafety) and the Office of the Australian Information Commissioner (OAIC).

[26] DP-REG Joint Submission to Department of Industry, Science and Resources, Safe and Responsible AI in Australia, discussion paper (July 2023).

[27] Parliament of Australia, Senate Select Committee on Adopting Artificial Intelligence (AI) www.aph.gov.au accessed 10 April 2024.

[28] Mark Dreyfus KC MP (Attorney-General), ‘Copyright and AI Reference Group to Be Established’ (press release, 5 December 2023) https://ministers.ag.gov.au/media-centre/copyright-and-ai-reference-group-be-established-05-12-2023 accessed 10 June 2024; and Australian Government, Attorney-General’s Department, Artificial Intelligence (AI) and Copyright, issue summary paper (2023).