Governance in the Digital Economy: The Challenge of Governing Algorithms
Blanca Escribano
Ernst & Young Abogados, Madrid
blanca.escribano.canas@es.ey.com
The digital economy’s paradigm shift
The ‘Fourth Industrial Revolution’, a term coined by Klaus Schwab,[1] creates new governance challenges. We are immersed in a digital transformation triggered by the exponential effect of emerging technologies that are changing the world: changing the way we communicate, socialise, entertain ourselves, work, produce, and provide services.
Indeed, as Marc Andreessen (Netscape, Mosaic) observed years ago, software is eating the world. Today, organisations are experiencing the ‘softwarisation’ of everything. There is a value shift in the digital economy: the value of information and intangible assets is increasing, and organisations need to protect them; data is becoming essential, as it is the fuel that powers most products and services. Moreover, the physical and virtual worlds are merging. Technologies that were once science fiction, such as artificial intelligence (AI), 3D printing, augmented reality, sensorisation, and the connection of everything (the Internet of Things, or IoT) and everyone (social networks) through the internet, are already a reality.
AI and the governance challenges
Among the exponential technologies, AI (which can be defined as a collection of technologies that combine data, algorithms, and computing power) is one of the most important applications of the data economy. AI is becoming mainstream because it can extract the most value from data.
It is important to note that AI is not by default a neutral tool. The rules and values of the context within which it is developed and deployed have an impact on its applications and outcomes. Moreover, when AI systems use machine-learning[3] techniques, they can achieve a certain level of (or even full) autonomy from the code or instructions given by their creators (eg, designers, developers, data analysts), leading to consequences that go beyond those of a conventional computer program. That complexity explains why AI is probably the technology attracting the most attention from industry, as well as from academic and policy circles.
The use of AI in relation to corporate and organisation governance[4] can be seen from three different perspectives.
The first is AI as a tool for collecting information to report on and monitor sustainability indicators. By using and sharing this information, AI helps us know more about organisations and assign value to their risks and opportunities, so that internal and external actors can make better financial and non-financial decisions. Today, AI is a key technology for achieving sustainability goals.
Another is the use of AI systems to support human directors in the decision-making process, or even to replace physical board members altogether. Currently, there is an ongoing debate about whether the legal frameworks in place allow directors to be replaced by AI systems.
Although both the above points are currently very topical and relevant, this article focuses on a third: how boards and governance bodies address the questions AI and related technologies bring to strategy, risk management, and control within their organisations.
Boards, governance bodies and AI
Emerging technologies, and AI in particular, not only challenge the governance of the technologies themselves but also require new policies. Boards need to be ready to ensure that their organisations have a governance structure adapted to a new playing field that combines physical with virtual aspects.
AI systems are replacing some of the decision-making tools and processes previously operated by humans. But, as Margrethe Vestager (EU Commission, 2017) commented:[5] ‘Businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So, they better know how that system works.’
Consequently, and in line with the G20/Organisation for Economic Co-operation and Development (OECD) Principles of Corporate Governance, boards and governing bodies face the challenge of ensuring that their organisations have a robust, agile, and fit-for-purpose AI governance structure for setting objectives and monitoring performance. Organisations are accountable and liable for what their systems do.
In the past three years, the OECD, the World Economic Forum, the European Commission, the European Parliament, and nearly a hundred public and private initiatives have produced statements setting out high-level principles for the appropriate governance of the development and use of AI, robotics, and related technologies in order to ‘increase citizens’ safety and trust in those technologies’. All of these institutions agree on the need to provide guidelines, toolkits, or checklists with which organisations can self-assess the risks that AI entails, together with the means to deploy and implement a governance structure.
Stakeholders need to decide whether to incorporate the assessment process into existing governance mechanisms or implement new bespoke processes for the new technologies. This choice will depend on the internal structure of the organisation as well as its size and available resources.
The independent High-Level Expert Group on AI set up by the European Commission, in its Ethics Guidelines for Trustworthy AI (2019),[6] recommends that top management and the board ‘should be discussing and evaluating the AI systems’ development, deployment, or procurement […] when critical concerns are detected’. When describing how this should be implemented, the recommendation is that:
‘Organisations can adapt their charter of corporate responsibility, key performance indicators, codes of conduct, or internal policy documents to add to the striving towards a responsible or trusted AI […]’.
How organisations should achieve trustworthy AI
There is a global consensus that AI governance should include technical, legal, and ethical components. In order to avoid harm, AI should be safe, secure, reliable, and robust. It should comply with the laws in place, which apply irrespective of the technology. And it should be ethical, which goes beyond formal compliance with existing regulations in any given jurisdiction. Sometimes legislation is not up to speed with the dilemmas that cutting-edge technology creates. These three components should be implemented from the outset: legal by design, ethical by design, technically robust by design. In other words, ‘X by design’.
But ethics are not the same everywhere and, for that reason, organisations are establishing what ethical AI should look like based on the values most obviously affected by the use of AI systems. Consequently, in the EU, the High-Level Expert Group on AI extracted principles from the Charter of Fundamental Rights of the European Union (EU Charter) and the EU Treaty, highlighting four of them: respect for human autonomy, prevention of harm, fairness, and explicability.[7]
To implement and give effect to these four principles, the Commission suggested that AI systems must meet seven requirements: human oversight, technical robustness, data governance, transparency, avoidance of bias and discrimination, social and environmental sustainability, and accountability. Furthermore, in its ‘Resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of AI’, the European Parliament proposes an AI certification of ethical compliance, mandatory for AI systems developed, deployed, or used within EU territory.
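Purely by way of illustration, such a requirements-based assessment could be operationalised as a simple internal checklist. The sketch below is a hypothetical, minimal example in Python: the requirement names follow the seven listed above, but the 0-3 scoring scheme, the escalation threshold, and the system name are assumptions chosen for illustration, not part of any official framework.

```python
from dataclasses import dataclass, field

# The seven requirements listed above, encoded as a hypothetical
# self-assessment checklist. The 0-3 scoring scheme and the gap
# threshold are illustrative assumptions only.
REQUIREMENTS = [
    "human oversight",
    "technical robustness",
    "data governance",
    "transparency",
    "avoidance of bias and discrimination",
    "social and environmental sustainability",
    "accountability",
]

@dataclass
class AISelfAssessment:
    system_name: str
    scores: dict = field(default_factory=dict)  # requirement -> 0..3

    def record(self, requirement: str, score: int) -> None:
        """Record a score of 0 (not addressed) to 3 (fully addressed)."""
        if requirement not in REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        if not 0 <= score <= 3:
            raise ValueError("Score must be between 0 and 3")
        self.scores[requirement] = score

    def gaps(self) -> list:
        """Requirements that are unscored or scored below 2 and
        should be escalated to the board for discussion."""
        return [r for r in REQUIREMENTS if self.scores.get(r, 0) < 2]

# Example: a hypothetical system with one well-covered requirement
# and obvious gaps everywhere else.
assessment = AISelfAssessment("credit-scoring model")
assessment.record("human oversight", 3)
assessment.record("data governance", 1)
print(assessment.gaps())
```

In practice, an organisation would map each requirement to concrete evidence (audit reports, documentation, test results) rather than a single numeric score; the point of the sketch is simply that the seven requirements lend themselves to systematic, repeatable review.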
Along the same lines, the World Economic Forum published Empowering AI Leadership: An Oversight Toolkit for Boards of Directors to guide boards in monitoring compliance with AI ethics, which includes principles very similar to those identified by the EU. The toolkit sets out a three-stage methodology: (1) selecting the activities that require governance (ethics, risk/reward, technology, and people); (2) evaluating governance; and (3) assigning governance responsibilities, splitting obligations between the ethics board and the board of directors.
The ethical component therefore introduces a new dimension to governance frameworks. Organisations become accountable for the ethical spectrum of decisions associated with the development, deployment, and use of AI systems. As a result, large organisations[8] are already choosing to create codes of ethics for AI, entrusting the task either to a person in charge of AI ethics issues; to an internal or external panel, committee, or board; or to a board member with ethics training.
AI also heightens the importance of data governance. Without data, there is no AI. Data trains and feeds AI, and the fairness, legality, and ethics of the data used will condition the AI itself. Training machine-learning systems requires enormous datasets. Proper data governance requires quality checks of the external data sources used by AI, and puts oversight mechanisms in place for their collection, storage, processing, and use, in order to avoid the creation or reinforcement of bias that can lead to discrimination. Choices about how to evaluate the data will affect how the data is used, which, in turn, means more choices about any final algorithms. Data should be transparent, traceable, interoperable, compliant with privacy and security regulations, and auditable.
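As a minimal sketch of what one such oversight check might look like in practice, the snippet below flags groups that are under-represented in a training dataset with respect to a protected attribute. The record format, the attribute name and the ten per cent threshold are hypothetical assumptions chosen for illustration; real bias audits are considerably more involved.

```python
from collections import Counter

def check_representation(records, protected_attribute, threshold=0.10):
    """Return the share of each group that falls below `threshold`.

    records: list of dicts, one per training example (hypothetical format).
    protected_attribute: key to inspect, eg 'gender' (assumed field name).
    threshold: minimum acceptable share per group (illustrative value).
    """
    counts = Counter(r[protected_attribute] for r in records)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < threshold
    }

# Toy dataset: group 'B' makes up only 5% of the records, falls below
# the 10% threshold, and would be flagged for review before training.
data = [{"gender": "A"}] * 95 + [{"gender": "B"}] * 5
print(check_representation(data, "gender"))  # {'B': 0.05}
```

A check of this kind addresses only one narrow aspect of data governance (representativeness); traceability, provenance, privacy compliance, and auditability each call for their own controls.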
Future AI regulations
Going a step further, EU institutions are of the opinion that specific AI regulation, directly applicable in all Member States, is crucial for achieving a trusted and responsible AI ecosystem in the EU, just as data protection regulations raised the privacy standard in the EU (and beyond). The legislative process is under way: the first draft of an AI regulation was published in April 2021 and will very likely become the world’s first AI-specific regulation.[9] For more details, see the article ‘EU Draft Regulation for Regulating AI’ in this newsletter.
Key points
To sum up, AI creates new technology governance challenges and amplifies existing ones, owing to the possibility of unintended consequences and the liability an organisation can face when using AI. Governance models should be redesigned. The guidelines that international organisations and regulators are proposing are a good benchmark and can help organisations anticipate the specific compulsory regulation that will very likely be enacted in some regions, such as the EU. We are currently witnessing how, in many fields, authorities are investigating and asking for disclosure of algorithms.[10] Sector regulations are including specific statements as to how compliance with legal obligations must be embedded in algorithms; it must be kept in mind that, in addition to specific AI rules, the entire legal system also applies to AI.
Because obligations apply ‘by design’ from the very outset of any AI system’s operation, and because organisations will be held accountable for the consequences, the sooner they start checking their algorithms, AI systems, robotics, and related technologies for compliance, the better. Accountability also has a new dimension, the ethical one, which will be measured through non-financial key performance indicators and ethical compliance certificates. Reports to shareholders and filings to regulators should include information about the use and risks of AI, which should also be detailed in non-financial reporting and audited.
Boards have a responsibility to know what their AI systems do and how their algorithms work, in order to ensure their organisation has a robust governance structure for setting objectives and monitoring performance. Risk assessments, audits, and ethics boards all need to operate in a very agile manner so as to keep pace with rapid technological change. Only then will organisations be able to detect failures and liability and, as a result, earn the trust of consumers, partners, investors, shareholders, and society at large.
Notes
[1] Klaus Schwab, The Fourth Industrial Revolution (World Economic Forum, 2016), describes the shift to AI technologies and its effects.
[3] For a definition of machine learning, see ‘What is Machine Learning? A Definition’, Expert.ai, 6 May 2020, https://www.expert.ai/blog/machine-learning-definition, accessed 15 July 2021.
[4] Governance of digital assets in general, and of AI and algorithms in particular, is relevant not only for corporations or companies but also for governmental and non-governmental organisations. For this reason, this article uses the term ‘corporate and organisation governance’ as a wider term extending beyond private/public corporations.
[5] Margrethe Vestager, EU Commissioner responsible for competition, speech on algorithms and competition at the Bundeskartellamt (Germany’s Federal Cartel Office) 18th Conference on Competition, Berlin, 16 March 2017.
[6] The AI HLEG is an independent expert group set up by the European Commission in June 2018. Its Ethics Guidelines for Trustworthy AI was published on 8 April 2019.
[7] Respect for human autonomy is strongly associated with the rights to human dignity and liberty (reflected in Arts 1 and 6 of the EU Charter). The prevention of harm is strongly linked to the protection of physical or mental integrity (reflected in Art 3 of the EU Charter). Fairness is closely related to the rights to non-discrimination, solidarity, and justice (reflected in Arts 21 and following of the EU Charter). Explicability and responsibility are closely linked to the rights relating to justice (as reflected in Art 47 of the EU Charter). Explicability refers to the ability of the parties involved to furnish an explanation of why an AI system behaved the way it did. For further reading on this subject, see, for instance, L Floridi, ‘Soft Ethics and the Governance of the Digital’, Philosophy & Technology, March 2018, Volume 31, Issue 1, pp 1-8, cited in the EU Ethics Guidelines for Trustworthy AI, https://ec.europa.eu/futurium/en/node/6945#_ftn23, accessed 18 March 2021.
[8] For instance, the AETHER Committee was established at Microsoft in 2017. Google created an AI ethics board (the Advanced Technology External Advisory Council, or ATEAC) but cancelled it after the public outcry caused by the controversial behaviour of some board members. Facebook formed a special ethics team to prevent bias in its AI software, see https://www.cnbc.com/2018/05/03/facebook-ethics-team-prevents-bias-in-ai-software.html. There are also international partnerships for evaluating the impact of AI, such as the Partnership on AI (https://www.partnershiponai.org), which includes Facebook, Amazon, Google, IBM, and Microsoft; and the AI Now Institute (https://ainowinstitute.org). Public organisations are also moving in this direction, such as the European Data Protection Supervisor, which was an early mover, creating an ethics board in 2015 (https://edps.europa.eu/press-publications/press-news/press-releases/2015/edps-set-ethics-board_en).
[9] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final.
[10] At the time of writing, there are investigations in which access to algorithms has been requested from the companies under investigation, such as Facebook (https://www.businessinsider.com/facebook-google-algorithm-scrutiny-australia-accc-2019-7; https://morningstaronline.co.uk/article/w/spanish-labour-ministry-hails-epic-new-gig-economy-law-forcing-disclosure-algorithms) and Deliveroo (https://blog.quintarelli.it/2021/01/court-rules-deliveroo-used-discriminatory-algorithm.html). Moreover, in the EU, there are legislative initiatives, or in-force legislation, in which access to algorithms is already foreseen, such as the proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (15.12.2020, COM(2020) 825 final, 2020/0361 (COD)); the proposal for a Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act, Brussels, 15.12.2020, COM(2020) 842 final, 2020/0374 (COD)); and the Spanish Digital Services Tax Act (Ley 4/2020, 15 October).