UK Regulators Seek Architectural Advice as They Lay the Foundation for Governing Algorithms
As governments across the globe increase their focus on artificial intelligence and other algorithms, the UK Digital Regulation Cooperation Forum (DRCF) announced its next moves. A joint initiative among the major UK regulators touching on digital services (the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and Ofcom, the UK communications regulator), the DRCF promotes greater cooperation and coordination among them. Recently, the DRCF released its workplan for the financial year and published two discussion papers on algorithms, The benefits and harms of algorithms: a shared perspective from the four digital regulators and Auditing algorithms: the existing landscape, role of regulators and future outlook. The discussion papers explore how regulators and the public should consider the potential risks and benefits of algorithmic processing,1 how algorithmic auditing might reduce those risks, and the appropriate role for regulators in such audits. The DRCF’s call for stakeholders’ views on these questions and the DRCF’s priorities offers an early chance to shape how intrusive regulatory involvement in algorithmic auditing will become in the UK. The call for views closes on 8 June 2022.
DRCF Work Plan
Among other priorities for the coming year, the DRCF will focus on algorithmic transparency. The DRCF recognises that the widespread use of algorithms to collect and process data underpins many digital services and brings many benefits. For instance, algorithms may be used to detect fraudulent activity, connect individuals to their friends on social media platforms and direct navigation for deliveries. However, without proper oversight, the use of algorithms can lead to individual harm and anti-competitive outcomes. The DRCF's regulators wish to support the use of algorithmic processing, promoting the benefits while mitigating the risks to individuals, data protection goals and competition.
The DRCF intends to build its understanding of how to assess algorithmic systems effectively, and how to support their appropriate deployment by businesses, by:
- Improving its capability for algorithmic auditing through knowledge sharing and testing digital solutions to monitor algorithmic processing systems in order to identify harms.
- Researching the third-party algorithmic auditing market. The DRCF is assessing where regulators can play the most valuable role in influencing the development of this emerging market.
- Promoting transparency in algorithmic procurement by supporting vendors and procurement teams through a publication on best practices, harmful behaviours and clarity on each regulator’s role.
As a launching pad for this year’s work, the DRCF published two papers that set out the benefits and harms of algorithmic processing and the current landscape for algorithmic auditing.
1. The benefits and harms of algorithms: a shared perspective from the four digital regulators
Broadly supportive of algorithmic processing, the DRCF draws the following high-level conclusions from its past year’s work:
- Algorithms offer many benefits to individuals and society, and these benefits can increase with continued responsible innovation.
- Harms can occur both intentionally and inadvertently.
- Those procuring and/or using algorithms often know little about their origins and limitations.
- There is a lack of visibility and transparency in algorithmic processing, which can undermine accountability.
- A ‘human in the loop’ is not a foolproof safeguard against harms.
- There are limitations to DRCF members’ current understanding of the risks associated with algorithmic processing.
The DRCF also hints at future areas of regulatory activity. With respect to algorithmic transparency, the DRCF notes a
concern that the number of players involved in algorithmic supply chains is leading to confusion over who is accountable for their proper development and use. A study looking at business-to-business AI services, for example, found that the roles of data “processor” and “controller” as expressed in data protection legislation are not always clearly identified, meaning those building, selling and using algorithms may not be fulfilling their obligations under the UK GDPR.
Moreover, lack of transparency may prevent individuals from ‘hav[ing] access to the “logic” of a system’ as required ‘under UK data protection law for solely automated decisions that significantly affect them (with certain exceptions).’ (That said, the government is contemplating removing this Article 22 requirement from the UK GDPR.)
Apart from data protection concerns, the DRCF explains how insufficiently transparent algorithms can promote mis- and disinformation, which the Online Safety Bill pending before Parliament would give Ofcom greater tools to combat (including the power to impose fines of up to the higher of 10% of global annual turnover or £18 million).
In addition, algorithmic processing gives rise to discrimination and other fairness concerns, implicating various legal requirements. According to the DRCF:
The UK GDPR for example mandates that organisations only process personal data fairly and in a transparent manner. Separately, the Equality Act prohibits organisations from discriminating against people on the basis of protected characteristics, including in cases where they are subject to algorithmic processing. The Consumer Rights Act, meanwhile, includes a “fairness test”, whereby a contract term will be unfair if “contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations to the detriment of the consumer”. This applies to contracts between traders and consumers, including those which involve algorithmic processing.
The DRCF also points to algorithmic systems’ potential to distort markets through self-preferencing, ‘propagat[ing] and amplify[ing] issues within’ a market as in the 2010 ‘Flash Crash,’ and even ‘autonomously learn[ing] to collude’ in pricing.
The DRCF concludes this discussion paper with a call for input on its findings, other potential areas of focus, how the DRCF should prioritise its efforts, how the DRCF can guide ‘consumers and individuals’ in their interactions with the algorithmic processing ecosystem, and evidence regarding the harms and benefits of algorithmic systems.
2. Auditing algorithms: the existing landscape, role of regulators and future outlook
In the second discussion paper, the DRCF reviews how algorithmic auditing can ensure the benefits of algorithmic processing are realised and its risks are addressed. Among other considerations, the DRCF emphasises the potential connections between regulators and algorithmic auditing.
Algorithmic auditing refers to a range of approaches that may be adopted to review algorithmic processing systems. These approaches range from checking governance documentation, to testing an algorithm’s outputs, to inspecting its inner workings. Audits may be carried out internally, by third parties appointed by the organisation using the algorithm, by regulators or by other parties. Algorithmic auditing is not currently carried out extensively; however, the practice is expected to grow in the coming years.
Throughout the paper, the DRCF explores the roles that regulators could perform in relation to algorithmic auditing. These roles could include providing guidance on when auditing is appropriate or even mandating audits, establishing best practice principles, accrediting third-party audit providers, and helping ensure that auditors are granted sufficient access to audit algorithms by the organisations that use them. Regulators may also have a role in ensuring corrective action is taken when an algorithmic audit reveals potential harm. To encourage voluntary self-reporting of problems with algorithmic systems, regulators might offer lenient treatment in enforcement and penalties.
After surveying these possibilities, the DRCF advances six ‘hypotheses related to the potential role for regulators in the algorithmic audit landscape’:
- ‘There may be a role for some regulators to clarify how external audit could support the regulatory process, for example, as a means for those developing and deploying algorithms to demonstrate compliance with regulation, under conditions approved by the regulator.’
- ‘There may be a role for some regulators in producing guidance on how third parties should conduct audits and how they should communicate their results to demonstrate compliance with our respective regimes.’
- ‘There may be a role for some regulators in assisting standards-setting authorities to convert regulatory requirements into testable criteria for audit.’
- ‘Some regulators may have a role to provide mechanisms through which internal and external auditors, the public and civil society bodies can securely share information with regulators to create an evidence base for emerging harms. Such mechanisms could include a confidential database for voluntary information sharing with regulators.’
- ‘There may be a role for some regulators in accrediting organisations to carry out audits, and in some cases these organisations may certify that systems are being used in an appropriate way (for example, through a bias audit) in order to demonstrate compliance with the law to a regulator.’
- ‘For some regulators there may be a further role to play in expanding the use of regulatory sandboxes (where a regulator has power to do so) to test algorithmic systems in a controlled environment.’2
The DRCF seeks views on these hypotheses.
A Unique Opportunity for Influence
Both discussion papers invite interested parties to engage with the DRCF and help shape its agenda. The DRCF will take comments (drcf.algorithms@cma.gov.uk) until Wednesday, 8 June 2022. This consultation offers a unique opportunity to influence how prescriptive UK regulators will be with respect to algorithms and algorithmic auditing, as well as how intrusive required audits will be. Once the regulatory regime takes form, it will be much harder to induce the agencies to make modifications.
The Global Context
The DRCF work plan is part of a rapidly rising tide of algorithm and AI regulation in the UK and around the world. The UK government separately intends to set out its plan for governing AI in a white paper later this year. The European Parliament and Council of the European Union are deliberating over the comprehensive AI Act proposed last year by the European Commission. The US Federal Trade Commission is planning a rulemaking ‘to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.’ The White House Office of Science and Technology Policy is formulating an ‘AI Bill of Rights.’ (These EU and US efforts are on top of the restrictions on automated decision-making included in the GDPR and various US state privacy laws.) In China, the Cyberspace Administration implemented its Internet Information Service Algorithmic Recommendation Management Provisions earlier this year and is completing work on another regulation—this one on algorithmically created content, including technologies such as virtual reality, text generation, text-to-speech, and ‘deep fakes.’ Brazil, too, is developing an AI regulation.
It is not yet clear whether we are heading towards regulatory harmony or dissonance across jurisdictions. In the meantime, however, businesses can take practical steps to stay ahead of the curve.
© Arnold & Porter Kaye Scholer LLP 2022 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.
1. For purposes of this paper, the DRCF defines algorithmic processing as ‘the processing of data (both personal and non-personal) by automated systems. This includes artificial intelligence (AI) applications, such as those powered by machine learning (ML) techniques, but also simpler statistical models.’
2. Regulatory sandboxes are controlled environments where systems can be tested for compliance in real-world settings with the regulator in a cooperative, not an adversarial, posture.