Introduction: Building Towards a Human-Centred Approach to AI
Unlike certain other jurisdictions, the UK has not – so far – published a draft law aimed at regulating AI. Instead, various government departments have published position papers, strategy documents, governance frameworks and recommended approaches on AI, over the last five years in particular. There is extensive material in this regard, and it tends to be subject-specific; that is certainly to be expected, when we consider just how impactful the technology is, and the massive potential that the fourth industrial revolution carries.
Governments are tasked with the tricky balance of ensuring sufficient safeguards against the potential harm of AI, whilst not hampering innovation and allowing AI’s potential to be fully realised. Those documents are helpful for understanding the policy direction for AI in the UK, making it possible to trace what future regulation, guardrails or restrictions might include.
There are also certain subject-specific legal positions and developments that touch on AI but are not necessarily targeted at AI specifically. Some of those principles are longstanding (for example, copyright law), while others are more recent (data protection in particular). We have also seen the courts considering some key legal questions around AI, particularly for patents.
However, it can sometimes be tricky to cut through the extensive volume of information and speculation to identify: (i) the core themes and principles that will provide the structure for any future AI regulation; and (ii) what that might mean in practice for lawyers counselling AI teams and advising on AI products and functionality in the UK. Further, it is easy to fall into the trap of reading AI regulation and policy as relating only to (or restricted to) the models and machines it is ultimately targeted at. That is an incorrect approach; in reality, humans are the targets, and humans will be responsible for implementing whatever guardrails or restrictions are adopted. Humans build the models, as well as providing and supervising the training. Policy and legal principles will therefore focus on what those people are permitted to do as they develop AI, what proactive compliance they will need to carry out before launching a model, and what their ongoing obligations will be after launch. We do not know what shape future regulation will take – it could range from a relatively short ethical or trust-focused code to an exhaustive “AI Act”. Whatever shape it takes, humans will be at its core and it will require:
This article therefore seeks to:
Policy and Legal Developments
A number of UK policy and legal developments indicate the direction that future regulation or governance of AI will take. This section outlines just a few examples.
Policy
Various government departments in the UK have published important position papers on AI, strategy documents, and even established specific bodies tasked with both developing the potential of AI and ensuring responsible and ethical use.
Two other trends are essential to an overall understanding of the conversation around regulation and governance, and will assist with any company’s compliance operation; we note them briefly here.
Legal
Data protection
Given the centrality of data to AI, and the inevitability of AI processing personal information, the UK Information Commissioner’s Office (ICO) has set out formal guidance on AI and data protection.
Copyright
The UK has existing provisions in copyright law dealing with the question: who owns a work that is created by AI? Pursuant to Section 9(3) and Section 178 of the Copyright, Designs and Patents Act 1988 (CDPA), the “author” of a “computer-generated work” is “the person by whom the arrangements necessary for the creation of the work are undertaken”. In other words, even where an AI system creates a copyright work, the AI itself cannot be considered the author of that work; authorship vests in the person who made the necessary arrangements.
These provisions have been in place for some time, long before AI was so readily and commonly used. As such, the UK IPO has consulted the public at least twice on this question and whether the provisions remain appropriate. It seems the status quo will continue for some time. There are related questions, however, that do not carry the same certainty – for example, whether an AI system can itself infringe copyright. The answer is likely no, but that issue remains untested.
Patents
UK patent applicants must name a human inventor, who is the “actual deviser of the invention”. The right to own a patent, and therefore the monopoly and economic benefits that come with it, flow from the inventor. This was tested in the DABUS case, Thaler v Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374, where the Court of Appeal was asked to consider whether an AI model could be an inventor of a patent. The judges found that an AI system cannot be an inventor of a patent – only a human can. The Court also found that:
The UK IPO has consulted the public at least twice on this question and whether it is appropriate. Similar to the copyright position, there is an open question as to whether an AI can be an infringer of a patent. The answer is likely no, but that issue remains untested.
National Security and Investment Act 2021
Under the National Security and Investment Act 2021, AI is listed as one of the 17 key sectors in which a mandatory notification must be issued to the government when a qualifying entity is carrying out research into artificial intelligence, or developing or producing goods, software or technology that use artificial intelligence, for the purposes of:
The wide-sweeping nature of this Act, and the associated obligations, should not be underestimated or ignored.
Wider developments
Once again, it is inadvisable to consider only the UK. AI itself does not submit to a single jurisdiction, models are often created across borders, and it is extremely difficult to create effective (and profitable) jurisdiction-specific algorithms. There are many other legal developments from around the world that should be reviewed alongside this material – for example, the high-impact (and controversial) draft EU AI Act, as well as subject-specific legislation that restricts use of AI in certain situations (eg, facial recognition technology in California and many other US states).
Why Are these Policy and Legal Positions So Important?
The potential benefits of AI are phenomenal – not only in terms of machines being able to do more things, and do them more quickly, than humans, but also in terms of doing things and making discoveries that humans could not. The reality, however, is that AI is scary to many people, who hear “AI” and immediately think “killer robots”. That is extremely challenging for policy-makers who want to unlock the power of AI. Many people rightly worry about the impact on jobs, while others raise valid philosophical and ethical questions about the invasiveness of certain AI tools.
What the UK government seems to be doing is playing the long game: trying to build trust in AI, whilst not stifling innovation in the short term. Unless people buy into AI and trust it, its potential may not be met, or may at least be delayed. In trying to build that trust, the government is working out what rules of the road should be adopted, and how.
The hope is that we get sensible, outcomes-based frameworks or ethical codes which garner user trust and do not hamper innovation. The danger is that we get overly prescriptive or innovation-harming regulation.
Common Themes: What Might the Framework for UK AI Regulation/Governance Look Like?
There are at least six key themes that can be traced across all of the policy, strategic, ethical and legal materials so far. These key themes will likely be central to any future regulation of AI in the UK.
Fairness: ensuring that algorithms treat humans fairly; that they are not discriminatory or biased and do not adversely impact human rights, particularly those of vulnerable people; and ultimately that they do not cause humans harm.
Accountability: there is a clear need to ensure that the right liability frameworks exist, and that it is clear who is responsible for AI outcomes.
Transparency: this is aimed at making sure AI can be identified and explained, and that AI’s capabilities and limitations are communicated to the public.
Privacy: this is a key theme, recognising that algorithms are fundamentally built on data and that, as a result, personal data may be processed on a large scale. There could also be specific additional requirements where AI is applied to areas such as facial recognition and biometrics.
Human oversight: this generally relates to oversight of the process and resulting algorithms, particularly around automated processing, but also ensuring that decisions are subject to human review, or that humans oversee testing.
Security/safety: this goes to a number of issues, primarily ensuring that AI itself is not dangerous, but also that people are protected more generally, and that the model cannot be hacked or compromised.
There may well also be more specific provisions around:
What Does this Mean for Companies and In-House Lawyers?
Overall compliance operation
These themes will ultimately create obligations that companies and legal teams need to assess and analyse. It may be helpful to consider how, in practical terms, those obligations might occupy the company’s energy, which is what follows in this section. At a general level, however, the following applies.
Fairness
Much has been said about the need for algorithms to be built in a fair way, free from bias, and it is obviously important to ensure that diverse teams are in charge of building models. Lawyers can also contribute. Look at the data underlying the algorithm – what is it? Is the data biased? For example, a lot of training is done on out-of-copyright works. Think about when those were written – 150 years ago? Earlier? Those works reflected a society which is very different to ours. Even the best AI will struggle to pick up the subtle anti-patriarchy nuances in Jane Austen novels.
However, it is not just about having a fair algorithm in the product build phase; fairness needs to be built into the testing and experimentation phases too. When you have selected your data and trained your model, what happens when it goes in front of humans? How does it interact with them? Does it treat them in a fair way? If the implementation of the model actually ends up creating unfair or biased results, that is where risk might arise, so fairness is a concept that should be regularly revisited – the sketch below illustrates one simple check of that kind.
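By way of illustration only, the following is a minimal sketch of the kind of outcome-fairness check that could be re-run at each phase. Everything here is an assumption for illustration: the results table, the protected attribute (“group”), the decision column (“approved”) and the 0.2 threshold are hypothetical, not a legal or regulatory standard.

```python
# Illustrative sketch only: a simple demographic parity check on model
# outcomes, assuming a hypothetical results table with a protected
# attribute ("group") and a binary model decision ("approved").
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Approval rate per group.
rates = results.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the best- and
# worst-treated groups.
disparity = rates.max() - rates.min()
print(rates.to_dict(), f"disparity={disparity:.2f}")

# The threshold would be set by the business's and legal team's risk
# appetite; 0.2 here is purely illustrative.
if disparity > 0.2:
    print("Potential fairness issue: escalate for human review")
```

The point of such a check is less the metric itself than the discipline: it is re-run whenever the data, the model or the deployment context changes, so that fairness is revisited rather than assessed once at build time.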
Accountability
This is particularly important for legal because we are basically wired to identify and articulate liability structures. AI models and products need to be accountable. That means we can push the engineers to explain how the AI works, so that we can articulate who is responsible for its outcomes, and we can suggest appropriate feedback and review channels, so that we know the legal position. Are we mixing internal inputs and external inputs? Do we know who is responsible for what? Can we identify why a certain result is happening, and who is responsible?
The message to the business is: do not let algorithms become an extension of bureaucracy. When things go wrong, it is not good enough to just shrug or point to the person next to you. That will not satisfy the requirements around accountability as they start to appear in regulations and law.
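To make that concrete, a minimal sketch of a decision-audit record is set out below. All names and fields are illustrative assumptions, not a prescribed standard – the point is simply that every AI outcome can be traced to a model version and an accountable owner.

```python
# Illustrative sketch only: a minimal decision-audit record so that every
# AI outcome can be traced back to a responsible owner and model version.
# All field names are assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str          # which system produced the outcome
    model_version: str       # the exact version that ran
    owner: str               # the accountable team or individual
    inputs_summary: str      # what the model was given (or a reference to it)
    outcome: str             # what the model decided
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = DecisionRecord(
    model_name="credit-scoring",
    model_version="2.3.1",
    owner="risk-models-team",
    inputs_summary="applicant features ref #1234",
    outcome="declined",
)
print(record)
```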
Transparency
In other words, can algorithms be explained? Are we avoiding “black boxes” as much as we can (or can we explain why we are not)? When it comes to testing and implementation, are we clear what is AI and what is human? We might need to communicate that.
Are we over-claiming what our AI does or does not do? Are we clear on its shortcomings? Also, are we collecting enough information to have a robust audit trail, in respect of our data selection, training and risk assessments?
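One lightweight way to keep such an audit trail – sketched here purely as an illustration, with assumed field names rather than any prescribed format – is a structured “model card” stored alongside the model itself, covering data selection, training and risk assessments:

```python
# Illustrative sketch only: a lightweight "model card" capturing the audit
# trail described above - data selection, training and risk assessments.
# The fields and references are assumptions, not a regulatory checklist.
import json

model_card = {
    "model": "support-chat-summariser",
    "intended_use": "Summarising customer support tickets for agents",
    "known_limitations": [
        "Not evaluated on non-English tickets",
        "May omit rare technical terms",
    ],
    "data_selection": {
        "sources": ["internal tickets 2020-2023"],
        "exclusions": ["tickets flagged as containing special category data"],
    },
    "training": {"date": "2024-01-15", "responsible_team": "ml-platform"},
    "risk_assessments": ["DPIA ref 2024-007", "bias review ref 2024-012"],
}

# Stored alongside the model artefact, so the audit trail travels with it.
print(json.dumps(model_card, indent=2))
```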
Privacy
Essentially, we are talking about a privacy by design “plus” approach. We already have responsibilities under GDPR, but these are greater when it comes to AI, autonomous decisions, facial recognition, biometrics, etc.
We can build on existing governance mechanisms, but we are presented with challenges. For example: how does this affect our legitimate interests balancing act? How do we get AI-specific consents or opt-ins once it has been implemented? Are we protecting against "scope creep" such that the machine does not start processing data that we have not thought about or predicted as in scope?
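On the scope-creep point specifically, a guard can be as simple as the sketch below: an allow-list of fields that have actually been through a privacy assessment, with anything else triggering a fresh review before processing. The field names and the trigger are assumptions for illustration only.

```python
# Illustrative sketch only: a simple allow-list guard against "scope creep",
# so the pipeline refuses fields that were never assessed as in scope.
# Field names are assumptions for illustration.
ASSESSED_FIELDS = {"age_band", "postcode_district", "ticket_text"}

def check_scope(record: dict) -> dict:
    """Reject any input containing fields outside the assessed scope."""
    unexpected = set(record) - ASSESSED_FIELDS
    if unexpected:
        raise ValueError(
            f"Fields outside assessed scope: {sorted(unexpected)} - "
            "trigger a new privacy review before processing."
        )
    return record

# This would pass...
check_scope({"age_band": "25-34", "ticket_text": "my order is late"})
# ...while this would raise, flagging the need for a fresh assessment:
# check_scope({"ticket_text": "...", "health_status": "..."})
```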
Human oversight
The key issue here is identifying where humans (not just lawyers) have to fit in under the law, and then what the triggers are for human override or oversight within AI applications. This will mainly arise in the testing and implementation phases.
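A common engineering pattern for such a trigger – shown below as a sketch, with a purely illustrative confidence threshold rather than any legal standard – is to route low-confidence model outputs to a human review queue rather than acting on them automatically:

```python
# Illustrative sketch only: one common pattern for a human-oversight
# "trigger" - route low-confidence model outputs to human review rather
# than acting on them automatically. The threshold is an assumption to be
# set by the business and its risk assessment, not a legal standard.
HUMAN_REVIEW_THRESHOLD = 0.85

def route_decision(prediction: str, confidence: float) -> str:
    """Return where the decision should go: auto-apply or human queue."""
    if confidence < HUMAN_REVIEW_THRESHOLD:
        return f"HUMAN_REVIEW: '{prediction}' (confidence {confidence:.2f})"
    return f"AUTO: '{prediction}' (confidence {confidence:.2f})"

print(route_decision("approve", 0.97))  # auto-applied
print(route_decision("decline", 0.62))  # escalated to a human
```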
Security/safety
As lawyers, we will not be responsible for ensuring that our algorithms and systems are cybersecure, but we can emphasise the need for risk assessments – and those should be risk- and harm-based, so are we looking both before and after implementation?
Also, are we prepared to enforce against third parties? Can we? Legal has to be involved, in case things go wrong.
Specific legal questions
Those key themes all generate expansive questions, and they have an eye on the longer term. There are, however, relevant legal questions that affect use of AI now, which companies and legal teams should be confronting as part of their compliance efforts generally. So, focusing more at a micro level, when it comes to product counselling and advising the business on AI issues, we might consider the following.
Conclusion: Prepare Now
Whatever shape regulation of AI takes in the UK, it will place humans at the core of its focus, as well as at the core of the compliance programmes expected of companies using and developing AI.
This article has extracted the core themes from the growing body of policy and legal initiatives and developments, and outlined practical guidance on how legal departments can continue to prepare for what comes next. What is clear is that the compliance operation will be onerous and extensive; it is therefore a good idea to start (or continue) preparing now.
100 New Bridge Street
London
EC4V 6JA
UK
+44 20 7919 1087
John.Groom@bakermckenzie.com
www.bakermckenzie.com