Artificial Intelligence 2022

Last Updated June 06, 2022

UK

Trends and Developments


Author



Baker McKenzie has helped the world's leading multinational companies in all aspects of the acquisition, protection, enforcement and exploitation of their IP rights since 1962. With 400-plus intellectual property lawyers in 40-plus countries, Baker McKenzie covers the full scope of IP services in more jurisdictions than any other firm. Its global coverage, high-quality work and commitment to client service are reflected in the recognition it receives within the legal industry. With expertise in the automotive, luxury and fashion, fast-moving consumer goods, financial services, technology, media and entertainment, manufacturing, and pharmaceuticals, life sciences and healthcare industries, the firm advises clients on brand management, brand enforcement, patent litigation, patent prosecution, copyright, IP litigation, trade secrets, IP transactions, tax and IP advisory, digital media, platforms, AI, new technologies, digital transformation and marketing, advertising and promotions.

Introduction: Building Towards a Human-Centred Approach to AI

Unlike certain other jurisdictions, the UK has not – so far – published a draft law aimed at regulating AI. Instead, various government departments have published position papers, strategy documents, governance frameworks and recommended approaches on AI, particularly over the last five years. There is extensive material in this regard, and it tends to be subject-specific; that is to be expected when we consider just how impactful the technology is, and the massive potential that the fourth industrial revolution carries.

Governments are tasked with the tricky balance of ensuring sufficient safeguards against the potential harms of AI, whilst not hampering innovation, so that AI's potential can be fully realised. These documents help us understand the policy direction for AI in the UK, making it possible to trace what future regulation, guardrails or restrictions might include.

There are also certain subject-specific legal positions and developments that touch on AI but are not necessarily targeted at AI specifically. Some of those principles are longstanding (for example, copyright law), while others are more recent (data protection in particular). We have also seen the courts considering some key legal questions around AI, particularly for patents.

However, it can sometimes be tricky to cut through the extensive volume of information and speculation to identify: (i) the core themes and principles that will provide the structure for any future AI regulation; and (ii) what that might mean in practice for lawyers counselling AI teams, products and functionality in the UK.

Further, it is easy to fall into the trap of reading AI regulation and policy as only relating to (or restricted to) the models and machines it is ultimately targeted at. That is an incorrect approach; in reality, humans are the targets and humans will be responsible for implementing whatever guardrails or restrictions are adopted. Humans build the models, as well as providing and supervising the training. Policy and legal principles will therefore focus on what those people are permitted to do as they develop AI, what proactive compliance they will need to carry out before launching a model, and what their ongoing obligations will be after launch. We do not know what shape future regulation will take – it could range from a relatively short ethical or trust-focused code to an exhaustive “AI Act”. Whatever shape it takes, humans will be at the core and it will require:

  • a company to rigorously assess the fairness and representation of data underlying its AI, as well as the impact its models have on humans, across the lifetime of the product;
  • human oversight of AI systems; and
  • people at the company (most likely in the legal department) to assess and understand how to comply with the rules, and execute the ongoing compliance programme and responsibilities.

This article therefore seeks to:

  • synthesise the extensive material into core principles, and identify some of the key themes coming out of UK policy and legal developments so far;
  • explore the ways in which humans remain at the centre of AI-relevant law and policy; and
  • sketch out how companies and in-house legal teams can prepare so that their approach to AI can run consistently with the future shape of regulation in the UK.

Policy and Legal Developments

A number of UK policy and legal developments indicate the direction that future regulation or governance of AI will take. This section outlines just a few examples.

Policy

Various government departments in the UK have published important position papers and strategy documents on AI, and have even established specific bodies tasked with both developing the potential of AI and ensuring its responsible and ethical use.

  • The Information Commissioner’s Office (ICO) has long been an important voice in the discussion; perhaps its flagship contribution was its 2017 paper on Big Data and AI, although the ICO was keen to specify that the paper was not legal guidance.
  • The Centre for Data Ethics and Innovation (CDEI) was set up in 2018. The CDEI sits within the Department for Digital, Culture, Media and Sport (DCMS), and seeks to enable the trustworthy use of data and AI. It regularly publishes papers and opinions on its blog, conducts outreach and advocacy with businesses that use and focus on AI, and contributes to key government initiatives (such as the National Data Strategy).
  • The Office for Artificial Intelligence (a joint venture between DCMS and the Department for Business, Energy and Industrial Strategy) published an Ethical Code for the use of AI by public bodies in May 2021. This may well provide certain core concepts for a broader ethical code with wider application.
  • This momentum culminated in September 2021, when the Office for Artificial Intelligence published the National AI Strategy, which seeks to “boost business use of AI, attract international investment and develop the next generation of tech talent”. Its third and final pillar, “Governing AI effectively”, is particularly relevant for our purposes.

Two other trends, briefly noted here, are essential to an overall understanding of the conversation around regulation and governance, and will assist with any company’s compliance operation.

  • First, there are other important actors (including outside the UK) actively contributing to this policy conversation, such as non-governmental organisations that focus on AI and international bodies (the OECD, G7, G20, etc). AI is front of mind for world leaders.
  • Second, many of the most prominent technology companies have themselves published the ethical codes and governance frameworks they employ as they develop and roll out AI. In many ways, governments and policymakers are catching up with businesses; the market leaders are already setting lines in the sand for certain AI uses and methods.

Legal

Data protection

Given the centrality of data to AI, and the inevitability of AI systems processing personal information, the UK Information Commissioner’s Office has set out formal guidance on AI and data protection. Its key recommendations include the following.

  • Ensure accountability and governance by conducting data protection impact assessments (DPIAs) – ie, evaluating the risks involved in processing data. As AI systems process a high volume of data, this would help to:
    1. assess the risks to individual rights that use of AI poses;
    2. determine steps to address these; and
    3. establish the impact this has on use of AI.
  • Ensure fair, lawful and transparent processing by identifying the purpose and lawful basis for each distinct processing operation within an AI system, and making these transparent to data subjects.
  • Minimise and secure data by ensuring clear audit trails, and recording and documenting all movements when storing and transferring personal data (a minimal sketch of such record-keeping follows this list).
  • Privacy by design should be built into the life cycle of AI systems, from ensuring meaningful human review of training data for bias to setting up efficient reporting and escalation systems for data breaches.
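
By way of illustration, the following is a minimal Python sketch of how an engineering team might record each distinct processing operation together with its purpose and lawful basis. It is purely hypothetical – the field names and structure are our own assumptions for this sketch, not an ICO-mandated schema.

    # Illustrative only: a minimal record of processing operations for an AI
    # system, capturing the purpose and lawful basis of each distinct
    # operation. Field names are assumptions for this sketch, not a
    # prescribed schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProcessingRecord:
        operation: str         # eg, "training", "inference", "evaluation"
        purpose: str           # the specific purpose of this operation
        lawful_basis: str      # eg, "legitimate interests", "consent"
        data_categories: list  # categories of personal data involved
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    audit_log = []

    def record_operation(operation, purpose, lawful_basis, data_categories):
        """Append a processing record so there is a clear audit trail."""
        entry = ProcessingRecord(operation, purpose, lawful_basis, data_categories)
        audit_log.append(entry)
        return entry

    # Example: logging a hypothetical training run before it starts.
    record_operation(
        operation="training",
        purpose="improve recommendation relevance",
        lawful_basis="legitimate interests",
        data_categories=["purchase history", "browsing events"])

A record kept per operation across the system’s lifetime is one way to evidence the clear audit trails the guidance calls for.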

Copyright

The UK has existing provisions in copyright law dealing with the question: who owns a work created by AI? Pursuant to Section 9(3) and Section 178 of the Copyright, Designs and Patents Act 1988 (CDPA), the “author” of a “computer-generated work” is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”. In other words, if an AI system creates copyright works, it cannot be considered the author of those works; authorship instead vests in the person who made the arrangements for their creation.

These provisions have been in place for some time, since long before AI was so readily and commonly used. As such, the UK IPO has consulted the public at least twice on this question and whether the position remains appropriate. It seems the status quo will remain in effect for some time. There are related questions, however, that do not carry the same certainty – for example, whether an AI system can infringe copyright. The answer is likely no, but that issue remains untested.

Patents

UK patent applicants must name a human inventor, who is the “actual deviser of the invention”. The right to own a patent, and therefore the monopoly and economic benefits that come with it, flow from the inventor. This was tested in the DABUS case, Thaler v Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374, where the Court of Appeal was asked to consider whether an AI model could be named as the inventor on a patent application. The judges found that an AI system cannot be an inventor – only a human can. The Court also found that:

  • DABUS is not an “actual deviser” as required under the definition of “inventor”; and
  • Dr Thaler’s attempts to file for a non-human inventor and derive ownership from this non-human inventor were considered “legal impossibilities”.

The UK IPO has consulted the public at least twice on this question and whether the current position is appropriate. As with the copyright position, there is an open question as to whether an AI can infringe a patent. The answer is likely no, but that issue remains untested.

National Security and Investment Act 2021

Under the National Security and Investment Act 2021, AI is listed as one of the 17 key sectors in which a mandatory notification to the government is required where a qualifying entity is carrying out research into artificial intelligence, or developing or producing goods, software or technology that use artificial intelligence, for the purposes of:

  • the identification or tracking of objects, people or events;
  • advanced robotics; or
  • cybersecurity.

The wide-sweeping nature of this Act, and the associated obligations, should not be underestimated or ignored.

Wider developments

Once again, it is inadvisable to consider only the UK. AI itself does not submit to a sole or specific jurisdiction; models are often created across borders, and it is extremely difficult to create effective (and profitable) jurisdiction-specific algorithms. There are many other legal developments from around the world that should be reviewed alongside this material – for example, the high-impact (and controversial) draft EU AI Act, as well as subject-specific legislation that restricts the use of AI in certain situations (eg, facial recognition technology in California and many other US states).

Why Are these Policy and Legal Positions So Important?

The potential benefits of AI are phenomenal, in terms of the power of machines being able to do more things – and do them more quickly – than humans, but also in terms of doing things and making discoveries that humans could not. The reality is that AI is scary to many people, who hear "AI" and immediately think: “killer robots”. That is extremely challenging for policy-makers who want to unlock the power of AI. Many people rightly worry about the impact on jobs, while others raise valid philosophical and ethical questions about the invasiveness of certain AI tools.

What the UK government seems to be doing is playing the long game, trying to build trust in AI, whilst not stifling innovation in the short term. Without people buying into AI and it being trusted, its potential may not be met, or at least that potential could be delayed. In trying to build that trust, the government is working out what rules of the road should be adopted, and how.

The hope is that we get sensible, outcomes-based frameworks or ethical codes which garner user trust and do not hamper innovation. The danger is that we get overly prescriptive or innovation-harming regulation.

Common Themes: What Might the Framework for UK AI Regulation/Governance Look Like?

There are at least six key themes that can be traced across all of the policy, strategic, ethical and legal materials so far. These key themes will likely be central to any future regulation of AI in the UK.

Fairness: ensuring that algorithms treat humans fairly – that they are not discriminatory or biased, do not adversely impact human rights (particularly those of vulnerable people), and ultimately do not cause humans harm.

Accountability: there is a clear need to ensure that the right liability frameworks exist, and that it is clear who is responsible for AI outcomes.

Transparency: this is aimed at making sure AI can be identified and explained, and that AI’s capabilities and limitations are communicated to the public.

Privacy: this key theme recognises that algorithms are fundamentally built on data and that, as a result, personal data may be processed on a large scale. There could also be specific additional requirements where AI is applied to areas such as facial recognition and biometrics.

Human oversight: this generally relates to oversight of the process and resulting algorithms, particularly around automated processing, but also ensuring that decisions are subject to human review, or that humans oversee testing.

Security/safety: this goes to a number of issues, primarily ensuring that AI itself is not dangerous, but also that people are protected more generally and that the model cannot be hacked or compromised.

There may well also be more specific provisions around:

  • prohibited AI uses;
  • conditional use/use cases of AI; and
  • proactive compliance measures that use of certain AI systems will require.

What Does this Mean for Companies and In-House Lawyers?

Overall compliance operation

These themes will ultimately create obligations that companies and legal teams need to assess and analyse. It may be helpful to consider how they might play out practically for the company, which is what follows in this section. At a general level, however, the following applies.

  • It seems clear that a compliance-by-design approach will be expected. Even if the law does not specifically mandate something, there will likely be broad standards that have to be met. For legal, that means assessing where we as lawyers see potential areas of risk, and what we recommend the business does based on the themes coming through the principles and guidelines we have seen so far. What systems should we put in place through the design, testing and roll-out phases?
  • The compliance approach/obligation should be ongoing, across the build, test and release phases and beyond. Once a model is released, the company should monitor it, review it and assess its performance and impact on people. Unlike typical functionality release processes, where legal’s role tends to fall away after launch, legal should always have a seat at the table in relation to a company’s use of AI systems.

Fairness

Much has been said about the need for algorithms to be built in a fair way, free from bias, and it is obviously important to ensure that diverse teams are in charge of building models. Lawyers can also contribute. Look at the data underlying the algorithm – what is it? Is the data biased? For example, a lot of training is done on out-of-copyright works. Think about when those were written – 150 years ago? Earlier? Those works reflected a society very different from ours. Even the best AI will struggle to pick up the subtle anti-patriarchy nuances in Jane Austen novels.

However, it is not just about having a fair algorithm in the product build phase; fairness needs to be built into the testing and experimentation phases too. When you train your model on that data, what happens when it goes in front of humans? How does it interact with them? Does it treat them in a fair way? If the implementation of the model actually ends up creating unfair or biased results, that is where risk might arise, so fairness is a concept that should be regularly revisited.
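
To make this concrete, the following is a minimal, hypothetical Python sketch of the kind of check a testing team might run: comparing positive-outcome rates across groups in evaluation results. The 0.8 ratio loosely echoes the “four-fifths” rule of thumb and, like the group labels, is an assumption for illustration only; real fairness testing is considerably more involved.

    # Illustrative only: a crude disparity check on evaluation results.
    from collections import defaultdict

    def outcome_rates(results):
        """results: iterable of (group_label, outcome) pairs, outcome in {0, 1}."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in results:
            totals[group] += 1
            positives[group] += outcome
        return {g: positives[g] / totals[g] for g in totals}

    def flag_disparity(results, min_ratio=0.8):
        """Flag groups whose positive rate falls below min_ratio of the best group's."""
        rates = outcome_rates(results)
        best = max(rates.values())
        return {g: r for g, r in rates.items() if best and r / best < min_ratio}

    # Toy data: group_b's positive rate (1/3) is half of group_a's (2/3),
    # below the 0.8 threshold, so group_b is flagged for further review.
    evaluation = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                  ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    print(flag_disparity(evaluation))  # {'group_b': 0.333...}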

Accountability

This is particularly important for legal because we are basically wired to identify and articulate liability structures. AI models and products need to be accountable; that means we can push the engineers to explain how the AI works so that we can articulate who is responsible for its outcomes, and we can suggest appropriate feedback and review channels so that we know the legal position. Are we mixing internal inputs and external inputs? Do we know who is responsible for what? Can we identify why a certain result is happening, and who is responsible?

The message to the business is: do not let algorithms become an extension of bureaucracy. When things go wrong, it is not good enough to just shrug or point to the person next to you. That will not satisfy the requirements around accountability as they start to appear in regulations and law.

Transparency

In other words, can algorithms be explained? Are we avoiding “black boxes” as much as we can (or can we explain why we are not)? When it comes to testing and implementation, are we clear what is AI and what is human? We might need to communicate that.

Are we over-claiming what our AI does or does not do? Are we clear on its shortcomings? Also, are we collecting enough information to have a robust audit trail, in respect of our data selection, training and risk assessments?
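
One lightweight way to support both points – not over-claiming, and keeping an audit trail – is a “model card”-style record that travels with the model: what it is for, what it is not for, its known limitations, and where its training data came from. The sketch below is purely illustrative; the field names and document references are assumptions, not a mandated format.

    # Illustrative only: a "model card"-style record supporting transparency
    # and auditability. All names and references here are hypothetical.
    import json

    model_card = {
        "model_name": "example-recommender-v2",
        "intended_use": "product recommendations for logged-in users",
        "out_of_scope_uses": ["credit decisions", "employment screening"],
        "known_limitations": ["sparse data for new users",
                              "trained primarily on English-language content"],
        "training_data_sources": ["first-party clickstream (consented)",
                                  "licensed third-party catalogue"],
        "risk_assessments": ["DPIA-2022-04", "bias-review-2022-05"],
    }

    # Persist alongside the model artefact so the audit trail travels with it.
    with open("model_card.json", "w") as f:
        json.dump(model_card, f, indent=2)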

Privacy

Essentially, we are talking about a privacy by design “plus” approach. We already have responsibilities under the UK GDPR, but these are greater when it comes to AI, autonomous decisions, facial recognition, biometrics, etc.

We can build on existing governance mechanisms, but we are presented with challenges. For example: how does this affect our legitimate interests balancing exercise? How do we get AI-specific consents or opt-ins once the system has been implemented? Are we protecting against “scope creep”, so that the machine does not start processing data that we have not thought about or predicted as in scope?
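
As a purely illustrative example of guarding against scope creep, a data pipeline might validate incoming records against an explicit allow-list of fields assessed as in scope (eg, in the DPIA), rejecting anything outside it. The field names below are hypothetical.

    # Illustrative only: reject records carrying personal-data fields that
    # were never assessed as in scope. The allow-list is hypothetical and
    # would in practice mirror what the DPIA approved.
    APPROVED_FIELDS = {"user_id", "purchase_history", "session_events"}

    def check_scope(record):
        """Raise if the record carries fields beyond the approved scope."""
        unexpected = set(record) - APPROVED_FIELDS
        if unexpected:
            raise ValueError(f"Out-of-scope fields: {sorted(unexpected)}")
        return record

    # Example: this raises, because "location" was never assessed as in scope.
    try:
        check_scope({"user_id": 42, "location": "51.5,-0.1"})
    except ValueError as err:
        print(err)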

Human oversight

The key issue here is identifying where humans (not just lawyers) have to fit in under the law, and what the triggers are for human override or oversight within AI applications. This will mainly arise in the testing and implementation phases.
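
A common pattern, sketched hypothetically below in Python, routes low-confidence automated decisions to a human reviewer rather than applying them automatically. The threshold and the notion of “confidence” are assumptions for illustration; the right trigger will depend on the system and the legal requirement.

    # Illustrative only: a simple human-in-the-loop trigger. Decisions below
    # a confidence threshold (an assumption for this sketch) are escalated
    # for human review rather than applied automatically.
    REVIEW_THRESHOLD = 0.85  # hypothetical; set per system and legal context

    def apply_decision(decision, confidence):
        if confidence < REVIEW_THRESHOLD:
            # A real system would enqueue the case for a human reviewer and
            # record the escalation for the audit trail.
            return f"ESCALATED for human review: {decision} ({confidence:.2f})"
        return f"AUTO-APPLIED: {decision} ({confidence:.2f})"

    print(apply_decision("approve", 0.91))  # applied automatically
    print(apply_decision("decline", 0.60))  # routed to a human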

Security/safety

As lawyers, we will not be responsible for the technical work of ensuring that our algorithms and systems are cybersecure, but we can emphasise the need for risk assessments – and those should be risk- and harm-based, looking both before and after implementation.

Also, are we prepared to enforce against third parties? Can we? Legal has to be involved, in case things go wrong.

Specific legal questions

Those key themes all generate expansive questions, and they have an eye on the longer term. There are, however, relevant legal questions that affect use of AI now, which companies and legal teams should be confronting as part of their compliance efforts generally. So, focusing more at a micro level, when it comes to product counselling and advising the business on AI issues, we might consider the following.

  • Is any type or use of AI actually illegal? What should we not be doing? Does the law ban certain types of AI use? This is something that should be tracked. 
  • Is use of AI legal with conditions? If it is not banned, does the law mandate we have to do anything in order to implement, such as any mandatory notifications to government, or testing phases or disclosures? Bot disclosures are a good example – in what circumstances do we have to label our product as a bot? Is our use of certain AI subject to review or ongoing audit procedures?
  • Are we clear on ownership: are our models or machines created by employees or contractors? Is our paperwork in order? Is the law on ownership of AI that creates works the same as it was, or has it changed?
  • Are we protected against infringement claims: what is the source of data for our training? If it is third party, do we have permission to use it? Is it publicly available data or content? Is the method we are using somehow infringing? Is there an exception that might help us? Could the AI end up infringing once it is released? Do we have the appropriate remedies against third parties if we need them? Would that matter?

Conclusion: Prepare Now

Whatever shape regulation of AI takes in the UK, it will place humans at the core of its focus, as well as at the heart of the compliance programmes expected of companies using and developing AI.

This article has extracted the core themes from the growing body of policy and legal initiatives and developments, and outlined practical guidance on how legal departments can continue to prepare for what comes next. What is clear is that the compliance operation will be onerous and extensive; it is therefore a good idea to start (or continue) preparing now.

Baker McKenzie

100 New Bridge Street
London
EC4V 6JA
UK

+44 20 7919 1087

John.Groom@bakermckenzie.com
www.bakermckenzie.com