The assistant or the master: regulating the rise and rise of AI

Until we know more about the technology and its risks, people must remain accountable for decision-making, writes Herbert Smith Freehills' Jasveer Randhawa

According to the Centre for Data Ethics and Innovation, now known as the Responsible Technology Adoption Unit, a majority of UK adults believe that artificial intelligence (AI) can improve public services such as healthcare, education and policing. Yet concerns about the potential misuse of AI remain a significant barrier to accepting that the benefits of this still-nascent technology outweigh the risks.

Amongst all the noise, one thing seems clear: there is a need for responsible and transparent implementation of AI. The question that is seemingly causing the greatest challenge, however, is the extent to which regulation is necessary.

Some argue that the previous government's 2024 Generative AI (GAI) framework is a good starting point. That is undoubtedly the case: with its 10 guiding principles for the use of GAI by government organisations, a structure exists which should give confidence to the voting public. The framework highlights various scenarios where AI should not be used so that the risk of harm is minimised, such as the avoidance of fully automated decision-making. The message is clear: targeted use of AI in areas that do not require significant skill and judgment, and are unlikely to have a serious impact, is acceptable, so long as appropriate human involvement and assurance processes are in place.

But this clarity of message doesn't equate to a clear set of rules with the necessary safeguards. Many expected the new UK government to use its first King's Speech to announce a UK AI Bill. Whilst it did not, expectations remain that legislation to meet its manifesto commitment of "binding regulation on the handful of companies developing the most powerful AI models" will emerge. 

In light of the wording in the manifesto, the most interesting question for me is how far the scope of any legislation or regulatory framework will reach. Would it be justifiable in principle to end up only with legislation aimed at private companies that develop and sell AI technology, without the same or a parallel regime governing public authorities that use AI to exercise state power in areas such as immigration, access to benefits and healthcare? In my view it would be counterintuitive to focus on the possible power and influence of a handful of private organisations while ignoring the power the state has over every aspect of society.

Despite the desire to introduce checks around AI, there is a difficult tightrope to walk to ensure we are not stifling innovation or denying the UK the benefits of something that could become truly transformative. A study last year showed that in some scenarios low-performing employees enjoyed a 43% average boost to their performance when using OpenAI's GPT-4. Although limited in scope, this study is grounds for optimism as to the improvements that could be achieved with the widespread use of AI, in terms of both efficiency and quality.

Instances of AI having a positive impact are welcome, but greater output does not necessarily equal accountability for decisions. So, particularly in areas where it really matters, such as the use of AI by the state to make decisions impacting all our lives, measures must be put in place to ensure decision-makers remain accountable for any decisions reached with AI assistance. The 2024 framework reflects this, referring to establishing a chain of human responsibility across the AI lifecycle, including responsibility throughout the supply chain. In other words, it has been acknowledged that, at least until we know more about AI and its risks, AI should be the assistant in decision-making rather than the master.

I would suggest this guiding principle is a good one to stick to, regardless of the type of organisation or sector. Although not subject to public law requirements of fairness and reasonableness, private companies will still face analogous concerns about transparency and accountability.

The challenges of implementing and maintaining adequate AI governance within businesses, given the technology's broad application, growing complexity and rapid pace of change, are similarly immense. Policies, protocols and contracts, while an absolute necessity for organisations engaging with AI, can become outdated almost as soon as they are drafted, meaning boardrooms may have to consider just how much they can be relied upon at any given time.

All of which takes us back to the central issue, which in my view should not be whether to regulate at all; that question suggests regulation is by its very nature unwelcome. The real question should be whom to regulate and how much, understanding that regulation is flexible and that its level and intensity can be varied depending on what the context requires. It is the design of the regulation that poses the true challenge: can we create a scheme that allows us to harness technological advances while protecting society from unchecked power and prioritising human values?

Where we might end up remains to be seen. However, one thing can be said with certainty: AI's rapidly growing capability means there are more questions than answers at this stage. The eventual answers to those questions will say a great deal about what we value in our society.

London-based Jasveer Randhawa is a professional support consultant in Herbert Smith Freehills' disputes practice. 
