The terms of the debate around legal artificial intelligence should be changed. I come at it from the side of the luddites, and I think the zealots should take more account of the downsides (with no offence meant to either side by the choice of names).
On two successive days last week, there were articles in the Gazette to support either group. The zealots had their day on 6 May, when it was reported that a £2-per-letter AI law firm had gained approval from the Solicitors Regulation Authority. The luddites followed up the next day, when a judge referred a solicitor to the SRA because hallucinated cases had been cited in court.
I note that the new Pope, speaking at the weekend, named AI as one of the main challenges facing us. His namesake predecessor, Pope Leo XIII, had published an open letter to all Catholics in 1891, ‘Rerum Novarum’ (‘Of Revolutionary Change’), which reflected on the destruction wrought by the Industrial Revolution on the lives of workers. Pope Leo XIV made his recent remarks ‘in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labour’.
I have recited luddite arguments against AI before – its vast energy usage with consequences for the environment and climate, its concentrated ownership among a few US mega-firms, and the danger of it running out of human control.
But further dangers appear as its usage becomes more widespread. I was reminded of one by a small social media discussion I came across a few days ago. It reported an exchange in which one man said: ‘The future is AI. It doesn't replace senior developers, but no more need for juniors.’ And the other replied: ‘Where do you think senior developers come from? Straight from the womb?’ The poster commented that this applies to the law: you can’t know whether the research and writing it generates is acceptable unless you are already an experienced lawyer.
This is a strong argument for the luddites. The consequence of AI is that it makes us lazy and stupid.
We already know this from our everyday lives. For instance, we no longer need to read maps, because the satnav takes us there. On one memorable occasion, my passenger, reading a map on her lap, said to me (the driver): turn left. The satnav said: turn right. The map-reader was correct, and we had to correct the satnav’s mistake through a detour. The lawyers citing hallucinations are following the satnav.
We no longer need to know languages, either. I was amazed and relieved when a European Court of Justice case last week, available only in French and Polish, switched into good English in a second after a quick click on ‘Translate’. Yet I know that when machines translate for me, I often have to change words and phrases in the original English because they have mistranslated my sense – and I only know this because of my knowledge of the other language.
If our juniors are not being trained to learn the hard way, through careful research and analysis of original sources over a number of years, but rather through the provision of an answer at the click of a mouse, standards will deteriorate. If slop goes in, slop comes out.
The shiny magic of AI’s instant answers distracts us from asking serious questions that underlie every AI transaction.
For example, there is the data. I note that AI is creeping into many of the everyday applications that we use unthinkingly, such as Zoom, Adobe and WhatsApp. Most of the questions I am about to ask were probably relevant even before the introduction of AI, but AI sharpens them. Who owns the data of this interaction with the AI tool? Where is it stored, and what use is made of it? If it is a Zoom or WhatsApp call and a client’s business, or even the client’s name, is mentioned, is lawyer–client confidentiality protected? If our own intellectual property, such as the analysis of a case or the explanation of new legislation, is put through the machine, will it be used to train the machine without our consent, so that it can compete with us in the future?
I am fully aware of the time saved by machines analysing data and writing text, in the law and in the area of health and science. But every AI transaction comes at a cost, and every further reliance on AI increases that cost, which is mostly hidden from us.
All I am saying is that the zealots should talk about these costs when pushing us further (and the luddites should talk about the benefits when trying to hold us back).
Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society