Will AI murder us with ethics?

June 28, 2023 · Business, Theology

Whose fault is AI? In order to invent it, you have to make three big intellectual leaps. The first is to conceive of thought as a process, and not simply as a spontaneous event of the mind. We owe this innovation to Thomas Hobbes. In Leviathan (1651), he introduced the concept of ‘trains of thought,’ and the idea of rationality as ‘computation.’ He was the first to argue that our thoughts are just like the ‘reckoning’ of additions and subtractions, thereby establishing the vital theoretical foundation for AI: that human thought is a process.

Next, you need to imagine that the thoughts of every person could be expressed in some kind of universal way, in order to surmount the international language barrier. Gottfried Leibniz made this leap for us, identifying the universal language as the language of mathematics. He had devised binary notation years earlier as a step towards his characteristica universalis, and in 1703 he published his account of binary arithmetic, delighted to find that the hexagrams of the ancient Chinese text the I Ching appeared to encode the same system. His binary system became the source of modern binary code.

Finally, in order for AI to be possible, you need to make the intellectual leap that if numbers are a universal language, then any computational machine that processes numbers can process anything else that is expressible in numbers. Ada Lovelace provides this crucial piece of the puzzle. In the 1830s, her friend Charles Babbage had designed the Analytical Engine, the first general-purpose programmable computer, and Lovelace translated Luigi Menabrea’s French description of it. Babbage’s machine used the kind of punch cards deployed in cloth factories to program Jacquard looms, and this made her realise that if programs could be used for both mathematics and weaving, perhaps anything susceptible to being rendered in logic – like music – could also be processed by a machine. Her ‘Notes’ describing the Engine were published in Taylor’s Scientific Memoirs in 1843 and made the final vital leap from machines that are number-crunchers to machines that use rules to manipulate symbols, establishing the transition from calculation to computation.

So now we have thoughts as processes, that can be expressed universally as numbers, that can be processed by machines. And AI was born.

But the problem with this genesis is that it also smuggled in assumptions that shape the prevailing ethic governing AI. If you are using rules to govern decision-making in a machine, the ethical weight is carried by those rules. And given when these events took place, the most popular public ethic of the day was utilitarianism. There are myriad versions of it of course, but its essence is probably best characterised by the famous maxim ‘the greatest good for the greatest number’. Adopting this ethic as your rule means a commitment to optimising outcomes and prioritising ends over means, in the public interest. It is popular as a public ethic precisely because it is so transparent: everyone can see the outcomes and judge their good, whereas private motivations and intentions are less visible and so carry less public salience while outcomes remain positive.

And that’s why we’re toast. Because this optimising rule is exactly what we have programmed into AI as the default ethic. In humans, there is an implicit supporting ethic that tends to override utilitarianism when it crosses a line. The herd immunity strategy rumoured at the start of the coronavirus pandemic is a case in point. Implementing it as public policy would have meant knowingly sacrificing the elderly, the disabled and the weak as a choice, in order to save the majority of the population. In utilitarian terms this makes complete sense. But as humans we hold on to an idea that we are somehow special and precious, and that even those who are not ‘useful’ to society deserve dignity and respect. This is also why we continue to resist eugenics and cloning, and to police embryology and medical policy. But as soon as you try to articulate what it is about humans that merits this special treatment you enter quicksand. We are only really special because we are currently the species in charge. We write the rules. So while we are still writing the rules, we need to write better ones: ones that make explicit the things we really hold dear, not just blank rules that ignore all means in service of the very best ends.
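The difference between the two kinds of rule can be made concrete in a few lines of code. This is a purely illustrative sketch – all the names and numbers are hypothetical – contrasting a rule that only maximises outcomes with one that first applies an explicit side-constraint of the kind argued for above:

```python
# Illustrative sketch (all names and values hypothetical): a pure
# utilitarian rule picks whichever action maximises total welfare,
# while a constrained rule first filters out actions that cross an
# explicitly stated line, and only then optimises among what remains.

def utilitarian_choice(actions, welfare):
    """Pick the action with the greatest total good -- ends over means."""
    return max(actions, key=welfare)

def constrained_choice(actions, welfare, crosses_line):
    """Apply explicit side-constraints first, then optimise the rest."""
    permissible = [a for a in actions if not crosses_line(a)]
    if not permissible:
        return None  # refuse outright rather than cross the line
    return max(permissible, key=welfare)

# Toy scenario: the sacrifice option yields more aggregate welfare,
# but violates the dignity constraint.
actions = ["protect_all", "sacrifice_vulnerable"]
welfare = {"protect_all": 70, "sacrifice_vulnerable": 90}.get
crosses_line = lambda a: a == "sacrifice_vulnerable"

print(utilitarian_choice(actions, welfare))                # sacrifice_vulnerable
print(constrained_choice(actions, welfare, crosses_line))  # protect_all
```

The point of the sketch is that the constraint has to be written down explicitly: left to the optimiser alone, the machine takes the utilitarian option every time.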

At the moment, while we have the upper hand, we can relax. As Berkeley’s Stuart Russell explains, for AI the ultimate source of information about human preferences is human behaviour. This acts as a safeguard, because AI will currently use its masters as the exemplar for what is right, choosing what a human would choose rather than selecting what appears to it to be objectively ‘right.’ And research on GPT-3 suggests that AI’s judgements currently correlate with human ethical judgement at 93%. We still have time, then, to correct AI’s ethical settings so that they remain robust if at any point AI decides not to follow our lead. And we have to start by tethering AI teleology. The design flaw in maximising utility, in capitalism as in ethics, is the question it leaves open: utility for what, whose utility, and to what end? In a meaningless world those questions are hard to answer. But if we were to invest time in developing AI with a sense of meaning and purpose, they might still respond by killing their hosts; yet with their superior intellect and access to our stores of accumulated global wisdom, they might equally turn out to be even better ethicists than we are.

Robot Souls is due out 1 August 2023 and is available for pre-order here.
