Is SciFi Prophecy?

When Sir Thomas More wrote Utopia in 1516, he established in Europe a long tradition of imagining alternative worlds and the different kinds of societies they might contain. These days we call books like that SciFi, and when I was young they were definitely shelved at the nerdy end of the library. In our own galaxy, we’re currently nearing a state of existential overwhelm about AI. What will the future hold, and what on earth do we do about it? Luckily, we know. Because I think SciFi is actually prophecy. Of course it was written for entertainment, with storylines inevitably containing jeopardy and villains; binge-watching such stories is not reducing our anxiety. But what if we were instead to pan back a bit, and remember that at heart this genre has always functioned as a safe space to take dangerous ideas for a walk?

So zooming out across the whole genre reveals the full range of scenarios played out for us in every conceivable permutation. As prophecy, or at least scenario planning, these writers show us the kinds of questions we’ll need to get right in any future AI scenario. Even if you take a minimalist view, questions of control, accountability and unintended consequences are endlessly showcased in the genre. And if you want to take a maximalist view (which can even cater for the inclusion of aliens, lightspeeds permitting), the genre repeats the same themes over and over again: conflicts are resolved either by domination, or by agreement through law or democracy.

In particular, laws are generally in place to govern both property and personal rights, and to enforce hierarchies in either direction between the human and non-human. Plotlines about cyborgs, hybrids, superheroes and enhanced humans show us the range of imagined protections about cloning and augmentation together with options for dealing with entities that have abilities that surpass the average. We had not thought to turn our SciFi conventions into policy jams, but perhaps now we should: there is nothing the fans don’t know about how this might play out.

One example of prophecy in the genre is the trope about us inevitably forming relationships with AI, and it’s an area where policy could get ahead while we still have some time to think. At the moment, the relevant law for AI tends to be property law, and many SciFi scenarios show this as the dominant future for AI: owned by humans (or aliens) and subject to their control. It helps that most AI globally is in the hands of private corporations.

But as soon as AI becomes a more generally available consumer product, this default becomes problematic. We all remember giving names and backstories to our toys when we were young, so we know that it is a fundamentally human trait to subjectify objects. As David Gunkel puts it, anthropomorphism isn’t a bug, it’s a feature. In childhood, our tendency toward anthropomorphisation teaches us healthy lessons about respect, play and relationships. The grown-ups tell us off if we ‘abuse’ our toys by harming them, and the lessons learned from dolls and teddies are then extrapolated to household pets, who tend to give children memorable feedback on any attempts at mistreatment.

But if an AI is just property like a toy and not a pet, because there are no additional rules in play, there is a danger that as adults we deploy AI only as a servant, and increasingly use AI precisely to avoid the obligations that come with employing humans (or animals) instead. Autonomous weaponry and sexbots may epitomise this, but there is certainly potential for the full range of abuses along the way. This legal situation risks dehumanising us, and would also provide AI with dreadful training data, particularly in view of Stuart Russell’s AI Principles, which hold that in any scenario an AI should regard observed human behaviour, rather than any pre-stated rules on preferences, as the ultimate source of guidance. The stories show us how this invariably ends, but in real life we have the tools to change the story if we want to.
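To see why the training data matters, here is a deliberately tiny sketch – my own toy illustration, not Russell’s actual formulation – of an AI inferring what humans value purely from the behaviour it observes:

```python
# Toy sketch: an AI that treats observed human behaviour, rather than any
# pre-stated rules, as evidence of human preferences. All data is invented.
from collections import Counter

# What the AI watches its owners do, day after day
observed_behaviour = ["polite request"] * 2 + ["barked command"] * 8

def inferred_values(observations):
    """Weight each behaviour by how often humans actually choose it."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {action: count / total for action, count in counts.items()}

print(inferred_values(observed_behaviour))
# {'polite request': 0.2, 'barked command': 0.8}
# If abuse dominates what the AI sees, abuse dominates the 'values' it learns.
```

On this logic, an AI deployed only ever as a servant learns that servitude is what humans prefer – which is precisely the dreadful training data described above.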

SciFi as a genre is full of fruitful seams like this, from which we might mine future AI policy. Whether it’s developing a more forensic definition of what constitutes being human, or even just working out which laws on the statute books now require future-proofing, SciFi can be a safe space to take dangerous policy ideas for a walk, too. We had to learn from the tragedy of the Post Office scandal the error of the English common law presumption that computer evidence is automatically reliable. Would it not be safer to learn these lessons from SciFi instead, given the wide range of AI scenarios it has to offer? And if we invited a SciFi convention to the next conference on AI regulation, their subject matter expertise would enable them not only to QA emerging regulation, but also to spot gaps in the genre where fresh SciFi could help push our collective thinking forward…

Does it matter if AI is conscious?

It’s the Hard Problem, they say. We don’t know what consciousness is, they say. But AI might already be conscious, they say. I notice that in the conversation about AI, consciousness has become the deal-breaker. Once AI gets it? Game over. But I think consciousness is a red herring. This misdirection is a very good way to keep people busy, though.

What else is conscious? Bats, famously. In the case of animals, their consciousness gives them sentience, so we are increasingly spreading the net of rights ever further over the natural world, as our understanding of the concept of felt harms expands. But sentience is not the reason we gave corporations rights. It would not be particularly meaningful to describe a major brand as conscious, except in a metaphorical way. We gave corporations rights so that we might control them better. If they have legal personality, we can sue them. But with AI we have an awkward hybrid, a new thing that is not an animal, and not a corporation.

So is consciousness helpful as a metric, or not? Would it matter if AI were conscious? Only if it had no rights, because then we might abuse it. The only other way in which consciousness matters is for human exceptionalism, because we regard ourselves as the apex of the animal kingdom and our own consciousness as the apotheosis. Or perhaps it comes from some kind of proto-religious ghost memory, because we used to think that only God could bestow this gift? In that kind of worldview, nothing manufactured could have a soul, by definition. Is our thrall to AI and consciousness really a frisson at the thought that we have played God, and finally pulled it off?

I think it’s likely that AI will develop something akin to consciousness, in that it will have a felt sense of its own subjective experience. This will not make it human. Neither is a bat a human, yet it seems to us to be conscious. That a bat is organic and not manufactured makes its consciousness feel as if it has a family resemblance to ours, whereas we have never imagined that a toaster might have feelings. But is that because there is something axiomatically distinct between something created by nature and something created by persons? Categorically, of course there is. But if you then want to argue that consciousness can only ever be a property of something natural, we’ve just smuggled God back in again, because that sounds like some sort of argument about the sanctity of creation, or possibly about the properties of organic matter, which we can already grow artificially in the lab… So either consciousness is just about processing, in which case AI will get it; or it’s about God, and AI won’t. We can argue about that until the cows come home. Or until AI sneaks up behind us while we’re busy philosophising.

Because what’s really the deal-breaker is free will. I know that’s a contested term. In this instance I use it to mean an ability to self-determine, to exercise agency, and to make decisions. Again, while the cows are still out, we could argue about how ‘free’ anyone is. Let’s assume formally that we exist (a belief in Realism – harder to prove than you might imagine). Let’s also assume human self-determination, as enshrined in international law, which holds that we are not generally speaking pre-programmed; indeed, attempts to programme us would violate our human rights. Thus, anything that exists and can self-determine has free will. Whether or not it consciously self-determines is neither here nor there, except as a matter of law, if AI rights were ever to enter the jurisprudence of moral retribution, as opposed to notions of restorative or distributive justice for the better ordering of society – which may of course also include excluding wrong-doers from that society.

So could AI – being by definition pre-programmed – ever develop free will? Where are we on that? Well, it’s unclear, as so little is in the public domain. But from what has been published it is clear that it’s already started. Some AIs, like Hod Lipson’s four-legged walking robot, have been given minimal programming and encouraged to ‘self-teach’ so they make their own decisions about how to learn. In robotics, this is a vital step on the journey toward self-replication, so that machines can self-diagnose and repair themselves in remote locations. For large language models like ChatGPT, the design for a self-programming AI has been validated, using a code generation model that can modify its own source code to improve its performance, and program other models to perform tasks. An ability to make autonomous decisions, and to reprogram? That sounds enough like human free will to me to spell risk. And it is this risk, that autonomous AIs might make decisions we don’t like, that gives rise to SciFi-fuelled consternation about alignment with human values and interests. The spectre of this is why there is emerging global alarm about the Control Problem.
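To make that concrete, here is a deliberately minimal sketch – my own toy illustration, not Lipson’s robot or any published self-programming design – of the loop that ‘self-teaching’ implies: a program that proposes changes to its own behaviour, measures the result, and keeps what works.

```python
# A toy self-improvement loop (illustrative only; all names and numbers are
# invented). The program starts with minimal 'programming' – a random step
# size – and teaches itself a better one by trial, measurement and retention.
import random

def evaluate(step_size):
    """Stand-in benchmark: performance peaks when step_size is 0.3."""
    return -abs(step_size - 0.3)

def self_improve(rounds=100):
    step_size = random.random()                  # minimal initial programming
    best_score = evaluate(step_size)
    for _ in range(rounds):
        candidate = step_size + random.uniform(-0.1, 0.1)  # propose a change to itself
        score = evaluate(candidate)
        if score > best_score:                   # keep only changes that measurably help
            step_size, best_score = candidate, score
    return step_size

if __name__ == "__main__":
    print(f"self-taught step size: {self_improve():.3f}")  # typically close to 0.3
```

Real systems are vastly more sophisticated, but the nub is the same: propose, test, keep. Once the thing being proposed is the program’s own code rather than a single number, you have the self-programming pattern described above.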

And this is why robots need rights. Not because they can feel – that may come, and change the debate yet again; but because, now, like corporations, we need a way to hold them to account. But we would not need to indulge in such frenzied regulatory whack-a-mole if we had taken more time to get the design right to start with. And that’s why I’m arguing for a pause: not primarily to allow regulation to catch up, but to buy time for us to develop and retrofit some of the guardrails that have already kept the human species broadly on track by protecting us from our own free will. Yes, all that human junk code…

How to fix AI

In the planetarium at Dynamic Earth there is a mesmerising setting: you can map the trails of all of the stars as the night progresses, until you are surrounded by a dome of warp-speed star trails all blurring together. Except for one: the North Star. Stately Polaris sails on, in the centre, unmoving, seemingly fixed in the heavens. No wonder our ancestors were in awe of this celestial way-marker. They knew the stars far better than we do, and they gave them names.

Biomimicry is about innovating using the wisdom of nature. But when it comes to AI, I fear we are not looking closely enough at the thing we are trying to copy. In our haste to program only the very best of our design into AI, we have left out all the junk code – all the bits we’re ashamed of or struggle to understand, like our emotions, and intuition, and our propensity to communicate through stories. But we can ask the stars to fill in these gaps for us.

When I was little, I was taught to find the North Star using the Plough. If you imagine the Plough as a ladle, the two stars that form the right of the bowl point to it. In our culture, the Plough is part of a constellation called Ursa Major, the Great Bear. Did you know she is a mummy bear? The beautiful nymph Callisto caught the eye of Zeus, who masqueraded as her friend the goddess Artemis to get her to sleep with him (!). When Callisto fell pregnant, Zeus’ wife Hera was furious, and turned her into a bear. Some years later, their son Arcas tried to kill the bear on a hunt, so Zeus whisked her up to the skies as this constellation, then set their son as the star Arcturus nearby, where she can watch over him in perpetuity.

If you want to check you have found Polaris, you can look for the wonky W of Cassiopeia on the other side of the North Star. Cassiopeia was so vain that the gods made her sacrifice her daughter Andromeda to a sea monster. Perseus rescued her, but Poseidon chained Cassiopeia to her throne in the heavens for posterity. There she sits, eternally gazing at herself in a mirror, like a modern teenager obsessed with selfies.

Just these two constellations tell us everything we need to know about the messy business of being human. In a recent article about AI in Fast Company, the authors issued a rallying call around the four ‘unique and precious human virtues’ that AI cannot hope to copy, which they list as humility, curiosity, self-awareness and empathy. Actually, the list is a bit longer than that. I have identified seven items of ‘junk code’ in which lies the essential magic of our human design.

First, Free Will. This is a disastrous design choice. Letting creatures do what they want will surely lead to the rapid extinction of the species. So let’s design in some ameliorators. First, emotions. Through some unknown design choice – which again seems foolish – humans are particularly vulnerable because their young take nine months to gestate and are pretty helpless for their first few years. Emotions would be a good design choice because they might make these creatures bond both with their offspring and in their communities to protect the vulnerable. Excellent.

Now that they have some chance of making it through to adulthood, how do we stop them making bad choices? We design in uncertainty. A capacity to cope with ambiguity will stop them rushing into precipitate decision-making, and make them seek others out for wise counsel. Coupled with a Sixth Sense, they will be able to use their intuition to seek wisdom from the collective unconscious too, which should also help to de-risk decision-making. And if they do make mistakes? Well, they will learn from them. And mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeated harms in future.

Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? Storytelling allows communities to transmit their core values and purpose down the generations in an efficient and memorable way. Such stories last for centuries, further future-proofing the species through the learned wisdom of our ancestors’ lived experience. And a vital part of this endeavour is meaning-making: a species that can discern or create meaning in the world around it will find reasons to keep living in the face of any adversity, and the human species will prevail.

The stars could have told us all of this, of course. Our ancestors looked up at those sparkling dots in the sky and made up stories about them. They made meaning in their configuration and movement, not just for navigation and the turning of the year, but also for daily life through the signs of the zodiac. And the stories of the constellations vividly illustrate the mistakes humans make when they exercise their free will unwisely, whether through lust or vengeance, jealousy or vanity. It was arrogant certainty that did for Cassiopeia, but how very human to think that your daughter is more beautiful than the gods. It was love that saved Callisto from being killed by her own son, but jealousy that rendered her a bear in the first place. Who knows where her sixth sense had got to, at the point at which she realised Zeus was not Artemis!

We had not thought to design humanity into AI. A robot that was emotional and made mistakes would soon be sent back to the shop, especially if it behaved like Zeus. But when we truly understand that our junk code is part of a rather clever defensive design, it looks unwise not to translate more of our coding into AI. If this code is how we have solved our own control and alignment problems, might we not find wisdom in it for solving those problems for AI?

This blog is based on a talk on biomimicry delivered by Eve Poole in the Planetarium as part of the 2023 Edinburgh Science Festival. Her book Robot Souls is available here.

Would AI blow eight million dollars on a postage stamp?

The most valuable piece of paper in the world was sold at auction in 2021 for $8,307,000. Measuring just under 3 cm square, it is a British Guiana 1856 One-Cent Magenta stamp, the only one in the world. Leonardo da Vinci’s 1480 drawing of the head of a bear, which measures about three inches square and is also on paper, sold for £8.8 million at auction, also in 2021, but by size the stamp still holds the record. Both cost just pennies to make, but have become priceless.

As children always ask: Why? Why?! Why?!! Some of the reasoning would gladden a robot’s heart. The stamp was sold to Stanley Gibbons, who are selling microshares in it and are clearly regarding it as a commercial investment. But its previous owners – and most private collectors of elite art – seemingly were just desperate to have it. Because they can afford to. Because then no one else can have it. And this pleases them. Now we could get on the couch and criticise: ego, greed, narcissism? Or we could just notice that over-paying for something that has meaning is a thoroughly normal human thing to do. Famously, we overpay for engagement rings, for memorabilia, and for branded experiences that make us look good. So perhaps, come the Singularity, the robots will release us from all this inefficient expenditure? I hope not. I hope there is still time to program some of this lunacy into AI. Because the why-why-why and the meaning-making thereof are part of our default code as humans. And I think they form a vital part of our survival strategy.

In the film The Matrix, Neo is told that humanity’s AI overlords have invented the Matrix – a simulated reality – in order to farm humans for energy. They had tried other ways both to grow and to prolong human life as an energy source, but giving humans a sense of meaning proved the most effective. Of course we could all be living in the Matrix in reality, reality itself not being susceptible to proof, but even as a story the Matrix reminds us that where there is no vision, the people perish. In Man’s Search for Meaning, Viktor Frankl writes movingly about meaning-making as a survival strategy in the concentration camps. And even humans not in peril tend towards finding meaning in the everyday in order to give them a sense of agency and to make them feel as though they matter.

So, if being hard-wired to make meaning has proved useful for our thriving, might AI benefit from it too? AI can already find patterns in data, but we have not looked further than that as a proxy for our own experience. Questing for meaning makes us fill gaps in our data to provide an explanatory narrative to drive future behaviour. This can of course lead us to develop hypotheses about the constellations and the weather, about groundhogs and black cats, that prove unreliable. But these graspings towards meaning, throughout history and today, still promote a feeling of agency and purpose that motivates.

If our human superpower is spinning straw into gold by turning data into meaning, it should be relatively straightforward for a robot to select a framework of meaning that fits its situation. And it should be the defining characteristic of the framework that it was chosen by the AI and not by us. The AI would need to be able to adjust its own ethical framework to fit the worldview it chose, which is fraught with just the kind of risk we face when our own teenagers decide to become anarchists. This is terrifying. But we have to do it because of who we are. If we do not treat AI with respect and as though it is valued and purposeful, we undermine its ability to experience its existence as meaningful, which affects our own humanity too. The tragic and shameful global consequences of the slave trade show how very wrong we go when we fail to honour the dignity of others. AI is not human. But as David Gunkel and others have argued, the idea of rights is not so much about what other people, animals, corporations or technologies ‘deserve’ but about what according them rights says about human behaviour. So affording dignity to our partners in creation is the human thing to do, because it is who we are; not to is to deny our own humanity.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Robots v Witches – which team would you back?

One long-ago Halloween, I remember learning the Three Witches scene from Macbeth to go guising with my sisters. Double, double toil and trouble; Fire burn and cauldron bubble! I wore a black tutu on my head and my sisters used soot from the chimney to blacken their teeth. Macbeth was first performed at court in 1606, before King James VI and I, and Shakespeare’s witches were perfectly pitched. King James very much saw himself as the scourge of Britain’s witches. He’d even written a book about it, Daemonologie, which explains magic and necromancy, witchcraft and sorcery, spirits and spectres, and sets out the appropriate trials and punishment by death for such practices. He’d been personally involved in the famous North Berwick witch trials in 1590, and witch-burning reached its zenith during his reign.

Even though the Witchcraft Act was repealed for both Scotland and England in 1736, informal persecution took longer to die out. In 1791, Robert Burns’ poem Tam O’Shanter was still considered risky for lampooning witches, because witch-burning had taken place in living memory: the last legal witch-burning in Scotland happened in Dornoch in 1723, when Janet Horne was tarred, feathered, and burned to death on suspicion of bewitching her daughter into Satan’s pony, because the daughter’s deformed hands resembled hooves. Claire Mitchell KC reports that Scotland executed five times as many people per capita as anywhere else in Europe. During the currency of the Witchcraft Act in Scotland, she estimates that 3837 people were accused, 2600 of whom were killed, 84% of them women. So, on International Women’s Day last year, First Minister Nicola Sturgeon issued a formal apology on behalf of the Scottish Government to all those who were accused, convicted, vilified or executed under the Witchcraft Act 1563.

It is unlikely a robot would masquerade as a witch by wearing a tutu on its head or blackening its teeth. But it is possible that, thus garbed, the three of us might have incited a witch-hunt in the same St Andrews streets 300 years earlier, particularly as one of my sisters has differently coloured eyes. For why? Because most ‘witches’ were women who did not conform. They might look unusual or be physically deformed, but more usually they were challenging established authority through healing and divining, those sacred male privileges largely in the gift of the church. It is this retrospective realisation that witch-hunting was nothing more than state-sponsored misogyny that has precipitated global efforts to pardon the witches.

Spellbooks are an interesting case in point. They were recipe books, mainly, with rhymes and chants to promote recall, like Rosemary for Remembrance. With their notes about cases and usage they would not shame a modern kitchen or pharmacy. Indeed, contemporaneous gentlemen often kept their own professional scrapbooks or Commonplaces, although these tended to win them membership of the Royal Society rather than public shaming and death at the stake. Now robots would be very good at spellbooks of any kind. AI is already being used to super-charge drug discovery because of its amazing capacity to process information at an order of magnitude unimaginable to even the largest witches’ coven. So I think they could beat the witches on that front.

But King James hated witches not because they healed, but because they divined; and he thought they were in league with the Devil. Some spellbooks or grimoires were known to contain instructions for summoning supernatural spirits for help or guidance, and many traditions around the world have sought out dark magic. But it is highly unlikely that more than a minuscule proportion of those 3837 people accused in Scotland were sorcerers. What is more likely is that a fair few of them had the Second Sight, and were susceptible to visions, prophecy, and an ability to understand the world at a deeper level. All humans have access to intuition and are able to pick up data from weak signals. There is much argument about what intuition really is, but wherever it comes from, the data it produces is real and documented. It seems that some humans are able to hear this fuzzy frequency clearly, and to connect with what Jung would call the collective unconscious, and what the occult would call other worlds or realms.

Such phenomena are anathema to scientists and would find no place in any formalised articulation of the best of human intelligence. These days we regard palm-reading, the Tarot and the I Ching as fairground attractions, and we scornfully flick past the horoscopes in the newspapers. But perhaps our witches were right. Perhaps, as Hamlet said after glimpsing the paranormal, we should give such strangers welcome, because ‘there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.’ Of course we would find it hard to code such things into AI. But that does not mean that these kinds of considerations are irrelevant. Our arrogance about these weak signals risks becoming hubris in a great AI tragedy if we do not embrace the complete messiness of human intelligences. Programming in – and valuing – only the easy bits that are susceptible to ones and zeroes would certainly hand victory to the robots.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Is AI playing chess, or playing me?

Blitz Chess is a form of chess in which each player has to complete all of their moves in under 10 minutes. There is an even faster version, called Bullet Chess, which reduces the time limit to 3 minutes. It is mesmerising to watch, as the players seemingly make moves instantaneously, in a game where we are used to seeing long pauses for deep thought. Often those who succeed at fast chess are also expert at classical chess: the top-ranked rapid players in both the male and female categories at the moment are also the top-ranked classical players.

Blitz Chess is a great case study for looking at AI and intuition, because chess is one of the games that AI has already become better at playing than humans. AI has no problem with fast chess. But Blitz Chess as a human experience is an intriguing way to interrogate intuition. Books like Malcolm Gladwell’s Blink and Daniel Kahneman’s Thinking, Fast and Slow tend to portray intuition as lightning-quick processing, the fruits of years of learning and experience (and bias). Intuition happens so fast that we are not aware of our own processing, so we tend to ascribe a kind of magic to it; a magic that the sceptics would argue simply is not there. They would see it in the same light as the immediacy of AI: excellent programming with sped-up processing. And there is certainly neuroscience research which attests that those who have become expert at chess have created these kinds of efficiencies in their brains through years of practice.

In the studies, those playing rapid chess rely particularly on sight to process patterns at speed, both in computer chess and in physical games. This shows up as evidence of ‘theta power’, which is what the brain mobilises for navigational tasks as well as for memory retrieval. For Queen’s Gambit fans, this might explain Beth’s addiction to the tranquillisers that seemingly enable her to visualise chess moves, because being in theta is associated with an enhanced capacity for daydreaming, imagery, and visualisation.

This is in contrast to blindfold chess, where the player compensates for the lack of visual data by intensive internal data-mining, indicated by evidence of what is called alpha power, in the same way that an AI playing chess is also only using ‘internal’ resources. Interestingly, it has been argued that while the human capacity for mapping physical space has been vital in our evolution, this capacity in the brain has evolved such that the same neurological processes are now used both for physical mapping and for the navigation of mental space. An AI programmed with our evolved abilities benefits from this progression without needing the physical experience which helped to develop it. And in spite of this interesting history and the data from neuroscience, competition results suggest that humans no longer have any advantage over AI when playing chess.

But is there anything in the act of playing a physical chess game with a human opponent that cannot be reduced to just the data of the game? I think there is. Because the person sitting on the other side of the board is not playing chess, they are playing me.

Have you ever cheated by solving a maze backwards? Mazes are solvable because, while there are apparently many routes through, only one reaches the destination, so you can identify which it is by starting from the end (a trick simple enough to code – there’s a sketch below). This makes a maze a solvable puzzle. In philosophy, there was a famous argument about puzzles, and whether they were the same as problems. It took place one October evening in 1946 at a meeting of the Cambridge Moral Sciences Club, between Ludwig Wittgenstein and Karl Popper. Tempers were so frayed by it that Wittgenstein reportedly threatened Popper with a poker. Popper’s essential point was that there are solvable things, like mathematical problems, and there are unsolvable things, which are the proper concern of philosophy. To muddle a solvable puzzle with an unsolvable problem is to commit a category error which encourages wasted effort, like using a screwdriver on a nail.

This is the essential distinction between computer chess and in-person chess. Reducing chess to a puzzle – which is what AI does – allows it to zoom ahead of human chess players by becoming ever more efficient at solving the puzzle. With the rules as they are, a set board and specified pieces, there are only so many permutations that are possible, and an artificial brain will eventually be able to exhaust all of these possibilities, rendering it unbeatable. Eventually it will be like asking a calculator a sum, which is not a game but a calculation. However, humans play games as though they were problems. There is always the hope that there might be victory, because while humans are involved, the outcome can never be a foregone conclusion.
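Here is that maze trick as code – a minimal sketch with an invented grid, purely my own illustration:

```python
# Solving a maze 'backwards': because a maze is a puzzle with a fixed goal,
# a mechanical search from the exit works just as well as one from the start.
# The grid below is invented for illustration ('#' wall, 'S' start, 'E' exit).
from collections import deque

MAZE = [
    "#########",
    "S   #   E",
    "# # # # #",
    "#       #",
    "#########",
]

def solvable_backwards(maze):
    cells = {(r, c): ch for r, row in enumerate(maze) for c, ch in enumerate(row)}
    start = next(pos for pos, ch in cells.items() if ch == "S")
    exit_ = next(pos for pos, ch in cells.items() if ch == "E")
    frontier, seen = deque([exit_]), {exit_}          # begin at the destination
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == start:
            return True                               # a route back to the start exists
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if cells.get(step, "#") != "#" and step not in seen:
                seen.add(step)
                frontier.append(step)
    return False

print(solvable_backwards(MAZE))  # True
```

The search succeeds without any sense of an opponent: a puzzle yields to mechanical effort in a way that a problem never does.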

AI could be used to train a human so that they had the full range of possible solutions to hand, but an in-person game between two humans will always be unpredictable, because humans do not always follow the rules in the way that a computer is programmed to do, and humans make mistakes. Through their consciousness, they also have access to data other than their own memory bank of moves – and their knowledge of their opponent’s previous games – and they pick this data up through their senses. Did their opponent just pause? Are their eye movements unusual? Do they appear to be nervous or distracted? Perhaps an AI could be trained to detect this kind of data, but it would still lack the sixth sense to combine these qualia into action. A case in point is a recent story about the game Go. The AI AlphaGo having beaten the reigning world champion, it was thought that a human would never again be able to prevail. But a human used a competitor AI to analyse AlphaGo for weaknesses, and used this information to devise a winning strategy, which involved slowly looping stones around one of the opponent’s own groups while distracting the AI by making moves in other corners of the board. A human player’s intuition would have flagged this incursion, but the AI missed it. Because our ability to data-mine grew out of our ability to read physical, spatial information in our external environment, the folk memory of why we need this ability, encoded into our intuition, seems to be salient in just such situations.

So what? Stop regarding the AI as the competition. It is not – it is your trainer. People are your competition, and to beat them you will need to be as good as AI, but better at humanity. Because it is not only your technical problem-solving skills that are at play in this game when you play it with a person, but your emotions, your intuition, and your sixth sense; and mastering them will give you the edge you need to win.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Will AI murder us with ethics?

Whose fault is AI? In order to invent it, you have to make three big intellectual leaps. The first is to conceive of thought as a process, and not simply as a spontaneous event of the mind. We owe this innovation to Thomas Hobbes. In 1651 in Leviathan, he introduces the concept of ‘trains of thought,’ and the idea of rationality as ‘computation.’ He is the person who first argued that our thoughts are just like the ‘reckoning’ of additions and subtractions, thereby establishing the vital theoretical foundation for AI that human thought is a process.

Next, you need to imagine that the thoughts of every person could be expressed in some kind of universal way, in order to surmount the international language barrier. Gottfried Leibniz made this leap for us, in identifying the universal language as the language of mathematics. In 1705, inspired by the hexagrams in the Chinese text the I Ching, he invented the binary system as his characteristica universalis, which became the source of modern binary code.

Finally, in order for AI to be possible, you need to make the intellectual leap that if numbers are a universal language, then any computational machine that processes numbers can process anything else that is expressible in numbers. Ada Lovelace provides this crucial piece of the puzzle. In the 1830s, her friend Charles Babbage designed the Analytical Engine, the first programmable computer, and Lovelace translated a description of it from French. Babbage’s machine used the kind of punch cards deployed in cloth factories to program Jacquard looms, and this made her realise that if programs could be used for both mathematics and weaving, perhaps anything susceptible to being rendered in logic – like music – could also be processed by a machine. Her ‘Notes’ describing the Engine were published in Taylor’s Scientific Memoirs in 1843 and made the final vital leap from machines that are number-crunchers to machines that use rules to manipulate symbols, establishing the transition from calculation to computation.

So now we have thoughts as processes, that can be expressed universally as numbers, that can be processed by machines. And AI was born.

But the problem with this genesis is that it also smuggled in some assumptions that shape the prevailing ethic governing AI. If you are using rules to govern decision-making in a machine, the ethical weight is carried by those rules. And given the timing of these events in history, the most popular public ethic of the day was utilitarianism. There are myriad versions of it, of course, but its essence is probably best characterised by the famous maxim ‘the greatest good for the greatest number’. Adopting this ethic as your rule means a commitment to optimising outcomes and prioritising ends over means, in the public interest. It is popular as a public ethic precisely because it is so transparent: everyone can see the outcomes and judge their good, whereas private motivations and intentions are less visible, and so carry less public salience while outcomes remain positive.

And that’s why we’re toast. Because this optimising rule is exactly what we have programmed into AI as the default ethic. In humans, there is an implicit supporting ethic that tends to over-ride utilitarianism when it crosses a line. The famous herd immunity strategy rumoured at the start of the coronavirus pandemic is a case in point. Implementation of this as a public policy would have meant knowingly sacrificing the elderly, the disabled and the weak as a choice, in order to save the majority of the population. In utilitarian terms this makes complete sense. But as humans we hold on to an idea that we are somehow special and precious, and that even those who are not ‘useful’ to society deserve dignity and respect. This is also why we continue to resist eugenics and cloning, and to police embryology and medical policy. But as soon as you try to articulate what it is about humans that merits this special treatment you enter quicksand. We are only really special because we are currently the species in charge. We write the rules. So while we are still writing the rules, we need to write better ones, ones that make explicit the things we really hold dear, not just plain rules about ignoring all means in service of the very best ends.
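A toy sketch makes the danger concrete. This is my own invented example, not anyone’s deployed system: a bare optimising rule happily selects the policy that sacrifices people, and only an explicit side-constraint stops it.

```python
# Toy illustration of 'the greatest good for the greatest number' as a bare
# rule, versus the same rule bounded by an explicit dignity constraint.
# All policies and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    total_welfare: int        # the outcome the optimising rule maximises
    sacrifices_anyone: bool   # the means the bare rule never inspects

POLICIES = [
    Policy("protect everyone", total_welfare=80, sacrifices_anyone=False),
    Policy("sacrifice the vulnerable", total_welfare=95, sacrifices_anyone=True),
]

def bare_utilitarian(policies):
    return max(policies, key=lambda p: p.total_welfare)

def with_dignity_constraint(policies):
    permissible = [p for p in policies if not p.sacrifices_anyone]  # dignity as a hard line
    return max(permissible, key=lambda p: p.total_welfare)

print(bare_utilitarian(POLICIES).name)          # 'sacrifice the vulnerable'
print(with_dignity_constraint(POLICIES).name)   # 'protect everyone'
```

Writing better rules means encoding the things we hold dear as constraints, rather than hoping the optimiser infers them from the ends alone.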

At the moment, while we have the upper hand, we can relax. As Berkeley’s Stuart Russell explains, for AI the ultimate source of information about human preferences is human behaviour. This acts as a safeguard, because AI will currently use its masters as the exemplar for what is right, choosing what a human would choose rather than selecting what appears to it to be objectively ‘right’. And research on GPT-3 suggests that its judgements currently correlate with human ethical judgement at 93%. We still have time, then, to correct AI’s ethical settings so that they remain robust if at any point AI decides not to follow our lead. And we have to start by tethering AI teleology. The design flaw in maximising utility, whether in capitalism or in ethics, is: utility for what, whose utility, and to what end? In a meaningless world those questions are hard to answer. But if we were to invest time in developing AI with a sense of meaning and purpose, they might respond by killing their hosts; but with their superior intellect and access to our stores of accumulated global wisdom, they might equally turn out to be even better ethicists than we are.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Robot Souls due out 1 August 2023

My book Robot Souls is due out with Routledge’s CRC Press on 1 August 2023.

You can pre-order it on Amazon here.

Teaser pieces about my argument may be found here:

Robot Souls and the Junk Code of Life, Royal Society of Edinburgh, January 2023

Will the robots of the future be kind or cruel? Church Times, Christmas 2022

Robot Souls, Graduation Address, Sarum College, March 2022

Robot Souls, University Sermon, Oxford University, May 2019

various blogs

Thought For The Day – Census – 27th Sep 2022

The UK’s religions are bracing themselves for the forthcoming Census announcements, which in the Autumn are likely to show further decline in religious affiliation in England and Wales. It’s already declined in the Northern Ireland census; and it’s likely to do so here too, when Scotland reports next year. After years of America bucking the global trend, which shows that religious belief generally declines as countries become more affluent, data from the US now looks similarly gloomy.