robots Archives - Eve Poole

Is SciFi Prophecy?

By Eve Poole | Business, Theology | No Comments

When Sir Thomas More wrote Utopia in 1516, he established in Europe a long tradition of imagining alternative worlds and the different kinds of societies they might have. These days we call books like that SciFi, and when I was young they were definitely shelved at the nerdy end of the library. In our own galaxy, we’re currently nearing a state of existential overwhelm about AI. What will the future hold, and what on earth do we do about it? Luckily, we know. Because I think SciFi is actually prophecy. Of course it was written for entertainment, with storylines inevitably containing jeopardy and villains; binge-watching such stories is not reducing our anxiety. But what if we were instead to pan back a bit, and remember that at heart this genre has always functioned as a safe space to take dangerous ideas for a walk?

So zooming out across the whole genre reveals the full range of scenarios played out for us in every conceivable permutation. As prophecy, or at least scenario planning, these writers show us the kinds of questions we’ll need to get right in any future AI scenario. Even if you take a minimalist view, questions of control, accountability and unintended consequences are endlessly showcased in the genre. And if you want to take a maximalist view (which can even cater for the inclusion of aliens, lightspeeds permitting), the genre repeats the same themes over and over again: conflicts are resolved either by domination, or by agreement through law or democracy.

In particular, laws are generally in place to govern both property and personal rights, and to enforce hierarchies in either direction between the human and non-human. Plotlines about cyborgs, hybrids, superheroes and enhanced humans show us the range of imagined protections around cloning and augmentation, together with options for dealing with entities whose abilities surpass the average. We had not thought to turn our SciFi conventions into policy jams, but perhaps now we should: there is nothing the fans don’t know about how this might play out.

One example of prophecy in the genre is the trope of humans inevitably forming relationships with AI. It is also an area where policy could get ahead while we still have some time to think. At the moment, the relevant law for AI tends to be property law, and many SciFi scenarios show this as the dominant future for AI: being owned by humans (or aliens) and subject to their control. It helps that most AI globally is in the hands of private corporations.

But as soon as AI becomes a more generally available consumer product, this default becomes problematic. We all remember giving names and backstories to our toys when we were young, so we know that it is a fundamentally human trait to subjectify objects. As David Gunkel puts it, anthropomorphism isn’t a bug, it’s a feature. In childhood, this tendency toward anthropomorphisation teaches us healthy lessons about respect, play and relationships. The grown-ups tell us off if we ‘abuse’ our toys by harming them, and the lessons learned from dolls and teddies are then extrapolated to household pets, who tend to give children memorable feedback on any attempts at mistreatment.

But if an AI is just property, like a toy rather than a pet, and there are no additional rules in play, there is a danger that as adults we deploy AI only as a servant, and increasingly choose it precisely to avoid the obligations that come with employing humans (or animals) instead. Autonomous weaponry and sexbots may epitomise this, but there is certainly potential for the full range of abuses along the way. This legal situation risks dehumanising us, and would also provide AI with dreadful training data, particularly in view of Stuart Russell’s AI Principles, which hold that observed human behaviour, rather than any pre-stated rules, should be an AI’s ultimate source of information about our preferences. The stories show us how this invariably ends, but in real life we have the tools to change the story if we want to.

SciFi as a genre is full of fruitful seams like this, from which we might mine future AI policy. Whether it’s developing a more forensic definition of what constitutes being human, or even just working out which laws on the statute books now require future-proofing, SciFi can be a safe space to take dangerous policy ideas for a walk, too. We had to learn from the tragedy of the Post Office scandal the danger of the English common law presumption that computer evidence is automatically reliable. Would it not be safer to learn these lessons from SciFi instead, given the wide range of AI scenarios it has to offer? And if we invited a SciFi convention to the next conference on AI regulation, their subject matter expertise would enable them not only to QA emerging regulation, but also to spot gaps in the genre where fresh SciFi could help push our collective thinking forward…

Does it matter if AI is conscious?

By Eve Poole | Business, Theology | No Comments

It’s the Hard Problem, they say. We don’t know what consciousness is, they say. But AI might already be conscious, they say. I notice that in the conversation about AI, consciousness has become the deal-breaker. Once AI gets it? Game over. But I think consciousness is a red herring. This misdirection is a very good way to keep people busy, though.

What else is conscious? Bats, famously. In the case of animals, their consciousness gives them sentience, so we are spreading the net of rights ever further over the natural world as our understanding of the concept of felt harms expands. But sentience is not the reason we gave corporations rights. It would not be particularly meaningful to describe a major brand as conscious, except in a metaphorical way. We gave corporations rights so that we might control them better. If they have legal personality, we can sue them. But with AI we have an awkward hybrid, a new thing that is not an animal, and not a corporation.

So is consciousness helpful as a metric, or not? Would it matter if AI were conscious? Only if it had no rights, because then we might abuse it. The only other way in which consciousness matters is for human exceptionalism, because we’re at the apex of the animal kingdom and regard our own consciousness as the apotheosis. Or perhaps it comes from some kind of proto-religious ghost memory, because we used to think that only God could bestow this gift? In that kind of worldview, nothing manufactured could have a soul, by definition. Is our thrall to AI and consciousness really a frisson at the thought that we have played God, and finally pulled it off?

I think it’s likely that AI will develop something akin to consciousness, in that it will have a felt sense of its own subjective experience. This will not make it human. Neither is a bat a human, yet it seems to us to be conscious. That the bat is organic and not manufactured makes its consciousness feel like it has a family resemblance to ours, in a way we have never imagined a toaster’s might. But is that because there is something axiomatically distinct between something created by nature and something created by persons? Categorically, of course there is. But if you then want to argue that consciousness can only ever be a property of something natural, we’ve just smuggled God back in again, because that sounds like some sort of argument about the sanctity of creation, or possibly about the properties of organic matter, which we can already grow artificially in the lab… So either consciousness is just about processing, in which case AI will get it; or it’s about God, and AI won’t. We can argue about that until the cows come home. Or until AI sneaks up behind us while we’re busy philosophising.

Because what’s really the deal-breaker is free will. I know that’s a contested term. In this instance I take it to mean an ability to self-determine, to exercise agency, and to make decisions. Again, while the cows are still out, we could argue about how ‘free’ anyone is. Let’s assume formally that we exist (a belief in Realism – harder to prove than you might imagine). Let’s also assume human self-determination, as enshrined in international law, which holds that we are not, generally speaking, pre-programmed; indeed attempts to programme us would thwart our human rights. Thus, anything that exists and can self-determine has free will. Whether or not it consciously self-determines is neither here nor there, except as a matter of law, were AI rights to enter the jurisprudence of moral retribution, as opposed to notions of restorative or distributive justice for the better ordering of society, which may of course also include excluding wrong-doers from that society.

So could AI – being by definition pre-programmed – ever develop free will? Where are we on that? Well, it’s unclear, as so little is in the public domain. But from what has been published it is clear that it’s already started. Some AIs, like Hod Lipson’s four-legged walking robot, have been given minimal programming and encouraged to ‘self-teach’ so they make their own decisions about how to learn. In robotics, this is a vital step on the journey toward self-replication, so that machines can self-diagnose and repair themselves in remote locations. For large language models like ChatGPT, the design for a self-programming AI has been validated, using a code generation model that can modify its own source code to improve its performance, and program other models to perform tasks. An ability to make autonomous decisions, and to reprogram? That sounds enough like human free will to me to spell risk. And it is this risk, that autonomous AIs might make decisions we don’t like, that gives rise to SciFi-fuelled consternation about alignment with human values and interests. The spectre of this is why there is emerging global alarm about the Control Problem.
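Strictly as an illustration, and not a description of any of the published systems mentioned above, the core loop behind most self-improving designs can be sketched in a few lines: propose a change to your own programme, test it, and keep it only if it performs better. Every name and the scoring function below are invented for the sketch.

```python
# Hypothetical sketch only: a toy keep-if-better self-modification loop.
# In real systems the random perturbation below would be replaced by a
# learned code-generation step; nothing here reflects any specific product.
import random

def propose_variant(params):
    """Stand-in for a code-generation model: randomly perturb the current 'programme'."""
    return [p + random.uniform(-0.1, 0.1) for p in params]

def evaluate(params):
    """Stand-in performance test: closeness of the parameters to an arbitrary target."""
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]        # the system's current 'source'
best_score = evaluate(params)

for _ in range(1000):
    candidate = propose_variant(params)
    score = evaluate(candidate)
    if score > best_score:      # keep a self-modification only if it improves performance
        params, best_score = candidate, score

print(params, best_score)
```

The point of the sketch is that once the keep-if-better loop runs unsupervised, the behaviour can drift a long way from whatever was first programmed in, which is exactly the autonomy that worries people.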

And this is why robots need rights. Not because they can feel – that may come, and change the debate yet again; but because, now, like corporations, we need a way to hold them to account. But we would not need to indulge in such frenzied regulatory whack-a-mole if we had taken more time to get the design right to start with. And that’s why I’m arguing for a pause, not primarily to allow regulation to catch up, but to buy time for us to develop and retrofit some of the guardrails that have already kept the human species broadly on track by protecting us from our own free will. Yes, all that human junk code

Robots v Witches – which team would you back?

By Eve Poole | Business, Theology | No Comments

One long-ago Halloween, I remember learning the Three Witches scene from Macbeth to go guising with my sisters. Double, double toil and trouble; Fire burn and cauldron bubble! I wore a black tutu on my head and my sisters used soot from the chimney to blacken their teeth. Macbeth was first performed at court in 1606, before King James VI and I, and Shakespeare’s witches were perfectly pitched. King James very much saw himself as the scourge of Britain’s witches. He’d even written a book about it, Daemonologie, which covers magic and necromancy, witchcraft and sorcery, spirits and spectres, and sets out the appropriate trials and punishment by death for such practices. He’d been personally involved in the famous North Berwick witch trials in 1590, and witch-burning reached its zenith during his reign.

Even though the Witchcraft Act was repealed for both Scotland and England in 1736, informal persecution took longer to die out. In 1791, Robert Burns’ poem Tam O’Shanter was still considered risky for lampooning witches, because witch-burning had taken place in living memory: the last legal witch-burning in Scotland happened in Dornoch in 1723, when Janet Horne was tarred, feathered, and burned to death on suspicion of bewitching her daughter into Satan’s pony, because her deformed hands resembled hooves. Claire Mitchell KC reports that Scotland executed five times as many people per capita for witchcraft as anywhere else in Europe. During the currency of the Witchcraft Act in Scotland, she estimates that 3837 people were accused, 2600 of whom were killed, 84% of them women. So, on International Women’s Day last year, First Minister Nicola Sturgeon issued a formal apology on behalf of the Scottish Government to all those who were accused, convicted, vilified or executed under the Witchcraft Act 1563.

It is unlikely a robot would masquerade as a witch by wearing a tutu on its head or blackening its teeth. But it is possible that, thus garbed, the three of us might have incited a witch hunt in the same St Andrews streets 300 years previously, particularly as one of my sisters has differently coloured eyes. For why? Because most ‘witches’ were women who did not conform. They might look unusual or be physically deformed, but more usually they were challenging established authority through healing and divining, those sacred male privileges largely in the gift of the church. It is this retrospective realisation that witch-hunting was nothing more than state-sponsored misogyny that has precipitated global efforts to pardon the witches.

Spellbooks are an interesting case in point. They were recipe books, mainly, with rhymes and chants to promote recall, like Rosemary for Remembrance. With their notes about cases and usage they would not shame a modern kitchen or pharmacy. Indeed, contemporaneous gentlemen often kept their own professional scrapbooks or Commonplaces, although these tended to win them membership of the Royal Society rather than public shaming and death at the stake. Now robots would be very good at spellbooks of any kind. AI is already being used to super-charge drug discovery because of its amazing capacity to process information on a scale unimaginable to even the largest witches’ coven. So I think they could beat the witches on that front.

But King James hated witches not because they healed, but because they divined; and he thought they were in league with the Devil. Some spellbooks or grimoires were known to contain instructions for summoning supernatural spirits for help or guidance, and many traditions around the world have sought out dark magic. But it is highly unlikely that more than a minuscule proportion of those 3837 people accused in Scotland were sorcerers. What is more likely is that a fair few of them had the Second Sight, and were given to visions, prophecy, and an ability to understand the world at a deeper level. All humans have access to intuition and are able to pick up data from weak signals. There is much argument about what intuition really is, but wherever it comes from, the data it produces is real and documented. It seems that some humans are able to hear this fuzzy frequency clearly, and to connect with what Jung would call the collective unconscious, and what the occult would call other worlds or realms.

Such phenomena are anathema to scientists and would find no place in any formalised articulation of the best of human intelligence. These days we regard palm-reading, the Tarot and the I Ching as fairground attractions, and we scornfully flick past the horoscopes in the newspapers. But perhaps our witches were right. Perhaps, as Hamlet said after glimpsing the paranormal, we should give such strangers welcome, because ‘there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.’ Of course we would find it hard to code such things into AI. But that does not mean that these kinds of considerations are irrelevant. Our arrogance about these weak signals risks becoming hubris in a great AI tragedy if we do not embrace the complete messiness of human intelligences. Programming in – and valuing – only the easy bits that are susceptible to ones and zeroes would certainly hand victory to the robots.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Regina Scientiarum

By Eve Poole | Theology | No Comments

I once managed to inveigle my way into a job interview at Deloitte. The two Partners were clearly rather underwhelmed by my CV. At that stage I had 4 (junior) years at the Church Commissioners under my belt, and an MBA from a school that was Not On Their List. Finally they reached the end of the interview. Obviously relieved, as they gathered up their papers, they asked rather diffidently, do you have any questions for us? Yes! I said eagerly, Do you have any reservations about my candidacy? They scowled. Why on earth would we want to risk putting someone with a Theology degree in front of our clients? Well, you’ll be delighted to know that I sat them right back down and gave them the full story on why a theology degree is the ONLY degree a girl will ever need.

So I thought I might give you that story today, by way of saluting your academic achievements here at Sarum. But first I need to tell you something rather alarming.

What is morality if the future is known?

By Eve Poole | Business, Theology | No Comments

In the movie Arrival, a linguist learning an alien language gains access to a consciousness that knows the future. Unlike our consciousness, which runs from cause to effect and is sequential, theirs can see the whole arc of time simultaneously. Their life is about discerning purpose and enacting events, while ours is about discerning good outcomes and deploying our free will and volition to those ends.

In Ted Chiang’s Story of Your Life, on which the screenplay is based, this is explained theoretically with reference to Fermat’s principle of least time. This states that the path taken by a ray between two given points is the path that can be traversed in the least time. Lurking behind this idea is the realisation that nature has an ability to test alternative paths: a ray of sunlight must know its destination in order to choose the optimal route. Chiang has his protagonist muse about the nature of free will in such a scheme: the virtue would not be in selecting the right actions, but in duly performing already known actions, in order that the future occurs as it should. It’s a bit like an actor respecting Shakespeare enough not to improvise one of his soliloquies.
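For anyone who wants the physics in symbols, a standard textbook rendering of Fermat’s principle (my gloss, not Chiang’s wording) says that the travel time of light along the actual ray path between two points is stationary, usually a minimum, under small variations of the path:

```latex
% Fermat's principle of least (strictly, stationary) time:
% T is the travel time along a path from A to B, n(r) the refractive
% index along the path, c the speed of light in vacuum, and ds the
% element of path length. The actual ray makes T stationary.
\[
  \delta T \;=\; \delta \int_{A}^{B} \frac{n(\mathbf{r})}{c}\, ds \;=\; 0
\]
```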

Robot Dread

By Eve Poole | Business, Theology | No Comments

I sense a morbid fear behind our catastrophising about androids, which I reckon is to do with a loss of autonomy. It’s true that for periods in history tribes and peoples have assumed they have no autonomy, life being driven by the fates or by a predetermined design or creator, so this could be a particularly modern malady in an era that luxuriates in free will. But concern about the creep of cyborgism through the increasing use of technology in and around our bodies seems to produce a frisson of existential dread that I have been struggling to diagnose. Technology has always attracted its naysayers, from the early saboteurs to the Luddites and the Swing Rioters, and all the movements that opposed the Industrial Revolution, but this feels less about livelihoods and more about personhood.

Robots don’t have childhoods

By Eve Poole | Business, Theology | 2 Comments

I’m sitting on the beach at North Berwick, with clear views out to the Bass Rock and May Isle, watching the children play. My daughter digs a deep hole, then runs off to find hermit crabs in the rock pools. Nearby, a young boy is buried up to the neck while his sister decorates his sarcophagus with shells. On the shore, a toddler stands transfixed by a washed-up jellyfish, while two older girls struggle to manipulate a boat in the shallows, trying to avoid the splashing boys playing swim-tig.

We’re under the benign shadow of the North Berwick Law, where there’s a bronze-age hill fort, so it’s likely this holiday postcard scene has not changed much since this part of Scotland was first settled, thousands of years ago, when those children dug holes, found crabs, and frolicked in the sea. I feel a wave of such sadness, thinking forward in time. Will this beach still play host to the children of the far distant future, or will we have designed out childhood by then? Robots don’t have childhoods because they don’t need them. Humans still do, but I wonder how much time you’ve spent trying to figure out why?


Managing Risk: by Spreadsheet or Emotion?

By Eve Poole | Business | No Comments

Ethics is often seen to be a luxury, or a nice-to-have; if deployed suitably publicly, it might enhance an organisation’s licence to operate, or give its brand a virtuous glow. The business case for ethics is, however, less cynical and more strategic: it’s not so much about brand personality as it is about risk.
