When Sir Thomas More wrote Utopia in 1516, he established in Europe a long tradition of imagining alternative worlds and the different kinds of societies they might have. These days we call books like that SciFi, and when I was young they were definitely shelved at the nerdy end of the library. In our own galaxy, we’re currently nearing a state of existential overwhelm about AI. What will the future hold, and what on earth do we do about it? Luckily, we know, because I think SciFi is actually prophecy. Of course it was written for entertainment, with storylines inevitably containing jeopardy and villains, and binge-watching such stories does nothing to reduce our anxiety. But what if we were instead to pan back a bit, and remember that at heart this genre has always functioned as a safe space to take dangerous ideas for a walk?
So zooming out across the whole genre reveals the full range of scenarios played out for us in every conceivable permutation. Read as prophecy, or at least as scenario planning, these stories show us the kinds of questions we’ll need to get right in any future AI scenario. Even on a minimalist view, questions of control, accountability and unintended consequences are endlessly showcased in the genre. And on a maximalist view (which can even cater for the inclusion of aliens, lightspeeds permitting), the genre repeats the same themes over and over again: conflicts are resolved either by domination, or by agreement through law or democracy.
In particular, laws are generally in place to govern both property and personal rights, and to enforce hierarchies in either direction between the human and the non-human. Plotlines about cyborgs, hybrids, superheroes and enhanced humans show us a range of imagined protections around cloning and augmentation, together with options for dealing with entities whose abilities surpass the average. We had not thought to turn our SciFi conventions into policy jams, but perhaps now we should: there is nothing the fans don’t know about how this might play out.
One example of prophecy in the genre is the trope that we will inevitably form relationships with AI, and it’s an area where policy could get ahead while we still have some time to think. At the moment, the relevant law for AI tends to be property law, and many SciFi scenarios show this as the dominant future: AI owned by humans (or aliens) and subject to their control. It helps that most AI globally is in the hands of private corporations.
But as soon as AI becomes a more generally available consumer product, this default becomes problematic. We all remember giving names and backstories to our toys when we were young, so we know that it is a fundamentally human trait to subjectify objects. As David Gunkel puts it, anthropomorphism isn’t a bug, it’s a feature. In childhood, this tendency to anthropomorphise teaches us healthy lessons about respect, play and relationships. The grown-ups tell us off if we ‘abuse’ our toys by harming them, and the lessons learned from dolls and teddies are then extrapolated to household pets, who tend to give children memorable feedback on any attempts at mistreatment.
But if an AI is just property like a toy, not a pet, with no additional rules in play, there is a danger that as adults we deploy AI only as a servant, and increasingly use AI precisely because we want to avoid the obligations that come with employing humans (or animals) instead. Autonomous weaponry and sexbots may epitomise this, but there is certainly potential for the full range of abuses along the way. This legal situation risks dehumanising us, and would also provide AI with dreadful training data, particularly in view of Stuart Russell’s principles for beneficial AI, which hold that an AI should treat observed human behaviour, rather than any pre-stated rules, as its ultimate source of information about human preferences. The stories show us how this invariably ends, but in real life we have the tools to change the story if we want to.
SciFi as a genre is full of fruitful seams like this, from which we might mine future AI policy. Whether it’s developing a more forensic definition of what constitutes being human, or simply working out which laws now on the statute books require future-proofing, SciFi can be a safe space to take dangerous policy ideas for a walk, too. It took the tragedy of the Post Office scandal to teach us the error of the English common law presumption that computer evidence is reliable. Would it not be safer to learn such lessons from SciFi instead, given the wide range of AI scenarios it has to offer? And if we invited a SciFi convention to the next conference on AI regulation, the fans’ subject matter expertise would enable them not only to QA emerging regulation, but also to spot gaps in the genre where fresh SciFi could help push our collective thinking forward…