My friend is writing a book on robot sex. He’s a professor of ethics, so that’s his day job. And jobs seem to be what it’s all about: widespread panic that the robots are going to put us all out of business.
It seems a while ago now that Erik Brynjolfsson and Andrew McAfee’s 2014 book The Second Machine Age burst onto the scene. There are many who argue that we are worrying unnecessarily, including those level-headed people at Wired magazine. But the entrepreneur Elon Musk recently said that he thought Artificial Intelligence posed a greater threat to our survival than North Korea. And, amid rumours that some of Silicon Valley’s finest are buying up remote islands in preparation for an AI apocalypse, UK parliamentarians have set up an All-Party Parliamentary Group on Artificial Intelligence to look into the matter, with the Bishop of Oxford as its Treasurer.
Whether or not – and when – the robots will take over is a moot point. But what would most help us navigate this contested terrain is a clearer sense of what it actually means to be human. We can still argue the toss about whose morality should be coded into weaponised drones or driverless cars, but first we need a better sense of where in this alien landscape the mines are actually hidden. One of the ways we have always worried about the future is through science fiction, so that should logically be the best place to start. But the genre is now such a crowded space that it tends to diverge rather than converge, which makes it hard to distil. Narrowing the field down to just the robot literature would help, but is there an easier way?
Well, it has been the job of the wisdom traditions for the past 5,000 years to worry about humanity on our behalf, so it is likely that the religions are already clustering around the squares on the map where danger lies. They traditionally worry about the same kinds of questions: dignity, relationships, procreation, and bad behaviour. We could become endlessly distracted by the specific differences between the faiths and other philosophies of life and their particular take on myriad issues. Or, we could gaze on them corporately with soft eyes, to see what their common preoccupations reveal. These are likely to be the areas on which we should concentrate our efforts, because thousands of years of wisdom are flashing Code Red.
As an example, many debates about AI concern the very basic question of what makes us distinctively human. If we ignore the wisdom traditions and look to the secular narrative for an answer, we encounter a logic that renders media hysteria over what Wired calls the ‘robopocalypse’ rather puzzling. As I have been arguing since 2007, if Richard Dawkins is right, and we are just a supremely well-designed and dogged set of cells, we should really welcome the robots with open arms. They are our children. We have made them, and blessed them with as much intelligence as we know how to program. As we get better at neurobiology, genetics and reproductive science, we should be able to make ever more accurate copies of ourselves, with all the flaws designed out. These robots will of course take our jobs, and we should rejoice: they are our finest achievement and the perfect legacy – an evolution to end all evolution, because the machines will be able to learn and replicate more quickly and more perfectly than any human ever could.
But what if the unease we seem to feel about the robots is because there is something inalienable about being human? What if there is something fundamental to our humanity that is not replicable, because the coding for it is not accessible to us? In the wisdom traditions and in philosophy this essence has been called the human spirit or the human soul. It’s the one design ingredient that does not seem to be susceptible to scientific analysis, because it’s immaterial in form.
But our modern states and legal systems were built on an Enlightenment ethic that tended to avoid baking religion into public policy. So while our religious freedoms are protected in law, it is less clear to whom exactly those laws apply. How human do you need to be? If in the future we made ‘robots’ who were biologically identical to our species, would they attract human rights too? Or, if we think our humanity is more to do with our minds than our bodies, and we were to engineer robots who could match (indeed outpace) us cognitively and emotionally, should they not also be as protected in law as we are?
It seems to me that the only way to tease out these rather fine if currently – thankfully – hypothetical distinctions is to find out more about what the wisdom traditions mean by the soul. It may end up being the one thing that will keep our jobs safe. So let’s hope that while the Bishop of Oxford is being Treasurer to the All-Party Group on AI, it is our souls that he is treasuring for us.