What is morality if the future is known?

September 6, 2020 | Business, Theology

In the movie Arrival, a linguist learning an alien language gains access to a consciousness that knows the future. Unlike our consciousness, which is sequential and runs from cause to effect, theirs can see the whole arc of time simultaneously. Their life is about discerning purpose and enacting events, while ours is about discerning good outcomes and deploying our free will and volition to those ends.

In Ted Chiang’s Story of Your Life, on which the screenplay is based, this is explained theoretically with reference to Fermat’s principle of least time, which states that the path a ray of light takes between two given points is the one it can traverse in the least time. Lurking behind this idea is the realisation that nature has an ability to test alternative paths: a ray of sunlight must know its destination in order to choose the optimal route. Chiang has his protagonist muse about the nature of free will in such a scheme: the virtue would lie not in selecting the right actions, but in duly performing already known actions, so that the future occurs as it should. It’s a bit like an actor respecting Shakespeare enough not to improvise one of his soliloquies.
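For readers who want the physics stated plainly, Fermat’s principle is usually expressed as a stationarity condition on travel time (a minimal formulation, assuming a medium with refractive index n(r) and vacuum light speed c):

\[ \delta T \;=\; \delta \int_{A}^{B} \frac{n(\mathbf{r})}{c}\, ds \;=\; 0 \]

Of all the candidate routes between A and B, the ray follows one along which the travel time T is stationary – in the simple refraction cases Chiang alludes to, a minimum.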

While this thought experiment made me wonder about the ethics curriculum in alien schools – all virtue ethics and Stoicism? – it also made me pause to notice that in many ways we do know the future. While we may not have access to the narrative of our own individual futures, when it comes to AI we do have some inkling about the most likely futures we will face as a species. It has been the signal mission of the Science Fiction genre to set out these futures for our inspection. If we know our futures, what is the moral task?

In his book Life 3.0, Max Tegmark identifies 12 ‘AI Aftermath Scenarios’, ranging from Libertarian Utopia to Self-destruction. These variously describe the possible relationships between AI and humans, including ‘Zookeeper’, wherein AI retains humans essentially as pets. These 12 scenarios alone would give us fertile ground for devising ethical curricula, but to give you a flavour of the task, here are some starter questions. If Zookeeper were our future, we would not be in charge. How could we act now to ensure that our keepers were benign? Cue John Rawls, and the end of our complacency about leaving the coding up to other people and to the free markets.

The problem with life’s current business logics is that they are irredeemably Enlightened and therefore utilitarian. ‘If x then y’ is about – obviously – optimising outcomes, which would be bad news for us as a species if we were no longer in charge. While we are, we are kept safe, because behind this po-faced scientific rationalism lies a deep-seated commitment to the dignity of the human person: I will not harvest one of your kidneys, euthanise your ailing granny, or sterilise your disabled daughter. But this logic has not been transparently translated into computer programs: Asimov’s fictional Laws of Robotics famously forbid harm to humans, yet that prohibition has been swiftly ignored in the case of military drones and autonomous weapons. The philosopher John Rawls’ Veil of Ignorance is a way of thinking about the construction of society in which the designers must assume they might be the losers as well as the winners, on the view that designing in fairness for paupers as well as princes would result in a just society. Should we therefore now use our authorial privilege to bake our supremacy into the basic global codes for AI, and more precisely into our systems of law?

Tegmark’s scenarios suggest myriad other approaches and dilemmas. For instance, they include various Utopias in which humans and AI peacefully co-exist because property rights have been abolished and a guaranteed income introduced. Does this possible future shed light on our current debates about Universal Basic Income?

Yet another scenario sees humans gracefully exiting in favour of their superior AI offspring. If we decided that we actively wanted to be bettered, how would we accelerate discoveries about consciousness to devise ever-better androids? In any scenario, our ethics and values must needs be reviewed, which has implications not only for public policy but also for law and for education.

But regardless of which thought experiment about the future you choose to run, the bottom line is this: if we know the range of likely outcomes, why are we not planning more thoroughly for them?
