August 2020

Robot Dread

I sense a morbid fear behind our catastrophizing about androids, which I reckon is to do with a loss of autonomy. It’s true that for periods in history tribes and peoples have assumed they had no autonomy, life being driven by the fates or by a predetermined design or creator, so this could be a particularly modern malady in an era that luxuriates in free will. But concern about the creep of cyborgism through the increasing use of technology in and around our bodies seems to produce a frisson of existential dread that I have been struggling to diagnose. Technology has always attracted its naysayers, from the early saboteurs to the Luddites and the Swing Rioters, and all the movements that opposed the Industrial Revolution, but this feels less about livelihoods and more about personhood.

Raymond Cattell famously identified two types or modes of human intelligence: crystallized intelligence and fluid intelligence. His model is of course not strictly correct, but it’s a useful lens through which to view this particular problem. His categorization distinguishes between the facts and experiences that are crystallized as the sum total of our learned knowledge – our databases, if you like – and the fluid intelligence that abstracts from those databases to solve novel problems or make intuitive leaps. In trying to zero in on our scruple about AI, I think we are groping towards an understanding of the latter as something that is particularly human and which we would hold to be special. As more and more species are found to be able to use tools to solve problems, we tend to obsess about this vanishing ground of particularity: squeezed between Google and the great apes, we are left telling the story of Einstein’s beam of light to reassure ourselves that we’re still somehow neurobiologically distinctive.

I think our dread is an existential fear of being programmed, and of waking up to find we have been someone’s puppet all along. All those times when we felt we were wrestling with our consciences or weighing up arguments, the rules were already driving us towards a decision we had believed was our very own, not preordained in someone else’s playbook. If we have no agency, we have no freedom, except in an illusory and manipulative way.

But while I appreciate this fear, I suspect it is misplaced. It is of course theoretically possible that AI will be able to programme us in the future. In many ways it is doing so already, though for now with our willing consent, when it comes to our health and life management. And it’s this word ‘consent’ that is crucial. Do we know enough about AI to be actively giving our consent, globally, in an informed way? And are we given the opportunity to consent? I think not, which is why the dread is useful, if it galvanizes us into interest and action. We should be asking sharp questions about programming and controls, about the ownership of code, and about which red lines we as a species think it is not yet safe to cross. Does our driverless car choose to sacrifice the granny or the toddler, and do we consent to that coding? Are we happy that our every online reaction and transaction has become the database for AI, embedding as objective fact the sum total of all our flawed and subjective human interactions? Are we happy with a global Intellectual Property regime that both confers and protects the ownership of AI technology on corporations, without sufficient regulation or accountability to nation states?
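To make the granny-or-toddler question concrete, here is a deliberately crude and entirely hypothetical sketch in Python – every category, weight and function name below is invented for illustration, and drawn from no real vehicle’s software – of how such an ethical trade-off could end up hard-coded:

```python
from dataclasses import dataclass

@dataclass
class Pedestrian:
    category: str      # e.g. "toddler", "adult", "elderly" (invented labels)
    distance_m: float  # distance from the car, in metres

# An invented weighting table: whoever writes these numbers is making a
# moral judgement on behalf of every passenger and pedestrian.
HARM_WEIGHT = {"toddler": 1.0, "adult": 0.6, "elderly": 0.3}

def choose_swerve_target(unavoidable: list[Pedestrian]) -> Pedestrian:
    """Return the pedestrian whose harm this rule weights least."""
    return min(unavoidable, key=lambda p: HARM_WEIGHT[p.category])

granny = Pedestrian("elderly", 4.0)
toddler = Pedestrian("toddler", 5.0)
print(choose_swerve_target([granny, toddler]).category)  # prints "elderly"
```

The point is not the toy algorithm but the lookup table: someone’s moral judgement, frozen into code and applied at machine speed, with no mechanism for the rest of us to consent or dissent.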

It may be that one day we will find out we were programmed after all. But while we still rejoice in our free will, we need to exercise it, and not let this unspecified dread confine us to the sofa, stupefied by SciFi. Meanwhile, we still don’t really know enough about Cattell’s fluid intelligence, which does feel distinctive. Could we do more in our schools to develop this muscle, rather than maintaining our narrow focus on A*s in the STEM subjects that the robots have already nailed? It might make us more human, and able to programme ever more human robots too…

Robots don’t have childhoods

I’m sitting on the beach at North Berwick, with clear views out to the Bass Rock and May Isle, watching the children play. My daughter digs a deep hole, then runs off to find hermit crabs in the rock pools. Nearby, a young boy is buried up to the neck while his sister decorates his sarcophagus with shells. On the shore, a toddler stands transfixed by a washed-up jellyfish, while two older girls struggle to manoeuvre a boat in the shallows, trying to avoid the splashing boys playing swim-tig.

We’re under the benign shadow of North Berwick Law, site of a Bronze Age hill fort, so it’s likely this holiday postcard scene has changed little since this part of Scotland was first settled, thousands of years ago, when those first children dug holes, found crabs, and frolicked in the sea. I feel a wave of sadness, thinking forward in time. Will this beach still play host to the children of the far distant future, or will we have designed out childhood by then? Robots don’t have childhoods, because they don’t need them. Humans still do, but I wonder how much time you’ve spent trying to figure out why.

At the moment we need a childhood to grow physically, and to develop mentally towards adulthood and independence from our parents. All robots are adult already, so they don’t need this rather awkward and inefficient phase: just a quick test, then the on button. As a species, humans are ridiculously slow to mature. This is so obviously problematic when compared with other species that there must be an evolutionary reason for keeping this comparative design flaw. It seems that developing a brain of human complexity simply takes time, hence the slow process.

But if we could decode the brain, could we not short-circuit the process by cloning adults and programming them directly? This of course is the ultimate design goal of AI, and we’re familiar with it from a whole host of SciFi movies: whether we keep humans as well, or simply use the secrets of their brains to evolve beyond them, remains to be seen.

It might seem obscene, in these halcyon days of the UN Convention on the Rights of the Child, to empty my beach by indulging in a thought experiment about the future of childhood. But given that our technology has already overtaken our capacity to agree global ethical red lines in so many areas, we need to confront this spectre in order to work out not only why it feels anathema to us, but what we might do in response.

What are these children doing on the beach? All parents who have spent interminable hours in dilapidated playgrounds will have had the same thought, wondering whether there might be an easier way. They are playing, of course, but with Darwin’s eyes we can also see that they are learning. They are learning about their physicality and their preferences; they are learning about other people and about relationships; they are learning about the natural world and about the world’s rules. So beware the child with no scabs on their knees: they have not yet learned about taking risks. We’re quite quixotic about childhood. Most of us loathed our own, but it seems we will fight to our dying breath to protect the childhoods of those we love, so our children still tuck a baby tooth under the pillow, and write to Santa Claus.

We’re in a transition. AI can already do many of the cognitive things that humans can do, more quickly, accurately and cheaply, and it’s improving all the time. This creates a dilemma. Because the future is not yet here, we’re still competing hard in the previous race, and on its terms. While most parents know that Google has already overtaken their children, and that there is more to education than information, the current social frenzy is still about doing your utmost to get your kids into the Gradgrind School of Facts. But the shadow of the future is already here, so we know that today’s highly prized selective crammers, with a zeal for STEM and an ability to churn out volumes of A stars and Oxbridge places, have maybe 10-15 years to rake it in before their product becomes obsolete. We do need human coders in the interim, to programme AI for us, but once we’ve aced machine learning they too will become defunct. Meanwhile the crammers could save costs and boost performance by replacing their STEM teachers with so-called intelligent tutors, because AI-led learning already outperforms traditional learning in many settings.

Yet there is in traditional education a core curriculum which has not yet been improved by AI; moreover, it may not be susceptible to AI in the way that STEM undoubtedly is. As I’ve argued elsewhere, I think the key to our humanity isn’t to be found in the clean lines of rationality that would delight any programmer, but in our junk code: the mistakes, the regrets, the dreams, the grief, the envy, the fear, and the joy. We learn these kinds of things very messily on the beaches and in the playgrounds of our childhoods; but at school we learn them particularly in the arts and the humanities: through the myths and stories about human waywardness; the mind-stretching disciplines of philosophy; and the creativity and exuberance of music, drama and art. In all of these, we learn the fundamentally frustrating qualitative nature of argument and criticism, where there are no clear-cut yes/no answers and it is nigh on impossible to score 100%. (By the way, it’s not that we can’t learn these things in STEM subjects; it’s just that they are not taught that way at elementary level.)

I’d wager we learn these junk code things with the particular help of the emotions, because of the role the amygdala plays in memory and in survival. And if that’s the case, it’s the reason we need to stop designing out the bad stuff, like not coming first, or fluffing your lines in front of your peers. We might learn joy from a well-done sum, but it takes failure to teach us shame or embarrassment or chagrin or schadenfreude. Kids need to wallow in the absolute limits of being human in order to feel these limits for themselves: this is a vital prerequisite for the rule of law, as well as for human ingenuity and invention. Robots have to act within the bounds of the theoretically known because they are victims of their programming. If only to design better robots, we need to keep pushing at these bounds in order to extend them: no paradigm shift was ever created without this very human recalcitrance.

So while we decide whether childhood is a state we will want to protect once we have the know-how to avoid it, we should relish its very essence, by fighting back against the prevailing policy that prioritises STEM. Indeed we should reverse this trend while there is still time, and make the arts and the humanities both compulsory and subsidised in all formal education. And maybe we should risk teaching philosophy to those kids on the beach: versions of the trolley problem are a daily reality for them anyway, so they might be best placed to help us solve it.