I sense a morbid fear behind our catastrophizing about androids, which I reckon is to do with a loss of autonomy. It’s true that for periods in history tribes and peoples have assumed they had no autonomy, life being driven by the fates or by a predetermined design or creator, so this could be a particularly modern malady in an era that luxuriates in free will. But concern about the creep of cyborgism through the increasing use of technology in and around our bodies seems to produce a frisson of existential dread that I have been struggling to diagnose. Technology has always attracted its naysayers, from the early saboteurs to the Luddites and the Swing Rioters, and all the movements that opposed the Industrial Revolution, but this feels less about livelihoods and more about personhood.

Raymond Cattell famously identified two types or modes of human intelligence: crystallized intelligence and fluid intelligence. His model is of course not strictly correct, but it’s a useful lens through which to view this particular problem. His categorization distinguishes between the facts and experiences that are crystallized as the sum total of our learned knowledge – our databases, if you like – and the fluid intelligence that abstracts from those databases to solve novel problems or make intuitive leaps. In trying to zero in on our scruples about AI, I think we are groping towards an understanding of the latter as something that is particularly human and which we would hold to be special. As more and more species are found to be able to use tools to solve problems, we tend to obsess about this vanishing ground of particularity, which, between Google and the great apes, leaves us telling the story of Einstein’s beam of light to reassure ourselves that we’re still somehow neurobiologically distinctive.

I think our dread is an existential fear of being programmed, and of waking up to find we have been someone’s puppet all along – that all those times when we felt we were wrestling with our consciences or weighing up difficult arguments, the rules were already driving us towards a decision we had believed was our very own, when it was in fact preordained in someone else’s playbook. If we have no agency, we have no freedom, except in an illusory and manipulated way.

But while I appreciate this fear, I suspect it is misplaced. It is of course theoretically possible that AI will be able to programme us in the future. In many ways it is doing so already, though for now with our willing consent, in matters of health and life management. And it’s this word ‘consent’ that is crucial. Do we know enough about AI to be actively giving our consent, globally, in an informed way? And are we given the opportunity to consent? I think not, which is why the dread is useful, if it galvanizes us into interest and action. We should be asking sharp questions about programming and controls, about the ownership of code, and about which red lines we as a species think it is not yet safe to cross. Does our driverless car choose to sacrifice the granny or the toddler, and do we consent to that coding? Are we happy that our every online reaction and transaction has become the database for AI, embedding as objective fact the sum total of all our flawed, subjective human interactions? Are we happy with a global Intellectual Property regime that both confers and protects corporate ownership of AI technology, without sufficient regulation or accountability to nation states?

It may be that one day we will find out we were programmed after all. But while we still rejoice in our free will, we need to exercise it, and not let this unspecified dread leave us stupefied on the sofa watching sci-fi. Meanwhile, we still don’t really know enough about Cattell’s fluid intelligence, which does feel distinctive. Could we do more in our schools to develop this muscle, rather than maintaining our narrow focus on A*s in the STEM subjects that the robots have already nailed? It might make us more human, and able to programme ever more human robots too…