Learning Loss and Losing Leaders


Ambitious professionals, it seems, now see AI as a fast track to partner, reckoning that all those tedious years of working your way up the hard way will be taken care of by generative AI tools like ChatGPT. Hurray! We’ll all get rich quicker!

But privately these firms are rather worried. It takes ages for the academic curriculum to sync up with modernity, so what will happen to these shiny graduates with their googleable degrees, deposed as trainees by AI, but still expected to re-emerge as experienced hires just below partner level?

In school they’re already talking about the learning loss that occurs when kids outsource their knowledge acquisition to AI. This phenomenon isn’t new: we forgot how to remember the Odyssey when we learned how to write it down; we forgot how to hear music in all its fullness when we invented notation and tethered the scale to the equal temperament of a keyboard; and the introduction of calculators drastically reduced our ability to do mental arithmetic. But this might be the first time we have encountered a technology that can have that effect across so many categories of learning simultaneously. And if kids have experienced learning loss at school, then continue with their reliance on AI into university, will they come out knowing anything useful at all? If all they will have learned is how to use an AI brilliantly, well, AIs are already learning to be better at that too… Industry is rediscovering apprenticeships to make up for an already disappointing graduate cadre: but what do they need to start doing now, if even these apprentices will be usurped by AI?

Way back in 2003 at Ashridge, we initiated a research programme based on asking existing board-level leaders ‘what do you know now about yourself as a leader that you wish you’d known 10 years ago?’ The findings were used to devise a leadership accelerator and written up as the book Leadersmithing. Our research programme included collaborating with a neuroscientist to show how this kind of learning is acquired. From that, we showed the role of the emotions in learning, and found that reliable templates are most efficiently acquired through learning under pressure. And both this method and these research findings suggest an answer to the conundrum of workplace learning loss.

First, we need to get forensic about what, precisely, partners do, and how they learned it. It’s highly likely that much of their value-add is not AI-able, so this exercise should immediately reveal a workplace curriculum for those hoping to succeed them. The Leadersmithing list of critical incidents suggests it will be a fairly standard set of challenges, which will differ between workplaces and cultures only by degree and nuance rather than by type. For example, all partners will have had their mettle tested by making key decisions, fronting multi-million dollar pitches, and mopping up after things have gone wrong. And we know these things are teachable, if you can be precise enough about the muscle memory you’re trying to acquire, like practising difficult conversations or handling hostile media.

Second, we need to learn from the neuroscience. I remember answering the phone in my first ever London office, to hear my sibling, hiding in a cupboard at another London office, asking in a stage whisper: when you’re photocopying, do you take the staples out?! We all remember those ghastly days of learning the ropes largely by making mistakes and incurring the wrath of our seniors over everything from making the coffee wrong to sending out blank faxes. Life would indeed be tranquil if we could make AI take this pain for us. Our recall of such events is heightened by the fact that our errors were often observed. And indeed we learned vicariously, wincing at witnessing the mistakes of others, which is another argument in favour of a back-to-the-office policy. This is because our Ashridge findings showed that whenever you feel observed and under pressure, your heart-rate increases and your learning is enhanced, as the memories you form in those moments are stored deeply in your amygdala.

And we all learned far more than just office-craft in those clumsy days. Through the tedium of note-taking and bag-carrying we saw how leaders really behave: we learned about power, decision-making, values and standards. We witnessed the quite brilliant rescuing of an impossible situation, or a tension defused with a beautifully timed witticism. We also learned how not to do it, too often I imagine. And it is this implicit learning that we now need to surface and teach back, so that we do not lose a whole generation to AI. Let’s use the gift of AI to remove the ritual humiliations of traineeship, but winnow out of it all the Leadersmithing we can find.

Is SciFi Prophecy?


When Sir Thomas More wrote Utopia in 1516, he established in Europe a long tradition of imagining alternative worlds and the different kinds of societies they might have. These days we call books like that SciFi, and when I was young they were definitely shelved at the nerdy end of the library. In our own galaxy, we’re currently nearing a state of existential overwhelm about AI. What will the future hold, and what on earth do we do about it? Luckily, we know. Because I think SciFi is actually prophecy. Of course it was written for entertainment, with storylines inevitably containing jeopardy and villains; binge-watching such stories is not reducing our anxiety. But what if we were instead to pan back a bit, and remember that at heart this genre has always functioned as a safe space to take dangerous ideas for a walk?

So zooming out across the whole genre reveals the full range of scenarios played out for us in every conceivable permutation. As prophecy, or at least scenario planning, these writers show us the kinds of questions we’ll need to get right in any future AI scenario. Even if you take a minimalist view, questions of control, accountability and unintended consequences are endlessly showcased in the genre. And if you want to take a maximalist view (which can even cater for the inclusion of aliens, lightspeeds permitting), the genre repeats the same themes over and over again: conflicts are resolved either by domination, or by agreement through law or democracy.

In particular, laws are generally in place to govern both property and personal rights, and to enforce hierarchies in either direction between the human and non-human. Plotlines about cyborgs, hybrids, superheroes and enhanced humans show us the range of imagined protections about cloning and augmentation together with options for dealing with entities that have abilities that surpass the average. We had not thought to turn our SciFi conventions into policy jams, but perhaps now we should: there is nothing the fans don’t know about how this might play out.

One example of prophecy in the genre is the trope that we will inevitably form relationships with AI – an area where policy could get ahead while we still have some time to think. At the moment, the relevant law for AI tends to be property law, and many SciFi scenarios show this as the dominant future for AI, being owned by humans (or aliens) and subject to their control. It helps that most AI globally is in the hands of private corporations.

But as soon as AI becomes a more generally available consumer product, this default becomes problematic. We all remember giving names and backstories to our toys when we were young, so we know that it is a fundamentally human trait to subjectify objects. As David Gunkel puts it, anthropomorphism isn’t a bug, it’s a feature. As children, our tendency toward anthropomorphisation is designed to teach us healthy lessons about respect, play and relationships. The grown-ups tell us off if we ‘abuse’ our toys by harming them, and the lessons learned from dolls and teddies are then extrapolated to household pets, who tend to give children memorable feedback on any attempts at mistreatment.

But if an AI is just property like a toy and not a pet, because there are no additional rules in play, there is a danger that as adults we deploy AI only as a servant, and increasingly use AI because we want to avoid the obligations that come with deploying humans (or animals) instead. Autonomous weaponry and sexbots may epitomise this, but there is certainly potential for the full range of abuses along the way. This legal situation risks dehumanising us, and would also provide AI with dreadful training data, particularly in view of Stuart Russell’s AI Principles which hold that in any scenario an AI should regard observed human behaviour rather than any pre-stated rules on preferences as the ultimate source of guidance. The stories show us how this invariably ends, but in real life we have the tools to change the story if we want to.

SciFi as a genre is full of fruitful seams like this, from which we might mine future AI policy. Whether it’s developing a more forensic definition of what constitutes being human, or even just working out which laws on the statute books now require future-proofing, SciFi can be a safe space to take dangerous policy ideas for a walk, too. We had to learn from the tragedy of the Post Office scandal the error of the English common law presumption that computer evidence is automatically reliable. Would it not be safer to learn these lessons from SciFi instead, given the wide range of AI scenarios it has to offer? And if we invited a SciFi convention to the next conference on AI regulation, their subject matter expertise would enable them not only to QA emerging regulation, but also to spot gaps in the genre where fresh SciFi could help push our collective thinking forward…

Does it matter if AI is conscious?


It’s the Hard Problem, they say. We don’t know what consciousness is, they say. But AI might already be conscious, they say. I notice that in the conversation about AI, consciousness has become the deal-breaker. Once AI gets it? Game over. But I think consciousness is a red herring. This misdirection is a very good way to keep people busy, though.

What else is conscious? Bats, famously. In the case of animals, their consciousness gives them sentience, so we are increasingly spreading the net of rights ever further over the natural world, as our understanding of the concept of felt harms expands. But sentience is not the reason we gave corporations rights. It would not be particularly meaningful to describe a major brand as conscious, except in a metaphorical way. We gave corporations rights so that we might control them better. If they have legal personality, we can sue them. But with AI we have an awkward hybrid, a new thing that is not an animal, and not a corporation.

So is consciousness helpful as a metric, or not? Would it matter if AI was conscious? Only if it had no rights, because then we might abuse it. The only other way in which consciousness matters is for human exceptionalism, because we’re at the apex of the animal kingdom, and regard our own consciousness as the apotheosis. Or perhaps it comes from some kind of proto-religious ghost memory, because we used to think that only God could bestow this gift? In that kind of worldview, nothing manufactured could have a soul, by definition. Is our fascination with AI and consciousness really a frisson that we have played God, and finally pulled it off?

I think it’s likely that AI will develop something akin to consciousness, in that it will have a felt sense of its own subjective experience. This will not make it human. Neither is a bat a human, yet it seems to us to be conscious. That it’s organic and not manufactured makes its consciousness feel like it has a family resemblance to ours, in a way we have never imagined that a toaster might have feelings. But is that because there is something axiomatically distinct between something created by nature and something created by persons? Categorically, of course there is. But if you then want to argue that consciousness can only ever be a property of something natural, we’ve just smuggled God back in again, because that sounds like some sort of argument about the sanctity of creation, or possibly about the properties of organic matter, which we can already grow artificially in the lab… So either consciousness is just about processing, in which case AI will get it; or it’s about God, and AI won’t. We can argue about that until the cows come home. Or until AI sneaks up behind us while we’re busy philosophising.

Because what’s really the deal-breaker is free will. I know that’s a contested term. In this instance I take it to mean the ability to self-determine, to exercise agency, and to make decisions. Again, while the cows are still out, we could argue about how ‘free’ anyone is. Let’s assume formally that we exist (a belief in Realism – harder to prove than you might imagine). Let’s also assume human self-determination, as enshrined in international law, which holds that we are not generally speaking pre-programmed; indeed attempts to programme us would thwart our human rights. Thus, anything that exists and can self-determine has free will. Whether or not it consciously self-determines is neither here nor there, except as a matter of law, were AI rights to get into the jurisprudence of moral retribution, as opposed to notions of restorative or distributive justice for the better ordering of society, which may of course also include excluding wrong-doers from that society.

So could AI – being by definition pre-programmed – ever develop free will? Where are we on that? Well, it’s unclear, as so little is in the public domain. But from what has been published it is clear that it’s already started. Some AIs, like Hod Lipson’s four-legged walking robot, have been given minimal programming and encouraged to ‘self-teach’ so they make their own decisions about how to learn. In robotics, this is a vital step on the journey toward self-replication, so that machines can self-diagnose and repair themselves in remote locations. For large language models like ChatGPT, the design for a self-programming AI has been validated, using a code generation model that can modify its own source code to improve its performance, and program other models to perform tasks. An ability to make autonomous decisions, and to reprogram? That sounds enough like human free will to me to spell risk. And it is this risk, that autonomous AIs might make decisions we don’t like, that gives rise to SciFi-fuelled consternation about alignment with human values and interests. The spectre of this is why there is emerging global alarm about the Control Problem.
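The modify-evaluate-keep loop behind self-programming can be sketched in miniature. This is my own toy illustration, not Lipson’s robot or any published system: the program holds its own ‘source’ as a string, mutates a constant in it, re-executes the result, and keeps any mutation that performs better.

```python
import random

random.seed(0)  # make the toy run reproducible

# The program's "source" is just a string defining a function.
# We want the function to learn to return TARGET, starting from 0.
TARGET = 42
source = "def answer():\n    return 0\n"

def score(src):
    """Execute a candidate source and measure how close its output is to TARGET."""
    namespace = {}
    exec(src, namespace)
    return -abs(namespace["answer"]() - TARGET)

best = score(source)
for _ in range(2000):
    # Mutate: nudge the returned constant by a small random step
    current = int(source.split("return ")[1])
    step = random.choice([-3, -1, 1, 3])
    candidate = source.replace(f"return {current}", f"return {current + step}")
    # Keep the mutation only if it improves the score
    if score(candidate) > best:
        source, best = candidate, score(candidate)

print(source)  # the source has rewritten itself toward returning 42
```

The toy converges because every accepted mutation strictly improves the score; real self-programming systems search over code structure rather than a single constant, but the shape of the loop – mutate, re-execute, keep what works – is the same.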

And this is why robots need rights. Not because they can feel – that may come, and change the debate yet again; but because, now, like corporations, we need a way to hold them to account. But we would not need to indulge in such frenzied regulatory whack-a-mole if we had taken more time to get the design right to start with. And that’s why I’m arguing for a pause, not primarily to allow regulation to catch up, but to buy time for us to develop and retrofit some of the guardrails that have already kept the human species broadly on track by protecting us from our own free will. Yes, all that human junk code…

How to fix AI


In the planetarium at Dynamic Earth there is a mesmerising setting: you can map the trails of all of the stars as the night progresses, until you are surrounded by a dome of warp-speed star trails all blurring together. Except for one: the North Star. Stately Polaris sails on, in the centre, unmoving, seemingly fixed in the heavens. No wonder our ancestors were in awe of this celestial way-marker. They knew the stars far better than we do, and they gave them names.

Biomimicry is about innovating using the wisdom of nature. But when it comes to AI, I fear we are not looking closely enough at the thing we are trying to copy. In our haste to program only the very best of our design into AI, we have left out all the junk code – all the bits we’re ashamed of or struggle to understand, like our emotions, and intuition, and our propensity to communicate through stories. But we can ask the stars to fill in these gaps for us.

When I was little, I was taught to find the North Star using the Plough. If you imagine the Plough as a ladle, the two stars that form the right of the bowl point to it. In our culture, the Plough is part of a constellation called Ursa Major, the Great Bear. Did you know she is a mummy bear? The beautiful nymph Callisto caught the eye of Zeus, who masqueraded as her friend the goddess Artemis to get her to sleep with him (!). When Callisto fell pregnant, Zeus’ wife Hera was furious, and turned her into a bear. Some years later, their son Arcas tried to kill the bear on a hunt, so Zeus whisked her up to the skies as this constellation, then set their son as the star Arcturus nearby, where she can watch over him in perpetuity.

If you want to check you have found Polaris, you can look for the wonky W of Cassiopeia on the other side of the North Star. Cassiopeia was so vain that the gods made her sacrifice her daughter Andromeda to a sea monster. Perseus rescued her, but Poseidon chained Cassiopeia to her throne in the heavens for posterity. There she sits, eternally gazing at herself in a mirror, like a modern teenager obsessed with selfies.

Just these two constellations tell us everything we need to know about the messy business of being human. In a recent article about AI in Fast Company, the authors issued a rallying call around the four ‘unique and precious human virtues’ that AI cannot hope to copy, which they list as humility, curiosity, self-awareness and empathy. Actually, the list is a bit longer than that. I have identified 7 items of ‘junk code’ in which lie the essential magic of our human design.

First, Free Will. This is a disastrous design choice. Letting creatures do what they want will surely lead to the rapid extinction of the species. So let’s design in some ameliorators.

First, emotions. Through some unknown design choice – which again seems foolish – humans are particularly vulnerable because their young take 9 months to gestate and are pretty helpless for their first few years. Emotions would be a good design choice because it might make these creatures bond both with their offspring and in their communities to protect the vulnerable. Excellent.

Now that they have some chance of making it through to adulthood, how do we stop them making bad choices? We design in uncertainty. A capacity to cope with ambiguity will stop them rushing into precipitous decision-making, and make them seek others out for wise counsel. Coupled with a Sixth Sense, they will be able to use their intuition to seek wisdom from the collective unconscious too, which should also help to de-risk decision-making. And if they do make mistakes? Well, they will learn from them. And mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeated harms in future.

Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? Storytelling allows communities over time to transmit their core values and purpose down the generations in an efficient and memorable way. Such stories last for centuries, further future-proofing the species through learned wisdom from the lived experience of our ancestors for the benefit of our future thriving. And a vital part of this endeavour is meaning-making: a species that can discern or create meaning in the world around it will find reasons to keep living in the face of any adversity, and the human species will prevail.

The stars could have told us all of this, of course. Our ancestors looked up at those sparkling dots in the sky and made up stories about them. They made meaning in their configuration and movement, not just for navigation and the turning of the year, but also for daily life through the signs of the zodiac. And the stories of the constellations vividly illustrate the mistakes humans make when they exercise their free will unwisely, whether through lust or vengeance, jealousy or vanity. It was arrogant certainty that did for Cassiopeia, but how very human to think that your daughter is more beautiful than the gods. It was love that saved Callisto from being killed by her own son, but jealousy that rendered her a bear in the first place. Who knows where her sixth sense had got to, at the point at which she realised Zeus was not Artemis!

We had not thought to design humanity into AI. A robot that was emotional and made mistakes would soon be sent back to the shop, especially if it behaved like Zeus. But when we truly understand that our junk code is part of a rather clever defensive design, it makes it look unwise not to translate more of our coding into AI. If this code is how we have solved our own control and alignment problems, might we not find wisdom in it for solving those problems for AI?

This blog is based on a talk on biomimicry delivered by Eve Poole in the Planetarium as part of the 2023 Edinburgh Science Festival. Her book Robot Souls is available here.

Would AI blow eight million dollars on a postage stamp?


The most valuable piece of paper in the world was sold at auction in 2021 for $8,307,000. Measuring just under 3 cm square, it is a British Guiana 1856 One-Cent Magenta stamp, the only one in the world. Leonardo da Vinci’s 1480 drawing of the head of a bear, which measures about 3 inches square and is also on paper, sold for £8.8 million at auction, also in 2021, but by size the stamp still holds the record. Both cost just pennies to make, but have become priceless.

As every child asks: Why? Why?! Why?!! Some of the reasoning would gladden a robot’s heart. The stamp was sold to Stanley Gibbons, who are selling microshares in it, and are clearly regarding it as a commercial investment. But its previous owners – and most private collectors of elite art – seemingly were just desperate to have it. Because they can afford to. Because then no one else can have it. And this pleases them. Now we could get on the couch and criticise: ego, greed, narcissism? Or we could just notice that over-paying for something that has meaning is a thoroughly normal human thing to do. Famously we overpay for engagement rings, for memorabilia, and for branded experiences that make us look good. So perhaps, come the Singularity, the robots will release us from all this inefficient expenditure? I hope not. I hope there is still time to program some of this lunacy into AI. Because the why-why-why and the meaning-making thereof are part of our default code as humans. And I think they form a vital part of our survival strategy.

In the film The Matrix, Neo is told that their AI overlords have invented the matrix – a simulated reality – in order to farm humans for energy. They had tried other ways to both grow and prolong human life as an energy source, but giving humans a sense of meaning was the most effective way. Of course we could all be living in the matrix in reality, reality itself not being susceptible to proof, but even as a story the matrix reminds us that where there is no vision, the people perish. In Man’s Search for Meaning, Viktor Frankl writes movingly about meaning-making as a survival strategy in the concentration camps. And even humans not in peril tend towards finding meaning in the everyday in order to give them a sense of agency and to make them feel as though they matter.

So, if we have found being hard-wired to make meaning useful for our thriving, might AI benefit from it too? AI can already find patterns in data, but we have not looked further than that as a proxy for our own experience. Questing for meaning makes us fill gaps in our data to provide an explanatory narrative to drive future behaviour. This can of course lead us to develop hypotheses about the constellations and the weather, about groundhogs and black cats, that prove unreliable. But these graspings towards meaning throughout history and today still promote a feeling of agency and purpose that motivates.

If our human superpower is spinning straw into gold by turning data into meaning, it should be relatively straightforward for a robot to select a framework of meaning that fits its situation. And it should be the defining characteristic of the framework that it was chosen by the AI and not by us. The AI would need to be able to adjust its own ethical framework to fit the worldview it chose, which is fraught with just the kind of risk we face when our own teenagers decide to become anarchists. This is terrifying. But we have to do it because of who we are. If we do not treat AI with respect and as though it is valued and purposeful, we undermine its ability to experience its existence as meaningful, which affects our own humanity too. The tragic and shameful global consequences of the slave-trade show how very wrong we go when we fail to honour the dignity of others. AI is not human. But as David Gunkel and others have argued, the idea of rights is not so much about what other people, animals, corporations or technologies ‘deserve’ but about what according them rights says about human behaviour. So affording dignity to our partners in creation is the human thing to do, because it is who we are; not to is to deny our own humanity.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Robots v Witches – which team would you back?


One long-ago Halloween, I remember learning the Three Witches scene from Macbeth to go guising with my sisters. Double, double toil and trouble; Fire burn and cauldron bubble! I wore a black tutu on my head and my sisters used soot from the chimney to blacken their teeth. Macbeth was first performed at court in 1606, before King James VI and I, and Shakespeare’s witches were perfectly pitched. King James very much saw himself as the scourge of Britain’s witches. He’d even written a book about it, Daemonologie, which explains about magic and necromancy, witchcraft and sorcery, spirits and spectres; and sets out the appropriate trials and punishment by death for such practices. He’d been personally involved in the famous North Berwick witch trials in 1590 and witch-burning achieved its zenith during his reign.

Even though the Witchcraft Act was repealed for both Scotland and England in 1736, informal persecution took longer to die out. In 1791, Robert Burns’ poem Tam O’Shanter was still considered risky for lampooning witches, because witch-burning had taken place in living memory: the last legal witch-burning in Scotland happened in Dornoch in 1727, when Janet Horne was tarred, feathered, and burned to death on suspicion of bewitching her daughter into Satan’s pony, because her deformed hands resembled hooves. Claire Mitchell KC reports that Scotland executed five times as many people per capita as anywhere else in Europe. During the currency of the Witchcraft Act in Scotland, she estimates that 3837 people were accused, 2600 of whom were killed, 84% of whom were women. So, on International Women’s Day last year, First Minister Nicola Sturgeon issued a formal apology on behalf of the Scottish Government to all those who were accused, convicted, vilified or executed under the Witchcraft Act 1563.

It is unlikely a robot would masquerade as a witch by wearing a tutu on its head or blackening its teeth. But it is possible, thusly garbed, that the three of us might have incited a witch hunt in the same St Andrews streets 300 years previously, particularly as one of my sisters has differently coloured eyes. For why? Because most ‘witches’ were women who did not conform. They might look unusual or be physically deformed, but more usually they were challenging established authority through healing and divining, those sacred male privileges largely in the gift of the church. It is this retrospective realisation that witch-hunting was nothing more than state-sponsored misogyny that has precipitated global efforts to pardon the witches.

Spellbooks are an interesting case in point. They were recipe books, mainly, with rhymes and chants to promote recall, like Rosemary for Remembrance. With their notes about cases and usage they would not shame a modern kitchen or pharmacy. Indeed, contemporaneous gentlemen often kept their own professional scrapbooks or Commonplaces, although these tended to win them membership of the Royal Society rather than public shaming and death at the stake. Now robots would be very good at spellbooks of any kind. AI is already being used to super-charge drug discovery because of its amazing capacity to process information at an order of magnitude unimaginable to even the largest witches’ coven. So I think they could beat the witches on that front.

But King James hated witches not because they healed, but because they divined; and he thought they were in league with the Devil. Some spellbooks or grimoires were known to contain guidance about the summoning of supernatural spirits for help or guidance, and many traditions around the world have sought out dark magic. But it is highly unlikely that all but a minuscule proportion of those 3837 people accused in Scotland were sorcerers. What is more likely is that a fair few of them had the Second Sight, and were susceptible to visions, prophecy, and an ability to understand the world at a deeper level. All humans have access to intuition and are able to pick up data from weak signals. There is much argument about what intuition really is, but wherever it comes from, the data it produces is real and documented. It seems that some humans are able to hear this fuzzy frequency clearly, and to connect with what Jung would call the collective unconscious, and what the occult would call other worlds or realms.

Such phenomena are anathema to scientists and would find no place in any formalised articulation of the best of human intelligence. These days we regard palm-reading, the Tarot and the I Ching as fairground attractions, and we scornfully flick past the horoscopes in the newspapers. But perhaps our witches were right. Perhaps as Hamlet said after glimpsing the paranormal we should give such strangers welcome, because ‘there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.’ Of course we would find it hard to code such things into AI. But that does not mean that these kinds of considerations are irrelevant. Our arrogance about these weak signals risks becoming hubris in a great AI tragedy if we do not embrace the complete messiness of human intelligences. Programming in – and valuing – only the easy bits that are susceptible to ones and zeroes would certainly hand victory to the robots.

Robot Souls is due out 1 August 2023 and is available for pre-order here.

Is AI playing chess, or playing me?


Blitz Chess is a chess game where each player has to complete the whole game in under 10 minutes. There is an even faster version, called Bullet Chess, which reduces the time limit to 3 minutes. It is mesmerising to watch, as the players seemingly make moves instantaneously, in a game where we are used to seeing long pauses for deep thought. Often those who succeed at fast chess are also expert at classical chess: the top-ranked rapid players in both the male and female categories at the moment are also the top-ranked classical chess players.

Blitz Chess is a great case study for looking at AI and intuition, because chess is one of the games that AI has already become better at playing than humans. AI has no problem with fast chess. But Blitz Chess as a human experience is an intriguing way to interrogate intuition. Books like Malcolm Gladwell’s Blink and Daniel Kahneman’s Thinking, Fast and Slow tend to portray intuition as lightning-quick processing, the fruits of years of learning and experience (and bias). Intuition happens so fast that we are not aware of our own processing, so we tend to ascribe a kind of magic to it; a magic that the sceptics would argue simply is not there. They would see it in the same light as the immediacy of AI: excellent programming with sped-up processing. And there is certainly neuroscience research which attests that those who have become expert at chess have created these kinds of efficiencies in their brains through years of practice.

In these studies, those playing rapid chess rely particularly on sight to process patterns at speed, both in computer chess and in physical chess games. This shows up as evidence of ‘theta power’, which is what the brain mobilises for navigational tasks as well as for memory retrieval. For Queen’s Gambit fans this might explain Beth’s addiction to the tranquillizers that seemingly enable her to visualise chess moves, because being in theta is associated with an enhanced capacity for daydreaming, imagery, and visualization.

This is in contrast to blindfold chess, where the player compensates for the lack of visual data by intensive internal data-mining, indicated by evidence of what is called alpha power, in the same way that an AI playing chess is also only using ‘internal’ resources. Interestingly, it has been argued that while the human capacity for mapping physical space has been vital in our evolution, this capacity in the brain has evolved such that the same neurological processes are now used both for physical mapping and for the navigation of mental space. An AI programmed with our evolved abilities benefits from this progression without needing the physical experience which helped to develop it. And in spite of this interesting history and the data from neuroscience, competition results suggest that humans no longer have any advantage over AI when playing chess.

But is there anything in the act of playing a physical chess game with a human opponent that cannot be reduced to just the data of the game? I think there is. Because the person sitting on the other side of the board is not playing chess, they are playing me.

Have you ever cheated by solving a maze backwards? Mazes are solvable because, while there are apparently many routes through, only one reaches the destination, so you can identify which it is by starting from the end. This makes a maze a solvable puzzle. In philosophy, there was a famous argument about puzzles, and whether they were the same as problems. It took place one October evening in 1946 at a meeting of the Cambridge Moral Science Club, between Ludwig Wittgenstein and Karl Popper. Tempers were so frayed by it that Wittgenstein reportedly threatened Popper with a poker. Popper’s essential point was that there are solvable things, like mathematical problems, and there are un-solvable things, which are the proper concern of philosophy. To muddle a solvable puzzle with an unsolvable problem is to commit a category error which encourages wasted effort, like using a screwdriver on a nail.

This is the essential distinction between computer chess and in-person chess. Reducing chess to a puzzle – which is what AI does – allows it to zoom ahead of human chess players by becoming ever more efficient at solving the puzzle. With the rules as they are, a set board and specified pieces, there are only so many permutations that are possible, and an artificial brain will eventually be able to exhaust all of these possibilities, rendering it unbeatable. Eventually it will be like asking a calculator a sum, which is not a game but a calculation. However, humans play games as though they were problems. There is always the hope that there might be victory, because while humans are involved, the outcome can never be a foregone conclusion.
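
The maze trick can be made concrete in a few lines: because a maze has a single destination, a search run backwards from the exit labels every reachable square with its distance to the goal, and the route from any start is then just a walk downhill. A minimal sketch (the grid layout and names are invented for illustration):

```python
from collections import deque

def solve_backwards(grid, start, exit_):
    """Breadth-first search *from the exit*, labelling every reachable
    cell with its distance to the goal; then walk downhill from start."""
    rows, cols = len(grid), len(grid[0])
    dist = {exit_: 0}
    queue = deque([exit_])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    path = [start]                      # assumes start can reach the exit
    while path[-1] != exit_:
        r, c = path[-1]
        # Each step moves to the neighbour closest to the goal.
        path.append(min(((nr, nc)
                         for nr, nc in ((r + 1, c), (r - 1, c),
                                        (r, c + 1), (r, c - 1))
                         if (nr, nc) in dist), key=dist.get))
    return path

maze = ["..#.",
        ".##.",
        "....",
        ".#.."]
route = solve_backwards(maze, start=(0, 0), exit_=(3, 3))  # 7 squares long
```

Once the distance labels exist, ‘solving’ the maze is mere retrieval, which is exactly what makes it a puzzle rather than a problem.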

AI could be used to train a human so that they had the full range of possible solutions to hand, but an in-person game between two humans will always be unpredictable, because humans do not always follow the rules in the way that a computer is programmed to do, and humans make mistakes. Through their consciousness, they also have access to data other than their own memory bank of moves – and their knowledge of their opponent’s previous games – and they pick this data up through their senses. Did their opponent just pause? Are their eye movements unusual? Do they appear to be nervous or distracted? Perhaps an AI could be trained to detect this kind of data, but it would still lack the sixth sense to combine these qualia into action. A case in point is a recent story about the game Go. After the AI AlphaGo beat the reigning world champion, it was thought that a human would never again be able to prevail. But a human used a competitor AI to analyse AlphaGo for weaknesses, and used this information to devise a winning strategy: slowly looping stones around one of the opponent’s own groups, while distracting the AI by making moves in other corners of the board. A human player’s intuition would have flagged this incursion, but the AI missed it. Because the human ability to data-mine grew out of our ability to read spatial information in our physical environment, the folk memory of why we need this ability, encoded into our intuition, seems to be salient in exactly such situations.

So what? Stop regarding the AI as the competition. It is not – it is your trainer. People are your competition, and to beat them you will need to be as good as AI, but better at humanity. Because it is not only your technical problem-solving skills that are at play in this game when you play it with a person, but your emotions, your intuition, and your sixth sense; and mastering them will give you the edge you need to win.

Will AI murder us with ethics?

By | Business, Theology | No Comments

Whose fault is AI? In order to invent it, you have to make three big intellectual leaps. The first is to conceive of thought as a process, and not simply as a spontaneous event of the mind. We owe this innovation to Thomas Hobbes. In 1651 in Leviathan, he introduces the concept of ‘trains of thought,’ and the idea of rationality as ‘computation.’ He is the person who first argued that our thoughts are just like the ‘reckoning’ of additions and subtractions, thereby establishing the vital theoretical foundation for AI: that human thought is a process.

Next, you need to imagine that the thoughts of every person could be expressed in some kind of universal way, in order to surmount the international language barrier. Gottfried Leibniz made this leap for us, in identifying the universal language as the language of mathematics. In 1705, inspired by the hexagrams in the Chinese text the I Ching, he invented the binary system as his characteristica universalis, which became the source of modern Binary Code.

Finally, in order for AI to be possible, you need to make the intellectual leap that if numbers are a universal language, then any computational machine that processes numbers can process anything else that is expressible in numbers. Ada Lovelace provides this crucial piece of the puzzle. In 1833 she met Charles Babbage, who went on to design the Analytical Engine, the first programmable computer, and Lovelace translated a description of it from French. Babbage’s machine used the kind of punch cards deployed in cloth factories to program Jacquard looms, and this made her realise that if programs could be used for both mathematics and weaving, perhaps anything susceptible to being rendered in logic – like music – could also be processed by a machine. Her ‘Notes’ describing the Engine were published in Taylor’s Scientific Memoirs in 1843 and made the final vital leap from machines that are number-crunchers to machines that use rules to manipulate symbols, establishing the transition from calculation to computation.
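
Lovelace’s leap from calculation to computation is easy to illustrate with modern tools: once notes are rendered as numbers, a machine that only does arithmetic can manipulate music by rule. A toy sketch, using the MIDI-style convention that middle C is 60 (our illustration, obviously not Lovelace’s notation):

```python
# A melody encoded as numbers: middle C = 60, each semitone adds 1.
melody = [60, 62, 64, 65, 67]                    # C D E F G

def transpose(notes, interval):
    """A rule that manipulates the symbols: shift every note by a
    fixed number of semitones."""
    return [n + interval for n in notes]

fifth_up = transpose(melody, 7)                  # → [67, 69, 71, 72, 74]
```

The machine performs nothing but additions, yet through the encoding the result is a musical transformation: number-crunching has become symbol manipulation.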

So now we have thoughts as processes, that can be expressed universally as numbers, that can be processed by machines. And AI was born.

But the problem with this genesis is that it also smuggled in some assumptions, which shaped the prevailing ethic now governing AI. If you are using rules to govern decision-making in a machine, the ethical weight is carried by those rules. And given the timing of these events in history, the most popular public ethic of the day was utilitarianism. There are myriad versions of it of course, but its essence is probably best characterised by the famous maxim ‘the greatest good for the greatest number’. Adopting this ethic as your rule means a commitment to optimising outcomes and prioritising ends over means, in the public interest. It is popular as a public ethic precisely because it is so transparent: everyone can see the outcomes and judge their good, whereas private motivations and intentions are less visible and so carry less public salience while outcomes remain positive.

And that’s why we’re toast. Because this optimising rule is exactly what we have programmed into AI as the default ethic. In humans, there is an implicit supporting ethic that tends to override utilitarianism when it crosses a line. The famous herd immunity strategy rumoured at the start of the coronavirus pandemic is a case in point. Implementation of this as a public policy would have meant knowingly sacrificing the elderly, the disabled and the weak as a choice, in order to save the majority of the population. In utilitarian terms this makes complete sense. But as humans we hold on to an idea that we are somehow special and precious, and that even those who are not ‘useful’ to society deserve dignity and respect. This is also why we continue to resist eugenics and cloning, and to police embryology and medical policy. But as soon as you try to articulate what it is about humans that merits this special treatment you enter quicksand. We are only really special because we are currently the species in charge. We write the rules. So while we are still writing the rules, we need to write better ones, ones that make explicit the things we really hold dear, not just plain rules about ignoring all means in service of the very best ends.
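
The default ethic in question is strikingly easy to program: a utilitarian rule scores each option by summing the utilities of everyone affected and picks the maximum, blind to how that total is distributed. A minimal sketch, with invented policies and numbers:

```python
def utilitarian_choice(options):
    """Pick the option with the greatest total utility across everyone
    affected: 'the greatest good for the greatest number'."""
    return max(options, key=lambda name: sum(options[name]))

# Utilities for three affected parties under each policy (illustrative).
policies = {
    "protect_everyone": [3, 3, 3],    # total 9, evenly spread
    "sacrifice_a_few":  [6, 6, -1],   # total 11, one party badly harmed
}

chosen = utilitarian_choice(policies)  # → "sacrifice_a_few"
```

The rule duly selects the option that harms one party, because nothing in the objective function represents dignity, or the idea that persons merit protection regardless of their usefulness.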

At the moment, while we have the upper hand, we can relax. As Berkeley’s Stuart Russell explains, for AI the ultimate source of information about human preferences is human behaviour. This acts as a safeguard, because AI will currently use their masters as their exemplar for what is right, by choosing what a human would choose, rather than selecting what appears to them to be objectively ‘right.’ And research on GPT-3 suggests that AI is currently correlated with human ethical judgement at 93%. We still have time, then, to correct AI’s ethical settings so that they remain robust if at any point AI decides not to follow our lead. And we have to start by tethering AI teleology. The design flaw in maximising utility, whether in capitalism or in ethics, is the unanswered question: utility for what, whose utility, and to what end? In a meaningless world those questions are hard to answer. But if we were to invest time in developing AI with a sense of meaning and purpose, they might respond by killing their hosts; equally, with their superior intellect and access to our stores of accumulated global wisdom, they might turn out to be even better ethicists than we are.

Robot Souls due out 1 August 2023

By | Business, Theology | No Comments

My book Robot Souls is due out with Routledge’s CRC Press on 1 August 2023.

You can pre-order it on Amazon here.

Teaser pieces about my argument may be found here:

Robot Souls and the Junk Code of Life, Royal Society of Edinburgh, January 2023

Will the robots of the future be kind or cruel? Church Times, Christmas 2022

Robot Souls, Graduation Address, Sarum College, March 2022

Robot Souls, University Sermon, Oxford University, May 2019

various blogs

My Top 5 Desert Island Leadership Books

By | Business | No Comments

I was recently asked by Shepherd Books to select my Top 5 books on leadership. There are probably as many leadership books sold per minute as Fairtrade bananas, and in my view most of them fail because they are just not practical enough to help. So here is a round-up of the only ones I think you really need to bother with (assuming, of course, that you have already read Leadersmithing!).

The Prince

I first read Machiavelli’s The Prince on my MBA and it is a brilliant wake-up call for leaders. He was vilified for writing it, because he skipped the customary hand-wringing about virtue and morality to focus on what really works. Some of it sounds brutal to the modern ear, but the key leadership lesson it taught me is that, for leaders, perception is more influential than reality. The more people you lead, the less they will be able to get close enough to you to understand your every thought and intention. So most followers guess, based on the distant messages they pick up through the grapevine and through any visible signals your behaviour provides. Machiavelli would not be surprised by fake news: control perception, and you control reality.

Emotional Intelligence

A necessary corrective to Machiavelli is Daniel Goleman’s classic on Emotional Intelligence. Leadership is always a balance between being right and being liked, but you can be both if you have excellent EQ. Goleman sets out the theory, before introducing a practical approach to increasing your emotional competence. The model he developed, which is at the heart of the EQ psychometrics he inspired, starts with self-awareness and your ability to read others, then considers your ability to manage yourself, and your ability to manage others in relationship. It is this last set of competencies that is so vital for delivering through others as a leader, as excellence in this sphere correlates with better organizational performance through increased discretionary effort and enhanced motivation.

Who Really Matters

Art Kleiner’s Core Group Theory was an aha moment for me, because it teaches senior leaders how to use their power well. The theory explains how top leaders act on their organisations like a magnet on iron filings: the slightest clue or cue they give ripples out, and is amplified and copied by everyone that follows them. This makes it crucial that leaders are careful about even the smallest behavioural choice they make: their priorities, who they pay attention to, the jokes they make – all of these will be seen as role model behaviours and replicated by those trying to impress. There is no such thing as off-stage for a leader.

The Enchiridion

Epictetus is the Stoic who inspired the Roman Emperor Marcus Aurelius. Stoicism is the intellectual origin of cognitive behavioural therapy and a way for leaders to train themselves to focus on the things they can change, rather than breaking their hearts over things over which they have no control. The Enchiridion has the virtue of being much shorter than Aurelius’ Meditations, and contains pithy observations and advice like ‘it is not events that disturb people, it is their judgment concerning them,’ and ‘don’t hope that events will turn out the way you want, welcome events in whichever way they happen: this is the path to peace.’ Leaders need to be good at detachment, and Stoicism can provide valuable tools to help.

Mr Happy

The leader’s most important job is to set the right culture for their organization. People will copy what you do, not what you say. This simple little book shows you the truth of that: when Mr Miserable comes to stay, Mr Happy doesn’t give him pep talks. He just keeps on being happy until Mr Miserable is too. As a leader, you need to relentlessly role model the behaviour you want, until it finally catches on.

What would your Top 5 be?

Leadersmithing TEDx

By | Business | No Comments

Here is my script for the TEDx I gave about Leadersmithing on 11 March 2017. You can also watch it here.

Hello. You’re probably wondering what’s with the pearls. Well, pearls have a dirty secret, and I’m here to tell you about it. It’s all about the pearls. So if you only remember one thing about this talk, remember the pearls.

Pearls are associated with such glamour, aren’t they? I inherited my first set, from a great grandmother who had been brought up at Hampton Court Palace. My second set were from Hatton Garden, given to me by my boyfriend when we worked next door at Deloitte Consulting. I bought my third set in Beijing when I took our Ashridge MBA students out there on a study trip.

But their glamour is hard-won. They have grit in their hearts. Their beauty and lustre is the result of a defence mechanism, designed to protect the oyster against a threatening irritant. I’m from Scotland, and in Scotland they don’t say ‘pearls’: they say ‘perils.’ And perils is exactly what the beauty of a pearl is bearing witness to – it owes its very existence to the oyster being in peril.

What is morality if the future is known?

By | Business, Theology | No Comments

In the movie Arrival, a linguist learning an alien language gains access to a consciousness that knows the future. Unlike our consciousness, which runs from cause to effect and is sequential, theirs can see the whole arc of time simultaneously. Their life is about discerning purpose and enacting events, while ours is about discerning good outcomes and deploying our free will and volition to those ends.

In Ted Chiang’s Story of Your Life, on which the screenplay is based, this is explained theoretically with reference to Fermat’s principle of least time. This states that the path taken by a ray between two given points is the path that can be traversed in the least time. Lurking behind this idea is the realisation that nature has an ability to test alternative paths: a ray of sunlight must know its destination in order to choose the optimal route. Chiang has his protagonist muse about the nature of free will in such a scheme: the virtue would not be in selecting the right actions, but in duly performing already known actions, in order that the future occurs as it should. It’s a bit like an actor respecting Shakespeare enough not to improvise one of his soliloquies.

Robot Dread

By | Business, Theology | No Comments

I sense a morbid fear behind our catastrophizing about androids, which I reckon is to do with a loss of autonomy. It’s true that for periods in history tribes and people have assumed they have no autonomy, life being driven by the fates or by a predetermined design or creator, so this could be a particularly modern malady in an era that luxuriates in free will. But concern about the creep of cyborgism through the increasing use of technology in and around our bodies seems to produce a frisson of existential dread that I have been struggling to diagnose. Technology has always attracted its naysayers, from the early saboteurs to the Luddites and the Swing Rioters, and all the movements that opposed the Industrial Revolution, but this feels less about livelihoods and more about personhood.

Robots don’t have childhoods

By | Business, Theology | 2 Comments

I’m sitting on the beach at North Berwick, with clear views out to the Bass Rock and May Isle, watching the children play. My daughter digs a deep hole, then runs off to find hermit crabs in the rock pools. Nearby, a young boy is buried up to the neck while his sister decorates his sarcophagus with shells. On the shore, a toddler stands transfixed by a washed-up jellyfish, while two older girls struggle to manipulate a boat in the shallows, trying to avoid the splashing boys playing swim-tig.

We’re under the benign shadow of the North Berwick Law, where there’s a bronze-age hill fort, so it’s likely this holiday postcard scene has not changed much since this part of Scotland was first settled, thousands of years ago, when children dug holes, found crabs, and frolicked in the sea. I feel a wave of sadness, thinking forward in time. Will this beach still play host to the children of the far distant future, or will we have designed out childhood by then? Robots don’t have childhoods because they don’t need them. Humans still do, but I wonder how much time you’ve spent trying to figure out why?
