
ChatGPT Archives - Eve Poole

Learning Loss and Losing Leaders


It seems that thrusting professionals are seeing AI as a fast track to partner, reckoning that all those tedious years of working your way up the hard way will now be taken care of by generative AI tools like ChatGPT. Hurray! We’ll all get rich quicker!

But privately the firms hiring them are rather worried. It takes ages for the academic curriculum to sync up with modernity, so what will happen to these shiny graduates with their googleable degrees, deposed as trainees by AI, but still expected to re-emerge as experienced hires just below partner level?

In school they’re already talking about the learning loss that occurs when kids outsource their knowledge acquisition to AI. This phenomenon isn’t new: we forgot how to remember the Odyssey when we learned how to write it down; we forgot how to hear music in all its fullness when we invented notation and tethered the scale to the equal temperament of a keyboard; and the introduction of calculators drastically reduced our ability to do mental arithmetic. But this might be the first time we have encountered a technology that can have that effect across so many categories of learning simultaneously. And if kids have experienced learning loss at school, then continue with their reliance on AI into university, will they come out knowing anything useful at all? If all they will have learned is how to use an AI brilliantly, well, AIs are already learning to be better at that too… Industry is rediscovering apprenticeships to make up for an already disappointing graduate cadre: but what does it need to start doing now, if even these apprentices will be usurped by AI?

Way back in 2003 at Ashridge, we initiated a research programme based on asking existing board-level leaders ‘what do you know now about yourself as a leader that you wish you’d known 10 years ago?’ The findings were used to devise a leadership accelerator and were written up as the book Leadersmithing. Our research programme included collaborating with a neuroscientist to understand how this kind of learning is acquired. From that, we showed the role of the emotions in learning, and found that reliable templates are most efficiently acquired through learning under pressure. And both this method and these research findings suggest an answer to the conundrum of workplace learning loss.

First, we need to get forensic about what, precisely, partners do, and how they learned it. It’s highly likely that much of their value-add is not AI-able, so this exercise should immediately reveal a workplace curriculum for those hoping to succeed them. The Leadersmithing list of critical incidents suggests it will be a fairly standard set of challenges, which will differ between workplaces and cultures only by degree and nuance rather than by type. For example, all partners will have had their mettle tested by making key decisions, fronting multi-million dollar pitches, and mopping up after things have gone wrong. And we know these things are teachable, if you can be precise enough about the muscle memory you’re trying to acquire, like practising difficult conversations or handling hostile media.

Second, we need to learn from the neuroscience. I remember answering the phone in my first ever London office, to hear my sibling, hiding in a cupboard at another London office, asking in a stage whisper: ‘when you’re photocopying, do you take the staples out?!’ We all remember those ghastly days of learning the ropes largely by making mistakes and incurring the wrath of our seniors over everything from making the coffee wrong to sending out blank faxes. Life would indeed be tranquil if we could make AI take this pain for us. Our recall of such events is heightened by the fact that our errors were often observed. And indeed we learned vicariously, wincing as we witnessed the mistakes of others, which is another argument in favour of a back-to-the-office policy. This is because our Ashridge findings showed that whenever you feel observed and under pressure, your heart rate increases and your learning is enhanced, because the amygdala tags the memories formed in those moments for deep storage.

And we all learned far more than just office-craft in those clumsy days. Through the tedium of note-taking and bag-carrying we saw how leaders really behave: we learned about power, decision-making, values and standards. We witnessed the quite brilliant rescuing of an impossible situation, or a tension defused with a beautifully timed witticism. We also learned how not to do it, too often I imagine. And it is this implicit learning that we now need to surface and teach back, so that we do not lose a whole generation to AI. Let’s use the gift of AI to remove the ritual humiliations of traineeship, but winnow out of it all the Leadersmithing we can find.

Does it matter if AI is conscious?


It’s the Hard Problem, they say. We don’t know what consciousness is, they say. But AI might already be conscious, they say. I notice that in the conversation about AI, consciousness has become the deal-breaker. Once AI gets it? Game over. But I think consciousness is a red herring. This misdirection is a very good way to keep people busy, though.

What else is conscious? Bats, famously. In the case of animals, their consciousness gives them sentience, so we are increasingly spreading the net of rights ever further over the natural world, as our understanding of the concept of felt harms expands. But sentience is not the reason we gave corporations rights. It would not be particularly meaningful to describe a major brand as conscious, except in a metaphorical way. We gave corporations rights so that we might control them better. If they have legal personality, we can sue them. But with AI we have an awkward hybrid, a new thing that is not an animal, and not a corporation.

So is consciousness helpful as a metric, or not? Would it matter if AI were conscious? Only if it had no rights, because then we might abuse it. The only other way in which consciousness matters is for human exceptionalism, because we’re at the apex of the animal kingdom, and regard our own consciousness as the apotheosis. Or perhaps it comes from some kind of proto-religious ghost memory, because we used to think that only God could bestow this gift? In that kind of worldview, nothing manufactured could have a soul, by definition. Is our thrall to AI and consciousness really a frisson that we have played God, and finally pulled it off?

I think it’s likely that AI will develop something akin to consciousness, in that it will have a felt sense of its own subjective experience. This will not make it human. Neither is a bat a human, yet it seems to us to be conscious. That the bat is organic and not manufactured makes its consciousness feel as though it has a family resemblance to ours, whereas we have never imagined that a toaster might have feelings. But is that because there is something axiomatically distinct between something created by nature and something created by persons? Categorically, of course there is. But if you then want to argue that consciousness can only ever be a property of something natural, we’ve just smuggled God back in again, because that sounds like some sort of argument about the sanctity of creation, or possibly about the properties of organic matter, which we can already grow artificially in the lab… So either consciousness is just about processing, in which case AI will get it; or it’s about God, and AI won’t. We can argue about that until the cows come home. Or until AI sneaks up behind us while we’re busy philosophising.

Because what’s really the deal-breaker is free will. I know that’s a contested term. In this instance I take it to mean an ability to self-determine, to exercise agency, and to make decisions. Again, while the cows are still out, we could argue about how ‘free’ anyone is. Let’s assume formally that we exist (a belief in Realism – harder to prove than you might imagine). Let’s also assume human self-determination, as enshrined in international law, which holds that we are not generally speaking pre-programmed; indeed attempts to programme us would thwart our human rights. Thus, anything that exists and can self-determine has free will. Whether or not it consciously self-determines is neither here nor there, except as a matter of law, were AI rights to get into the jurisprudence of moral retribution, as opposed to notions of restorative or distributive justice for the better ordering of society, which may of course also include excluding wrong-doers from that society.

So could AI – being by definition pre-programmed – ever develop free will? Where are we on that? Well, it’s unclear, as so little is in the public domain. But from what has been published it is clear that it’s already started. Some AIs, like Hod Lipson’s four-legged walking robot, have been given minimal programming and encouraged to ‘self-teach’, so that they make their own decisions about how to learn. In robotics, this is a vital step on the journey toward self-replication, so that machines can self-diagnose and repair themselves in remote locations. For large language models like ChatGPT, the design for a self-programming AI has been validated, using a code generation model that can modify its own source code to improve its performance, and program other models to perform tasks. An ability to make autonomous decisions, and to reprogram? That sounds enough like human free will to me to spell risk. And it is this risk, that autonomous AIs might make decisions we don’t like, that gives rise to SciFi-fuelled consternation about alignment with human values and interests, and to the emerging global alarm about the Control Problem.
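To make the idea of self-programming concrete, here is a purely hypothetical, minimal sketch of what ‘a program modifying its own source code’ can mean: a toy Python script that scores a candidate value for one of its own constants and, if the score improves, rewrites that constant in its own file. The STEP_SIZE name, the stand-in performance() objective and the single-file layout are all assumptions made for illustration; this is not the system referenced above, just the simplest instance of the pattern.

```python
# Toy illustration only: a script that rewrites a constant in its own source
# file when a candidate value scores better on a stand-in objective.
# Everything here (the STEP_SIZE name, the performance() function, the
# single-file layout) is a hypothetical example, not any system cited above.

import pathlib
import re

STEP_SIZE = 0.5  # the parameter this script tunes in its own source code


def performance(step: float) -> float:
    """Stand-in objective: scores highest when step == 2.0."""
    return -(step - 2.0) ** 2


def self_modify() -> None:
    src_path = pathlib.Path(__file__)
    source = src_path.read_text()

    # Propose a slightly larger step and keep it only if it scores better.
    candidate = round(STEP_SIZE * 1.1, 3)
    if performance(candidate) > performance(STEP_SIZE):
        new_source = re.sub(
            r"STEP_SIZE = [0-9.]+",
            f"STEP_SIZE = {candidate}",
            source,
            count=1,
        )
        # The next run of the script starts from the rewritten value.
        src_path.write_text(new_source)


if __name__ == "__main__":
    self_modify()
    print(f"STEP_SIZE this run (before any rewrite takes effect): {STEP_SIZE}")
```

The point of the toy is simply that the code which runs next time is partly an output of the code that ran this time; scale that loop up, and the line between programmed and self-programming starts to blur.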

And this is why robots need rights. Not because they can feel – that may come, and change the debate yet again; but because, now, like corporations, we need a way to hold them to account. But we would not need to indulge in such frenzied regulatory whack-a-mole if we had taken more time to get the design right to start with. And that’s why I’m arguing for a pause, not primarily to allow regulation to catch up, but to buy time for us to develop and retrofit some of the guardrails that have already kept the human species broadly on track by protecting us from our own free will. Yes, all that human junk code…