Regina Scientiarum

By | Theology | No Comments

I once managed to inveigle my way into a job interview at Deloitte. The two Partners were clearly rather underwhelmed by my CV. At that stage I had four (junior) years at the Church Commissioners under my belt, and an MBA from a school that was Not On Their List. Finally they reached the end of the interview. Obviously relieved, as they gathered up their papers, they asked rather diffidently, ‘Do you have any questions for us?’ ‘Yes!’ I said eagerly. ‘Do you have any reservations about my candidacy?’ They scowled. ‘Why on earth would we want to risk putting someone with a Theology degree in front of our clients?’ Well, you’ll be delighted to know that I sat them right back down and gave them the full story on why a theology degree is the ONLY degree a girl will ever need.

So I thought I might give you that story today, by way of saluting your academic achievements here at Sarum. But first I need to tell you something rather alarming.

In a lab at Columbia University back in 2017, Hod Lipson was working on a robot. It was a simple little thing, just four small mechanical legs, which had been tasked with moving to the other side of the room. It was programmed with Deep Learning, which uses artificial neural networks and reinforcement learning so that robots can teach themselves. Lipson wanted to know if these legs could work out how to walk on their own. Lo and behold, a few days later, the robot had taken its first steps and achieved the goal. So Lipson removed one of the legs, to see if it could re-learn to walk with just three. It worked. Flushed with success, the team got the clever robot ready for a formal demonstration. They started tracking how it was using its neural network to learn, and discovered something unexpected. The robot had taught itself how to read their faces. It had realised that observer feedback was relevant data, and had decided to harvest it. Indeed, it had repurposed a neuron, just for that task. As humans we can see why this was a smart choice, because we’ve all stood around cheering on a toddler, and our feedback helps them learn how to walk too. But we’d thought that robots couldn’t develop spontaneous self-awareness; and that would be our gift to give them if we ever saw fit. We thought that consciousness was something special, that couldn’t be manufactured; that it was our particular gift from God. Were we wrong?

Yes, and no. I think we were wrong to think that self-awareness is just a property of the living. Logically it must kick in at whatever point in its evolution an entity needs it, in order to promote learning and progress. But I think self-awareness is still different from consciousness, and I think consciousness is distinct from soul. And I think theology is our only hope, and I’d like to tell you why.

First, a quick recap on consciousness. Have you ever imagined being a bat? In the 1970s, the philosopher Thomas Nagel wrote a famous paper about it. He argued that if you’re conscious, you have a subjective sense of ‘what it’s like’ to be you. As a human, I can’t experience ‘bat-ness,’ but the fact that I can’t express ‘bat-ness’ in human language doesn’t airbrush out its reality as a phenomenon. That makes science a bad tool for trying to understand consciousness, because it can only cope with concrete objectivities that are susceptible to physical description. So philosophers of mind have come up with some catchy jargon to try to pin down the experience of being conscious. They call this ‘experiencing qualia.’

‘Qualia’ are defined as individual instances of subjective, conscious experience, like experiencing the colour red. The philosopher Daniel Dennett defines the properties of qualia as: (1) ineffable – you can only apprehend them by direct experience; (2) intrinsic – you feel them independently; (3) private – they are only ever truly yours, even if others seem to have similar experiences; and (4) they are directly or immediately apprehensible in consciousness – you know you’re experiencing them when they happen. That feeling you’re getting at the moment, of puzzlement or daydreaming or hunger? That’s a quale.

I suppose it’s arguable that robots could learn these feelings too. Perhaps their experience of ‘seeing’ the ‘colour’ ‘red’ might differ from our own subjective experience of seeing the colour red; but they might well be able to enjoy their own subjective experience of robot-ness. And if they do develop an ability to experience qualia – indeed they may already have done so – robots could argue that as conscious beings they merit the same moral rights as we do. Scary stuff.

This is where you come in, because you are our vanguard, our shield and buckler. Today is the day when theologians around the world will rise up, and re-take their crowns as liegemen of the Queen of the Sciences! Regina Scientiarum! Because what we still have left, when all of this has been replicated, is our human souls, hiding in plain sight. Where are they hiding? Why, in all that junk code that no self-respecting AI designer would ever code into a robot: Emotions? Mistakes? Uncertainty? Free Will? Glitches! Send them back to the factory! Or are they in fact hallmarks of soul?

I grew up in St Andrews, and if I forgot my keys I could shinny over our back wall via the School of Divinity. To do so I had to pass through the front gate of St Mary’s, above which is inscribed in principio erat verbum, the reading from John that we heard today. We know that we’re made in God’s image, and perfectly designed for His ends, so we know that our junk code is no accident. While the race is on to replicate everything human that can be seen, we know that this risks leaving out the most important and most distinctive thing about us as a species.

Are any of you Doctor Who fans? Perhaps you remember a two-parter in the modern franchise about a World War II monster, a child with a gas-mask fused to his face who’s turning the rest of London into gas-mask-wearing monsters, all wandering around like zombies asking ‘are you my mummy?’ The problem is resolved when the Doctor realises that alien nanogenes had got the design of the first child they met wrong, assuming that the gas-mask was part of him, so they ‘healed him’ with the gas-mask attached. It’s only when his mother finally hugs him that they ‘learn’ about her DNA, so can reconfigure the infected humans as normal, and the world is saved.

This story illustrates the difference between what’s called source code and executable code. In programming, the first step is to write source code setting out what you want the program to do; this is then compiled into machine code, and shipped as executable code. The latter is the ‘black box’ that’s handed over. You might copy the executable code, but if you have no access to the source code, you have to guess at the underlying logic and rules. This is essentially what we’ve been doing with fossils and geology and astronomy to come up with our theories of evolution and the Big Bang. It’s also how we’re explaining consciousness; yet what I see behind consciousness is a design that I would call the soul, or John the Evangelist might call the logos.
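To make the analogy concrete, here is a toy sketch in Python (the function and its rule are invented for illustration): given only an executable black box, all we can do is probe it with inputs and hypothesise about its source.

```python
# A toy illustration of the 'black box' problem: we can run the
# executable, but we cannot read the source code behind it.

def black_box(x):
    # Pretend this body is invisible: we may only call the function.
    return 3 * x + 1

# Probe the box with sample inputs, as we probe fossils and starlight.
observations = [(x, black_box(x)) for x in range(5)]
print(observations)  # [(0, 1), (1, 4), (2, 7), (3, 10), (4, 13)]

# The constant first differences suggest a linear rule: a hypothesis,
# consistent with the evidence, but never the source code itself.
diffs = {b2 - b1 for (_, b1), (_, b2) in zip(observations, observations[1:])}
print(diffs)  # {3}
```

Evolution, the Big Bang, and our theories of consciousness are, on this analogy, exactly such hypotheses: inferences from outputs, made without access to the source.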
So what do I mean by junk code? Here are four key lines of it: emotions, uncertainty, mistakes, and free will.

First: emotions. Mr Spock certainly doesn’t approve. How irrational and messy they are! Yuk. But there I go, being emotional about emotions again. Being rational about them, they seem rather important. In fact, the most ancient part of our brain uses them to prioritise the experiences and memories most likely to help us survive. So vital are they as a motivator, that we’re having to programme fake rewards into AI through reinforcement learning to try to drive progress. We may find emotions inconvenient and rather difficult to tame in laboratory conditions, but even Darwin thought they were vital to our evolution, both for their social value and for the psychological health of our young. And the religions have developed myriad theology, art and music to help us with them; as well as liturgy for key emotional life events, to aid our understanding and to tune our emotional registers.
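For readers curious what those ‘fake rewards’ look like in practice, here is a minimal reinforcement-learning sketch in Python. Everything in it – the six-cell corridor, the reward of +1 at the goal – is an invented toy, not any particular lab’s system; the point is simply that a manufactured number stands in for the motivation that emotion gives us for free.

```python
import random

random.seed(0)
GOAL = 5  # the far end of a six-cell corridor, like our robot's room
# One learned value per (cell, step-left-or-right) pair.
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (-1, 1)}

for _ in range(500):  # 500 practice runs
    s = 0
    while s != GOAL:
        a = random.choice((-1, 1))            # stumble about at random
        s2 = min(max(s + a, 0), GOAL)         # walls at both ends
        reward = 1.0 if s2 == GOAL else 0.0   # the manufactured 'reward'
        best_next = 0.0 if s2 == GOAL else max(q[(s2, -1)], q[(s2, 1)])
        # Q-learning update: nudge the value towards reward + discounted future.
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# The learned policy now heads for the goal from every cell.
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1, 1]
```

No feeling is involved anywhere: the number is doing the job that hunger, fear and delight do for a toddler learning to walk.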

Second, uncertainty. We have an extraordinary capacity to tolerate ambiguity. As Lewis Carroll’s White Queen famously put it, “sometimes I’ve believed as many as six impossible things before breakfast.” And crikey is Theology a primer for this skill! Again, it’s so important that they’re also now trying to programme it back into AI, having originally thought it was death to logic. This is because uncertainty stops us from premature decision-making, and helps us to see possibilities for invention and innovation. In this way it also drives us towards improvement, because when things don’t neatly fit we have to make room for them. That’s why the programmers need it back: if an AI is classifying a blurry scan, and only has the labels cancer/no-cancer, a mistake could be fatal.
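As a sketch of what ‘programming uncertainty back in’ means, here is a toy classifier in Python that reports its confidence rather than a bare label, and refers the doubtful cases to a human. The scores and the 90% threshold are invented for illustration.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to one."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(p_cancer, threshold=0.9):
    """Act only on confident predictions; otherwise defer to a human."""
    if p_cancer >= threshold:
        return "cancer"
    if p_cancer <= 1 - threshold:
        return "no-cancer"
    return "refer to a human"

# Raw scores for ('no-cancer', 'cancer') on a blurry image: nearly a coin-toss.
p_no, p_cancer = softmax([0.2, 0.1])
print(round(p_cancer, 2))   # 0.48 -- far too uncertain for a hard label
print(decide(p_cancer))     # refer to a human
```

A system forced to emit only cancer/no-cancer would have flipped a weighted coin; the probabilistic version knows that it doesn’t know.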

Third, those mistakes. Not the kind of trial and error mistakes made by our stumbling robot, but ghastly errors and lapses of judgement. Sins of omission and commission. Unkindness and injustice. Avoiding error is one of the very reasons we’ve developed computers in the first place. But without this very human capacity to get it wrong, we would not need a conscience or feel remorse, or be driven to repent and to make good. These heartfelt yearnings for restitution where wrongs need righting are a powerful force for good, whether the fault was ours or not. And without the junk code that denies us absolute clarity about right and wrong, but gives us a conscience to help, we would never have developed the compassion to heal the wounds of our communities and to work to safeguard their future. Now, where might we find a well of wisdom about remorse and repentance?!

Lastly, the humdinger: free will. Oh for a degree in Theology to fathom that one! There’s an intriguing story written by Ted Chiang, which is a warning about an imaginary device called a Predictor, that flashes if you press it. The problem is, it actually flashes one second before you press it, and it’s impossible to beat. Playing with it becomes addictive, and ultimately teaches the player that there’s no such thing as free will. In the story, all those who play it lose the will to live. That’s why in the film The Matrix they had to put us into a simulation, because they couldn’t farm us efficiently if we didn’t experience our lives as meaningful. Theologians have long wrestled with free will, pre-destination and predetermination: we simply don’t know how programmed we might well be. The vital thing is that we have a line of junk code that insists we are free. It has come to be understood as such a fundamental human right that it is internationally protected, and without it we feel dehumanised. That’s why imprisonment and the withdrawal of ordinary freedoms is used by society as a punishment. Those writing about incarceration, like Viktor Frankl in Man’s Search for Meaning, show that an ability to find ways to keep feeling free is vital to human survival. But it’s a terrible burden, because as we’ve seen, our emotions, all that uncertainty, and our proneness to mistakes make our navigation of free will the absolute vale of soul-making. Who might help us in this quest to try to get it as right as possible? Oh, theology again.

So theology comes in everywhere. In fact, it’s the only discipline with the resources to deal with all four of these lines of junk code simultaneously, and in sufficient depth to be useful. What an efficient degree! No PPE for us.

And even more so than the content you’ve studied, the neuro-psychiatrist Iain McGilchrist would massively approve of the brain workout you’ve just given yourselves. You might have heard of his thesis of the Divided Brain, which argues that the brain’s left hemisphere is designed to facilitate the kind of narrow attention an animal or a person would need to focus on the task in hand, while the right hemisphere keeps watch for anything that might interrupt, and makes broad connections with the outside world. So the right hemisphere gives sustained, broad, open vigilance and alertness; while the left hemisphere gives narrow, sharply-focused attention to detail. With our left hemisphere we grasp things in our hands and make tools; in language our left hemisphere similarly ‘grasps’ things by pinning them down and being precise. To do this it needs a simplified version of reality, a sort of mental summary, so that it’s not distracted by complexity. Meanwhile the right hemisphere is alert for exceptions, and things that might be different from what we expect. It sees things in context, and apprehends meaning and metaphor, body-language and the emotions. It’s famously and deliberately woolly. But McGilchrist argues that in society at large, ever since the Enlightenment, there’s been a steady and devastating takeover of the right brain by the left brain, such that we’re now in danger of quite literally losing our faculties. So another way of understanding what you’ve been up to at Sarum is that you’ve been re-balancing your brains, priming yourselves to be better and more supple humans in the future.

And this really matters, because of the risk of us sleepwalking into dystopia. Of course, humans have been using tools for millennia, and computers are just the latest wave of technology designed to make our lives easier. From spears to wheels, and smelting to writing, we’re a species that uses invention to improve the quality of our lives. But inventions like electric light, the combustion engine and anaesthesia were intended to improve the human experience, while Artificial Intelligence is actually designed to replace it. And we’ve careered off on this giddy path with no exit strategy. We’re already losing control of our emerging invention, and we may already have passed the point of no return.

My solution is therefore compulsory theology for all: Regina Scientiarum! But seriously, I’m in deadly earnest. Who else can lead us through this minefield, if we only have the courage to stand up and be counted? We need to explore, understand, describe and nurture our junk code, then learn how to cultivate and cherish it as a global society. And as theologians, if our junk code really is the signs of soul, and part of God’s design for us, we need to account for the hope that is in us. I certainly intend to be very noisy about it, and I hope that with your freshly minted certificates, diplomas and degrees, you will join me.

Address given at the Sarum College Presentation of Academic Awards, Saturday 12 March 2022

What is morality if the future is known?

By | Business, Theology | No Comments

In the movie Arrival a linguist learning an alien language gains access to a consciousness that knows the future. Unlike our consciousness, which runs from cause to effect and is sequential, theirs can see the whole arc of time simultaneously. Their life is about discerning purpose and enacting events, while ours is about discerning good outcomes and deploying our free will and volition to those ends.

In Ted Chiang’s Story of Your Life, on which the screenplay is based, this is explained theoretically with reference to Fermat’s principle of least time. This states that the path taken by a ray of light between two given points is the path that can be traversed in the least time. Lurking behind this idea is the realisation that nature has an ability to test alternative paths: a ray of sunlight must know its destination in order to choose the optimal route. Chiang has his protagonist muse about the nature of free will in such a scheme: the virtue would not be in selecting the right actions, but in duly performing already known actions, in order that the future occurs as it should. It’s a bit like an actor respecting Shakespeare enough not to improvise one of his soliloquies.
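Fermat’s principle can even be checked numerically. The sketch below (in Python, with invented speeds and geometry) brute-forces the least-time crossing point for a ray passing from a fast medium into a slow one, and Snell’s law of refraction duly falls out of the minimisation.

```python
import math

v1, v2 = 1.0, 0.75   # speed of light above and below the interface
# The ray starts 1 unit above the interface and ends 1 unit below it,
# displaced 1 unit horizontally.

def travel_time(x):
    """Total travel time if the ray crosses the interface at (x, 0)."""
    t_above = math.hypot(x, 1.0) / v1
    t_below = math.hypot(1.0 - x, 1.0) / v2
    return t_above + t_below

# Try every candidate crossing point: 'nature testing alternative paths'.
best_x = min((i / 10000 for i in range(10001)), key=travel_time)

# At the least-time point, sin(theta1)/v1 == sin(theta2)/v2: Snell's law.
sin1 = best_x / math.hypot(best_x, 1.0)
sin2 = (1.0 - best_x) / math.hypot(1.0 - best_x, 1.0)
print(abs(sin1 / v1 - sin2 / v2) < 1e-3)  # True
```

The ray never ‘chooses’ anything, of course; the least-time path simply is the path taken, which is exactly the ambiguity Chiang exploits.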

While this thought experiment made me wonder about the ethics curriculum in alien schools – all virtue ethics and Stoicism? – it also made me pause to notice that in many ways we do know the future. While we may not have access to the narrative of our own individual futures, when it comes to AI we do have some inkling about the most likely futures we will face as a species. It has been the signal mission of the Science Fiction genre to set out these futures for our inspection. If we know our futures, what is the moral task?

In his book Life 3.0, Max Tegmark identifies 12 ‘AI Aftermath Scenarios’, ranging from Libertarian Utopia to Self-destruction. These variously describe the relationship between AI and humans, including ‘Zookeeper’ wherein AI retains humans essentially as pets. These 12 scenarios alone would give us fertile ground for devising ethical curricula, but to give you a flavour of the task here are some starter questions. If Zookeeper were a future, we would not be in charge. How could we act now to ensure that our keepers were benign? Cue John Rawls, and the end of our complacency about leaving the coding up to other people and to the free markets.

The problem with life’s current business logics is that they are irredeemably Enlightened and therefore utilitarian. ‘If x then y’ is about – obviously – optimising outcomes. Which would be bad news for us as a species if we were no longer in charge. While we are, we are kept safe, because behind this po-faced scientific rationalism there is a deep-seated commitment to the dignity of the human person: I will not harvest one of your kidneys, euthanise your ailing granny, or sterilise your disabled daughter. But this is not a logic that has been transparently translated into computer programs: Asimov’s fictional Laws of Robotics famously forbid harm to humans, but this has been quickly ignored in the case of military drones and autonomous weapons. The philosopher John Rawls’ Veil of Ignorance is a way of thinking about the construction of society such that the designers should assume they might be the losers as well as the winners, on the view that designing in fairness for paupers as well as princes would result in a just society. Should we therefore now use author privilege to bake our supremacy into the basic global codes for AI, and more precisely into our systems of law?

Tegmark’s scenarios suggest myriad other approaches and dilemmas. For instance, they include various Utopias, where humans and AI peacefully co-exist, because of the abolition of property rights and the introduction of guaranteed income. Does this possible future shed light on our current debates about Universal Basic Income?

Yet another scenario sees humans gracefully exiting in favour of their superior AI offspring. If we decided that we actively wanted to be bettered, how would we accelerate discoveries about consciousness to devise ever-better androids? In any scenario, our ethics and values must needs be reviewed, which has implications not only for public policy but also for law and for education.

But regardless of which thought experiment about the future you choose to run, the bottom line is this: if we know the range of likely outcomes, why are we not planning more thoroughly for them?

Robot Dread

By | Business, Theology | No Comments

I sense a morbid fear behind our catastrophizing about androids, which I reckon is to do with a loss of autonomy. It’s true that for periods in history tribes and people have assumed they have no autonomy, life being driven by the fates or by a predetermined design or creator, so this could be a particularly modern malady in an era that luxuriates in free will. But concern about the creep of cyborgism through the increasing use of technology in and around our bodies seems to produce a frisson of existential dread that I have been struggling to diagnose. Technology has always attracted its naysayers, from the early saboteurs to the Luddites and the Swing Rioters, and all the movements that opposed the Industrial Revolution, but this feels less about livelihoods and more about personhood.

Raymond Cattell famously identified two types or modes of human intelligence: crystallized intelligence and fluid intelligence. His model is not of course correct, but it’s a useful lens through which to view this particular problem. His categorization distinguishes between those facts and experiences that are crystallized as the sum total of our learned knowledge – our databases, if you like – and the fluid intelligence that abstracts from those databases to solve novel problems or make intuitive leaps. In trying to zero in on our scruple about AI, I think we are groping towards an understanding of the latter as something that is particularly human and which we would hold to be special. As more and more species are found to be able to use tools to solve problems, we tend to obsess about this vanishing ground of particularity, which, with Google and the great apes, leaves us telling the story of Einstein’s beam of light to reassure ourselves that we’re still somehow neurobiologically distinctive.

I think our dread is an existential fear of being programmed, and of waking up to find we have been someone’s puppet all along. All those times when we felt we were wrestling with our consciences or weighing up weighty arguments, and the rules were already driving us towards a decision we had felt was our very own and not preordained in someone else’s playbook. If we have no agency, we have no freedom, except in an illusory and manipulative way.

But while I appreciate this fear, I suspect it is misbegotten. It is of course theoretically possible that AI will be able to programme us in the future. It’s doing that already in many ways, but at the moment with our willing consent, when it comes to our health and life management. And it’s this word ‘consent’ that is crucial. Do we know enough about AI to be actively giving our consent, globally, in an informed way? And are we given the opportunity to consent? I think not, which is why the dread is useful, if it galvanizes us into interest and action. We should be asking sharp questions about programming and controls, and the ownership of code, and which red lines as a species we think it is not yet safe to cross. Does our driverless car choose to sacrifice the granny or the toddler, and do we consent to that coding? Are we happy that our every online reaction and transaction has become the database for AI, embedding as objective fact the sum total of all our flawed human and subjective interactions? Are we happy with a global Intellectual Property regime that both confers and protects corporate ownership of AI technology, without sufficient regulation or accountability to nation states?

It may be that one day we will find out we were programmed after all. But while we still rejoice in our free will, we need to exercise it, and not let this unspecified dread confine us to stupefaction watching SciFi on the sofa. Meanwhile, we still don’t really know enough about Cattell’s fluid intelligence, which does feel distinctive. Could we do more in our schools to develop this muscle, rather than maintaining our narrow focus on A*s in the STEM subjects that the robots have already nailed? It might make us more human, and able to programme ever more human robots too…

Robots don’t have childhoods

By | Business, Theology | 2 Comments

I’m sitting on the beach at North Berwick, with clear views out to the Bass Rock and May Isle, watching the children play. My daughter digs a deep hole, then runs off to find hermit crabs in the rock pools. Nearby, a young boy is buried up to the neck while his sister decorates his sarcophagus with shells. On the shore, a toddler stands transfixed by a washed-up jellyfish, while two older girls struggle to manipulate a boat in the shallows, trying to avoid the splashing boys playing swim-tig.

We’re under the benign shadow of North Berwick Law, where there’s a Bronze Age hill fort, so it’s likely this holiday postcard scene has not changed much since this part of Scotland was first settled, thousands of years ago, when those children dug holes, found crabs, and frolicked in the sea. I feel a wave of such sadness, thinking forward in time. Will this beach still play host to the children of the far distant future, or will we have designed out childhood by then? Robots don’t have childhoods because they don’t need them. Humans still do, but I wonder how much time you’ve spent trying to figure out why?

At the moment we need a childhood to grow physically, and to develop mentally towards adulthood and independence from our parents. All robots are adult already, so have no need of this rather awkward and inefficient phase: just a quick test, then the on button. As a species, humans are ridiculously slow to mature. This is so obviously problematic when compared to other species that there must be an evolutionary reason for keeping this comparative design flaw. It seems that to develop a brain of human complexity takes time, hence this slow process.

But if we could decode the brain, could we not short-circuit the process by cloning adults and programming them direct? This of course is the ultimate design goal of AI, and we’re familiar with it from a whole host of SciFi movies: whether or not we keep humans as well, or simply use the secrets of their brains to evolve beyond them remains to be seen.

It might seem obscene, in these halcyon days of the UN Convention on the Rights of the Child, to empty my beach by indulging in a thought experiment about the future of childhood, but given that our technology has already overtaken our capacity to agree global ethical red lines in so many areas, we need to confront this spectre in order to work out not only why it feels anathema to us, but what we might do in response.

What are these children doing on the beach? All parents who have spent interminable hours in dilapidated playgrounds will have had the same thought, wondering whether there might be an easier way. They are playing, of course, but with Darwin’s eyes we can also see that they are learning. They are learning about their physicality and their preferences; they are learning about other people and about relationships; they are learning about the natural world and about the world’s rules. So beware the child with no scabs on their knees: they have not yet learned about taking risks. We’re quite quixotic about childhood. Most of us loathed our own, but it seems we will fight to our dying breath to protect the childhoods of those we love, so our children still tuck a baby tooth under their pillow, and write to Santa Claus.

We’re in a transition. AI can already do many of the cognitive things that humans can do, more quickly, accurately and cheaply, and it’s improving all the time. This creates a dilemma. Because the future is not yet here, we’re still competing hard in the previous race, and on its terms. While most parents know that Google has already overtaken their children, and there is more to education than information, the current social frenzy is still about doing your utmost to get your kids into the Gradgrind School of Facts. But the shadow of the future is already here, so we know that today’s highly-prized selective crammers, with a zeal for STEM and an ability to churn out volumes of A stars and Oxbridge places, have maybe 10-15 years to rake it in before their product becomes obsolete. We do need human computers in the interim, to programme AI for us, but once we’ve aced machine learning they will also become defunct. Meanwhile the crammers could save costs and boost performance by removing their STEM teachers in favour of so-called intelligent tutors, because AI-led learning already outperforms traditional learning in most settings.

Yet there is in traditional education a core curriculum which has not yet been improved by AI: moreover it may not be susceptible to AI in the way that STEM undoubtedly is. As I’ve argued elsewhere, I think the key to our humanity isn’t to be found in the clean lines of rationality that would delight any programmer, but in our junk code: the mistakes, the regrets, the dreams, the grief, the envy, the fear, and the joy. We learn these kinds of things very messily on the beaches and in the playgrounds of our childhoods; but at school we learn them particularly in the arts and the humanities: through the myths and stories about human waywardness; the mind-stretching disciplines of philosophy; and the creativity and exuberance of music, drama and art. In all of these, we learn the fundamentally frustrating qualitative nature of argument and criticism, where there are no clear-cut yes/no answers and it is nigh on impossible to score 100%. (By the way, it’s not that we can’t learn these things in STEM subjects, it’s just that they are not taught that way at elementary level.)

I’d wager we learn these junk code things with the particular help of the emotions, because of the role the amygdala plays in memory and in survival. And if that’s the case, it’s the reason we need to stop designing out bad stuff like not coming first or fluffing your lines in front of your peers. We might learn joy from a well-done sum, but we don’t learn shame or embarrassment or chagrin or schadenfreude. Kids need to wallow in the absolute limits of being human in order to feel these limits for themselves: this is a vital prerequisite for the rule of law as well as for human ingenuity and invention. Robots have to act within the bounds of the theoretically known because they are victims of their programming. If only to design better robots, we need to keep pushing at these bounds, in order to extend them: no paradigm shift was ever created without this very human recalcitrance.

So while we decide whether or not childhood is a state that we want to protect in the future, if we develop the know-how to avoid it, we should relish the very essence of childhood, by fighting back against the prevailing policy that prioritises STEM. Indeed we should reverse this trend while there is still time, and make the arts and the humanities both compulsory and subsidised in all formal education. And maybe we should risk teaching philosophy to those kids on the beach: versions of the trolley problem are a daily reality for them anyway, so they might be best placed to help us solve it.

A Year of Universal Basic Income?

By | Business, Theology | 4 Comments

Following the first cases of Covid-19 in the UK in January and February, lockdown was announced on the evening of Monday 23 March. Since then, citizens have been working from home, except for keyworkers, or have been laid off or furloughed under the government’s Job Retention Scheme. As at 12 May, 7.5 million jobs have been furloughed, and the British Chambers of Commerce reported that 71% of businesses surveyed by them had furloughed some staff. This means that the UK government are currently paying the wage bill for about a quarter of all UK employees. The scheme will be gradually phased out, with some part-time working and employer contributions, finally ending in October.

The economy has taken a huge hit. The decline in GDP in 2020 is likely to be the largest since WW2. The Office for Budget Responsibility and the Bank of England have published scenarios of a GDP fall of 14%, which compares rather badly to the 4.2% fall during the financial crisis in 2009. As an example, in April and into May, retail footfall was down by 75-80% compared with a year ago. Claims for the Universal Credit benefit increased by 2.5 million between 16 March and 5 May, and the Bank of England projects the unemployment rate rising to 9% in the second quarter of 2020, compared to 4% before the crisis. Many think this is optimistic, given that the US is already on 15%, having not deployed the policy option of a job retention scheme to slow down the rate of layoffs.

None of this is good news for the economy, but it is very bad news for household debt. Before the crisis, government data reported that household debt had peaked in Q2 2008 at 147% of household disposable income. It then declined to 124% by late 2015. But growth in household debt levels accelerated from early 2016, and the debt-to-income ratio had risen to 128% by mid-2017. In Q3 2019 it stood at 126.8%.

Already, the Financial Conduct Authority had established that 12% of UK adults (5.9 million people) have no savings and investments at all, and that a further 37% (19.1 million people) have savings or investments of less than £10,000; meaning that almost half of UK adults either have no savings at all, or savings of less than £10,000 in value. In particular, of those in the most vulnerable financial category, 3.7 million said that their household could only continue to cover living expenses for under a week if they lost their main source of income, without having to borrow money or to ask for help from friends or family.

A survey in March showed that 49% of those polled expected to have difficulty in paying bills, with 57% of those working saying their earnings were lower than in the previous week. In May, the Citizens Advice Bureau reported that an estimated 6 million people had already fallen behind on a household bill due to Coronavirus. 4 million people have fallen behind on rent, council tax or utility bills where they will have little protection from debt collection when temporary protections on enforcement expire. It is true that many utility companies, banks and landlords have offered payment holidays for lockdown, which is very welcome. But these arrangements merely defer and delay payments rather than cancel them, which means they will be mounting up, presenting many households either with the need to reschedule their payments to render them affordable, or with the prospect of being unable to meet the sudden increase in payments once the holiday ends.

These are not the feckless debts of a bling generation; these are cost-of-living debts. Those in the clutches of high-interest lenders were already using these simple, readily available and impersonal online loans to cover rainy-day expenses. In 2014, research by the Competition and Markets Authority suggested that the average payday loan was for around £260, lent over 30 days. The reasons for these loans fell into three categories. Some were used to finance living costs, but they were most often used for emergency expenses, including the repair or replacement of cars, boilers and white goods. The other major category was seasonal, particularly the need to buy Christmas presents, or new school uniform and shoes for a new term. Very few were for more frivolous expenditure, such as a last-minute holiday or a luxury item. This suggests that a cushion of just £300 in savings might enable most households to avoid these kinds of emergency loans.

Looking forward to a post-Covid, post-Brexit Britain, the future looks very bleak, particularly for the financially vulnerable. Even the less vulnerable may now be cautious about post-lockdown spending, just when the economy needs a boost. So my proposal is for the UK government to introduce a year of Universal Basic Income (UBI), which would provide both a cushion for the vulnerable and spending power for the economy. Those who do not need it should be invited to donate it to a charity of their choosing, because the UK's charities too are suffering unprecedented falls in income, with many set to close.

First mooted in Thomas More’s Utopia in 1516, UBI is defined by its lifelong champion Guy Standing as ‘a modest amount of money paid unconditionally to individuals on a regular basis; intended to be paid to all, regardless of age, gender, marital status, work status and work history’. Many countries around the world are experimenting with it. Since 1982, residents of Alaska have been paid an annual Permanent Fund Dividend, funded by a share of the profits from the state’s oil industry; last year the payment amounted to £1,300 per citizen. By comparison, the recent pilot in Finland paid 2,000 randomly selected unemployed people an income of around £500 a month, and in 2019 Guy Standing proposed that the UK rate should be £48 a week. A report by Reform Scotland suggested giving adults £100 per week and children £50 per week. Modelling conducted by the RSA for its report on a basic income for Scotland found that in Fife, a basic income of £2,400 a year would reduce relative household poverty by 8.5% and halve destitution, while a basic income of £4,800 a year would reduce relative household poverty by 33% and end destitution completely. Foodbanks could become a thing of the past.
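Since these schemes quote their rates over different periods (annual, monthly, weekly), it helps to put them on a common annual footing. A quick back-of-the-envelope sketch; the scheme labels and the conversion arithmetic are mine, and the underlying figures are as quoted above rather than recalculated or currency-adjusted:

```python
# Normalise the quoted UBI rates to rough annual per-person figures.
rates = {
    "Alaska Permanent Fund Dividend": 1300,           # paid annually
    "Finland pilot (~£500/month)": 500 * 12,          # £6,000 a year
    "Guy Standing proposal (£48/week)": 48 * 52,      # £2,496 a year
    "Reform Scotland (adults, £100/week)": 100 * 52,  # £5,200 a year
    "RSA Fife model, lower rate": 2400,
    "RSA Fife model, higher rate": 4800,
}

# Print the schemes from least to most generous per year.
for scheme, annual in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{scheme}: £{annual:,} per year")
```

On this rough footing, Alaska's dividend is the most modest, Standing's proposal sits close to the RSA's lower Fife rate, and the Finnish pilot was the most generous of the schemes quoted.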

Malcolm Torry argues that UBI could be funded through a 5% rise in income tax, while the New Economics Foundation proposes scrapping the tax-free personal allowance to finance it. Others have argued that it could largely be financed by reinventing the current benefits system. My more modest proposal is to introduce it for a year only, as part of the Covid-19 recovery plan, both for its short-term effects and in order to test it properly as an option for the future. It would therefore need to be carefully designed to maximise its potential to generate useful research data. It might also be used to model a separate ‘gap year’ product in the tradition of National Service, whereby every citizen in the future would have the right to a year off, to use for volunteering and citizenship, or perhaps to re-train or re-skill.

Opposition to UBI tends to centre on two key issues: one is moral hazard, the other its political irreversibility. The latter argues for a clearly defined, Covid-related one-year programme, permitting a political U-turn should the experiment not work. The former is less easily dismissed. Would citizens abuse the system, and would it drive dependency and idleness? Is there any evidence that these schemes actually work? As a Christian, I hear these concerns, but I hear the moral arguments more loudly. While the economic results of the recent UBI pilot in Finland were unremarkable, one finding stood out for me: ‘the basic income recipients were more satisfied with their lives and experienced less mental strain than the control group. They also had a more positive perception of their economic welfare.’ Given the mental ill-health effects of Covid-19 and lockdown, a year’s UBI would be a gracious way to honour the dignity of the citizen and say thank you for their efforts to protect each other during the pandemic.

with thanks to Andrew Phillips from the Jubilee Centre for help with research

Acceding to Acedia

By | Theology | No Comments

Sermon delivered at New College, Christ the King, 24 November 2019

If you’ve watched any adverts recently, you’ll have noticed that the advertising team at BT are suffering from flashbacks to their A-levels. In their ad, a schoolgirl walks across town, intoning Dickens: “It was the best of times, it was the worst of times…”

In a bizarre twist, its backing track is Stormzy’s Blinded by your Grace, in a Dickens/God mash-up that feels entirely appropriate for a New College sermon, because the liturgy for today’s festival of Christ the King is all about these contrasts. On the one hand, the reading from Jeremiah is about God sending a King to gather the lost sheep of Israel, and the Psalm and the anthem are about God defending his chosen people; while on the other hand the Gospel reading is about God failing to show up to rescue Jesus at the crucifixion, as the thieves on either side also die painfully on their crosses.

Thought For The Day – Bonfire Night – 5th Nov 2019

By | Theology, Thought For The Day | No Comments

Remember, remember the fifth of November…

Guy Fawkes was by no means the only conspirator in the gunpowder plot, but he has succeeded in being the man we most associate with it. His claim to the day was immortalised by the mask used in the graphic novel and film ‘V for Vendetta’, where the character V is masked as Guy Fawkes throughout. Since then, demonstrators against parliaments and powers the world over have worn the Guy Fawkes mask, because, as the film says at its end: ‘V was you…and me. He was all of us.’

Thought For The Day – Sharing The Harvest – 24th Sep 2019

By | Theology, Thought For The Day | No Comments

This weekend was Harvest Festival. When I was little, we’d set off to church for it, laden with bounty from the garden – marrows and broad beans; redcurrants and gooseberries – and we’d decorate every corner of the church with our harvest offerings. The altar would be surrounded by sheaves of corn and elaborately plaited bread, and we’d sing ‘We plough the fields and scatter’ like we’d all personally done so.

Thought For The Day – Forgotten Places – 20th Aug 2019

By | Theology, Thought For The Day | No Comments

In Scotland we’re famous for majestic unspoilt landscapes, and there’s some unusual proof that some of them are really quite empty of people: in the rankings of the least popular Ordnance Survey maps in the UK, Scotland claims all of the top ten. If you watch Landward on BBC Scotland, you may have seen a recent episode where the team took on the challenge of visiting the area covered by the map at number one: OS Explorer 440, covering an area in the Highlands, north of Inverness. As soon as the team arrives, they see an osprey, then a lizard, then a beautiful waterfall: not quite the featureless landscape one might assume from its being so decisively overlooked by the map-buying community.

Thought For The Day – Fair Play – 15th Aug 2019

By | Theology, Thought For The Day | No Comments

Strange things are happening in cathedrals down south. In Rochester, they’ve installed crazy golf; and in the nave at Norwich you can slide down an enormous helter skelter. No news of anything like that in Scotland, yet. But if you’re from St Andrews, the news in August has always been about the fun of the fair. The rides might not be inside the churches, but the whole town grinds to a halt every year, as one by one, the main streets are taken over by the Lammas Market. It’s said to be Europe’s oldest surviving medieval street fair, and it’s been running now for over 900 years.