AI and the future of mission
Are we seeing the dawn of thinking machines? And what does this mean for mission, for Christianity, and the world?
25 NOVEMBER 2024 · 20:35 CET
Since the invention of the first computers, people have dreamed of the idea of machines that can create artwork, write music, reason and explain themselves in natural language: in short, thinking machines.
In the past few years, we have seen a rapid development in the field of artificial intelligence. What has changed? Are we seeing the dawn of thinking machines? And what does this mean for mission, for Christianity, and the world?
‘Artificial Intelligence’ or ‘machine learning’?
In 1950, Alan Turing wrote an article on the question ‘Can machines think?’.1 This article laid the foundations for artificial intelligence (AI) and proposed the ‘Turing Test’ as a way to tell whether a machine was ‘really’ thinking.
He proposed a number of ways that we might make a thinking machine. The first approach was attempted between the 1960s and the 1980s: if we provided a computer with huge lists of ‘rules’ about how the world operates, it would be able to reason for itself.
One project, Cyc,2 gathered hundreds of thousands of relationships (eg ‘All apples are fruits’) and was able to answer simple questions about its knowledge.
In the 1980s, after the rule-based approach reached a dead end, research switched to Turing’s second strategy, machine learning.
The idea of machine learning is that by building programs with a similar structure to the human brain (‘neural networks’), computers would learn about the world the way we do, through observation and exploration.
But while human brains have trillions of connections between their neurons, artificial neural networks of the 1980s and 1990s could only sustain hundreds or thousands of connections.
This was not due to limitations of memory or hardware, but because of a mathematical problem: as the number of ‘layers’ in the network increased, the training signal flowing through the network became weaker (the ‘vanishing gradient’ problem).
A variety of solutions to this problem became available in the early 2010s, allowing ‘deep’ neural networks of many hundreds of layers, and it is this breakthrough which has directly led to the artificial intelligence revolution we are seeing today.3
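To get a feel for the problem, here is a minimal numerical sketch (an illustration of the effect, not a model of any real network): the training signal is scaled by some factor at every layer, and if that factor is below 1 the signal shrinks exponentially with depth.

```python
# Toy illustration of the 'vanishing gradient' problem: the training
# signal is scaled at every layer, and with a per-layer factor below 1
# it dies away exponentially as the network gets deeper.
per_layer_factor = 0.5  # hypothetical scaling applied at each layer

for depth in (5, 20, 100):
    signal = per_layer_factor ** depth
    print(f"{depth:3d} layers -> surviving signal: {signal:.2e}")

# Output:
#   5 layers -> surviving signal: 3.12e-02  (some signal survives)
#  20 layers -> surviving signal: 9.54e-07  (almost nothing)
# 100 layers -> surviving signal: 7.89e-31  (early layers never learn)
```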
The reason the history of neural networks is important is that it helps us remain grounded about what ‘artificial intelligence’ is. It is simply a mathematical structure, a model of probabilities.
Indeed, the term ‘machine learning’ is a more accurate description than ‘artificial intelligence’: it highlights the mathematical foundations, whereas ‘artificial intelligence’ may mislead us into believing that ‘thought’ and ‘intelligence’ are somehow being applied.
In reality, Large Language Models such as ChatGPT and Gemini convert their training data into a sequence of ‘tokens’ and store how likely it is that those tokens occur in different sequences.
When we ask ChatGPT a question, the computer does not consider the issue and ‘think’ through a response—it simply returns the most probable series of tokens which would come next.
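As a toy illustration of what ‘returning the most probable series of tokens’ means, here is a minimal sketch in Python (a bigram frequency table, vastly simpler than any real Large Language Model, but the same in spirit):

```python
from collections import Counter, defaultdict

# Toy 'next token' model: record how often each word follows another
# in the training text, then always emit the most frequent successor.
training_text = 'the cat sat on the mat and the cat slept'.split()

follower_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follower_counts[current][nxt] += 1

def most_probable_next(token):
    # No reasoning happens here: we only look up stored frequencies.
    return follower_counts[token].most_common(1)[0][0]

print(most_probable_next('the'))  # -> 'cat' (followed 'the' twice, vs 'mat' once)
```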
Similarly, generative image models, the technology behind ‘AI art’ tools such as DALL-E and Midjourney, return the most likely set of pixels in response to the prompt we provide.
Applying AI to the mission of the Church
Even so, the results of these machine learning systems are impressive and potentially very useful, and are already being used to accelerate global mission.
For example, SIL’s AI team have developed language translation models and text-to-speech models for over 300 minority languages,4 and are pioneering machine learning tools to facilitate their mission of Bible translation, quality assurance, and community checking.5
AI offers the potential to automate translation in domains like Deaf sign languages, where production is particularly expensive and time-consuming, although sign language video generation is still in its early stages.
My own church uses Microsoft Translator during services so that members from different countries can access the service in their own language and contribute to church life even with limited English; machine translation is already making us richer and more equitable as a community.
The Episcopal Church has launched a ‘fine-tuned’ ChatGPT instance to answer questions about faith and doctrine,7 and AI startups are already offering a range of products aimed at churches and church leaders—sermon transcription and ‘remixing’,8 chatbots for discipleship and church engagement,9 and many more.
It is impossible to tell how the use of AI will develop, and how it will affect the mission of the church. A ‘maximal’ view of AI’s potential sees it transforming economies and automating many current careers out of existence, freeing up time and resources for ministry.
Even in a moderately optimistic view, AI will provide huge advances in ministry personnel development and training through computer-assisted learning programs; it will transform medical mission through machine-learning-assisted diagnosis and telemedicine; we will be able to achieve the same, but better, with computer assistance.
At the same time, I believe the increase in automation and the ‘human-like but non-human’ communication of chatbots will deepen an existing epidemic of loneliness and provoke a yearning for real human connection and community which must remain at the heart of mission.
The hidden dangers of the AI revolution
However, although I am a full-time computer programmer and engage with AI in my work, I personally have a pessimistic view of AI, and I believe that there are a number of reasons why Christians should be cautious before wholeheartedly accepting the prevailing artificial intelligence optimism.
Christians who are concerned about truth may be reluctant about artificial intelligence because, as machine learning models are purely stores of probabilities, they are explicitly designed to create plausible answers, not accurate ones.
As Timnit Gebru and others argued (in an article that led to her termination from the Google Ethical Artificial Intelligence Team), Large Language Models (LLMs) act as ‘stochastic parrots’.10
They have no concept of true or false—let alone any moral concepts of right or wrong—but simply answer the question: ‘What does a reasonable answer to this question sound like?’
For example, I asked ChatGPT to provide a bibliography of my own articles about the theology of shame. Out of the impressive-sounding list of citations it produced, not a single one was a real paper.
Further, the fact that LLMs are trained on the contents of the Internet, and that not everything on the Internet is true, leads to AI assistants recommending eating rocks and cooking spaghetti with gasoline.11
At the same time, training data often carries an implicit bias based on the dominant worldview which produced it, and so LLMs perpetuate the racism and sexism of their context. For example, Hungarian has a gender-neutral pronoun ‘ő’ which must be translated to ‘he’ or ‘she’ in English.
Machine translation tools will use the probabilities embedded in the training data to choose a pronoun for ambiguous cases, preferring ‘he is researching’ to ‘she is researching’, and ‘she is raising a child’ to ‘he is raising a child’, and so on.12
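A minimal sketch of how this happens (the counts below are hypothetical, and real translation systems are far more sophisticated, but the principle is the same): for an ambiguous source pronoun, a probabilistic model simply emits whichever gendered form was more frequent in its training data.

```python
# Toy illustration with hypothetical frequencies: translating the
# gender-neutral Hungarian pronoun 'ő' by picking whichever English
# pronoun occurred more often with the activity in the training data.
corpus_counts = {
    ('researching', 'he'): 900, ('researching', 'she'): 100,
    ('raising a child', 'he'): 50, ('raising a child', 'she'): 950,
}

def translate_pronoun(activity):
    # Choose the gendered pronoun with the higher training-data count:
    # the bias in the data becomes the behaviour of the system.
    he = corpus_counts[(activity, 'he')]
    she = corpus_counts[(activity, 'she')]
    return 'he' if he >= she else 'she'

print(translate_pronoun('researching'))      # -> 'he'
print(translate_pronoun('raising a child'))  # -> 'she'
```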
Even before we get into the problems of deepfakes and AI-generated misinformation, it would be an understatement to say that, in domains such as discipling new believers, accuracy and a correct worldview are more important than plausible-sounding answers.
Reconfiguring LLMs for accuracy is a pressing concern within the industry, but it is a problem which the leading players have so far been unable to solve effectively.
Christians who are concerned about ethics may be reluctant to rely on artificial intelligence because of the way that the training data is obtained.
Machine learning models require vast bodies of training data, typically scraped from the internet, and there are significant concerns as to whether this collection is ethical. Consider, for example, using an AI model to generate artwork for a book cover. The image generation model used will have been trained on many millions of artworks, often with little regard to their copyright or license status,14 and the AI-generated art will be used instead of commissioning a human artist.
In other words, AI is making artists redundant through the uncredited, uncompensated, and arguably unethical appropriation of their own artwork.
A more troubling aspect of LLM training is the hidden extent to which artificial intelligence relies on human workers to label and moderate content.
A letter from Kenyan AI workers15 highlights the low pay, the unethical and exploitative labour practices, and the exposure to ‘murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day,’ which underpin the AI revolution.
Perhaps it is worth remembering the Luddite movement, which resisted the Industrial Revolution not because they were scared of technological advance but because they considered its human cost too much to bear.16
Finally, Christians who are concerned about creation care may be reluctant about artificial intelligence because of the environmental impact of training and using deep neural networks.
Training GPT-3, a single model which would be considered small by today’s standards, required 1,287 MWh of electricity, enough to power 1,500 homes for a month, and created 500 tons of CO2 emissions.17
As well as electricity, the data centres used by large models require huge amounts of water to provide evaporative cooling; it is estimated that GPT-3 needed 700,000 litres of water to train, and requires half a litre of water to power a short conversation with a user.
As climate change leads to increasing drought and water scarcity, we must ask ourselves if we can justify contributing to the estimated 5 billion cubic metres of water (half of the UK’s annual consumption) which AI tools will use by the year 2027.18
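A quick back-of-the-envelope check, using only the figures quoted above, gives a sense of scale:

```python
# Back-of-the-envelope arithmetic using the figures cited above.
training_energy_mwh = 1287        # GPT-3 training energy (note 17)
homes = 1500
kwh_per_home_per_month = training_energy_mwh * 1000 / homes
print(f"{kwh_per_home_per_month:.0f} kWh per home per month")  # ~858 kWh,
# close to an average US household's monthly usage, consistent with the claim

training_water_litres = 700_000   # water used to train GPT-3 (note 18)
litres_per_conversation = 0.5
conversations = training_water_litres / litres_per_conversation
print(f"{conversations:,.0f} conversations")  # 1,400,000: once a model has
# enough users, inference quickly dwarfs the one-off training cost
```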
Towards responsible AI
The responsible use of AI represents an opportunity for those of us with technological skills and gifts to expand into new areas of ministry, but we must remain clear-eyed. I believe it is currently too early to tell what kind of impact AI will have.
On the one hand, the successful application of AI has the potential to transform almost every area of ministry and personal life; on the other, there are serious signs of a bubble in which the hype surrounding AI has not translated into the kinds of applications that industry has promised.19
The use of AI must promote human flourishing, be ‘just, kind and humble’, maintain the privacy and security of data, and ensure overall human accountability.20
In the coming years, Christians will need to critically evaluate the questions I have posed above as they navigate the use of AI, balancing the opportunities it affords with the very real dangers and costs that it represents: in other words, to ‘test everything; hold on to what is good; reject any kind of evil’ (1 Thessalonians 5:21-22).
Simon Cozens is a software engineer and a mission partner with WEC International. After spending ten years as a church planter in Japan and a mission trainer in Australia, he now lives in the UK with his wife and two children.
He is the author of Looking Shame In The Eye, Re:Thinking Mission, and several tutorials on computer programming.
This article originally appeared in the July 2024 issue of the Lausanne Global Analysis and is published here with permission. To receive this free bimonthly publication from the Lausanne Movement, subscribe online at www.lausanne.org/analysis.
Endnotes
1. A. Turing, ‘Computing machinery and intelligence,’ Mind, 59(236), 1950: 435-60.
2. D. Lenat et al., ‘CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks,’ AI Magazine, 6(4), 1986: 65-85.
3. See ‘The Vanishing/Exploding Gradients Problem’ in A. Géron, Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow (Sebastopol, CA: O’Reilly Media, 2022), ch 11.
4. C. Leong et al., ‘Bloom library: Multimodal datasets in 300+ languages for a variety of downstream tasks,’ arXiv preprint arXiv:2210.14712, 2022; see also https://www.ai.sil.org/projects/acts2.
5. ‘SIL AI & NLP Projects,’ SIL International, accessed 19 August 2024, https://www.ai.sil.org/projects.
6. ‘The Bible according to OpenAI,’ OneBread, accessed 19 August 2024.
7. Michael Gryboski, ‘Episcopal Church launches AI chatbot ‘AskCathy’,’ 12 August 2024.
8. ‘Helping busy pastors turn sermons into content,’ Pulpit AI, accessed 19 August 2024.
9. ‘What is truth? Use AI to spread the Gospel,’ Biblebots, accessed 19 August 2024.
10. E. M. Bender et al., ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,’ in FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 1 March 2021: 610-23.
11. Toby Walsh, ‘Eat a rock a day, put glue on your pizza: how Google’s AI is losing touch with reality,’ The Conversation, 27 May 2024, https://theconversation.com/eat-a-rock-a-day-put-glue-on-your-pizza-how-googles-ai-is-losing-touch-with-reality-230953.
12. E. Vanmassenhove, ‘Gender Bias in Machine Translation and the Era of Large Language Models,’ in Gendered Technology in Translation and Interpreting: Centering Rights in the Development of Language Technology, ch 9, 18 January 2024, https://arxiv.org/html/2401.10016v1.
13. ‘Introducing Meta Llama 3: The most capable openly available LLM to date,’ Meta, 18 April 2024, https://ai.meta.com/blog/meta-llama-3/.
14. G. Appel et al., ‘Generative AI Has an Intellectual Property Problem,’ Harvard Business Review, 7 April 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
15. Caroline Haskins, ‘The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’,’ WIRED, 22 May 2024, https://www.wired.com/story/low-paid-humans-ai-biden-modern-day-slavery/.
16. Richard Conniff, ‘What the Luddites Really Fought Against,’ Smithsonian, March 2011, https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/.
17. Nestor Maslej et al., ‘The AI Index 2023 Annual Report,’ AI Index Steering Committee, Institute for Human-Centered AI, Stanford University (Stanford, CA: April 2023), 120-23.
18. P. Li, J. Yang, M. A. Islam, and S. Ren, ‘Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,’ arXiv preprint, 29 October 2023.
19. Chris Taylor, ‘The AI bubble has burst. Here’s how we know,’ Mashable, 6 August 2024.
20. ‘AI Ethics Statement,’ SIL, accessed 19 August 2024.