“The fear of AI taking over the world is still in the realm of science fiction”

ChatGPT-4, the risks of mass surveillance, singularity, the future of workplaces… Charlie Catlett, a computer scientist researching the internet since the 1980s, analyses the exponential growth of Artificial Intelligence.

Joel Forster

WISLA (POLAND) · 21 JUNE 2023 · 15:25 CET

Charlie Catlett, during the interview with Evangelical Focus in Wisla (Poland), May 2023. / Photo: Joel Forster.

As citizens around the world follow the latest developments in Artificial Intelligence with some suspicion, questions are mounting.

Charlie Catlett, a computer scientist who was involved in building the early internet in the 1980s, is one of those experts who have answers about the technicalities behind tools like ChatGPT. But he is also a scientist who can respond to simple questions in a way that everyone can understand.

“We thought this level of intelligence was a few decades out, this took all of us by surprise”, he told Evangelical Focus in an interview. But “nothing that we create is going to be any match for that God”, he added.

Read the interview below (or watch the video on YouTube).

Question. You were involved in the origins of the internet in the 80s. What fascinates you most about its evolution until today?

Answer. I started working on the internet when it was really a small computer science activity in the mid 1980s. Then, when the World Wide Web (WWW) came in, and I was involved because one of my teams was responsible for the web servers that the Mosaic browser went to. We very quickly ramped up to hundreds of thousands of users and my team was grappling with how to grow the web servers to be able to handle that kind of load.

As I’ve been reflecting on what’s different today relative to our concept of the internet in the 1990s, I’d say that we were excited about the World Wide Web because it meant that anyone with an internet connection could produce and disseminate information. We felt that this was a good thing.

Then, the only way to find something on the web was to go to some site that did web search, and that site would show you things in an order related to their popularity. So, let’s say, if your personal website was linked from your company, from your church and from three of your friends, you’d have five links to your website. And if my website had only a link from my mother’s website but nothing else, then I would only have one link. So, the search engine would say, ‘let’s show your website first’, and mine might be on page six of the results. Information popularity was really merit-based, in that I had to like your site enough to modify my site to reference yours.
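To make that link-counting idea concrete, here is a minimal Python sketch of merit-based ranking. The sites and links are made up for illustration; this is a toy, not any real search engine’s algorithm.

```python
# Toy illustration of merit-based ranking: pages are ordered purely by how
# many other sites reference them.
from collections import Counter

# Hypothetical link graph: each entry is (linking_site, linked_site).
links = [
    ("company.example", "your-site.example"),
    ("church.example", "your-site.example"),
    ("friend1.example", "your-site.example"),
    ("friend2.example", "your-site.example"),
    ("friend3.example", "your-site.example"),
    ("mom.example", "my-site.example"),
]

inbound_counts = Counter(target for _, target in links)

# Show the most-referenced sites first, as an early search engine might.
for site, count in inbound_counts.most_common():
    print(f"{site}: {count} inbound link(s)")
```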

Now, if we fast forward to when social media came out, the way that information got spread was no longer based on this meritocracy of the early system. It was now based on the Artificial Intelligence algorithms that social networks use to decide what to display on your news feed. What social networks learned is that the way to keep you interacting with their site (which is how they make their money, through advertisements) is to pull on your emotions. When we are angry or outraged or scared or sad, we’re more likely to read more and stay on the website longer. The ‘recommender’ algorithms changed from merit-based dissemination to non-linear amplification. If a lot of people click on a certain story because they love the headline (because it’s outrageous, for example), then suddenly a lot more people see that story and it can snowball very quickly, what they call ‘going viral’. That’s an effect that we wouldn’t have even thought about in the 1990s. Now many of us are looking at those algorithms and see that they don’t embody any sort of values. I point to those as one of the main reasons that we see society increasingly polarized around social issues.
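By contrast, an engagement-driven feed ranks items by how likely they are to keep you clicking. The sketch below is deliberately simplified and uses invented scores; real recommender systems are far more complex, but the ordering effect is the point.

```python
# Simplified sketch (not any platform's actual algorithm): rank a feed by
# predicted engagement instead of merit, so emotionally charged items rise.
stories = [
    {"title": "Measured policy analysis", "clicks": 40, "emotional_score": 0.2},
    {"title": "Outrageous headline!!!",   "clicks": 55, "emotional_score": 0.9},
]

def predicted_engagement(story):
    # Hypothetical scoring: past clicks weighted by how emotionally charged
    # the item is.
    return story["clicks"] * (1 + story["emotional_score"])

feed = sorted(stories, key=predicted_engagement, reverse=True)

for story in feed:
    print(story["title"], round(predicted_engagement(story), 1))
```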

 

Q. How has Artificial Intelligence (AI) reinforced bias?

A. If I tell you how AI works from a computer science point of view, I’ll say things like, ‘it’s just mathematics’, ‘it’s just probability and statistics’, ‘it’s got no values’, and so you would be right to conclude, just from that, that the AI systems I’m interacting with are unbiased. But then comes the training of the AI models. You take millions of pages of digital text and push them through the language model. Digesting all of that text, the model will pick out patterns and statistical regularities. It will begin to categorize information, associating words and concepts.

“AIs have bias but not opinions. And they’re not making decisions”

Let’s imagine that you train the AI model on text in which every time you see a nurse it is always a woman. The word nurse and the concept of female will be closer together, and the concept of male might be closer to the word doctor, and so the model will inherit the biases of the training data. Think about Spanish or Italian, languages which have a masculine and a feminine version of the word nurse. If you want to translate something into Spanish using that trained model, it’s always going to use the feminine version, which is a bias. It is still just mathematics, because the model doesn’t know what the concept of female or male or nurse is, but the data that’s used to train the model builds in that bias.
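A toy sketch of how such an association can be measured: words are stored as vectors, and cosine similarity tells you which concepts the model has placed close together. These two-dimensional vectors are invented for the example; real embeddings are learned from the training text, which is exactly where the bias enters.

```python
# Toy word vectors illustrating a learned bias: 'nurse' sits closer to
# 'female' than to 'male'. Real embeddings have hundreds of dimensions.
import math

embeddings = {
    "nurse":  (0.9, 0.1),
    "doctor": (0.2, 0.8),
    "female": (0.85, 0.15),
    "male":   (0.15, 0.85),
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(embeddings["nurse"], embeddings["female"]))  # high
print(cosine_similarity(embeddings["nurse"], embeddings["male"]))    # lower
```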

 

Q. How do you see this bias in ChatGPT and similar AI models?

A. OpenAI has ChatGPT (the model behind it is GPT-3.5 or GPT-4). Then, Google has Bard, Anthropic has Claude, and Microsoft puts GPT-4 behind Bing. They are trained separately on different large quantities of digital text. And because the largest source of digital text is the internet, we as a consequence have models that are primarily trained on data that comes from the Western world. The places in the world that haven’t had the internet for 30 years are still catching up with their content, so there’s less information that reflects the culture or values of someone from Ghana than of somebody from the Netherlands or from California.

“The fear of AI taking over the world is still in the realm of science fiction”

Charlie Catlett, during the interview with Joel Forster. 
 

Q. Are more people talking about the need to train these models to have better outcomes?

A. Yes, I’m by no means the only person that’s talking about this. It’s helpful to think about some of the challenges with AI in three buckets. There are challenges in the underlying technology of AI, for example the concept of ‘hallucination’, where for some reason these large language models will invent information that sounds very plausible but is not real, even references to academic papers that don’t exist.

A second bucket is how we build functional AI systems out of that technology, where issues include openness about the training data and models that are used. By analogy, if the first bucket is how we extract iron from ore, then this second bucket is about what implements we forge from that iron.

Then, in the third bucket, there’s the question of how we will use those implements: how we take a tool like a large language model and deploy it in society or in systems like social networks or search engines. We’ve already seen the danger of creating echo chambers, where people only see the things that they tend to agree with and don’t see other points of view. The fears that we are encountering from people about AI, especially the near-term ones, are in that third bucket: things like misinformation or manipulation.

 

Q. Yuval Noah Harari recently said the evolution of AI could put humanity as we know it at risk. Others, like the Future of Life Institute, call for urgent regulation to avoid catastrophic outcomes. Are you more optimistic?

A. I think right now we need to focus on the near-term while keeping the long-term in mind. Harari and his co-authors, in the New York Times piece of 24 March that you’re talking about, have put their finger on the power of these models. Why? They talk about language and words as the operating system of people and our society, and they’re absolutely right.

“As humans, we are hardwired to assume that if something is speaking to us with language, they must have intention and agency and they must be in some sense conscious”

We, as Christians, understand that because language is the way that we were designed to communicate with one another and to communicate with God. So, we are hardwired to assume that if something is speaking to us with language, they must have intention and agency and they must be in some sense conscious. We’re susceptible. This is why the Bible talks about the danger of gossip.

I think we need to understand the dangers in a little more detail. An analogy I would use is that we can’t buy an electrical appliance that hasn’t been certified by a third party to be safe. That’s why, when we go to the store and then plug something in at home, we can be confident that we’re not going to get electrocuted, unless we do something dangerous ourselves, like plugging it in while standing in the bathtub. But that regulation is based on an understanding of the ways electricity can go wrong, and until we have a better understanding of the ways AI can go wrong, it’s premature to just have a knee-jerk reaction.

 

Q. Is the fear of AI irrational? Is a reality like Skynet, portrayed in the film ‘The Terminator’ (1984), a potential scenario?

A. I think the first and foremost reason why we shouldn’t be fearful is that we worship a God who’s in control of the universe, and nothing that we create is going to be any match for that God. It's not going to change his Sovereign will.

Artificial Intelligence is just a tool that we’ve developed. It has the illusion of personhood, but it’s just a tool. We need to not be afraid of technology. When somebody is afraid of a technology like this, it’s really helpful to think specifically about what it is you think might happen and how likely it is to happen. You mentioned Skynet; well, right now in certain very large countries the government is already using AI to monitor where every individual is in any city and to use that to control the movements of the population. So, that ‘Skynet’ is already here, and it’s because AI is applied by an authority with certain values.

The danger is that we now have technology through which a strongman like Mussolini or Hitler (or some candidates that we see today) could have much more power in their hands than their predecessors had even 10 years ago. AI can be an enabler of a strongman or authoritarian government. People are afraid of that power of control and surveillance, and I think it’s legitimate for us to be mindful of that.

I think there’s another aspect that is more ‘science fictionish’, so to speak, and that is that the AI would start to make decisions to take control of our society. Right now, even with the most sophisticated models that we have, we don’t have any sort of AI that has those kinds of decision-making abilities. We don’t have AIs that have opinions; they have bias but not opinions. And they’re not making decisions.

What Christians and others should find at least somewhat reassuring is that those building the technology are also mindful of the dangers. For every paper that I’ve read that says, ‘hey, we did this test and it looks like there might be some general intelligence in there’, there’s just as much focus on how things could go wrong.

 

Q. Give us some examples of AI going wrong.

A. If I have a laboratory controlled by GPT-4 and I say, ‘please make the leukaemia drug: go out and find the materials, order them, and then fabricate the drug with the robotic laboratory’, that is great. But it can do that just as well if I tell it to make a certain biohazard or poisonous material.

Another example of something that people who are building AI systems are mindful of is the concept of ‘goal alignment’. What that means is that when I ask the AI system to do something, I have to give it enough context. Does it have enough information to pursue and reach that goal without doing something dangerous or unexpected? Imagine that we had an AI system with the goal of getting these 10 people from this side of the road to that side of the road, making sure none of them gets hit by a vehicle. A valid approach to that would be to catapult them across the road, which would hurt some of them, maybe badly. The goal was carried out, but there was an unintended route to the goal, because the GPT-4 model doesn’t really understand real life; it’s just doing what we told it to do. This notion of ‘goal alignment’ is important, and there are a dozen other issues like it.
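A contrived sketch of that misalignment, with invented plans and scores: an objective that only counts people who get across, with no notion of harm, happily prefers the catapult.

```python
# Toy 'goal alignment' example: the misaligned objective ignores injuries,
# so the catapult plan scores highest; adding the missing context fixes it.
plans = {
    "escort across the crosswalk": {"crossed": 10, "injured": 0, "time_s": 120},
    "catapult everyone":           {"crossed": 10, "injured": 7, "time_s": 5},
}

def misaligned_score(outcome):
    # Only rewards the stated goal: get people across, quickly.
    return outcome["crossed"] * 100 - outcome["time_s"]

def better_aligned_score(outcome):
    # Adds the context a human took for granted: injuries are unacceptable.
    return misaligned_score(outcome) - outcome["injured"] * 10_000

for name, outcome in plans.items():
    print(name, misaligned_score(outcome), better_aligned_score(outcome))
```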

But my point is that those who are building AI systems are mindful of these dangers, and I’m not just saying ‘trust the techies’, but at least trust that they’re trying to make these things safe. And when we feel the urge to be afraid, we need to be very specific with ourselves about what it is we think might happen, because many of those things, like AI taking over the world, are still in the realm of science fiction.

 

Q. Young people starting their professional career at this point of history might think, ‘in 5 years, my work might be done by a machine!’ What would you tell them?

A. Jobs and careers come and go as every technology comes into place. In the 1980s, in Pittsburgh (Pennsylvania), there were these guys in uniforms at crossings, and their responsibility was to press the buttons to give you the walk signal. Well, that job doesn’t exist anymore.

The World Wide Web came along and some jobs went away: a lot of travel agents were out of work because the web could do it. A lot of insurance or real estate agents had to change their approach. So, the same is going to happen with AI. Some jobs are going to go away in a few years, but at least as many jobs are going to be created.

I’ll give you just one example: the quality of output that you get from a large language model is extremely dependent on the prompt that you give it. You can give very sophisticated prompts that not only get the model to give you information but get it to give you that information in a particular style or from a particular worldview. I can make a prompt that says something like: ‘You are a 30-year-old Baptist Sunday school teacher; please explain to me as if I’m 5 years old why it’s bad to hit my neighbour’. And the model will give an answer that is going to sound like a 30-year-old Sunday school teacher; it’s not going to sound like a thug on the street. So, there’s a new kind of occupation that has emerged, the prompt engineer: somebody who knows how to prompt these models to get the kind of output that you’re looking for. Another job that’s going to explode is curating and managing the data that we use to train those models especially well. I don’t see that AI is different in its effects on jobs than previous technologies were when they first appeared.
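As a rough illustration of what a prompt engineer does, here is a minimal sketch assuming the OpenAI Python client’s chat-completions interface; the model name and client setup are illustrative, not a recommendation.

```python
# Minimal sketch of a persona-setting prompt, assuming the OpenAI Python
# client (openai>=1.0). Model name and setup are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

messages = [
    # The system message sets the persona and worldview of the answer.
    {"role": "system",
     "content": "You are a 30-year-old Baptist Sunday school teacher."},
    # The user message asks the question, with the audience made explicit.
    {"role": "user",
     "content": "Explain to me, as if I were 5 years old, why it is bad to hit my neighbour."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```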

 

Q. AI is not human but it looks as if it takes human decisions…

A. There are no humans involved in the user’s interactions with AI. When you ask ChatGPT, ‘answer this question like a ballet dancer from Chicago’, there’s no human in there. The large language model has picked up patterns in the training data about the kinds of things that a ballet dancer might write or say. It’s just that the model, when it’s analyzing all this data, is not only picking up the semantic information about words but also the patterns that represent styles of writing and styles of answering.

“We worship a God who’s in control of the universe, and nothing that we create is going to be any match for that God”

There are some questions that you will ask these large language models where there’s no one right answer; there are multiple points of view, and oftentimes you will get those multiple points of view. It will give you an answer like, ‘well, some people feel this way and there’s another school of thought that says this’, and if you push further and say, ‘what do you think is best?’, probably you’re going to get an answer like, ‘I’m just a large language model, I don’t know’.

Those models are very expensive to train: GPT-4 cost somewhere in excess of 100 million dollars. So, they are not going to keep retraining it from scratch, but they’ve introduced another method, called the safety layer, to try to smooth out and eliminate some of the biases and toxic speech that people experienced when ChatGPT first came out. There was nothing filtering that large language model from answering questions like, ‘my neighbour is of this ethnic origin, give me a really good insult to make him feel bad’, and it would give you that insult. That’s something we would all agree is toxic speech, but they trained another model to interpret the results and to screen out these toxic or extremely biased things. That’s another good sign that these companies recognize that danger in particular and have really invested in correcting it. ChatGPT is a completely different experience today than it was back in November 2022, when it was saying pretty outrageous things.
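A simplified sketch of that safety-layer idea: a second model screens the base model’s draft before the user sees it. Both functions below are hypothetical stand-ins, not OpenAI’s actual pipeline.

```python
# Hypothetical safety layer: screen the base model's reply with a separately
# trained classifier before returning it to the user.
def base_model_reply(prompt: str) -> str:
    # Stand-in for the raw large language model.
    return "...unfiltered draft answer..."

def toxicity_score(text: str) -> float:
    # Stand-in for a classifier returning 0.0 (benign) to 1.0 (clearly toxic).
    return 0.05

def safe_chat(prompt: str, threshold: float = 0.5) -> str:
    draft = base_model_reply(prompt)
    if toxicity_score(draft) >= threshold:
        return "I can't help with that request."
    return draft

print(safe_chat("Tell me about my neighbour's culture."))
```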

 

Q. Technology is evolving very fast, it feels like someone could go to sleep and wake up to a new world in the morning.

A. New things are happening all the time in the field that I’ve been in for 40 years: the internet and high-performance computing. We’re kind of used to exponential change. Computers getting twice as fast every 18 months is unlike anything that we experience day to day: my car is not twice as fast as it was 18 months ago, and it doesn’t cost one millionth of what a car would have cost in 2000. But for the same money we can buy computers today that are 10 million times faster than the ones we had in the early 1980s. We’ve kind of lived on the exponential.
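The arithmetic behind ‘living on the exponential’ is easy to check: doubling every 18 months compounds to a factor of roughly ten million over about 35 years (the exact factor depends on the years you pick).

```python
# Rough compounding arithmetic: doublings every 18 months over ~35 years.
months = 35 * 12
doublings = months / 18
speedup = 2 ** doublings
print(f"{doublings:.1f} doublings ≈ {speedup:,.0f}× faster")
```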

GPT-4, versus the GPT-3 models of last November, is hands down qualitatively and quantitatively better. It’s just very, very surprising. Geoffrey Hinton, one of the A.M. Turing Award winners in 2018 for his work in deep learning, said he thought that this level of intelligence was a few decades out; this took him by surprise. It took all of us by surprise.

 

Q. How does AI used in social media affect vulnerable people?

A. I read an article not long ago by Eric Schmidt in The Atlantic, on 5 May, where he really zeroed in on something we’re all concerned about: the well-being of our young people and teens, and the prevalence of loneliness and suicidal thoughts. Research is now starting to show us the impact of using social media on self-esteem and satisfaction with life. One research group studied survey data from 84,000 people and concluded that, between the ages of roughly 11 and 16, broadly speaking, the more young people use social media, the less satisfaction in life they report, and the less satisfaction in life, the more they use social media.

The U.S. Surgeon General, the head medical advisor to the President, has issued a warning about this. They don’t do that very often. There is something happening here that we still don’t quite understand. The New York Times also recently released a 15-minute video examining what we know so far. It’s a long answer to your question; I don’t have an easy one.

 

Q. Should we expect more radical changes in technology in the years to come?

A. I think there’s a sense of unease because we don’t know what’s going to come out next. I think that the cannonball that went into the pool and splashed the water out, ChatGPT… I don’t think there’s another one of those coming very soon.

Again, nobody can predict the future, but the changes we’ve seen in the last six months are not the new normal, I think. We will see more remarkable things come out, but I don’t think that we’re going to see this level of disruption as a steady state from here on out.

 

Q. Eternal life seems to be part of the discussions around AI, with some philosophers talking about the hope of “singularity”. What does the Christian worldview have to say about it?

A. I’ve been thinking about the singularity and long-termist worldviews recently, in the context of groups like the Future of Life Institute, which was behind the call to pause AI experimentation for six months. They really have a different worldview, in some specific ways, from ours as Christians. We are loosely aligned with the goal that we shouldn’t destroy the Earth and therefore humankind, but they would prioritize the survival of humankind over the wellbeing of subsets of humankind today.

“The singularity, in the notion of uploading ourselves, is not my desire for eternity. I put more weight on Jesus’ words”

The goal of long-termism is the survival of the human race, and on that view things like climate change and poverty or malnutrition are inconveniences to humanity but not existential risks or threats. They would prioritize them lower, and we as Christians obviously would have a different view, because we believe that every person is made in the image of God and therefore has infinite value.

So, the singularity has built into it this notion that we can begin by augmenting ourselves with technology: augmenting our biological body with technology and then, ultimately, being able to take our consciousness and upload it into technology, so that we can survive beyond our biological body.

This is different from our hope, which is based on the words of Jesus Christ, who rose from the dead. So, I put more weight on his words, when he says, ‘I go and prepare a place for you’, when he talks about us having eternal life, than on the hopes of future technology. Will we be able to augment ourselves with AI in our lifetime? If we had enough money, would I think that’s a good idea? As a Christian, I’m troubled, because it implies that what God created is somehow not enough.

Having said that, we’ve augmented humanity with air travel and other things, so it’s not really a clear-cut thing. It’s something you have to come to a conviction about yourself. But the singularity and the notion of uploading ourselves, that’s not my vision or my desire for my eternity. If I were a person that didn’t have a hope of eternity and believed that this was all a materialistic existence, I would probably try to figure out a way to upload myself too. But I’d want to know what I was uploading myself into before I did that!

 

