“The generation that will think of the best applications of AI is in high school today, maybe even in primary school”
Charlie Catlett, an experienced computer scientist who explains Artificial Intelligence to non-experts, speaks to Evangelical Focus. “Chatbots are non-persons, and you shouldn’t be having a personal relationship with a non-person”.
WISLA (POLAND) · 12 JULY 2024 · 12:10 CET
Charlie Catlett, a computer scientist who was deeply involved in the development of the early internet in the 1980s, has been closely following the evolution (and certain “revolutions”) of Artificial Intelligence (AI) in recent years.
Evangelical Focus spoke to him during a conference in Poland, where he addressed Christian leaders from across Europe on recent technological changes and their impact on society.
Charlie Catlett, during the interview with Evangelical Focus, in Poland. / Photo: Joel Forster.
Question. Charlie, we had you with us only one year ago to discuss the arrival of ChatGPT. Now we have new questions, and everything is moving very fast. Evangelical Focus readers might remember that you said the Artificial Intelligence models of 2022 were like “a cannonball splashing the water out of the place”. How do you see the evolution of these technologies in recent months?
Answer. I think we’ve seen very impressive progress in the capabilities of models like ChatGPT, and other models have also come forward and made dramatic improvements. Google has Gemini, Anthropic has Claude, Meta has released its open Llama series, there is Mistral in France, and others.
I wouldn’t say we’ve seen a big surprise like we did the first time with ChatGPT, but I would say we’re still impressed with the speed at which this is moving forward in several areas, one of which is multimodal, or multimedia, capability.
For example, when Google released Gemini 1.5, we saw some very impressive capabilities for analysing video. You can include an entire movie in your prompt and ask: 'In what scene of the movie does one of the characters pull a piece of paper out of his pocket? What’s written on it?' And the model is able to answer those kinds of questions about the visual scenes of a movie.
And then, more recently, OpenAI announced a capability called Sora, which takes a text prompt and creates an animation (photorealistic or cartoonish). That’s really quite remarkable. So, we’re still seeing new capabilities coming in, and I think we’re on a rapid path of growth, but I haven’t seen anything yet that matches the surprise of ChatGPT in November of 2022.
Question. The new models presented by OpenAI in spring 2024 are very interactive. You can have an audio conversation with the AI on your mobile phone, and it can joke and take a very emotional approach to users. It reminds us a lot of the film “Her” (2013), with Joaquin Phoenix: a lonely man develops a relationship with the AI and ends up completely dependent on her. Are there risks that these increasingly sophisticated interaction systems will use the emotional bonding of humans with the machine to sell them products or lead them to take real-life decisions they shouldn’t take?
A. Those are several important questions. I was a little bit surprised a few months ago when I saw a survey of the most popular AI consumer applications (‘apps’). It was no contest: the most popular consumer applications were “companion chats”, where you can go to these online services and pick out the kind of ‘chat buddy’ that you want.
So, I’m somewhat concerned about people substituting digital relationships for human relationships.
“The most popular apps are ‘companion chats’, where you can go to these online services and pick out the kind of ‘chat buddy’ that you want”
It would be pretty easy for chatbots to manipulate us in that way. We are very easily manipulated by language, that’s how we humans are built!
Q. We do not really know where these paths will lead us in 5-10 years. What should we take from AI, and what should we leave?
A. Well, I’d say that it’s important to have our eyes open about these tools and what the business model behind the tool is. We need to think about how we want to participate or not participate.
It’s great to keep in touch with people through free social media networks, so they definitely have some benefits. They have downsides as well, such as phone addiction among adolescents whose brains are still developing.
“Chatbots are non-persons, and you shouldn’t be having a personal relationship with a non-person”
Whether it’s correlation or causation, we can see increased polarization of our societies, certainly in the West. People are able to surround themselves in social media with only things that they agree with. The algorithms designed by the tech company running this free service are essentially agnostic to the type of information that you want. As you continue to use these services, the algorithms are designed to give you information that will keep you using the site, to get you addicted to spending time on the site.
People make the mistake of thinking that the social media companies are trying to influence them one way or another. They’re really not; they’re just trying to influence you to stay on the site.
Of course, the advertisements that you’re seeing are absolutely intended to influence you. They might be political advertisements in the form of a pseudo news story, or they might be an advertisement trying to get you to buy something. So, in our use of these free services, we are being manipulated for profit.
And then, if we become dependent on a chatbot and develop a relationship with it, we’re opening up a whole new way of being manipulated, one that makes everything that has happened over the last 10 years seem almost ineffective.
Photo: Andy Kelly, Unsplash, CC0.
Q. As technology shapes our communication habits, how can Christians keep the biblical understanding of what it means to be human?
A. As Christians, I think we need to view these things as tools, keep our eyes open about the business model, and participate to the degree that we’re comfortable with, but not be fooled and just get sucked in.
There’s nothing wrong with using social media to keep in touch with family and friends. But if it becomes an obsession and too much of your time goes into it, that’s maybe a sign for us to take a step back.
And even further than that: a chatbot is a non-person, and you shouldn’t be having a personal relationship with a non-person. That’s just not the way we’re designed. If we start to develop these relationships with chatbots, it wouldn’t be a surprise to see doors opening to many other dangers.
Q. There’s also a good side to all this technology, and you as a computer scientist are hopeful and positive. For instance, we see people who are blind having the chance, through their mobile phone camera, to be informed in real time about what’s happening around them with very precise descriptions. And there is AI applied to medical surgery in hospitals. As Christians we don’t want to dwell on the negatives and fear, but want to see what humanises us. What good things do you see in the evolution of AI?
A. As far as my own sense of what’s happening with AI, I would say I’m cautiously optimistic, and more positive than negative. I think the dangers have more to do with our approach to the tool than with the tool itself.
“The area of medicine is one of the most promising fields where AI has a game-changing potential”
We saw an example of this five years ago with a program called “AlphaFold”, an AI program that reduced the time it takes to work out the shape of a protein from its chemical and DNA description: from 30 days in the laboratory, with several people working on it, to a few seconds on a computer. The cost and the time needed just to understand the shape of a protein went down by a factor of 1,000. These kinds of improvements really open the door to advances in healthcare.
We can use the AI technology that tracks your behaviour to target a treatment for a disease not just at your demographic, but at your specific disease and your specific body and metabolism. It’s very exciting to think about personalized treatment with AI.
Of course, there are also a lot of double-edged swords here. There’s a movement of people trying to figure out how to extend our lives, or looking at brain-to-AI interfaces, hoping to create implants that would make your memory so much better.
But you could equally imagine helping somebody regain normal functioning. With epilepsy or Parkinson’s disease, imagine an implant that would allow someone with a neurological disease to be more functional for a much longer period of time.
If you had talked to somebody in the 17th century about the medical interventions we’re capable of doing now, they would have said “no, that’s like witchcraft”, right? So, I think God has given us insights through science, and we should be imagining all the cool things that could be done.
Photo: Unsplash, CC0
As Christians too, we want our young people (who are still trying to figure out what they want to be when they grow up) to imagine those things, because they’re the ones who are ultimately going to create the new capabilities.
I was involved in the Internet in the 1980s, and we didn’t really anticipate things like social media. It took a generation after me to think of these kinds of new applications. I think that the generation that will think of the most interesting applications of AI is probably in high school today, maybe even in primary school. So, we want them to be engaged in AI, but in a positive and hopeful sense.
Q. This idea of “human imagination” in the use of technology is very interesting. We are here in Poland at the European Leadership Forum, which is a conference for Europe from a Christian perspective. How do people react to what you share with them about what technology is achieving and its consequences? What questions do Christian leaders have?
A. Well, in a place like this I get a lot of questions. I think people are nervous about the discussions around existential threats, partly because the headlines that we see in the media are pretty scary stuff. But the more evolutionary, rather than revolutionary, changes are not so “exciting”, and less visible.
“We want young people to be engaged in AI, but in a positive and hopeful sense”
I think there are other concerns that people understandably bring up. What is AI going to do to the job market? Is my job going away? And that’s an interesting discussion, because as with technologies before AI, the answer is: yes, many jobs will go away, and some jobs will be made so efficient that fewer people will be needed. But it is also the case that with every technology change, new types of jobs are created. And I don’t think AI is going to be any different from previous technologies like radio, television, the Internet, the web or cloud computing. AI is probably going to create more jobs than those technologies did.
So, I think we should be hopeful there. I told my son, who is in finance, works for a bank and graduated from school five years ago: ‘In 10 years’ time, the job that you take as an entry level job will be done by a piece of software, so you should be the guy who’s trying to figure out how to replace yourself with software’.
I think that’s reasonable advice for anyone in any job: ‘How can I use AI to do a better job of what I’m doing?’. I think that’s what God wants us to do. If my job is done as unto the Lord, I want to look at these tools as opportunities to do a better job. If we’re engaged in thinking about how our job could be more efficiently done with more AI, then we’re better positioned to ride the wave rather than getting knocked over by it.