The question is why with AI
Christians need to be more aware of the use of data concerning them, and of both the opportunities and risks associated with AI.
26 NOVEMBER 2019 · 14:00 CET
Let’s start by clarifying what AI is and is not.
AI is not some kind of science-fiction, super-intelligent humanoid robot that wants to take over the world as we know it. AI is not even intelligent (although some might disagree).
AI is a subset of computer science, involving the coding (by programmers) of algorithmic systems based on statistical mathematical models. These systems essentially collect, collate, model and aggregate data to find patterns and trends faster than any human could.
It does this to draw insight and inference from our digital transactions or behaviours.
Can’t touch this
Most AI (like other forms of software that have preceded it) is intangible. Some AI is encased in an object or machine, such as an autonomous electric vehicle, an ‘Internet of Things’-enabled smart utility meter, fridge or TV, or a smart speaker (a ‘personal digital assistant’) such as Amazon’s Alexa, Microsoft’s Cortana or Google’s Assistant.
There are different types of algorithmic systems that are intended to achieve different goals. They are designed to save time, use resources more efficiently, and make life easier and more convenient. These seem like laudable and harmless aims, right?
AI is not new
AI has actually been around since the 1950s, so it is not a new concept. However, the creation of algorithmic systems and advanced data analytics has accelerated over the last 10-15 years.
It is not unique to any one country, but has been advancing globally. It is so embedded in our daily lives, and has been shaping our cultures so gradually, that we don’t even realise it is happening.
We have seen exponential changes in the way we search the internet, do online shopping, search for comparable insurance or financial products online, how we consume music and TV on-demand, and how we apply for jobs.
All these changes have been subtle, happening behind the scenes, but together they have changed our mindsets, our cultures, our social interactions, and the society around us.
The real concern about AI is not about what AI can do for us, but what AI is doing to us.
Those of us born in the 1970s and 1980s will be the last generation to know a life which was completely analogue (la vie analogue). The generations that have followed and that we are raising now will have never known a life without digitalisation (la vie digitale).
We are therefore uniquely positioned to understand what we are giving up, and what we are gaining, from AI and other emerging technologies.
Balancing Act
We need to utilise these new tools wisely: understanding the good that they can do and benefitting from it, whilst also being cognisant of the potential harm and abating it.
We need to challenge the status quo, think critically, understand the utility and futility of these new tools and choose to engage or not engage with them. There will be consequences to both.
We need to raise a generation that will ask “Why?” (Why is it free? Why do they need that data? For what purpose? How will it be used? What data does it generate? What does that say about me, or people like me? Where does that data go? Where in the world does the data get stored? Who else can access it? Who else can use it? Why?) and the Why? of Why.
Data Hungry
What makes AI unique is that it is data hungry. Algorithmic systems are trained on data, operationalised on data, and need data to be improved. Data is central to the world of AI.
Not just personally identifiable information, which is protected under European law, but also data which is not strictly personal. Even non-personal data can still provide insight concerning you and your environment.
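For the technically curious, here is a minimal sketch in Python of how that can happen. Everything in it is invented: it simply shows how supposedly impersonal telemetry, such as a smart meter’s hourly electricity readings, can reveal something personal, like when a household is probably out.

```python
# Minimal illustrative sketch (hypothetical readings): inferring when a
# household is likely to be away from "non-personal" smart meter data.

# Invented kWh readings for one day, indexed by hour 0-23.
usage = [0.3, 0.3, 0.3, 0.3, 0.3, 0.4, 1.2, 1.5,   # morning routine
         0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,   # low: probably out
         0.3, 1.4, 1.8, 1.6, 1.1, 0.8, 0.5, 0.3]   # evening at home

baseline = min(usage)  # fridge, standby devices, etc.
away_hours = [hour for hour, kwh in enumerate(usage) if kwh <= baseline + 0.05]
print(f"Household likely away during hours: {away_hours}")
# Household likely away during hours: [8, 9, 10, 11, 12, 13, 14, 15]
```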
Context and purpose matter
No matter what is stated in legal terms and conditions or a privacy policy, we expect data and information concerning us will be used in a certain context and for a specific purpose. We share it with the expectation that it will be used in that way, for us and not against us.
For example, you would not expect information about you from a social media platform to be used by your bank to decide whether or not you were creditworthy. Likewise, you would not expect information from a health app on your smartphone to be used to assess how much you should pay for your car insurance. At least not without your consent!
The context and the purpose for which data about you, or concerning you, is used matter. Why? Because whether data is personal or anonymised, pseudonymised or synthetic, it carries echoes of the person or people, and of the situation or experience, from which it was originally captured. That includes (whether we like it or not) bias.
Bias can be completely harmless, but it can also be unlawful and discriminatory. We cannot eradicate bias, and indeed sometimes a particular trend, leaning or inclination may be good and necessary for the goal to be achieved (e.g. a bias towards Christian charitable causes in gift donations).
Certain biases may, however, lead to unfair outcomes for end users. This can be caused by the patterns AI has observed in the data, and by a lack of “representativeness” in the data.
Data representativeness means that the people groups in the data are proportionately represented when compared to the target audience and end goal. Under-representation and over-representation of certain data types within a data set can (you guessed it) cause bias.
This might be too many people from a particular postcode area, too few theatre-goers, or too many on-demand TV package customers. Some data can appear quite innocuous, but unintentionally cause an unfair outcome.
For example, postcode-level data can act as a proxy for a protected characteristic such as race, religion or political belief. Theatre statistics could act as a proxy for wealth and affluence. On-demand TV could act as a proxy for socio-economic deprivation.
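To see how a proxy can smuggle bias into a system, here is a minimal illustrative sketch in Python. The postcodes, groups and decisions are entirely made up; it shows how a model that never “sees” a protected characteristic can still reproduce a bias against it through postcode alone.

```python
# Minimal sketch (invented data): a model trained only on postcodes still
# tracks a protected characteristic, because the groups are unevenly
# distributed across those postcodes.

from collections import defaultdict

# Hypothetical historical decisions: (postcode, protected_group, approved)
history = [
    ("AB1", "group_x", True),  ("AB1", "group_x", True),
    ("AB1", "group_y", True),  ("AB1", "group_x", True),
    ("CD2", "group_y", False), ("CD2", "group_y", False),
    ("CD2", "group_x", True),  ("CD2", "group_y", False),
]

# "Train" the simplest possible model: approve anyone from a postcode whose
# historical approval rate was at least 50%. The group is never used.
approvals = defaultdict(list)
for postcode, _, approved in history:
    approvals[postcode].append(approved)
model = {pc: sum(outcomes) / len(outcomes) >= 0.5
         for pc, outcomes in approvals.items()}

# Yet because group_y is concentrated in postcode CD2, the model's decisions
# track the protected characteristic anyway.
by_group = defaultdict(list)
for postcode, group, _ in history:
    by_group[group].append(model[postcode])

for group, decisions in by_group.items():
    print(f"{group}: approved {sum(decisions) / len(decisions):.0%} of applicants")
# group_x: approved 75% of applicants
# group_y: approved 25% of applicants
```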
What AI then does with this data is process it, magnify it, and apply it at an exponential rate. So what was originally applicable to a small group of people may be applied to all the people who use that tool.
This can result in unfair outcomes (such as discrimination, and potentially also human rights infringements) for an individual or group of people, and can have distorting effects on society!
Who knew that the simple topic of AI and data would get so serious…
Root causes: non-generic algorithms, business models and ethics
The problem is that algorithmic systems are not generic. They are designed, developed, deployed and disseminated with an end goal in mind.
Use them in another context and/or with different data, and they will often produce an entirely different result. Sometimes this is intentional and sometimes it is unintentional. Both have consequences.
Business model
Where in the world an algorithmic system was developed has a bearing on the suitability of its end goal in a different culture. So does the business model, which provides the motivation and intention behind the use of a particular algorithmic system in a particular setting.
To bring this to life, take one kind of algorithmic system: automated content systems. In most European states, these are designed to display a stream of content to improve your viewing experience online.
But, what if:
- that content generated a series of random advertisements? If you had agreed to it, that may be ok.
- those advertisements were specifically targeted based on your preferences? Well, if you had provided details of your preferences to the service provider, then you might expect it.
- those ads were specifically targeted at you based on your historical web viewing habits, from which your religious and/or political persuasion and your compassion for the welfare of others could be ascertained?
- that content was specifically generated and targeted at you (or at least people like you) to change your opinion, to manipulate your thinking, to make you look at other similarly generated content, to rouse emotion in you (especially anger or outrage)?
All of this is designed to make you act in a particular way, predicted from the very outset, with the aim of making money from the advertisements you click on. The longer the tool can keep you engaged and direct your interests, the more income it might generate.
No longer is this just about the data, the AI, or the impact of the business model. It is the unique combination of these things that creates an outcome.
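As a purely illustrative sketch (all numbers invented), this is the essence of an engagement-optimised feed: content is ranked not by truth or benefit to the reader, but by expected advertising revenue.

```python
# Purely illustrative (invented numbers): ranking content by how likely it is
# to keep you clicking means emotionally charged content rises to the top.

candidates = [
    {"title": "Balanced news report",        "p_click": 0.04},
    {"title": "Outrage-inducing headline",   "p_click": 0.22},
    {"title": "Content matching your views", "p_click": 0.15},
]

REVENUE_PER_CLICK = 0.02  # hypothetical payment from the advertiser

# The feed maximises expected revenue = p_click * revenue, nothing else.
ranked = sorted(candidates,
                key=lambda item: item["p_click"] * REVENUE_PER_CLICK,
                reverse=True)

for item in ranked:
    print(f'{item["title"]}: expected revenue '
          f'{item["p_click"] * REVENUE_PER_CLICK:.4f}')
# Outrage-inducing headline: expected revenue 0.0044
# Content matching your views: expected revenue 0.0030
# Balanced news report: expected revenue 0.0008
```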
Everything is permissible but not everything is beneficial
More than ever before, there is a need for us to weigh up these outcomes and consider whether the impact of AI (and the data that feeds it) in all its guises is beneficial or harmful to us individually, as Christians, to the vulnerable and digitally excluded in society, and to society as a whole.
The easy option is to just go with the flow. Most outcomes from AI are (for now) lawful. But that does not make them morally right from a Biblical perspective, or socially acceptable.
We need to consider not just the impact on us here and now, but the impact this is going to have on future generations.
If you take away nothing else from this article, take this: Christians need to be more aware of the use of data concerning them, and of both the opportunities and risks associated with AI.
This requires us to ask questions, like What? Where? How? For what purpose? and most of all, Why?
Patricia Shaw is CEO and founder of Beyond Reach, a tech and data ethics, governance and legal consultancy.
Early this year, she established a new Christian Think/Do Tank called the Homo Responsiblis Initiative, aimed at raising awareness and considering the outcomes of new and emerging technology, particularly from a Christian perspective.