Thinking critically about Artificial Intelligence

Intelligence is an extremely complex, contended subject, and there is plenty we still don’t know about animal intelligence, let alone human intelligence.

22 MAY 2018 · 07:09 CET


THE EXPONENTIAL QUESTION

Exponential growth is a big deal because it makes it difficult to make accurate predictions about the future based on the past.

Taken as a general frame of mind, this concept undergirds much of the urgency in the current dialogue around ‘AI’. A key issue here is processing/computing speed.

The main driver behind discussion about exponential growth is ‘Moore’s Law’ (the observation that computing power doubles roughly every 18 months – which, of course, is not an actual law of physics).[1]

However, it appears that Moore’s Law is significantly slowing down, and many believe it will soon come to an end.[2]

This is because transistors can only be made so small. If the rate of growth in computing power plateaus or even simply slows down, many of the more extreme predictions about ‘AI’ will be revealed as too ambitious.
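To see why the doubling period matters so much, consider a purely illustrative sketch (the fifteen-year horizon and the doubling periods below are hypothetical choices for demonstration, not forecasts):

```python
# Illustrative only: how sensitive a long-range projection is to the
# assumed doubling period. All figures are hypothetical, not forecasts.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total multiplicative growth after `years` with a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

horizon = 15  # years
for label, period in [("18-month doubling", 1.5),
                      ("3-year doubling", 3.0),
                      ("5-year doubling", 5.0)]:
    print(f"{label}: ~{growth_factor(horizon, period):,.0f}x over {horizon} years")
```

Under an 18-month doubling the growth factor over fifteen years is roughly 1,024x; stretch the doubling period to five years and it falls to about 8x. This is why forecasts built on uninterrupted exponential growth are so fragile.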

This is certainly not to deny that rapid technological growth is currently taking place and will likely continue to do so for the foreseeable future. But it is vital to debunk false narratives about an inevitable ‘intelligence explosion’.[3]

An additional problem with the ‘exponential growth’ paradigm involves how we measure intelligence. The basic linear, single-dimensional model proposed by thinkers such as Nick Bostrom has been strongly criticised.[4]

‘Intelligence’ is an extremely complex, contended subject, and there is plenty we still don’t know about animal intelligence, let alone human intelligence. One way to shed some light in this area is to consider how both quantity and quality of information influences intelligence.

Clearly, a certain type of ‘intelligence’ or ‘knowledge’ will continue to grow as we collect and organise more information. But higher quantities of information do not necessarily correlate with the higher quality of information that is needed for intelligence and understanding.

Accordingly, several leaders in this field have been careful to emphasise that developments and progress have come about as programmers, statisticians, computer scientists, engineers, etc. have worked on very specific solutions to very specific problems.

In short, the exponential growth of AI—though very serious and significant—is neither automatic nor inevitable.

 

THE EXISTENTIAL QUESTION

Many of the most important developments are taking place not merely at the broad level of ‘AI’ or even ‘Machine Learning’ (which are nearly ubiquitous already in the lives of many urban dwellers), but rather at the level of Deep Learning.

This involves complex neural networks, which ‘learn’ by mimicking the brain’s network of organic cells.[5] It is precisely the potential of such ‘self-improving’ systems that raises concerns about the future existence of humanity.
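To make footnote [5]’s point concrete, here is a minimal, purely illustrative Python sketch of such a network; the layer sizes, ReLU activation, and random weights are hypothetical choices (a real system would learn its weights from data), and ‘depth’ is simply the number of stacked layers:

```python
# A minimal 'deep' feed-forward network: 'deep' refers only to the number
# of stacked layers, not to any profound understanding (cf. footnote [5]).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple non-linearity applied after each linear layer
    return np.maximum(0, x)

# Layers of sizes 4 -> 8 -> 8 -> 2; weights here are random, not learned
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)       # each hidden layer: linear map + non-linearity
    return x @ weights[-1]    # final layer left linear

print(forward(rng.normal(size=4)))  # a 2-dimensional output vector
```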

Interestingly, relatively few practitioners seem to be afraid of ‘AI’ development in their own particular discipline. Their own projects are always described calmly and sensibly as something that will greatly benefit humanity; it is the other applications of AI that are dangerous: sex robots, drones, medical implants, etc.

Regarding the nature of the challenges and threats that lie ahead, it seems probable that they will be less like an Orwellian ‘control through oppression’ and more like a Huxleyan ‘control through obsession’.[6]

The fact that wide-reaching satiation is already largely realised among many communities in the West (e.g. social media, online shopping, instantaneous entertainment, and encyclopedic knowledge in our pockets) seems to strengthen the proposition that our future will involve ever-increasing degrees of ‘satiation saturation.’

Once again, concerning the possibility of AGI[7] and the ways it may threaten humanity, some shaky assumptions must be questioned.

The idea that the sum of human intelligence can be equated with the aggregated tasks a human performs is greatly misleading. Furthermore, it is naïve to assume that all humans have some identical form of what can be called ‘general intelligence’.

Communication abilities, technical skills, professional experience, and unique familial interactions vary tremendously across humanity. On this point, Kevin Kelly helpfully describes the chief asset of ‘AI’ as a way of solving problems that is different from our way as opposed to strictly better than our way.[8]

None of this, of course, discounts the fact that our landscapes of work, war, sex, play, etc. are all likely to change radically in the coming decades; it just reiterates the belief that the true purpose or vocation of humanity cannot be reduced to these categories.

This article first appeared on the Jubilee Centre website and was republished with permission.

Calum Samuelson, MPhil in History of Theology, works for the Jubilee Centre.

 

[1] Moore’s Law has not been as consistent or as accurate as some flippantly imply. Moore’s own 1975 revision put the doubling of transistor counts at roughly every two years (itself a revision of his original one-year estimate), while the popular 18-month figure refers to chip performance rather than transistor count, and there have been several periods where growth was faster or slower than this.

[2] Although some give important caveats about the occurrence of ‘S-curves’ in exponential growth, this still takes the overall narrative for granted.

[3] https://en.wikipedia.org/wiki/Intelligence_explosion.

[4] As one example, see the MIT Technology Review article ‘Progress in AI isn’t as impressive as you might think’: https://www.technologyreview.com/s/609611/progress-in-ai-isnt-as-impressive-as-you-might-think/.

[5] ‘Deep’ refers to the number of layers in the network, not some type of ‘deep’ or profound ‘understanding’.

[6] The themes of 1984 and Brave New World, respectively.

[7] Artificial General Intelligence – when computer intelligence becomes as good as or better than that of a human being across the board.

[8] For this reason, he suggests that it may be better to understand ‘AI’ as ‘alien intelligence’.
