Artificial Intelligence and Us – Who Directs Who?

“The tool is directing the user”

Chamaki, 2010. Unsplash.

In the 20th century a new wave of literature and pop culture emerged that simultaneously revealed civilisation’s development and society’s growing concerns – concerns that are still prevalent today and continue to appear in media. Dystopian and sci-fi fiction centred on the idea of Artificial Intelligence first being born from human ingenuity, then becoming the force that suppresses and controls it, has been a popular premise of many films, novels and poems since. Do Androids Dream of Electric Sheep? by Philip K. Dick (1968) was a thrilling (yet cautionary) tale about the wonders and dangers of Artificial Intelligence (AI) if we let it – and push it to – achieve extraordinary heights; a novel which gave birth to the ever popular Blade Runner film, which has since been reproduced for modern audiences. I, Robot (2004) alludes to Dick’s tale when it considers the implications of rogue AIs and the (im)possibility of androids experiencing emotions. There exists a plethora of texts in the dystopian/sci-fi canon that explore these concepts and the issues that seem intrinsically tied to them. Humanity has, evidently, been intrigued by the possibilities of science and technology and what AI could do for us, whilst also being preoccupied by and fearful of the repercussions of such a reality coming to fruition.

What is interesting to note is that despite this longstanding fear of AI, most Australians – and perhaps even most of the people on this planet – are exposed to AI on a daily basis. And most of them aren’t aware of it.

AI does not necessarily take the form of a humanoid robot that was designed to help humans but ultimately destroys them, as is so often portrayed in popular culture and literature (for an exception to this rule, and for a wild ride in an incredibly unique world, put Scythe by Neal Shusterman in your TBR pile). It can – and does – take the form of an algorithm, and chances are you have it sitting beside you or in your pocket right now. The technology and digital resources we use every day rely on algorithms to predict, and then offer, the content they think we will be most interested in. Finding a good film to watch on Netflix, Stan or Disney+ that is similar to other films you love has never been easier. Conducting research in a digital catalogue or library will lead you down a path of “related titles” that might – and theoretically should – have something to do with the first journal article you read. Love indie rock music? Spotify has got you covered. It will play new artists you haven’t heard of – but will likely enjoy – because they’re similar to other artists in your playlist. Even YouTube has got your back with its tailored “up next” function.
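To make that “predict, then offer” idea a little more concrete, here is a minimal sketch of how a content-based recommender might score similarity between titles. The catalogue, genre tags and numbers below are invented for illustration only – real services like Netflix or Spotify use far larger feature sets and proprietary models – but the basic logic of “more like what you just watched” looks something like this:

```python
from math import sqrt

# Hypothetical catalogue: each title tagged with simple genre scores (invented values).
catalogue = {
    "Blade Runner":          {"sci-fi": 0.9, "thriller": 0.6, "romance": 0.1},
    "I, Robot":              {"sci-fi": 0.8, "thriller": 0.7, "romance": 0.1},
    "Some Rom-Com":          {"sci-fi": 0.0, "thriller": 0.1, "romance": 0.9},
    "1970s Sci-Fi Classic":  {"sci-fi": 0.9, "thriller": 0.3, "romance": 0.2},
}

def cosine_similarity(a, b):
    """Compare two genre profiles: 1.0 means very alike, 0.0 means nothing in common."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(watched_title, top_n=2):
    """Rank every other title by how similar it is to the one just watched."""
    profile = catalogue[watched_title]
    scores = {
        title: cosine_similarity(profile, features)
        for title, features in catalogue.items()
        if title != watched_title
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_n]

print(recommend("Blade Runner"))
```

Notice that the viewer never sees the whole catalogue – only the handful of titles the similarity ranking puts at the top.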

So the idea that the tool – i.e. technology, or the AI and/or algorithm buried and operating within it – is directing the user – i.e. us – is not too far-fetched a concept. In fact, it is happening daily for each of us with access to a smartphone. So whilst we may not have humanoid androids running (or strolling, as I imagine they’d do with infinite patience and grace compared to their clumsy human counterparts) about the place, and whilst they may not be ready to take over the world for the sake of the planet, AI is no longer confined to the pages of science fiction. It is very much real. It exists now. And it (just might be) controlling how you access the online world.

Whilst it might be easier than ever to find amazing thriller films on Netflix that you’ve never seen before thanks to its algorithm, you might also never see that 1970s blockbuster sci-fi film either – because Netflix has decided, based on your previous choices, that you probably won’t like it. When you select something to watch on its platform, you are really choosing something that has been chosen for you by its algorithm (Blattmann, 2018). This is a very simple example of how the tool controls the user, but the concept remains true all the same.
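As a toy illustration of “chosen for you”, the sketch below assumes the platform has already assigned each title a predicted-interest score for one viewer (the names and numbers are invented): anything that falls below the cut-off simply never appears in the row you end up choosing from.

```python
# Hypothetical predicted-interest scores (0 to 1) for one viewer; values invented.
predicted_interest = {
    "New Thriller A": 0.92,
    "New Thriller B": 0.88,
    "Crime Drama": 0.74,
    "1970s Sci-Fi Blockbuster": 0.31,  # below the cut-off, so never surfaced
}

def recommended_row(scores, cutoff=0.5, row_size=3):
    """Only titles scoring above the cut-off ever make it onto the home screen."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [title for title, score in ranked if score >= cutoff][:row_size]

print(recommended_row(predicted_interest))
# ['New Thriller A', 'New Thriller B', 'Crime Drama']
```

The viewer’s “choice” happens entirely within that pre-filtered list; the 1970s blockbuster is never on the table.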

So what’s the issue? The danger lies in being confined to a very specific, shallow and niche puddle floating in a very, very, very deep and ever-expanding ocean (i.e. the internet). These sorts of algorithms may grant us easy access to more of the same material, but they also have the potential to limit the pool of information we, as consumers of information, can easily access. So whilst our current model of AI certainly falls short of the gun-toting rebel android image presented by Dick in 1968, one thing is clear: embedded AI affects how we use our technology in the digital environment by helpfully providing us with a convenient “recommended for you” list. How we respond – and how deep we choose to dive into that puddle, and then the depths beyond – determines how informed we become as we seek to navigate the digital realm and all its possibilities.

Reference List

Blattmann, J. (2018). Netflix: Binging on the Algorithm. UX Planet. https://uxplanet.org/netflix-binging-on-the-algorithm-a3a74a6c1f59

Chamaki, F. (2010). Data Has A Better Idea [Image]. Unsplash. https://unsplash.com/photos/1K6IQsQbizI