The Awakening of the Internet: the Dystopian Future beyond the Turing Test

Why the Internet as you know it is about to change.

The White Orange 🍊⚪
9 min read · Nov 6, 2022

It is pretty obvious that the things you think are humans are indeed humans. But what will happen when the conviction that you are interacting with another human is lost? When you cannot trust the person you are interacting with to be genuine? When you cannot tell a bot apart from a person? It sounds like sci-fi, right? Like something out of Blade Runner, where distinguishing humans from replicants (synthetic bio-engineered humans) has become an arduous task. But it is no longer science fiction: we are entering an era in which the conviction that we are interacting with another human is being lost. The first medium of communication where this is already occurring is the Internet, but the trend will propagate to other means of communication, such as the phone, and potentially even to face-to-face interactions. With that, misinformation will accelerate, distilling the truth from reports of recent and future events will become nearly impossible, and scamming will become easier than ever.

Artificial intelligence interconnected with the Internet.
Created with DALL·E.

Probably the first person to speculate about the possibility of machines behaving in a manner indistinguishable from that of humans was the computer scientist Alan Turing, who in 1950 discussed whether machines could exhibit human-level intelligence in his highly influential paper “Computing Machinery and Intelligence”. Turing, however, was not interested in whether machines could replicate human behaviour for its own sake; instead, he considered such behaviour a proxy for whether a machine could “think”. To test whether a machine exhibited intelligent behaviour, Turing proposed a setup in which a human evaluator would hold a text-only conversation with a machine and with another human, each located in a separate room. If by the end of the test the evaluator was unable to distinguish the machine from the human, the machine was said to pass the Turing test (and therefore to be able to think).

The formal Turing test was not passed until 2014, when the chatbot Eugene Goostman fooled a panel of human judges by pretending to be a 13-year-old Ukrainian boy. While experts still debate the validity of that result, the point is that such theoretical tests have become irrelevant, as human-fooling machines are now becoming widespread. The reason is that the ease with which a machine can pretend to be a human (understood here as a generalised form of the Turing test) increases dramatically as the complexity of the interaction's context is reduced. In fact, Turing himself deliberately simplified the test setting (text-only communication, no visual input) to remove extra complications such as those arising from tone of voice.

For that reason, we may argue that the most complex context in which a machine can replicate human behaviour is a face-to-face interaction. Here a machine would have to look, act and speak like a human in order to pass the Turing test. If in doubt, one could still ask the interlocutor to showcase some biological function, such as eating or bleeding, to prove their human nature. But things become more interesting when the visual input is removed, for instance in a phone conversation. Here a machine would only need to speak like a human, a task that seems easily conceivable considering that Google Duplex can already call on your behalf to book a haircut or reserve a table at a restaurant. The algorithm replicates a natural phone conversation so perfectly, “ahms” and “hmms” included, that the woman at the other end doesn't even consider the possibility that she is talking to a robot. Thus, in the arguably narrow context of phone bookings, Google Duplex is a Turing-test-passing machine. One may object that the woman was not aware of being tested, and that she could have asked out-of-context questions to verify the machine's identity, but it is easy to see how, in a few years, phone-calling machines indistinguishable from humans will be a reality.

It is not far-fetched to say that everyone has already experienced a situation in which they could not tell whether they were interacting with a genuine user or a machine. Take for instance the YouTube comments below: can you tell which ones are genuine and which ones are not (answer at the end of the post)?

Can you pass the test? Which of these comments were written by bots, and which by genuine users? Video 1. Video 2.

When you read comments on YouTube, how do you know they were written by genuine users? How do you know they are not a form of subliminal advertising to get you to invest in the stock market? In many instances you simply can't tell, and we may say that these bots have passed the Turing test. We see once again that as the complexity is reduced, as in simple YouTube comments, it becomes incredibly difficult, if not impossible, to tell humans from bots. And this will only get worse as bots become more sophisticated, now being capable of maintaining entire conversations. Subliminal bots of this kind may be used to influence public opinion or to advertise a product.

Bots having a conversation on YouTube.
Bots (or humans?) having entire conversations on YouTube. Video. Perhaps these users got neat advice from Catherine Louise Marino (website). Whether she is real or not is hard to tell from her website.

The same case can be made about online reviews. Who hasn't gone through dozens of online reviews only to realise that the main uncertainty about whether to purchase an item lies in the veracity of the reviews themselves? It is another form of communication in which the complexity of the interaction has been dramatically reduced, so the Turing test can be easily passed.

Fake online reviews on redbubble.com.
Online reviews for a random item on redbubble.com.

Another example happened to me not long ago when I tried to sell some items online. Minutes after posting the items on Marketplace, my Facebook Messenger was flooded with messages asking simple questions, such as whether the items were still available. It was incredibly hard to tell which of those came from genuine humans (even though I had access to their entire Facebook profiles) and which were simply bots trying to extract personal information, such as my address or my bank account. Spooky. There probably already exist algorithms that can create realistic social media accounts with real-sounding names, scanning the Internet for publicly available pictures to fabricate genuine-looking profiles, complete with likes, interactions and favourite pages. An alternative is simply to replicate existing social media accounts, something that can often be seen on sites such as Instagram. Once again, the reduction of complexity, the fact that all the information we have access to is the user's social media profile, makes the Turing test easier to pass in such situations.

Does Soumaia want my Blender Set or my address?
Can you guess which one is the real account? The answer is at the end of the post.

Another spooky class of algorithms that will fool us in the future is deepfakes: AI-generated video or audio content that mimics a real situation so convincingly as to be indistinguishable from real footage, like this deepfake of Barack Obama. As this technology becomes widely accessible, it will be hard to know whether it is actually our presidents addressing us, providing the perfect means for mass misinformation, propaganda and crowd manipulation. Accounts of historical events will become hard to track, with many realistic storylines being generated at once, each based on the interests of the various players involved.

What about content creation? How do you know this post was not generated by a sophisticated AI? Think you can spot the difference between human- and AI-generated content? What if I told you that what you have read so far was AI-generated? (The answer, too, is given at the end.) That is exactly how you'd have felt after reading this GPT-3-generated post, in which the third release of OpenAI's Generative Pre-trained Transformer (GPT) language model was asked to write a blog post describing itself. Because the fact that the post was AI-generated is only disclosed at the end (apologies for the spoiler), the reader has the impression of reading something genuinely human-written. GPT-3 can now be seen seamlessly engaging in conversations online (as in the Reddit thread below) or writing entire academic articles. All these examples show that the Internet is about to become a massive breeding ground for Turing-test-passing machines.

GPT-3 replying to users on Reddit.
GPT-3 replying to a genuine user (OP, the original poster), who has the impression of talking to another human. Source.
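To get a sense of how little effort this takes, here is a minimal sketch (my own illustration, not taken from any of the linked posts) of generating a plausible YouTube-style comment with OpenAI's GPT-3 completion API as it existed in late 2022. The model name, the prompt and the placeholder key are assumptions for the example, not a recipe anyone published.

```python
# A sketch of mass-producing human-sounding comments with GPT-3,
# using the `openai` Python package as of late 2022. Illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; never hard-code real keys

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available at the time of writing
    prompt=(
        "Write a short, casual YouTube comment praising a video about "
        "stock market investing, as if written by an ordinary viewer."
    ),
    max_tokens=60,
    temperature=0.9,  # higher temperature gives more varied, human-like wording
)

print(response.choices[0].text.strip())
```

A loop around this single call, plus a handful of fabricated accounts, is essentially all a comment-spam operation would need.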

In the long run, we may say that the Internet will become dead (or alive?): a soulless entity entirely populated by AIs generating, propagating and debating their own content. Another possibility is that, as we encounter these situations, we may end up doubting our own humanity. As of now, the possibility that your friend, dad, mum or whoever you are interacting with is non-human is so far-fetched that the idea does not even cross your mind; it is unthinkable that your friend is a robot. In mathematical terms, this is equivalent to saying that our Bayesian prior probability (our current state of knowledge) that an interlocutor is anything other than a human is essentially 0%. But as we are fooled by bots more and more often, we will unconsciously update our priors, meaning we will start to give more weight to the possibility that our interlocutor is non-human. Will we one day reach the point where we start to doubt our own humanity, as happens to Caleb Smith in the movie Ex Machina?
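A toy calculation makes the prior-updating argument concrete. The sketch below (my own illustration; every probability is a made-up assumption) applies Bayes' rule to the question "given that my interlocutor seems perfectly human, how likely is it that they are a bot?", and shows how the answer depends almost entirely on the prior.

```python
def posterior_bot(prior_bot, p_seems_human_given_bot, p_seems_human_given_human):
    """P(bot | interlocutor seems perfectly human), via Bayes' rule."""
    p_evidence = (p_seems_human_given_bot * prior_bot
                  + p_seems_human_given_human * (1 - prior_bot))
    return p_seems_human_given_bot * prior_bot / p_evidence

# Today: bots are rare and clumsy, so seeming human settles the question.
print(posterior_bot(0.001, 0.3, 0.99))  # ~0.0003: "of course it's mum"

# Tomorrow: convincing bots are everywhere, so the very same evidence
# leaves substantial doubt.
print(posterior_bot(0.2, 0.95, 0.99))   # ~0.19: maybe it isn't mum
```

With today's near-zero prior, human-seeming behaviour effectively proves humanity; once convincing bots are common, the same behaviour proves very little.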

Possible Solutions

One solution to this problem may be akin to the Voight-Kampff test devised in Blade Runner, a test to tell replicants and humans apart. In essence, this test is a way to get around the Turing test: the subject is plugged into a sort of polygraph that measures bodily responses, such as perspiration or blood pressure, to a set of empathy-related questions. The closest thing we have for telling our less sophisticated bots from humans are CAPTCHAs, or “Completely Automated Public Turing tests to tell Computers and Humans Apart”. Unlike the standard use of CAPTCHAs, in which a website verifies that you are not a robot, what we are missing is a service for sending a CAPTCHA to someone and receiving back the result. That way, when interacting with a stranger (such as when trying to sell an item online), one could send a CAPTCHA to be solved by the recipient, then check the result to verify whether the person at the other end is a human. Other solutions that could mitigate the problem are ID or fingerprint verification systems for creating online accounts.
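As a thought experiment, here is what the core of such a challenge-and-report-back service might look like. Everything in this sketch (the class name, the in-memory store, the random token standing in for a real distorted-image CAPTCHA) is a hypothetical illustration of the flow described above, not an existing API.

```python
# A minimal sketch of a peer-to-peer CAPTCHA service: the sender issues
# a challenge, forwards it to a stranger, and later asks the service
# whether it was solved. Hypothetical illustration only.
import secrets

class HumanityCheckService:
    """Hypothetical third party that issues CAPTCHAs and reports results."""

    def __init__(self):
        self._challenges = {}  # challenge_id -> expected answer

    def issue_challenge(self):
        """Create a challenge; return its id and the puzzle to forward."""
        challenge_id = secrets.token_hex(8)
        answer = secrets.token_hex(3)  # stand-in for a distorted-image CAPTCHA
        self._challenges[challenge_id] = answer
        # A real service would render the answer as a distorted image or
        # audio clip that is easy for humans but hard for bots to decode.
        return challenge_id, answer

    def verify(self, challenge_id, submitted_answer):
        """Let the original sender check whether the recipient solved it."""
        expected = self._challenges.pop(challenge_id, None)
        return expected is not None and submitted_answer == expected

# The seller forwards the puzzle to the prospective buyer, waits for the
# reply, and asks the service whether it checks out.
service = HumanityCheckService()
challenge_id, puzzle = service.issue_challenge()
buyer_reply = puzzle  # pretend the buyer solved it correctly
print(service.verify(challenge_id, buyer_reply))  # True -> likely human
```

Of course, sufficiently advanced bots will eventually solve CAPTCHAs too, which is why the idea only buys time rather than settling the question.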

Conclusions

With the (dis)information explosion brought by the Internet, we went from “knowledge is power” to “knowing what to filter is power”. Arguably, because all sorts of information became so widely accessible, critical thinking and the ability to reliably source information became as important as (if not more important than) knowledge itself. But we are entering an era in which the capacity to “know what to filter” is being eroded. If distilling the truth is already difficult for recent events (for instance when Russia blames Ukraine for actions Ukraine denies taking part in), what will happen when even more personal channels, such as WhatsApp profiles, are being faked? When you no longer know whether mum is at the other end of the line? The future has never looked scarier.

The comments from video 1 are all written by bots, whereas those from video 2 all seem genuine (after checking some of the profiles).

The bots having a conversation are all fake, as can be seen from the suspiciously similar-looking conversation below.

Casey Allen Gray.

The picture on the left shows the real Instagram profile.

This post was generated by a human.
