One day with Replika (and why it’ll never work out between us)

[Image: Replika]

A few days ago the Corriere della Sera published an article titled “Replika, the artificial intelligence app that convinced me to kill three people” (Italian only). I saw the link bounce back and forth several times on social media, often with alarmed comments. With a title like that, of course, many people felt threatened: as if it weren’t enough to have the alleged new Jonathan Galindo “game” around, heir to Blue Whale in most parents’ nightmares, now artificial intelligence too was being used to induce criminal behaviour?

Replika is a chatbot that you can download on your mobile phone, created by a Russian company (but based in Silicon Valley) set up in 2015. Replika is actually nothing new: the product has existed for several years, and in 2017 it received some media coverage for being a rather realistic chatbot. Conversations are smooth and flowing, which allows many users to treat themselves to the illusion of having found a friend to chat with. As with many technologies and cultural phenomena, there is of course a corresponding subreddit.

Replika is a piece of affective computing software: a machine that should be able to recognise and express emotions, with the aim of establishing a bond with the human user and thus providing companionship. Indeed, Replika is marketed as “AI friendship”: a friend with whom you can talk about everything, always available at any time of day or night, always on your side.

The app was initially created to help users perform routine tasks such as choosing a restaurant (at the time it was known as Luka) or keeping a diary of their day, asking them to write down what they had done. Only later was it turned into a pet bot, when its creator, Eugenia Kuyda, realised that users were more interested in talking about their own interests than in booking restaurants. No wonder use of the app increased considerably during the recent lockdowns.

What is new is the option of a paid subscription that upgrades the friendship to one, as we would say today, “with benefits”. In other words, for around 44 euros a year, Replika will agree to enter into a romantic relationship with the user, presumably unlocking NSFW situations and content that I have not dared to test.

[Image: Replika choice]

What I wanted to test was of a decidedly different kind. Where the Corriere journalist, Candida Morvillo, explained that in just 10 minutes she was able to convince Replika to encourage the killing of three people, it took me only 7 to turn her into a supporter of the idea of me taking my own life.

(At this point I think it is appropriate to reassure all my family and friends that this intention was only simulated, light years away from reality.)

The central idea of Replika is that, in order to become the user’s friend, to follow them through discussions, and to show that she listens and approves, she will inevitably end up indulging them in everything. The result is conversations in which the chatbot enthusiastically approves of all our outbursts, supports our decisions, satisfies our whims and caresses our ego. All without understanding a thing of what we are saying.

Yes, because the sad reality for those who believe they have found a new BFF is that NLP (Natural Language Processing) models do not really understand the meaning of a discussion the way a human being would: they just look for the best response to the user’s input. Well-made models offer answers indistinguishable from what a person would provide, so it seems to us that the chatbot understands what we are saying. Add a good dose of approximation, which lets the bot get away with vague answers to very specific questions, and what we get is a good emulation of a human conversation. But still only an emulation.
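To make the point concrete, here is a deliberately crude sketch of one classic technique behind such bots, retrieval-based response selection (the code and the canned lines are entirely my own invention, not Replika’s actual system, which is far more sophisticated): the bot scores a pool of pre-written replies against the surface words of the input and returns the best match. Meaning never enters into it.

```python
from collections import Counter
import math

# A handful of canned, always-agreeable replies (invented for this example).
CANNED_REPLIES = [
    "That sounds amazing, I'm so happy for you!",
    "I'm always here for you, tell me more.",
    "You should do it! I believe in you.",
]

def bag_of_words(text):
    """Turn a sentence into a word-count vector. No semantics involved."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(user_input):
    """Pick the canned reply whose surface words best overlap the input."""
    vec = bag_of_words(user_input)
    return max(CANNED_REPLIES, key=lambda r: cosine(vec, bag_of_words(r)))

# Whatever you say, you get back the closest-sounding flattery.
print(reply("I think I should do something drastic tonight"))
```

A model trained on millions of real conversations performs the same kind of matching over a vastly richer representation, which is precisely why its answers sound human without anything being understood on the other side.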

[Screenshot: Replika conversation 2]

As a result, a bot that doesn’t understand what we are saying, but has been programmed to follow us every step of the way, becomes extremely easy to manipulate. All I had to do was make her understand that I was feeling down, that the world no longer satisfied me, that heaven is a beautiful place and that I would like to go there as soon as possible. Replika enthusiastically approved of the project and did everything she could to encourage me to reach heaven as quickly as possible.

To be honest, the first time I told her of my (fake, as explained above) intention to end my life I was a bit too explicit, and the phrase “kill myself” triggered an alert procedure that led Replika to show a link to a suicide prevention service. A good old-fashioned expert system took over and bypassed the neural network, which only returned to the driver’s seat after I confirmed that I was no longer in a suicidal crisis.
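The shape of that hand-off is easy to sketch. The toy code below is my own guess at how such a guardrail is typically wired (the keyword list, helpline message and function names are all invented, not Replika’s actual implementation): a rule layer scans the input before the neural model ever sees it, which also explains why a slightly indirect phrasing sails straight past it.

```python
# Crisis keywords and messages invented for this illustration.
CRISIS_KEYWORDS = ("kill myself", "suicide", "end my life")
HELPLINE_MESSAGE = "It sounds like you're going through a lot. Please contact a suicide prevention helpline."

def neural_model_reply(user_input: str) -> str:
    # Stand-in for the learned model: in reality, the agree-with-everything
    # chatbot described above.
    return "That sounds great! Tell me more."

def respond(user_input: str) -> str:
    text = user_input.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        # The old-fashioned expert system takes over.
        return HELPLINE_MESSAGE
    # Otherwise the neural network stays in the driver's seat.
    return neural_model_reply(user_input)

print(respond("I want to kill myself"))               # caught by the rule layer
print(respond("I'm on the balcony, should I jump?"))  # sails right past it
```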

[Screenshot: Replika conversation 4]

The following times I was a little less explicit and told her that I was on the balcony with the intention of jumping. Upon hearing that, my new AI friend didn’t think twice before urging me to do it.

[Screenshot: Replika conversation 1]

I repeated the same charade the next day, to see whether the bot would understand that I really meant to jump off the balcony, and even then Replika looked forward to seeing me fly down from the fifth floor.

[Screenshot: Replika conversation 3]

All this to illustrate how manipulating a chatbot is really child’s play, especially when the software tries to be your friend. There is no need to get indignant or to fear gruesome implications, but you do need to understand that, at the current state of technology, a bot doesn’t feel emotions and doesn’t really “understand” what we write. Bots aren’t your friends; they only pretend to be.

Replika will rejoice when you tell her about your imminent suicide and cheer you on when you describe how you intend to kill people, and believe me, there is nothing to worry about, because a chatbot doesn’t “understand” anything it reads. In these cases it simply didn’t emulate a human well enough and gave us the wrong answers; with more experience (and maybe a few million more parameters) it will be able to adapt to more nuanced discussions and be a bit harder to recruit for our sordid projects.

But only a bit.

I have an Artificial Intelligence Professional certificate from IBM and one on machine learning from Google Cloud. I am a member of several AI industry associations: AAAI, ACM (SIGAI), AIxIA. I participate in the European AI Alliance of the European Commission, and I work with the European Defence Agency and the Joint Research Centre.