Disclaimer: Opinions expressed below belong solely to the author.
Much of the talk about artificial intelligence (AI) and its potential impact revolves around the consequences it may have for jobs, with some predicting that millions of people may be made redundant by ever-improving digital bots.
Historically speaking, I think the odds are still on our side, given how we made it through several technological revolutions, becoming more prosperous than ever in the process.
However, for the first time, we’re up against a technology that isn’t just good at doing a specific task, but one which can accurately mimic human behaviour — to the point of near-perfect replication of someone’s appearance or voice.
An American influencer, Caryn Marjorie (boasting 1.8 million followers on Snapchat), put the capabilities of current AI tech to the test and commissioned a replica of… herself (or at least of her voice, for the time being), which she then started selling as a ‘virtual girlfriend’ lonely men can chat with via Telegram, paying US$1 per minute of interaction.
In its first week in business, her revenue hit over US$72,000.
To build the service, she supplied an AI training company with hours of her videos, which were used to create a realistic audio chatbot powered by GPT-4: you send it your own voice messages and receive an audio reply on any topic.
Here’s how it works:
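Conceptually, bots like this chain three models: speech-to-text for the incoming voice note, a persona-conditioned language model for the reply, and voice-cloning text-to-speech for the answer. Below is a minimal, purely illustrative sketch of that loop; every function body is a hypothetical stub, not the actual Forever Voices implementation.

```python
# Illustrative sketch of a voice-companion pipeline:
# transcribe -> persona-conditioned LLM reply -> synthesised voice.
# All function bodies are stubs standing in for real models/APIs.

PERSONA_PROMPT = (
    "You are a friendly virtual companion modelled on an influencer. "
    "Reply warmly and conversationally."
)

def transcribe(voice_message: bytes) -> str:
    """Stub for a speech-to-text model (a Whisper-style transcriber)."""
    return voice_message.decode("utf-8")  # stub: treat the 'audio' as text

def generate_reply(user_text: str) -> str:
    """Stub for a GPT-4-style chat completion conditioned on the persona."""
    return f"Aw, you said: '{user_text}'. Tell me more!"

def synthesise_voice(text: str) -> bytes:
    """Stub for a voice-cloning text-to-speech model."""
    return text.encode("utf-8")  # stub: treat the text as 'audio'

def handle_voice_message(voice_message: bytes) -> bytes:
    """One billable turn: user audio in, cloned-voice audio out."""
    user_text = transcribe(voice_message)
    reply_text = generate_reply(user_text)
    return synthesise_voice(reply_text)
```

In a production bot, each stub would be a network call to a hosted model, and the per-minute billing would wrap around `handle_voice_message`.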
While it seems to be a clever PR stunt that benefits both the influencer and the company behind it, Forever Voices (which also promotes chatbots that let you converse with celebrities and, I assume, may one day offer to preserve the voices of your loved ones), the bot itself is real, and there is no reason it should not continue to exist as a product, given the demand.
The question is, however, what impact is technology like this going to have on human interaction?
Will perfect bots erase imperfect humans?
Caryn makes some bold claims about her bot, but do they really make sense? How likely is the service to prove harmful rather than useful?
How many people are going to sink into the fantasy world of hyper-realistic, virtual AI partners, who never complain and are always available and supportive?
They don’t age, never have headaches or bad moods, and if you’re bored of them, you can just close the app.
It’s going to take a few years before generative video is mastered well enough to make the experience even more realistic (and very immersive when coupled with a VR headset), but it should already be technologically possible for such a virtual persona to send you AI-generated selfies of whatever it pretends to be doing.
And while Caryn’s AI was programmed not to engage in sexually explicit talk (although it reportedly did when prompted), what stops any other company from offering just about anything that springs to mind?
Are we soon going to witness the birth of AI-commerce, with online storefronts displaying thousands of generated faces of partners of both genders and any combination of features, to keep us company whenever we’re lonely, bored or horny?
After all, how many times have your friends been unavailable when you wanted to chat about something? How many friendships have ended when your buddies moved, started families or became too busy with work?
But that ‘hot bot’ is always there for you, like a faultless companion who always has your back, motivates you to do better, is a shoulder to cry on, or keeps you company without you having to even leave your couch.
While it can perhaps never attain the same directness of interaction we experience between living humans, the barriers to entry are so low (compared to making new friends in real life) and the benefits so huge that I can’t see how it would not become a multibillion-dollar market very, very soon.
If not overnight, then at least a few years down the road.
After all, OnlyFans, the controversial, largely pornographic site where anybody can post explicit content for money, generated close to US$5 billion in revenue by the end of 2021, and likely more in 2022 (the numbers aren’t in yet).
But why rely on human creators, if you can generate thousands of realistic human avatars that will do whatever the audience wants at any time they want it (and pocket all the money yourself, without having to split it with anyone)?
Besides, most of our social interactions, even with living humans, are already digital anyway. So if you can’t really tell the difference, does it matter?
Like a knife
A knife is a very useful utensil, allowing us to prepare meals and cut things we’re about to eat. It can also, however, be used to stab someone to death.
The problem is hardly ever with the tool, but with how it is used — and AI is no different.
The same technology that can further cement your status as a lifelong loser unable to speak to girls, resorting to virtual self-pleasure with an AI bot, could be used to immortalise your family: parents, grandparents, siblings and yourself for future generations.
It’s the digital immortality I wrote about two months ago — a remarkable way not only of dealing with grief over the death of your loved ones, whose digital clones will remain with you forever, but also an immersive archive of humanity for centuries to come.
Someone from the year 3268 may be able to speak to anybody from the year 2020 to learn about the Covid pandemic, Donald Trump’s presidency, or how good a football player Lionel Messi was.
If used correctly, it can, indeed, help us heal trauma, improve our well-being, ease some of the loneliness and give us quasi-immortality, so we are never forgotten by our successors.
But stray just a little, and it might just as well become a schizophrenic nightmare in which separating reality from fantasy grows ever more difficult — where your best friends are pay-per-voice chatbots and your family exists only on video, since you never got round to starting your own.
And if that’s the case, there might not be anybody left around to ask our virtual clones anything in the future.
Featured Image Credit: Caryn Marjorie