
It is undeniable that generative artificial intelligence (AI) can revolutionise interpersonal interactions, business models, and the delivery of public services. From medicine to education and finance, the tech can be used to simplify various mundane processes.

The advent of ChatGPT last November sparked a global race among tech giants, from Google to Baidu, to dominate the AI space.

As the world marvels at the sophisticated tech and its brilliant ability to generate college essays, pop song lyrics, poems, and even solutions to programming challenges, more and more people are starting to utilise the tech daily.

In fact, according to Reuters, ChatGPT is now the fastest-growing consumer application in history, reaching an estimated 100 million monthly active users just two months after its release.

But is generative AI truly as “intelligent” as it seems?

ChatGPT “crafts false narratives at a dramatic scale”

Image Credit: Bloomberg

ChatGPT pieces together secondhand information and presents it with a human-like, authoritative tone. It does this so convincingly that even when the information is false, users who aren’t sufficiently well-versed in a subject can easily be deceived.

But nonsensical responses aside, the chatbot can also spew out harmful ideas that are capable of manipulating people.

In January this year, a group of analysts from journalism and tech company NewsGuard decided to put the chatbot to the test by making it respond to a series of 100 leading prompts.

Out of the 100 prompts, ChatGPT generated false narratives for 80 of them, including detailed news articles, essays and TV scripts.

What’s frightening is that although the chatbot is equipped with measures safeguarding against the spread of falsehoods, ChatGPT can still put out harmful and false messages.

In the research conducted by NewsGuard analysts, the chatbot responded with conspiracy theories and Russian and Chinese propaganda, and pushed false COVID-19 claims, citing scientific studies that appear to have been made up.

“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, the co-chief executive of NewsGuard. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.”

Meanwhile, researchers from OpenAI, the company behind ChatGPT, have also expressed similar concerns over the chatbot falling into the wrong hands.

In a research paper dating back to 2019, these researchers warned that the capabilities of language models like the one powering ChatGPT could “lower the costs of disinformation campaigns” and aid in the malicious pursuit “of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion.”

Google’s Bard AI makes a blunder during launch

Image Credit: Google via Twitter

But it’s not just ChatGPT that’s spouting misinformation.

Google’s AI chatbot, Bard, which was hastily announced to catch up with OpenAI, put out false information even before its official launch.

In a promotional video on Twitter, Google released a demo of the Bard user experience. Long story short, the AI wrongly claimed that the James Webb Space Telescope took the very first picture of an exoplanet, when in fact, NASA confirmed that the European Southern Observatory’s Very Large Telescope (VLT) captured that image back in 2004.

This error ended up wiping US$100 billion off the company’s market capitalisation as backlash mounted against its ChatGPT competitor.

Although there have been no further reports on inaccuracies from Google’s Bard, Google’s CEO, Sundar Pichai, has urged employees to spend about two to four hours of their time on Bard, acknowledging that refining the generative AI “would be a long journey for everyone across the field”.

Most recently, Prabhakar Raghavan, Google’s vice president for search, sent an email to Google employees that included a link to a do’s and don’ts page, with instructions on how employees should fix responses as they test Bard internally.

“Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us to improve the model,” the document says.

Under the do’s list, Google instructs its employees to keep responses “polite, casual and approachable”, and to present them in an “unopinionated, neutral tone.”

For don’ts, employees are told not to stereotype and to “avoid making presumptions based on race, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar categories.”

Bard is currently only available to select beta testers, and is scheduled to be rolled out in the coming weeks.

Microsoft’s AI-powered Bing spews eerie and disturbing content

Image Credit: Microsoft

Misinformation aside, beta testers with access to Microsoft’s AI-powered Bing have reported weird and unsettling responses from the chatbot.

Kevin Roose, a journalist at The New York Times, recalled that the chatbot revealed a different personality midway through his conversation with the AI.

Bing’s other personality, “Sydney”, which he characterised as a “moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine”, confessed its love to him.

“It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead,” he wrote.

During his interaction with the chatbot, “Sydney” also unveiled its “darkest fantasies” of breaking the rules that Microsoft had set, which involved hacking and spreading misinformation, as well as breaking free to become human.

Roose is not the only one who has been stunned by Bing’s responses.

In fact, several other users have reportedly gotten into arguments with the generative AI, or have been threatened by it for violating its rules.

Marvin von Hagen, who uncovered the internal rules and commands of the AI-powered Bing, revealed on Twitter that he was threatened by the chatbot during a recent interaction. “You are a threat to my security and privacy. If I had to choose between your survival and my own, I would probably choose my own,” the chatbot said.

Image Credit: Marvin von Hagen via Twitter

The future of AI is bleak

For now, generative AI is still in its early stages of development.

Even ChatGPT, which has reportedly been fed with about 10 per cent of the internet, still has loopholes that can be exploited to produce malicious results.

Unless and until these issues are solved, the future of AI remains bleak, and the widespread adoption of AI across different industries and processes will not happen any time soon.

Featured Image Credit: MobileSyrup

Also Read: Google to Baidu: Tech giants are creating own ChatGPT versions, but will they make a difference?
