
Cutting through the hype: ChatGPT is impressive, but it is not AI – and may never be


Disclaimer: Opinions expressed below belong solely to the author.

Ever since its release in November 2022, ChatGPT has been on the lips of millions around the world — and, fortunately, for all the right reasons.

The digital chatbot can answer almost any question or provide advice in a seemingly natural, human way, as well as write original pieces of prose or poetry. It is so impressive that the ease with which it provides access to information could threaten the supremacy of the mighty Google Search.

However, for all the hype surrounding OpenAI’s promising child, I think it would be good to strike some balance in reporting about it, starting with the term most frequently thrown around: artificial intelligence (AI).

Few phrases are as abused as AI. I believe this stems from the fact that the term is used in two distinct senses, research and application, which are often confused with each other.

It does not help that companies like to slap the latest fancy term onto their products to appear better than they are (think of “turbo”, which was mindlessly attached to everything in the 1980s and early 90s to make even the most ordinary product look special).

Research into AI is tasked with building all of the elements that could one day lead to the creation of “intelligent”, or even thinking, machines equipped with the ability to perceive the world and process information in a human-like way.

This would suggest that we should reserve the term “AI” for technologies that show some signs of reasoning rather than mere data processing, which is still all we get from ChatGPT (and pretty much everything else that labels itself “AI-powered”).

Machine learning — a more accurate term describing most of what we see so far — is merely a part of AI research, after all.

To illustrate my point, let me give you an example of ChatGPT’s replies on the currently contentious topic of gender:

Screenshot of ChatGPT’s replies on the topic of gender

It seems quite clear that in the current environment of culture wars, the bot’s creators tried to make sure it provides politically correct answers to contentious topics (likely to avoid controversy and dodge possible bullets flying from either the ideological left or the right).

However, it also means that the machine doesn’t process data all on its own: it is corrected, or even censored, by its human creators to produce a desired output.
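
To make the idea concrete, here is a minimal sketch of what such a human-curated override layer could look like. It is purely illustrative: OpenAI has not published how its moderation works, and the SCRIPTED_ANSWERS table and moderated_reply function below are hypothetical names invented for this example.

```python
# A minimal, hypothetical sketch of a human-curated override layer.
# This is NOT OpenAI's published implementation; it only illustrates the idea.

SCRIPTED_ANSWERS = {
    # human-written responses for topics the operators deem too sensitive
    "gender": "Gender is a complex and multifaceted concept that can be "
              "difficult to define precisely.",
}

def moderated_reply(prompt: str, model_reply: str) -> str:
    """Return the model's own reply unless a human-scripted answer applies."""
    for topic, scripted in SCRIPTED_ANSWERS.items():
        if topic in prompt.lower():
            return scripted   # the human override wins over the model
    return model_reply        # otherwise, pass the model's output through
```

Whatever the real mechanism looks like, the observable effect is the same: on flagged topics, the output reflects the operators’ choices rather than whatever the model would have produced on its own.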

At the same time, the reflexive emphasis in the second reply that “all mammals” (i.e. humans as well) have two genders shows not only that human oversight wasn’t exactly foolproof but, even worse, that the chatbot itself failed to logically incorporate information it already had into its answers.

This includes both the politically correct responses fed to it during pre-training and the logical consequence of its own earlier statement, made in reply to the first question in the very same conversation with me (if gender in humans is hard to define, then “all mammals” is a false statement).

Similar things happened to other users when, for instance, they asked for jokes about men and women:

ChatGPT can joke about men, but not women

Again, the bot is politically correct where it was clearly programmed to be, yet thoroughly incapable of drawing very basic logical conclusions across similar questions (i.e. if we can’t joke about women, we shouldn’t joke about men either).

I don’t think I’m being unreasonable in expecting that anything bearing the label of “artificial intelligence” should be able to process and recognise the logical consequences of what it wrote just seconds earlier, and use them when producing its next response.

This already shows that information processing in ChatGPT is somewhat flawed, and that the understanding of earlier entries in a conversation, touted by its creators, is clearly not fully functional (if not outright broken).
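
For context, chat systems of this kind typically “remember” earlier turns by simply re-sending the whole conversation as input with every new message. The sketch below assumes that general technique (OpenAI has not detailed ChatGPT’s internals), and generate stands for any text-completion function:

```python
# A minimal sketch of conversational "memory", assuming the common technique
# of replaying the full history with each turn. Hypothetical, for illustration.

history: list[str] = []

def chat_turn(user_message: str, generate) -> str:
    """Send the full conversation so far to `generate`, a text-completion function."""
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    # Earlier turns are present in the input, but nothing forces the model
    # to stay logically consistent with them.
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```

The earlier statements are right there in the model’s input; nothing, however, forces it to draw logical conclusions from them, which is exactly the failure on display above.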

Some people will point to the many disclaimers made by its creators and argue that the bot simply hasn’t been taught everything properly yet, but I don’t see how this is a problem of missing (or wrong) information rather than a flaw in basic functionality: learning from inputs it has already been given.

Is the cure worse than the disease?


The reason I’m highlighting this is not to nitpick the faults of an unfinished product, but to observe that these sorts of issues may not be easy to fix, if they can be fixed at all, calling into question whether the “intelligence” label will ever apply to something like ChatGPT.

It is clear that its creators face the unenviable task of balancing the independence of the artificial “mind”, which learns from the ocean of data it is fed, against the need to appease various groups of the human public that may deem certain answers inaccurate or downright offensive.

Ultimately, responses to many questions may have to be moderated by humans, further undermining the system’s own “intelligence” and hampering its future development.

After all, we could just as well teach regular assistants like Siri, Google Assistant or Cortana to serve appropriate canned answers to certain questions and be done with it.
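
That approach requires no intelligence whatsoever, and a hypothetical sketch of it fits in a few lines (the CANNED table and assistant function are invented for illustration):

```python
# A deliberately dumb "assistant": hand-taught answers to specific questions.
# No learning or reasoning is involved, yet the output looks just as
# carefully "appropriate" as a moderated chatbot's.

CANNED = {
    "tell me a joke about women": "I'd rather not joke about any group of people.",
    "tell me a joke about men": "I'd rather not joke about any group of people.",
}

def assistant(question: str) -> str:
    # Normalise trailing punctuation, then look up the scripted reply.
    return CANNED.get(question.lower().strip("?! ."),
                      "Sorry, I don't have an answer for that.")

print(assistant("Tell me a joke about men?"))  # the scripted refusal, every time
```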

Can any system that is dependent on human moderation really be deemed “intelligent”? And if so, then to what extent can humans interfere with it before it’s nothing more than a bot like thousands of others?

In other words, trying to “fix” it might well lead to its destruction.

Are we ready to let a machine “think” for itself? Will we ever be? In the current climate, probably not.

AI = An Interface

This is why, in the case of ChatGPT, the only thing the abbreviation AI can stand for is “an interface”, nothing more.

Because that’s what it is.

Fundamentally, it is a technology that processes the information it has access to in order to provide the most relevant answers to the queries entered (much like Google’s search engine), wrapped in an impressively advanced natural language interface, which is its standout feature.
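
To make “providing the most relevant answer to a query” concrete, here is a toy bag-of-words ranker, a crude stand-in for what search engines do and emphatically not how ChatGPT works internally; the corpus and function names are invented for the example:

```python
import math
import re
from collections import Counter

# A toy corpus standing in for "the information the system has access to".
DOCUMENTS = [
    "ChatGPT is a chatbot developed by OpenAI and released in November 2022.",
    "Machine learning is a subfield of artificial intelligence research.",
    "Google's search engine ranks web pages by relevance to a query.",
]

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def most_relevant(query: str) -> str:
    # Pure data processing: pick the document most similar to the query.
    q = bag_of_words(query)
    return max(DOCUMENTS, key=lambda doc: cosine(q, bag_of_words(doc)))

print(most_relevant("who developed ChatGPT?"))  # matches the OpenAI document
```

ChatGPT generates text rather than retrieving documents, but the underlying activity, mapping an input to a statistically relevant output, is still data processing rather than reasoning.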

That interface is also its key competitive advantage which, as I stated before, may be significant enough to undermine the world’s dominant search engine: it offers an enormous step up from sifting through dozens of links in search of the right piece of information.

But it isn’t as monumental a breakthrough in building intelligent, thinking machines as it is made out to be. It has the potential to become one, but we humans might not let it realise that potential in full.

