Vulcan Post

Stunning Google leak: Forget ChatGPT – Why Meta, not OpenAI, is winning the AI war

Disclaimer: Opinions expressed below belong solely to the author.

In the six months since the ground-breaking debut of ChatGPT, built on the third generation of OpenAI’s GPT models, we have grown accustomed to seeing the company behind it — OpenAI — and its bot plastered all over the media as a model example of what the future holds.

Money started pouring in, with Microsoft growing its bet on OpenAI to US$10 billion earlier this year, putting the company’s valuation at around US$30 billion or more by now.

Meanwhile, Alphabet/Google, formerly considered the leader in the race, has become the butt of public jokes over its botched, hasty launch of Bard AI, widely regarded as evidence of how the trillion-dollar giant was caught off guard by competition — and could possibly see its entire business model (based on access to information via google.com) threatened.

Few even considered that Mark Zuckerberg — distracted by his metaverse obsession — could become a major contender… until something happened that flipped the table upside down.

Stroke of genius or luck?

A few days ago, an internal document titled “We Have No Moat, And Neither Does OpenAI”, authored by one of Google’s researchers, was leaked on a public Discord server, sparking a debate about the future of AI — particularly as a closed technology, closely guarded by mega corporations.

While it is obviously not the official stance of the entire company, it makes a ton of sense, especially if we consider where everybody stands today and where most of the real-world innovation in the mass use of AI has originated thus far.

“We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race, and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customisable, more private, and pound-for-pound more capable.

They are doing things with $100 and 13B params that we struggle with at $10 million and 540B. And they are doing so in weeks, not months. This has profound implications for us:

  • We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritise enabling 3P integrations.
  • People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
  • Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”
Google “We Have No Moat, And Neither Does OpenAI”

Simply put, the open source community was able to rapidly iterate on the basis of available information — far more quickly than OpenAI and Google, which depend on extremely large and complex in-house models that nobody else has access to.

But how was that possible? How could a bunch of nerdy hackers leapfrog multibillion-dollar giants which had spent years developing their language models? They couldn’t have done it all from scratch, could they? Surely they had to have something to work from first?

Yes, they did: Meta’s own language model, which was leaked on 4chan in March 2023.

Whether the leak was a deliberate decision by the company or a hack (be it internal or external), it gave the global community firsthand access to the weights of a proprietary model — even if a somewhat underdeveloped one at the time.

Within two months, enthusiasts had filled the gaps all on their own.

“At the beginning of March, the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments. Here we are, barely a month later, and there are variants with instruction tuning, quantisation, quality improvements, human evals, multimodality, RLHF, etcetera, many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people.

The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Google “We Have No Moat, And Neither Does OpenAI”
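To make the memo’s point concrete, here is a minimal, illustrative sketch of the kind of one-person, one-evening experiment it describes: wrapping a LLaMA-class checkpoint in LoRA adapters so that only a tiny fraction of the weights has to be trained. The checkpoint name, hyperparameters and toy training example are assumptions for illustration, not details from the memo, and the sketch assumes the open source transformers and peft Python libraries.

# Rough sketch only; assumes Hugging Face transformers + peft are installed
# and a LLaMA-style checkpoint is available locally or on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"  # placeholder id for a LLaMA-class checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# LoRA: freeze the original weights and train small low-rank adapter matrices
# injected into the attention projections.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# One toy instruction-tuning step on a single hand-written example.
example = (
    "### Instruction:\nExplain why open models iterate quickly.\n"
    "### Response:\nBecause anyone can fine-tune and redistribute them."
)
batch = tokenizer(example, return_tensors="pt").to(device)
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-4
)

model.train()
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()

None of these specifics matter in themselves; what matters is that the entire loop fits on a single consumer GPU, which is exactly the shift the memo is describing.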

Anybody can be a valuable contributor today, and the community itself decides what succeeds and what doesn’t.

This is the same trajectory that Stable Diffusion has followed over the past year or so as the only mainstream open source image generation model that anybody can download and tinker with on their own computer.

Hundreds of websites, marketplaces and communities have sprouted as a result, with thousands if not millions of people training and fine-tuning their own models at a scale and pace that no single organisation could match.
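To illustrate how low that barrier sits, here is a minimal sketch of the local workflow, assuming the open source diffusers Python library and a consumer GPU; the checkpoint id and prompt are only examples.

# Rough sketch only; assumes the diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # the widely mirrored open checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # fits on a single consumer GPU

image = pipe(
    "a watercolour sketch of a data centre at sunrise",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("output.png")

Swapping the checkpoint string for any of the thousands of community fine-tunes changes the model entirely, with no permission needed from anyone, which is precisely why the ecosystem around Stable Diffusion grew so fast.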

Meanwhile, OpenAI’s own DALL-E 2 has been somewhat left behind, and the only closed-source competitor still putting up a fight is Midjourney, trying to outrun the competition coming from half the world working on their own improvements to Stable Diffusion.

In the aftermath of the leak, Meta — willingly or not — has managed to straddle both ends of this spectrum in the language model space.

It is obviously a giant, multi-billion-dollar, for-profit corporation employing tens of thousands of people of its own — which is, nonetheless, enjoying millions of man-hours provided entirely for free by the global developer community, tirelessly building on top of its technology!

“Because the leaked model was theirs, they have effectively garnered an entire planet’s worth of free labour. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products.

The value of owning the ecosystem cannot be overstated. Google itself has successfully used this paradigm in its open source offerings, like Chrome and Android. By owning the platform where innovation happens, Google cements itself as a thought leader and direction-setter, earning the ability to shape the narrative on ideas that are larger than itself.

The more tightly we control our models, the more attractive we make open alternatives. Google and OpenAI have both gravitated defensively toward release patterns that allow them to retain tight control over how their models are used. But this control is a fiction. Anyone seeking to use LLMs for unsanctioned purposes can simply take their pick of the freely available models.”

Google “We Have No Moat, And Neither Does OpenAI”

If Zuckerberg (or someone in his circle) did not plan this, then he may have just accidentally scored a winning lottery ticket — one which could have far greater value than his success with Facebook.

The New Google?

The parallels with how Google has become the giant that it is today are quite striking.

It has grown so big by fostering the organic growth of platforms: providing useful tools to millions of people largely free of charge, buying their loyalty in the process, and becoming a profitable middleman offering value-added services between interested parties (starting with the most obvious: advertising).

It controls the majority of the global mobile OS market precisely because of the open source nature of Android, which countless companies (big and small) have iterated on — in a pond that Google controls and is then able to monetise (whether through advertising or through services like its own app store, cloud computing, business solutions, etc.).

How many people would use Google’s search engine if there were a fee to use it? Would Android have become the global standard on 80 per cent of smartphones? Would YouTube have been able to monopolise video as it does today?

Meta’s leaked language model — even if it’s currently inferior to the ones powering ChatGPT or Bard — is gradually becoming the standard for all tinkerers out there.

And while the leak was “technically” illegal and nobody can commercialise services built on top of something obtained in breach of the law, all it takes for Meta to capitalise on it is to establish a regulated marketplace of its own.

Building a home for all of this grassroots innovation, where it can be monetised under one banner while Mark Zuckerberg pockets the commission.

At the same time, the company is at liberty to choose the most promising solutions out there and incorporate them into products of its own, since they all share the same underlying technology.

Meanwhile, OpenAI and Google are stuck coming up with everything themselves, iterating at a much slower pace without the community’s input.

The value of secrecy in this business is greatly overstated, as people leave to work for competitors all the time. There are no truly unique ideas, and with so many smart people involved, all of the companies are bound to converge in the long run.

The winners will not be decided by who has done a better job, but rather by who succeeds in the popularity contest.

This is a story we all know too well. Google wasn’t the first search engine, Facebook wasn’t the first social network, Apple wasn’t the first computer maker, Microsoft didn’t write the first operating system — and so on. Why should it be different with AI?

Of course, Meta can’t just sit idly by if it wants to make the most of this unexpected opportunity. But if Zuckerberg can divert the obscene amounts of money away from the metaverse that nobody wants and into the AI that the whole world may soon depend on, it might just be enough to help him score the huge victory he has been seeking so desperately for the past few years.

Featured Image Credit: Generated with Midjourney
