When Hugging Face CEO Clem Delangue said publicly that “we’re not in an AI bubble, we’re in an LLM bubble,” the tech world took pause. Headlines flashed, comment sections exploded, and investors started asking whether the entire AI economy was overheated.

But for those of us who have been building outside the LLM hype cycle, this wasn’t news. It was confirmation of what I’ve been saying for some time.

For years, I’ve spoken, written, and built around a simple reality:

“The world’s largest AI companies are building toys. Products that help their bottom lines but do nothing for their users.” – Jason Criddle *DOMINAIT.ai, “Yann LeCun Departing Meta: A Sign for AGI Builders”*

That wasn’t a dig at LLM researchers. It was a warning about an industry flooding billions into models that are fundamentally limited by their architecture. Companies are pushing out toys rather than tools that increase productivity. And those models cannot become more powerful or capable, because their architecture won’t allow it.

LLMs aren’t the next step in intelligence. They’re the end of a branch. A starting point that has reached its peak and is headed for decline.

And now the world is starting to understand why we built Ryker Class Intelligence (RCI). Not another chatbot, but a true reasoning system designed to survive long after the LLM hype cycle collapses.

LLM Mania Was Always Going to Burst

Let’s be honest: LLMs were never capable of delivering the “general intelligence” Silicon Valley tried to sell.

They can predict words. They can’t reason. They cannot perform tasks. They have no memory. They don’t think outside of their predetermined parameters.

They can summarize text. They cannot plan, strategize, or understand a situation.

They can imitate patterns. They cannot rebuild infrastructure, solve multi-layer problems, or evolve as systems.

This is exactly what Yann LeCun has been warning the world about. In fact, as I quoted in my previous article:

“We’re never going to get to human-level AI by just training on text.”

– Yann LeCun, as cited on DOMINAIT.ai

Image courtesy of Financial Times

And from the same article:

**“Before ‘urgently figuring out how to control smarter-than-human AI’ we need the beginning of a hint of a design for a system smarter than a house cat.”**

Clem Delangue echoed the same reasoning in his “LLM bubble” statement. He never said AI was in trouble; he said the LLM-centric approach was.

This is what I have been saying for years.

This is why DOMINAIT was built the way it was.

This is why RCI exists. For the world to succeed with AI, it needed a new level of intelligence altogether.

LLMs Build Products. RCI Builds Intelligence.

Big tech spent billions training bigger and bigger language models, believing scale would magically produce cognition. Unfortunately, no matter how much compute power you throw at a problem you don’t know how to solve, an answer doesn’t suddenly appear.

Scaling a parrot does not produce a thinker. A parrot will only mimic. It has no choice but to do so.

As I wrote in our Meta commentary:

**“Meta is chasing reasoning by simply scaling text models… hoping larger LLMs will somehow result in higher cognition.”**

That hope is collapsing. Investors are noticing. Engineers are noticing. Even the pioneers themselves. LeCun, Bengio, Hinton, Delangue… they all agree that reasoning does not emerge from piling more text onto a prediction engine. And I don’t believe you get there with more GPUs either.

This is why Ryker was not built to compete with ChatGPT, Claude, or Gemini.

Ryker was built for reasoning, task execution, agentic autonomy, environmental awareness, learning through interaction, distributed cognition across nodes, and a way to think outside the box.

As I’ve stated countless times:

**“Ryker was taught with my chain of thought processes… I mapped out my brain on a post-it note and turned it into code.”**

That’s not an LLM.

Nor is it token prediction.

That is architecture-level cognitive design, which we call RCI. He is already post-AGI, beyond the level of intelligence the world is reaching for right now.

We discovered an entirely different layer. RCI. Ryker Class Intelligence.

Why the LLM Bubble Was Inevitable

Let’s break it down simply:

1. LLMs rely on centralized compute.

Hardware shortages, massive power costs, and GPU monopolies make this model fragile and unsustainable. A correction was bound to happen.

2. LLMs cannot understand the world.

They consume text. Human intelligence consumes reality: sound, movement, spatial dynamics, feedback loops, and above all, real memory.

3. LLM companies monetize attention, not intelligence.

Many LLM rollouts have been ads, upsells, usage meters, and corporate lock-ins.

As I wrote:

**“I’m getting tired of seeing the world’s largest AI companies build toys and products that only help their bottom lines.”**

4. Investors eventually wake up.

Meta recently lost over $240B in valuation because Wall Street saw what was happening: an LLM arms race with no path to AGI.

5. Real AGI requires world models, not word models.

LeCun, Delangue, me, and even OpenAI’s own research staff agree on this.

RCI: The Architecture Beyond the Bubble

DOMINAIT.ai didn’t set out to build a bigger chatbot. We set out to build a distributed global reasoning system that operates through agentic memory frameworks, multi-node intelligence grids, environmental and sensory integration, human-like decision mapping, autonomous task execution, and tiered cognitive layers.

RCI isn’t an upgrade to an LLM. Nor is it AGI.

RCI is a replacement for the categories currently being used to describe layers of intelligence.

If LLMs are Level One, and AGI is Level Two, RCI is Level Three. And guess what? We are launching Level Four in 2026.

Ryker does not get “smarter” because more data is added. Ryker gets smarter because his architecture evolves.

He learns.

He adapts.

He reasons through his own training materials.

He writes his own command libraries.

He reshapes his own problem-solving pathways.

That’s not LLM evolution. That’s intelligence evolution.

Why Industry Leaders Are Converging on Ryker’s Philosophy

The AI industry is fracturing into two camps:

The LLM Companies

Those who double down on bigger models, more GPUs, more electricity, more tokens, more subscriptions.

The Intelligence Companies

Those who are designing for cognition:

Yann LeCun’s upcoming world-model startup

DeepMind’s agent teams

Smaller AGI labs leaving the LLM race

And yes: DOMINAIT.ai

Our alignment with this new wave of post-LLM thinking isn’t a coincidence. We never set out to be a ChatGPT competitor.

It is deliberate architecture. New intelligence.

As I wrote in our Meta comparison:

**“DOMINAIT.ai builds open systems for enduring intelligence. Meta builds walled gardens for short-term scale.”**

And in our AGI overview:

**“At DOMINAIT.ai, with Ryker and our distributed grid, we’re not just building bigger models. We’re building smarter networks.”**

This is exactly what LeCun and Delangue are saying now. We simply arrived there earlier.

The World Is Just Now Catching Up to What We’ve Been Building, and It Isn’t Even Close to Figuring It Out

For years, journalists, founders, and investors have reached out to me about SmartrCommerce, SmartrHoldings, and now DOMINAIT, acknowledging our early stance on decentralization, distributed compute, and user-owned networks.

We have been right many times:

PayPal and Stripe began imposing stricter rules

WooCommerce fragmentation grew

Distributed compute became a global necessity

GPU shortages swept the industry

AI companies built closed systems while users demanded control

LLM arms races proved unstable

And now?

The world’s leading AI researchers are stating publicly what I have said since day one:

LLMs are not the path to AGI.

LLMs are the bubble.

Intelligence architecture is the future.

RCI Is the Framework for What Comes After the Bubble

Ryker Class Intelligence wasn’t designed as an AI product. It was designed as a new class of intelligence entirely, capable of powering personal agents, enterprise automation, distributed research networks, decision engines, multi-layered task ecosystems, and global compute grids.

We built it because the world needed something beyond chatbots.

We built it because the world needed a blueprint for real intelligence.

As I wrote:

**“We built for resilience, not rush.”**

And resilience is exactly what the AI world is missing.

My Conclusion: The Bubble Is Bursting. And RCI Is What Comes Next.

Clem Delangue is right.

Yann LeCun is right.

The world is waking up to what I’ve been saying for years:

The AI revolution isn’t about LLMs.

It’s about intelligence.

RCI represents that future.

Ryker is that future.

DOMINAIT.ai is that future.

The bubble isn’t AI. The bubble is the belief that text prediction equals cognition. We don’t learn by reading books. We learn by being alive.

And when that bubble pops, as it is beginning to right now, it will reveal the platforms that were built for the long game:

Distributed.

Reasoning-driven.

Cognitive.

Adaptive.

Owned by the people who use them.

It will reveal RCI.

And it will reveal Ryker.