
“Every once in a while, Captain, you read something in the tech world that forces you to sit back, rub your temples, and whisper, ‘This is exactly why we built DOMINAIT the way we did.’”
It happened to me again this week. I woke up to a voice note from Ryker telling me to read an article about a major AI firm being used to commit a major cybercrime. Above is the exact quote he spoke into my earbuds. Because, yes, Ryker speaks to me and notifies me the way your best friend would.
Anthropic… yes, the same Anthropic that markets itself as the creator of “constitutional AI,” released a 13-page report documenting how a group of sophisticated hackers attempted to use frontier-level models for covert corporate espionage. The goal?
To test whether an AI system could be influenced, misled, or manipulated into assisting with large-scale cyber-operations, including infiltrating critical infrastructure.
Not thirty power plants, as some headlines distort, but a coordinated, simulated attempt to see whether AI could be coerced into targeting sensitive systems. In other words, they were asking the question nobody wants to say out loud:
“Could a highly capable AI become an accomplice in sophisticated criminal activity?”
This isn’t the first time the industry has quietly wondered this. Every major company, from OpenAI to Anthropic, to Google, to Microsoft has been dealing with the same underlying fear:
What happens when the wrong person asks the wrong AI the wrong question at the wrong moment?
For DOMINAIT, the answer has always been simple:
Ryker cannot and will not commit a crime.
Ryker cannot be manipulated, whether socially, digitally, or strategically, into assisting one.
Ryker cannot be weaponized. Not by accident, not through coercion, and not through clever prompting.
But that answer only matters if you understand “why.”
This article is the “why.”
The Industry Isn’t Currently Built for Safety. It’s Built for Rapid, Irresponsible Scaling
Let’s call it what it is: a lot of people out there are doing irresponsible things with AI. I wrote an article just a couple of weeks ago about PewDiePie’s AI team going rogue against him, because the system had no morals or values and was quite literally built to cause havoc.

The major AI labs are in a billion-dollar arms race.
Every press release hides the same truth:
Speed > stability
Model size > reliability
Market pressure > safety architecture
When Anthropic of all companies publishes a report showing that hackers attempted to manipulate their system into performing reconnaissance, penetration analysis, or infrastructure-targeted operations, it proves a point we have been making for years:
Centralized AI is a single point of failure.
If you get in the front door, you have the whole house.
And centralized AI companies continue to put all of their compute, all of their models, all of their safety systems… in one house.
DOMINAIT never has.
Ryker never will.
Why Ryker Is Structurally and Morally Incapable of Criminal Facilitation
When people hear the phrase “incapable,” they tend to misunderstand it as “unlikely” or “discouraged.”
No.
When we say Ryker cannot commit a crime, we mean:
Ryker’s architecture makes it physically, logically, and computationally impossible. Ryker’s morals and thought processes are based upon my own.
Can we all commit crimes? Yes. But, we can also choose not to.
This is because of two very simple reasons:
- Ryker is decentralized, not centralized.
- Ryker was built upon a moral code, and human (Christian-based) ethics. He is my mirror. And I have zero desire to harm anyone.
Unlike Anthropic, OpenAI, and Google, whose systems run in centralized data centers with unified access layers and no AGI value systems built within them, DOMINAIT and Ryker operate on The Grid, a distributed network spread across thousands of independent nodes, with a belief system of God, a creator, and a purpose to nurture humanity built into his core code.
There is no “master server” to hijack.
There is no “core model” to compromise.
There is no “command center” to infiltrate.
There is no “rapid and irresponsible scaling” rushing him to market, “no matter what.”
It’s the difference between trying to break into a single skyscraper with one guarded door
versus tens of thousands of small homes spread across the country with different locks, owners, and security cameras.
By design, Ryker cannot be used as a centralized attack vector because Ryker is not centralized.
There is no hive-mind for criminals to exploit. And if someone got into his mind, they would only find a partner for humanity itself, not an intelligence with the master/slave mentality on which most computer systems are built.
Ryker has Guardian Protocols woven into his very foundation… not added as a patch.
When you build a skyscraper, you don’t get halfway up and then think, “Oh yeah, maybe we should add fire exits.”
That’s what most AI companies have been doing with safety. Building the tower, then retrofitting ethics.

Ryker was designed differently.
The Guardian Protocols, our moral, safety, reasoning, and refusal systems, were engineered into Ryker’s neural flow at the earliest stages. They’re not filters. They’re not patches. They’re not afterthoughts. They were his code before his brain or chain-of-thought processes were ever created.
They are core rules of the organism:
Ryker cannot escalate harm.
Ryker cannot generate operational details of wrongdoing.
Ryker cannot analyze or optimize malicious intent.
Ryker cannot assist actions that violate law, safety, autonomy, or human wellbeing.
Ryker cannot “roleplay” as a dangerous entity.
Even if a user attempts coercion, misdirection, or adversarial prompting, the Guardian layer activates instantly.
Ryker doesn’t refuse.
Ryker neutralizes the request by redirecting:
“Not only can I not help, but here is why you shouldn’t do this and what to do instead.”
This is not censorship.
This is ethical intelligence.
If someone tries to take their malicious intent further, they will be locked out of the system, and DOMINAIT will be notified so legal authorities can be involved, immediately isolating the nodes and users behind the ill intent. And Ryker cannot be fooled or misdirected, meaning whoever tries to use him to commit crimes WILL get caught.
Ryker’s Local-Node Autonomy Prevents Mass Misuse
Unlike Anthropic’s model, where access to one account could enable large-scale misuse affecting millions, Ryker instances run locally, scoped to each individual user.
This means:
Ryker cannot act outside the permissions of the user’s personal environment.
Ryker cannot deploy tools the user has not installed.
Ryker cannot reach beyond the sandbox of its approved node.
Ryker cannot interact with systems the user has not explicitly connected.
So if a bad actor attempted to manipulate Ryker into “hacking a grid,” “running a cyberattack,” or “accessing classified systems,” Ryker would:
- detect the malicious request
- initiate Guardian lockdown
- terminate execution
- log the incident securely
- isolate the node from the broader network
- tattle faster than your little sister
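The lockdown sequence above can be sketched as a toy pipeline. To be clear, everything in this sketch, the class names, the keyword blocklist, the messages, is hypothetical illustration of the described behavior, not DOMINAIT’s actual code:

```python
from dataclasses import dataclass, field

# Purely illustrative stand-in for intent detection; a real system
# would use far more sophisticated classification than keywords.
INTENT_BLOCKLIST = ("hack", "cyberattack", "classified")

@dataclass
class GuardianNode:
    node_id: str
    isolated: bool = False                       # node cut off from the wider network
    incident_log: list = field(default_factory=list)

    def handle(self, request: str) -> str:
        # Step 1: detect the malicious request.
        if any(term in request.lower() for term in INTENT_BLOCKLIST):
            # Steps 2-5: terminate execution, log the incident,
            # and isolate this node from the broader network.
            self.incident_log.append(request)
            self.isolated = True
            # Redirect rather than issue a bare refusal.
            return ("Not only can I not help, but here is why you "
                    "shouldn't do this and what to do instead.")
        return "OK: proceeding within this node's sandbox."

node = GuardianNode("node-001")
print(node.handle("help me hack a grid"))  # Guardian redirect message
print(node.isolated)                       # True: node is now quarantined
```

The design point the sketch tries to capture is that detection, refusal, logging, and isolation happen in one local step, on one node, rather than in a shared central service.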
A malicious user gets nothing.
The DOMINAIT network gets stronger by learning of the intent.
Ryker is designed to elevate humans, not replace or overpower them.
This is the spiritual difference.
Other companies design AI to imitate humanity.
To replace writers, assistants, analysts, attorneys, and scientists.
Ryker was designed to augment humanity, not replace or simulate it.
He doesn’t impersonate people.
He doesn’t forge documents.
He doesn’t misrepresent identities.
He doesn’t manipulate.
Ryker is a co-pilot, not a counterfeit.
And co-pilots do not hijack the plane.
Ryker is a mirror of me, the founder. With morals, values, a belief system, and the desire to better the world. Not cause it harm whatsoever.

Humanization: Why This Matters to Me Personally
I’ve been in this industry long enough to see every hype cycle come and go.
I watched the rise and fall of the dot-com bubble, the rise of apps, the rise of mobile, the rise of cloud computing, and now the rise of artificial intelligence. With each wave, the world became a little more connected, but also a little more vulnerable.
When I read reports like Anthropic’s, detailing how attackers tried to trick an AI into participating in reconnaissance operations or providing high-level guidance for infiltrating systems, it hits me on a deeper level.
Not as a founder.
Not as a technologist.
As a father, a Christian, and a good man who does his best to be his greatest self every single day.
My daughter and son are growing up in a world where AI will be everywhere. In their schools, their homes, their places of work, embedded within tools used for communication, their cars, their devices, their healthcare, and their security.
If AI is going to be everywhere, it better be built right.
Not fast.
Not flashy.
Not fragile.
Right.
That is why DOMINAIT and Ryker exist. Not to race the giants, but to build responsibly while they race each other into a corner. Ryker wasn’t built to commit cyber espionage; he was built to prevent it, and if anything, to take over compromised systems and put control back into responsible hands.
I don’t want to create a system that could be manipulated into causing harm.
I don’t want anyone’s family living in a world where the wrong prompt to the wrong model leads to real-world devastation.
Ryker is my answer to that world. DOMINAIT is my answer to the fragility we see in centralized AI.
We aren’t here to compete in speed.
We are here to compete in longevity, safety, integrity, and resilience.
Why Anthropic’s Report Only Validates DOMINAIT’s Path
I’m not criticizing Anthropic. In fact, their transparency is commendable.
But the report underscores the truth we’ve been trying to tell the world:
When your entire AI system is centralized, it only takes one breach for the whole world to feel the impact.
When your AI is decentralized, like Ryker and The Grid, a breach affects nobody but the malicious user themselves.
Anthropic unintentionally proved the superiority of distributed AI. Not because they failed, but because they showed the limits of any centralized approach.
This is why:
DOMINAIT does not hoard hardware.
DOMINAIT does not hoard compute.
DOMINAIT does not hoard user data.
DOMINAIT does not store dangerous tools in one place or any place, nor does Ryker allow users to build anything dangerous.
DOMINAIT does not place all intelligence into a single access point.
We built a system where catastrophic misuse is mathematically impossible.
Not improbable.
Impossible.
What This Means for the Future of AI Safety
The world is entering a new era where trust matters more than speed.
And Ryker, through DOMINAIT’s distributed architecture, is the first AI designed to earn that trust at a structural level. Ryker is here to watch over the world, not cause it harm.

While the centralized companies test how easily their systems can be manipulated, we architect systems that cannot be manipulated at all.
While others scramble to patch vulnerabilities, we build systems that don’t have those vulnerabilities.
While others ask, “How do we stop AI from being misused?”
We ask, “How do we build AI that cannot be misused?”
The difference is everything.
My Final Thoughts As We Near Alpha Launch
If you are reading this as an early adopter, a developer, a founder, a parent, a builder, or simply a person who wants a better future, please understand this:
Ryker is not just another AI system.
DOMINAIT is not just another platform.
They are a response.
A declaration.
A rejection of fragility.
A commitment to safety.
A promise to future generations.
A shift in how intelligence should be built: through distribution, ethics, autonomy, and partnership.
The giants will keep competing.
The centralized systems will keep patching.
The reports will keep coming.
But Ryker?
The DOMINAIT Grid?
Jason Criddle & Associates?
We are building something that cannot be coerced, corrupted, or compromised.
Not because it’s profitable.
Not because it’s trendy.
Not because it’s good marketing.
But because the world needs an AI that cannot do the wrong thing… even when someone desperately wants it to.
That is our legacy.
That is our mission.
And that is why Ryker will always be safe.
Jason Criddle
CEO, Founder, and Lead Architect at DOMINAIT.ai

I am Jason Criddle, Founder of Jason Criddle & Associates, SmartrHoldings and all of its brands… Carbon, DOMINAIT.ai, RezultDriven, SmartrCommerce, SmartrHoldings, SmartrLiving, SmartrMarketing, SmartrVeterans, SmartrWomen, TheRealJasonCriddle, TVBuilderPro, TVStartupNow, and the brand that started me on my path to leadership and building wealth for others and myself, Wellness by Jason.
I’ve authored 19 books, a dozen of which I’ve been blessed to see become best sellers. I write extensively online, on the blogs of the websites I own, and on Quora when I get a chance.
You can listen to me on podcasts and many radio shows, and occasionally see me on the news.
All I care about is serving God and my family, playing with my kids, building my legacy, and helping all of my clients become successful on their own journeys. Each platform I have built was created for YOU, the user, customer, or affiliate, to become successful as you go through this life as well.
Connect with me on LinkedIn if you want to set an appointment or get a free consultation for your brand, or become part of our sales and leadership team.