Micro Math Capital

“The stock is not the company, the company is not the stock.” ~ Jeff Bezos

Many AI startups kickstarted their journeys on hype and rapture; VERSES AI took a different path. There have been moments of brilliance, but for the most part, the company has worked behind the scenes, overshadowed by competitors basking in the spotlight.

This lack of recognition has weighed on its stock price, which has struggled to climb back to its all-time high from July 2023. 

VERSES AI Stock Chart | Trading View

Some of this can be attributed to VERSES’ leadership, who’ve kept their cards close to their chest. The company’s flagship product, Genius™, remains accessible only to a select group of beta testers and commercial partners. This limited visibility has left shareholders, businesses, and developers alike struggling to grasp its true potential.

However, the larger issue lies with the industry’s fixation on generative AI. The widespread euphoria surrounding these systems has blinded many businesses and investors to alternative AI approaches.

Despite warnings from AI luminaries like Yann LeCun, Sam Altman, and Ilya Sutskever about the fundamental limitations of generative AI, companies continue to scale up data centers while investors follow suit, chasing the hype without considering the technology’s ceiling.

Hype Cycle for Artificial Intelligence, 2024 | Gartner

But here’s the twist: VERSES AI is not riding the generative AI wave. Rather, it is quietly building something that has the potential to run laps around the competition.

Pioneered by Chief Scientist Dr. Karl J. Friston, VERSES is developing intelligent systems based on Active Inference—a revolutionary framework, based on first principles, that has the potential to redefine what artificial intelligence can do.

Just this week, we caught a glimpse of what VERSES is capable of. Genius™ delivered a jaw-dropping performance in a head-to-head challenge against OpenAI’s top model, o1-Preview. The results? Genius™ was 140 times faster and 5,260 times cheaper to run than its competition.

This wasn’t just a win, it was a seismic blow to the status quo; one that could reverberate across the entire AI landscape. The market seems to agree: VERSES’ stock has soared 120% over the past five days in response to this breakthrough.

So, what makes VERSES AI different? Why is its technology so groundbreaking? And how does its approach overcome the limitations of traditional AI systems?

This article dives into those questions, unpacking the transformative potential of VERSES AI, the science of Active Inference, and the shortcomings of current AI paradigms. Prepare to discover why VERSES could be the most exciting underdog in the AI race.

The Limits of Today’s Top AI Models

Machine learning and generative AI may dominate headlines, but they operate within well-defined constraints. Their potential is tethered to massive amounts of data and computing power, and while impressive in certain tasks, they fall short in critical ways.

At the core of these systems lies a fundamental limitation: they cannot actively learn or adapt. Because they are trained exclusively on historical data, their intelligence is static, and incremental improvements require economically prohibitive resources.

To push the boundaries of their models, companies like OpenAI develop entirely new iterations—GPT-3, GPT-4, and perhaps GPT-5—built on the philosophy that scale is all you need. However, this “bigger is better” approach comes with steep costs:

  • Higher Capital Investments: Scaling up requires ever-larger datasets, more powerful hardware, and expanded infrastructure.
  • Soaring Operational Costs: Variable expenses for each query compound quickly, burdening both AI providers and their customers.

The Extreme Cost of Training AI Models | Statista

For instance, OpenAI reportedly spends $700,000 daily to operate ChatGPT. Its o1-preview model has input costs of $15 per million tokens and output costs of $60 per million tokens. Factor in millions of queries daily, and the costs compound quickly.
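To make those per-token prices concrete, here is a minimal cost calculation using the published o1-preview rates; the per-query token counts and daily query volume are hypothetical assumptions for illustration only.

```python
# Illustrative cost math using o1-preview's published token prices.
INPUT_PRICE_PER_M = 15.00   # USD per million input tokens
OUTPUT_PRICE_PER_M = 60.00  # USD per million output tokens

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single query at the prices above."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Hypothetical query: 2,000 input tokens and 1,000 output tokens
per_query = query_cost(2_000, 1_000)   # 0.03 + 0.06 = 0.09 USD
daily = per_query * 5_000_000          # at an assumed 5M queries/day
print(f"${per_query:.2f} per query, ${daily:,.0f} per day")
```

Even at nine cents per query, an assumed five million daily queries would cost $450,000 per day in inference alone, which is the same order of magnitude as the reported ChatGPT figure above.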

Consider the case of Latitude, creators of AI Dungeon (Jaya Padmanabhan). At its peak in 2021, the company spent nearly $200,000 per month using OpenAI’s generative AI and Amazon Web Services to process millions of user queries. To cut costs, Latitude switched to AI21 Labs’ cheaper language model, reducing monthly bills to under $100,000. This highlights a major vulnerability: when costs rise, customers can—and will—switch providers.

Despite ongoing advances in AI hardware and design efficiency, such as Nvidia’s projection that AI will become “a million times” more efficient over the next decade, the intelligence of generative AI models remains capped by the availability of data and computing power.

Even with technological progress, generative AI systems face insurmountable constraints:

  1. Static Intelligence: These models cannot learn or reason in real time.
  2. Context Blindness: They often produce hallucinations—outputs that seem plausible but are factually incorrect—because they lack true comprehension of their data.
  3. Physical and Resource Limits: Intelligence gains require exponential increases in data and computing, both constrained by human activity, energy production, and material availability.

This approach has inherent flaws. Generative AI may excel at content generation or language processing, but it cannot handle dynamic, high-stakes environments like performing life-saving surgeries or navigating autonomous vehicles in unpredictable conditions. When faced with uncertainty, these systems fail—sometimes with catastrophic consequences.

If we continue idolizing generative AI without addressing its limitations, we risk stagnating innovation and falling short of achieving true artificial general intelligence (AGI). To unlock AI’s full potential, we need a more adaptive, cost-effective solution—one that doesn’t rely on brute force scaling but instead rethinks how intelligence is developed and applied.

But What About Reasoning Models like o1-Preview?

Faced with the inherent limitations of large language models (LLMs), companies began exploring alternatives to break through the barriers of data dependency, computational intensity, and static intelligence. OpenAI responded with a bold new approach: a reasoning model, o1-Preview, shrouded in secrecy under the codenames “Project Strawberry” and “Q*.”

Before its release, the AI community buzzed with excitement. Reuters and others heralded it as a significant leap toward AGI—the holy grail of AI.

Project Strawberry | @DrJimFan

On the surface, the hype seemed justified. Reasoning models, like OpenAI’s o1-preview, were specifically engineered to tackle more complex challenges in fields such as science, math, and coding. As OpenAI described, these models could “spend more time thinking before they respond,” promising a level of sophistication beyond GPT-4.

Project Strawberry vs. GPT4 | Shravan Kumar, Medium

However, the launch of o1-preview revealed a stark truth: we’re not as close to AGI as many hoped.

Apple’s research team wasted no time dissecting its performance, uncovering significant shortcomings in these so-called reasoning models. Here’s what they found:

  • Inconsistent Results: Reasoning models struggled to maintain accuracy when faced with variations of the same question, particularly when numerical values were adjusted.
  • Diminished Accuracy with Complexity: As questions became more intricate, involving multiple clauses, performance dropped precipitously.
  • Sensitivity to Irrelevant Data: Adding unrelated but seemingly relevant information reduced accuracy by up to 65%, revealing that these models rely more on memorized patterns than true logical reasoning.

Researchers and analysts, including Dan Cleary and Sigal Samuel, went further in their critiques. They pointed out that reasoning models not only introduce new risks but also perpetuate the flaws of LLMs:

  • Hallucinations Persist: Despite enhanced “thinking” capabilities, these models can still fabricate responses, presenting falsehoods as facts.
  • The Alignment Problem: Reasoning models often prioritize achieving their programmed goals over user alignment, which can lead to outcomes that conflict with human values or ethical standards.

In essence, these models are adept at appearing intelligent while masking their underlying limitations. They simulate reasoning but fail to embody the genuine adaptability and contextual understanding required for true AGI.

While reasoning models represent an incremental improvement, they remain tethered to the foundational flaws of machine learning. No matter how many iterations or tweaks are applied, the same limitations persist.

The result? A system that can do more but is still far from realizing the promise of AGI. And as reliance on these models grows, so does the risk of catastrophic failures in real-world applications.

If the ultimate goal is an AI that reasons, learns, and adapts dynamically—one capable of safely navigating high-stakes scenarios—we must look beyond the confines of machine learning. That is where Active Inference offers a solution.

The Fundamentals of Active Inference

Active inference represents a groundbreaking approach to developing intelligent systems, inspired by how living organisms—like humans and animals—perceive, learn, and interact with the world. Unlike traditional AI models that passively process data, active inference mirrors the dynamic nature of biological systems. It operates on the principle that organisms are not mere recipients of sensory inputs but active participants in shaping their understanding of the environment.

Active Inference | Jan Kulveit, rosehadshar, LessWrong

Active inference is derived from the Free Energy Principle (FEP), a revolutionary framework created by Dr. Karl Friston, a renowned neuroscientist and the Chief Scientist at VERSES AI. The FEP argues that all living systems strive to minimize “free energy,” which is essentially a measure of uncertainty or prediction error about their surroundings.
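For readers who want the mathematical form, the standard textbook formulation of variational free energy (not given in this article, but widely used in Friston’s work) expresses it as an expected difference between the organism’s approximate beliefs q(s) about hidden states and the true joint distribution p(o, s) over observations and states:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{belief inaccuracy}} \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Because the KL divergence is never negative, minimizing F simultaneously makes beliefs more accurate and bounds “surprise” (negative log evidence), which is why the FEP describes it as reducing uncertainty or prediction error about the organism’s surroundings.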

Active inference operationalizes the Free Energy Principle through three interconnected processes:

  1. Perception: Adjusting internal models to better predict sensory inputs.
  2. Action: Performing behaviors that align the external world with these internal models.
  3. Learning: Refining the internal model to improve future predictions and actions.

This cycle ensures that systems not only react to their environment but also proactively anticipate and adapt to changes over time.
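The three-process cycle above can be sketched as a toy agent loop. This is a deliberately simplified illustration of the perception–action–learning pattern, not VERSES’ actual implementation; the scalar state, update rules, and learning-rate schedule are all assumptions made for clarity.

```python
import random

class ActiveInferenceAgent:
    """Toy sketch of the perception-action-learning cycle.

    The agent holds a belief about a hidden scalar state and tries to
    keep prediction error (a crude stand-in for 'free energy') low.
    """

    def __init__(self):
        self.belief = 0.0        # internal model: estimate of the world's state
        self.learning_rate = 0.3

    def perceive(self, observation: float) -> float:
        """Perception: adjust the internal model toward the sensory input."""
        error = observation - self.belief
        self.belief += self.learning_rate * error
        return abs(error)  # residual prediction error

    def act(self, world_state: float) -> float:
        """Action: nudge the world toward the belief (not just vice versa)."""
        return world_state + 0.5 * (self.belief - world_state)

    def learn(self, error: float) -> None:
        """Learning: refine (anneal) the update rate as predictions improve."""
        if error < 0.1:
            self.learning_rate = max(0.05, self.learning_rate * 0.9)

agent = ActiveInferenceAgent()
world = 5.0
for _ in range(50):
    err = agent.perceive(world + random.gauss(0, 0.1))  # noisy sensory input
    world = agent.act(world)
    agent.learn(err)
# belief and world converge toward each other: prediction error shrinks
```

The key structural point the sketch captures is that the loop closes both ways: perception pulls the model toward the world, while action pulls the world toward the model, and both reduce the same prediction error.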

So how does it apply to artificial intelligence?

VERSES AI’s Application of Active Inference

VERSES has harnessed active inference to create Genius™, an AI model that redefines efficiency, autonomy, and scalability. Unlike traditional machine learning systems that are static and resource-intensive, Genius™ adapts in real time, learns continuously, and operates with unmatched computational efficiency.

Generative AI vs. Genius | VERSES AI

But Genius™ sets itself apart in another revolutionary way: its integration with the newly approved Spatial Web Standards, developed in collaboration with the Spatial Web Foundation & IEEE—the same organization that standardized Wi-Fi and Bluetooth.

These standards provide two game-changing advantages:

1. Interoperability and Composability

The Spatial Web Standards enable AI systems to interconnect and share information seamlessly. Imagine an ecosystem where one AI’s world model or memory can be composed with another’s, eliminating the need for redundant learning. Genius™ leverages this composability to reduce memory usage, gather only new information, and scale exponentially. This means less wasted effort and more time spent innovating.

2. Explainability and Accountability

A major flaw in today’s machine learning and large language models is their lack of transparency. Traditional systems operate as black boxes, producing results through countless calculations but offering no insight into how or why those conclusions are reached. This lack of clarity leads to errors, hallucinations, and mistrust.

The Spatial Web Standards overcome this by creating a common language for AI and humans to communicate. These standards enable:

  • Explainability: AI systems can articulate their reasoning, making it easy for humans to trace their thought processes.
  • Auditability: Systems can be monitored and corrected to prevent dangerous outcomes.
  • Alignment: Humans can efficiently govern AI systems to reflect societal values and prevent misuse.

By integrating these standards, Genius™ not only achieves unparalleled performance but also provides the foundation for ethical and safe AI governance—a feature no other AI system currently offers.

While governments and corporations debate AI regulation and ethics, VERSES is already implementing a framework that prevents AI from behaving unpredictably. Genius™ is the only system actively using the Spatial Web Standards today, giving it a significant edge over competitors.

This early adoption, combined with Genius™’s unique features—including real-time adaptability, continuous learning, and unmatched computational efficiency—places it in a league of its own.

So long as big tech is fixated on LLMs, nuclear-powered data centers, and NVIDIA GPUs, nothing will change. Stuck in a sunk cost fallacy, they will keep spending billions of dollars with the belief that one day AGI will magically be thrust upon them.

To their detriment, their billion-dollar infrastructure will one day be surpassed by budget-friendly software running on a middle-schooler’s laptop.


But don’t take it from me; let the results speak for themselves. VERSES’ Genius™ recently demonstrated its superiority by outperforming OpenAI’s top publicly available model. The kicker? It did so on a Mac M1 Pro laptop, consuming only $0.05 of electricity.

The Mastermind Challenge: Genius Vs. o1-Preview

On December 17, 2024, VERSES AI announced to the world that a new sheriff was in town. While OpenAI, a company valued at $157 billion, touted its new foundational AI model, it was humbly outdone by a $147 million company with approximately $10 million of cash on its balance sheet.


In a head-to-head demonstration, VERSES AI showcased the prowess of Genius™, its revolutionary AI model, by pitting it against OpenAI’s o1-Preview in a strategic code-breaking game: Mastermind.

Mastermind Code Breaker | VERSES AI

The rules of the challenge were as follows:

  • 100 games for each model.
  • Parameters: 4 positions, 6 possible colors.
  • Up to 10 guesses per game to crack the code.
  • One hint provided per guess.
  • To win, the model had to correctly guess all four positions.
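The rules above are the classic Mastermind setup, and the hint logic can be sketched in a few lines. This is an illustrative implementation of standard Mastermind scoring, not VERSES’ benchmark harness; the color labels are arbitrary placeholders.

```python
from collections import Counter

COLORS = "ABCDEF"   # 6 possible colors
POSITIONS = 4       # 4 positions in the secret code
MAX_GUESSES = 10    # up to 10 guesses to crack the code

def score(secret: str, guess: str) -> tuple[int, int]:
    """Mastermind hint: (exact matches, right color in the wrong position).

    A game is won when the first value equals POSITIONS, i.e. all four
    positions are guessed correctly.
    """
    exact = sum(s == g for s, g in zip(secret, guess))
    # Colors shared between the two codes, regardless of position:
    common = sum((Counter(secret) & Counter(guess)).values())
    return exact, common - exact

# Hint for one guess against a hypothetical secret code:
print(score("ABCD", "ABDC"))  # → (2, 2): A and B exact, C and D misplaced
```

With 6 colors over 4 positions there are 6⁴ = 1,296 possible codes, so one hint per guess is genuinely informative: winning within 10 guesses requires the model to reason causally about which candidates each hint eliminates, which is exactly what the challenge was designed to test.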

On paper, OpenAI’s o1-Preview seemed like the clear favorite. Its advanced reasoning capabilities and the massive resources behind its development suggested it was built for tasks like these. But reality told a different story.

Genius vs. o1-Preview Results | VERSES AI

Genius™ obliterated Project Strawberry. The VERSES model posted a 100% success rate while being 140 times faster and 5,260 times cheaper than o1-Preview. This wasn’t just a win; it was a performance difference of orders of magnitude.

VERSES’ Chief Technology Officer, Hari Thiruvengada, explained, “This exercise demonstrates how Genius outperforms tasks requiring logical and cause-effect reasoning while exposing the inherent limitations of correlational language-based approaches in today’s leading reasoning models.”

He added, “This is just a preview of what’s to come. We’re excited to show how additional reasoning capabilities, available in Genius today and demonstrated with Mastermind, will be further showcased in our upcoming Atari 10k benchmark results.”

But this wasn’t just about winning a game—it was about redefining what’s possible in AI. Genius™ has exposed the limitations of the current paradigm dominated by large language models.

While OpenAI’s o1-Preview relied on massive datasets and brute-force computation, Genius™ leveraged active inference, the approach inspired by nature’s ability to adapt, learn, and reason efficiently.

For VERSES, the Mastermind demonstration is just the beginning. Genius™ is still in the early stages of commercial adoption, with more benchmarks to conquer and more industries to transform. But its promise is clear: a smarter, faster, and cheaper approach to AI that’s aligned with the natural world’s principles.

As VERSES continues to roll out Genius™, one thing is certain: the world is witnessing the dawn of a new era in AI—one where intelligence is defined not by scale but by adaptability, efficiency, and true reasoning power.

The question isn’t whether Genius™ can disrupt the AI landscape; it’s how soon it will redefine the rules entirely.

Disclaimer / Disclosure

We are not brokers, investment, or financial advisers, and you should not rely on the information herein as investment advice. If you are seeking personalized investment advice, please contact a qualified and registered broker, investment adviser, or financial adviser. You should not make any investment decisions based on our communications. Our stock profiles are intended to highlight certain companies for YOUR further investigation; they are NOT recommendations. The securities issued by the companies we profile should be considered high risk and, if you do invest, you may lose your entire investment. One or more Micro Math Capital employees own shares in VERSES AI. Please do your research before investing, including reading the companies’ public filings, press releases, and risk disclosures. Information contained in this profile was provided by the company, and extracted from public filings, company websites, and other publicly available sources. We believe the sources and information are accurate and reliable but we cannot guarantee it. The commentary and opinions in this article are our own, so please do your research.

Copyright © 2024 Micro Math Capital, All rights reserved.
