Micro Math Capital

A Deep Dive into Perion Network (PERI)

Digital display advertising covering a virtual city.

Perion Network (PERI) is a small-cap play that has been on our radar for some time now. As a digital advertiser, the company is a strong contender outside of Google and Meta, with a promising track record of growth and profitability.

An Analysis of Vertex Resource Group’s Q1 2024

Businessmen overlooking an oilfield.

In periods of economic uncertainty, organizations that excel are those with a diversified business model, organic cash flow generation, and a durable competitive advantage. These factors are vital for long-term success, enabling a company to thrive even when competitors struggle.

Could This Company Hold the Key to US Energy Metal Independence?

A mine full of rare earth minerals

The United States is in dire need of domestic energy minerals. Metals like nickel, copper, and cobalt are critical components of America’s energy storage infrastructure, making up 15.7% (64 lbs), 10.8% (44 lbs), and 4.3% (10 lbs) of EV batteries, respectively.

Active Inference: Humanity’s Final Great Invention

Verses AI (CA: VERS.NE) (USA: VRSSF) is disrupting the most disruptive technology in the world. It transforms artificial intelligence from a system that mimics knowledge into one that formulates ideas and fosters curiosity on its own. This is causing a paradigm shift in the industry and leading many to wonder whether this is the true path to Artificial General Intelligence, a.k.a. AGI.

But understanding this technology isn’t easy. There is an absurd amount of nuance, which can be discouraging when trying to make sense of it. This article attempts to distill that. Because whether you like it or not, AI, and shortly thereafter AGI, is going to alter the fabric of reality. You can either embrace it or quickly fall behind. The choice is yours.

The Letter that Put the World on Notice

It has been two weeks since Verses published an open letter to OpenAI in the New York Times. In it, Verses CEO Gabriel René outlines how OpenAI and other major players are struggling to produce AGI that is “adaptable, safe [and] sustainable.” He then highlights OpenAI’s Charter, which states: “…if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.” René closes the section by stating that “VERSES qualifies for [its] assistance.” Bold words for a company 100 times smaller than the creators of GPT-4.

But the letter is not merely meant to garner attention. Active Inference is real. It has backing. People and organizations, like Verses, are using it. And it is shaping up to be our best chance at creating human-level intelligence, or greater, that works with us, not against us.

So why haven’t OpenAI and other leading AI players taken notice? My guess is that change can be slow, and it is not easy to admit defeat. These organizations have a lot riding on Generative AI technology, and to be fair, they have made significant progress in recent years. Moreover, nearly all computer scientists, software engineers, big data architects, and the like are disciples of the “Godfather of AI,” Geoffrey Hinton, a leading figure in deep learning and a pioneer of modern artificial neural networks. For them to embrace a new approach, such as Active Inference, it will take a significant event to change their minds; they may need to see it to believe it. However, the sooner they do, the greater the benefit Active Inference AI will have for us all.

If OpenAI decides to accept Verses’ invitation, there is no doubt that it will set a new precedent for AGI collaboration. With the brightest minds on both sides working together, they can create machines that propel our civilization beyond what is comprehensible. But let’s not get ahead of ourselves. For now, there remains a divide between the Generative AI and Active Inference AI communities. Therefore, if you want to understand the essence of AI, you should familiarize yourself with both approaches. Each offers its own advantages, though it is clear that one path is closer to reaching the ultimate goal than the other. Let’s dive in.

The Inherent Problem with Generative AI

Generative AI is an evolution of the deep learning framework pioneered by Geoffrey Hinton. It uses artificial neural networks to create connections between billions, and sometimes trillions, of data points. This enables the AI to recognize patterns and formulate predictions, which is how chatbots like ChatGPT can write entire essays from just a few sentences of prompting.
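To make that “patterns and predictions” idea concrete, here is a minimal, hypothetical sketch in Python. It uses a toy bigram word counter instead of a real neural network (actual LLMs are transformer networks with billions of parameters), but the core loop is the same: learn connections from data, convert those scores into probabilities, and guess the most likely next token. Note the comment about the softmax output: every probability lands strictly between 0.0 and 1.0, which is exactly the “best guess” behavior discussed below.

```python
# Toy illustration of next-token prediction.
# NOT how GPT-4 actually works (real models use deep neural networks),
# but it mirrors the concept: learn connections, then predict.
import math
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows another.
# These counts play the role of the learned connections between data points.
counts = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1.0

def next_word_probs(prev):
    """Turn raw scores into a softmax probability distribution.

    Because exp(x) is always positive, every probability is strictly
    between 0.0 and 1.0 (never exactly 0 or 1), so every prediction
    is a weighted guess rather than a certainty.
    """
    vocab = sorted(set(corpus))
    logits = [counts[prev][w] for w in vocab]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return {w: e / total for w, e in zip(vocab, exps)}

probs = next_word_probs("the")
print(max(probs, key=probs.get))      # -> "cat", the most common follower of "the"
print(round(sum(probs.values()), 6))  # -> 1.0: a full probability distribution
```

A real LLM replaces the counting step with a neural network trained on enormous text corpora, but the output is still a probability distribution over the next token, which is why its answers are always confident-sounding guesses rather than verified facts.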
To create these Large Language Models (LLMs), developers feed massive datasets into the model and train it to find connections within that data. Then, using deep learning and reinforcement techniques, the model learns patterns and relationships between the data points, which it uses to make predictions. Eventually, the AI becomes highly adept at these computations and begins generating outputs based on the prompts you give it.

But this is where the problems start to manifest. For one, LLMs are inherently biased. Depending on what data they are fed, they will spit out responses that seem skewed one way or another. For example, when prompting the image generator Midjourney with “Tech CEO Skydiving in Egypt,” it produces four images of white men skydiving in the desert. Why? Because the majority of the data Midjourney was likely trained on portrays “Tech CEOs” who are white. This is just a fun example, but you can see how it could be quite problematic when assessing someone’s credit score, job skills, and the like.

Taking it a step further, LLMs struggle to differentiate between what they know and do not know. If you ask a chatbot a question it doesn’t have the answer to, it will produce a response regardless of whether it is accurate. That is because the AI assigns a “relationship score” to every data point it analyzes. This score falls between 0.0 and 1.0 but never reaches exactly 0.0 or 1.0. This causes the AI to “lie,” since every response it generates is a best guess rather than the truth. I don’t know about you, but I have a hard time trusting a machine or human that isn’t 100% honest when sharing information.

Moving on, another limitation of LLMs is that they do not learn in real time and fail to interoperate with other agents and IoT devices. If you go onto ChatGPT (GPT-3.5) right now, you’ll see that its last knowledge update was in January 2022. This means that any information beyond January 2022 does not exist within the LLM’s database. To resolve this lack of knowledge, OpenAI’s scientists must upload new data and retrain ChatGPT on it. This can be quite time-consuming and costly depending on how much data is required. For example, it cost OpenAI over $100 million to train GPT-4 alone.

But that’s not all. In addition to costly updates, these LLMs only have access to their own databases. This means they are unable to communicate with other LLMs and are privy only to the information they store. If we want