About the Engine
What the engine does. What it doesn't. Why you should care.
The Short Version
I spent thirty years inside technology transitions. Not writing about them — working inside them. Watching what happened when new technology hit an existing market, who adapted, who got crushed, and why the outcomes were almost never what the headlines predicted.
Six transitions. Thirty years. Hundreds of deals, negotiations, product launches, market collapses, and recoveries. Along the way, I noticed something: the technology changes every time, but the human behavior underneath it almost never does. The panic is the same. The hype is the same. The mistakes are the same. The winners win for the same structural reasons, and the losers lose for reasons they refuse to see.
That is the engine. Not a model. Not an algorithm. A library of patterns — what happened, why it happened, and the conditions under which it is likely to happen again.
Garbage In, Garbage Out
Every prediction engine — every model, every framework, every expert opinion — is only as good as what goes into it. Feed it bad data and you get confident wrong answers. Feed it headlines and you get headline-quality thinking.
The difference between useful pattern recognition and expensive guessing is input quality. We track a focused set of companies in the AI infrastructure space — the semiconductor designers, the power providers, the networking vendors, the construction firms, the capital allocators. For each one, we maintain a clean, current summary of what the company does, its key financial numbers, and the structural conditions worth watching.
That research is what you see on this site.
We do not use analyst estimates. We do not aggregate sentiment. We do not run quantitative screens. We read filings, track structural conditions, and compare what is happening now to what happened before — because a surprising amount of it has happened before.
What We Share
Everything on this site is input. It is the research layer — the data we collect, organize, and monitor.
You get:
- A ticker dashboard with 18 companies across AI infrastructure — compute, power, and third-order plays — each with a color-coded signal status.
- Company summaries — single-page overviews with key numbers, what the company does, and what conditions we are tracking. Clean enough to print and pin to a wall.
- A weekly observation — one short piece connecting a pattern from past transitions to something happening this week. Not a prediction. An observation with context.
All of this is free. No paywall. No upsell on the data. The research has standalone value whether or not you care about what we do with it.
What We Keep Private
The engine produces outputs. We keep those to ourselves.
We do not publish predictions. We do not share probabilities, timing estimates, or position information. We do not disclose trades, returns, or portfolio composition. We do not offer investment advice.
This is not modesty. This is discipline. Publishing predictions creates incentives that corrupt the process — the pressure to be right in public, the temptation to adjust after the fact, the audience dynamics that turn analysis into performance.
We would rather be privately right than publicly impressive.
If the engine works, the results will speak for themselves over time. If it does not work, no amount of public prediction theater would have changed that.
Why Share the Inputs?
A reasonable question. If the engine is private, why give away the research?
Three reasons.
First, the research is useful on its own. A clean, organized summary of what 18 AI infrastructure companies actually do, with current numbers and conditions worth watching — that has value whether or not you know anything about pattern recognition. Most people do not have time to read SEC filings. We do.
Second, input quality matters more than model quality. By sharing our inputs, anyone with their own experience and judgment can form their own view. We are not the only people who have been through technology transitions. We are just the ones writing it down.
Third, the inputs do not reveal the outputs. Knowing what we watch does not tell you what we think will happen, when we think it will happen, or what we are doing about it. The gap between organized data and actionable prediction is where thirty years of experience lives. That gap is the engine.
The $100 Bill
There is an old story about a professor and a hundred-dollar bill on the sidewalk. The professor says it cannot be there because someone would have already picked it up. The student picks it up.
Most people look at publicly available data and assume there is nothing useful in it — because if there were, someone smarter would have already found it. That assumption is wrong. The data is available. What is rare is the experience to know which pieces matter and how they connect.
That is what we built. Not more data. Better pattern recognition applied to the data that is already there.
We are not selling urgency. We are not selling panic. We are not selling the idea that you need us to navigate what is happening.
We are saying: this is not the first time everything changed. It is the fifth or sixth time, depending on how you count. And if you look at what happened before — really look, not just read the summary — the next moves become a lot less surprising.
The technology changes. The behavior does not.