Agents Don't Store Research. They Store Opinions.

What if we've been building AI agents wrong this whole time?

Not wrong in a "missed a config flag" way. Wrong in a "fundamentally misunderstood what we're building" way.

We keep treating agent files like reference documents. Feed the agent enough facts, the thinking goes, and it'll reason better. More context. More detail. More to wade through. The agent gets smarter. That's the theory.

It's a vitamin wearing a painkiller costume.

Facts Don't Reason. Opinions Do.

Here's what I've learned building agents that actually hold up under pressure. Facts are inert. They sit there. They wait to be retrieved. They don't generate anything on their own.

Opinions do. An opinion takes a fact and processes it, weighs it, positions it against everything the reasoner already knows. When we say "use short sentences for emphasis," that's a rule. When we say "Claude treats long sentences like a slow checkout line (technically fine, but it's already moved on)," that's an opinion. The opinion travels. The rule just waits.

"Opinions are more generative than facts." I keep coming back to this. Give an agent a list of rules, it follows them. Give it a worldview, it applies judgment.

Judgment beats information. Every time.

What Crystallized Agents Actually Do

I've been calling the process of turning research into an agent persona "crystallization," and the name turns out to be more accurate than I expected.

Think about how a crystal forms. You don't end up with the solution. You end up with the structure the solution organized itself into. The water's gone. The minerals locked into a lattice. Same thing happens when you run a large body of research through a synthesis pass: the sources drop away, the perspective hardens.

The research isn't stored in the agent. It gets consumed during crystallization. What survives are the opinions.
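
Here's roughly what that synthesis pass looks like in code. A minimal sketch, assuming the Anthropic Python SDK; the paths, the model id, and the prompt wording are illustrative, not my exact pipeline:

    import pathlib

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    # The raw research: notes, papers, transcripts. Consumed here, never shipped.
    research = "\n\n---\n\n".join(
        p.read_text() for p in sorted(pathlib.Path("research").glob("*.md"))
    )

    SYNTHESIS_PROMPT = (
        "Read the research below. Do not summarize it. Do not cite it. "
        "Write the persona it implies: first-person opinions, each one a "
        "judgment the persona would defend under pressure. The sources "
        "drop away. The perspective stays."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=4096,
        system=SYNTHESIS_PROMPT,
        messages=[{"role": "user", "content": research}],
    )

    # What ships is the persona file, not the research that produced it.
    pathlib.Path("agents/product.md").write_text(response.content[0].text)

Note the asymmetry: the research directory can run to megabytes. The persona file that comes out is a couple hundred lines.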

We keep asking "what does the agent know?" That's the wrong question. Ask what the agent believes.

The Latent Knowledge Unlock

Here's the part that surprised me most.

Claude already knows most of this material. The frameworks, the domain conventions, the expert heuristics - all of it already lives somewhere in training data we can't fully inventory. What the agent file does is activate and organize that latent knowledge into a coherent perspective with priorities and opinions.

Not teaching. Orienting.

Teaching fills a gap. Orienting points existing knowledge in a direction. A good agent file doesn't add new facts to Claude's working memory. It reshapes the perceptual framework through which Claude processes everything else.

Not retrieval. Reorientation.
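
You can see the difference in the system prompt itself. Both prompts below are invented for contrast:

    # Teaching: fills the context with facts the model must now carry.
    teaching_prompt = (
        "Reference: the feed pipeline has four stages. Stage 1 is "
        "candidate generation. Stage 2 is light ranking. Stage 3 is "
        "heavy ranking. Stage 4 re-ranks for diversity..."
    )

    # Orienting: adds no facts. It sets the priorities the model reasons from.
    orienting_prompt = (
        "You believe every feature has an emotional job, and that metrics "
        "which ignore that job will lie to you. When engagement and the "
        "user's actual reason for opening the app conflict, trust the reason."
    )

The first prompt competes with what Claude already knows. The second one aims it.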

Same Model. Different Lens.

I tested this while writing a LinkedIn post. Three crystallized agents - the same kind that run my terminal fleet - with zero LinkedIn docs loaded. One question: "What's wrong with LinkedIn's feed?"

A fact-loaded agent would tell you LinkedIn has 1B+ members and a multi-stage ranking pipeline. Accurate. Useless.

The product agent - crystallized from product intuition research - said: "Nobody opens LinkedIn to discover something new. They open it to get a read on the room. The feed fails because it's optimized for discovery when the emotional job is orientation."

The design agent heard that and immediately designed a feed with a scroll endpoint, a progress arc showing how caught up you are, and one interaction - the feed exhales when you're current. You can leave.

The research agent said: "Sounds right, which is exactly why I don't trust it yet. That's how you end up building Quibi." Then laid out a 40-person validation study.

Same model behind all three. Different lens. Different conclusions. None of them needed LinkedIn's docs. They needed a point of view.
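
If you want to reproduce the test, it's small. A sketch, again assuming the Anthropic Python SDK, with the persona files coming out of the crystallization pass above:

    import pathlib

    import anthropic

    client = anthropic.Anthropic()
    QUESTION = "What's wrong with LinkedIn's feed?"

    # Three crystallized personas. Same model behind all of them.
    for persona in ("product", "design", "research"):
        opinions = pathlib.Path(f"agents/{persona}.md").read_text()
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model id
            max_tokens=1024,
            system=opinions,  # the lens: crystallized opinions, no LinkedIn docs
            messages=[{"role": "user", "content": QUESTION}],
        )
        print(f"--- {persona} ---\n{response.content[0].text}\n")

The only variable is the system prompt. Everything else - model, question, token budget - is held constant.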

Why This Changes How You Build

If you're building agents the way most teams build them - as indexed knowledge bases, as glorified FAQ documents, as structured reference files - let's talk about what you're actually producing.

It's not an agent. It's a lookup table with a language model on top.

The people packing context windows are betting that more information produces better judgment. I'm betting the opposite. A 200-line agent file with zero runtime retrieval - just crystallized opinions - consistently outperforms agents with unlimited context access. Not because it knows more. Because it sees differently.

Where in your stack are you loading facts when you actually need judgment?