Every question gets the right expert. Or an honest ‘I don’t know.’
These four tools externalize different parts of the developer’s brain. Technical judgment that usually lives in senior engineers’ heads. Memory and identity context that resets every session. The fresh perspective you lose after months on the same codebase. The usability instinct that fades when you stop being a first-time user. Two MCP servers and two agents, each handling a job that shouldn’t be done manually.
Technical decisions, grounded in ingested sources.
Every team has that one person who just knows how things work. "Use Zustand here, not Context." "Server component for that page." Nobody wrote it down - it lives in Slack threads and the heads of senior engineers. I wanted a shared mental model I could actually query.
So I built six domain knowledge bases - React architecture, AI/LLM patterns, backend infrastructure, Vercel, frontend performance, and marketing strategy. Each one has its own ingested corpus of docs, architecture decisions, and real project notes. When a question comes in, the router scores every relevant match across those corpora and passes the result through a confidence gate that determines how much the system actually knows.
That confidence gate changes everything. Most AI tools answer whether or not they have anything useful to say. This one is honest about what it knows - a full expert response when confident, an answer with caveats when coverage is partial, raw search results when it's uncertain. It sounds obvious. Almost nobody does it.
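The gate can be sketched as a simple top-score router. This is a minimal illustration, not the real implementation - the thresholds, the `Match` shape, and the tier names are all hypothetical:

```python
# Hypothetical confidence gate: route a query based on how well the
# knowledge base actually matched it. Thresholds are illustrative.
from dataclasses import dataclass

CONFIDENT = 0.75   # above this: full expert response
PARTIAL = 0.45     # above this: answer, but name the gaps

@dataclass
class Match:
    source: str
    score: float  # relevance score in [0, 1]

def route(matches: list[Match]) -> str:
    """Decide how much the system is allowed to claim it knows."""
    top = max((m.score for m in matches), default=0.0)
    if top >= CONFIDENT:
        return "expert"       # synthesized answer, no hedging needed
    if top >= PARTIAL:
        return "caveats"      # answer, but flag what's missing
    return "raw_results"      # just show the search hits; don't guess

print(route([Match("react-docs", 0.82)]))  # expert
print(route([Match("old-notes", 0.50)]))   # caveats
print(route([]))                           # raw_results
```

The important design choice is the third tier: below the partial threshold the system stops synthesizing entirely and hands back raw hits, rather than generating a confident-sounding answer from weak evidence.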
React Designer
Component architecture, server vs. client components, hooks patterns, and accessibility standards
AI Architect
Knowledge system design, tool orchestration, token cost optimization, and reliability patterns
Backend Infrastructure
Database design, API architecture, deployment pipelines, and production observability
Vercel Expert
Platform optimization, Core Web Vitals, edge functions, and incremental static regeneration
Frontend Performance
Rendering performance, animation compositing, GPU layer management, and paint optimization
Marketing Strategist
Product positioning, growth strategy, jobs-to-be-done analysis, and category design
It knows what it knows. And tells you when it doesn’t.
[React Designer] useEffect is for side effects - data fetching, subscriptions, DOM mutations. useMemo is for expensive computations derived from props/state. If you're computing a value, use useMemo. If you're syncing with an external system, use useEffect.
Persistent identity and memory across every session.
Claude is stateless by default. Every session starts fresh - no memory of the decision you made last week, no idea how you prefer to work, no accumulated context from the last 40 conversations. That drove me crazy enough to build a fix.
The Persona Engine has two layers. The identity layer analyzes your conversation history and distills a stable profile across seven dimensions - tech preferences, architecture opinions, communication style, decision patterns. It loads as a system prompt at zero extra cost per query. The memory layer handles the dynamic stuff: when something needs past context, hybrid search with temporal decay pulls it in, weighting recent decisions higher than old ones.
Now I have Claude across multiple terminals that all share the same context through PostgreSQL. What I decided in one session is available in the next. It's the difference between working with a collaborator who's been here from the start and a contractor who just walked in.
How direct, casual, or technical you prefer AI responses to be
Whether you guide through Socratic questions or direct requests
Your default stack choices - languages, frameworks, and tools you reach for
How you weigh tradeoffs and choose between valid approaches
Your philosophy on system boundaries, data flow, and coupling
How you use AI tooling - parallel terminals, focused sessions, delegation style
What you believe about shipping, quality, testing, and technical debt
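The memory layer's temporal decay can be sketched as a blend of raw similarity and an exponential recency term. The half-life and blend weight below are illustrative assumptions, not the engine's real values:

```python
# Hypothetical recency weighting for memory retrieval: relevance is
# blended with an exponential time decay so last week's decision
# outranks an equally similar note from six months ago.
HALF_LIFE_DAYS = 30.0   # a memory loses half its recency weight per month
RECENCY_WEIGHT = 0.3    # how much recency matters vs. raw similarity

def decayed_score(similarity: float, age_days: float) -> float:
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return (1 - RECENCY_WEIGHT) * similarity + RECENCY_WEIGHT * recency

# An older memory needs noticeably higher raw similarity to win.
fresh = decayed_score(similarity=0.70, age_days=2)
stale = decayed_score(similarity=0.75, age_days=180)
print(fresh > stale)  # True
```

Blending rather than multiplying means old memories are demoted, never erased: a six-month-old decision with a strong enough match still surfaces.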
Fresh eyes find the bugs you can’t see.
Developers test their own code the way they wrote it. Same paths, same inputs, same muscle memory skipping past the rough edges. You already know where the happy path lives - so you never actually find out what breaks outside it.
This agent comes in with zero context. No codebase knowledge, no assumption about how navigation is supposed to work. It uses MCP browser tools to click around the live app - real DOM interactions, real form submissions, real page navigations. It tries the things you'd never think to try because you already know they're "wrong." That's exactly why it finds bugs you missed.
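The exploration policy behind that can be sketched with the browser calls stubbed out. The real agent drives MCP browser tools against the live DOM; this hypothetical version only shows the selection logic - always prefer the elements you haven't tried yet, so no "obviously wrong" path gets skipped:

```python
# Sketch of a zero-context exploration policy (browser interaction
# stubbed out). Element names and edge-case inputs are illustrative.
import random

# Inputs a developer "knows" are wrong and therefore never types.
EDGE_CASE_INPUTS = ["", " ", "0", "-1", "a" * 500, "<script>x</script>"]

def next_action(elements: list[str], visited: set[str], rng=random) -> str:
    """Pick the next element to interact with, unvisited ones first."""
    untried = [e for e in elements if e not in visited]
    pool = untried or elements  # once everything is tried, keep probing
    return rng.choice(pool)

visited: set[str] = set()
elements = ["nav:home", "form:search", "button:delete"]
for _ in range(3):
    target = next_action(elements, visited)
    visited.add(target)  # in the real agent: click/submit via browser tools
print(visited == set(elements))  # True: every path gets exercised
```

Because untried elements always win, three picks cover all three elements regardless of the random choices - the coverage guarantee is the whole point.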
Read the full writeup: Fresh Eyes Don’t Assume
Automated usability evaluation against Nielsen’s 10.
After months on the same product, you stop seeing the rough edges. The error message below the fold. The loading spinner with no progress indication. The search page with no empty state. They're not bugs, so nobody files a ticket. They just stay broken.
This agent navigates live sites through browser tools and evaluates what it finds against Jakob Nielsen's 10 usability heuristics - the actual principles that separate polished products from frustrating ones. It's checking the running app, not the source code or mockups. Findings come back prioritized by severity, and approved fixes get implemented automatically. No UX audit spreadsheet. No scheduled review. Just a list of what's actually broken and a fix ready to ship.
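The report's shape can be sketched as findings tagged with the violated heuristic and a severity, sorted worst-first. The field names and four-level severity scale here are assumptions for illustration:

```python
# Hypothetical shape of the audit output: each finding names the
# violated Nielsen heuristic and a severity, and the report is
# returned sorted worst-first.
from dataclasses import dataclass

SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "cosmetic": 3}

@dataclass
class Finding:
    heuristic: str   # e.g. "Visibility of system status"
    issue: str
    severity: str

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order findings so the worst problems land at the top."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f.severity])

report = prioritize([
    Finding("Recognition rather than recall", "search has no empty state", "minor"),
    Finding("Visibility of system status", "spinner shows no progress", "major"),
    Finding("Error prevention", "destructive delete has no confirm", "critical"),
])
print([f.severity for f in report])  # ['critical', 'major', 'minor']
```

Sorting by severity rather than by page order is what makes the list actionable: the fix queue starts with whatever is actually hurting users most.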