Growth is addition. Compounding is architecture.
I operate alongside founders, investors, and operators who already have traction. The business works. The revenue is real. But something isn't compounding the way it should with the inputs you already have.
I don't engage with distressed systems, rebuilds, or early-stage chaos.
The leverage layer underneath most of this is Emberplane, infrastructure I built for exactly this kind of work. It keeps context intact across complex situations so nothing falls through and decisions have real history behind them. Details below.
Emberplane is an AI-parallel knowledge plane, a sidecar brain that runs alongside your work and keeps everything you know searchable. It started as an MCP server and grew into a CLI, a workspace daemon, a semantic memory layer, a trust boundary console, and a native iOS app. All Go on the backend. All talking through enforced trust boundaries.
Not code intelligence. A knowledge infrastructure built around a simple idea: context is a function.

C = f(intent, temporal_knowledge, now_knowledge, project_knowledge, global_knowledge)
A developer making sense of a codebase is one instance of that function running. So is an entrepreneur coordinating across ventures without losing a thread. A farmer whose institutional knowledge lives in spreadsheets. Business partners watching a synergy emerge across things that were never meant to connect. One mental leap from any of those to the next. The system doesn't care what domain it's indexing.
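One way to make the formula concrete is a short sketch in Go, the backend's own language. Everything here is illustrative: the type and field names mirror the formula, not any actual Emberplane API.

```go
package main

import "fmt"

// Knowledge holds the hypothetical inputs from the formula.
// These names come from the equation above, not from Emberplane itself.
type Knowledge struct {
	Temporal string // what happened before now
	Now      string // current session state
	Project  string // scoped to one body of work
	Global   string // cross-domain facts
}

// Context applies C = f(intent, temporal, now, project, global).
// The same function runs whether the domain is code, ventures, or farms.
func Context(intent string, k Knowledge) string {
	return fmt.Sprintf("intent=%s | temporal=%s | now=%s | project=%s | global=%s",
		intent, k.Temporal, k.Now, k.Project, k.Global)
}

func main() {
	fmt.Println(Context("debug flaky test", Knowledge{
		Temporal: "test passed until last Tuesday",
		Now:      "CI red on main",
		Project:  "payments service",
		Global:   "Go 1.22 changed loop-variable scoping",
	}))
}
```

Swap the intent and the knowledge sources, and the same function serves the farmer as well as the developer.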
This isn't a chatbot with file access. It's a distributed system with real architectural opinions about where data lives, who touches it, and what happens when things go sideways.
The command line. Query, search, synthesize across your codebase. Manage tasks and sparks (quick captures that promote into real work). Workspace init, git ops, AST indexing, runbooks. Also speaks MCP for editor integration.
Workspace watcher daemon. Tracks file deltas, detects coding bursts, manages checkpoints, parses code in the background. The local intelligence that feeds the rest of the system without you thinking about it.
Semantic memory persistence backed by pgvector. Stores meaning as graph nodes with vector embeddings, handles link management, and runs an enrichment pipeline. The thing that makes sessions remember what happened last time.
Trust boundary and policy enforcement. JWT auth, admission control, session lifecycle, multi-tenant RBAC. The only service allowed to write to Lattice. Everything else asks permission.
Native iPhone app for tasks and sparks. Dark-mode only. No webview, no React Native. Constrained on purpose: login, view tasks, capture sparks, promote sparks to tasks by picking a project.
Twenty-plus years of building things, starting before the frameworks existed. I still prefer a terminal to a GUI and Vim to everything else. The kind of developer who reads RFCs recreationally and thinks semantic versioning is a social contract.
The resume version would list companies and dates. The real version: twenty years of building the thing people didn't know could exist yet, across enough domains that "full-stack" stopped meaning anything. Distributed systems, design systems, mesh networks, AI tooling. Same person, different problems. Right now I'm building what AI-assisted development should actually look like, not what everyone defaults to. Opinionated. Willing to tell the LLM no.
The things a resume would never mention but a recruiter should probably know about.
I build things that have opinions. Software should know what it is and enforce its own boundaries. It should also still work when the world gets weird. I'd rather ship a single binary that does one thing well than a Kubernetes cluster that does everything badly.
I write prose as well as code. Give me a whiteboard, a conference room, or a stage. Same energy. Documentation is how you prove you understand what you built. If you can't explain the architecture in plain English, you don't have an architecture. You have an accident.
I use AI tools extensively. I also tell them no. The judgment call about when to accept the suggestion and when to override it is the actual skill. The models are a power tool, and knowing when to put it down is what keeps your fingers attached.
I engage with a small number of situations at a time. The preference is for meaningful problems where systems thinking changes the trajectory. I'm WD-40. I keep things clean, dry, and moving smoothly.
If you're operating something that works but isn't compounding the way it should, reach out. If you can't quite frame the problem yet, reach out anyway. That's usually the better conversation.