Repo orientation becomes a context tax.
- Read broad file trees just to get started.
- Miss dependency hubs and actual edit surfaces.
- Repeat the same orientation work in every host.
Cartograph maps the repo, builds a typed task packet, and loads only the context the next agent needs.
The product is live now as an open-source package with a CLI, an optional MCP server, a Claude Code plugin path, OpenClaw skills, public benchmark results, and real sample artifacts on this site.
npm install -g @anthony-maio/cartograph
cartograph analyze ./my-project --static
cartograph packet ./my-project --type bug-fix --task "fix auth refresh bug"
cartograph context ./my-project --task "trace the auth flow" --json
/plugin marketplace add anthony-maio/cartograph
/plugin install cartograph@making-minds-tools
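The `--json` flag on `cartograph context` makes the output easy to hand to downstream tooling. A minimal sketch of consuming it, assuming a hypothetical output shape with a top-level `files` array (the real field names may differ; inspect the actual output before relying on any of them):

```shell
# Hypothetical sample of `cartograph context --json` output.
# The real schema may differ -- this shape is an assumption for illustration.
cat > context.json <<'EOF'
{
  "task": "trace the auth flow",
  "files": [
    {"path": "src/auth/refresh.ts", "reason": "token refresh entry point"},
    {"path": "src/auth/session.ts", "reason": "session state read by refresh"}
  ]
}
EOF

# Pull just the read set for the next agent.
python3 -c 'import json; d = json.load(open("context.json")); print("\n".join(f["path"] for f in d["files"]))'
```

The point of the JSON mode is exactly this kind of handoff: the orchestrator extracts the working set and feeds only those paths to the next agent.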
The point is not to stuff more files into context. The point is to hand the next agent a smaller, better artifact.
That is why the product is organized around analyze, packet, and context.
Small projects should not get a 64 KB JSON dump just because a tool ran. Cartograph now keeps small-repo analysis compact by default and only embeds file contents when you ask for them.
This is where the workflow starts to matter: analyze the repo, shape the concrete job, then hand the next agent a packet instead of a wall of text.
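Concretely, the three steps on a fresh checkout look like this (these are the commands shown above; no flags beyond those are assumed):

```shell
# 1. Map the repo: summary-first static analysis.
cartograph analyze ./my-project --static

# 2. Shape the job: build a typed packet for the concrete task.
cartograph packet ./my-project --type bug-fix --task "fix auth refresh bug"

# 3. Load the minimum: hand the next agent only the working set.
cartograph context ./my-project --task "trace the auth flow" --json
```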
On repos like llama.cpp, fastapi, and next.js, the value is not just summary. It is choosing what to read first and what to ignore.
The frames below are designed to be lifted directly into the launch asset pack. Each frame has one job: make the product legible in seconds.
Coding agents read too many files, miss the real wiring, and spend the first half of the session figuring out what matters.
Cartograph maps the repo, builds a typed task packet, and loads the smallest useful working set for the next agent.
Summary-first static analysis surfaces the files that matter, the dependency hubs, and the next command to run, instead of forcing raw internals onto the first screen.
Bug-fix, PR review, trace-flow, and change-request packets stay focused on likely edit surfaces, risks, and validation targets.
Use the public benchmark page plus the real llama.cpp task packet and DeepWiki-style brief to prove the product is not just a mockup.
npm, Claude Code plugin path, OpenClaw skills, GitHub, and the official MCP Registry all point back to the same product surface.
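To make "typed task packet" concrete, here is an illustrative shape for a bug-fix packet. Every field name here is an assumption for illustration; the real llama.cpp packet on this site is the source of truth for the actual schema:

```json
{
  "type": "bug-fix",
  "task": "fix auth refresh bug",
  "edit_surfaces": ["src/auth/refresh.ts"],
  "risks": ["session invalidation on concurrent refresh"],
  "validation": ["run the auth test suite"]
}
```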
Turn any repo into task-shaped context for coding agents
Open-source repo analysis for coding agents. Cartograph maps the repo, builds a typed task packet, and loads only the context the next agent needs. CLI first, MCP optional, with Claude Code and OpenClaw paths included.
https://cartograph.making-minds.ai/launch/

I built Cartograph because every coding agent workflow I watched wasted context on repo orientation before doing useful work.
The goal is simple: map the repo, build a task packet for the actual job, then load the minimum context needed for that job.
This release is centered on analyze -> packet -> context. Cartograph is live today as a CLI, an optional MCP server, a Claude Code plugin path, and an OpenClaw skill path, all riding on the same core analysis engine.
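For the MCP path, a Claude Code server entry might look like the sketch below. The `mcp` subcommand and the server name are assumptions for illustration only; the package's own docs and its MCP Registry entry are the source of truth:

```json
{
  "mcpServers": {
    "cartograph": {
      "command": "npx",
      "args": ["-y", "@anthony-maio/cartograph", "mcp"]
    }
  }
}
```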
The site also includes public benchmark scorecards plus a real llama.cpp task packet and DeepWiki-style brief so people can inspect the outputs instead of trusting screenshots.
I’m most interested in feedback on where packets still drift in large repos, what repo shapes should be benchmarked next, and what would make the handoff from packet to actual coding work tighter.