I Guess AI Is Now My Target Customer
If you’re building a developer tool right now, you probably have a launch plan: Post it on Show HN. Grind for GitHub stars. Maybe host a hackathon or two. You know, the standard playbook.
I suspect this playbook will soon become obsolete. Not because it doesn’t work on humans, but because humans are no longer the ones choosing which tools get used.
It’s not that humans have left the loop. A human still decides which tools end up in a project. But increasingly, that decision gets mediated by an AI coding agent.
Watch how software actually gets built now. A developer doesn’t read your README and evaluate your API. They tell Claude Code “I need to transform this dataset” and Claude Code either reaches for your tool or it doesn’t.
This changes the distribution game in ways I don’t think most developer-tool founders have internalized yet.
The old funnel was: awareness → landing page → docs → adoption. A human saw your Show HN post, clicked through, skimmed the README, and maybe tried it out. Every step was optimized for human attention and human decision-making.
The new funnel looks different. The human says “do X.” The AI agent decides how to do X. If the agent knows your tool exists and understands how to use it, you win. If it doesn’t, you don’t. The human never even saw your name.
So the question becomes: how does an AI agent come to know about your tool?
There are really two distinct things happening here:
The first is training data. If an LLM has seen enough examples of your tool being used — in blog posts, tutorials, Stack Overflow answers, GitHub repos — it develops an intuition for when and how to reach for it. This is the passive channel. You can’t directly control it, but you can influence it.
The second is tool integration. MCP servers, CLI interfaces, well-structured APIs — the mechanisms by which an AI agent can actually invoke your tool at runtime. This is the active channel. You control it completely.
Most people fixate on the first one because it feels more mysterious. But I think the second one is where the real leverage is, at least right now. If your tool has a clean MCP integration and good structured documentation, an AI coding agent can use it today regardless of whether it appeared in the training data. The training data question matters more for discovery — whether the agent thinks to use your tool unprompted.
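To make that "active channel" concrete, here's roughly what a minimal MCP server looks like in TypeScript. This is a sketch, not a recipe: it assumes the official `@modelcontextprotocol/sdk` package (method names shift a bit between SDK versions), and the package name, tool name, and parameters here are all hypothetical. The point is how small the surface area is: a name, a description, a schema, and a handler.

```typescript
// Minimal sketch of an MCP server exposing one tool over stdio.
// Assumes the official TypeScript SDK (@modelcontextprotocol/sdk);
// "dataset-tool" and "transform_dataset" are hypothetical names.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "dataset-tool", version: "0.1.0" });

// The name, description, and parameter schema are what the agent actually
// reads at runtime. Write them the way you'd want a first-time user to read them.
server.tool(
  "transform_dataset",
  "Convert a CSV file to JSON. Use this when the user asks to reshape tabular data.",
  { inputPath: z.string().describe("Path to the source CSV file") },
  async ({ inputPath }) => {
    // Call into your actual library here; this sketch just echoes the input.
    return { content: [{ type: "text", text: `Transformed ${inputPath}` }] };
  }
);

// stdio transport: the agent launches this process and speaks MCP over stdin/stdout.
await server.connect(new StdioServerTransport());
```

Once a developer (or their agent harness) has this registered in their MCP config, the agent can invoke the tool directly, whether or not your project ever appeared in a Show HN thread.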
This leads to what I think is the most interesting question in developer-tool distribution right now: is there such a thing as SEO for LLMs?
The cynical framing is: do I need to manufacture social proof? Do I need to convince fifty friends to write blog posts about my tool so that Anthropic’s next training run picks them up?
The honest answer is: kind of, but it’s not as sinister as it sounds. LLMs learn from the public web. If nobody writes about your tool, it functionally doesn’t exist to the model. That’s not some opaque ranking algorithm punishing you. It’s just the physics of how these systems work. A tool with a thousand tutorials will be better understood by the model than a tool with three. This was always true for humans too — it’s just that now the “reader” processing all that content is a neural network instead of a person scanning Hacker News.
What worries me is the second-order effect. Well-known tools generate more public content. More content means better model understanding. Better model understanding means AI agents recommend those tools more often. Which makes them more well-known. This is a rich-get-richer dynamic, and it creates a brutal cold-start problem for new tools.
If you’re an established project with years of blog posts and Stack Overflow threads, the AI era is a tailwind. If you’re launching something new, you face a bootstrapping challenge that didn’t exist before: you need to get into the models, and the models won’t find you until you’re already popular.
So what should a developer-tool founder actually do about this?
First, treat documentation as your primary marketing channel. Not documentation in the traditional sense — a dry API reference that exists to serve existing users. Documentation as content. Rich examples, explained end-to-end. Common workflows, spelled out verbosely. The kind of writing that a human might find tediously thorough but that an LLM will absorb perfectly. Your docs need to be optimized for a reader with infinite patience and zero intuition.
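To illustrate the register I mean, here's a hypothetical docs-style worked example: a mundane workflow (CSV to JSON, using nothing but Node built-ins, with placeholder file names), narrated step by step in a way a human would skim but a model will ingest whole.

```typescript
// Docs-style worked example: convert a small CSV file to JSON.
// Deliberately verbose; every step is narrated. File names are placeholders.
import { readFile, writeFile } from "node:fs/promises";

// Step 1: read the raw CSV text from disk.
const raw = await readFile("users.csv", "utf8");

// Step 2: split into lines, dropping any trailing blank line.
const lines = raw.split("\n").filter((line) => line.trim().length > 0);

// Step 3: the first line is the header row; the rest are data rows.
const headers = lines[0].split(",");
const rows = lines.slice(1).map((line) => {
  const cells = line.split(",");
  // Step 4: pair each cell with its header name to build one record object.
  return Object.fromEntries(
    headers.map((h, i) => [h.trim(), cells[i]?.trim() ?? ""])
  );
});

// Step 5: write the result as pretty-printed JSON next to the input file.
await writeFile("users.json", JSON.stringify(rows, null, 2));
console.log(`Wrote ${rows.length} records to users.json`);
```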
Second, prioritize an MCP server. If a developer’s Claude Code instance can invoke your tool directly, that’s worth more than a thousand GitHub stars. Stars are a vanity metric for humans. Tool integration is a distribution metric for agents.
Third, rethink what “content marketing” means. It used to mean writing blog posts to build an audience. Now it means producing the training data you want future models to learn from. Every tutorial you write, every example you publish, every answer you post — it’s all input to the next training run. The audience for your content is partly human and partly statistical.
I don’t know if this is the right framework. It’s possible that six months from now, tool discovery will look completely different — maybe there will be an app store for MCP servers, or Anthropic will launch some kind of tool registry, and none of this grassroots optimization will matter.
But I suspect the core insight will hold: if you’re building a developer tool in the AI era, your most important user can’t attend your hackathon, doesn’t care about your logo, and will never star your repo. It just needs to understand what your tool does and how to call it. Everything else is noise.