Agentic AI is the category where we have done the most reading, and the most rejecting, over the last twelve months. The volume of pitches is enormous; the quality is bimodal. There is a small group of founders doing the actual work, and a larger group who have wrapped a model in a UI and assumed that is enough. We say yes to the first group and try to say no to the second quickly.
Here is what we look for, in the rough order we look for it.
1. They have shipped an agent already
Not a prototype. Not a demo. A real agent in real production, even if the deployment is internal, even if it serves one customer, even if it has only run for a weekend. The founders who have shipped have a vocabulary that the deck-only founders do not: they can talk about tool-use failures, retry loops, eval drift, latency budgets, and the specific moment in production when the agent did something they did not predict. That vocabulary is the cheapest signal we have.
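To make that vocabulary concrete: below is a minimal sketch of the retry-and-budget plumbing that shipped teams inevitably accumulate. Everything here is hypothetical (the injected `call_tool` callable, the retry count, the budget); the point is that you only end up writing code like this after production has surprised you.

```python
import time

class ToolCallError(Exception):
    """Raised when a tool call fails after exhausting retries or budget."""

def call_with_budget(call_tool, args, max_retries=3, latency_budget_s=10.0):
    """Retry a flaky tool call with backoff without blowing the latency budget."""
    deadline = time.monotonic() + latency_budget_s
    for attempt in range(max_retries):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise ToolCallError("latency budget exhausted")
        try:
            return call_tool(args, timeout=remaining)
        except TimeoutError:
            # Exponential backoff, capped by whatever budget is left.
            remaining = deadline - time.monotonic()
            time.sleep(min(2 ** attempt * 0.5, max(remaining, 0.0)))
    raise ToolCallError(f"gave up after {max_retries} attempts")
```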
2. They own a workflow, not a feature
The defensible agentic AI businesses are the ones that own an end-to-end workflow inside a specific function. 'AI for sales' is not a workflow. 'Outbound qualification for mid-market SaaS, integrated with the seven systems the SDR already lives in' is a workflow. Founders who can describe the workflow in operator language — handoffs, exceptions, the human-in-the-loop step that doesn't go away — survive contact with real customers. Founders who can only describe the model don't.
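One way to see the difference: a founder who owns the workflow can write it down as handoffs and exceptions. A hypothetical sketch, with every step and system name invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    owner: str         # "agent" or "human"
    on_exception: str  # where the work goes when this step fails

# Invented encoding of an outbound-qualification workflow. The agent's
# job is defined by the handoffs and exceptions, not by the model.
outbound_qualification = [
    Step("enrich_lead",      owner="agent", on_exception="queue_for_rep"),
    Step("draft_outreach",   owner="agent", on_exception="queue_for_rep"),
    Step("approve_outreach", owner="human", on_exception="escalate"),  # the step that doesn't go away
    Step("push_to_crm",      owner="agent", on_exception="retry_then_alert"),
]
```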
3. They have a clear view on evals
Every serious agentic team we have backed has a homegrown eval suite by month three. They can tell you the agent's success rate on their own task definition, segmented by customer and by task type, and they can show you the failure modes. They know which categories of failure are tolerable for their customer and which are not. They have a roadmap for closing the gap.
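A minimal sketch of the shape of such a suite, with an injected `agent` callable and hypothetical field names; real suites are larger, but the segmentation is the point:

```python
from collections import defaultdict

def run_evals(agent, tasks):
    """Success rate segmented by customer and task type, plus failure-mode tallies.

    Each task record is assumed to carry its own success check and a
    failure classifier; both are sketch assumptions, not a real API.
    """
    results = defaultdict(lambda: {"pass": 0, "total": 0})
    failure_modes = defaultdict(int)
    for task in tasks:
        output = agent(task["input"])
        ok = task["check"](output)  # the team's own task definition
        key = (task["customer"], task["task_type"])
        results[key]["total"] += 1
        if ok:
            results[key]["pass"] += 1
        else:
            failure_modes[task["classify_failure"](output)] += 1
    for (customer, task_type), r in sorted(results.items()):
        print(f"{customer}/{task_type}: {r['pass']}/{r['total']} passed")
    return results, failure_modes
```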
Founders who treat evals as 'we will figure that out post-funding' are a no for us, almost always. The eval is the product.
4. They are honest about what the model can't do
The best agentic founders we have met spend half the meeting telling us what their system gets wrong. They are not selling perfection; they are selling a system that improves on the prior state of the world by a measurable margin, and they have a plan for the edge cases. Founders who claim 99% accuracy on tasks with no agreed-upon benchmark have either not run the eval or are not being straight with us. Either way it ends the conversation.
5. They have a pricing model that survives model deflation
If your pricing is tied to API cost and the API cost drops 90%, the customer is going to ask for the discount. Founders who price on outcomes — tickets resolved, hours saved, contracts closed — get to retain the value they create. Founders who price on tokens get squeezed. The ones we back have a theory of value capture that does not depend on the model being expensive.
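The arithmetic is blunt. A worked toy example, every number invented:

```python
# Token-linked, cost-plus pricing under a 90% API price drop.
api_cost_per_task = 0.50
price_per_task    = 2.00  # quoted as a markup on API cost
margin_today = price_per_task - api_cost_per_task           # $1.50

api_cost_after = api_cost_per_task * 0.10                   # 90% deflation
# A cost-plus customer expects the same markup ratio, not the same price:
price_after = api_cost_after * (price_per_task / api_cost_per_task)
margin_after = price_after - api_cost_after                 # $0.15

# Outcome pricing is anchored to the value of the resolved task,
# so the price holds regardless of what the API costs.
outcome_price = 2.00
outcome_margin_after = outcome_price - api_cost_after       # $1.95

print(margin_today, margin_after, outcome_margin_after)     # 1.5 0.15 1.95
```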
“We back agentic founders who are pricing on outcomes, evaluating on their own benchmarks, and shipping into a workflow they understand at operator depth.”
What we are learning to filter out faster
Three patterns we now say no to in the first call.
- Wrapper companies — a thin UI over a single API call, with no proprietary data, no workflow integration, and no defensible moat against the foundation model itself.
- Tool-of-tools — a platform to manage all your other agentic tools. Sometimes this is real infrastructure; more often it is a category that will be eaten by either the model providers or the system-of-record players within eighteen months.
- AGI-adjacent pitches — companies that need progress in foundation models to become viable. We invest in product and distribution risk, not in research risk.
If this sounds like you
Send us the deck and the link to the production agent. We will respond within two business days.