Loops and Swarms
By Mike Montano, March 25, 2026
It feels a bit like “Everything Everywhere All at Once” in AI right now. New models, agent products, software factories, persona systems, research copilots, workflow automations, multi-agent demos, agent-to-agent interactions. It can feel scattered. But two ideas are organizing this for me:
Loops and swarms.
Loops are the persistence layer: memory, goals, recurring workflows, feedback, and background action. They are what allow work, judgment, and context to compound over time, including while you sleep.
Swarms are the cognitive parallelism layer: groups of agents spinning up to do real units of reasoning, judgment, and decision-making in parallel — investigating, delegating, critiquing, synthesizing, and acting. They are what allow a system to bring elastic cognitive labor to a problem.
Loops compound across time.
Swarms multiply the power within each cycle. And when their outputs feed back into the loop’s memory, they can accelerate the compounding itself.
The biggest shift comes when the two combine.
I think this should change both how we operate as investors and what we look for in seed companies.
The loop thesis
The next unlock in AI is not more intelligence on demand. It is systems that persist.
Most AI today is still session-based: prompt, response, done. Some of what is emerging (software factories, long-running agent sessions) is already starting to cross the line into real systematized loops. But even among those of us who live near the edge — in tech, in SF, in venture, seeing this stuff every day — the vast majority of what we do with AI is still not compounding. It is still episodic, even when the episodes are impressive.
A compounding loop engine is a persistent system that remembers context, acts inside a recurring workflow, learns from outcomes, and keeps moving the work forward, often with no human in the loop at all. Human on the loop, not in the loop. Checking in periodically, not steering every turn. And critically, not blocked on a human having to cognitively digest the massive output of each cycle before the next one can begin.
The real unlock is not just that AI helps when you are there. It is that it keeps the loop moving when you are not — while you are away, in meetings, focused elsewhere, with family, or offline. The system is still monitoring, filtering, drafting, reconciling, surfacing, updating, and preparing the next move.
That is the transition from assistant to engine.
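To make "engine" concrete, here is a minimal sketch of the shape in Python. Every name in it — the loop_memory.json file, the act stub, the run_cycles driver — is an illustrative assumption, not a description of any real product: just the skeleton of a system that remembers, acts, records outcomes, and keeps going between human turns.

```python
import json
import time
from pathlib import Path

MEMORY_PATH = Path("loop_memory.json")  # hypothetical persistence location


def load_memory() -> dict:
    """Restore goals, history, and learned defaults from the last run."""
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return {"goals": [], "history": [], "defaults": {}}


def save_memory(memory: dict) -> None:
    MEMORY_PATH.write_text(json.dumps(memory, indent=2))


def act(memory: dict) -> dict:
    """Stub for one unit of background work: monitor, filter, draft, reconcile."""
    return {"summary": "checked inbound, drafted two replies", "ok": True}


def run_cycles(interval_seconds: float, max_cycles: int) -> None:
    """The loop itself: act, record the outcome, carry the context forward.

    A human *on* the loop reviews memory["history"] periodically
    instead of steering every turn.
    """
    memory = load_memory()
    for _ in range(max_cycles):
        outcome = act(memory)
        memory["history"].append({"at": time.time(), "outcome": outcome})
        save_memory(memory)  # state survives between sessions and sleeps
        time.sleep(interval_seconds)


if __name__ == "__main__":
    run_cycles(interval_seconds=1, max_cycles=2)  # short demo values
```

The detail that matters is save_memory inside the loop: the state outlives the session, which is exactly what prompt-response interaction lacks.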
Why loops are the unit of value
Most of the highest-value things in life and work are not one-off tasks. They are recurring loops.
Health is a loop. Parenting is a loop. Scheduling is a loop. Learning is a loop. Investing is a loop. Product iteration is a loop. Sales is a loop. Hiring is a loop. Customer support is a loop.
AI becomes much more powerful when it improves the loop itself, not just one task inside the loop. Once it is embedded, more cycles run and the cycles get tighter. But the deeper unlock is that memory accumulates and judgment becomes reusable — and that is also what enables the human to step back from the loop. Because their judgment is not lost when they step away. It is memorialized, automated, and still steering the system. It can even improve through the loop’s own feedback. More work happens off the clock. Over time, that compounds.
Why “while you sleep” is the deepest part
The real maximalists already get this and are building toward true loop engines. But most of us — even those living on the edge, which is still vanishingly rare in the broader world — are in the interim mode. We sit in Claude or ChatGPT all day doing more powerful and more ambitious work than before, but we also feel cognitively hung over. More output, more mental residue. More leverage, more drain.
If AI only makes us faster and more ambitious operators inside the loop, we may become more productive but also more exhausted and fundamentally limited in what is possible. We are still carrying too much of the loop manually: too much context loading, too many micro-decisions, too much steering, too much synchronous co-processing with the machine.
The better end state is different. The human designs the loop, sets the goals, defines the thresholds, shapes the memory and the judgment architecture, and reviews the important deltas. The system carries the recurring work in the background.
That is why “while you sleep” matters. It is the path out of AI as cognitively draining copilot mode. It is what turns AI from a cognitive intensifier into a cognitive liberator.
The goal is not to spend all day thinking with AI. The goal is to build systems that think usefully between our turns.
Swarms as the accelerant
Now here is what makes loops dramatically more powerful.
A loop engine does not have to act through a single agent. It can spin up many specialized agents in parallel — a swarm — to handle different parts of the cycle: researching, testing, critiquing, comparing, synthesizing, deciding, and acting.
That changes the shape of what a single cycle can accomplish. You get parallel search instead of serial search, specialization instead of one generalist thread, cross-checking instead of single-path failure, and elastic cognitive labor instead of a fixed one-assistant model.
Here is the important nuance: a swarm detached from a loop is just an impressive burst of parallel intelligence. It can be powerful in the moment but leave very little behind. The real value comes when a swarm operates inside a loop, because then it generates vastly more memory updates, judgment refinements, and carry-forward than a single agent ever could. It does not just make the cycle more powerful. It accelerates the compounding by feeding richer outputs back into the loop’s memory and judgment architecture.
Over time, the system also learns how to deploy its swarm capacity more effectively — better task decomposition, better specialization, smarter routing — which means the swarm itself starts to improve across cycles, not just within them.
That is the crucial framing: swarms multiply the power of a cycle; loops create compounding across cycles. And when swarms feed back into loops, the multiplication itself starts to compound.
The most powerful systems will be loop engines that can orchestrate swarms on demand. Persistent memory, reusable judgment, and recurring workflows providing the continuity. Swarms providing the burst capacity. The loop gets smarter each cycle, and within each cycle, the swarm brings more intelligence to bear.
That is not one assistant per person or one agent per workflow. It is a persistent loop engine that can scale its cognitive labor elastically within each turn.
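Here is a rough sketch, under the same illustrative assumptions, of what one swarm-inside-a-loop cycle could look like: specialized agents fan out in parallel, their findings are synthesized, and the synthesis is written back into memory so the next cycle starts richer. The agent stubs and the synthesize step stand in for real model calls.

```python
import asyncio


async def research_agent(task: str) -> str:
    """Stub: would be a model call doing parallel investigation."""
    return f"research notes on {task}"


async def critique_agent(task: str) -> str:
    """Stub: would surface risks and counterarguments."""
    return f"risks and rebuttals for {task}"


async def comparison_agent(task: str) -> str:
    """Stub: would pull comparables for cross-checking."""
    return f"comparables for {task}"


def synthesize(findings: list[str]) -> str:
    """Stub: would reconcile the parallel outputs into one view."""
    return " | ".join(findings)


async def swarm_cycle(task: str, memory: dict) -> str:
    """One cycle: fan out in parallel, synthesize, write back to memory."""
    findings = await asyncio.gather(
        research_agent(task),
        critique_agent(task),
        comparison_agent(task),
    )
    result = synthesize(list(findings))
    # The compounding step: the next cycle inherits this output.
    memory.setdefault("carry_forward", []).append(result)
    return result


if __name__ == "__main__":
    memory: dict = {}
    print(asyncio.run(swarm_cycle("market map for agent infrastructure", memory)))
    print(memory["carry_forward"])
```

The carry_forward append is the whole point. Without it, the swarm is just a burst; with it, each cycle inherits the last one's work.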
Why this is clicking already
Part of why this framing feels right is that we are already seeing real fragments of it.
StrongDM CTO Justin McCarthy’s “software factory” demo was one of the clearest. What felt important was not just that AI could write more code. It was that agents were running repeated cycles of building, testing, and validating against real scenarios — with the human defining intent and the system iterating toward convergence. The shift was from using AI to do the work to using AI to run the system that produces and verifies the work. That is a loop with swarm-like properties.
Our own experiments point in two related directions. Cowork-based research — like going deep on 200 YC companies — is clearly more powerful than plain prompt-response. It is ambitious, parallel, and sustained. But it is still not fully compounding until the outputs, priors, and judgment carry forward into the next cycle.
Polsia (used to build Office Hours by True Ventures) is a different kind of example. It is Claude Code for normals: a system that creates and operates a product, and keeps iterating on it even when you step away. That is closer to the real thing: the human stepping out of the loop while the system continues to build, test, and improve. Persona files and reusable prompts are earlier fragments of the same idea — judgment becoming more reusable, context carrying forward more cleanly.
My partner Rohit’s idea, about your AI-bot talking to my AI-bot, points at another dimension: richer context transfer. The interesting thing is not just automation. It is the possibility that systems can carry more of a person’s knowledge, preferences, history, and curiosity — allowing more exploratory work to happen between human turns. That matters in work, but I think it matters just as much in personal life.
These are still fragments. But they all point in the same direction: less session-bound intelligence, more systematized context, more reusable judgment, more work continuing between turns, and increasingly, more parallel capacity within each turn.
What the components are
A real loops-and-swarms system has a few core components:
- Persistent memory — it retains relevant context over time: goals, preferences, constraints, history, prior decisions, prior outputs, prior mistakes, learned patterns.
- Reusable judgment — it does not just remember facts. It memorializes how decisions were made, what tradeoffs were weighed, what worked and what did not. This is what allows the system to steer well even when the human is not present.
- Clear goals — it knows what it is trying to optimize for. Without this, you get activity but not compounding.
- Recurring triggers — it sits inside a workflow that actually repeats: new inbound, customer feedback, calendar changes, product signals, founder meetings, weekly reviews, support tickets, market events.
- Action capability — it can do something meaningful: classify, summarize, prioritize, draft, recommend, route, compare, escalate, or execute within defined bounds.
- Evaluation and feedback — it gets signals about what is working, whether from user corrections, downstream outcomes, behavioral signals, or explicit metrics.
- Background execution — it continues making useful progress between human turns. This is the core while-you-sleep layer.
- Dynamic orchestration — increasingly, the system can spin up coordinated swarms: different agents that specialize, fan out in parallel, cross-check one another, synthesize findings, and escalate only the highest-value deltas.
- Human steering — the best systems are not full autonomy everywhere. They have clear thresholds for what happens automatically, what gets queued for review, and what gets escalated. Human on the loop, not always in the loop.
The key point is that the output is not just “a task got done.” The output is that the system is in a better state after each cycle. Better memory. Better judgment. Better defaults. Better prioritization. Better recommendations. Better next actions. That is what compounds.
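One way to picture the human-steering component from the list above in code: a small routing policy where the thresholds are the human's standing judgment, set once and reused across cycles. The fields, the threshold values, and the three routes below are assumptions for the sketch, not a real API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Route(Enum):
    AUTO = "execute automatically"
    REVIEW = "queue for periodic human review"
    ESCALATE = "interrupt a human now"


@dataclass
class ProposedAction:
    description: str
    confidence: float  # the system's own estimate, 0..1
    impact: float      # rough cost of being wrong, 0..1


@dataclass
class SteeringPolicy:
    """Thresholds encode the human's judgment so it steers in their absence."""
    auto_confidence: float = 0.9
    escalate_impact: float = 0.8
    review_queue: list[ProposedAction] = field(default_factory=list)

    def route(self, action: ProposedAction) -> Route:
        if action.impact >= self.escalate_impact:
            return Route.ESCALATE
        if action.confidence >= self.auto_confidence:
            return Route.AUTO
        self.review_queue.append(action)
        return Route.REVIEW


if __name__ == "__main__":
    policy = SteeringPolicy()
    print(policy.route(ProposedAction("archive duplicate ticket", 0.97, 0.1)))
    print(policy.route(ProposedAction("send term sheet reply", 0.95, 0.9)))
    print(policy.route(ProposedAction("reprioritize roadmap item", 0.6, 0.3)))
```

The design choice is that the human's judgment lives in the thresholds, not in per-action approvals. That is what makes "human on the loop" operationally different from "human in the loop."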
Why this matters for seed investing
This should become a real lens for us. Historically, the defining startup questions were: what is your distribution? What is your viral strategy? What is your wedge? Those questions still matter. But in the AI era another question is becoming just as important:
What compounds here because of AI?
I think there are two questions under that one.
- First: how is the company leveraging AI to compound internally? Are they using AI to create tighter learning loops, memorialized founder judgment, better knowledge reuse, more off-clock progress, faster iteration, and lower coordination costs across product, GTM, support, recruiting, and decision-making?
- Second: how does the product create compounding results for customers because of AI? Does it sit inside a valuable recurring workflow where memory, context, feedback, personalization, background execution, and swarm-based orchestration make the product more useful over time?
The strongest startups will do both — run on compounding loops internally and deliver compounding loops externally. And the most interesting ones will not just embed one assistant in a workflow but orchestrate swarms within a persistent loop that learns over time.
That is the sharper lens. Not just: what compounds? But: what compounds here because of AI, inside the company and inside the product? And where do loops and swarms combine to create a widening advantage?
Why this matters for how we work
This is not just an investing lens. It should change how we operate.
Our own work is full of loops: sourcing, diligence, decision-making, portfolio support, and thesis development. These can all become more persistent, more memory-backed, and more asynchronous. And within each loop, swarms can dramatically increase what a single cycle accomplishes — a sourcing loop that fans out into multiple research agents, a diligence loop that splits into market mapping, product teardown, founder pattern matching, and rebuttal generation in parallel.
Founder meetings can turn into structured memory. Theme development can compound instead of resetting. Portfolio signals can surface earlier. Internal judgment can become more reusable. Background systems can keep the loop moving between meetings and between days.
That is where leverage comes from: not just faster memo writing or better summaries, but a genuinely different operating cadence.
Bottom line
I think AI is enabling two major unlocks at once: loops that compound across time, and swarms that amplify within each cycle. The most powerful systems will combine both — persistent engines that remember, learn, and keep working while we sleep, with the ability to spin up parallel intelligence on demand.
Those systems will create major divergence in freedom, capability, execution speed, learning rate, and enduring value creation.
For startups, the question is: what compounds here because of AI — internally and for the customer? And where do loops and swarms combine to create something that widens over time?
For us, this is not optional and it is not aspirational. The firms and operators who build real loop engines — internally and in their portfolio lens — will compound their judgment, their pattern recognition, their speed, and their signal quality in ways that leave everyone else behind. It is the next source of structural advantage, and the gap will widen fast. We should be building toward it now, in our own workflows and in how we evaluate every company we meet. AI not as a feature, and not just as a productivity layer, but as the engine that makes compounding possible and dramatically increases the scale and speed at which it happens.

