Workflow Topology

Pattern 6 of 9

Autonomous Agent Loop

The model decides which tools to call, in what order, and when to stop.

[Figure: Autonomous Agent Loop workflow diagram]

The autonomous agent loop is the pattern where the model drives execution. Given a goal and a set of available tools, the model decides at each step what action to take: which tool to call, with what arguments, or whether the task is complete. There is no predetermined sequence of steps. The model observes the results of each action and uses them to decide the next action. This loop continues until the model determines it has achieved the goal or until an external limit is hit.
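The loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: `call_model` and the `search` tool are hypothetical stubs standing in for a real model client and real tools.

```python
# Minimal sketch of an autonomous agent loop. `call_model` and `search`
# are hypothetical stand-ins for a real model client and real tools.

def search(query: str) -> str:
    # Stub tool: a real implementation would hit a search API.
    return f"results for {query!r}"

TOOLS = {"search": search}

def call_model(goal: str, history: list) -> dict:
    # Stub model: a real implementation would send the goal and history
    # to an LLM and parse its response into an action. This stub calls
    # one tool, then declares the task complete.
    if not history:
        return {"action": "tool", "name": "search", "args": {"query": goal}}
    return {"action": "finish", "answer": history[-1]["result"]}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):               # external limit: hard step cap
        decision = call_model(goal, history)
        if decision["action"] == "finish":   # model decides it is done
            return decision["answer"]
        tool = TOOLS[decision["name"]]       # model decides which tool to call
        result = tool(**decision["args"])    # execute and observe the result
        history.append({"call": decision, "result": result})
    raise RuntimeError("step limit reached without finishing")
```

Note that the control flow lives entirely inside the model's decisions: the surrounding code only executes tools, records observations, and enforces the step limit.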

Why it matters

Most real-world tasks cannot be fully specified in advance. The right sequence of steps depends on what you find along the way. A static pipeline cannot adapt to unexpected inputs or results. The autonomous agent loop is the pattern you reach for when the task requires genuine reasoning about intermediate results, not just transformation of a known input into a known output format.

Deep Dive

The ReAct paper from 2022 is the canonical formulation of this pattern. ReAct stands for Reasoning and Acting: the model alternates between producing a reasoning trace, which is a chain of thought about what it knows and what it should do, and producing an action, which is a tool call. The observation from the tool call is added to the context, and the model produces another reasoning step. This interleaving of thought and action is what makes ReAct different from pure chain of thought, where the model reasons without taking actions. The reasoning traces also serve as audit logs: you can read them to understand why the model did what it did.
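One common way to implement the interleaving is to have the model emit its turn as labeled Thought and Action lines, which the harness parses before executing the tool. The line format below is a sketch of this convention, not the paper's exact prompt; `parse_react_turn` is a hypothetical helper.

```python
import re

# Parse one ReAct-style model turn into its reasoning trace and action.
# A real loop would execute the action, append an Observation line to the
# context, and call the model again.

def parse_react_turn(text: str) -> dict:
    thought = re.search(r"Thought:\s*(.+)", text)
    action = re.search(r"Action:\s*(\w+)\[(.*)\]", text)
    final = re.search(r"Final Answer:\s*(.+)", text)
    parsed = {"thought": thought.group(1) if thought else ""}
    if final:                      # model signals completion
        parsed["final"] = final.group(1)
    elif action:                   # model requests a tool call
        parsed["tool"] = action.group(1)
        parsed["arg"] = action.group(2)
    return parsed

turn = "Thought: I need the original 2022 paper.\nAction: search[ReAct paper 2022]"
parsed = parse_react_turn(turn)
```

Keeping the thought alongside the action in the parsed record is what gives you the audit log: each step stores not just what the model did but its stated reason for doing it.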

Tool design matters more in the autonomous loop than in any other pattern. Because the model is deciding which tools to call based on their descriptions, those descriptions are load-bearing text. A tool described vaguely will be called in unexpected ways. A tool whose failure modes are not documented will fail in ways the model cannot reason about. The same care you would apply to writing a public API, including what it does, what it does not do, what inputs are valid, and what errors to expect, should go into writing tool descriptions for an autonomous agent.
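As an illustration of that level of care, here is a tool description written like public API documentation. The schema shape follows common function-calling conventions; the tool itself is hypothetical.

```python
# A tool description that documents what the tool does, what it does not do,
# valid inputs, and failure behavior. The flight-search tool is hypothetical.
flight_search_tool = {
    "name": "search_flights",
    "description": (
        "Search for one-way commercial flights between two airports. "
        "Does NOT book flights and only returns prices in USD. "
        "Returns an empty list, not an error, when no flights match."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {
                "type": "string",
                "description": "IATA airport code of departure, e.g. 'SFO'",
            },
            "destination": {
                "type": "string",
                "description": "IATA airport code of arrival, e.g. 'JFK'",
            },
            "date": {
                "type": "string",
                "description": "Departure date in YYYY-MM-DD format",
            },
        },
        "required": ["origin", "destination", "date"],
    },
}
```

Every clause in the description above is load-bearing: the "does NOT book" sentence prevents the model from treating a search result as a confirmed reservation, and documenting the empty-list behavior lets the model distinguish "no flights" from "tool failed."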

Stopping conditions are the hardest part of this pattern to get right. The model has to decide when it is done, and that decision depends on its interpretation of the original goal. Ambiguous goals produce ambiguous stopping. A model that stops too early leaves the task incomplete. A model that does not stop loops forever, accumulating cost and sometimes making things worse. Practical mitigations include setting a hard step limit, defining explicit completion criteria in the system prompt, and adding a separate verifier that checks whether the final state satisfies the original goal before returning to the caller. The hard step limit is the minimum viable safety net; the others improve the quality of the stopping decision.
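The layered safeguards can be sketched as follows. `agent_step` and `verify` are hypothetical stubs standing in for model calls; the structure to note is the hard step limit wrapping the loop and the verifier gating the model's own "done" claim.

```python
# Sketch of layered stopping safeguards: a hard step limit plus a separate
# verifier pass. `agent_step` and `verify` are hypothetical model-call stubs.

def agent_step(goal: str, state: list) -> dict:
    # Stub: a real implementation would call the model. This one works for
    # two steps, then claims the task is done.
    if len(state) >= 2:
        return {"done": True, "answer": f"answer after {len(state)} steps"}
    return {"done": False, "observation": f"step {len(state)} result"}

def verify(goal: str, answer: str) -> bool:
    # Stub verifier: a real one would be a second model call checking the
    # final state against the original goal.
    return "answer" in answer

def run_with_safeguards(goal: str, max_steps: int = 8) -> str:
    state = []
    for _ in range(max_steps):                     # hard limit: the safety net
        decision = agent_step(goal, state)
        if decision["done"]:
            if verify(goal, decision["answer"]):   # independent completion check
                return decision["answer"]
            # Verifier rejected the claim: record that and keep working.
            state.append({"observation": "verifier rejected the answer"})
            continue
        state.append({"observation": decision["observation"]})
    raise TimeoutError("step limit reached; goal not verifiably achieved")
```

The key design choice is that a rejected verification does not end the run; it becomes an observation the model can react to, while the step limit guarantees the loop still terminates.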

Examples

Open-ended research assistant that searches, reads, and synthesizes without a fixed plan
Software debugging agent that reads error messages, modifies code, and reruns tests
Travel planning agent that checks flights, hotels, and local events to build an itinerary
Data analyst agent that writes and executes queries until it finds the answer to a question

Go Deeper

Paper: ReAct: Synergizing Reasoning and Acting in Language Models
Article: Building Effective Agents
