Workflow Topology
Pattern 3 of 9
Orchestrator-Workers
One LLM plans and dispatches; specialist LLMs or tools execute the actual work.

In the orchestrator-workers pattern, an orchestrator model receives the high-level task and dynamically decides which worker agents or tools to invoke, in what order, and with what inputs. The orchestrator does not execute work directly. Its job is coordination: breaking the task into subtasks, assigning each subtask to the right worker, and synthesizing the results. Workers are specialized: each handles a narrow task type and does not need to know about the overall goal.
Why it matters
This pattern lets you scale complexity without requiring any single model to be competent at everything. The orchestrator can be a large, expensive model optimized for reasoning. Workers can be smaller, cheaper models or deterministic tools, each optimized for its specific task. You get the benefits of specialization without building a monolithic system. The tradeoff is coordination overhead and a larger surface of failure modes to manage.
Deep Dive
The orchestrator performs task decomposition at runtime. It reads the incoming goal and produces a set of subtask assignments, each specifying which worker to call and what input to give it. The workers run and return their outputs; the orchestrator reads those outputs and decides what to do next. This is a loop, and the orchestrator may go through several rounds of worker invocations before it has enough information to produce a final answer. LangGraph models this explicitly with its graph-based execution model, where the orchestrator is a node that can route to any other node based on state.
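The loop described above can be sketched in a few dozen lines. This is an illustrative skeleton, not a LangGraph or any other library's API: `call_orchestrator` stands in for a planning-model call, and the workers are deterministic stubs so the control flow is easy to follow.

```python
# Minimal sketch of the orchestrator-workers loop. All names here are
# assumptions for illustration; in a real system `call_orchestrator` would be
# an LLM call and WORKERS would map to specialist models or tools.
from dataclasses import dataclass

@dataclass
class Assignment:
    worker: str   # which worker to invoke
    payload: str  # input for that worker

# Specialist workers: narrow, deterministic stand-ins for worker models/tools.
WORKERS = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: text[:80],
}

def call_orchestrator(goal: str, history: list) -> "list[Assignment] | str":
    """Stand-in for the planning model: returns either the next batch of
    assignments or, once enough information is gathered, a final answer."""
    if not history:                      # round 1: gather information
        return [Assignment("search", goal)]
    if len(history) == 1:                # round 2: condense it
        return [Assignment("summarize", history[0])]
    return f"Answer for {goal!r}: {history[-1]}"   # done: synthesize

def run(goal: str, max_rounds: int = 5) -> str:
    history: list = []
    for _ in range(max_rounds):          # the orchestration loop
        step = call_orchestrator(goal, history)
        if isinstance(step, str):        # orchestrator produced a final answer
            return step
        for a in step:                   # dispatch each subtask to its worker
            history.append(WORKERS[a.worker](a.payload))
    return "max rounds reached without a final answer"
```

Note that the orchestrator never touches a worker's implementation; it only sees assignments and their outputs, which is what keeps the roles separable.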
Worker design is where most of the implementation work happens. Good workers have narrow, well-defined contracts: clear input format, clear output format, clear failure modes. An orchestrator that receives unpredictable output from workers has to spend tokens parsing and interpreting rather than reasoning. The more predictable each worker is, the more the orchestrator can focus on strategy. In practice, the most reliable workers are ones that produce structured output: JSON, typed objects, or at minimum a format the orchestrator can parse without another LLM call.
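One way to make that contract concrete is a fixed JSON envelope that every worker returns, so the orchestrator can route on a `status` field without spending another LLM call on interpretation. The envelope shape and function names below are assumptions for the sketch, not an established standard.

```python
# Sketch of a structured worker contract: every worker returns the same JSON
# envelope (status + data), and the orchestrator validates it before reasoning.
# The schema and names are illustrative assumptions.
import json
import re

def extract_dates_worker(text: str) -> str:
    """Hypothetical worker: extracts ISO dates, always returns the envelope."""
    dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)
    result = {
        "status": "ok" if dates else "empty",  # clear, enumerable failure modes
        "data": dates,                         # clear output format
    }
    return json.dumps(result)

def parse_worker_output(raw: str) -> dict:
    """Orchestrator-side validation: reject off-contract output early,
    before it pollutes the planning context."""
    out = json.loads(raw)
    if out.get("status") not in {"ok", "empty", "error"}:
        raise ValueError(f"worker broke contract: {out!r}")
    return out
```

The point of the validator is that a contract violation fails loudly at the boundary rather than surfacing later as a confused orchestrator.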
The failure mode specific to this pattern is orchestrator drift: over multiple rounds, the orchestrator loses track of the original goal and starts optimizing for intermediate states. You see this in long-running agent sessions where the model fixates on a subtask and forgets to return to the top-level objective. Mitigation strategies include passing the original goal as persistent context in every orchestrator prompt, setting maximum iteration counts, and adding a separate verifier model that checks whether the orchestrator's plan still aligns with the original intent. None of these are foolproof, but they reduce the probability of silent goal drift.
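Two of those mitigations, goal-persistent prompting and a verifier check, can be sketched directly. `build_prompt` and `plan_aligns_with_goal` are hypothetical names; in practice the alignment check would itself be a cheap model call rather than the trivial keyword heuristic used here.

```python
# Sketch of two drift mitigations: the original goal is re-injected into every
# orchestrator prompt, and a verifier checks each plan against it. Names and
# the keyword heuristic are illustrative assumptions.

def build_prompt(goal: str, round_no: int, scratchpad: "list[str]") -> str:
    # The original goal leads every prompt so it cannot scroll out of context
    # or be displaced by intermediate results.
    return (
        f"ORIGINAL GOAL (do not lose sight of this): {goal}\n"
        f"Round {round_no}. Work so far:\n" + "\n".join(scratchpad) +
        "\nNext: either assign a subtask or answer the ORIGINAL GOAL."
    )

def plan_aligns_with_goal(plan: str, goal: str) -> bool:
    """Stand-in for a verifier-model call; here, a trivial keyword overlap
    check between the proposed plan and the original goal."""
    return any(word in plan.lower() for word in goal.lower().split())
```

A plan that fails the verifier would trigger a re-plan with the mismatch surfaced in the prompt, rather than letting the loop continue silently off-goal.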