Foundations
Pattern 01 of 26
Tool Use and Function Calling
How a model stops talking and starts doing
I think about tool use as the moment a language model stops being a text generator and starts being something you can actually give work to. The model outputs structured JSON. Your code reads that JSON and does something real: calls an API, writes a file, queries a database. That is the whole trick. MCP, computer use, all of it sits on top of this one primitive. If you understand this, you understand why agents can do things at all.
Why it matters
Every agent capability you will ever build traces back here. A model without tool use is genuinely useful, but it is not an agent. The gap between "can generate text about doing X" and "can actually do X" is exactly this primitive. That gap is worth understanding before you build anything on top of it.
Deep Dive
Function calling arrived in the OpenAI API in June 2023. Before that, getting structured output from a model meant parsing free text, which worked until it did not. After June 2023, you describe a function with a name, a set of typed parameters, and a description, and the model returns a JSON object that maps to that function. The model decides which function to call and with what arguments. Your code handles the actual execution. That separation matters more than it might sound: the model is doing reasoning, not running your infrastructure.
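The shape of that separation can be sketched in a few lines. This is a minimal, self-contained illustration, not a real API integration: the tool definition follows the OpenAI-style function-calling shape (name, description, JSON Schema parameters), the `get_weather` function and its return value are hypothetical, and `model_output` stands in for the JSON the model would actually emit.

```python
import json

# Hypothetical tool definition: a name, a description, and typed
# parameters expressed as JSON Schema. This is what you send the model.
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Stand-in implementation; a real one would call a weather API.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}

TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_call(raw: str) -> dict:
    """Parse the model's JSON tool call and run the matching function.

    The model only *requests* the action; this code performs it.
    """
    call = json.loads(raw)
    fn = TOOL_REGISTRY[call["name"]]
    return fn(**call["arguments"])

# Shaped like what the model returns when it decides to call the tool.
model_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(execute_tool_call(model_output))  # {'city': 'Berlin', 'temp_c': 21}
```

The registry lookup is the whole architecture in miniature: the model's output is data, and your code decides what that data is allowed to do.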
Anthropic's tool use API hit general availability in May 2024 and brought parallel tool calling with it. That means a model can issue multiple tool calls in a single turn without waiting for each result to come back before deciding on the next. For agents doing research or pulling data from multiple sources at once, this is the difference between sequential and concurrent work. The Toolformer paper from Meta AI, published in February 2023, provided some of the theoretical grounding for this: it showed models could learn to insert API calls into generated text through self-supervised training, at the points where a real result would improve the prediction.
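What parallel tool calling means for your execution loop can be sketched with standard-library concurrency. Everything here is hypothetical for illustration: the two tools, their return values, and the shape of `turn` as a list of tool-call requests arriving in a single model turn.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools; in a real agent these would be network calls,
# which is where running them concurrently actually pays off.
def search_web(query: str) -> str:
    return f"results for {query}"

def fetch_stock_price(ticker: str) -> float:
    return 123.45

TOOLS = {"search_web": search_web, "fetch_stock_price": fetch_stock_price}

# One model turn containing two tool-call requests: a list, not a
# single call, is what a parallel tool-calling model can emit.
turn = [
    {"name": "search_web", "arguments": {"query": "MCP spec"}},
    {"name": "fetch_stock_price", "arguments": {"ticker": "ACME"}},
]

def run_parallel(calls: list[dict]) -> list:
    # Execute every call in the turn concurrently, then return results
    # in request order so they can go back to the model together.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[c["name"]], **c["arguments"]) for c in calls]
        return [f.result() for f in futures]

print(run_parallel(turn))  # ['results for MCP spec', 123.45]
```

The sequential version of `run_parallel` is a plain loop; the concurrent version costs you one thread pool and buys you the slowest call's latency instead of the sum of all of them.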
The Model Context Protocol is worth understanding in relation to this. MCP sits on top of function calling as a standardization layer. It defines how tools are described and invoked across different clients, so a tool server you write once can work with Claude Code, Cursor, and any other MCP-compatible host without modification. The failure mode I see most often is people conflating the two. Function calling is the mechanism the model uses to request an action. MCP is a convention for how those requests and the tools behind them are structured. You can have function calling without MCP. MCP without the underlying function calling primitive does not exist.
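To make the layering concrete, here is a sketch of the same tool surfaced through an MCP-style interface. MCP transports tool descriptions over JSON-RPC, with methods like `tools/list`; the field names here (`name`, `description`, `inputSchema`) follow my reading of the spec, and the dispatch function is a deliberately minimal illustration, not a real MCP server.

```python
import json

# The same tool as before, described once in MCP-style form so any
# MCP-compatible host can discover it. Shapes are illustrative.
TOOL = {
    "name": "get_weather",
    "description": "Look up the current temperature for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def handle_request(raw: str) -> str:
    """Minimal server-side dispatch for a JSON-RPC tools/list request."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [TOOL]}
    else:
        result = {"note": "only tools/list is handled in this sketch"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
response = json.loads(handle_request(request))
print(response["result"]["tools"][0]["name"])  # get_weather
```

Notice that nothing model-specific appears anywhere in this sketch: MCP standardizes how tools are described and discovered, while the decision to call one still comes from the function-calling primitive underneath.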