Advanced tool-calling capabilities
Many LLMs offer additional configuration options for tool calling. First, some models support parallel function calling: the LLM can call multiple tools in a single response. LangChain supports this natively, since the tool_calls field of an AIMessage is a list. When you return ToolMessage objects as tool results, make sure the tool_call_id field of each ToolMessage matches the id of the generated tool call. This alignment lets LangChain and the underlying LLM pair each result with its request on the next turn.
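The matching can be sketched in plain Python (no LangChain imports; the dict shapes below mirror the structure of AIMessage.tool_calls and ToolMessage, and the tool names and ids are illustrative assumptions):

```python
# Two toy tools the model might call in parallel.
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

TOOLS = {"add": add, "multiply": multiply}

# Shape of AIMessage.tool_calls when the model calls two tools at once:
# each entry carries a name, args, and a generated id.
ai_tool_calls = [
    {"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1"},
    {"name": "multiply", "args": {"a": 4, "b": 5}, "id": "call_2"},
]

def run_tool_calls(tool_calls):
    """Run each requested tool and build one ToolMessage-like dict
    per call, echoing the generated id back as tool_call_id."""
    results = []
    for call in tool_calls:
        output = TOOLS[call["name"]](**call["args"])
        results.append({
            "type": "tool",
            "content": str(output),
            "tool_call_id": call["id"],  # must match the generated id
        })
    return results

tool_messages = run_tool_calls(ai_tool_calls)
```

Because each result carries the id of the call that produced it, the order of the returned messages does not matter; the ids alone establish the pairing.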
Another advanced capability is forcing an LLM to call a tool, or even a specific tool. By default, the LLM decides whether to call a tool at all and, if so, which one to pick from the provided list. Forcing a call is typically handled by tool_choice and/or tool_config arguments passed to the invoke method, but the implementation depends on the model's provider. Anthropic, Google...
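As a rough sketch of what such a setting looks like on the wire, here is a plain-Python illustration of the kinds of tool_choice payloads a provider might accept. The value names ("auto", "required", a named function) follow the OpenAI-style convention and are assumptions; other providers use different keys and values:

```python
def bind_tool_choice(request: dict, choice):
    """Attach a tool_choice setting to an outgoing request payload.

    "auto"     -> the model decides whether to call a tool (the default)
    "required" -> force the model to call *some* tool
    any other  -> force one specific tool by name
    """
    if choice in ("auto", "required"):
        request["tool_choice"] = choice
    else:
        request["tool_choice"] = {
            "type": "function",
            "function": {"name": choice},  # hypothetical tool name
        }
    return request

# Force the (hypothetical) "multiply" tool on the next turn.
req = bind_tool_choice({"model": "my-model", "messages": []}, "multiply")
```

In practice you would not build this payload by hand; a chat model wrapper translates the argument into its provider's format for you, which is exactly why the accepted values differ from provider to provider.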