Runnable Interface

To make it as easy as possible to create custom chains, we've implemented a "Runnable" protocol. Many LangChain components implement the Runnable protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.

This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:

  • stream: stream back chunks of the response
  • invoke: call the chain on an input
  • batch: call the chain on a list of inputs

These also have corresponding async methods that should be used with asyncio await syntax for concurrency:

  • astream: stream back chunks of the response async
  • ainvoke: call the chain on an input async
  • abatch: call the chain on a list of inputs async
  • astream_log: stream back intermediate steps as they happen, in addition to the final response
  • astream_events: (beta) stream events as they happen in the chain (introduced in langchain-core 0.1.14)

The input type and output type vary by component:

| Component    | Input Type                                              | Output Type           |
| ------------ | ------------------------------------------------------- | --------------------- |
| Prompt       | Dictionary                                              | PromptValue           |
| ChatModel    | Single string, list of chat messages or a PromptValue   | ChatMessage           |
| LLM          | Single string, list of chat messages or a PromptValue   | String                |
| OutputParser | The output of an LLM or ChatModel                       | Depends on the parser |
| Retriever    | Single string                                           | List of Documents     |
| Tool         | Single string or dictionary, depending on the tool      | Depends on the tool   |

All runnables expose input and output schemas to inspect the inputs and outputs:

  • input_schema: an input Pydantic model auto-generated from the structure of the Runnable
  • output_schema: an output Pydantic model auto-generated from the structure of the Runnable
