langchain-hs-0.0.1.0: Haskell implementation of LangChain
Copyright: (c) 2025 Tushar Adhatrao
License: MIT
Maintainer: Tushar Adhatrao <tusharadhatrao@gmail.com>
Safe Haskell: Safe-Inferred
Language: Haskell2010

Langchain.Runnable.Core

Description

This module defines the Runnable typeclass, which is the fundamental abstraction in the Haskell implementation of LangChain Expression Language (LCEL). A Runnable represents any component that can process an input and produce an output, potentially with side effects.

The Runnable abstraction enables composition of various LLM-related components into processing pipelines, including:

  • Language Models
  • Prompt Templates
  • Document Retrievers
  • Text Splitters
  • Embedders
  • Vector Stores
  • Output Parsers

By implementing the Runnable typeclass, components can be combined using the combinators provided in Langchain.Runnable.Chain.
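
For example, two runnables whose types line up can be glued together by hand. The helper below is a hypothetical illustration of what the Chain combinators (such as RunnableSequence) do for you; it is not part of the library's API:

composeRunnables
  :: (Runnable r1, Runnable r2, RunnableOutput r1 ~ RunnableInput r2)
  => r1 -> r2 -> RunnableInput r1 -> IO (Either String (RunnableOutput r2))
composeRunnables a b input = do
  -- Run the first component, then feed its output to the second,
  -- short-circuiting on the first error.
  res <- invoke a input
  case res of
    Left err -> return (Left err)
    Right x  -> invoke b x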

Synopsis

class Runnable r where
  type RunnableInput r
  type RunnableOutput r
  invoke :: r -> RunnableInput r -> IO (Either String (RunnableOutput r))
  batch :: r -> [RunnableInput r] -> IO (Either String [RunnableOutput r])
  stream :: r -> RunnableInput r -> (RunnableOutput r -> IO ()) -> IO (Either String ())

Documentation

class Runnable r where Source #

The core Runnable typeclass represents anything that can "run" with an input and produce an output.

This typeclass is the foundation of the LangChain Expression Language (LCEL) in Haskell, allowing different components to be composed into processing pipelines.

To implement a Runnable, you must:

  1. Define the input and output types using associated type families
  2. Implement the invoke method
  3. Optionally override batch and stream for specific optimizations

Example implementation:

data TextSplitter = TextSplitter { chunkSize :: Int, overlap :: Int }

instance Runnable TextSplitter where
  type RunnableInput TextSplitter = String
  type RunnableOutput TextSplitter = [String]

  invoke splitter text = do
    -- Naive fixed-width chunking with overlap; a real splitter would
    -- respect word or sentence boundaries.
    let step = max 1 (chunkSize splitter - overlap splitter)
        go s
          | length s <= chunkSize splitter = [s]
          | otherwise = take (chunkSize splitter) s : go (drop step s)
    return $ Right (go text)
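
Once defined, the instance can be run like any other Runnable:

main :: IO ()
main = do
  let splitter = TextSplitter { chunkSize = 100, overlap = 20 }
  result <- invoke splitter "Some long document text..."
  case result of
    Left err     -> putStrLn ("Error: " ++ err)
    Right chunks -> mapM_ putStrLn chunks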

Minimal complete definition

invoke

Associated Types

type RunnableInput r Source #

The type of input the runnable accepts.

For example, an LLM might accept String or PromptValue as input.

type RunnableOutput r Source #

The type of output the runnable produces.

For example, an LLM might produce String or LLMResult as output.

Methods

invoke :: r -> RunnableInput r -> IO (Either String (RunnableOutput r)) Source #

Core method to invoke (run) this component with a single input.

This is the primary method that must be implemented for any Runnable. It processes a single input and returns either an error message or the output.

Example usage:

  let model = OpenAI { temperature = 0.7, model = "gpt-3.5-turbo" }
  result <- invoke model "Explain monads in simple terms."
  case result of
    Left err -> putStrLn $ "Error: " ++ err
    Right response -> putStrLn response
  

batch :: r -> [RunnableInput r] -> IO (Either String [RunnableOutput r]) Source #

Batch process multiple inputs at once.

This method can be overridden to provide more efficient batch processing, particularly for components like LLMs that may have batch APIs.

The default implementation simply maps invoke over each input and sequences the results.
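
A sketch of what that default amounts to (not necessarily the exact definition in the source):

batch r inputs = do
  results <- mapM (invoke r) inputs  -- invoke each input in order
  return (sequence results)          -- collapse to the first Left, if any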

Example usage:

  let retriever = VectorDBRetriever { ... }
      questions = ["What is Haskell?", "Explain monads.", "How do I install GHC?"]
  result <- batch retriever questions
  case result of
    Left err -> putStrLn $ "Batch processing failed: " ++ err
    Right docs -> mapM_ print docs
  

stream :: r -> RunnableInput r -> (RunnableOutput r -> IO ()) -> IO (Either String ()) Source #

Stream results for components that support streaming.

This method is particularly useful for LLMs that can stream tokens as they're generated, allowing for more responsive user interfaces.

The callback function is called with each piece of the output as it becomes available.
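
For components without native streaming, a reasonable fallback (and a plausible shape for a default, though not confirmed by the source) is to invoke once and hand the whole output to the callback:

stream r input callback = do
  result <- invoke r input
  case result of
    Left err  -> return (Left err)
    Right out -> callback out >> return (Right ())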

Example usage:

  let model = OpenAI { temperature = 0.7, model = "gpt-3.5-turbo", streaming = True }
  result <- stream model "Write a story about a programmer." $ \chunk -> do
    putStr chunk
    hFlush stdout
  case result of
    Left err -> putStrLn $ "\nError: " ++ err
    Right _ -> putStrLn "\nStreaming completed successfully."
  

Instances

Runnable Ollama Source #

Defined in Langchain.LLM.Ollama

Runnable WindowBufferMemory Source #

Defined in Langchain.Memory.Core

Runnable PromptTemplate Source #

Defined in Langchain.PromptTemplate

Runnable WikipediaTool Source #

Implements the Runnable compatibility layer. Note: the current implementation returns only Right values, though the type signature allows for future error handling.

Example usage:

response <- invoke defaultWikipediaTool "Artificial intelligence"
case response of
  Right content -> putStrLn content
  Left err -> print err

Defined in Langchain.Tool.WikipediaTool

VectorStore a => Runnable (VectorStoreRetriever a) Source #

Runnable interface for vector store retrievers. Allows integration with LangChain workflows and expressions.

Example:

>>> invoke (VectorStoreRetriever store) "Quantum computing"
Right [Document "Quantum theory...", ...]

Defined in Langchain.Retriever.Core

(Runnable r, Ord (RunnableInput r)) => Runnable (Cached r) Source #

Make Cached a Runnable that caches results, so repeated invocations with the same input can be served from the cache (a sketch follows this entry).

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (Cached r) Source #

type RunnableOutput (Cached r) Source #
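
The Ord constraint on the input suggests a map-backed cache. A minimal sketch of the idea, assuming a hypothetical IORef-backed Map rather than the actual fields of Cached:

import Data.IORef (IORef, readIORef, modifyIORef')
import qualified Data.Map.Strict as Map

-- Hypothetical caching wrapper: serve hits from the map, memoize
-- successful results on a miss, and pass errors through uncached.
cachedInvoke
  :: (Runnable r, Ord (RunnableInput r))
  => r
  -> IORef (Map.Map (RunnableInput r) (RunnableOutput r))
  -> RunnableInput r
  -> IO (Either String (RunnableOutput r))
cachedInvoke r ref input = do
  m <- readIORef ref
  case Map.lookup input m of
    Just out -> return (Right out)
    Nothing  -> do
      res <- invoke r input
      case res of
        Right out -> modifyIORef' ref (Map.insert input out) >> return res
        Left _    -> return res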

Runnable r => Runnable (Retry r) Source #

Make Retry a Runnable that re-runs the wrapped component when it fails (a sketch follows this entry).

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (Retry r) Source #

type RunnableOutput (Retry r) Source #
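
A minimal sketch of the retry idea, written against invoke; the attempt count here is a hypothetical parameter, not necessarily how Retry stores its configuration:

-- Hypothetical retry loop: re-run invoke until it succeeds or the
-- attempt budget is exhausted, returning the last error.
retryInvoke
  :: Runnable r
  => Int -> r -> RunnableInput r -> IO (Either String (RunnableOutput r))
retryInvoke attempts r input = do
  res <- invoke r input
  case res of
    Right _ -> return res
    Left err
      | attempts > 1 -> retryInvoke (attempts - 1) r input
      | otherwise    -> return (Left err)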

Runnable r => Runnable (WithTimeout r) Source #

Make WithTimeout a Runnable that aborts the wrapped component if it does not finish within a time limit (a sketch follows this entry).

Defined in Langchain.Runnable.Utils
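
A minimal sketch of the timeout idea using System.Timeout from base; the microsecond budget is a hypothetical parameter:

import System.Timeout (timeout)

-- Hypothetical timeout wrapper: give invoke a microsecond budget and
-- map expiry to a Left, matching the Runnable error convention.
invokeWithTimeout
  :: Runnable r
  => Int -> r -> RunnableInput r -> IO (Either String (RunnableOutput r))
invokeWithTimeout micros r input = do
  res <- timeout micros (invoke r input)
  case res of
    Nothing  -> return (Left "Runnable timed out")
    Just out -> return out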

(Agent a, BaseMemory m) => Runnable (AgentExecutor a m) Source #

Runnable instance for agent execution. Allows integration with LangChain workflows.

Example:

response <- invoke myAgentExecutor "Solve 5+3"
case response of
  Right result -> print result
  Left err -> print err

Defined in Langchain.Agents.Core

(Retriever a, LLM m) => Runnable (MultiQueryRetriever a m) Source #

Runnable interface implementation. Allows integration with LangChain workflows:

>>> invoke mqRetriever "AI applications"
Right [Document "Machine learning...", ...]

Defined in Langchain.Retriever.MultiQueryRetriever

Runnable (RunnableBranch a b) Source #

Defined in Langchain.Runnable.Chain

Runnable (RunnableSequence a b) Source #

Defined in Langchain.Runnable.Chain

(BaseMemory m, LLM l) => Runnable (ConversationChain m l) Source #

Make ConversationChain an instance of Runnable to enable composition with other components.

Defined in Langchain.Runnable.ConversationChain

Runnable r => Runnable (WithConfig config r) Source #

Make WithConfig a Runnable that applies the stored configuration to the wrapped component.

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (WithConfig config r) Source #

type RunnableOutput (WithConfig config r) Source #

Methods

invoke :: WithConfig config r -> RunnableInput (WithConfig config r) -> IO (Either String (RunnableOutput (WithConfig config r))) Source #

batch :: WithConfig config r -> [RunnableInput (WithConfig config r)] -> IO (Either String [RunnableOutput (WithConfig config r)]) Source #

stream :: WithConfig config r -> RunnableInput (WithConfig config r) -> (RunnableOutput (WithConfig config r) -> IO ()) -> IO (Either String ()) Source #

Runnable (RunnableMap a b c) Source #

Defined in Langchain.Runnable.Chain

Associated Types

type RunnableInput (RunnableMap a b c) Source #

type RunnableOutput (RunnableMap a b c) Source #