langchain-hs-0.0.1.0: Haskell implementation of Langchain
Copyright: (c) 2025 Tushar Adhatrao
License: MIT
Maintainer: Tushar Adhatrao <tusharadhatrao@gmail.com>
Safe Haskell: Safe-Inferred
Language: Haskell2010

Langchain.Runnable.Utils

Description

This module provides various utility wrappers for Runnable components that enhance their behavior with common patterns like:

  • Configuration management
  • Result caching
  • Automatic retries
  • Timeout handling

These utilities follow the decorator pattern, wrapping existing Runnable instances with additional functionality while preserving the original input/output types.

Note: This module is experimental and the API may change in future versions.

Synopsis

Configuration Management

data WithConfig config r Source #

Wrapper for Runnable components with configurable behavior.

This wrapper allows attaching configuration data to a Runnable instance. The configuration data can be accessed and modified without changing the underlying Runnable implementation.

Example:

data LLMConfig = LLMConfig
  { temperature :: Float
  , maxTokens :: Int
  }

let
  baseModel = OpenAI defaultOpenAIConfig
  configuredModel = WithConfig
    { configuredRunnable = baseModel
    , runnableConfig = LLMConfig 0.7 100
    }

-- Later, modify the configuration without changing the model
let updatedModel = configuredModel { runnableConfig = LLMConfig 0.9 150 }

-- Use the model as a regular Runnable
result <- invoke updatedModel "Explain monads in Haskell"

Constructors

Runnable r => WithConfig 

Fields

Instances

Runnable r => Runnable (WithConfig config r) Source #

Make WithConfig a Runnable that applies the configuration

Instance details

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (WithConfig config r) Source #

type RunnableOutput (WithConfig config r) Source #

Methods

invoke :: WithConfig config r -> RunnableInput (WithConfig config r) -> IO (Either String (RunnableOutput (WithConfig config r))) Source #

batch :: WithConfig config r -> [RunnableInput (WithConfig config r)] -> IO (Either String [RunnableOutput (WithConfig config r)]) Source #

stream :: WithConfig config r -> RunnableInput (WithConfig config r) -> (RunnableOutput (WithConfig config r) -> IO ()) -> IO (Either String ()) Source #

type RunnableInput (WithConfig config r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

type RunnableOutput (WithConfig config r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

Caching

data Cached r Source #

Cache results of a Runnable to avoid duplicate computations.

This wrapper stores previously computed results in a thread-safe cache. When an input is encountered again, the cached result is returned instead of recomputing it, which can significantly improve performance for expensive operations or when the same inputs are frequently processed.

Note: The cached results are stored in-memory and will be lost when the program terminates. For persistent caching, consider implementing a custom wrapper that uses database storage.

The RunnableInput type must have an Ord instance so it can be used as a key in the cache's Map.
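The caching behaviour described above can be sketched independently of the library as a small memoiser over an effectful function (a sketch only; memoizeIO is a hypothetical helper, not part of this module):

```haskell
import Control.Concurrent.MVar (newMVar, modifyMVar, readMVar)
import qualified Data.Map.Strict as Map

-- Memoise an effectful function behind an MVar-guarded Map,
-- mirroring what the Cached wrapper does for invoke.
memoizeIO :: Ord k => (k -> IO v) -> IO (k -> IO v)
memoizeIO f = do
  cacheVar <- newMVar Map.empty
  pure $ \k -> modifyMVar cacheVar $ \cache ->
    case Map.lookup k cache of
      Just v  -> pure (cache, v)       -- cache hit: return the stored result
      Nothing -> do                    -- cache miss: compute, store, return
        v <- f k
        pure (Map.insert k v cache, v)
```

Because modifyMVar holds the lock while the computation runs, concurrent identical requests will not duplicate work; the trade-off is that unrelated requests also serialize on the cache.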

Constructors

(Runnable r, Ord (RunnableInput r)) => Cached 

Fields

Instances

(Runnable r, Ord (RunnableInput r)) => Runnable (Cached r) Source #

Make Cached a Runnable that uses a cache

Instance details

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (Cached r) Source #

type RunnableOutput (Cached r) Source #

type RunnableInput (Cached r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

type RunnableOutput (Cached r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

cached :: (Runnable r, Ord (RunnableInput r)) => r -> IO (Cached r) Source #

Create a new cached Runnable.

This function initializes an empty cache and wraps the provided Runnable in a Cached wrapper.

Example:

main = do
  -- Create a cached LLM to avoid redundant API calls
  let expensiveModel = OpenAI { model = "gpt-4", temperature = 0.7 }
  cachedModel <- cached expensiveModel

  -- These will all use the same cached result for identical inputs
  result1 <- invoke cachedModel "What is functional programming?"
  result2 <- invoke cachedModel "What is functional programming?"
  result3 <- invoke cachedModel "What is functional programming?"

  -- This will compute a new result
  result4 <- invoke cachedModel "What is Haskell?"

Resilience Patterns

data Retry r Source #

Add retry capability to any Runnable.

This wrapper automatically retries failed operations up to a specified number of times with a configurable delay between attempts. This is particularly useful for network operations or external API calls that might fail transiently.

Example:

-- Create an LLM with automatic retry for network failures
let
  baseModel = OpenAI defaultConfig
  resilientModel = Retry
    { retryRunnable = baseModel
    , maxRetries = 3
    , retryDelay = 1000000  -- 1 second delay between retries
    }

-- If the API call fails, it will retry up to 3 times
result <- invoke resilientModel "Generate a story about a Haskell programmer"
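The retry loop described above can be sketched as a plain IO helper over the Either-based result type that invoke returns (a sketch; retryIO is a hypothetical name, not part of this module):

```haskell
import Control.Concurrent (threadDelay)

-- Re-run an action that reports failure via Left, up to a maximum
-- number of retries, sleeping between attempts. This mirrors the
-- behaviour the Retry wrapper adds around invoke.
retryIO
  :: Int                   -- maximum retries after the first attempt
  -> Int                   -- delay between attempts, in microseconds
  -> IO (Either String a)  -- action to run
  -> IO (Either String a)
retryIO maxRetries delayMicros action = go maxRetries
  where
    go n = do
      result <- action
      case result of
        Right ok -> pure (Right ok)
        Left err
          | n > 0     -> threadDelay delayMicros >> go (n - 1)
          | otherwise -> pure (Left err)  -- out of retries: surface last error
```

Note that this only retries failures reported as Left values; exceptions thrown by the action would propagate and bypass the loop unless caught separately.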

Constructors

Runnable r => Retry 

Fields

Instances

Runnable r => Runnable (Retry r) Source #

Make Retry a Runnable that retries on failure

Instance details

Defined in Langchain.Runnable.Utils

Associated Types

type RunnableInput (Retry r) Source #

type RunnableOutput (Retry r) Source #

type RunnableInput (Retry r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

type RunnableOutput (Retry r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

data WithTimeout r Source #

Add timeout capability to any Runnable.

This wrapper enforces a maximum execution time for the wrapped Runnable. If the operation takes longer than the specified timeout, it is cancelled and an error is returned. This is useful for limiting the execution time of potentially long-running operations.

Example:

-- Create an LLM with a 30-second timeout
let
  baseModel = OpenAI defaultConfig
  timeboxedModel = WithTimeout
    { timeoutRunnable = baseModel
    , timeoutMicroseconds = 30000000  -- 30 seconds
    }

-- If the API call takes longer than 30 seconds, it will be cancelled
result <- invoke timeboxedModel "Generate a detailed analysis of Haskell's type system"

Note: This implementation uses forkIO and killThread, which may not always cleanly terminate the underlying operation, especially for certain types of I/O. For critical applications, consider implementing a more robust timeout mechanism.
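As one such more robust alternative, the same behaviour can be sketched with System.Timeout.timeout from base, which interrupts the action with an asynchronous exception rather than killThread (a sketch; withTimeoutIO is a hypothetical helper, not part of this module):

```haskell
import Control.Concurrent (threadDelay)  -- used only in the usage example
import System.Timeout (timeout)

-- Run an action with a time limit, mapping a timeout to a Left error,
-- matching the Either String result shape that invoke uses.
withTimeoutIO :: Int -> IO (Either String a) -> IO (Either String a)
withTimeoutIO micros action = do
  result <- timeout micros action  -- Nothing means the deadline passed
  pure $ case result of
    Just r  -> r
    Nothing -> Left "Operation timed out"
```

Even timeout cannot interrupt uninterruptible foreign calls, so blocking FFI work may still overrun the deadline.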

Constructors

Runnable r => WithTimeout 

Fields

Instances

Runnable r => Runnable (WithTimeout r) Source #

Make WithTimeout a Runnable that times out

Instance details

Defined in Langchain.Runnable.Utils

type RunnableInput (WithTimeout r) Source # 
Instance details

Defined in Langchain.Runnable.Utils

type RunnableOutput (WithTimeout r) Source # 
Instance details

Defined in Langchain.Runnable.Utils