langchain-hs-0.0.1.0: Haskell implementation of Langchain
Copyright: (c) 2025 Tushar Adhatrao
License: MIT
Maintainer: Tushar Adhatrao <tusharadhatrao@gmail.com>
Stability: experimental
Safe Haskell: Safe-Inferred
Language: Haskell2010

Langchain.LLM.Core

Description

This module provides the core types and typeclasses for the Langchain library in Haskell, which is designed to facilitate interaction with language models (LLMs). It defines a standardized interface that allows different LLM implementations to be used interchangeably, promoting code reuse and modularity.

The main components include:

  • The LLM typeclass, a uniform interface (generate, chat, stream) that every model backend implements

  • Message, Role, and MessageData, for representing conversations

  • Params, for configuring model invocations

  • StreamHandler, callbacks for processing streamed responses

  • defaultParams and defaultMessageData, sensible defaults for typical usage

This module is intended to be used as the foundation for building applications that interact with LLMs, providing a consistent API across different model implementations.
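
Because every backend implements the same LLM interface, application code can stay polymorphic over the concrete model. A minimal sketch (the askModel helper is illustrative, not part of the library; OverloadedStrings assumed):

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Langchain.LLM.Core

-- Works unchanged against any backend: Ollama, OpenAI, or a test double.
askModel :: LLM m => m -> Text -> IO ()
askModel model prompt = do
  result <- generate model prompt Nothing
  case result of
    Left err  -> putStrLn ("Error: " ++ err)
    Right res -> print res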

Synopsis

LLM Typeclass

class LLM m where Source #

Typeclass defining the interface for language models. This provides methods for invoking the model, chatting with it, and streaming responses.

-- A minimal test double implementing the full interface:
data TestLLM = TestLLM
  { responseText :: Text
  , shouldSucceed :: Bool
  }

instance LLM TestLLM where
  generate m _ _ = pure $ if shouldSucceed m
    then Right (responseText m)
    else Left "Test error"
  -- Delegate to generate; the test double ignores its input anyway.
  chat m _ _ = generate m "" Nothing
  stream m _ handler _ = do
    onToken handler (responseText m)
    onComplete handler
    pure (Right ())

-- Using a real backend:
ollamaLLM = Ollama "llama3.2:latest" [stdOutCallback]
response <- generate ollamaLLM "What is Haskell?" Nothing

Methods

generate Source #

Arguments

:: m

The language model instance to query.

-> Text

The prompt to send to the model.

-> Maybe Params

Optional configuration parameters.

-> IO (Either String Text) 

Invoke the language model with a single prompt. Suitable for simple queries; returns either an error or generated text.

chat Source #

Arguments

:: m

The language model instance to chat with.

-> ChatMessage

A non-empty list of messages to send to the model.

-> Maybe Params

Optional configuration parameters.

-> IO (Either String Text)

The result of the chat, either an error or the response text.

Chat with the language model using a sequence of messages. Suitable for multi-turn conversations; returns either an error or the response.
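
For example, a two-message conversation can be built and sent like this (a sketch; ollamaLLM stands in for any LLM instance, and OverloadedStrings is assumed):

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.Text.IO as TIO

conversation :: ChatMessage
conversation =
  Message { role = System, content = "You are a concise assistant.", messageData = defaultMessageData }
    :| [Message { role = User, content = "What is a monad?", messageData = defaultMessageData }]

runChat :: IO ()
runChat = do
  result <- chat ollamaLLM conversation Nothing
  either (putStrLn . ("Error: " ++)) TIO.putStrLn result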

stream :: m -> ChatMessage -> StreamHandler -> Maybe Params -> IO (Either String ()) Source #

Stream responses from the language model for a sequence of messages. Uses callbacks to process tokens in real-time; returns either an error or unit.
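
A corresponding streaming sketch, reusing conversation and ollamaLLM from the chat example above; the inline handler simply prints tokens as they arrive:

import qualified Data.Text.IO as TIO

streamDemo :: IO ()
streamDemo = do
  let handler = StreamHandler
        { onToken = TIO.putStr            -- emit each token immediately
        , onComplete = putStrLn "\n[done]"
        }
  res <- stream ollamaLLM conversation handler Nothing
  either putStrLn pure res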

Instances

Instances details
LLM Ollama Source #

Ollama implementation of the LLM typeclass. Note: the Params argument is currently ignored (see TODOs).

Example instance usage:

-- Generate text with error handling (TIO = Data.Text.IO)
result <- generate ollamaLLM "Hello" Nothing
case result of
  Left err -> putStrLn $ "Error: " ++ err
  Right res -> TIO.putStrLn res
Instance details

Defined in Langchain.LLM.Ollama

LLM OpenAI Source # 
Instance details

Defined in Langchain.LLM.OpenAI

Parameters

data Message Source #

Represents a message in a conversation, including the sender's role, content, and additional metadata. https://python.langchain.com/docs/concepts/messages/

userMsg :: Message
userMsg = Message
  { role = User
  , content = "Explain functional programming"
  , messageData = defaultMessageData
  }

Constructors

Message 

Fields

  • role :: Role

    The role of the message sender

  • content :: Text

    The textual content of the message

  • messageData :: MessageData

    Additional metadata (see MessageData)

Instances

Instances details
Show Message Source # 
Instance details

Defined in Langchain.LLM.Core

Eq Message Source # 
Instance details

Defined in Langchain.LLM.Core

Methods

(==) :: Message -> Message -> Bool #

(/=) :: Message -> Message -> Bool #

data Role Source #

Enumeration of possible roles in a conversation.

Constructors

System

System role, typically for instructions or context

User

User role, for user inputs

Assistant

Assistant role, for model responses

Tool

Tool role, for tool outputs or interactions
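
Role is a plain enumeration, so it can be pattern-matched directly. For illustration (the roleLabel helper and its labels are hypothetical, not part of the library):

roleLabel :: Role -> String
roleLabel System    = "system"
roleLabel User      = "user"
roleLabel Assistant = "assistant"
roleLabel Tool      = "tool"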

Instances

Instances details
FromJSON Role Source # 
Instance details

Defined in Langchain.LLM.Core

ToJSON Role Source # 
Instance details

Defined in Langchain.LLM.Core

Generic Role Source # 
Instance details

Defined in Langchain.LLM.Core

Associated Types

type Rep Role :: Type -> Type #

Methods

from :: Role -> Rep Role x #

to :: Rep Role x -> Role #

Show Role Source # 
Instance details

Defined in Langchain.LLM.Core

Methods

showsPrec :: Int -> Role -> ShowS #

show :: Role -> String #

showList :: [Role] -> ShowS #

Eq Role Source # 
Instance details

Defined in Langchain.LLM.Core

Methods

(==) :: Role -> Role -> Bool #

(/=) :: Role -> Role -> Bool #

type Rep Role Source # 
Instance details

Defined in Langchain.LLM.Core

type Rep Role = D1 ('MetaData "Role" "Langchain.LLM.Core" "langchain-hs-0.0.1.0-inplace" 'False) ((C1 ('MetaCons "System" 'PrefixI 'False) (U1 :: Type -> Type) :+: C1 ('MetaCons "User" 'PrefixI 'False) (U1 :: Type -> Type)) :+: (C1 ('MetaCons "Assistant" 'PrefixI 'False) (U1 :: Type -> Type) :+: C1 ('MetaCons "Tool" 'PrefixI 'False) (U1 :: Type -> Type)))

type ChatMessage = NonEmpty Message Source #

Type alias for NonEmpty Message
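
Because ChatMessage is just NonEmpty Message, the usual Data.List.NonEmpty tools apply; e.g., a single-turn conversation (reusing userMsg from the Message example above):

import Data.List.NonEmpty (NonEmpty (..))

singleTurn :: ChatMessage
singleTurn = userMsg :| []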

data MessageData Source #

Additional data for a message, such as a name or tool calls. This type is designed for extensibility, allowing new fields to be added without breaking changes. Use defaultMessageData for typical usage.

Constructors

MessageData 

Fields

Instances

Instances details
FromJSON MessageData Source #

JSON deserialization for MessageData.

Instance details

Defined in Langchain.LLM.Core

ToJSON MessageData Source #

JSON serialization for MessageData.

Instance details

Defined in Langchain.LLM.Core

Show MessageData Source # 
Instance details

Defined in Langchain.LLM.Core

Eq MessageData Source # 
Instance details

Defined in Langchain.LLM.Core

data Params Source #

Parameters for configuring language model invocations. These parameters control aspects such as randomness, length, and stopping conditions of generated output. This type corresponds to standard parameters in Python Langchain: https://python.langchain.com/docs/concepts/chat_models/#standard-parameters

Example usage:

myParams :: Params
myParams = defaultParams
  { temperature = Just 0.7
  , maxTokens = Just 100
  }

Constructors

Params 

Fields

  • temperature :: Maybe Double

    Sampling temperature. Higher values increase randomness (creativity), while lower values make output more focused.

  • maxTokens :: Maybe Integer

    Maximum number of tokens to generate.
  • topP :: Maybe Double

    Nucleus sampling parameter. Considers tokens whose cumulative probability mass is at least topP.

  • n :: Maybe Int

    Number of responses to generate (e.g., for sampling multiple outputs).

  • stop :: Maybe [Text]

    Sequences where generation should stop (e.g., ["\n"] stops at newlines); see the sketch after this list.
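
A sketch of passing such per-call overrides to generate (someLLM stands in for any LLM instance; OverloadedStrings assumed):

stopAtBlankLine :: Params
stopAtBlankLine = defaultParams
  { temperature = Just 0.2
  , stop = Just ["\n\n"]   -- stop at the first blank line
  }

-- res <- generate someLLM "List three Haskell build tools:" (Just stopAtBlankLine)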

Instances

Instances details
Show Params Source # 
Instance details

Defined in Langchain.LLM.Core

Eq Params Source # 
Instance details

Defined in Langchain.LLM.Core

Methods

(==) :: Params -> Params -> Bool #

(/=) :: Params -> Params -> Bool #

data StreamHandler Source #

Callbacks for handling streaming responses from a language model. This allows real-time processing of tokens as they are generated and an action upon completion.

printHandler :: StreamHandler
printHandler = StreamHandler
  { onToken = \t -> putStrLn ("Token: " ++ unpack t)  -- unpack from Data.Text
  , onComplete = putStrLn "Streaming complete"
  }

Constructors

StreamHandler 

Fields

  • onToken :: Text -> IO ()

    Action to perform for each token received

  • onComplete :: IO ()

    Action to perform when streaming is complete

Default Values

defaultParams :: Params Source #

Default parameters with all fields set to Nothing. Use this when no specific configuration is needed for the language model.

>>> generate myLLM "Hello" (Just defaultParams)

defaultMessageData :: MessageData Source #

Default message data with all fields set to Nothing. Use this for standard messages without additional metadata.