llama-cpp-hs: Haskell FFI bindings to the llama.cpp LLM inference library

[ ai, ffi, library, mit, natural-language-processing ]
Versions 0.1.0.0
Change log CHANGELOG.md
Dependencies base (>=4.7 && <5), bytestring (>=0.9 && <0.13), derive-storable (>=0.2 && <0.4)
License MIT
Copyright 2025 tushar
Author tushar
Maintainer tusharadhatrao@gmail.com
Category ai, ffi, natural-language-processing
Home page https://github.com/tusharad/llama-cpp-hs#readme
Bug tracker https://github.com/tusharad/llama-cpp-hs/issues
Source repo head: git clone https://github.com/tusharad/llama-cpp-hs
Uploaded by tusharad at 2025-05-21T09:13:36Z
Downloads 3 total (3 in the last 30 days)
Rating 2.0 (votes: 1) [estimated by Bayesian average]
Status Docs uploaded by user
Build status unknown [no reports yet]

Readme for llama-cpp-hs-0.1.0.0


llama-cpp-hs

Haskell bindings for llama.cpp

This package provides both low-level and high-level interfaces for interacting with the llama.cpp inference engine via the Haskell FFI. It lets you run LLMs locally on a pure C/C++ engine, with support for GPU acceleration and quantized models.

Features

  • Low-level access to the full llama.cpp C API via the Haskell FFI.
  • Higher-level convenience functions for easier model interaction.
  • Examples provided for quickly getting started.

Example Usage

Check out the /examples directory to see how to load and query models directly from Haskell.
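
As a taste of what that looks like, here is a minimal sketch of a generation session. The module and function names (LlamaCpp, loadModel, generate, freeModel) and the model path are assumptions made for illustration only; consult the /examples directory and the Haddock documentation for the package's actual API.

module Main where

-- Hypothetical high-level API: the module name and the loadModel,
-- generate, and freeModel functions are illustrative assumptions,
-- not the package's confirmed interface.
import LlamaCpp (loadModel, generate, freeModel)

main :: IO ()
main = do
  -- Path to a locally downloaded GGUF model (see the Models section).
  model <- loadModel "./models/model.gguf"
  reply <- generate model "Explain the Haskell FFI in one sentence."
  putStrLn reply
  freeModel model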


Setup

1. Using Nix

Ensure that Nix is installed on your system.

Then, enter the development shell:

nix-shell

Build the project using Stack:

stack build

2. Using Stack (Manual Setup)

If you prefer not to use Nix, follow these steps:

  1. Clone and install llama.cpp manually.
  2. Make sure llama.h is available at /usr/local/include/ and a compiled libllama.a or libllama.so at /usr/local/lib/ (a linking smoke test follows this list).
  3. Install Stack if you haven’t already: https://docs.haskellstack.org/en/stable/install_and_upgrade/
  4. Then proceed with:
stack build
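
To verify that step 2 worked and the toolchain can see both llama.h and the library, the following raw-FFI smoke test binds llama_print_system_info from the llama.cpp C API directly, independently of this package's own wrappers:

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C.String (CString, peekCString)

-- Direct binding to the llama.cpp C API; requires linking against libllama.
foreign import ccall unsafe "llama_print_system_info"
  c_llama_print_system_info :: IO CString

main :: IO ()
main = do
  info <- c_llama_print_system_info
  putStrLn =<< peekCString info

Compile with something like ghc Smoke.hs -lllama -L/usr/local/lib; if it prints a capability string (AVX, CUDA, and so on), the linker can find the library and you are ready for stack build.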

Models

To use this library, you'll need to download one of the many open-source GGUF models available on Hugging Face; search there for models published in the GGUF format.


Current State

The codebase is still under active development and may undergo breaking changes. Use it with caution in production environments.

Pull requests, issues, and community contributions are highly encouraged!


Contributing

Contributions are welcome!


License

This project is licensed under the MIT License.


Thank You

Thanks to ggml-org/llama.cpp for making local LLM inference fast, lightweight, and accessible!