
🏭 llama-cpp-python Prebuilt Wheels

The most complete collection of prebuilt llama-cpp-python wheels for manylinux x86_64.

Stop compiling. Start inferencing.

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl

📊 What's Inside

| Metric       | Count                                  |
|--------------|----------------------------------------|
| Total Wheels | 3,794+                                 |
| Versions     | 0.3.0 – 0.3.16 (17 versions)           |
| Python       | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
| Platform     | manylinux_2_31_x86_64                  |
| Backends     | 8                                      |
| CPU Profiles | 13+ flag combinations                  |

⚡ Backends

| Backend   | Tag        | Description                                              |
|-----------|------------|----------------------------------------------------------|
| OpenBLAS  | `openblas` | CPU BLAS acceleration; best general-purpose choice       |
| Intel MKL | `mkl`      | Intel Math Kernel Library; fastest on Intel CPUs         |
| Basic     | `basic`    | No BLAS; maximum compatibility, no extra dependencies    |
| Vulkan    | `vulkan`   | Universal GPU acceleration; works on NVIDIA, AMD, Intel  |
| CLBlast   | `clblast`  | OpenCL GPU acceleration                                  |
| SYCL      | `sycl`     | Intel GPU acceleration (Data Center, Arc, iGPU)          |
| OpenCL    | `opencl`   | Generic OpenCL GPU backend                               |
| RPC       | `rpc`      | Distributed inference over the network                   |

🖥️ CPU Optimization Profiles

Wheels are built with specific CPU instruction sets enabled. Pick the one that matches your hardware:

| CPU Tag                              | Instructions                  | Best For                               |
|--------------------------------------|-------------------------------|----------------------------------------|
| `basic`                              | None                          | Any x86-64 CPU (maximum compatibility) |
| `avx`                                | AVX                           | Sandy Bridge+ (2011)                   |
| `avx_f16c`                           | AVX + F16C                    | Ivy Bridge+ (2012)                     |
| `avx2_fma_f16c`                      | AVX2 + FMA + F16C             | Haswell+ (2013); most common           |
| `avx2_fma_f16c_avxvnni`              | AVX2 + FMA + F16C + AVX-VNNI  | Alder Lake+ (2021)                     |
| `avx512_fma_f16c`                    | AVX-512 + FMA + F16C          | Skylake-X+ (2017)                      |
| `avx512_fma_f16c_vnni`               | + AVX512-VNNI                 | Cascade Lake+ (2019)                   |
| `avx512_fma_f16c_vnni_vbmi`          | + AVX512-VBMI                 | Ice Lake+ (2019)                       |
| `avx512_fma_f16c_vnni_vbmi_bf16_amx` | + BF16 + AMX                  | Sapphire Rapids+ (2023)                |

How to Pick the Right Wheel

Don't know your CPU? Start with `avx2_fma_f16c`: it works on any CPU from 2013 onwards (Intel Haswell, AMD Ryzen, and newer).

Want maximum compatibility? Use `basic`, which runs on any x86-64 CPU.

Have a server CPU? Check if it supports AVX-512:

grep -o 'avx[^ ]*\|fma\|f16c\|bmi2\|sse4_2' /proc/cpuinfo | sort -u
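If you prefer to automate that check, here is a minimal Python sketch that maps `/proc/cpuinfo` flags to one of the profile tags from the table above. `pick_profile` is a hypothetical helper (not part of this repo), the selection order is illustrative, and only the most common tiers are covered; the longer `avx512_*` tags follow the same pattern.

```python
# Sketch: choose a CPU profile tag by inspecting /proc/cpuinfo (Linux x86-64).
def pick_profile() -> str:
    flags: set[str] = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
    # Check the richest instruction sets first, fall back to basic.
    if {"avx512f", "fma", "f16c"} <= flags:
        return "avx512_fma_f16c"
    if {"avx2", "fma", "f16c"} <= flags:
        return "avx2_fma_f16c"
    if {"avx", "f16c"} <= flags:
        return "avx_f16c"
    if "avx" in flags:
        return "avx"
    return "basic"
```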

📦 Filename Format

All wheels follow the PEP 440 local version identifier standard:

llama_cpp_python-{version}+{backend}_{cpu_flags}-{python}-{python}-{platform}.whl

Examples:

llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+basic-cp310-cp310-manylinux_2_31_x86_64.whl

The local version label (+openblas_avx2_fma_f16c) encodes:

  • Backend: openblas, mkl, basic, vulkan, clblast, sycl, opencl, rpc
  • CPU flags (in order): avx, avx2, avx512, fma, f16c, vnni, vbmi, bf16, avxvnni, amx
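Because the backend always comes first in the label, the filename can be decoded mechanically. A small sketch (this parser is an illustration, not something the repo ships):

```python
# Sketch: decode backend and CPU flags from a wheel filename's
# PEP 440 local version label.
def parse_wheel(filename: str) -> dict:
    # llama_cpp_python-{version}+{label}-{py}-{abi}-{platform}.whl
    stem = filename.removesuffix(".whl")
    _dist, rest = stem.split("-", 1)
    version_label, py_tag, abi_tag, plat = rest.split("-", 3)
    version, _, label = version_label.partition("+")
    backend, *cpu_flags = label.split("_")  # backend tag is always first
    return {"version": version, "backend": backend,
            "cpu_flags": cpu_flags, "python": py_tag, "platform": plat}
```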

🚀 Quick Start

CPU (OpenBLAS + AVX2, recommended for most users)

sudo apt-get install libopenblas-dev

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl

GPU (Vulkan, works on any GPU vendor)

sudo apt-get install libvulkan1

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl

Basic (zero dependencies)

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic-cp311-cp311-manylinux_2_31_x86_64.whl

Example Usage

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",
    filename="*q4_k_m.gguf",
    n_ctx=4096,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python hello world"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])

🔧 Runtime Dependencies

| Backend  | Required Packages                                       |
|----------|---------------------------------------------------------|
| OpenBLAS | `libopenblas0` (runtime) or `libopenblas-dev` (build)   |
| MKL      | Intel oneAPI MKL                                        |
| Vulkan   | `libvulkan1`                                            |
| CLBlast  | `libclblast1`                                           |
| OpenCL   | `ocl-icd-libopencl1`                                    |
| Basic    | None                                                    |
| SYCL     | Intel oneAPI DPC++ runtime                              |
| RPC      | Network access to an RPC server                         |
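Before installing a wheel, you can probe whether its runtime library is already loadable. The sketch below is a hypothetical helper; the library names are assumptions derived from the Debian/Ubuntu package names in the table above, and MKL, SYCL, and RPC are skipped because they are not single shared libraries.

```python
# Sketch: check whether a backend's shared library is discoverable
# on this system before installing the matching wheel.
import ctypes.util

LIBS = {"openblas": "openblas", "vulkan": "vulkan",
        "clblast": "clblast", "opencl": "OpenCL"}

def backend_available(backend: str) -> bool:
    name = LIBS.get(backend)
    if name is None:
        # "basic" needs nothing; MKL/SYCL/RPC are out of scope here.
        return True
    return ctypes.util.find_library(name) is not None
```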

🏭 How These Wheels Are Built

These wheels are built by the Ultimate Llama Wheel Factory, a distributed build system running entirely on free Hugging Face Spaces:

| Component      | Link                            |
|----------------|---------------------------------|
| 🏭 Dispatcher  | wheel-factory-dispatcher        |
| ⚙️ Workers 1-4 | wheel-factory-worker-1 ... 4    |
| 🔍 Auditor     | wheel-factory-auditor           |

The factory uses explicit cmake flags matching llama.cpp's official CPU variant builds:

CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_AVX=OFF -DGGML_AVX512=OFF -DGGML_NATIVE=OFF"

Every flag is set explicitly (no cmake defaults) to ensure reproducible, deterministic builds.

❓ FAQ

Q: Which wheel should I use? For most people: openblas_avx2_fma_f16c with your Python version. It's fast, works on 90%+ of modern CPUs, and only needs libopenblas.

Q: Can I use these on Ubuntu / Debian / Fedora / Arch? Yes: manylinux_2_31 wheels work on any Linux distro with glibc 2.31 or newer (Ubuntu 20.04+, Debian 11+, Fedora 34+, Arch).
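To check the glibc baseline programmatically, a short sketch using the standard library (`glibc_ok` is an illustrative helper, not part of this repo):

```python
# Sketch: confirm the running libc meets the manylinux_2_31 baseline
# (glibc >= 2.31). Musl-based distros like Alpine fail this by design.
import platform

def glibc_ok(minimum=(2, 31)) -> bool:
    libc, version = platform.libc_ver()   # e.g. ("glibc", "2.35")
    if libc != "glibc" or not version:
        return False
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= minimum
```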

Q: What about Windows / macOS / CUDA wheels? This repo focuses on manylinux x86_64; wheels for other platforms and CUDA builds are not hosted here.

Q: These wheels don't work on Alpine Linux. Alpine uses musl, not glibc. These are manylinux (glibc) wheels. Build from source or use musllinux wheels.

Q: I get "illegal instruction" errors. You're using a wheel with CPU flags your processor doesn't support. Try basic (no SIMD) or check your CPU flags with:

grep -o 'avx[^ ]*\|fma\|f16c' /proc/cpuinfo | sort -u

Q: Can I contribute more wheels? Yes! The factory source code is open. See the Dispatcher and Worker Spaces linked above.

📄 License

MIT, the same license as llama-cpp-python and llama.cpp.

πŸ™ Credits
