Chapter 2

Installation

LocalAI can be installed in multiple ways depending on your platform and preferences.

Tip

Recommended: Docker Installation

Docker is the recommended installation method for most users as it works across all platforms (Linux, macOS, Windows) and provides the easiest setup experience. It’s the fastest way to get started with LocalAI.

Installation Methods

Choose the installation method that best suits your needs:

  1. Docker (Recommended) - Works on all platforms, easiest setup
  2. macOS - Download and install the DMG application
  3. Linux - Install on Linux using the one-liner script or binaries
  4. Kubernetes - Deploy LocalAI on Kubernetes clusters
  5. Build from Source - Build LocalAI from source code

Quick Start

Recommended: Docker (works on all platforms)

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest

This will start LocalAI. The API will be available at http://localhost:8080. For images with pre-configured models, see All-in-One images.
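
Once the container is up, you can check that the API is responding (assuming the default port mapping shown above):

curl http://localhost:8080/v1/models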

For other platforms:

  • macOS: Download the DMG
  • Linux: Use the curl https://localai.io/install.sh | sh one-liner

For detailed instructions, see the Docker installation guide.

Subsections of Installation

Docker Installation

Tip

Recommended Installation Method

Docker is the recommended way to install LocalAI as it works across all platforms (Linux, macOS, Windows) and provides the easiest setup experience.

LocalAI provides Docker images that work with Docker, Podman, and other container engines. These images are available on Docker Hub and Quay.io.

Prerequisites

Before you begin, ensure you have Docker or Podman installed on your system.

Quick Start

The fastest way to get started is with the CPU image:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest

This will:

  • Start LocalAI (you’ll need to install models separately)
  • Make the API available at http://localhost:8080

Tip

Docker Run vs Docker Start

  • docker run creates and starts a new container. If a container with the same name already exists, this command will fail.
  • docker start starts an existing container that was previously created with docker run.

If you’ve already run LocalAI before and want to start it again, use: docker start -i local-ai
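
As a quick sketch, a typical container lifecycle with the commands above looks like this:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest   # first run: creates and starts the container
docker stop local-ai                                                  # stop it when you are done
docker start -i local-ai                                              # later: start the existing container again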

Image Types

LocalAI provides several image types to suit different needs:

Standard Images

Standard images don’t include pre-configured models. Use these if you want to configure models manually.

CPU Image

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

GPU Images

NVIDIA CUDA 12:

docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

NVIDIA CUDA 11:

docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11

AMD GPU (ROCm):

docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas

Intel GPU:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel

Vulkan:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan

NVIDIA Jetson (L4T ARM64):

docker run -ti --name local-ai -p 8080:8080 --runtime nvidia --gpus all localai/localai:latest-nvidia-l4t-arm64

All-in-One (AIO) Images

Recommended for beginners - These images come pre-configured with models and backends, ready to use immediately.

CPU Image

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

GPU Images

NVIDIA CUDA 12:

docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12

NVIDIA CUDA 11:

docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11

AMD GPU (ROCm):

docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas

Intel GPU:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel

Using Docker Compose

For a more manageable setup, especially with persistent volumes, use Docker Compose:

version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For GPU support, use one of:
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-hipblas
    # image: localai/localai:latest-aio-gpu-intel
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
    volumes:
      - ./models:/models:cached
    # For NVIDIA GPUs, uncomment:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

Save this as docker-compose.yml and run:

docker compose up -d
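
To check the service status and follow its logs (the service is named api in the example above):

docker compose ps
docker compose logs -f api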

Persistent Storage

To persist models and configurations, mount a volume:

docker run -ti --name local-ai -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:latest-aio-cpu

Or use a named volume:

docker volume create localai-models
docker run -ti --name local-ai -p 8080:8080 \
  -v localai-models:/models \
  localai/localai:latest-aio-cpu

What’s Included in AIO Images

All-in-One images come pre-configured with:

  • Text Generation: LLM models for chat and completion
  • Image Generation: Stable Diffusion models
  • Text to Speech: TTS models
  • Speech to Text: Whisper models
  • Embeddings: Vector embedding models
  • Function Calling: Support for OpenAI-compatible function calling

The AIO images use OpenAI-compatible model names (like gpt-4, gpt-4-vision-preview) but are backed by open-source models. See the container images documentation for the complete mapping.
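
For example, a chat completion request against an AIO image can use the familiar OpenAI-style model name; which open-source model actually answers depends on the image, and the prompt below is only illustrative:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt-4",
     "messages": [{"role": "user", "content": "How are you?"}]
   }'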

Next Steps

After installation:

  1. Access the WebUI at http://localhost:8080
  2. Check available models: curl http://localhost:8080/v1/models
  3. Install additional models
  4. Try out examples

Advanced Configuration

For detailed information about:

  • All available image tags and versions
  • Advanced Docker configuration options
  • Custom image builds
  • Backend management

See the Container Images documentation.

Troubleshooting

Container won’t start

  • Check Docker is running: docker ps
  • Check port 8080 is available: netstat -an | grep 8080 (Linux/Mac)
  • View logs: docker logs local-ai

GPU not detected

  • Ensure Docker has GPU access: docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
  • For NVIDIA: Install NVIDIA Container Toolkit
  • For AMD: Ensure devices are accessible: ls -la /dev/kfd /dev/dri

Models not downloading

  • Check internet connection
  • Verify disk space: df -h
  • Check Docker logs for errors: docker logs local-ai

macOS Installation

The easiest way to install LocalAI on macOS is using the DMG application.

Download

Download the latest DMG from GitHub releases:

Download LocalAI for macOS

Installation Steps

  1. Download the LocalAI.dmg file from the link above
  2. Open the downloaded DMG file
  3. Drag the LocalAI application to your Applications folder
  4. Launch LocalAI from your Applications folder

Known Issues

Note: The DMGs are not signed by Apple and may show as quarantined.

Workaround: See this issue for details on how to bypass the quarantine.

Fix tracking: The signing issue is being tracked in this issue.

Next Steps

After installing LocalAI, launch it from your Applications folder; the WebUI will then be available at http://localhost:8080.

Linux Installation

The fastest way to install LocalAI on Linux is with the installation script:

curl https://localai.io/install.sh | sh

This script will:

  • Detect your system architecture
  • Download the appropriate LocalAI binary
  • Set up the necessary configuration
  • Start LocalAI automatically

Installer Configuration Options

The installer can be configured using environment variables:

curl https://localai.io/install.sh | VAR=value sh

Environment Variables

  • DOCKER_INSTALL: Set to "true" to enable the installation of Docker images
  • USE_AIO: Set to "true" to use the all-in-one LocalAI Docker image
  • USE_VULKAN: Set to "true" to use Vulkan GPU support
  • API_KEY: Specify an API key for accessing LocalAI, if required
  • PORT: Specifies the port on which LocalAI will run (default is 8080)
  • THREADS: Number of processor threads the application should use. Defaults to the number of logical cores minus one
  • VERSION: Specifies the version of LocalAI to install. Defaults to the latest available version
  • MODELS_PATH: Directory path where LocalAI models are stored (default is /usr/share/local-ai/models)
  • P2P_TOKEN: Token to use for the federation or for starting workers. See the distributed inferencing documentation
  • WORKER: Set to "true" to make the instance a worker (a P2P token is required)
  • FEDERATED: Set to "true" to share the instance with the federation (a P2P token is required)
  • FEDERATED_SERVER: Set to "true" to run the instance as a federation server which forwards requests to the federation (a P2P token is required)
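
For example, to install LocalAI on a custom port with a custom models directory and a fixed number of threads (the values shown are only illustrative):

curl https://localai.io/install.sh | PORT=9090 THREADS=4 MODELS_PATH=/opt/local-ai/models sh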

Image Selection

The installer will automatically detect your GPU and select the appropriate image. By default, it uses the standard images without extra Python dependencies. You can customize the image selection:

  • USE_AIO=true: Use all-in-one images that include all dependencies
  • USE_VULKAN=true: Use Vulkan GPU support instead of vendor-specific GPU support

Uninstallation

To uninstall LocalAI installed via the script:

curl https://localai.io/install.sh | sh -s -- --uninstall

Manual Installation

Download Binary

You can manually download the appropriate binary for your system from the releases page:

  1. Go to GitHub Releases
  2. Download the binary for your architecture (amd64, arm64, etc.)
  3. Make it executable:
chmod +x local-ai-*
  4. Run LocalAI:
./local-ai-*
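
As an illustrative sketch for an x86_64 Linux machine (the download URL and asset name are examples; pick the binary that matches your system from the releases page):

wget https://github.com/go-skynet/LocalAI/releases/latest/download/local-ai-Linux-x86_64   # asset name is illustrative
chmod +x local-ai-Linux-x86_64
./local-ai-Linux-x86_64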

System Requirements

Hardware requirements vary based on:

  • Model size
  • Quantization method
  • Backend used

For performance benchmarks with different backends like llama.cpp, visit this link.

Configuration

After installation, you can:

  • Access the WebUI at http://localhost:8080
  • Configure models in the models directory
  • Customize settings via environment variables or config files
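
As a minimal sketch, the same flags used in the build example later in this guide can be passed on the command line (the models path is illustrative):

./local-ai --models-path=./models/ --debug=true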

Run with Kubernetes

For installing LocalAI in Kubernetes, the deployment file from the examples can be used and customized as preferred:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment.yaml

For Nvidia GPUs:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment-nvidia.yaml

Alternatively, the helm chart can be used as well:

helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm show values go-skynet/local-ai > values.yaml


helm install local-ai go-skynet/local-ai -f values.yaml
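
Once the resources are applied, you can check the rollout and reach the API from your workstation. This is a minimal sketch, assuming the manifest or chart creates a service named local-ai in the current namespace (adjust names to match your deployment):

kubectl get pods                                   # wait until the pod is Running
kubectl port-forward service/local-ai 8080:8080    # service name is an assumption; check with kubectl get svc
curl http://localhost:8080/v1/models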

Build LocalAI

Build

LocalAI can be built as a container image or as a single, portable binary. Note that some model architectures might require Python libraries, which are not included in the binary.

LocalAI’s extensible architecture allows you to add your own backends, which can be written in any language. Because of this, the container images also contain the Python dependencies needed to run all the available backends (for example, backends like Diffusers, which generate images and videos from text).

This section contains instructions on how to build LocalAI from source.

Build LocalAI locally

Requirements

In order to build LocalAI locally, you need the following requirements:

  • Golang >= 1.21
  • GCC
  • GRPC

To install the dependencies, follow the instructions for your platform below.

On macOS, install Xcode from the App Store, then:

brew install go protobuf protoc-gen-go protoc-gen-go-grpc wget

On Debian/Ubuntu:

apt install golang make protobuf-compiler-grpc

After you have Golang installed and working, you can install the required binaries for compiling the Go protobuf components via the following commands:

go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@1958fcbe2ca8bd93af633f11e97d44e567e945af
make build

Build

To build LocalAI with make:

git clone https://github.com/go-skynet/LocalAI
cd LocalAI
make build

This should produce the local-ai binary.

Container image

Requirements:

  • Docker or Podman, or another container engine

To build the LocalAI container image locally, you can use Docker, for example:

docker build -t localai .
docker run localai
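
Note that the run command above does not publish any ports; to reach the API from the host, map the port explicitly:

docker run -p 8080:8080 localai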

Example: Build on mac

Building on Mac (M1, M2 or M3) works, but you may need to install some prerequisites using brew.

The following has been tested by one Mac user and found to work. Note that this doesn’t use Docker to run the server:

Install Xcode from the App Store (needed for metalkit)

brew install abseil cmake go grpc protobuf wget protoc-gen-go protoc-gen-go-grpc

git clone https://github.com/go-skynet/LocalAI.git

cd LocalAI

make build

wget https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q2_K.gguf -O models/phi-2.Q2_K

cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/phi-2.Q2_K.tmpl

./local-ai backends install llama-cpp

./local-ai --models-path=./models/ --debug=true

curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "phi-2.Q2_K",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

Troubleshooting mac

  • If you encounter errors regarding a missing utility metal, install Xcode from the App Store.

  • After installing Xcode, if you receive the error 'xcrun: error: unable to find utility "metal", not a developer tool or in PATH', you may have installed the Xcode command line tools before installing Xcode itself; the standalone tools point to an incomplete SDK. Check and switch the active developer directory:

xcode-select --print-path

sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer

  • If completions are slow, ensure that gpu-layers in your model yaml matches the number of layers from the model in use (or simply use a high number such as 256).

  • If you get a compile error: error: only virtual member functions can be marked 'final', reinstall all the necessary brew packages, clean the build, and try again.

brew reinstall go grpc protobuf wget

make clean

make build

Build backends

LocalAI has several backends available for installation in the backend gallery. The backends can also be built from source. Since backends vary in language and dependencies, this documentation provides generic guidance for a few of them, which can be applied, with slight modifications, to the others.

Manually

Typically each backend includes a Makefile that allows you to package it.

In the LocalAI repository, for instance, you can build bark-cpp by running:

git clone https://github.com/go-skynet/LocalAI.git

make -C LocalAI/backend/go/bark-cpp build package

Python-based backends such as vllm follow the same pattern:

make -C LocalAI/backend/python/vllm

With Docker

Building with Docker is simpler, as it abstracts away the requirements and focuses on producing the final OCI images that are available in the gallery. This also allows you, for instance, to build a backend locally and install it with LocalAI. Refer to Backends for general guidance on how to install and develop backends.

In the LocalAI repository, you can build bark-cpp by doing:

git clone https://github.com/go-skynet/LocalAI.git

make docker-build-bark-cpp

Note that make is used only for convenience; in reality it just runs a plain docker command such as:

docker build --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg BASE_IMAGE=$(BASE_IMAGE) -t local-ai-backend:bark-cpp -f LocalAI/backend/Dockerfile.golang --build-arg BACKEND=bark-cpp .               

Note:

  • BUILD_TYPE can be one of: cublas, hipblas, sycl_f16, sycl_f32, or metal.
  • BASE_IMAGE defaults to ubuntu:22.04 (on which it is tested); for Intel/SYCL builds, use quay.io/go-skynet/intel-oneapi-base:latest.
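
For example, to build the same backend image for CUDA you can pass the variables on the make command line; this is a sketch, assuming the Makefile forwards BUILD_TYPE to docker build as shown above:

make BUILD_TYPE=cublas docker-build-bark-cpp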