LocalAI is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run LLMs, generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.
Tip
Security considerations
If you are exposing LocalAI remotely, make sure you protect the API endpoints adequately, for example by placing them behind an authenticating reverse proxy or by restricting incoming traffic. Alternatively, run LocalAI with API_KEY set to gate access with an API key. Note that an API key grants full access to all features (there is no role separation), so treat it like an admin credential.
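For example, a minimal sketch of gating access with an API key when running the Docker image (the key value here is a placeholder; clients typically pass the key as a Bearer token, OpenAI-style):
# Require an API key on every request (replace the example value)
docker run -p 8080:8080 -e API_KEY=my-secret-key --name local-ai -ti localai/localai:latest

# Clients then authenticate with the same key
curl http://localhost:8080/v1/models -H "Authorization: Bearer my-secret-key"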
Once installed, start LocalAI. For Docker installations:
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest
The API will be available at http://localhost:8080.
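To quickly check that the server is up, you can query the readiness endpoint (the same one used by the container health checks shown later in this document):
# Returns a successful response once the API is ready to serve requests
curl http://localhost:8080/readyz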
Downloading models on start
When starting LocalAI (either via Docker or via the CLI), you can pass a list of models as arguments; they will be installed automatically before the API starts. For example:
local-ai run llama-3.2-1b-instruct:q4_k_m
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
local-ai run ollama://gemma:2b
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
local-ai run oci://localai/phi-2:latest
Tip
Automatic Backend Detection: When you install models from the gallery or YAML files, LocalAI automatically detects your system’s GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see GPU Acceleration.
For a full list of options, you can run LocalAI with --help or refer to the Linux Installation guide for installer configuration options.
Using LocalAI and the full stack with LocalAGI
LocalAI is part of the Local family stack, along with LocalAGI and LocalRecall.
LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility, which encompasses and uses the whole software stack. It provides a complete drop-in replacement for OpenAI’s Responses APIs with advanced agentic capabilities, working entirely locally on consumer-grade hardware (CPU and GPU).
Quick Start
git clone https://github.com/mudler/LocalAGI
cd LocalAGI
# CPU setup
docker compose up

# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU setup
docker compose -f docker-compose.intel.yaml up

# Start with a specific model
MODEL_NAME=gemma-3-12b-it docker compose up

# Start with a text, multimodal, and image model on NVIDIA GPUs
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-4_5 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
Key Features
Privacy-Focused: All processing happens locally, ensuring your data never leaves your machine
Flexible Deployment: Supports CPU, NVIDIA GPU, and Intel GPU configurations
Multiple Model Support: Compatible with various models from Hugging Face and other sources
Web Interface: User-friendly chat interface for interacting with AI agents
Advanced Capabilities: Supports multimodal models, image generation, and more
Docker Integration: Easy deployment using Docker Compose
Environment Variables
You can customize your LocalAGI setup using the following environment variables:
MODEL_NAME: Specify the model to use (e.g., gemma-3-12b-it)
MULTIMODAL_MODEL: Specify the multimodal model to use (e.g., minicpm-v-4_5)
IMAGE_MODEL: Specify the image generation model to use (e.g., flux.1-dev-ggml)
There is much more to explore with LocalAI! You can run any model from Hugging Face, perform video generation, and also voice cloning. For a comprehensive overview, check out the features section.
This section covers everything you need to know about installing and configuring models in LocalAI. You’ll learn multiple methods to get models running.
Prerequisites
LocalAI installed and running (see Quickstart if you haven’t set it up yet)
Basic understanding of command line usage
Method 1: Using the Model Gallery (Easiest)
The Model Gallery is the simplest way to install models. It provides pre-configured models ready to use.
# List available models
local-ai models list

# Install a specific model
local-ai models install llama-3.2-1b-instruct:q4_k_m

# Start LocalAI with a model from the gallery
local-ai run llama-3.2-1b-instruct:q4_k_m
To run models available in the LocalAI gallery, you can use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:
local-ai run hermes-2-theta-llama-3-8b
To install only the model, use:
local-ai models install hermes-2-theta-llama-3-8b
Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to set up your own gallery, see the Gallery Documentation.
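As a sketch, assuming the GALLERIES environment variable (which accepts a JSON list of gallery names and URLs), pointing LocalAI at a custom gallery might look like this; the URL is a placeholder:
# Point LocalAI at a custom gallery index (URL is a placeholder)
GALLERIES='[{"name":"my-gallery","url":"https://example.com/index.yaml"}]' local-ai run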
Browse Online
Visit models.localai.io to browse all available models in your browser.
Method 1.5: Import Models via WebUI
The WebUI provides a powerful model import interface that supports both simple and advanced configuration:
Simple Import Mode
Open the LocalAI WebUI at http://localhost:8080
Click “Import Model”
Enter the model URI (e.g., https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-GGUF)
Optionally configure preferences:
Backend selection
Model name
Description
Quantizations
Embeddings support
Custom preferences
Click “Import Model” to start the import process
Advanced Import Mode
For full control over model configuration:
In the WebUI, click “Import Model”
Toggle to “Advanced Mode”
Edit the YAML configuration directly in the code editor
Use the “Validate” button to check your configuration
Click “Create” or “Update” to save
The advanced editor includes:
Syntax highlighting
YAML validation
Format and copy tools
Full configuration options
This is especially useful for:
Custom model configurations
Fine-tuning model parameters
Setting up complex model setups
Editing existing model configurations
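As a rough sketch, a minimal configuration you could paste into the advanced editor might look like the following (the model name and file reference are placeholders; see the Customize Models section for the full syntax):
name: my-model
context_size: 2048
parameters:
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 0.2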
Method 2: Installing from Hugging Face
LocalAI can directly install models from Hugging Face:
# Install and run a model from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF
The format is: huggingface://<repository>/<model-file> (the <model-file> part is optional)
Examples
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
Method 3: Installing from OCI Registries
Ollama Registry
local-ai run ollama://gemma:2b
Standard OCI Registry
local-ai run oci://localai/phi-2:latest
Run Models via URI
To run models via URI, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:
From Hugging Face: huggingface://<repository>/<model-file>
From OCIs: oci://container_image:tag, ollama://model_id:tag
From configuration files: https://gist.githubusercontent.com/.../phi-2.yaml
Configuration files can be used to customize the model defaults and settings. For advanced configurations, refer to the Customize Models section.
Examples
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
local-ai run ollama://gemma:2b
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
local-ai run oci://localai/phi-2:latest
Method 4: Manual Installation
For full control, you can manually download and configure models.
If you are running on Apple Silicon (ARM), running LocalAI in Docker is not recommended because of emulation overhead. Follow the build instructions and use Metal acceleration for full GPU support.
If you are running on an Intel-based Mac (x86_64), you can use Docker; building from source brings no additional benefit.
# Clone the LocalAI repository
git clone https://github.com/go-skynet/LocalAI
cd LocalAI

# Copy your model file into the models directory
cp your-model.gguf models/

# Start LocalAI with Docker Compose
docker compose up -d --pull always

# Check that the model is available
curl http://localhost:8080/v1/models

# Run a test completion
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
  "model": "your-model.gguf",
  "prompt": "A long time ago in a galaxy far, far away",
  "temperature": 0.7
}'
Tip
Other Docker Images:
For other Docker images, please refer to the table in Getting Started.
Note: If you are on Windows, ensure the project is on the Linux filesystem to avoid slow model loading. For more information, see the Microsoft Docs.
# Via API
curl http://localhost:8080/v1/models

# Via CLI
local-ai models list
Remove Models
Simply delete the model file and configuration from your models directory:
rm models/model-name.gguf
rm models/model-name.yaml # if exists
Troubleshooting
Model Not Loading
Check backend: Ensure the required backend is installed
local-ai backends list
local-ai backends install llama-cpp # if needed
Check logs: Enable debug mode
DEBUG=true local-ai
Verify file: Ensure the model file is not corrupted
Out of Memory
Use a smaller quantization (Q4_K_S or Q2_K)
Reduce context_size in the model configuration (see the sketch after this list)
Close other applications to free RAM
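For example, a minimal sketch of lowering context_size in a model’s YAML configuration (model name and file are placeholders):
name: my-model
# A smaller context window reduces memory usage
context_size: 1024
parameters:
  model: your-model.gguf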
Wrong Backend
Check the Compatibility Table to ensure you’re using the correct backend for your model.
Best Practices
Start small: Begin with smaller models to test your setup
Use quantized models: Q4_K_M is a good balance for most use cases
Organize models: Keep your models directory organized
Backup configurations: Save your YAML configurations
Monitor resources: Watch RAM and disk usage
Try it out
Once LocalAI is installed, you can start it (either by using Docker, the CLI, or the systemd service).
By default the LocalAI WebUI should be accessible at http://localhost:8080. You can also use 3rd-party projects to interact with LocalAI as you would use OpenAI (see also Integrations).
After installation, install new models by navigating the model gallery, or by using the local-ai CLI.
Tip
To install models with the WebUI, see the Models section.
With the CLI you can list the models with local-ai models list and install them with local-ai models install <model-name>.
You can also run models manually by copying files into the models directory.
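For example, assuming LocalAI was started with ./models mounted as the models directory, copying a GGUF file into it is enough (the file name is a placeholder):
# Drop a downloaded model into the models directory
cp ~/Downloads/your-model.gguf models/
# It can then be used by referencing its file name, e.g. "your-model.gguf"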
You can test out the API endpoints using curl; a few examples are listed below. The models referenced here (gpt-4, gpt-4-vision-preview, tts-1, whisper-1) are the default models that come with the AIO images, but you can also use any other model you have installed.
curl http://localhost:8080/v1/audio/speech \
-H "Content-Type: application/json"\
-d '{
"model": "tts-1",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms (see the OpenAI Embeddings documentation).
curl http://localhost:8080/embeddings \
-X POST -H "Content-Type: application/json" \
-d '{
"input": "Your text string goes here",
"model": "text-embedding-ada-002"
}'
Tip
Don’t use the model file name as model in the request unless you want to handle the prompt template yourself.
Use model names as you would with OpenAI, as in the examples below: for instance gpt-4-vision-preview or gpt-4.
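For instance, a chat completion request against the gpt-4 alias shipped with the AIO images might look like this:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "How are you doing?"}],
    "temperature": 0.1
  }'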
Customizing the Model
To customize the prompt template or the default settings of the model, a configuration file is used. This file must adhere to the LocalAI YAML configuration standards; for comprehensive syntax details, refer to the advanced documentation. The configuration file can live either remotely (for example in a GitHub Gist, referenced by URL) or on the local filesystem.
LocalAI can be started using either its container image or its binary, passing URLs of model configuration files or a shorthand format (like huggingface:// or github://) that is expanded into a complete URL.
The configuration can also be set via an environment variable, for instance by pointing the MODELS variable at the configuration file URL. An example configuration file is shown below:
name: phi-2
context_size: 2048
f16: true
threads: 11
gpu_layers: 90
mmap: true
parameters:
  # Reference any HF model or a local file here
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
template:
  chat: &template |
    Instruct: {{.Input}}
    Output:
  # Modify the prompt template here ^^^ as per your requirements
  completion: *template
Then, launch LocalAI using your gist’s URL:
## Important! Substitute with your gist's URL!
docker run -p 8080:8080 localai/localai:v3.7.0 https://gist.githubusercontent.com/xxxx/phi-2.yaml
Next Steps
Visit the advanced section for more insights on prompt templates and configuration files.
Building LocalAI from source is an installation method that allows you to compile LocalAI yourself, which is useful for custom configurations, development, or when you need specific build options.
For complete build instructions, see the Build from Source documentation in the Installation section.
Run with container images
LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.
All-in-One images come with a pre-configured set of models and backends, while standard images do not have any model pre-configured or installed.
For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don’t have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.
Tip
Available Images Types:
Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the llama.cpp, stablediffusion-ncn, or rwkv backends; if you are not sure which one to use, do not use these images.
Images containing the aio tag are all-in-one images with all the features enabled, and come with an opinionated set of default configurations.
Prerequisites
Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman; for installation instructions, refer to their respective documentation.
Hardware Requirements: The hardware requirements for LocalAI vary based on the model size and quantization method used. For performance benchmarks with different backends, such as llama.cpp, visit this link. The rwkv backend is noted for its lower resource consumption.
Standard container images
Standard container images do not have pre-installed models. Use these if you want to configure models manually.
These images are compatible with Nvidia ARM64 devices, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Xavier. For more information, see the Nvidia L4T guide.
All-In-One images are images that come pre-configured with a set of models and backends to fully leverage almost all the LocalAI featureset. These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and require no configuration. Models configuration can be found here separated by size.
In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models. See the table below:
Category | Model name | Real model (CPU) | Real model (GPU)
Text Generation | gpt-4 | phi-2 | hermes-2-pro-mistral
Multimodal Vision | gpt-4-vision-preview | bakllava | llava-1.6-mistral
Image Generation | stablediffusion | stablediffusion | dreamshaper-8
Speech to Text | whisper-1 | whisper with whisper-base model | <= same
Text to Speech | tts-1 | en-us-amy-low.onnx from rhasspy/piper | <= same
Embeddings | text-embedding-ada-002 | all-MiniLM-L6-v2 in Q4 | all-MiniLM-L6-v2
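For instance, once an AIO image is running, a transcription request against the whisper-1 alias might look like this (the audio file path is a placeholder):
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/audio.wav" \
  -F model="whisper-1"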
Usage
Select the image (CPU or GPU) and start the container with Docker:
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
LocalAI will automatically download all the required models, and the API will be available at localhost:8080.
Or with a docker-compose file:
version: "3.9"services:
api:
image: localai/localai:latest-aio-cpu# For a specific version:# image: localai/localai:v3.7.0-aio-cpu# For Nvidia GPUs decomment one of the following (cuda11 or cuda12):# image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-11# image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-12# image: localai/localai:latest-aio-gpu-nvidia-cuda-11# image: localai/localai:latest-aio-gpu-nvidia-cuda-12healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
interval: 1mtimeout: 20mretries: 5ports:
- 8080:8080environment:
- DEBUG=true# ...volumes:
- ./models:/models:cached# decomment the following piece if running with Nvidia GPUs# deploy:# resources:# reservations:# devices:# - driver: nvidia# count: 1# capabilities: [gpu]
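Once the container reports healthy, you can list the models that were pre-configured by the AIO image:
curl http://localhost:8080/v1/models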
Tip
Models caching: The AIO image will download the needed models on the first run if not already present and store those in /models inside the container. The AIO models will be automatically updated with new versions of AIO images.
You can change the directory inside the container by specifying a MODELS_PATH environment variable (or --models-path).
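As a sketch, overriding the models directory inside the container might look like this (paths are placeholders):
# Keep models under /data/models inside the container, backed by a host directory
docker run -p 8080:8080 --name local-ai -ti \
  -e MODELS_PATH=/data/models \
  -v $PWD/models:/data/models \
  localai/localai:latest-aio-cpu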
If you want to use a named model or a local directory, you can mount it as a volume to /models:
docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu
The AIO images inherit the same environment variables as the base images and the environment of LocalAI (which you can inspect by calling --help). However, they support additional environment variables that are available only in the container image:
Variable | Default | Description
PROFILE | Auto-detected | The size of the model to use. Available: cpu, gpu-8g
MODELS | Auto-detected | A list of model YAML configuration file URIs/URLs (see also running models)
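As a sketch, forcing a profile and passing a custom model configuration to an AIO image might look like this (the gist URL is the same placeholder used earlier in this document):
# Force the CPU profile and load an additional model configuration
docker run -p 8080:8080 --name local-ai -ti \
  -e PROFILE=cpu \
  -e MODELS="https://gist.githubusercontent.com/.../phi-2.yaml" \
  localai/localai:latest-aio-cpu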