Tuesday 29 October 2024

LocalAI

The free, Open Source alternative to OpenAI, Claude, and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed inference.

localai.io

💡 Get help - ❓ FAQ 💭 Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 🖼️ Models 🚀 Roadmap 🥽 Demo 🌍 Explorer 🛫 Examples

LocalAI is the free, Open Source OpenAI alternative. It acts as a drop-in replacement REST API compatible with the OpenAI (as well as Elevenlabs, Anthropic, and other) API specifications for local AI inferencing. It lets you run LLMs and generate images, audio, and more, locally or on-prem on consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.

Run the installer script:

curl https://localai.io/install.sh | sh

Or run with docker:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# Alternative images:
# - if you have an Nvidia GPU:
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
# - without preconfigured models
# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
# - without preconfigured models for Nvidia GPUs
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12 
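
Once the container is running, the API listens on localhost:8080. As a quick sanity check (a minimal sketch, assuming the default port mapping from the commands above), you can list the models the instance can serve through the OpenAI-compatible endpoint:

# List the models available on the running instance
curl http://localhost:8080/v1/models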

To load models:

# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
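
Once a model is loaded, any OpenAI-compatible client can talk to it. Below is a minimal sketch using curl, assuming the server is running on localhost:8080; the model name is taken from the first command above and is an assumption here, so check the /v1/models endpoint for the exact name on your install:

# Send a chat request to the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct",
    "messages": [{"role": "user", "content": "How are you doing?"}]
  }'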

💻 Getting started

📰 Latest project news

  • Aug 2024: 🆕 FLUX-1, P2P Explorer
  • July 2024: 🔥🔥 🆕 P2P Dashboard, LocalAI Federated mode and AI Swarms: #2723
  • June 2024: 🆕 You can now browse the model gallery without LocalAI! Check out https://models.localai.io
  • June 2024: Support for models from OCI registries: #2628
  • May 2024: 🔥🔥 Decentralized P2P llama.cpp: #2343 (peer2peer llama.cpp!) 👉 Docs https://localai.io/features/distribute/
  • May 2024: 🔥🔥 Openvoice: #2334
  • May 2024: 🆕 Function calls without grammars and mixed mode: #2328
  • May 2024: 🔥🔥 Distributed inferencing: #2324
  • May 2024: Chat, TTS, and Image generation in the WebUI: #2222
  • April 2024: Reranker API: #2121

Roadmap items: List of issues

🔥🔥 Hot topics (looking for help):

  • Multimodal with vLLM and Video understanding: #3729
  • Realtime API #3714
  • 🔥🔥 Distributed, P2P Global community pools: #3113
  • WebUI improvements: #2156
  • Backends v2: #1126
  • Improving UX v2: #1373
  • Assistant API: #1273
  • Moderation endpoint: #999
  • Vulkan: #1647
  • Anthropic API: #1808

If you want to help and contribute, issues up for grabs: https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22up+for+grabs%22

💻 Usage

Check out the Getting started section in our documentation.

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries:

Other:

🔗 Resources

from https://github.com/mudler/LocalAI

 
