Using Elephas Built-in Offline Models
Elephas comes with powerful built-in offline AI models, allowing you to chat and index files without an internet connection. Everything runs locally on your Mac: fast, private, and fully under your control.
Requirements
- Apple Silicon (M1 or later)
- macOS 14+
- Paid Elephas plan
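If you are unsure whether your Mac meets the first two requirements, you can check from Terminal with standard macOS commands (this is a quick sketch, not part of Elephas itself; `sw_vers` only exists on macOS):

```shell
# Apple Silicon Macs report "arm64"; Intel Macs report "x86_64"
arch="$(uname -m)"

# macOS version, e.g. "14.5"; fall back to 0 on non-macOS systems
ver="$(sw_vers -productVersion 2>/dev/null || echo 0)"
major="${ver%%.*}"

if [ "$arch" = "arm64" ] && [ "$major" -ge 14 ]; then
  echo "This Mac meets the offline-model requirements"
else
  echo "Offline models need Apple Silicon (M1 or later) and macOS 14+"
fi
```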
<aside>
💡
No token limits when using offline models
</aside>
Watch video:
https://www.youtube.com/embed/0_9gt13C33Q?si=YV5LqJ3oUF7DUPDj
Creating a Brain with Offline Models
The primary way to leverage Elephas offline models is through Brain creation. Here's how to set up a Brain using offline models:
- Open the Create Brain window.
- Click Compatibility & control.
- Select Indexed (Legacy).
- Under Indexing approach, choose an Elephas Offline model (e.g., multilingual-e5-large or multilingual-e5-small).
- Set Chat mode to Offline.

- Choose your models:
- Indexing Model → Select from the offline embedding models (e.g., multilingual-e5-large). multilingual-e5-large delivers better-quality results than the small variant.
- Default Chat Model → Select from the offline chat models (e.g., llama3.2:3b). llama3.2:3b offers good-quality results and uses around 2 GB of memory while running.
