Selora AI Local
A local LLM backend for Selora AI, running entirely on-device for maximum privacy and performance.
Summary
Develop and deploy a local LLM backend for Selora AI, enabling intelligent smart home control and assistance without any data leaving the home network. This involves training and optimizing lightweight models tailored specifically for Home Assistant and the SeloraBox hardware.
Value
- Privacy: No voice or configuration data is sent to the cloud; all processing happens locally.
- Reliability: Selora AI continues to function even if the internet connection is lost.
- Performance: Reduced latency compared to cloud-based LLMs, providing a snappier user experience.
- Cost: Eliminates recurring cloud LLM API costs for the user.
Scope
- Model Training: Develop and fine-tune lightweight LLMs optimized for Home Assistant entities and services.
- On-Device Inference: Implement a high-performance inference engine capable of running on SeloraBox hardware.
- Integration: Seamless switching between Selora AI Cloud and Selora AI Local backends.
- Privacy Controls: Clear visibility and control over data processing and storage.
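As a rough illustration of the integration point above, the sketch below shows how backend selection between Selora AI Cloud and Selora AI Local might work, including the reliability goal of falling back to the local backend when the internet connection is lost. The endpoint URLs, setting names, and function are all hypothetical; the real integration would read Home Assistant configuration rather than a plain dict.

```python
# Hypothetical endpoints; the real URLs would come from the
# Selora AI integration's configuration.
CLOUD_URL = "https://cloud.selora.example/v1/chat"
LOCAL_URL = "http://seloradevice.local:8080/v1/chat"


def select_backend(settings: dict, internet_up: bool) -> str:
    """Return the chat endpoint to use.

    Prefers the user's configured backend, but falls back to the
    local backend when the internet connection is unavailable, so
    Selora AI keeps working offline.
    """
    if settings.get("backend") == "local":
        return LOCAL_URL
    # Cloud backend configured: use it only while online.
    return CLOUD_URL if internet_up else LOCAL_URL
```

The key design choice sketched here is that fallback is one-directional: an explicitly local configuration never silently routes requests to the cloud, which preserves the privacy guarantee.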
Target customers
- Privacy-conscious homeowners.
- Users with limited or unreliable internet connectivity.
- Power users seeking the lowest possible latency for AI interactions.
Open questions
- Hardware requirements for optimal local inference performance.
- Balancing model size and capability for complex reasoning tasks.
- Strategy for local model updates and improvements.
Related
- Selora AI Cloud Gateway: GitLab epic #49
- Selora Smart Agent: GitLab epic #5
- Selora AI: Suggest Integrations and Devices: GitLab epic #51
Last modified April 21, 2026: Selora AI Local (4fc69e6)