
Selora AI Cloud Gateway


Roadmap: Selora AI Cloud Gateway Infrastructure

Summary

Simplify the Selora AI experience by providing a managed cloud gateway alongside local model support. This abstracts the complexity of LLM backend configuration, offering users a choice between high-performance cloud intelligence and private, local execution.

Configurations

  • Selora AI Cloud: Uses Selora’s managed infrastructure to provide access to top-tier models without requiring users to obtain or configure their own API keys (e.g. OpenAI or Anthropic).
  • Selora AI Local: Runs local models (via Ollama or future tailored Selora models) for maximum privacy and offline capability.
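The two configurations above could be surfaced to clients as a single mode switch. The sketch below illustrates one possible shape for that setting; all names, endpoints, and the default Ollama port are assumptions, since the actual Selora Connect settings schema is not yet defined.

```python
from dataclasses import dataclass

@dataclass
class SeloraAIConfig:
    """Hypothetical backend configuration for Selora AI (illustrative only)."""
    mode: str = "cloud"                                    # "cloud" or "local"
    cloud_endpoint: str = "https://ai.selora.example/v1"   # placeholder gateway URL
    local_endpoint: str = "http://localhost:11434"         # Ollama's default port

    def endpoint(self) -> str:
        """Resolve the active endpoint from the selected mode."""
        if self.mode == "cloud":
            return self.cloud_endpoint
        if self.mode == "local":
            return self.local_endpoint
        raise ValueError(f"unknown mode: {self.mode}")

# Usage: switching between cloud and local is a single-field change.
config = SeloraAIConfig(mode="local")
print(config.endpoint())  # http://localhost:11434
```

Keeping the switch to one field is what lets the Configuration UI stay a simple toggle rather than a full credentials form.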

Value

  • Onboarding: Removes the friction of obtaining and configuring external API keys for new users.
  • Performance: Cloud gateway ensures access to the most capable models for complex reasoning.
  • Privacy: Local option remains a first-class citizen for users who prefer to keep all data on-premises.
  • Abstraction: Selora handles the backend complexity, API key rotation, and model selection.

Scope

  • Cloud Gateway Infrastructure: Backend services to proxy and manage LLM requests securely.
  • Configuration UI: Simplified toggle in Selora Connect between “Cloud” and “Local” modes.
  • Model Provisioning: Automated setup for local models and seamless authentication for cloud services.
  • Fallback Logic: Intelligent switching or alerting if a preferred backend is unavailable.
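The fallback logic item above could be reduced to a small, testable policy: probe the preferred backend, and switch with an alert if it is unreachable. This is a minimal sketch under that assumption; the function and parameter names are hypothetical, and the health probe is injected so the policy itself needs no network access.

```python
from typing import Callable

def choose_backend(
    preferred: str,
    fallback: str,
    is_available: Callable[[str], bool],
) -> str:
    """Return the preferred backend if its health probe passes; otherwise
    alert the user and fall back. The probe is injected as a callable so the
    switching policy can be tested without real cloud or local endpoints."""
    if is_available(preferred):
        return preferred
    print(f"warning: backend '{preferred}' unreachable; switching to '{fallback}'")
    return fallback

# Usage with a stub probe that reports only the local backend as healthy:
chosen = choose_backend("cloud", "local", is_available=lambda b: b == "local")
print(chosen)  # local
```

Separating the probe from the switching decision also leaves room for the "alerting instead of switching" variant mentioned above: the caller can surface the warning without changing backends.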

Dependencies

Open questions

  • Pricing: How will the cloud gateway be tiered (Free vs Premium)?
  • Model Selection: Which specific models will be offered in the cloud vs local tiers?
  • Latency: What is the impact of the gateway proxy on response times compared to direct API access?