# Server Setup
The Soliplex server is a FastAPI-based backend that forwards requests to OpenAI and provides RAG functionality.
## Prerequisites

- Python 3.13+
- Access to an LLM:
  - OpenAI - an API key is required to use OpenAI
  - Ollama (https://ollama.com/)
- Logfire (optional): a token from Logfire allows for visibility into the application.
## Installation

- Clone the repository:
- Set up a Python3 virtual environment:
- Install `soliplex` and its dependencies:
- Set up environment variables: an environment file (`.env`) can be used to configure secrets, e.g.:
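As a sketch, the steps above might look like the following. The repository URL and the editable-install command are assumptions (they are left commented out; substitute the real ones from the project):

```shell
# Clone the repository (URL is a placeholder -- use the real one)
# git clone https://github.com/your-org/soliplex.git
# cd soliplex

# Set up a Python3 virtual environment
python3 -m venv .venv
. .venv/bin/activate

# Install soliplex and its dependencies (assuming a standard editable install)
# pip install -e .

# Configure secrets in a .env file (placeholder value)
printf 'OPENAI_API_KEY=your-api-key\n' > .env
```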
## Running the example

The example configuration provides an overview of how a soliplex application is assembled. It contains four top-level installation configurations:

- `example/minimal.yaml` is a minimal example using Ollama: it requires no secrets.
- `example/installation.yaml` is a more fleshed-out example using Ollama: it requires secrets for the external Model-Control Protocol (MCP) client toolsets for the room `mcptest`.
- `example/minimal-openai.yaml` is a minimal example using OpenAI: it requires no secrets beyond the `OPENAI_API_KEY`.
- `example/installation-openai.yaml` is a more fleshed-out example using OpenAI: in addition to the `OPENAI_API_KEY` secret, it requires secrets for the external Model-Control Protocol (MCP) client toolsets for the room `mcptest`.
Each installation configuration includes a number of rooms that:

- Configure resources: the example needs access to a model server (either OpenAI or Ollama) as well as access to example MCP services. The example uses https://smithery.ai/ but others can be configured.

  a. OIDC configuration: TODO
- Configure the LLM (Ollama / OpenAI):
  - For the Ollama variants, export the URL of your model server as `OLLAMA_BASE_URL`. This URL should not contain the `/v1` suffix, e.g. if you are running Ollama on your own machine:
  - The example configuration uses the `gpt-oss` model. If using either Ollama variant, install that model via:
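For a local Ollama server, these two steps might look like this (`11434` is Ollama's default port; the guard around `ollama pull` just makes the snippet safe to run on machines where Ollama isn't installed):

```shell
# Ollama's default local endpoint -- note: no /v1 suffix
export OLLAMA_BASE_URL="http://localhost:11434"

# Install the model used by the example configuration
if command -v ollama >/dev/null 2>&1; then
  ollama pull gpt-oss
fi
```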
- Check for missing secrets / environment variables:
  This command will check the server for any missing variables or invalid configuration files.
  The secrets used in your chosen configuration should be exported as environment variables, e.g.:

  Note that the alternate installation configurations, example/minimal.yaml and example/minimal-openai.yaml, require no additional secrets. The example/minimal.yaml configuration still expects the OLLAMA_BASE_URL environment variable to be set (or present in an .env file):
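For instance, the variable can be exported directly or placed in a `.env` file (the value below assumes a local Ollama server on its default port):

```shell
# Either export directly...
export OLLAMA_BASE_URL="http://localhost:11434"

# ...or add the variable to a .env file
printf 'OLLAMA_BASE_URL=http://localhost:11434\n' >> .env
```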
- Configure any missing secrets, e.g. by sourcing a `.env` file, or by exporting them directly.
- Configure any missing environment variables, e.g. by editing the installation YAML file, adding them to a `.env` file in the installation path, or exporting them directly.
## Running the Server
Start the FastAPI server:
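For example, using one of the example installation configurations:

```shell
soliplex-cli serve example/installation.yaml
```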
The server will be available at http://localhost:8000 by default.
## Server Command Options

Basic Options:

- `INSTALLATION_CONFIG_PATH` - Path to installation YAML file (required)
- `--help` - Show help message

Network Options:

- `--host HOST` - Bind to specific host (default: `127.0.0.1`)
  - Use `0.0.0.0` to accept connections from any network interface
  - Example: `--host 0.0.0.0`
- `--port PORT` - Listen on specific port (default: `8000`)
  - Example: `--port 8080`

Authentication Options:

- `--no-auth-mode` - Disable authentication (development/testing only)
  - WARNING: Never use in production
  - Example: `--no-auth-mode`

Hot Reload Options:

- `--reload {code,config,both}` / `-r {code,config,both}` - Enable hot reload
  - `code` - Watch Python source files for changes
  - `config` - Watch YAML configuration files for changes
  - `both` - Watch both code and config files
  - Example: `--reload both` or `-r both`
- `--reload-dirs DIRS` - Additional directories to watch (repeatable)
  - Example: `--reload-dirs ./custom_modules --reload-dirs ./plugins`
- `--reload-includes PATTERNS` - File patterns to include in watch (repeatable)
  - Example: `--reload-includes "*.yaml" --reload-includes "*.json"`

Proxy Options:

- `--proxy-headers` - Enable parsing of X-Forwarded-* headers
  - Use when running behind a reverse proxy (nginx, traefik, etc.)
  - Example: `--proxy-headers`
- `--forwarded-allow-ips IPS` - Trusted IP addresses for proxy headers (comma-separated)
  - Default: `127.0.0.1`
  - Example: `--forwarded-allow-ips "127.0.0.1,10.0.0.0/8"`
## Common Usage Examples
Development with hot reload:
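A sketch combining the hot-reload and authentication flags documented above (the choice of `example/minimal.yaml` is just for illustration):

```shell
soliplex-cli serve example/minimal.yaml \
  --reload both \
  --no-auth-mode
```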
Production (behind nginx):
```shell
soliplex-cli serve example/installation.yaml \
  --host 127.0.0.1 \
  --port 8000 \
  --proxy-headers \
  --forwarded-allow-ips "127.0.0.1"
```
Docker container (all network interfaces):
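A container typically needs to bind all network interfaces; a sketch using the `--host` option documented above:

```shell
soliplex-cli serve example/installation.yaml \
  --host 0.0.0.0 \
  --port 8000
```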
Custom port for testing:
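For example, using the `--port` option:

```shell
soliplex-cli serve example/minimal.yaml --port 8080
```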
## Verifying the Server
To confirm your room configuration:
To check server health:
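The exact verification subcommands are not shown in this excerpt. With the server running, a generic liveness probe can be a plain HTTP request; the `/health` path below is an assumption, not a documented endpoint:

```shell
# Hypothetical health endpoint -- substitute the real route
curl -fsS http://localhost:8000/health
```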
## API Endpoints
If the soliplex-cli server is running, you can browse the
live OpenAPI documentation.
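By default, FastAPI applications serve interactive docs at `/docs` and the raw OpenAPI schema at `/openapi.json` (standard FastAPI paths, assuming Soliplex does not override them):

```shell
# Swagger UI in a browser: http://localhost:8000/docs
# Raw OpenAPI schema:
curl http://localhost:8000/openapi.json
```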