feat: add unified setup-local-dev.sh for standalone deployment

Single script takes fresh clone to working Reflector: Ollama/LLM setup,
env file generation (server/.env + www/.env.local), docker compose up,
health checks. No Hatchet in standalone — live pipeline is pure Celery.
Igor Loskutov
2026-02-10 17:47:12 -05:00
parent 46750abad9
commit 427254fe33
3 changed files with 365 additions and 247 deletions

TASKS.md

@@ -1,218 +0,0 @@
# Standalone Setup — Remaining Tasks
Branch: `local-llm-prd`. Setup doc: `docs/docs/installation/standalone-local-setup.md`.
**Goal**: one script (`scripts/setup-local-dev.sh`) that takes a fresh clone to a working Reflector with no cloud accounts, no API keys, no manual env editing. Live/WebRTC mode only (no file upload, no Daily.co, no Whereby).
**Already done**: Step 1 (Ollama/LLM) in `scripts/setup-local-llm.sh`, Step 3 (storage — skip S3).
**Not our scope**: Step 4 (transcription/diarization) — another developer handles that.
---
## Task 1: Research env defaults for standalone
### Goal
Determine the exact `server/.env` and `www/.env.local` contents for standalone mode. Both files must be generatable by the setup script with zero user input.
### server/.env — what we know
Source of truth for all backend settings: `server/reflector/settings.py` (pydantic BaseSettings, reads from `.env`).
**Vars that MUST be set (no usable default for docker):**
| Variable | Standalone value | Why |
|----------|-----------------|-----|
| `DATABASE_URL` | `postgresql+asyncpg://reflector:reflector@postgres:5432/reflector` | Default is `localhost`, containers need `postgres` hostname |
| `REDIS_HOST` | `redis` | Default is `localhost`, containers need `redis` hostname |
| `CELERY_BROKER_URL` | `redis://redis:6379/1` | Default uses `localhost` |
| `CELERY_RESULT_BACKEND` | `redis://redis:6379/1` | Default uses `localhost` |
| `HATCHET_CLIENT_TOKEN` | *generated at runtime* | Must be extracted from hatchet container after it starts (see below) |
**Vars that MUST be overridden from .env.example defaults:**
| Variable | Standalone value | .env.example has | Why |
|----------|-----------------|------------------|-----|
| `AUTH_BACKEND` | `none` | `jwt` | No Authentik in standalone |
| `TRANSCRIPT_STORAGE_BACKEND` | *(unset/empty)* | `aws` | Skip S3, audio stays local |
| `DIARIZATION_ENABLED` | `false` | `true` (settings.py default) | No diarization backend in standalone |
| `TRANSLATION_BACKEND` | `passthrough` | `modal` (.env.example) | No Modal in standalone. Default in settings.py is already `passthrough`. |
**Vars set by LLM setup (step 1, already handled):**
| Variable | Mac value | Linux GPU value | Linux CPU value |
|----------|-----------|-----------------|-----------------|
| `LLM_URL` | `http://host.docker.internal:11434/v1` | `http://ollama:11434/v1` | `http://ollama-cpu:11434/v1` |
| `LLM_MODEL` | `qwen2.5:14b` | same | same |
| `LLM_API_KEY` | `not-needed` | same | same |
**Vars with safe defaults in settings.py (no override needed):**
- `LLM_CONTEXT_WINDOW` = 16000
- `SECRET_KEY` = `changeme-f02f86fd8b3e4fd892c6043e5a298e21` (fine for local dev)
- `BASE_URL` = `http://localhost:1250`
- `UI_BASE_URL` = `http://localhost:3000`
- `CORS_ORIGIN` = `*`
- `DATA_DIR` = `./data`
- `TRANSCRIPT_BACKEND` = `whisper` (default in settings.py — step 4 developer may change this)
- `HATCHET_CLIENT_TLS_STRATEGY` = `none`
- `PUBLIC_MODE` = `false`
**OPEN QUESTION — Hatchet token chicken-and-egg:**
The `HATCHET_CLIENT_TOKEN` must be generated after the hatchet container starts and creates its DB schema. The current manual process (from `server/README.md`):
```bash
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
"SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
--config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"
```
The setup script needs to:
1. Start postgres + hatchet first
2. Wait for hatchet to be healthy
3. Generate the token
4. Write it to `server/.env`
5. Then start server + workers (which need the token)
**OPEN QUESTION — HATCHET_CLIENT_HOST_PORT and HATCHET_CLIENT_SERVER_URL:**
These are NOT in `settings.py` — they're Hatchet SDK env vars read directly by the SDK. The JWT token embeds `localhost` URLs, but workers inside Docker need `hatchet:7077`. From CLAUDE.md:
```
HATCHET_CLIENT_HOST_PORT=hatchet:7077
HATCHET_CLIENT_SERVER_URL=http://hatchet:8888
HATCHET_CLIENT_TLS_STRATEGY=none
```
These may need to go in `server/.env` too. Verify by checking how hatchet-worker containers connect — they share the same `env_file: ./server/.env` as the server.
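A quick way to run that verification is to grep the env file for the SDK vars. A minimal sketch (the env file below is a fabricated stand-in created by the sketch itself, not the repo's real `server/.env`):

```bash
# Audit sketch: which Hatchet SDK vars does an env file define?
# The demo file is fabricated for illustration only.
envfile="$(mktemp)"
printf '%s\n' \
  'HATCHET_CLIENT_HOST_PORT=hatchet:7077' \
  'HATCHET_CLIENT_TLS_STRATEGY=none' > "$envfile"

for k in HATCHET_CLIENT_HOST_PORT HATCHET_CLIENT_SERVER_URL HATCHET_CLIENT_TLS_STRATEGY; do
  if grep -q "^${k}=" "$envfile"; then
    echo "present: $k"
  else
    echo "MISSING: $k"
  fi
done
```

Running the same loop against the real `server/.env` (and inside a worker container via `docker compose exec`) would show whether the SDK vars actually reach the workers.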
### www/.env.local — what we know
The `web` service in `docker-compose.yml` reads `env_file: ./www/.env.local`.
Template: `www/.env.example`. For standalone:
| Variable | Standalone value | Notes |
|----------|-----------------|-------|
| `SITE_URL` | `http://localhost:3000` | |
| `NEXTAUTH_URL` | `http://localhost:3000` | Required by NextAuth |
| `NEXTAUTH_SECRET` | `standalone-dev-secret-not-for-production` | Any string works for dev |
| `API_URL` | `http://localhost:1250` | Browser-side API calls |
| `SERVER_API_URL` | `http://server:1250` | Server-side (SSR) API calls within Docker network |
| `WEBSOCKET_URL` | `ws://localhost:1250` | Browser-side WebSocket |
**Not needed for standalone (no auth):**
- `AUTHENTIK_*` vars — only needed when `AUTH_BACKEND=jwt`
- `FEATURE_REQUIRE_LOGIN` — should be `false` or unset
- `ZULIP_*` — no Zulip integration
- `SENTRY_DSN` — no Sentry
**OPEN QUESTION**: Does the frontend crash if `AUTHENTIK_*` vars are missing? Or does it gracefully skip auth UI when backend reports `AUTH_BACKEND=none`? Check `www/` auth code.
### Deliverable
A concrete list of env vars for each file, with exact values. Resolve all open questions above.
---
## Task 2: Build unified setup script + docker integration
### Goal
Create `scripts/setup-local-dev.sh` that does everything: LLM setup (absorb existing `setup-local-llm.sh`), env file generation, docker services, migrations, health check.
### Depends on
Task 1 (env defaults must be decided first).
### Script structure (from standalone-local-setup.md)
```
setup-local-dev.sh
├── Step 1: LLM/Ollama setup (existing logic from setup-local-llm.sh)
├── Step 2: Generate server/.env and www/.env.local
├── Step 3: (skip — no S3 needed)
├── Step 4: (skip — handled by other developer)
├── Step 5: docker compose up (postgres, redis, hatchet, server, workers, web)
├── Step 6: Wait for services + run migrations
└── Step 7: Health check + print success URLs
```
### Key implementation details
**Idempotency**: Script must be safe to re-run. Each step should check if already done:
- LLM: check if Ollama running + model pulled
- Env files: check if files exist, don't overwrite (or merge carefully)
- Docker: `docker compose up -d` is already idempotent
- Migrations: `alembic upgrade head` is already idempotent
- Health check: always run
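The "merge carefully" requirement for env files reduces to a set-or-append helper. A minimal sketch (function and file names are illustrative; awk is used here to sidestep the BSD/GNU `sed -i` incompatibility the script would otherwise need to handle):

```bash
# Set-or-append sketch for idempotent .env merging:
# update a key in place if present, append it otherwise.
env_set_sketch() {
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    local tmp
    tmp="$(mktemp)"
    awk -v k="$key" -v v="$value" -F'=' \
      '$1 == k { $0 = k "=" v } { print }' "$file" > "$tmp"
    mv "$tmp" "$file"
  else
    echo "${key}=${value}" >> "$file"
  fi
}

demo_env="$(mktemp)"
env_set_sketch "$demo_env" REDIS_HOST localhost   # appends
env_set_sketch "$demo_env" REDIS_HOST redis       # updates in place
cat "$demo_env"    # → REDIS_HOST=redis
```

Re-running it any number of times leaves exactly one line per key, which is what makes the env-generation step safe to repeat.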
**Hatchet token flow** (the tricky part):
1. Generate env files WITHOUT `HATCHET_CLIENT_TOKEN`
2. Start postgres + redis + hatchet
3. Wait for hatchet health (`curl -f http://localhost:8889/api/live`)
4. Generate token via `hatchet-admin` CLI (see Task 1 for command)
5. Append/update `HATCHET_CLIENT_TOKEN=...` in `server/.env`
6. Start server + hatchet-worker-cpu + hatchet-worker-llm + web
**Docker compose invocation**:
- Mac: `docker compose -f docker-compose.yml -f docker-compose.standalone.yml up -d <services>`
- Linux with GPU: add `--profile ollama-gpu`
- Linux without GPU: add `--profile ollama-cpu`
- Services for standalone: `postgres redis hatchet server hatchet-worker-cpu hatchet-worker-llm web`
- Note: `worker` (Celery) and `beat` may not be needed for standalone live mode — verify if live pipeline uses Celery or only Hatchet
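The invocation rules above can be sketched as a small dispatch. This is a sketch, not the final script (it prints the command instead of executing it, so it is safe to run anywhere; service names follow the list above):

```bash
# Assemble the docker compose invocation per platform,
# following the Mac/Linux rules above.
services="postgres redis hatchet server hatchet-worker-cpu hatchet-worker-llm web"
cmd=(docker compose -f docker-compose.yml)
if [ "$(uname -s)" = "Linux" ]; then
  cmd+=(-f docker-compose.standalone.yml)
  if command -v nvidia-smi > /dev/null 2>&1; then
    cmd+=(--profile ollama-gpu)
  else
    cmd+=(--profile ollama-cpu)
  fi
fi
# Print rather than execute, so the sketch has no docker dependency
echo "${cmd[*]} up -d $services"
```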
**Migrations**: `docker compose exec server uv run alembic upgrade head` — must wait for server container to be ready first.
**Health checks**:
- `curl -sf http://localhost:1250/health` (server `/health` endpoint returns `{"status": "healthy"}`)
- `curl -sf http://localhost:3000` (frontend)
- LLM reachability from container: `docker compose exec server curl -sf http://host.docker.internal:11434/v1/models` (Mac) or equivalent
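All three checks reduce to a retry-until-success loop. A minimal predicate-based sketch (the real checks would pass e.g. `curl -sf http://localhost:1250/health` as the command; `true`/`false` stand in here so the sketch needs no network):

```bash
# Retry a check command until it succeeds or the budget runs out.
wait_for() {
  local retries="$1" interval="$2"; shift 2
  local i
  for i in $(seq 1 "$retries"); do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    sleep "$interval"
  done
  return 1
}

wait_for 3 0 true  && echo "service up"   # → service up
wait_for 2 0 false || echo "gave up"      # → gave up
```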
### Files to create/modify
| File | Action |
|------|--------|
| `scripts/setup-local-dev.sh` | Create — unified setup script |
| `scripts/setup-local-llm.sh` | Keep or remove after folding into unified script |
| `docs/docs/installation/standalone-local-setup.md` | Update status section when done |
| `server/.env.example` | May need standalone section/comments |
### Docker compose considerations
Current `docker-compose.yml` services: server, worker, beat, hatchet-worker-cpu, hatchet-worker-llm, redis, web, postgres, hatchet.
Current `docker-compose.standalone.yml` services: ollama (GPU profile), ollama-cpu (CPU profile).
**OPEN QUESTION**: Does the live pipeline (WebRTC recording) use Celery tasks or Hatchet workflows? If only Hatchet, we can skip `worker` and `beat` services. Check `server/reflector/pipelines/main_live_pipeline.py` — it currently uses Celery chains/chords for post-processing. So `worker` IS needed for live mode.
Update: Looking at `main_live_pipeline.py`, the live pipeline dispatches Celery tasks via `chain()` and `chord()` (lines ~780-810). So both `worker` (Celery) and hatchet workers are needed. `beat` is for cron jobs (cleanup, polling) — probably not critical for standalone demo but harmless to include.
### Final service list for standalone
```
postgres redis hatchet server worker hatchet-worker-cpu hatchet-worker-llm web
```
Plus on Linux: `ollama` or `ollama-cpu` via profile.
---
## Reference: key file locations
| File | Purpose |
|------|---------|
| `server/reflector/settings.py` | All backend env vars with defaults |
| `server/.env.example` | Current env template (production-oriented) |
| `www/.env.example` | Frontend env template |
| `docker-compose.yml` | Main services definition |
| `docker-compose.standalone.yml` | Ollama services for standalone |
| `scripts/setup-local-llm.sh` | Existing LLM setup script |
| `docs/docs/installation/standalone-local-setup.md` | Setup documentation |
| `server/README.md:56-84` | Hatchet token generation commands |
| `server/reflector/hatchet/client.py` | Hatchet client (requires HATCHET_CLIENT_TOKEN) |
| `server/reflector/storage/__init__.py` | Storage factory (skipped when TRANSCRIPT_STORAGE_BACKEND unset) |
| `server/reflector/pipelines/main_live_pipeline.py` | Live pipeline (uses Celery chains for post-processing) |
| `server/reflector/app.py:72-74` | Health endpoint (`GET /health` returns `{"status": "healthy"}`) |
| `server/docker/init-hatchet-db.sql` | Creates `hatchet` DB on postgres init |

docs/docs/installation/standalone-local-setup.md

@@ -20,32 +20,46 @@ The script is idempotent — safe to re-run at any time. It detects what's alrea
- Docker / OrbStack / Docker Desktop (any)
- Mac (Apple Silicon) or Linux
- 16GB+ RAM (32GB recommended for 14B LLM models)
- **Mac only**: [Ollama](https://ollama.com/download) installed (`brew install ollama`)
## What the script does
### 1. LLM inference via Ollama
**Mac**: starts Ollama natively (Metal GPU acceleration). Pulls the LLM model. Docker containers reach it via `host.docker.internal:11434`.
**Linux**: starts containerized Ollama via `docker-compose.standalone.yml` profile (`ollama-gpu` with NVIDIA, `ollama-cpu` without). Pulls model inside the container.
Configures `server/.env`:
```
LLM_URL=http://host.docker.internal:11434/v1
LLM_MODEL=qwen2.5:14b
LLM_API_KEY=not-needed
```
The Step 1 logic originally lived in `scripts/setup-local-llm.sh`; it is now folded into the unified `setup-local-dev.sh`.
### 2. Environment files
Generates `server/.env` and `www/.env.local` with standalone defaults:
**`server/.env`** — key settings:
| Variable | Value | Why |
|----------|-------|-----|
| `DATABASE_URL` | `postgresql+asyncpg://...@postgres:5432/reflector` | Docker-internal hostname |
| `REDIS_HOST` | `redis` | Docker-internal hostname |
| `CELERY_BROKER_URL` | `redis://redis:6379/1` | Docker-internal hostname |
| `AUTH_BACKEND` | `none` | No Authentik in standalone |
| `TRANSCRIPT_BACKEND` | `whisper` | Local transcription |
| `DIARIZATION_ENABLED` | `false` | No diarization backend |
| `TRANSLATION_BACKEND` | `passthrough` | No Modal |
| `LLM_URL` | `http://host.docker.internal:11434/v1` (Mac) | Ollama endpoint |
**`www/.env.local`** — key settings:
| Variable | Value |
|----------|-------|
| `API_URL` | `http://localhost:1250` |
| `SERVER_API_URL` | `http://server:1250` |
| `WEBSOCKET_URL` | `ws://localhost:1250` |
| `FEATURE_REQUIRE_LOGIN` | `false` |
| `NEXTAUTH_SECRET` | `standalone-dev-secret-not-for-production` |
If env files already exist, the script only updates LLM vars — it won't overwrite your customizations.
### 3. Transcript storage (skip for standalone)
Production uses AWS S3 to persist processed audio. **Not needed for standalone live/WebRTC mode.**
@@ -56,38 +70,43 @@ When `TRANSCRIPT_STORAGE_BACKEND` is unset (the default):
- Post-processing (LLM summary, topics, title) works entirely from DB text
- Diarization (speaker ID) is skipped — already disabled in standalone config (`DIARIZATION_ENABLED=false`)
The script ensures `TRANSCRIPT_STORAGE_BACKEND` is left unset in `server/.env`.
> **Future**: if file upload or audio persistence across restarts is needed, implement a filesystem storage backend (`storage_local.py`) using the existing `Storage` plugin architecture in `reflector/storage/base.py`. No MinIO required.
### 4. Transcription and diarization
Production uses Modal.com (cloud GPU) or self-hosted GPU servers.
Standalone uses `TRANSCRIPT_BACKEND=whisper` for local CPU-based transcription. Diarization is disabled.
> Another developer is working on optimizing the local transcription experience. For now, local Whisper works for short recordings but is slow on CPU.
### 5. Docker services
```bash
docker compose up -d postgres redis server worker beat web
```
All services start in a single command. No Hatchet in standalone mode — LLM processing (summaries, topics, titles) runs via Celery tasks.
### 6. Database migrations
```bash
docker compose exec server uv run alembic upgrade head
```
Run automatically by the `server` container on startup (`runserver.sh` calls `alembic upgrade head`). No manual step needed.
### 7. Health check
Verifies:
- Server responds at `http://localhost:1250/health`
- Frontend serves at `http://localhost:3000`
- LLM endpoint reachable from inside containers
## Services
| Service | Port | Purpose |
|---------|------|---------|
| `server` | 1250 | FastAPI backend (runs migrations on start) |
| `web` | 3000 | Next.js frontend |
| `postgres` | 5432 | PostgreSQL database |
| `redis` | 6379 | Cache + Celery broker |
| `worker` | — | Celery worker (live pipeline post-processing) |
| `beat` | — | Celery beat (scheduled tasks) |
## What's NOT covered
@@ -95,11 +114,14 @@ These require external accounts and infrastructure that can't be scripted:
- **Live meeting rooms** — requires Daily.co account, S3 bucket, IAM roles
- **Authentication** — requires Authentik deployment and OAuth configuration
- **Hatchet workflows** — requires separate Hatchet setup for multitrack processing
- **Production deployment** — see [Deployment Guide](./overview)
## Current status
- Step 1 (Ollama/LLM) — implemented
- Step 2 (environment files) — implemented
- Step 3 (transcript storage) — resolved: skip for live-only mode
- Step 4 (transcription/diarization) — in progress by another developer
- Steps 5-7 (Docker, migrations, health) — implemented
- **Unified script**: `scripts/setup-local-dev.sh`

scripts/setup-local-dev.sh (new executable file)

@@ -0,0 +1,314 @@
#!/usr/bin/env bash
#
# Standalone local development setup for Reflector.
# Takes a fresh clone to a working instance — no cloud accounts, no API keys.
#
# Usage:
# ./scripts/setup-local-dev.sh
#
# Idempotent — safe to re-run at any time.
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
SERVER_ENV="$ROOT_DIR/server/.env"
WWW_ENV="$ROOT_DIR/www/.env.local"
MODEL="${LLM_MODEL:-qwen2.5:14b}"
OLLAMA_PORT="${OLLAMA_PORT:-11434}"
OS="$(uname -s)"
# --- Colors ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'
info() { echo -e "${CYAN}==>${NC} $*"; }
ok() { echo -e "${GREEN}✓${NC} $*"; }
warn() { echo -e "${YELLOW} !${NC} $*"; }
err() { echo -e "${RED}✗${NC} $*" >&2; }
# --- Helpers ---
wait_for_url() {
local url="$1" label="$2" retries="${3:-30}" interval="${4:-2}"
for i in $(seq 1 "$retries"); do
if curl -sf "$url" > /dev/null 2>&1; then
return 0
fi
echo -ne "\r Waiting for $label... ($i/$retries)"
sleep "$interval"
done
echo ""
err "$label not responding at $url after $retries attempts"
return 1
}
env_has_key() {
local file="$1" key="$2"
grep -q "^${key}=" "$file" 2>/dev/null
}
env_set() {
local file="$1" key="$2" value="$3"
if env_has_key "$file" "$key"; then
# Replace existing value (portable sed)
if [[ "$OS" == "Darwin" ]]; then
sed -i '' "s|^${key}=.*|${key}=${value}|" "$file"
else
sed -i "s|^${key}=.*|${key}=${value}|" "$file"
fi
else
echo "${key}=${value}" >> "$file"
fi
}
compose_cmd() {
if [[ "$OS" == "Linux" ]] && [[ -n "${OLLAMA_PROFILE:-}" ]]; then
docker compose -f "$ROOT_DIR/docker-compose.yml" \
-f "$ROOT_DIR/docker-compose.standalone.yml" \
--profile "$OLLAMA_PROFILE" \
"$@"
else
docker compose -f "$ROOT_DIR/docker-compose.yml" "$@"
fi
}
# =========================================================
# Step 1: LLM / Ollama
# =========================================================
step_llm() {
info "Step 1: LLM setup (Ollama + $MODEL)"
case "$OS" in
Darwin)
if ! command -v ollama &> /dev/null; then
err "Ollama not found. Install it:"
err " brew install ollama"
err " # or https://ollama.com/download"
exit 1
fi
# Start if not running
if ! curl -sf "http://localhost:$OLLAMA_PORT/api/tags" > /dev/null 2>&1; then
info "Starting Ollama..."
nohup ollama serve > /dev/null 2>&1 &
disown
fi
wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
echo ""
# Pull model if not already present
if ollama list 2>/dev/null | grep -qF -- "$MODEL"; then
ok "Model $MODEL already pulled"
else
info "Pulling model $MODEL (this may take a while)..."
ollama pull "$MODEL"
fi
LLM_URL_VALUE="http://host.docker.internal:$OLLAMA_PORT/v1"
;;
Linux)
if command -v nvidia-smi &> /dev/null && nvidia-smi > /dev/null 2>&1; then
ok "NVIDIA GPU detected — using ollama-gpu profile"
OLLAMA_PROFILE="ollama-gpu"
OLLAMA_SVC="ollama"
LLM_URL_VALUE="http://ollama:$OLLAMA_PORT/v1"
else
warn "No NVIDIA GPU — using ollama-cpu profile"
OLLAMA_PROFILE="ollama-cpu"
OLLAMA_SVC="ollama-cpu"
LLM_URL_VALUE="http://ollama-cpu:$OLLAMA_PORT/v1"
fi
info "Starting Ollama container..."
# Start only the Ollama service here — app services come up later, after env files exist
compose_cmd up -d "$OLLAMA_SVC"
wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
echo ""
# Pull model inside container
if compose_cmd exec "$OLLAMA_SVC" ollama list 2>/dev/null | grep -qF -- "$MODEL"; then
ok "Model $MODEL already pulled"
else
info "Pulling model $MODEL inside container (this may take a while)..."
compose_cmd exec "$OLLAMA_SVC" ollama pull "$MODEL"
fi
;;
*)
err "Unsupported OS: $OS"
exit 1
;;
esac
ok "LLM ready ($MODEL via Ollama)"
}
# =========================================================
# Step 2: Generate server/.env
# =========================================================
step_server_env() {
info "Step 2: Generating server/.env"
if [[ -f "$SERVER_ENV" ]]; then
ok "server/.env already exists — checking key vars"
else
cat > "$SERVER_ENV" << 'ENVEOF'
# Generated by setup-local-dev.sh — standalone local development
# Source of truth for settings: server/reflector/settings.py
# --- Database (Docker internal hostnames) ---
DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST=redis
CELERY_BROKER_URL=redis://redis:6379/1
CELERY_RESULT_BACKEND=redis://redis:6379/1
# --- Auth (disabled for standalone) ---
AUTH_BACKEND=none
# --- Transcription (local whisper) ---
TRANSCRIPT_BACKEND=whisper
# --- Storage (local disk, no S3) ---
# TRANSCRIPT_STORAGE_BACKEND is intentionally unset — audio stays on local disk
# --- Diarization (disabled, no backend available) ---
DIARIZATION_ENABLED=false
# --- Translation (passthrough, no Modal) ---
TRANSLATION_BACKEND=passthrough
# --- LLM (set below by setup script) ---
LLM_API_KEY=not-needed
ENVEOF
ok "Created server/.env"
fi
# Ensure LLM vars are set (may differ per OS/re-run)
env_set "$SERVER_ENV" "LLM_URL" "$LLM_URL_VALUE"
env_set "$SERVER_ENV" "LLM_MODEL" "$MODEL"
env_set "$SERVER_ENV" "LLM_API_KEY" "not-needed"
ok "LLM vars set (LLM_URL=$LLM_URL_VALUE)"
}
# =========================================================
# Step 3: Generate www/.env.local
# =========================================================
step_www_env() {
info "Step 3: Generating www/.env.local"
if [[ -f "$WWW_ENV" ]]; then
ok "www/.env.local already exists — skipping"
return
fi
cat > "$WWW_ENV" << 'ENVEOF'
# Generated by setup-local-dev.sh — standalone local development
SITE_URL=http://localhost:3000
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=standalone-dev-secret-not-for-production
# Browser-side URLs (localhost, outside Docker)
API_URL=http://localhost:1250
WEBSOCKET_URL=ws://localhost:1250
# Server-side (SSR) URL (Docker internal)
SERVER_API_URL=http://server:1250
# Auth disabled for standalone
FEATURE_REQUIRE_LOGIN=false
ENVEOF
ok "Created www/.env.local"
}
# =========================================================
# Step 4: Start all services
# =========================================================
step_services() {
info "Step 4: Starting Docker services"
# server runs alembic migrations on startup automatically (see runserver.sh)
compose_cmd up -d postgres redis server worker beat web
ok "Containers started"
info "Server is running migrations (alembic upgrade head)..."
}
# =========================================================
# Step 5: Health checks
# =========================================================
step_health() {
info "Step 5: Health checks"
wait_for_url "http://localhost:1250/health" "Server API" 60 3
echo ""
ok "Server API healthy"
wait_for_url "http://localhost:3000" "Frontend" 90 3
echo ""
ok "Frontend responding"
# Check LLM reachability from inside a container
if compose_cmd exec -T server \
curl -sf "$LLM_URL_VALUE/models" > /dev/null 2>&1; then
ok "LLM reachable from containers"
else
warn "LLM not reachable from containers at $LLM_URL_VALUE"
warn "Summaries/topics/titles won't work until LLM is accessible"
fi
}
# =========================================================
# Main
# =========================================================
main() {
echo ""
echo "=========================================="
echo " Reflector — Standalone Local Setup"
echo "=========================================="
echo ""
# Ensure we're in the repo root
if [[ ! -f "$ROOT_DIR/docker-compose.yml" ]]; then
err "docker-compose.yml not found in $ROOT_DIR"
err "Run this script from the repo root: ./scripts/setup-local-dev.sh"
exit 1
fi
# LLM_URL_VALUE is set by step_llm, used by later steps
LLM_URL_VALUE=""
OLLAMA_PROFILE=""
step_llm
echo ""
step_server_env
echo ""
step_www_env
echo ""
step_services
echo ""
step_health
echo ""
echo "=========================================="
echo -e " ${GREEN}Reflector is running!${NC}"
echo "=========================================="
echo ""
echo " Frontend: http://localhost:3000"
echo " API: http://localhost:1250"
echo ""
echo " To stop: docker compose down"
echo " To re-run: ./scripts/setup-local-dev.sh"
echo ""
}
main "$@"