---
sidebar_position: 1
title: Deployment Guide
---

Deployment Guide

This guide walks you through deploying Reflector from scratch. Follow these steps in order.

What You'll Set Up

flowchart LR
    User --> Caddy["Caddy (auto-SSL)"]
    Caddy --> Frontend["Frontend (Next.js)"]
    Caddy --> Backend["Backend (FastAPI)"]
    Backend --> PostgreSQL
    Backend --> Redis
    Backend --> Workers["Celery Workers"]
    Workers --> PostgreSQL
    Workers --> Redis
    Workers --> GPU["GPU Processing<br/>(Modal.com OR Self-hosted)"]

Prerequisites

Before starting, you need:

Optional (for live meeting rooms)

  • Daily.co account - Free tier at https://dashboard.daily.co
  • AWS S3 bucket + IAM Role - For Daily.co recording storage (separate from transcript storage)

Configure DNS

Type: A    Name: app    Value: <your-server-ip>
Type: A    Name: api    Value: <your-server-ip>

Deploy GPU Processing

Reflector requires GPU processing for transcription and speaker diarization. Choose one option:

|          | Modal.com (Cloud)                  | Self-Hosted GPU              |
| -------- | ---------------------------------- | ---------------------------- |
| Best for | No GPU hardware, zero maintenance  | Own GPU server, full control |
| Pricing  | Pay-per-use                        | Fixed infrastructure cost    |

Option A: Modal.com (Serverless Cloud GPU)

Accept HuggingFace Licenses

Visit both pages and click "Accept":

Generate a token at https://huggingface.co/settings/tokens

Deploy to Modal

An install script helps with this setup; it uses the Modal API to configure all the necessary moving parts.

Alternatively, everything the script does can be performed manually in the Modal UI settings.

uv tool install modal
modal setup  # opens browser for authentication

git clone https://github.com/monadical-sas/reflector.git
cd reflector/gpu/modal_deployments
./deploy-all.sh --hf-token YOUR_HUGGINGFACE_TOKEN

Save the output and copy the configuration block; you'll need it soon.

See Modal Setup for troubleshooting and details.

Option B: Self-Hosted GPU

Location: YOUR GPU SERVER

Requires: NVIDIA GPU with 8GB+ VRAM, Ubuntu 22.04+, 40-50GB disk.

See Self-Hosted GPU Setup for complete instructions. Quick summary:

  1. Install NVIDIA drivers and Docker
  2. Clone repository: git clone https://github.com/monadical-sas/reflector.git
  3. Configure .env with HuggingFace token
  4. Start service with Docker compose
  5. Set up Caddy reverse proxy for HTTPS

Save your API key and HTTPS URL - you'll need them soon.


Prepare Server

Location: dedicated Reflector server

Install Docker

ssh user@your-server-ip

curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Log out and back in for group changes
exit
ssh user@your-server-ip

docker --version  # verify

Firewall

Ensure ports 80 (HTTP) and 443 (HTTPS) are open for inbound traffic. The method varies by cloud provider and OS configuration.

For live transcription without Daily/Whereby rooms: WebRTC requires UDP port range 49152-65535 for media traffic.
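How you open these ports depends on your provider; some clouds use security-group rules instead of a host firewall. On a plain Ubuntu server with ufw, a sketch might look like:

```shell
# Sketch: open web ports with ufw (Ubuntu). Cloud security groups
# may be required instead of (or in addition to) this.
sudo ufw allow 80/tcp    # HTTP (Let's Encrypt challenges, redirects)
sudo ufw allow 443/tcp   # HTTPS
# Only if you use direct WebRTC (no Daily/Whereby rooms):
sudo ufw allow 49152:65535/udp
sudo ufw enable
```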

Clone Repository

The Docker images contain all application code. You clone the repository for configuration files and the compose definition:

git clone https://github.com/monadical-sas/reflector.git
cd reflector

Create S3 Bucket for Transcript Storage

Reflector requires AWS S3 to store audio files during processing.

Create Bucket

# Choose a unique bucket name
BUCKET_NAME="reflector-transcripts-yourname"
AWS_REGION="us-east-1"

# Create bucket
aws s3 mb s3://$BUCKET_NAME --region $AWS_REGION
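Bucket names must be globally unique and DNS-compatible: 3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit. A quick local sanity check (a sketch, not an official validator):

```shell
BUCKET_NAME="reflector-transcripts-yourname"

# Rough check of S3 naming rules: 3-63 chars, allowed characters,
# alphanumeric first and last character.
if printf '%s' "$BUCKET_NAME" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'; then
  echo "bucket name looks valid"
else
  echo "bucket name is invalid"
fi
```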

Create IAM User

Create an IAM user with S3 access for Reflector:

  1. Go to AWS IAM Console → Users → Create User
  2. Name: reflector-transcripts
  3. Attach policy: AmazonS3FullAccess (or create a custom policy for just your bucket)
  4. Create access key (Access key ID + Secret access key)

Save these credentials - you'll need them in the next step.
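If you choose a custom policy instead of AmazonS3FullAccess, a least-privilege sketch scoped to the bucket from the previous step might look like this (adjust the bucket name to yours):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::reflector-transcripts-yourname"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::reflector-transcripts-yourname/*"
    }
  ]
}
```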


Configure Environment

Reflector has two env files:

  • server/.env - Backend configuration
  • www/.env - Frontend configuration

Backend Configuration

cp server/.env.example server/.env
nano server/.env

Required settings:

# Database (defaults work with docker-compose.prod.yml)
DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector

# Redis
REDIS_HOST=redis
CELERY_BROKER_URL=redis://redis:6379/1
CELERY_RESULT_BACKEND=redis://redis:6379/1

# Your domains
BASE_URL=https://api.example.com
CORS_ORIGIN=https://app.example.com
CORS_ALLOW_CREDENTIALS=true

# Secret key - generate with: openssl rand -hex 32
SECRET_KEY=<your-generated-secret>

# GPU Processing - choose ONE option:

# Option A: Modal.com (paste from deploy-all.sh output)
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://yourname--reflector-transcriber-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<from-deploy-all.sh-output>
DIARIZATION_BACKEND=modal
DIARIZATION_URL=https://yourname--reflector-diarizer-web.modal.run
DIARIZATION_MODAL_API_KEY=<from-deploy-all.sh-output>

# Option B: Self-hosted GPU (use your GPU server URL and API key)
# TRANSCRIPT_BACKEND=modal
# TRANSCRIPT_URL=https://gpu.example.com
# TRANSCRIPT_MODAL_API_KEY=<your-generated-api-key>
# DIARIZATION_BACKEND=modal
# DIARIZATION_URL=https://gpu.example.com
# DIARIZATION_MODAL_API_KEY=<your-generated-api-key>

# Storage - where to store audio files and transcripts (requires AWS S3)
TRANSCRIPT_STORAGE_BACKEND=aws
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-transcripts-yourname  # the bucket created earlier
TRANSCRIPT_STORAGE_AWS_REGION=us-east-1

# LLM - for generating titles, summaries, and topics
LLM_API_KEY=sk-your-openai-api-key
LLM_MODEL=gpt-4o-mini
# LLM_URL=https://api.openai.com/v1  # Optional: custom endpoint (vLLM, LiteLLM, Ollama, etc.)

# Auth - disable for initial setup (see a dedicated step for authentication)
AUTH_BACKEND=none
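SECRET_KEY (and NEXTAUTH_SECRET in the frontend config below) can be generated on any machine with OpenSSL:

```shell
# 32 random bytes, hex-encoded = 64 characters
SECRET=$(openssl rand -hex 32)
echo "SECRET_KEY=$SECRET"
```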

Frontend Configuration

cp www/.env.example www/.env
nano www/.env

Required settings:

# Your domains
SITE_URL=https://app.example.com
API_URL=https://api.example.com
WEBSOCKET_URL=wss://api.example.com
SERVER_API_URL=http://server:1250

# NextAuth
NEXTAUTH_URL=https://app.example.com
NEXTAUTH_SECRET=<generate-with-openssl-rand-hex-32>

# Disable login requirement for initial setup
FEATURE_REQUIRE_LOGIN=false

Reverse proxy (Caddy or existing)

If Coolify, Traefik, or nginx already uses ports 80/443 on your host, skip Caddy. Start the stack without the Caddy profile (see Start Services below), then point your proxy at web:3000 (frontend) and server:1250 (API).

If you want Reflector to provide the reverse proxy and SSL:

cp Caddyfile.example Caddyfile
nano Caddyfile

Replace example.com with your domains. The {$VAR:default} syntax uses Caddy's env var substitution - you can either edit the file directly or set FRONTEND_DOMAIN and API_DOMAIN environment variables.

{$FRONTEND_DOMAIN:app.example.com} {
    reverse_proxy web:3000
}

{$API_DOMAIN:api.example.com} {
    reverse_proxy server:1250
}
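If you prefer the environment-variable route over editing the Caddyfile, export the variables before starting the stack (a sketch; this assumes the compose file passes FRONTEND_DOMAIN and API_DOMAIN through to the caddy container):

```shell
# Substituted into {$FRONTEND_DOMAIN:...} and {$API_DOMAIN:...} above
export FRONTEND_DOMAIN=app.example.com
export API_DOMAIN=api.example.com
```

Then start the stack with the Caddy profile as shown in Start Services below.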

Start Services

Without Caddy (e.g. Coolify already on 80/443):

docker compose -f docker-compose.prod.yml up -d

With Caddy (Reflector handles SSL):

docker compose -f docker-compose.prod.yml --profile caddy up -d

Wait for containers to start (first run may take 1-2 minutes to pull images and initialize).


Verify Deployment

Check services

docker compose -f docker-compose.prod.yml ps
# All should show "Up"

Test API

curl https://api.example.com/health
# Should return: {"status":"healthy"}
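Right after startup the API may take a little while to come up; a small polling helper (a sketch using curl) can save repeated manual checks:

```shell
# Poll a URL until it responds successfully or we give up.
wait_for_health() {
  url="$1"
  tries="${2:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "timed out"
  return 1
}

# Usage:
# wait_for_health https://api.example.com/health
```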

Test Frontend

  • Visit https://app.example.com
  • You should see the Reflector interface
  • Try uploading an audio file to test transcription

If any verification fails, see Troubleshooting below.


Enable Authentication (Required for Live Rooms)

By default, Reflector is open (no login required). Authentication is required if you want to use Live Meeting Rooms.

See Authentication Setup for full Authentik OAuth configuration.

Quick summary:

  1. Deploy Authentik on your server
  2. Create OAuth provider in Authentik
  3. Extract public key for JWT verification
  4. Update server/.env: AUTH_BACKEND=jwt + AUTH_JWT_AUDIENCE
  5. Update www/.env: FEATURE_REQUIRE_LOGIN=true + Authentik credentials
  6. Mount JWT keys volume and restart services

Enable Live Meeting Rooms

Requires: Authentication Step

Live rooms require Daily.co and AWS S3. See Daily.co Setup for complete S3/IAM configuration instructions.

Note that Reflector also supports Whereby as a call provider; this guide doesn't cover its setup yet.

Quick config - Add to server/.env:

DEFAULT_VIDEO_PLATFORM=daily
DAILY_API_KEY=<from-daily.co-dashboard>
DAILY_SUBDOMAIN=<your-daily-subdomain>

# S3 for recording storage
DAILYCO_STORAGE_AWS_BUCKET_NAME=<your-bucket>
DAILYCO_STORAGE_AWS_REGION=us-east-1
DAILYCO_STORAGE_AWS_ROLE_ARN=<arn:aws:iam::ACCOUNT:role/DailyCo>

Reload env and restart:

docker compose -f docker-compose.prod.yml up -d server worker

Troubleshooting

Check logs for errors

docker compose -f docker-compose.prod.yml logs server --tail 20
docker compose -f docker-compose.prod.yml logs worker --tail 20

Services won't start

docker compose -f docker-compose.prod.yml logs

CORS errors in browser

  • Verify CORS_ORIGIN in server/.env matches your frontend domain exactly (including https://)
  • Reload env: docker compose -f docker-compose.prod.yml up -d server

SSL certificate errors (when using Caddy)

  • Caddy auto-provisions Let's Encrypt certificates
  • Ensure ports 80 and 443 are open and not used by another proxy
  • Check: docker compose -f docker-compose.prod.yml logs caddy
  • If port 80 is already in use (e.g. by Coolify), run without Caddy: docker compose -f docker-compose.prod.yml up -d and use your existing proxy

Transcription not working

  • Check Modal dashboard: https://modal.com/apps
  • Verify URLs in server/.env match deployed functions
  • Check worker logs: docker compose -f docker-compose.prod.yml logs worker

"Login required" but auth not configured

  • Set FEATURE_REQUIRE_LOGIN=false in www/.env
  • Rebuild frontend: docker compose -f docker-compose.prod.yml up -d --force-recreate web

Database migrations or connectivity issues

Migrations run automatically on server startup. To check database connectivity or debug migration failures:

# Check server logs for migration errors
docker compose -f docker-compose.prod.yml logs server | grep -i -E "(alembic|migration|database|postgres)"

# Verify database connectivity
docker compose -f docker-compose.prod.yml exec server uv run python -c "from reflector.db import engine; print('DB connected')"

# Manually run migrations (if needed)
docker compose -f docker-compose.prod.yml exec server uv run alembic upgrade head