## API Key Management

### Finding Your User ID

```bash
# Get your OAuth sub (user ID) - requires authentication
curl -H "Authorization: Bearer <your_jwt>" http://localhost:1250/v1/me
# Returns: {"sub": "your-oauth-sub-here", "email": "...", ...}
```
### Creating API Keys

```bash
curl -X POST http://localhost:1250/v1/user/api-keys \
  -H "Authorization: Bearer <your_jwt>" \
  -H "Content-Type: application/json" \
  -d '{"name": "My API Key"}'
```
### Using API Keys

```bash
# Use X-API-Key header instead of Authorization
curl -H "X-API-Key: <your_api_key>" http://localhost:1250/v1/transcripts
```
## AWS S3/SQS Usage Clarification

Whereby.com uploads recordings directly to our S3 bucket when meetings end.

### SQS Queue (`AWS_PROCESS_RECORDING_QUEUE_URL`)

Filled by: AWS S3 Event Notifications.

The S3 bucket is configured to send a notification to our SQS queue whenever a new object is created. This is standard AWS infrastructure, not part of our codebase.
AWS S3 → SQS event configuration (a boto3 sketch follows the list):

- Event type: `s3:ObjectCreated:*`
- Filter: `*.mp4` files
- Destination: our SQS queue
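For reference, the same configuration can be expressed with boto3. This is an illustrative sketch, not our code: the bucket name and queue ARN are hypothetical placeholders, and in practice this lives in AWS, outside this repository.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names: substitute the real recordings bucket and SQS queue ARN
s3.put_bucket_notification_configuration(
    Bucket="our-recordings-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:process-recording",
                "Events": ["s3:ObjectCreated:*"],
                # Only notify for .mp4 uploads
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".mp4"}]}
                },
            }
        ]
    },
)
```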
### Our System's Role

The worker polls SQS every 60 seconds (see `/server/reflector/worker/process.py:24-62`):

```python
# Every 60 seconds, check for new recordings
sqs = boto3.client("sqs", ...)
response = sqs.receive_message(QueueUrl=queue_url, ...)
```
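For context, the general shape of such a polling loop looks like the sketch below. This is illustrative only; the actual implementation lives in `process.py:24-62`, and the queue URL comes from `AWS_PROCESS_RECORDING_QUEUE_URL`.

```python
import json

import boto3

sqs = boto3.client("sqs")  # region/credentials come from the environment


def poll_once(queue_url: str) -> None:
    # Long-poll for up to 10 messages, waiting up to 20s for one to arrive
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        # Each S3 event notification lists the bucket/key of the new object
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"new recording: s3://{bucket}/{key}")
        # Delete the message so it is not redelivered after the visibility timeout
        sqs.delete_message(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],
        )
```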
### Requeue

Requeue an uploaded file for processing:

```bash
uv run /app/requeue_uploaded_file.py TRANSCRIPT_ID
```
## Hatchet Setup (Fresh DB)

After resetting the Hatchet database:

### Option A: Automatic (CLI)
```bash
# Get default tenant ID and create token in one command
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
  "SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
  --config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"
```

Copy the output to `server/.env`.
### Option B: Manual (UI)

- Create an API token at http://localhost:8889 → Settings → API Tokens
- Update `server/.env`: `HATCHET_CLIENT_TOKEN=<new-token>`

Then restart the workers:

```bash
docker compose restart server hatchet-worker
```
Workflows register automatically when the `hatchet-worker` service starts.
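Registration happens in the worker code via the Hatchet SDK. A minimal sketch, assuming the pre-1.0 `hatchet-sdk` Python decorator API; the workflow, event, and step names below are hypothetical, not the ones defined in this repo:

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()  # reads HATCHET_CLIENT_TOKEN from the environment


# Hypothetical workflow; the real ones live in the worker code
@hatchet.workflow(name="process-recording", on_events=["recording:uploaded"])
class ProcessRecording:
    @hatchet.step()
    def transcribe(self, context):
        return {"status": "done"}


worker = hatchet.worker("hatchet-worker")
worker.register_workflow(ProcessRecording())
worker.start()  # blocks; workflows are now registered with the server
```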
## Pipeline Management

Continue a stuck pipeline from the final-summaries (`identify_participants`) step:

```bash
uv run python -c "from reflector.pipelines.main_live_pipeline import task_pipeline_final_summaries; result = task_pipeline_final_summaries.delay(transcript_id='TRANSCRIPT_ID'); print(f'Task queued: {result.id}')"
```
Run the full post-processing pipeline (continues to completion):

```bash
uv run python -c "from reflector.pipelines.main_live_pipeline import pipeline_post; pipeline_post(transcript_id='TRANSCRIPT_ID')"
```