Mirror of https://github.com/Monadical-SAS/reflector.git, synced 2026-03-21 22:56:47 +00:00
* feat: remove network_mode host for standalone by fixing WebRTC port range and ICE candidates

  aioice hardcodes bind(addr, 0) for ICE UDP sockets, making port mapping impossible in Docker bridge networking. This adds two env-var-gated mechanisms to replace network_mode: host:

  1. WEBRTC_PORT_RANGE (e.g. "50000-50100"): monkey-patches aioice to bind UDP sockets within a known range, so they can be mapped in Docker.
  2. WEBRTC_HOST (e.g. "host.docker.internal"): rewrites container-internal IPs in SDP answers with the Docker host's real IP, so LAN clients can reach the ICE candidates.

  Both default to None, so there is no effect on existing deployments.

* fix: do not attempt a sidecar to detect the host IP; use the standalone script to figure out the external IP and use it

* style: reformat

Co-authored-by: tito <tito@titos-Mac-Studio.local>
API Key Management
Finding Your User ID
# Get your OAuth sub (user ID) - requires authentication
curl -H "Authorization: Bearer <your_jwt>" http://localhost:1250/v1/me
# Returns: {"sub": "your-oauth-sub-here", "email": "...", ...}
Creating API Keys
curl -X POST http://localhost:1250/v1/user/api-keys \
  -H "Authorization: Bearer <your_jwt>" \
  -H "Content-Type: application/json" \
  -d '{"name": "My API Key"}'
Using API Keys
# Use X-API-Key header instead of Authorization
curl -H "X-API-Key: <your_api_key>" http://localhost:1250/v1/transcripts
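The same authenticated call can be sketched in Python using only the standard library. The base URL matches the local examples above; the key value is a placeholder, not a real credential:

```python
import urllib.request

BASE_URL = "http://localhost:1250"

def api_request(path: str, api_key: str) -> urllib.request.Request:
    """Build a request authenticated with the X-API-Key header
    instead of an Authorization bearer token."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"X-API-Key": api_key},
    )

# urllib.request.urlopen(api_request("/v1/transcripts", "<your_api_key>"))
# would perform the same call as the curl example above.
req = api_request("/v1/transcripts", "<your_api_key>")
```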
AWS S3/SQS usage clarification
Whereby.com uploads recordings directly to our S3 bucket when meetings end.
SQS Queue (AWS_PROCESS_RECORDING_QUEUE_URL)
Filled by: AWS S3 Event Notifications
The S3 bucket is configured to send notifications to our SQS queue when new objects are created. This is standard AWS infrastructure - not in our codebase.
AWS S3 → SQS Event Configuration:
- Event Type: s3:ObjectCreated:*
- Filter: *.mp4 files
- Destination: Our SQS queue
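The configuration described above lives in AWS, not in this repo, but it would look roughly like the following notification config (expressed as the Python dict you would pass to boto3's put_bucket_notification_configuration; the queue ARN is a placeholder):

```python
# Sketch of the S3 -> SQS event notification; the ARN below is illustrative.
notification_config = {
    "QueueConfigurations": [
        {
            # Placeholder ARN for the queue named by AWS_PROCESS_RECORDING_QUEUE_URL
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:process-recording-queue",
            # Fire on any object creation event
            "Events": ["s3:ObjectCreated:*"],
            # Only notify for *.mp4 uploads
            "Filter": {
                "Key": {
                    "FilterRules": [{"Name": "suffix", "Value": ".mp4"}]
                }
            },
        }
    ]
}
```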
Our System's Role
Our system polls SQS every 60 seconds (see server/reflector/worker/process.py:24-62):

# Every 60 seconds, check for new recordings
sqs = boto3.client("sqs", ...)
response = sqs.receive_message(QueueUrl=queue_url, ...)
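Each received message body is an S3 event notification in JSON. A hypothetical sketch of extracting the uploaded object from one message (the real handler lives in server/reflector/worker/process.py):

```python
import json

def extract_s3_objects(message_body: str) -> list[tuple[str, str]]:
    """Return (bucket, key) pairs from an S3 event notification body."""
    event = json.loads(message_body)
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

# Example body, shaped like an s3:ObjectCreated:* notification:
body = json.dumps({
    "Records": [
        {"s3": {"bucket": {"name": "recordings"},
                "object": {"key": "room/abc.mp4"}}}
    ]
})
# extract_s3_objects(body) -> [("recordings", "room/abc.mp4")]
```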
Requeue
uv run /app/requeue_uploaded_file.py TRANSCRIPT_ID
Hatchet Setup (Fresh DB)
After resetting the Hatchet database:
Option A: Automatic (CLI)
# Get default tenant ID and create token in one command
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
"SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
--config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"
Copy the output to server/.env.
Option B: Manual (UI)
- Create an API token at http://localhost:8889 → Settings → API Tokens
- Update server/.env: HATCHET_CLIENT_TOKEN=<new-token>

Then restart the workers:

docker compose restart server hatchet-worker
Workflows register automatically when hatchet-worker starts.
Pipeline Management
Continue a stuck pipeline from the final-summaries (identify_participants) step:
uv run python -c "from reflector.pipelines.main_live_pipeline import task_pipeline_final_summaries; result = task_pipeline_final_summaries.delay(transcript_id='TRANSCRIPT_ID'); print(f'Task queued: {result.id}')"
Run the full post-processing pipeline (continues to completion):
uv run python -c "from reflector.pipelines.main_live_pipeline import pipeline_post; pipeline_post(transcript_id='TRANSCRIPT_ID')"
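The two one-liners above can be wrapped in a small helper that builds the argv for a given step and transcript ID. This is a hypothetical convenience, not part of the repo; the module paths are the ones used in the commands above:

```python
# Hypothetical dispatcher: build the `uv run python -c ...` command for a step.
COMMANDS = {
    "final_summaries": (
        "from reflector.pipelines.main_live_pipeline import task_pipeline_final_summaries; "
        "result = task_pipeline_final_summaries.delay(transcript_id={tid!r}); "
        "print(f'Task queued: {{result.id}}')"
    ),
    "post": (
        "from reflector.pipelines.main_live_pipeline import pipeline_post; "
        "pipeline_post(transcript_id={tid!r})"
    ),
}

def build_command(step: str, transcript_id: str) -> list[str]:
    """Return the argv for `uv run python -c ...` for the given step."""
    return ["uv", "run", "python", "-c", COMMANDS[step].format(tid=transcript_id)]
```

Pass the resulting list to subprocess.run() from the server directory.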