When Cal.com events appear in an aggregated ICS feed, the same event arrives with two different UIDs (one from Cal.com, one from Google Calendar), which caused duplicate meetings to be created for the same time slot. The fix adds a time-window dedup check in create_upcoming_meetings_for_event: after verifying that no meeting exists for the calendar_event_id, also check whether a meeting already exists for the same room + start_date + end_date.
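A minimal sketch of that two-level check, assuming hypothetical controller methods (get_by_calendar_event_id, get_by_room_and_times) rather than the actual Reflector API:

```python
# Hypothetical sketch; controller/model names are assumptions, not the real API.
async def create_upcoming_meetings_for_event(event, room):
    # First-level dedup: we already track this calendar event UID.
    existing = await meetings_controller.get_by_calendar_event_id(event.id)
    if existing:
        return existing

    # Second-level dedup: the same event re-imported under a different UID
    # (Cal.com vs. Google Calendar) still collides on room + time window.
    duplicate = await meetings_controller.get_by_room_and_times(
        room_id=room.id,
        start_date=event.start_date,
        end_date=event.end_date,
    )
    if duplicate:
        return duplicate

    return await meetings_controller.create(
        room_id=room.id,
        calendar_event_id=event.id,
        start_date=event.start_date,
        end_date=event.end_date,
    )
```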
API Key Management
Finding Your User ID
```bash
# Get your OAuth sub (user ID) - requires authentication
curl -H "Authorization: Bearer <your_jwt>" http://localhost:1250/v1/me
# Returns: {"sub": "your-oauth-sub-here", "email": "...", ...}
```
Creating API Keys
```bash
curl -X POST http://localhost:1250/v1/user/api-keys \
  -H "Authorization: Bearer <your_jwt>" \
  -H "Content-Type: application/json" \
  -d '{"name": "My API Key"}'
```
Using API Keys
```bash
# Use X-API-Key header instead of Authorization
curl -H "X-API-Key: <your_api_key>" http://localhost:1250/v1/transcripts
```
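The same header works from any HTTP client; a minimal Python sketch against the local dev server assumed in the curl examples above:

```python
import requests

BASE_URL = "http://localhost:1250"  # local dev server from the examples above

# X-API-Key replaces the Authorization: Bearer header
resp = requests.get(
    f"{BASE_URL}/v1/transcripts",
    headers={"X-API-Key": "<your_api_key>"},
)
resp.raise_for_status()
print(resp.json())
```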
AWS S3/SQS usage clarification
Whereby.com uploads recordings directly to our S3 bucket when meetings end.
SQS Queue (AWS_PROCESS_RECORDING_QUEUE_URL)
Filled by: AWS S3 Event Notifications
The S3 bucket is configured to send notifications to our SQS queue when new objects are created. This is standard AWS infrastructure, not in our codebase (a sketch of the equivalent setup follows the list below).
AWS S3 → SQS Event Configuration:
- Event Type: s3:ObjectCreated:*
- Filter: *.mp4 files
- Destination: Our SQS queue
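A boto3 sketch of the equivalent configuration; the bucket name and queue ARN are placeholders, since the real setup is managed in AWS, not in this repo:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names; the real bucket/queue live in AWS infrastructure.
s3.put_bucket_notification_configuration(
    Bucket="reflector-recordings",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:process-recording",
                "Events": ["s3:ObjectCreated:*"],
                # Only notify for .mp4 uploads
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "suffix", "Value": ".mp4"}]
                    }
                },
            }
        ]
    },
)
```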
Our System's Role
Polls SQS every 60 seconds via /server/reflector/worker/process.py:24-62:
```python
# Every 60 seconds, check for new recordings
sqs = boto3.client("sqs", ...)
response = sqs.receive_message(QueueUrl=queue_url, ...)
```
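A fuller sketch of that loop, with placeholder names (queue_url, handle_recording); the real version lives in server/reflector/worker/process.py:

```python
import json
import time

import boto3


def poll_recordings(queue_url: str) -> None:
    """Poll SQS for S3 ObjectCreated events and hand off each recording."""
    sqs = boto3.client("sqs")
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
        )
        for message in response.get("Messages", []):
            # The message body is the S3 event notification JSON
            event = json.loads(message["Body"])
            for record in event.get("Records", []):
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                handle_recording(bucket, key)  # hypothetical handler
            # Delete the message so it is not redelivered
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=message["ReceiptHandle"],
            )
        time.sleep(60)  # matches the 60-second polling interval above
```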
Requeue
Requeue an uploaded file for processing:
```bash
uv run /app/requeue_uploaded_file.py TRANSCRIPT_ID
```
Hatchet Setup (Fresh DB)
After resetting the Hatchet database:
Option A: Automatic (CLI)
```bash
# Get default tenant ID and create token in one command
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
  "SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
  --config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"
```
Copy the output to server/.env.
Option B: Manual (UI)
- Create an API token at http://localhost:8889 → Settings → API Tokens
- Update server/.env: HATCHET_CLIENT_TOKEN=<new-token>
Then restart the workers:
```bash
docker compose restart server hatchet-worker
```
Workflows register automatically when hatchet-worker starts.
Pipeline Management
Continue stuck pipeline from final summaries (identify_participants) step:
```bash
uv run python -c "from reflector.pipelines.main_live_pipeline import task_pipeline_final_summaries; result = task_pipeline_final_summaries.delay(transcript_id='TRANSCRIPT_ID'); print(f'Task queued: {result.id}')"
```
Run full post-processing pipeline (continues to completion):
```bash
uv run python -c "from reflector.pipelines.main_live_pipeline import pipeline_post; pipeline_post(transcript_id='TRANSCRIPT_ID')"
```