Fix: Duplicate Meetings from Aggregated ICS Calendar Feeds
When Cal.com events appear in an aggregated ICS feed, the same event
gets two different UIDs (one from Cal.com, one from Google Calendar).
This caused duplicate meetings to be created for the same time slot.

The fix adds a time-window dedup check in create_upcoming_meetings_for_event:
after verifying that no meeting exists for the calendar_event_id, it also
checks whether a meeting already exists for the same room + start_date + end_date.
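
A minimal, runnable sketch of that check against an in-memory store (the real logic lives in create_upcoming_meetings_for_event; all names here are illustrative):

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Meeting:
    room_id: str
    calendar_event_id: str
    start_date: datetime
    end_date: datetime

meetings: list[Meeting] = []  # stand-in for the meetings table

def create_meeting_for_event(room_id: str, event_uid: str,
                             start: datetime, end: datetime) -> Meeting | None:
    # Guard 1: this exact calendar event UID was already synced.
    if any(m.calendar_event_id == event_uid for m in meetings):
        return None
    # Guard 2: the same event arrived under a different UID (e.g. Cal.com
    # vs Google Calendar in an aggregated ICS feed) but occupies the same
    # room and time window, so treat it as a duplicate.
    if any(m.room_id == room_id and m.start_date == start and m.end_date == end
           for m in meetings):
        return None
    meeting = Meeting(room_id, event_uid, start, end)
    meetings.append(meeting)
    return meeting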

API Key Management

Finding Your User ID

# Get your OAuth sub (user ID) - requires authentication
curl -H "Authorization: Bearer <your_jwt>" http://localhost:1250/v1/me
# Returns: {"sub": "your-oauth-sub-here", "email": "...", ...}

Creating API Keys

curl -X POST http://localhost:1250/v1/user/api-keys \
  -H "Authorization: Bearer <your_jwt>" \
  -H "Content-Type: application/json" \
  -d '{"name": "My API Key"}'

Using API Keys

# Use X-API-Key header instead of Authorization
curl -H "X-API-Key: <your_api_key>" http://localhost:1250/v1/transcripts

AWS S3/SQS Usage Clarification

Whereby.com uploads recordings directly to our S3 bucket when meetings end.

SQS Queue (AWS_PROCESS_RECORDING_QUEUE_URL)

Filled by: AWS S3 Event Notifications

The S3 bucket is configured to send notifications to our SQS queue when new objects are created. This is standard AWS infrastructure, configured in AWS rather than in our codebase.

AWS S3 → SQS Event Configuration:

  • Event Type: s3:ObjectCreated:*
  • Filter: *.mp4 files
  • Destination: Our SQS queue
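
For reference, the equivalent notification configuration could be applied with boto3 roughly as below; the bucket name and queue ARN are placeholders, and in practice this is set up in AWS, not by our code:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="our-recordings-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                # placeholder ARN for AWS_PROCESS_RECORDING_QUEUE_URL's queue
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:process-recording",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "suffix", "Value": ".mp4"}]
                    }
                },
            }
        ]
    },
)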

Our System's Role

Our worker polls SQS every 60 seconds via /server/reflector/worker/process.py:24-62:

# Every 60 seconds, check for new recordings
sqs = boto3.client("sqs", ...)
response = sqs.receive_message(QueueUrl=queue_url, ...)
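
Expanded into a self-contained sketch (the queue URL and message handling are illustrative; the real loop is in process.py):

import json
import time
import boto3

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/process-recording"  # placeholder
sqs = boto3.client("sqs")

while True:
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        # Each S3 event record names the bucket and the new .mp4 key.
        for record in body.get("Records", []):
            key = record["s3"]["object"]["key"]
            print(f"new recording: {key}")  # hand off to processing here
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
    time.sleep(60)  # poll every 60 seconds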

Requeue

Re-queue an already-uploaded file for processing:

uv run /app/requeue_uploaded_file.py TRANSCRIPT_ID

Hatchet Setup (Fresh DB)

After resetting the Hatchet database:

Option A: Automatic (CLI)

# Get default tenant ID and create token in one command
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
  "SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
  --config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"

Copy the output to server/.env.

Option B: Manual (UI)

  1. Create API token at http://localhost:8889 → Settings → API Tokens
  2. Update server/.env: HATCHET_CLIENT_TOKEN=<new-token>

Then restart the workers:

docker compose restart server hatchet-worker

Workflows register automatically when hatchet-worker starts.

Pipeline Management

Continue a stuck pipeline from the final summaries (identify_participants) step:

uv run python -c "
from reflector.pipelines.main_live_pipeline import task_pipeline_final_summaries
result = task_pipeline_final_summaries.delay(transcript_id='TRANSCRIPT_ID')
print(f'Task queued: {result.id}')
"

Run the full post-processing pipeline (continues to completion):

uv run python -c "
from reflector.pipelines.main_live_pipeline import pipeline_post
pipeline_post(transcript_id='TRANSCRIPT_ID')
"
