Compare commits

..

16 Commits

Author SHA1 Message Date
Igor Loskutov
6a57388723 format 2026-01-26 17:38:38 -05:00
ddef1d4a4a Merge branch 'main' into feature/split-padding-transcription 2026-01-26 13:22:16 -05:00
88e0d11ccd Update server/reflector/hatchet/workflows/padding_workflow.py
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2026-01-23 19:59:32 -05:00
Igor Loskutov
9f6e7b515b Revert transcript text broadcast to empty string
Empty string was intentional - reverting my incorrect fix
2026-01-23 17:00:24 -05:00
Igor Loskutov
d0110f4dd4 Fix: Remove redundant checks and clarify variable scope
- Remove redundant padded_key None check (NonEmptyString cannot be None)
- Move storage_path definition before try block for clarity
- All padded tracks added to list (original or new)
2026-01-23 16:55:34 -05:00
Igor Loskutov
7dfb37154d Fix critical data flow and concurrency bugs
- Add empty padded_tracks guard in process_transcriptions
- Fix created_padded_files: use list instead of set to preserve order for zip cleanup
- Document size=0 contract in PadTrackResult (size=0 means original key, not padded)
- Remove redundant ctx.log in padding_workflow
2026-01-23 16:47:11 -05:00
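
The `size=0` contract and the ordered cleanup list read roughly like the sketch below. The model fields mirror `PadTrackResult` from the diff further down; the helper function and sample values are illustrative only.

```python
from pydantic import BaseModel


class PadTrackResult(BaseModel):
    padded_key: str          # original S3 key when size == 0, padded key otherwise
    bucket_name: str | None  # None = default transcript storage bucket
    size: int                # 0 means the track needed no padding
    track_index: int


def collect_created_files(results: list[PadTrackResult], transcript_id: str) -> list[str]:
    # A list (not a set) so cleanup order matches the order tracks were padded.
    created: list[str] = []
    for r in results:
        if r.size > 0:  # only newly created padded files need cleanup later
            created.append(
                f"file_pipeline_hatchet/{transcript_id}/tracks/padded_{r.track_index}.webm"
            )
    return created


results = [
    PadTrackResult(padded_key="tracks/raw_0.webm", bucket_name=None, size=0, track_index=0),
    PadTrackResult(padded_key="tracks/padded_1.webm", bucket_name=None, size=2048, track_index=1),
]
print(collect_created_files(results, "tr_123"))  # only track 1 produced a new file
```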
Igor Loskutov
67679e90b2 Revert waveform dependency - allow background completion
Waveform generation can complete after transcript marked "ended".
User can see transcript immediately while waveform finishes in background.
2026-01-23 16:42:36 -05:00
Igor Loskutov
aa4c368479 Fix critical bugs from refactoring
- Fix empty transcript broadcast (was sending text="", should send merged_transcript.text)
- Restore generate_waveform to finalize parents (finalize must wait for waveform)
2026-01-23 16:40:57 -05:00
Igor Loskutov
deb5ed6010 Fix: Preserve track_index explicitly in PaddedTrackInfo
- Add track_index to PaddedTrackInfo model
- Preserve track_index from PadTrackResult when building padded_tracks list
- Use explicit track_index instead of enumerate in process_transcriptions
- Removes fragile ordering assumption
2026-01-23 16:36:16 -05:00
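
The explicit `track_index` flow works roughly as sketched below. The `PaddedTrackInfo` fields mirror the model in the diff further down; the consumer function and sample data are illustrative only.

```python
from pydantic import BaseModel


class PaddedTrackInfo(BaseModel):
    key: str
    bucket_name: str | None
    track_index: int  # carried explicitly instead of relying on list position


def transcription_inputs(padded_tracks: list[PaddedTrackInfo]) -> list[dict]:
    # The consumer reads track_index from each item rather than using
    # enumerate(padded_tracks), so the ordering of child-workflow results
    # can no longer misattribute a track.
    return [
        {"track_index": t.track_index, "padded_key": t.key, "bucket_name": t.bucket_name}
        for t in padded_tracks
    ]


tracks = [
    PaddedTrackInfo(key="tracks/padded_2.webm", bucket_name=None, track_index=2),
    PaddedTrackInfo(key="tracks/padded_0.webm", bucket_name=None, track_index=0),
]
print(transcription_inputs(tracks))  # indexes 2 and 0 survive the reordering
```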
Igor Loskutov
30b28eed3b Merge main into feature/split-padding-transcription 2026-01-23 16:20:39 -05:00
Igor Loskutov
1b33fba3ba Fix: Move padding_workflow to LLM worker for parallel execution
Critical bug fix: padding_workflow was registered on CPU worker (slots=1),
causing all padding tasks to run serially instead of in parallel.

Changes:
- Moved padding_workflow from run_workers_cpu.py to run_workers_llm.py
- LLM worker has slots=10, allowing up to 10 parallel padding operations
- Padding is I/O-bound (S3 download/upload), not CPU-intensive
- CPU worker now handles only mixdown_tracks (compute-heavy, serialized)

Impact:
- Before: 4 tracks × 5s padding = 20s serial execution
- After: 4 tracks × 5s padding = ~5s parallel execution (4 concurrent)
- Restores intended performance benefit of the refactoring
2026-01-23 16:05:43 -05:00
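
The timing claim above can be reproduced with a small, self-contained toy (scaled down 10x; this is not the Hatchet worker code, just a model of what a slot count does):

```python
import asyncio
import time


async def pad_track(i: int, slots: asyncio.Semaphore) -> None:
    async with slots:             # one slot = one concurrent run on the worker
        await asyncio.sleep(0.5)  # stands in for a ~5s I/O-bound padding call


async def run(slot_count: int) -> float:
    slots = asyncio.Semaphore(slot_count)
    start = time.perf_counter()
    await asyncio.gather(*(pad_track(i, slots) for i in range(4)))
    return time.perf_counter() - start


print(asyncio.run(run(1)))   # ~2.0s: serial, like the CPU worker with slots=1
print(asyncio.run(run(10)))  # ~0.5s: parallel, like the LLM worker with slots=10
```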
Igor Loskutov
3ce279daa4 Split padding and transcription into separate workflow steps
- Split process_tracks into process_paddings + process_transcriptions
- Create PaddingWorkflow and TranscriptionWorkflow as separate child workflows
- Update dependency: mixdown_tracks now depends on process_paddings (not process_transcriptions)
- Performance: mixdown starts ~295s earlier (after padding completes, not after transcription)

Changes:
- New: padding_workflow.py, transcription_workflow.py
- Modified: daily_multitrack_pipeline.py (new tasks, updated dependencies)
- Modified: models.py (new ProcessPaddingsResult, ProcessTranscriptionsResult, deleted dead ProcessTracksResult)
- Modified: constants.py (new task names)
- Modified: run_workers_cpu.py, run_workers_llm.py (workflow registration)
- Deleted: track_processing.py

Code quality fixes:
- Removed redundant comments and verbose docstrings
- Added language validation in process_transcriptions
- Improved error logging with full context (transcript_id, track_index)
- Fixed log accuracy bugs (use correct counts)
- Updated worker pool documentation
2026-01-21 16:53:06 -05:00
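
As a toy model of the reworked dependency graph (not the actual Hatchet DAG), mixdown now waits only on padding while transcription runs alongside it; the durations below are made up:

```python
import asyncio


async def pad(track: str) -> str:          # stand-in for the padding child workflow
    await asyncio.sleep(0.05)
    return f"padded-{track}"


async def transcribe(padded: str) -> str:  # stand-in for the transcription child workflow
    await asyncio.sleep(0.3)
    return f"words-{padded}"


async def mixdown(padded: list[str]) -> str:
    await asyncio.sleep(0.1)
    return f"mix of {len(padded)} tracks"


async def pipeline(tracks: list[str]):
    padded = list(await asyncio.gather(*(pad(t) for t in tracks)))
    # mixdown depends only on padding, so it overlaps with transcription
    mix, words = await asyncio.gather(
        mixdown(padded),
        asyncio.gather(*(transcribe(p) for p in padded)),
    )
    return mix, list(words)


print(asyncio.run(pipeline(["a", "b", "c", "d"])))
```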
Igor Loskutov
01650be787 fix tests 2026-01-21 15:04:05 -05:00
f00c16a41c Merge branch 'main' into fix/ics-window-bug 2026-01-21 14:38:36 -05:00
859df5513e Merge branch 'main' into fix/ics-window-bug 2026-01-21 08:47:34 -05:00
Igor Loskutov
2af9918979 ics non-sync bugfix 2026-01-20 16:56:06 -05:00
41 changed files with 474 additions and 3092 deletions

1
.gitignore vendored
View File

@@ -1,6 +1,5 @@
 .DS_Store
 server/.env
-server/.env.production
 .env
 Caddyfile
 server/exportdanswer

View File

@@ -1,29 +1,5 @@
 # Changelog
-
-## [0.32.2](https://github.com/Monadical-SAS/reflector/compare/v0.32.1...v0.32.2) (2026-02-03)
-
-### Bug Fixes
-
-* increase TIMEOUT_MEDIUM from 2m to 5m for LLM tasks ([#843](https://github.com/Monadical-SAS/reflector/issues/843)) ([4acde4b](https://github.com/Monadical-SAS/reflector/commit/4acde4b7fdef88cc02ca12cf38c9020b05ed96ac))
-* make caddy optional ([#841](https://github.com/Monadical-SAS/reflector/issues/841)) ([a2ed7d6](https://github.com/Monadical-SAS/reflector/commit/a2ed7d60d557b551a5b64e4dfd909b63a791d9fc))
-* use Daily API recording.duration as master source for transcript duration ([#844](https://github.com/Monadical-SAS/reflector/issues/844)) ([8707c66](https://github.com/Monadical-SAS/reflector/commit/8707c6694a80c939b6214bbc13331741f192e082))
-
-## [0.32.1](https://github.com/Monadical-SAS/reflector/compare/v0.32.0...v0.32.1) (2026-01-30)
-
-### Bug Fixes
-
-* daily multitrack pipeline finalze dependency fix ([23eb137](https://github.com/Monadical-SAS/reflector/commit/23eb1371cb9348c4b81eb12ad506b582f8a4799e))
-* match httpx pad with hatchet audio timeout ([c05d1f0](https://github.com/Monadical-SAS/reflector/commit/c05d1f03cd8369fc06efd455527e50246887efd0))
-
-## [0.32.0](https://github.com/Monadical-SAS/reflector/compare/v0.31.0...v0.32.0) (2026-01-30)
-
-### Features
-
-* modal padding ([#837](https://github.com/Monadical-SAS/reflector/issues/837)) ([7fde64e](https://github.com/Monadical-SAS/reflector/commit/7fde64e2529a1d37b0f7507c62d983a7bd0b5b89))
 ## [0.31.0](https://github.com/Monadical-SAS/reflector/compare/v0.30.0...v0.31.0) (2026-01-23)

View File

@@ -1,8 +1,6 @@
-# Reflector Caddyfile (optional reverse proxy)
-# Use this only when you run Caddy via: docker compose -f docker-compose.prod.yml --profile caddy up -d
-# If Coolify, Traefik, or nginx already use ports 80/443, do NOT start Caddy; point your proxy at web:3000 and server:1250.
-#
-# Replace example.com with your actual domains. CORS is handled by the backend - Caddy just proxies.
+# Reflector Caddyfile
+# Replace example.com with your actual domains
+# CORS is handled by the backend - Caddy just proxies
 #
 # For environment variable substitution, set:
 # FRONTEND_DOMAIN=app.example.com

View File

@@ -1,14 +1,9 @@
 # Production Docker Compose configuration
 # Usage: docker compose -f docker-compose.prod.yml up -d
 #
-# Caddy (reverse proxy on ports 80/443) is OPTIONAL and behind the "caddy" profile:
-# - With Caddy (self-hosted, you manage SSL): docker compose -f docker-compose.prod.yml --profile caddy up -d
-# - Without Caddy (Coolify/Traefik/nginx already on 80/443): docker compose -f docker-compose.prod.yml up -d
-# Then point your proxy at web:3000 (frontend) and server:1250 (API).
-#
 # Prerequisites:
 # 1. Copy .env.example to .env and configure for both server/ and www/
-# 2. If using Caddy: copy Caddyfile.example to Caddyfile and edit your domains
+# 2. Copy Caddyfile.example to Caddyfile and edit with your domains
 # 3. Deploy Modal GPU functions (see gpu/modal_deployments/deploy-all.sh)

 services:

@@ -89,8 +84,6 @@ services:
       retries: 3

   caddy:
-    profiles:
-      - caddy
     image: caddy:2-alpine
     restart: unless-stopped
     ports:

View File

@@ -11,15 +11,15 @@ This page documents the Docker Compose configuration for Reflector. For the comp
The `docker-compose.prod.yml` includes these services: The `docker-compose.prod.yml` includes these services:
| Service | Image | Purpose | | Service | Image | Purpose |
| ---------- | --------------------------------- | --------------------------------------------------------------------------- | |---------|-------|---------|
| `web` | `monadicalsas/reflector-frontend` | Next.js frontend | | `web` | `monadicalsas/reflector-frontend` | Next.js frontend |
| `server` | `monadicalsas/reflector-backend` | FastAPI backend | | `server` | `monadicalsas/reflector-backend` | FastAPI backend |
| `worker` | `monadicalsas/reflector-backend` | Celery worker for background tasks | | `worker` | `monadicalsas/reflector-backend` | Celery worker for background tasks |
| `beat` | `monadicalsas/reflector-backend` | Celery beat scheduler | | `beat` | `monadicalsas/reflector-backend` | Celery beat scheduler |
| `redis` | `redis:7.2-alpine` | Message broker and cache | | `redis` | `redis:7.2-alpine` | Message broker and cache |
| `postgres` | `postgres:17-alpine` | Primary database | | `postgres` | `postgres:17-alpine` | Primary database |
| `caddy` | `caddy:2-alpine` | Reverse proxy with auto-SSL (optional; see [Caddy profile](#caddy-profile)) | | `caddy` | `caddy:2-alpine` | Reverse proxy with auto-SSL |
## Environment Files ## Environment Files
@@ -30,7 +30,6 @@ Reflector uses two separate environment files:
Used by: `server`, `worker`, `beat` Used by: `server`, `worker`, `beat`
Key variables: Key variables:
```env ```env
# Database connection # Database connection
DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
@@ -55,7 +54,6 @@ TRANSCRIPT_MODAL_API_KEY=...
Used by: `web` Used by: `web`
Key variables: Key variables:
```env ```env
# Domain configuration # Domain configuration
SITE_URL=https://app.example.com SITE_URL=https://app.example.com
@@ -72,42 +70,26 @@ Note: `API_URL` is used client-side (browser), `SERVER_API_URL` is used server-s
## Volumes ## Volumes
| Volume | Purpose | | Volume | Purpose |
| --------------- | ----------------------------- | |--------|---------|
| `redis_data` | Redis persistence | | `redis_data` | Redis persistence |
| `postgres_data` | PostgreSQL data | | `postgres_data` | PostgreSQL data |
| `server_data` | Uploaded files, local storage | | `server_data` | Uploaded files, local storage |
| `caddy_data` | SSL certificates | | `caddy_data` | SSL certificates |
| `caddy_config` | Caddy configuration | | `caddy_config` | Caddy configuration |
## Network ## Network
All services share the default network. The network is marked `attachable: true` to allow external containers (like Authentik) to join. All services share the default network. The network is marked `attachable: true` to allow external containers (like Authentik) to join.
## Caddy profile
Caddy (ports 80 and 443) is **optional** and behind the `caddy` profile so it does not conflict with an existing reverse proxy (e.g. Coolify, Traefik, nginx).
- **With Caddy** (you want Reflector to handle SSL):
`docker compose -f docker-compose.prod.yml --profile caddy up -d`
- **Without Caddy** (Coolify or another proxy already on 80/443):
`docker compose -f docker-compose.prod.yml up -d`
Then configure your proxy to send traffic to `web:3000` (frontend) and `server:1250` (API).
## Common Commands ## Common Commands
### Start all services ### Start all services
```bash ```bash
# Without Caddy (e.g. when using Coolify)
docker compose -f docker-compose.prod.yml up -d docker compose -f docker-compose.prod.yml up -d
# With Caddy as reverse proxy
docker compose -f docker-compose.prod.yml --profile caddy up -d
``` ```
### View logs ### View logs
```bash ```bash
# All services # All services
docker compose -f docker-compose.prod.yml logs -f docker compose -f docker-compose.prod.yml logs -f
@@ -117,7 +99,6 @@ docker compose -f docker-compose.prod.yml logs server --tail 50
``` ```
### Restart a service ### Restart a service
```bash ```bash
# Quick restart (doesn't reload .env changes) # Quick restart (doesn't reload .env changes)
docker compose -f docker-compose.prod.yml restart server docker compose -f docker-compose.prod.yml restart server
@@ -127,32 +108,27 @@ docker compose -f docker-compose.prod.yml up -d server
``` ```
### Run database migrations ### Run database migrations
```bash ```bash
docker compose -f docker-compose.prod.yml exec server uv run alembic upgrade head docker compose -f docker-compose.prod.yml exec server uv run alembic upgrade head
``` ```
### Access database ### Access database
```bash ```bash
docker compose -f docker-compose.prod.yml exec postgres psql -U reflector docker compose -f docker-compose.prod.yml exec postgres psql -U reflector
``` ```
### Pull latest images ### Pull latest images
```bash ```bash
docker compose -f docker-compose.prod.yml pull docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d docker compose -f docker-compose.prod.yml up -d
``` ```
### Stop all services ### Stop all services
```bash ```bash
docker compose -f docker-compose.prod.yml down docker compose -f docker-compose.prod.yml down
``` ```
### Full reset (WARNING: deletes data) ### Full reset (WARNING: deletes data)
```bash ```bash
docker compose -f docker-compose.prod.yml down -v docker compose -f docker-compose.prod.yml down -v
``` ```
@@ -211,7 +187,6 @@ The Caddyfile supports environment variable substitution:
Set `FRONTEND_DOMAIN` and `API_DOMAIN` environment variables, or edit the file directly. Set `FRONTEND_DOMAIN` and `API_DOMAIN` environment variables, or edit the file directly.
### Reload Caddy after changes ### Reload Caddy after changes
```bash ```bash
docker compose -f docker-compose.prod.yml exec caddy caddy reload --config /etc/caddy/Caddyfile docker compose -f docker-compose.prod.yml exec caddy caddy reload --config /etc/caddy/Caddyfile
``` ```

View File

@@ -26,7 +26,7 @@ flowchart LR
Before starting, you need: Before starting, you need:
- **Production server** - 4+ cores, 8GB+ RAM, public IP - **Production server** - 4+ cores, 8GB+ RAM, public IP
- **Two domain names** - e.g., `app.example.com` (frontend) and `api.example.com` (backend) - **Two domain names** - e.g., `app.example.com` (frontend) and `api.example.com` (backend)
- **GPU processing** - Choose one: - **GPU processing** - Choose one:
- Modal.com account, OR - Modal.com account, OR
@@ -60,17 +60,16 @@ Type: A Name: api Value: <your-server-ip>
Reflector requires GPU processing for transcription and speaker diarization. Choose one option: Reflector requires GPU processing for transcription and speaker diarization. Choose one option:
| | **Modal.com (Cloud)** | **Self-Hosted GPU** | | | **Modal.com (Cloud)** | **Self-Hosted GPU** |
| ------------ | --------------------------------- | ---------------------------- | |---|---|---|
| **Best for** | No GPU hardware, zero maintenance | Own GPU server, full control | | **Best for** | No GPU hardware, zero maintenance | Own GPU server, full control |
| **Pricing** | Pay-per-use | Fixed infrastructure cost | | **Pricing** | Pay-per-use | Fixed infrastructure cost |
### Option A: Modal.com (Serverless Cloud GPU) ### Option A: Modal.com (Serverless Cloud GPU)
#### Accept HuggingFace Licenses #### Accept HuggingFace Licenses
Visit both pages and click "Accept": Visit both pages and click "Accept":
- https://huggingface.co/pyannote/speaker-diarization-3.1 - https://huggingface.co/pyannote/speaker-diarization-3.1
- https://huggingface.co/pyannote/segmentation-3.0 - https://huggingface.co/pyannote/segmentation-3.0
@@ -180,7 +179,6 @@ Save these credentials - you'll need them in the next step.
## Configure Environment ## Configure Environment
Reflector has two env files: Reflector has two env files:
- `server/.env` - Backend configuration - `server/.env` - Backend configuration
- `www/.env` - Frontend configuration - `www/.env` - Frontend configuration
@@ -192,7 +190,6 @@ nano server/.env
``` ```
**Required settings:** **Required settings:**
```env ```env
# Database (defaults work with docker-compose.prod.yml) # Database (defaults work with docker-compose.prod.yml)
DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
@@ -252,7 +249,6 @@ nano www/.env
``` ```
**Required settings:** **Required settings:**
```env ```env
# Your domains # Your domains
SITE_URL=https://app.example.com SITE_URL=https://app.example.com
@@ -270,11 +266,7 @@ FEATURE_REQUIRE_LOGIN=false
--- ---
## Reverse proxy (Caddy or existing) ## Configure Caddy
**If Coolify, Traefik, or nginx already use ports 80/443** (e.g. Coolify on your host): skip Caddy. Start the stack without the Caddy profile (see [Start Services](#start-services) below), then point your proxy at `web:3000` (frontend) and `server:1250` (API).
**If you want Reflector to provide the reverse proxy and SSL:**
```bash ```bash
cp Caddyfile.example Caddyfile cp Caddyfile.example Caddyfile
@@ -297,18 +289,10 @@ Replace `example.com` with your domains. The `{$VAR:default}` syntax uses Caddy'
## Start Services ## Start Services
**Without Caddy** (e.g. Coolify already on 80/443):
```bash ```bash
docker compose -f docker-compose.prod.yml up -d docker compose -f docker-compose.prod.yml up -d
``` ```
**With Caddy** (Reflector handles SSL):
```bash
docker compose -f docker-compose.prod.yml --profile caddy up -d
```
Wait for containers to start (first run may take 1-2 minutes to pull images and initialize). Wait for containers to start (first run may take 1-2 minutes to pull images and initialize).
--- ---
@@ -316,21 +300,18 @@ Wait for containers to start (first run may take 1-2 minutes to pull images and
## Verify Deployment ## Verify Deployment
### Check services ### Check services
```bash ```bash
docker compose -f docker-compose.prod.yml ps docker compose -f docker-compose.prod.yml ps
# All should show "Up" # All should show "Up"
``` ```
### Test API ### Test API
```bash ```bash
curl https://api.example.com/health curl https://api.example.com/health
# Should return: {"status":"healthy"} # Should return: {"status":"healthy"}
``` ```
### Test Frontend ### Test Frontend
- Visit https://app.example.com - Visit https://app.example.com
- You should see the Reflector interface - You should see the Reflector interface
- Try uploading an audio file to test transcription - Try uploading an audio file to test transcription
@@ -346,7 +327,6 @@ By default, Reflector is open (no login required). **Authentication is required
See [Authentication Setup](./auth-setup) for full Authentik OAuth configuration. See [Authentication Setup](./auth-setup) for full Authentik OAuth configuration.
Quick summary: Quick summary:
1. Deploy Authentik on your server 1. Deploy Authentik on your server
2. Create OAuth provider in Authentik 2. Create OAuth provider in Authentik
3. Extract public key for JWT verification 3. Extract public key for JWT verification
@@ -378,7 +358,6 @@ DAILYCO_STORAGE_AWS_ROLE_ARN=<arn:aws:iam::ACCOUNT:role/DailyCo>
``` ```
Reload env and restart: Reload env and restart:
```bash ```bash
docker compose -f docker-compose.prod.yml up -d server worker docker compose -f docker-compose.prod.yml up -d server worker
``` ```
@@ -388,43 +367,35 @@ docker compose -f docker-compose.prod.yml up -d server worker
## Troubleshooting ## Troubleshooting
### Check logs for errors ### Check logs for errors
```bash ```bash
docker compose -f docker-compose.prod.yml logs server --tail 20 docker compose -f docker-compose.prod.yml logs server --tail 20
docker compose -f docker-compose.prod.yml logs worker --tail 20 docker compose -f docker-compose.prod.yml logs worker --tail 20
``` ```
### Services won't start ### Services won't start
```bash ```bash
docker compose -f docker-compose.prod.yml logs docker compose -f docker-compose.prod.yml logs
``` ```
### CORS errors in browser ### CORS errors in browser
- Verify `CORS_ORIGIN` in `server/.env` matches your frontend domain exactly (including `https://`) - Verify `CORS_ORIGIN` in `server/.env` matches your frontend domain exactly (including `https://`)
- Reload env: `docker compose -f docker-compose.prod.yml up -d server` - Reload env: `docker compose -f docker-compose.prod.yml up -d server`
### SSL certificate errors (when using Caddy) ### SSL certificate errors
- Caddy auto-provisions Let's Encrypt certificates - Caddy auto-provisions Let's Encrypt certificates
- Ensure ports 80 and 443 are open and not used by another proxy - Ensure ports 80 and 443 are open
- Check: `docker compose -f docker-compose.prod.yml logs caddy` - Check: `docker compose -f docker-compose.prod.yml logs caddy`
- If port 80 is already in use (e.g. by Coolify), run without Caddy: `docker compose -f docker-compose.prod.yml up -d` and use your existing proxy
### Transcription not working ### Transcription not working
- Check Modal dashboard: https://modal.com/apps - Check Modal dashboard: https://modal.com/apps
- Verify URLs in `server/.env` match deployed functions - Verify URLs in `server/.env` match deployed functions
- Check worker logs: `docker compose -f docker-compose.prod.yml logs worker` - Check worker logs: `docker compose -f docker-compose.prod.yml logs worker`
### "Login required" but auth not configured ### "Login required" but auth not configured
- Set `FEATURE_REQUIRE_LOGIN=false` in `www/.env` - Set `FEATURE_REQUIRE_LOGIN=false` in `www/.env`
- Rebuild frontend: `docker compose -f docker-compose.prod.yml up -d --force-recreate web` - Rebuild frontend: `docker compose -f docker-compose.prod.yml up -d --force-recreate web`
### Database migrations or connectivity issues ### Database migrations or connectivity issues
Migrations run automatically on server startup. To check database connectivity or debug migration failures: Migrations run automatically on server startup. To check database connectivity or debug migration failures:
```bash ```bash
@@ -437,3 +408,4 @@ docker compose -f docker-compose.prod.yml exec server uv run python -c "from ref
# Manually run migrations (if needed) # Manually run migrations (if needed)
docker compose -f docker-compose.prod.yml exec server uv run alembic upgrade head docker compose -f docker-compose.prod.yml exec server uv run alembic upgrade head
``` ```

View File

@@ -131,15 +131,6 @@ if [ -z "$DIARIZER_URL" ]; then
 fi
 echo " -> $DIARIZER_URL"

-echo ""
-echo "Deploying padding (CPU audio processing via Modal SDK)..."
-modal deploy reflector_padding.py
-if [ $? -ne 0 ]; then
-    echo "Error: Failed to deploy padding. Check Modal dashboard for details."
-    exit 1
-fi
-echo " -> reflector-padding.pad_track (Modal SDK function)"

 # --- Output Configuration ---
 echo ""
 echo "=========================================="

@@ -156,6 +147,4 @@ echo ""
 echo "DIARIZATION_BACKEND=modal"
 echo "DIARIZATION_URL=$DIARIZER_URL"
 echo "DIARIZATION_MODAL_API_KEY=$API_KEY"
-echo ""
-echo "# Padding uses Modal SDK (requires MODAL_TOKEN_ID/SECRET in worker containers)"
 echo "# --- End Modal Configuration ---"

View File

@@ -1,277 +0,0 @@
"""
Reflector GPU backend - audio padding
======================================
CPU-intensive audio padding service for adding silence to audio tracks.
Uses PyAV filter graph (adelay) for precise track synchronization.
IMPORTANT: This padding logic is duplicated from server/reflector/utils/audio_padding.py
for Modal deployment isolation (Modal can't import from server/reflector/). If you modify
the PyAV filter graph or padding algorithm, you MUST update both:
- gpu/modal_deployments/reflector_padding.py (this file)
- server/reflector/utils/audio_padding.py
Constants duplicated from server/reflector/utils/audio_constants.py for same reason.
"""
import os
import tempfile
from fractions import Fraction
import math
import asyncio
import modal
S3_TIMEOUT = 60 # happens 2 times
PADDING_TIMEOUT = 600 + (S3_TIMEOUT * 2)
SCALEDOWN_WINDOW = 60 # The maximum duration (in seconds) that individual containers can remain idle when scaling down.
DISCONNECT_CHECK_INTERVAL = 2 # Check for client disconnect
app = modal.App("reflector-padding")
# CPU-based image
image = (
modal.Image.debian_slim(python_version="3.12")
.apt_install("ffmpeg") # Required by PyAV
.pip_install(
"av==13.1.0", # PyAV for audio processing
"requests==2.32.3", # HTTP for presigned URL downloads/uploads
"fastapi==0.115.12", # API framework
)
)
# ref B0F71CE8-FC59-4AA5-8414-DAFB836DB711
OPUS_STANDARD_SAMPLE_RATE = 48000
# ref B0F71CE8-FC59-4AA5-8414-DAFB836DB711
OPUS_DEFAULT_BIT_RATE = 128000
@app.function(
cpu=2.0,
timeout=PADDING_TIMEOUT,
scaledown_window=SCALEDOWN_WINDOW,
image=image,
)
@modal.asgi_app()
def web():
from fastapi import FastAPI, Request, HTTPException
from pydantic import BaseModel
class PaddingRequest(BaseModel):
track_url: str
output_url: str
start_time_seconds: float
track_index: int
class PaddingResponse(BaseModel):
size: int
cancelled: bool = False
web_app = FastAPI()
@web_app.post("/pad")
async def pad_track_endpoint(request: Request, req: PaddingRequest) -> PaddingResponse:
"""Modal web endpoint for padding audio tracks with disconnect detection.
"""
import logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)
if not req.track_url:
raise HTTPException(status_code=400, detail="track_url cannot be empty")
if not req.output_url:
raise HTTPException(status_code=400, detail="output_url cannot be empty")
if req.start_time_seconds <= 0:
raise HTTPException(status_code=400, detail=f"start_time_seconds must be positive, got {req.start_time_seconds}")
if req.start_time_seconds > 18000:
raise HTTPException(status_code=400, detail=f"start_time_seconds exceeds maximum 18000s (5 hours)")
logger.info(f"Padding request: track {req.track_index}, delay={req.start_time_seconds}s")
# Thread-safe cancellation flag shared between async disconnect checker and blocking thread
import threading
cancelled = threading.Event()
async def check_disconnect():
"""Background task to check for client disconnect every 2 seconds."""
while not cancelled.is_set():
await asyncio.sleep(DISCONNECT_CHECK_INTERVAL)
if await request.is_disconnected():
logger.warning("Client disconnected, setting cancellation flag")
cancelled.set()
break
# Start disconnect checker in background
disconnect_task = asyncio.create_task(check_disconnect())
try:
result = await asyncio.get_event_loop().run_in_executor(
None, _pad_track_blocking, req, cancelled, logger
)
return PaddingResponse(**result)
finally:
cancelled.set()
disconnect_task.cancel()
try:
await disconnect_task
except asyncio.CancelledError:
pass
def _pad_track_blocking(req, cancelled, logger) -> dict:
"""Blocking CPU-bound padding work with periodic cancellation checks.
Args:
cancelled: threading.Event for thread-safe cancellation signaling
"""
import av
import requests
from av.audio.resampler import AudioResampler
import time
temp_dir = tempfile.mkdtemp()
input_path = None
output_path = None
last_check = time.time()
try:
logger.info("Downloading track for padding")
response = requests.get(req.track_url, stream=True, timeout=S3_TIMEOUT)
response.raise_for_status()
input_path = os.path.join(temp_dir, "track.webm")
total_bytes = 0
chunk_count = 0
with open(input_path, "wb") as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
total_bytes += len(chunk)
chunk_count += 1
# Check for cancellation every arbitrary amount of chunks
if chunk_count % 12 == 0:
now = time.time()
if now - last_check >= DISCONNECT_CHECK_INTERVAL:
if cancelled.is_set():
logger.info("Cancelled during download, exiting early")
return {"size": 0, "cancelled": True}
last_check = now
logger.info(f"Track downloaded: {total_bytes} bytes")
if cancelled.is_set():
logger.info("Cancelled after download, exiting early")
return {"size": 0, "cancelled": True}
# Apply padding using PyAV
output_path = os.path.join(temp_dir, "padded.webm")
delay_ms = math.floor(req.start_time_seconds * 1000)
logger.info(f"Padding track {req.track_index} with {delay_ms}ms delay using PyAV")
in_container = av.open(input_path)
in_stream = next((s for s in in_container.streams if s.type == "audio"), None)
if in_stream is None:
raise ValueError("No audio stream in input")
with av.open(output_path, "w", format="webm") as out_container:
out_stream = out_container.add_stream("libopus", rate=OPUS_STANDARD_SAMPLE_RATE)
out_stream.bit_rate = OPUS_DEFAULT_BIT_RATE
graph = av.filter.Graph()
abuf_args = (
f"time_base=1/{OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_rate={OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_fmt=s16:"
f"channel_layout=stereo"
)
src = graph.add("abuffer", args=abuf_args, name="src")
aresample_f = graph.add("aresample", args="async=1", name="ares")
delays_arg = f"{delay_ms}|{delay_ms}"
adelay_f = graph.add("adelay", args=f"delays={delays_arg}:all=1", name="delay")
sink = graph.add("abuffersink", name="sink")
src.link_to(aresample_f)
aresample_f.link_to(adelay_f)
adelay_f.link_to(sink)
graph.configure()
resampler = AudioResampler(
format="s16", layout="stereo", rate=OPUS_STANDARD_SAMPLE_RATE
)
for frame in in_container.decode(in_stream):
# Check for cancellation periodically
now = time.time()
if now - last_check >= DISCONNECT_CHECK_INTERVAL:
if cancelled.is_set():
logger.info("Cancelled during processing, exiting early")
in_container.close()
return {"size": 0, "cancelled": True}
last_check = now
out_frames = resampler.resample(frame) or []
for rframe in out_frames:
rframe.sample_rate = OPUS_STANDARD_SAMPLE_RATE
rframe.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
src.push(rframe)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
# Flush filter graph
src.push(None)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
# Flush encoder
for packet in out_stream.encode(None):
out_container.mux(packet)
in_container.close()
file_size = os.path.getsize(output_path)
logger.info(f"Padding complete: {file_size} bytes")
logger.info("Uploading padded track to S3")
with open(output_path, "rb") as f:
upload_response = requests.put(req.output_url, data=f, timeout=S3_TIMEOUT)
upload_response.raise_for_status()
logger.info(f"Upload complete: {file_size} bytes")
return {"size": file_size}
finally:
if input_path and os.path.exists(input_path):
try:
os.unlink(input_path)
except Exception as e:
logger.warning(f"Failed to cleanup input file: {e}")
if output_path and os.path.exists(output_path):
try:
os.unlink(output_path)
except Exception as e:
logger.warning(f"Failed to cleanup output file: {e}")
try:
os.rmdir(temp_dir)
except Exception as e:
logger.warning(f"Failed to cleanup temp directory: {e}")
return web_app

View File

@@ -4,31 +4,27 @@ ENV PYTHONUNBUFFERED=1 \
     UV_LINK_MODE=copy \
     UV_NO_CACHE=1

-# patch until nvidia updates the sha1 repo
-ADD sequoia.config /etc/crypto-policies/back-ends/sequoia.config

 WORKDIR /tmp

-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
-    --mount=type=cache,target=/var/lib/apt,sharing=locked \
-    apt-get update \
+RUN apt-get update \
     && apt-get install -y \
     ffmpeg \
     curl \
     ca-certificates \
     gnupg \
-    wget
+    wget \
+    && apt-get clean

 # Add NVIDIA CUDA repo for Debian 12 (bookworm) and install cuDNN 9 for CUDA 12
 ADD https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb /cuda-keyring.deb
-RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
-    --mount=type=cache,target=/var/lib/apt,sharing=locked \
-    dpkg -i /cuda-keyring.deb \
+RUN dpkg -i /cuda-keyring.deb \
     && rm /cuda-keyring.deb \
     && apt-get update \
     && apt-get install -y --no-install-recommends \
     cuda-cudart-12-6 \
     libcublas-12-6 \
     libcudnn9-cuda-12 \
-    libcudnn9-dev-cuda-12
+    libcudnn9-dev-cuda-12 \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/*

 ADD https://astral.sh/uv/install.sh /uv-installer.sh
 RUN sh /uv-installer.sh && rm /uv-installer.sh
 ENV PATH="/root/.local/bin/:$PATH"

@@ -43,13 +39,6 @@ COPY ./app /app/app
 COPY ./main.py /app/
 COPY ./runserver.sh /app/

-# prevent uv failing with too many open files on big cpus
-ENV UV_CONCURRENT_INSTALLS=16

-# first install
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv sync --compile-bytecode --locked

 EXPOSE 8000
 CMD ["sh", "/app/runserver.sh"]

View File

@@ -1,2 +0,0 @@
[hash_algorithms]
sha1 = "always"

View File

@@ -8,7 +8,7 @@ readme = "README.md"
 dependencies = [
     "aiohttp>=3.9.0",
     "aiohttp-cors>=0.7.0",
-    "av>=15.0.0",
+    "av>=10.0.0",
     "requests>=2.31.0",
     "aiortc>=1.5.0",
     "sortedcontainers>=2.4.0",

View File

@@ -8,7 +8,8 @@ from enum import StrEnum
 class TaskName(StrEnum):
     GET_RECORDING = "get_recording"
     GET_PARTICIPANTS = "get_participants"
-    PROCESS_TRACKS = "process_tracks"
+    PROCESS_PADDINGS = "process_paddings"
+    PROCESS_TRANSCRIPTIONS = "process_transcriptions"
     MIXDOWN_TRACKS = "mixdown_tracks"
     GENERATE_WAVEFORM = "generate_waveform"
     DETECT_TOPICS = "detect_topics"

@@ -35,9 +36,7 @@ LLM_RATE_LIMIT_PER_SECOND = 10
 # Task execution timeouts (seconds)
 TIMEOUT_SHORT = 60  # Quick operations: API calls, DB updates
-TIMEOUT_MEDIUM = (
-    300  # Single LLM calls, waveform generation (5m for slow LLM responses)
-)
+TIMEOUT_MEDIUM = 120  # Single LLM calls, waveform generation
 TIMEOUT_LONG = 180  # Action items (larger context LLM)
-TIMEOUT_AUDIO = 720  # Audio processing: padding, mixdown
+TIMEOUT_AUDIO = 300  # Audio processing: padding, mixdown
 TIMEOUT_HEAVY = 600  # Transcription, fan-out LLM tasks

View File

@@ -1,9 +1,9 @@
 """
 CPU-heavy worker pool for audio processing tasks.

-Handles ONLY: mixdown_tracks
+Handles: mixdown_tracks only (serialized with max_runs=1)

 Configuration:
-- slots=1: Only mixdown (already serialized globally with max_runs=1)
+- slots=1: Only one mixdown at a time
 - Worker affinity: pool=cpu-heavy
 """

@@ -26,7 +26,7 @@ def main():
     cpu_worker = hatchet.worker(
         "cpu-worker-pool",
-        slots=1,  # Only 1 mixdown at a time (already serialized globally)
+        slots=1,
         labels={
             "pool": "cpu-heavy",
         },

View File

@@ -1,15 +1,16 @@
 """
 LLM/I/O worker pool for all non-CPU tasks.
-Handles: all tasks except mixdown_tracks (transcription, LLM inference, orchestration)
+Handles: all tasks except mixdown_tracks (padding, transcription, LLM inference, orchestration)
 """

 from reflector.hatchet.client import HatchetClientManager
 from reflector.hatchet.workflows.daily_multitrack_pipeline import (
     daily_multitrack_pipeline,
 )
+from reflector.hatchet.workflows.padding_workflow import padding_workflow
 from reflector.hatchet.workflows.subject_processing import subject_workflow
 from reflector.hatchet.workflows.topic_chunk_processing import topic_chunk_workflow
-from reflector.hatchet.workflows.track_processing import track_workflow
+from reflector.hatchet.workflows.transcription_workflow import transcription_workflow
 from reflector.logger import logger

 SLOTS = 10

@@ -29,7 +30,7 @@ def main():
     llm_worker = hatchet.worker(
         WORKER_NAME,
-        slots=SLOTS,  # not all slots are probably used
+        slots=SLOTS,
         labels={
             "pool": POOL,
         },

@@ -37,7 +38,8 @@
             daily_multitrack_pipeline,
             topic_chunk_workflow,
             subject_workflow,
-            track_workflow,
+            padding_workflow,
+            transcription_workflow,
         ],
     )

View File

@@ -4,6 +4,10 @@ from reflector.hatchet.workflows.daily_multitrack_pipeline import (
PipelineInput, PipelineInput,
daily_multitrack_pipeline, daily_multitrack_pipeline,
) )
from reflector.hatchet.workflows.padding_workflow import (
PaddingInput,
padding_workflow,
)
from reflector.hatchet.workflows.subject_processing import ( from reflector.hatchet.workflows.subject_processing import (
SubjectInput, SubjectInput,
subject_workflow, subject_workflow,
@@ -12,15 +16,20 @@ from reflector.hatchet.workflows.topic_chunk_processing import (
TopicChunkInput, TopicChunkInput,
topic_chunk_workflow, topic_chunk_workflow,
) )
from reflector.hatchet.workflows.track_processing import TrackInput, track_workflow from reflector.hatchet.workflows.transcription_workflow import (
TranscriptionInput,
transcription_workflow,
)
__all__ = [ __all__ = [
"daily_multitrack_pipeline", "daily_multitrack_pipeline",
"subject_workflow", "subject_workflow",
"topic_chunk_workflow", "topic_chunk_workflow",
"track_workflow", "padding_workflow",
"transcription_workflow",
"PipelineInput", "PipelineInput",
"SubjectInput", "SubjectInput",
"TopicChunkInput", "TopicChunkInput",
"TrackInput", "PaddingInput",
"TranscriptionInput",
] ]

View File

@@ -54,8 +54,9 @@ from reflector.hatchet.workflows.models import (
PadTrackResult, PadTrackResult,
ParticipantInfo, ParticipantInfo,
ParticipantsResult, ParticipantsResult,
ProcessPaddingsResult,
ProcessSubjectsResult, ProcessSubjectsResult,
ProcessTracksResult, ProcessTranscriptionsResult,
RecapResult, RecapResult,
RecordingResult, RecordingResult,
SubjectsResult, SubjectsResult,
@@ -68,6 +69,7 @@ from reflector.hatchet.workflows.models import (
WebhookResult, WebhookResult,
ZulipResult, ZulipResult,
) )
from reflector.hatchet.workflows.padding_workflow import PaddingInput, padding_workflow
from reflector.hatchet.workflows.subject_processing import ( from reflector.hatchet.workflows.subject_processing import (
SubjectInput, SubjectInput,
subject_workflow, subject_workflow,
@@ -76,7 +78,10 @@ from reflector.hatchet.workflows.topic_chunk_processing import (
TopicChunkInput, TopicChunkInput,
topic_chunk_workflow, topic_chunk_workflow,
) )
from reflector.hatchet.workflows.track_processing import TrackInput, track_workflow from reflector.hatchet.workflows.transcription_workflow import (
TranscriptionInput,
transcription_workflow,
)
from reflector.logger import logger from reflector.logger import logger
from reflector.pipelines import topic_processing from reflector.pipelines import topic_processing
from reflector.processors import AudioFileWriterProcessor from reflector.processors import AudioFileWriterProcessor
@@ -322,7 +327,6 @@ async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsRe
mtg_session_id = recording.mtg_session_id mtg_session_id = recording.mtg_session_id
async with fresh_db_connection(): async with fresh_db_connection():
from reflector.db.transcripts import ( # noqa: PLC0415 from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptDuration,
TranscriptParticipant, TranscriptParticipant,
transcripts_controller, transcripts_controller,
) )
@@ -331,26 +335,15 @@ async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsRe
if not transcript: if not transcript:
raise ValueError(f"Transcript {input.transcript_id} not found") raise ValueError(f"Transcript {input.transcript_id} not found")
# Note: title NOT cleared - preserves existing titles # Note: title NOT cleared - preserves existing titles
# Duration from Daily API (seconds -> milliseconds) - master source
duration_ms = recording.duration * 1000 if recording.duration else 0
await transcripts_controller.update( await transcripts_controller.update(
transcript, transcript,
{ {
"events": [], "events": [],
"topics": [], "topics": [],
"participants": [], "participants": [],
"duration": duration_ms,
}, },
) )
await append_event_and_broadcast(
input.transcript_id,
transcript,
"DURATION",
TranscriptDuration(duration=duration_ms),
logger=logger,
)
mtg_session_id = assert_non_none_and_non_empty( mtg_session_id = assert_non_none_and_non_empty(
mtg_session_id, "mtg_session_id is required" mtg_session_id, "mtg_session_id is required"
) )
@@ -416,72 +409,115 @@ async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsRe
execution_timeout=timedelta(seconds=TIMEOUT_HEAVY), execution_timeout=timedelta(seconds=TIMEOUT_HEAVY),
retries=3, retries=3,
) )
@with_error_handling(TaskName.PROCESS_TRACKS) @with_error_handling(TaskName.PROCESS_PADDINGS)
async def process_tracks(input: PipelineInput, ctx: Context) -> ProcessTracksResult: async def process_paddings(input: PipelineInput, ctx: Context) -> ProcessPaddingsResult:
"""Spawn child workflows for each track (dynamic fan-out).""" """Spawn child workflows for each track to apply padding (dynamic fan-out)."""
ctx.log(f"process_tracks: spawning {len(input.tracks)} track workflows") ctx.log(f"process_paddings: spawning {len(input.tracks)} padding workflows")
participants_result = ctx.task_output(get_participants)
source_language = participants_result.source_language
bulk_runs = [ bulk_runs = [
track_workflow.create_bulk_run_item( padding_workflow.create_bulk_run_item(
input=TrackInput( input=PaddingInput(
track_index=i, track_index=i,
s3_key=track["s3_key"], s3_key=track["s3_key"],
bucket_name=input.bucket_name, bucket_name=input.bucket_name,
transcript_id=input.transcript_id, transcript_id=input.transcript_id,
language=source_language,
) )
) )
for i, track in enumerate(input.tracks) for i, track in enumerate(input.tracks)
] ]
results = await track_workflow.aio_run_many(bulk_runs) results = await padding_workflow.aio_run_many(bulk_runs)
target_language = participants_result.target_language
track_words: list[list[Word]] = []
padded_tracks = [] padded_tracks = []
created_padded_files = set() created_padded_files = []
for result in results: for result in results:
transcribe_result = TranscribeTrackResult(**result[TaskName.TRANSCRIBE_TRACK])
track_words.append(transcribe_result.words)
pad_result = PadTrackResult(**result[TaskName.PAD_TRACK]) pad_result = PadTrackResult(**result[TaskName.PAD_TRACK])
# Store S3 key info (not presigned URL) - consumer tasks presign on demand padded_tracks.append(
if pad_result.padded_key: PaddedTrackInfo(
padded_tracks.append( key=pad_result.padded_key,
PaddedTrackInfo( bucket_name=pad_result.bucket_name,
key=pad_result.padded_key, bucket_name=pad_result.bucket_name track_index=pad_result.track_index,
)
) )
)
if pad_result.size > 0: if pad_result.size > 0:
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{pad_result.track_index}.webm" storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{pad_result.track_index}.webm"
created_padded_files.add(storage_path) created_padded_files.append(storage_path)
all_words = [word for words in track_words for word in words] ctx.log(f"process_paddings complete: {len(padded_tracks)} padded tracks")
all_words.sort(key=lambda w: w.start)
ctx.log( return ProcessPaddingsResult(
f"process_tracks complete: {len(all_words)} words from {len(input.tracks)} tracks"
)
return ProcessTracksResult(
all_words=all_words,
padded_tracks=padded_tracks, padded_tracks=padded_tracks,
word_count=len(all_words),
num_tracks=len(input.tracks), num_tracks=len(input.tracks),
target_language=target_language,
created_padded_files=list(created_padded_files), created_padded_files=list(created_padded_files),
) )
@daily_multitrack_pipeline.task( @daily_multitrack_pipeline.task(
parents=[process_tracks], parents=[process_paddings],
execution_timeout=timedelta(seconds=TIMEOUT_HEAVY),
retries=3,
)
@with_error_handling(TaskName.PROCESS_TRANSCRIPTIONS)
async def process_transcriptions(
input: PipelineInput, ctx: Context
) -> ProcessTranscriptionsResult:
"""Spawn child workflows for each padded track to transcribe (dynamic fan-out)."""
participants_result = ctx.task_output(get_participants)
paddings_result = ctx.task_output(process_paddings)
source_language = participants_result.source_language
if not source_language:
raise ValueError("source_language is required for transcription")
target_language = participants_result.target_language
padded_tracks = paddings_result.padded_tracks
if not padded_tracks:
raise ValueError("No padded tracks available for transcription")
ctx.log(
f"process_transcriptions: spawning {len(padded_tracks)} transcription workflows"
)
bulk_runs = [
transcription_workflow.create_bulk_run_item(
input=TranscriptionInput(
track_index=padded_track.track_index,
padded_key=padded_track.key,
bucket_name=padded_track.bucket_name,
language=source_language,
)
)
for padded_track in padded_tracks
]
results = await transcription_workflow.aio_run_many(bulk_runs)
track_words: list[list[Word]] = []
for result in results:
transcribe_result = TranscribeTrackResult(**result[TaskName.TRANSCRIBE_TRACK])
track_words.append(transcribe_result.words)
all_words = [word for words in track_words for word in words]
all_words.sort(key=lambda w: w.start)
ctx.log(
f"process_transcriptions complete: {len(all_words)} words from {len(padded_tracks)} tracks"
)
return ProcessTranscriptionsResult(
all_words=all_words,
word_count=len(all_words),
num_tracks=len(input.tracks),
target_language=target_language,
)
@daily_multitrack_pipeline.task(
parents=[process_paddings],
execution_timeout=timedelta(seconds=TIMEOUT_AUDIO), execution_timeout=timedelta(seconds=TIMEOUT_AUDIO),
retries=3, retries=3,
desired_worker_labels={ desired_worker_labels={
@@ -501,12 +537,12 @@ async def process_tracks(input: PipelineInput, ctx: Context) -> ProcessTracksRes
) )
@with_error_handling(TaskName.MIXDOWN_TRACKS) @with_error_handling(TaskName.MIXDOWN_TRACKS)
async def mixdown_tracks(input: PipelineInput, ctx: Context) -> MixdownResult: async def mixdown_tracks(input: PipelineInput, ctx: Context) -> MixdownResult:
"""Mix all padded tracks into single audio file using PyAV (same as Celery).""" """Mix all padded tracks into single audio file using PyAV."""
ctx.log("mixdown_tracks: mixing padded tracks into single audio file") ctx.log("mixdown_tracks: mixing padded tracks into single audio file")
track_result = ctx.task_output(process_tracks) paddings_result = ctx.task_output(process_paddings)
recording_result = ctx.task_output(get_recording) recording_result = ctx.task_output(get_recording)
padded_tracks = track_result.padded_tracks padded_tracks = paddings_result.padded_tracks
# Dynamic timeout: scales with track count and recording duration # Dynamic timeout: scales with track count and recording duration
# Base 300s + 60s per track + 1s per 10s of recording # Base 300s + 60s per track + 1s per 10s of recording
@@ -660,7 +696,7 @@ async def generate_waveform(input: PipelineInput, ctx: Context) -> WaveformResul
@daily_multitrack_pipeline.task( @daily_multitrack_pipeline.task(
parents=[process_tracks], parents=[process_transcriptions],
execution_timeout=timedelta(seconds=TIMEOUT_HEAVY), execution_timeout=timedelta(seconds=TIMEOUT_HEAVY),
retries=3, retries=3,
) )
@@ -669,8 +705,8 @@ async def detect_topics(input: PipelineInput, ctx: Context) -> TopicsResult:
"""Detect topics using parallel child workflows (one per chunk).""" """Detect topics using parallel child workflows (one per chunk)."""
ctx.log("detect_topics: analyzing transcript for topics") ctx.log("detect_topics: analyzing transcript for topics")
track_result = ctx.task_output(process_tracks) transcriptions_result = ctx.task_output(process_transcriptions)
words = track_result.all_words words = transcriptions_result.all_words
if not words: if not words:
ctx.log("detect_topics: no words, returning empty topics") ctx.log("detect_topics: no words, returning empty topics")
@@ -1107,7 +1143,7 @@ async def identify_action_items(
@daily_multitrack_pipeline.task( @daily_multitrack_pipeline.task(
parents=[process_tracks, generate_title, generate_recap, identify_action_items], parents=[generate_title, generate_recap, identify_action_items],
execution_timeout=timedelta(seconds=TIMEOUT_SHORT), execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
retries=3, retries=3,
) )
@@ -1120,10 +1156,15 @@ async def finalize(input: PipelineInput, ctx: Context) -> FinalizeResult:
""" """
ctx.log("finalize: saving transcript and setting status to 'ended'") ctx.log("finalize: saving transcript and setting status to 'ended'")
track_result = ctx.task_output(process_tracks) mixdown_result = ctx.task_output(mixdown_tracks)
transcriptions_result = ctx.task_output(process_transcriptions)
paddings_result = ctx.task_output(process_paddings)
duration = mixdown_result.duration
all_words = transcriptions_result.all_words
# Cleanup temporary padded S3 files (deferred until finalize for semantic parity with Celery) # Cleanup temporary padded S3 files (deferred until finalize for semantic parity with Celery)
created_padded_files = track_result.created_padded_files created_padded_files = paddings_result.created_padded_files
if created_padded_files: if created_padded_files:
ctx.log(f"Cleaning up {len(created_padded_files)} temporary S3 files") ctx.log(f"Cleaning up {len(created_padded_files)} temporary S3 files")
storage = _spawn_storage() storage = _spawn_storage()
@@ -1141,6 +1182,7 @@ async def finalize(input: PipelineInput, ctx: Context) -> FinalizeResult:
async with fresh_db_connection(): async with fresh_db_connection():
from reflector.db.transcripts import ( # noqa: PLC0415 from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptDuration,
TranscriptText, TranscriptText,
transcripts_controller, transcripts_controller,
) )
@@ -1149,6 +1191,8 @@ async def finalize(input: PipelineInput, ctx: Context) -> FinalizeResult:
if transcript is None: if transcript is None:
raise ValueError(f"Transcript {input.transcript_id} not found in database") raise ValueError(f"Transcript {input.transcript_id} not found in database")
merged_transcript = TranscriptType(words=all_words, translation=None)
await append_event_and_broadcast( await append_event_and_broadcast(
input.transcript_id, input.transcript_id,
transcript, transcript,
@@ -1160,15 +1204,21 @@ async def finalize(input: PipelineInput, ctx: Context) -> FinalizeResult:
logger=logger, logger=logger,
) )
# Clear workflow_run_id (workflow completed successfully) # Save duration and clear workflow_run_id (workflow completed successfully)
# Note: title/long_summary/short_summary/duration already saved by their callbacks # Note: title/long_summary/short_summary already saved by their callbacks
await transcripts_controller.update( await transcripts_controller.update(
transcript, transcript,
{ {
"duration": duration,
"workflow_run_id": None, # Clear on success - no need to resume "workflow_run_id": None, # Clear on success - no need to resume
}, },
) )
duration_data = TranscriptDuration(duration=duration)
await append_event_and_broadcast(
input.transcript_id, transcript, "DURATION", duration_data, logger=logger
)
await set_status_and_broadcast(input.transcript_id, "ended", logger=logger) await set_status_and_broadcast(input.transcript_id, "ended", logger=logger)
ctx.log( ctx.log(

View File

@@ -21,12 +21,14 @@ class ParticipantInfo(BaseModel):
class PadTrackResult(BaseModel): class PadTrackResult(BaseModel):
"""Result from pad_track task.""" """Result from pad_track task.
padded_key: NonEmptyString # S3 key (not presigned URL) - presign on demand to avoid stale URLs on replay If size=0, track required no padding and padded_key contains original S3 key.
bucket_name: ( If size>0, track was padded and padded_key contains new padded file S3 key.
NonEmptyString | None """
) # None means use default transcript storage bucket
padded_key: NonEmptyString
bucket_name: NonEmptyString | None
size: int size: int
track_index: int track_index: int
@@ -59,18 +61,25 @@ class PaddedTrackInfo(BaseModel):
"""Info for a padded track - S3 key + bucket for on-demand presigning.""" """Info for a padded track - S3 key + bucket for on-demand presigning."""
key: NonEmptyString key: NonEmptyString
bucket_name: NonEmptyString | None # None = use default storage bucket bucket_name: NonEmptyString | None
track_index: int
class ProcessTracksResult(BaseModel): class ProcessPaddingsResult(BaseModel):
"""Result from process_tracks task.""" """Result from process_paddings task."""
padded_tracks: list[PaddedTrackInfo]
num_tracks: int
created_padded_files: list[NonEmptyString]
class ProcessTranscriptionsResult(BaseModel):
"""Result from process_transcriptions task."""
all_words: list[Word] all_words: list[Word]
padded_tracks: list[PaddedTrackInfo] # S3 keys, not presigned URLs
word_count: int word_count: int
num_tracks: int num_tracks: int
target_language: NonEmptyString target_language: NonEmptyString
created_padded_files: list[NonEmptyString]
class MixdownResult(BaseModel): class MixdownResult(BaseModel):

View File

@@ -1,9 +1,11 @@
""" """
Hatchet child workflow: PaddingWorkflow Hatchet child workflow: PaddingWorkflow
Handles individual audio track padding via Modal.com backend. Handles individual audio track padding only.
""" """
import tempfile
from datetime import timedelta from datetime import timedelta
from pathlib import Path
import av import av
from hatchet_sdk import Context from hatchet_sdk import Context
@@ -14,7 +16,10 @@ from reflector.hatchet.constants import TIMEOUT_AUDIO
from reflector.hatchet.workflows.models import PadTrackResult from reflector.hatchet.workflows.models import PadTrackResult
from reflector.logger import logger from reflector.logger import logger
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_padding import extract_stream_start_time_from_container from reflector.utils.audio_padding import (
apply_audio_padding_to_file,
extract_stream_start_time_from_container,
)
class PaddingInput(BaseModel): class PaddingInput(BaseModel):
@@ -63,83 +68,61 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
             bucket=input.bucket_name,
         )
 
-        # Extract start_time to determine if padding needed
         with av.open(source_url) as in_container:
             if in_container.duration:
                 try:
                     duration = timedelta(seconds=in_container.duration // 1_000_000)
                     ctx.log(
                         f"pad_track: track {input.track_index}, duration={duration}"
                     )
                 except (ValueError, TypeError, OverflowError) as e:
                     ctx.log(
                         f"pad_track: track {input.track_index}, duration error: {str(e)}"
                     )
 
             start_time_seconds = extract_stream_start_time_from_container(
                 in_container, input.track_index, logger=logger
             )
 
             if start_time_seconds <= 0:
                 logger.info(
                     f"Track {input.track_index} requires no padding",
                     track_index=input.track_index,
                 )
                 return PadTrackResult(
                     padded_key=input.s3_key,
                     bucket_name=input.bucket_name,
                     size=0,
                     track_index=input.track_index,
                 )
 
             storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{input.track_index}.webm"
 
-        # Presign PUT URL for output (Modal will upload directly)
-        output_url = await storage.get_file_url(
-            storage_path,
-            operation="put_object",
-            expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
-        )
-
-        import httpx  # noqa: PLC0415
-
-        from reflector.processors.audio_padding_modal import (  # noqa: PLC0415
-            AudioPaddingModalProcessor,
-        )
-
-        try:
-            processor = AudioPaddingModalProcessor()
-            result = await processor.pad_track(
-                track_url=source_url,
-                output_url=output_url,
-                start_time_seconds=start_time_seconds,
-                track_index=input.track_index,
-            )
-            file_size = result.size
-            ctx.log(f"pad_track: Modal returned size={file_size}")
-        except httpx.HTTPStatusError as e:
-            error_detail = e.response.text if hasattr(e.response, "text") else str(e)
-            logger.error(
-                "[Hatchet] Modal padding HTTP error",
-                transcript_id=input.transcript_id,
-                track_index=input.track_index,
-                status_code=e.response.status_code if hasattr(e, "response") else None,
-                error=error_detail,
-                exc_info=True,
-            )
-            raise Exception(
-                f"Modal padding failed: HTTP {e.response.status_code}"
-            ) from e
-        except httpx.TimeoutException as e:
-            logger.error(
-                "[Hatchet] Modal padding timeout",
-                transcript_id=input.transcript_id,
-                track_index=input.track_index,
-                error=str(e),
-                exc_info=True,
-            )
-            raise Exception("Modal padding timeout") from e
+            with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as temp_file:
+                temp_path = temp_file.name
+
+            try:
+                apply_audio_padding_to_file(
+                    in_container,
+                    temp_path,
+                    start_time_seconds,
+                    input.track_index,
+                    logger=logger,
+                )
+                file_size = Path(temp_path).stat().st_size
+
+                with open(temp_path, "rb") as padded_file:
+                    await storage.put_file(storage_path, padded_file)
+
+                logger.info(
+                    f"Uploaded padded track to S3",
+                    key=storage_path,
+                    size=file_size,
+                )
+            finally:
+                Path(temp_path).unlink(missing_ok=True)
 
         logger.info(
             "[Hatchet] pad_track complete",

View File

@@ -1,205 +0,0 @@
"""
Hatchet child workflow: TrackProcessing
Handles individual audio track processing: padding and transcription.
Spawned dynamically by the main diarization pipeline for each track.
Architecture note: This is a separate workflow (not inline tasks in DailyMultitrackPipeline)
because Hatchet workflow DAGs are defined statically, but the number of tracks varies
at runtime. Child workflow spawning via `aio_run()` + `asyncio.gather()` is the
standard pattern for dynamic fan-out. See `process_tracks` in daily_multitrack_pipeline.py.
Note: This file uses deferred imports (inside tasks) intentionally.
Hatchet workers run in forked processes; fresh imports per task ensure
storage/DB connections are not shared across forks.
"""
from datetime import timedelta
import av
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import TIMEOUT_AUDIO, TIMEOUT_HEAVY
from reflector.hatchet.workflows.models import PadTrackResult, TranscribeTrackResult
from reflector.logger import logger
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_padding import extract_stream_start_time_from_container
class TrackInput(BaseModel):
"""Input for individual track processing."""
track_index: int
s3_key: str
bucket_name: str
transcript_id: str
language: str = "en"
hatchet = HatchetClientManager.get_client()
track_workflow = hatchet.workflow(name="TrackProcessing", input_validator=TrackInput)
@track_workflow.task(execution_timeout=timedelta(seconds=TIMEOUT_AUDIO), retries=3)
async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
"""Pad single audio track with silence for alignment.
Extracts stream.start_time from WebM container metadata and applies
silence padding using PyAV filter graph (adelay).
"""
ctx.log(f"pad_track: track {input.track_index}, s3_key={input.s3_key}")
logger.info(
"[Hatchet] pad_track",
track_index=input.track_index,
s3_key=input.s3_key,
transcript_id=input.transcript_id,
)
try:
# Create fresh storage instance to avoid aioboto3 fork issues
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
source_url = await storage.get_file_url(
input.s3_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=input.bucket_name,
)
with av.open(source_url) as in_container:
start_time_seconds = extract_stream_start_time_from_container(
in_container, input.track_index, logger=logger
)
# If no padding needed, return original S3 key
if start_time_seconds <= 0:
logger.info(
f"Track {input.track_index} requires no padding",
track_index=input.track_index,
)
return PadTrackResult(
padded_key=input.s3_key,
bucket_name=input.bucket_name,
size=0,
track_index=input.track_index,
)
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{input.track_index}.webm"
# Presign PUT URL for output (Modal uploads directly)
output_url = await storage.get_file_url(
storage_path,
operation="put_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
from reflector.processors.audio_padding_modal import ( # noqa: PLC0415
AudioPaddingModalProcessor,
)
processor = AudioPaddingModalProcessor()
result = await processor.pad_track(
track_url=source_url,
output_url=output_url,
start_time_seconds=start_time_seconds,
track_index=input.track_index,
)
file_size = result.size
ctx.log(f"pad_track complete: track {input.track_index} -> {storage_path}")
logger.info(
"[Hatchet] pad_track complete",
track_index=input.track_index,
padded_key=storage_path,
)
# Return S3 key (not presigned URL) - consumer tasks presign on demand
# This avoids stale URLs when workflow is replayed
return PadTrackResult(
padded_key=storage_path,
bucket_name=None, # None = use default transcript storage bucket
size=file_size,
track_index=input.track_index,
)
except Exception as e:
logger.error("[Hatchet] pad_track failed", error=str(e), exc_info=True)
raise
@track_workflow.task(
parents=[pad_track], execution_timeout=timedelta(seconds=TIMEOUT_HEAVY), retries=3
)
async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackResult:
"""Transcribe audio track using GPU (Modal.com) or local Whisper."""
ctx.log(f"transcribe_track: track {input.track_index}, language={input.language}")
logger.info(
"[Hatchet] transcribe_track",
track_index=input.track_index,
language=input.language,
)
try:
pad_result = ctx.task_output(pad_track)
padded_key = pad_result.padded_key
bucket_name = pad_result.bucket_name
if not padded_key:
raise ValueError("Missing padded_key from pad_track")
# Presign URL on demand (avoids stale URLs on workflow replay)
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
audio_url = await storage.get_file_url(
padded_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=bucket_name,
)
from reflector.pipelines.transcription_helpers import ( # noqa: PLC0415
transcribe_file_with_processor,
)
transcript = await transcribe_file_with_processor(audio_url, input.language)
# Tag all words with speaker index
for word in transcript.words:
word.speaker = input.track_index
ctx.log(
f"transcribe_track complete: track {input.track_index}, {len(transcript.words)} words"
)
logger.info(
"[Hatchet] transcribe_track complete",
track_index=input.track_index,
word_count=len(transcript.words),
)
return TranscribeTrackResult(
words=transcript.words,
track_index=input.track_index,
)
except Exception as e:
logger.error("[Hatchet] transcribe_track failed", error=str(e), exc_info=True)
raise
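The architecture note at the top of this (now removed) file describes the dynamic fan-out pattern the split workflows keep using: spawn one child run per track and gather them. A rough sketch, assuming each child workflow object exposes aio_run() as named in that note (the parent-side helper and the plain-dict input are illustrative):

import asyncio


async def fan_out_padding(padding_workflow, tracks: list[dict], transcript_id: str):
    """Spawn one PaddingWorkflow run per track and wait for all of them."""
    runs = [
        padding_workflow.aio_run(
            {
                "track_index": i,
                "s3_key": track["s3_key"],
                "bucket_name": track["bucket_name"],
                "transcript_id": transcript_id,
            }
        )
        for i, track in enumerate(tracks)
    ]
    # The number of tracks is only known at runtime, which is why this is
    # fan-out code rather than a statically declared DAG.
    return await asyncio.gather(*runs)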

View File

@@ -0,0 +1,98 @@
"""
Hatchet child workflow: TranscriptionWorkflow
Handles individual audio track transcription only.
"""
from datetime import timedelta
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import TIMEOUT_HEAVY
from reflector.hatchet.workflows.models import TranscribeTrackResult
from reflector.logger import logger
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
class TranscriptionInput(BaseModel):
"""Input for individual track transcription."""
track_index: int
padded_key: str # S3 key from padding step
bucket_name: str | None # None = use default bucket
language: str = "en"
hatchet = HatchetClientManager.get_client()
transcription_workflow = hatchet.workflow(
name="TranscriptionWorkflow", input_validator=TranscriptionInput
)
@transcription_workflow.task(
execution_timeout=timedelta(seconds=TIMEOUT_HEAVY), retries=3
)
async def transcribe_track(
input: TranscriptionInput, ctx: Context
) -> TranscribeTrackResult:
"""Transcribe audio track using GPU (Modal.com) or local Whisper."""
ctx.log(f"transcribe_track: track {input.track_index}, language={input.language}")
logger.info(
"[Hatchet] transcribe_track",
track_index=input.track_index,
language=input.language,
)
try:
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
audio_url = await storage.get_file_url(
input.padded_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=input.bucket_name,
)
from reflector.pipelines.transcription_helpers import ( # noqa: PLC0415
transcribe_file_with_processor,
)
transcript = await transcribe_file_with_processor(audio_url, input.language)
for word in transcript.words:
word.speaker = input.track_index
ctx.log(
f"transcribe_track complete: track {input.track_index}, {len(transcript.words)} words"
)
logger.info(
"[Hatchet] transcribe_track complete",
track_index=input.track_index,
word_count=len(transcript.words),
)
return TranscribeTrackResult(
words=transcript.words,
track_index=input.track_index,
)
except Exception as e:
logger.error(
"[Hatchet] transcribe_track failed",
track_index=input.track_index,
padded_key=input.padded_key,
language=input.language,
error=str(e),
exc_info=True,
)
raise
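TranscriptionInput is fed from the padding step's output; a short illustrative mapping from PaddedTrackInfo (key, bucket_name, track_index) onto the fields above (not the pipeline's actual wiring):

def build_transcription_inputs(padded_tracks, language: str = "en") -> list[dict]:
    """Map each PaddedTrackInfo onto the TranscriptionInput fields."""
    return [
        {
            "track_index": t.track_index,
            "padded_key": t.key,
            "bucket_name": t.bucket_name,  # None = default transcript storage bucket
            "language": language,
        }
        for t in padded_tracks
    ]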

View File

@@ -1,17 +0,0 @@
"""Presence tracking for meetings."""
from reflector.presence.pending_joins import (
PENDING_JOIN_PREFIX,
PENDING_JOIN_TTL,
create_pending_join,
delete_pending_join,
has_pending_joins,
)
__all__ = [
"PENDING_JOIN_PREFIX",
"PENDING_JOIN_TTL",
"create_pending_join",
"delete_pending_join",
"has_pending_joins",
]

View File

@@ -1,59 +0,0 @@
"""Track pending join intents in Redis.
When a user signals intent to join a meeting (before WebRTC handshake completes),
we store a pending join record. This prevents the meeting from being deactivated
while users are still connecting.
"""
import time
from redis.asyncio import Redis
from reflector.logger import logger
PENDING_JOIN_TTL = 30 # seconds
PENDING_JOIN_PREFIX = "pending_join"
# Max keys to scan per Redis SCAN iteration
SCAN_BATCH_SIZE = 100
async def create_pending_join(redis: Redis, meeting_id: str, user_id: str) -> None:
"""Create a pending join record. Called before WebRTC handshake."""
key = f"{PENDING_JOIN_PREFIX}:{meeting_id}:{user_id}"
log = logger.bind(meeting_id=meeting_id, user_id=user_id, key=key)
await redis.setex(key, PENDING_JOIN_TTL, str(time.time()))
log.debug("Created pending join")
async def delete_pending_join(redis: Redis, meeting_id: str, user_id: str) -> None:
"""Delete pending join. Called after WebRTC connection established."""
key = f"{PENDING_JOIN_PREFIX}:{meeting_id}:{user_id}"
log = logger.bind(meeting_id=meeting_id, user_id=user_id, key=key)
await redis.delete(key)
log.debug("Deleted pending join")
async def has_pending_joins(redis: Redis, meeting_id: str) -> bool:
"""Check if meeting has any pending joins.
Uses Redis SCAN to iterate through all keys matching the pattern.
Properly iterates until cursor returns 0 to ensure all keys are checked.
"""
pattern = f"{PENDING_JOIN_PREFIX}:{meeting_id}:*"
log = logger.bind(meeting_id=meeting_id, pattern=pattern)
cursor = 0
iterations = 0
while True:
cursor, keys = await redis.scan(
cursor=cursor, match=pattern, count=SCAN_BATCH_SIZE
)
iterations += 1
if keys:
log.debug("Found pending joins", count=len(keys), iterations=iterations)
return True
if cursor == 0:
break
log.debug("No pending joins found", iterations=iterations)
return False
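For reference, the module above was built around a three-call protocol: create a record on join intent, delete it once the WebRTC connection is up, and check for any records before deactivating a meeting. A hedged usage sketch of the deactivation-side check (deactivate is a caller-supplied coroutine; only has_pending_joins comes from the module above):

from redis.asyncio import Redis

from reflector.presence.pending_joins import has_pending_joins


async def deactivate_if_idle(redis: Redis, meeting_id: str, deactivate) -> bool:
    """Skip deactivation while any pending join record is still alive."""
    if await has_pending_joins(redis, meeting_id):
        # Someone is mid-handshake; the 30s TTL guarantees stale records expire.
        return False
    await deactivate(meeting_id)
    return True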

View File

@@ -1,113 +0,0 @@
"""
Modal.com backend for audio padding.
"""
import asyncio
import os
import httpx
from pydantic import BaseModel
from reflector.hatchet.constants import TIMEOUT_AUDIO
from reflector.logger import logger
class PaddingResponse(BaseModel):
size: int
cancelled: bool = False
class AudioPaddingModalProcessor:
"""Audio padding processor using Modal.com CPU backend via HTTP."""
def __init__(
self, padding_url: str | None = None, modal_api_key: str | None = None
):
self.padding_url = padding_url or os.getenv("PADDING_URL")
if not self.padding_url:
raise ValueError(
"PADDING_URL required to use AudioPaddingModalProcessor. "
"Set PADDING_URL environment variable or pass padding_url parameter."
)
self.modal_api_key = modal_api_key or os.getenv("MODAL_API_KEY")
async def pad_track(
self,
track_url: str,
output_url: str,
start_time_seconds: float,
track_index: int,
) -> PaddingResponse:
"""Pad audio track with silence via Modal backend.
Args:
track_url: Presigned GET URL for source audio track
output_url: Presigned PUT URL for output WebM
start_time_seconds: Amount of silence to prepend
track_index: Track index for logging
"""
if not track_url:
raise ValueError("track_url cannot be empty")
if start_time_seconds <= 0:
raise ValueError(
f"start_time_seconds must be positive, got {start_time_seconds}"
)
log = logger.bind(track_index=track_index, padding_seconds=start_time_seconds)
log.info("Sending Modal padding HTTP request")
url = f"{self.padding_url}/pad"
headers = {}
if self.modal_api_key:
headers["Authorization"] = f"Bearer {self.modal_api_key}"
try:
async with httpx.AsyncClient(timeout=TIMEOUT_AUDIO) as client:
response = await client.post(
url,
headers=headers,
json={
"track_url": track_url,
"output_url": output_url,
"start_time_seconds": start_time_seconds,
"track_index": track_index,
},
follow_redirects=True,
)
if response.status_code != 200:
error_body = response.text
log.error(
"Modal padding API error",
status_code=response.status_code,
error_body=error_body,
)
response.raise_for_status()
result = response.json()
# Check if work was cancelled
if result.get("cancelled"):
log.warning("Modal padding was cancelled by disconnect detection")
raise asyncio.CancelledError(
"Padding cancelled due to client disconnect"
)
log.info("Modal padding complete", size=result["size"])
return PaddingResponse(**result)
except asyncio.CancelledError:
log.warning(
"Modal padding cancelled (Hatchet timeout, disconnect detected on Modal side)"
)
raise
except httpx.TimeoutException as e:
log.error("Modal padding timeout", error=str(e), exc_info=True)
raise Exception(f"Modal padding timeout: {e}") from e
except httpx.HTTPStatusError as e:
log.error("Modal padding HTTP error", error=str(e), exc_info=True)
raise Exception(f"Modal padding HTTP error: {e}") from e
except Exception as e:
log.error("Modal padding unexpected error", error=str(e), exc_info=True)
raise
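The processor above was driven with two presigned URLs, as its docstring describes. A minimal call sketch (URL values are placeholders; the import path matches the module being removed here):

from reflector.processors.audio_padding_modal import AudioPaddingModalProcessor


async def pad_via_modal(source_get_url: str, output_put_url: str) -> int:
    processor = AudioPaddingModalProcessor()  # requires PADDING_URL to be set
    result = await processor.pad_track(
        track_url=source_get_url,   # presigned GET URL for the source track
        output_url=output_put_url,  # presigned PUT URL for the padded WebM
        start_time_seconds=2.5,     # leading silence to prepend
        track_index=0,
    )
    return result.size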

View File

@@ -98,10 +98,6 @@ class Settings(BaseSettings):
     # Diarization: local pyannote.audio
     DIARIZATION_PYANNOTE_AUTH_TOKEN: str | None = None
 
-    # Audio Padding (Modal.com backend)
-    PADDING_URL: str | None = None
-    PADDING_MODAL_API_KEY: str | None = None
-
     # Sentry
     SENTRY_DSN: str | None = None

View File

@@ -5,9 +5,7 @@ Used by both Hatchet workflows and Celery pipelines for consistent audio encoding.
 """
 
 # Opus codec settings
-# ref B0F71CE8-FC59-4AA5-8414-DAFB836DB711
 OPUS_STANDARD_SAMPLE_RATE = 48000
-# ref B0F71CE8-FC59-4AA5-8414-DAFB836DB711
 OPUS_DEFAULT_BIT_RATE = 128000  # 128kbps for good speech quality
 
 # S3 presigned URL expiration

View File

@@ -1,5 +1,4 @@
 import json
-from datetime import datetime, timezone
 from typing import assert_never
 
 from fastapi import APIRouter, HTTPException, Request
@@ -13,9 +12,6 @@ from reflector.dailyco_api import (
     RecordingReadyEvent,
     RecordingStartedEvent,
 )
-from reflector.db.daily_participant_sessions import (
-    daily_participant_sessions_controller,
-)
 from reflector.db.meetings import meetings_controller
 from reflector.logger import logger as _logger
 from reflector.settings import settings
@@ -145,57 +141,15 @@ async def _handle_participant_joined(event: ParticipantJoinedEvent):
 
 
 async def _handle_participant_left(event: ParticipantLeftEvent):
-    """Close session directly on webhook and update num_clients.
-
-    The webhook IS the authoritative signal that a participant left.
-    We close the session immediately rather than polling Daily.co API,
-    which avoids the race where the API still shows the participant.
-    A delayed reconciliation poll is queued as a safety net.
-    """
-    room_name = event.payload.room_name
-    if not room_name:
-        logger.warning("participant.left: no room in payload")
-        return
-
-    meeting = await meetings_controller.get_by_room_name(room_name)
-    if not meeting:
-        logger.warning("participant.left: meeting not found", room_name=room_name)
-        return
-
-    log = logger.bind(
-        meeting_id=meeting.id,
-        room_name=room_name,
-        session_id=event.payload.session_id,
-        user_id=event.payload.user_id,
-    )
-
-    existing = await daily_participant_sessions_controller.get_open_session(
-        meeting.id, event.payload.session_id
-    )
-    if existing:
-        now = datetime.now(timezone.utc)
-        await daily_participant_sessions_controller.batch_close_sessions(
-            [existing.id], left_at=now
-        )
-        active = await daily_participant_sessions_controller.get_active_by_meeting(
-            meeting.id
-        )
-        await meetings_controller.update_meeting(meeting.id, num_clients=len(active))
-        log.info(
-            "Participant left - session closed",
-            remaining_clients=len(active),
-            duration=event.payload.duration,
-        )
-    else:
-        log.info(
-            "Participant left - no open session found, skipping direct close",
-            duration=event.payload.duration,
-        )
-
-    # Delayed reconciliation poll as safety net
-    poll_daily_room_presence_task.apply_async(args=[meeting.id], countdown=5)
+    """Queue poll task for presence reconciliation."""
+    await _queue_poll_for_room(
+        event.payload.room_name,
+        "participant.left",
+        event.payload.user_id,
+        event.payload.session_id,
+        duration=event.payload.duration,
+    )
 
 
 async def _handle_recording_started(event: RecordingStartedEvent):
     room_name = event.payload.room_name

View File

@@ -1,9 +1,9 @@
-import json
+import logging
 from datetime import datetime, timedelta, timezone
 from enum import Enum
 from typing import Annotated, Any, Literal, Optional
 
-from fastapi import APIRouter, Depends, HTTPException, Request
+from fastapi import APIRouter, Depends, HTTPException
 from fastapi_pagination import Page
 from fastapi_pagination.ext.databases import apaginate
 from pydantic import BaseModel
@@ -12,24 +12,18 @@ from redis.exceptions import LockError
 import reflector.auth as auth
 from reflector.db import get_database
 from reflector.db.calendar_events import calendar_events_controller
-from reflector.db.daily_participant_sessions import (
-    DailyParticipantSession,
-    daily_participant_sessions_controller,
-)
 from reflector.db.meetings import meetings_controller
 from reflector.db.rooms import rooms_controller
-from reflector.logger import logger
-from reflector.presence.pending_joins import create_pending_join, delete_pending_join
-from reflector.redis_cache import RedisAsyncLock, get_async_redis_client
+from reflector.redis_cache import RedisAsyncLock
 from reflector.schemas.platform import Platform
 from reflector.services.ics_sync import ics_sync_service
 from reflector.settings import settings
-from reflector.utils.string import NonEmptyString
 from reflector.utils.url import add_query_param
 from reflector.video_platforms.factory import create_platform_client
-from reflector.worker.process import poll_daily_room_presence_task
 from reflector.worker.webhook import test_webhook
 
+logger = logging.getLogger(__name__)
+
 
 class Room(BaseModel):
     id: str
@@ -603,221 +597,3 @@ async def rooms_join_meeting(
     meeting.room_url = add_query_param(meeting.room_url, "t", token)
     return meeting
class JoiningRequest(BaseModel):
"""Request body for /joining endpoint (before WebRTC handshake)."""
connection_id: NonEmptyString
"""Unique identifier for this connection. Generated by client via crypto.randomUUID()."""
class JoiningResponse(BaseModel):
status: Literal["ok"]
class JoinedRequest(BaseModel):
"""Request body for /joined endpoint (after WebRTC connection established)."""
connection_id: NonEmptyString
"""Must match the connection_id sent to /joining."""
session_id: NonEmptyString | None = None
"""Daily.co session_id for direct session creation. Optional for backward compat."""
user_name: str | None = None
"""Display name from Daily.co participant data."""
class JoinedResponse(BaseModel):
status: Literal["ok"]
def _get_pending_join_key(
user: Optional[auth.UserInfo], connection_id: NonEmptyString
) -> str:
"""Get a unique key for pending join tracking.
Uses user ID for authenticated users, connection_id for anonymous users.
This ensures each browser tab has its own unique pending join record.
"""
if user:
return f"{user['sub']}:{connection_id}"
return f"anon:{connection_id}"
@router.post(
"/rooms/{room_name}/meetings/{meeting_id}/joining", response_model=JoiningResponse
)
async def meeting_joining(
room_name: str,
meeting_id: str,
body: JoiningRequest,
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
) -> JoiningResponse:
"""Signal intent to join meeting. Called before WebRTC handshake starts.
This creates a pending join record that prevents the meeting from being
deactivated while the WebRTC handshake is in progress. The record expires
automatically after 30 seconds if the connection is not established.
"""
log = logger.bind(
room_name=room_name, meeting_id=meeting_id, connection_id=body.connection_id
)
room = await rooms_controller.get_by_name(room_name)
if not room:
raise HTTPException(status_code=404, detail="Room not found")
meeting = await meetings_controller.get_by_id(meeting_id, room=room)
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")
if not meeting.is_active:
raise HTTPException(status_code=400, detail="Meeting is not active")
join_key = _get_pending_join_key(user, body.connection_id)
redis = await get_async_redis_client()
try:
await create_pending_join(redis, meeting_id, join_key)
log.debug("Created pending join intent", join_key=join_key)
finally:
await redis.aclose()
return JoiningResponse(status="ok")
@router.post(
"/rooms/{room_name}/meetings/{meeting_id}/joined", response_model=JoinedResponse
)
async def meeting_joined(
room_name: str,
meeting_id: str,
body: JoinedRequest,
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
) -> JoinedResponse:
"""Signal that WebRTC connection is established.
This clears the pending join record, confirming the user has successfully
connected to the meeting. Safe to call even if meeting was deactivated
during the handshake (idempotent cleanup).
"""
log = logger.bind(
room_name=room_name, meeting_id=meeting_id, connection_id=body.connection_id
)
room = await rooms_controller.get_by_name(room_name)
if not room:
raise HTTPException(status_code=404, detail="Room not found")
meeting = await meetings_controller.get_by_id(meeting_id, room=room)
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")
# Note: We don't check is_active here - the /joined call is a cleanup operation
# and should succeed even if the meeting was deactivated during the handshake
join_key = _get_pending_join_key(user, body.connection_id)
redis = await get_async_redis_client()
try:
await delete_pending_join(redis, meeting_id, join_key)
log.debug("Cleared pending join intent", join_key=join_key)
finally:
await redis.aclose()
# Create session directly when session_id provided (instant presence update)
if body.session_id and meeting.platform == "daily":
session = DailyParticipantSession(
id=f"{meeting.id}:{body.session_id}",
meeting_id=meeting.id,
room_id=room.id,
session_id=body.session_id,
user_id=user["sub"] if user else None,
user_name=body.user_name or "Anonymous",
joined_at=datetime.now(timezone.utc),
)
await daily_participant_sessions_controller.batch_upsert_sessions([session])
active = await daily_participant_sessions_controller.get_active_by_meeting(
meeting.id
)
await meetings_controller.update_meeting(meeting.id, num_clients=len(active))
log.info(
"Session created directly",
session_id=body.session_id,
num_clients=len(active),
)
# Trigger presence poll as reconciliation safety net
if meeting.platform == "daily":
poll_daily_room_presence_task.apply_async(args=[meeting_id], countdown=3)
return JoinedResponse(status="ok")
class LeaveResponse(BaseModel):
status: Literal["ok"]
@router.post(
"/rooms/{room_name}/meetings/{meeting_id}/leave", response_model=LeaveResponse
)
async def meeting_leave(
room_name: str,
meeting_id: str,
request: Request,
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
) -> LeaveResponse:
"""Trigger presence update when user leaves meeting.
When session_id is provided in the body, closes the session directly
for instant presence update. Falls back to polling when session_id
is not available (e.g., sendBeacon without frame access).
Called on tab close/navigation via sendBeacon().
"""
# Parse session_id from body (sendBeacon may send text/plain or no body)
session_id: str | None = None
try:
body_bytes = await request.body()
if body_bytes:
data = json.loads(body_bytes)
raw = data.get("session_id")
if isinstance(raw, str) and raw.strip():
session_id = raw.strip()
except Exception:
pass
room = await rooms_controller.get_by_name(room_name)
if not room:
raise HTTPException(status_code=404, detail="Room not found")
meeting = await meetings_controller.get_by_id(meeting_id, room=room)
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")
# Close session directly when session_id provided
session_closed = False
if session_id and meeting.platform == "daily":
existing = await daily_participant_sessions_controller.get_open_session(
meeting.id, session_id
)
if existing:
await daily_participant_sessions_controller.batch_close_sessions(
[existing.id], left_at=datetime.now(timezone.utc)
)
active = await daily_participant_sessions_controller.get_active_by_meeting(
meeting.id
)
await meetings_controller.update_meeting(
meeting.id, num_clients=len(active)
)
session_closed = True
# Only queue poll if we couldn't close directly — the poll runs before
# Daily.co API removes the participant, which would undo our correct count
if meeting.platform == "daily" and not session_closed:
poll_daily_room_presence_task.apply_async(args=[meeting_id], countdown=3)
return LeaveResponse(status="ok")

View File

@@ -31,10 +31,9 @@ from reflector.pipelines.main_multitrack_pipeline import (
     task_pipeline_multitrack_process,
 )
 from reflector.pipelines.topic_processing import EmptyPipeline
-from reflector.presence.pending_joins import has_pending_joins
 from reflector.processors import AudioFileWriterProcessor
 from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
-from reflector.redis_cache import RedisAsyncLock, get_async_redis_client
+from reflector.redis_cache import RedisAsyncLock
 from reflector.settings import settings
 from reflector.storage import get_transcripts_storage
 from reflector.utils.daily import (
@@ -870,18 +869,6 @@ async def process_meetings():
                     logger_.debug("Meeting not yet started, keep it")
 
                 if should_deactivate:
-                    # Check for pending joins before deactivating
-                    # Users might be in the process of connecting via WebRTC
-                    redis = await get_async_redis_client()
-                    try:
-                        if await has_pending_joins(redis, meeting.id):
-                            logger_.info(
-                                "Meeting has pending joins, skipping deactivation"
-                            )
-                            continue
-                    finally:
-                        await redis.aclose()
                     await meetings_controller.update_meeting(
                         meeting.id, is_active=False
                     )

View File

@@ -11,6 +11,7 @@ broadcast messages to all connected websockets.
 import asyncio
 import json
+import threading
 
 import redis.asyncio as redis
 from fastapi import WebSocket
@@ -97,7 +98,6 @@ class WebsocketManager:
     async def _pubsub_data_reader(self, pubsub_subscriber):
         while True:
-            # timeout=1.0 prevents tight CPU loop when no messages available
             message = await pubsub_subscriber.get_message(
                 ignore_subscribe_messages=True
             )
@@ -109,38 +109,29 @@ class WebsocketManager:
                     await socket.send_json(data)
 
 
-# Process-global singleton to ensure only one WebsocketManager instance exists.
-# Multiple instances would cause resource leaks and CPU issues.
-_ws_manager: WebsocketManager | None = None
-
-
 def get_ws_manager() -> WebsocketManager:
     """
-    Returns the global WebsocketManager singleton.
+    Returns the WebsocketManager instance for managing websockets.
 
-    Creates instance on first call, subsequent calls return cached instance.
-    Thread-safe via GIL. Concurrent initialization may create duplicate
-    instances but last write wins (acceptable for this use case).
+    This function initializes and returns the WebsocketManager instance,
+    which is responsible for managing websockets and handling websocket
+    connections.
 
     Returns:
-        WebsocketManager: The global WebsocketManager instance.
+        WebsocketManager: The initialized WebsocketManager instance.
+
+    Raises:
+        ImportError: If the 'reflector.settings' module cannot be imported.
+        RedisConnectionError: If there is an error connecting to the Redis server.
     """
-    global _ws_manager
-
-    if _ws_manager is not None:
-        return _ws_manager
+    local = threading.local()
+    if hasattr(local, "ws_manager"):
+        return local.ws_manager
 
-    # No lock needed - GIL makes this safe enough
-    # Worst case: race creates two instances, last assignment wins
     pubsub_client = RedisPubSubManager(
        host=settings.REDIS_HOST,
        port=settings.REDIS_PORT,
    )
-    _ws_manager = WebsocketManager(pubsub_client=pubsub_client)
-    return _ws_manager
-
-
-def reset_ws_manager() -> None:
-    """Reset singleton for testing. DO NOT use in production."""
-    global _ws_manager
-    _ws_manager = None
+    ws_manager = WebsocketManager(pubsub_client=pubsub_client)
+    local.ws_manager = ws_manager
+    return ws_manager

View File

@@ -1,5 +1,6 @@
 import os
 from contextlib import asynccontextmanager
+from tempfile import NamedTemporaryFile
 from unittest.mock import patch
 
 import pytest
@@ -332,14 +333,11 @@ def celery_enable_logging():
 
 @pytest.fixture(scope="session")
 def celery_config():
-    redis_host = os.environ.get("REDIS_HOST", "localhost")
-    redis_port = os.environ.get("REDIS_PORT", "6379")
-    # Use db 2 to avoid conflicts with main app
-    redis_url = f"redis://{redis_host}:{redis_port}/2"
-    yield {
-        "broker_url": redis_url,
-        "result_backend": redis_url,
-    }
+    with NamedTemporaryFile() as f:
+        yield {
+            "broker_url": "memory://",
+            "result_backend": f"db+sqlite:///{f.name}",
+        }
 
 
 @pytest.fixture(scope="session")
@@ -372,12 +370,9 @@ async def ws_manager_in_memory(monkeypatch):
         def __init__(self, queue: asyncio.Queue):
             self.queue = queue
 
-        async def get_message(
-            self, ignore_subscribe_messages: bool = True, timeout: float | None = None
-        ):
-            wait_timeout = timeout if timeout is not None else 0.05
+        async def get_message(self, ignore_subscribe_messages: bool = True):
             try:
-                return await asyncio.wait_for(self.queue.get(), timeout=wait_timeout)
+                return await asyncio.wait_for(self.queue.get(), timeout=0.05)
             except Exception:
                 return None

View File

@@ -1,213 +0,0 @@
"""Tests for direct session close on participant.left webhook.
Verifies that _handle_participant_left:
1. Closes the session directly (authoritative signal)
2. Updates num_clients from remaining active sessions
3. Queues a delayed reconciliation poll as safety net
4. Handles missing session gracefully
"""
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch
import pytest
from reflector.dailyco_api.webhooks import ParticipantLeftEvent, ParticipantLeftPayload
from reflector.db.daily_participant_sessions import DailyParticipantSession
from reflector.db.meetings import Meeting
from reflector.views.daily import _handle_participant_left
@pytest.fixture
def mock_meeting():
return Meeting(
id="meeting-123",
room_id="room-456",
room_name="test-room-20251118120000",
room_url="https://daily.co/test-room-20251118120000",
host_room_url="https://daily.co/test-room-20251118120000?t=host-token",
platform="daily",
num_clients=2,
is_active=True,
start_date=datetime.now(timezone.utc),
end_date=datetime.now(timezone.utc),
)
@pytest.fixture
def participant_left_event():
now = datetime.now(timezone.utc)
return ParticipantLeftEvent(
version="1.0.0",
type="participant.left",
id="evt-left-abc123",
payload=ParticipantLeftPayload(
room_name="test-room-20251118120000",
session_id="session-alice",
user_id="user-alice",
user_name="Alice",
joined_at=int((now - timedelta(minutes=10)).timestamp()),
duration=600,
),
event_ts=int(now.timestamp()),
)
@pytest.fixture
def existing_session():
now = datetime.now(timezone.utc)
return DailyParticipantSession(
id="meeting-123:session-alice",
meeting_id="meeting-123",
room_id="room-456",
session_id="session-alice",
user_id="user-alice",
user_name="Alice",
joined_at=now - timedelta(minutes=10),
left_at=None,
)
@pytest.mark.asyncio
@patch("reflector.views.daily.poll_daily_room_presence_task")
@patch("reflector.views.daily.meetings_controller")
@patch("reflector.views.daily.daily_participant_sessions_controller")
async def test_closes_session_and_updates_num_clients(
mock_sessions_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_meeting,
participant_left_event,
existing_session,
):
"""Webhook directly closes session and updates num_clients from remaining active count."""
mock_meetings_ctrl.get_by_room_name = AsyncMock(return_value=mock_meeting)
mock_sessions_ctrl.get_open_session = AsyncMock(return_value=existing_session)
mock_sessions_ctrl.batch_close_sessions = AsyncMock()
# One remaining active session after close
remaining = DailyParticipantSession(
id="meeting-123:session-bob",
meeting_id="meeting-123",
room_id="room-456",
session_id="session-bob",
user_id="user-bob",
user_name="Bob",
joined_at=datetime.now(timezone.utc),
left_at=None,
)
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(return_value=[remaining])
mock_meetings_ctrl.update_meeting = AsyncMock()
await _handle_participant_left(participant_left_event)
# Session closed
mock_sessions_ctrl.batch_close_sessions.assert_called_once()
closed_ids = mock_sessions_ctrl.batch_close_sessions.call_args.args[0]
assert closed_ids == [existing_session.id]
# num_clients updated to remaining count
mock_meetings_ctrl.update_meeting.assert_called_once_with(
mock_meeting.id, num_clients=1
)
# Delayed reconciliation poll queued
mock_poll_task.apply_async.assert_called_once()
call_kwargs = mock_poll_task.apply_async.call_args.kwargs
assert call_kwargs["countdown"] == 5
assert call_kwargs["args"] == [mock_meeting.id]
@pytest.mark.asyncio
@patch("reflector.views.daily.poll_daily_room_presence_task")
@patch("reflector.views.daily.meetings_controller")
@patch("reflector.views.daily.daily_participant_sessions_controller")
async def test_handles_missing_session(
mock_sessions_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_meeting,
participant_left_event,
):
"""No crash when session not found in DB — still queues reconciliation poll."""
mock_meetings_ctrl.get_by_room_name = AsyncMock(return_value=mock_meeting)
mock_sessions_ctrl.get_open_session = AsyncMock(return_value=None)
await _handle_participant_left(participant_left_event)
# No session close attempted
mock_sessions_ctrl.batch_close_sessions.assert_not_called()
# No num_clients update (no authoritative data without session)
mock_meetings_ctrl.update_meeting.assert_not_called()
# Still queues reconciliation poll
mock_poll_task.apply_async.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.views.daily.poll_daily_room_presence_task")
@patch("reflector.views.daily.meetings_controller")
@patch("reflector.views.daily.daily_participant_sessions_controller")
async def test_updates_num_clients_to_zero_when_last_participant_leaves(
mock_sessions_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_meeting,
participant_left_event,
existing_session,
):
"""num_clients set to 0 when no active sessions remain."""
mock_meetings_ctrl.get_by_room_name = AsyncMock(return_value=mock_meeting)
mock_sessions_ctrl.get_open_session = AsyncMock(return_value=existing_session)
mock_sessions_ctrl.batch_close_sessions = AsyncMock()
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(return_value=[])
mock_meetings_ctrl.update_meeting = AsyncMock()
await _handle_participant_left(participant_left_event)
mock_meetings_ctrl.update_meeting.assert_called_once_with(
mock_meeting.id, num_clients=0
)
@pytest.mark.asyncio
@patch("reflector.views.daily.poll_daily_room_presence_task")
@patch("reflector.views.daily.meetings_controller")
async def test_no_room_name_in_event(
mock_meetings_ctrl,
mock_poll_task,
):
"""No crash when room_name is missing from webhook payload."""
event = ParticipantLeftEvent(
version="1.0.0",
type="participant.left",
id="evt-left-no-room",
payload=ParticipantLeftPayload(
room_name=None,
session_id="session-x",
user_id="user-x",
user_name="X",
joined_at=int(datetime.now(timezone.utc).timestamp()),
duration=0,
),
event_ts=int(datetime.now(timezone.utc).timestamp()),
)
await _handle_participant_left(event)
mock_meetings_ctrl.get_by_room_name.assert_not_called()
mock_poll_task.apply_async.assert_not_called()
@pytest.mark.asyncio
@patch("reflector.views.daily.poll_daily_room_presence_task")
@patch("reflector.views.daily.meetings_controller")
async def test_meeting_not_found(
mock_meetings_ctrl,
mock_poll_task,
participant_left_event,
):
"""No crash when meeting not found for room_name."""
mock_meetings_ctrl.get_by_room_name = AsyncMock(return_value=None)
await _handle_participant_left(participant_left_event)
mock_poll_task.apply_async.assert_not_called()

View File

@@ -1,339 +0,0 @@
"""Tests for direct session management via /joined and /leave endpoints.
Verifies that:
1. /joined with session_id creates session directly, updates num_clients
2. /joined without session_id (backward compat) still works, queues poll
3. /leave with session_id closes session, updates num_clients
4. /leave without session_id falls back to poll
5. Duplicate /joined calls are idempotent (upsert)
6. /leave for already-closed session is a no-op
"""
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from reflector.db.daily_participant_sessions import DailyParticipantSession
from reflector.db.meetings import Meeting
from reflector.views.rooms import (
JoinedRequest,
meeting_joined,
meeting_leave,
)
@pytest.fixture
def mock_room():
room = MagicMock()
room.id = "room-456"
room.name = "test-room"
room.platform = "daily"
return room
@pytest.fixture
def mock_meeting():
return Meeting(
id="meeting-123",
room_id="room-456",
room_name="test-room-20251118120000",
room_url="https://daily.co/test-room",
host_room_url="https://daily.co/test-room?t=host",
platform="daily",
num_clients=0,
is_active=True,
start_date=datetime.now(timezone.utc),
end_date=datetime.now(timezone.utc) + timedelta(hours=8),
)
@pytest.fixture
def mock_redis():
redis = AsyncMock()
redis.aclose = AsyncMock()
return redis
@pytest.fixture
def mock_request_with_session_id():
"""Mock Request with session_id in JSON body."""
request = AsyncMock()
request.body = AsyncMock(return_value=b'{"session_id": "session-abc"}')
return request
@pytest.fixture
def mock_request_empty_body():
"""Mock Request with empty JSON body (old frontend / no frame access)."""
request = AsyncMock()
request.body = AsyncMock(return_value=b"{}")
return request
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.delete_pending_join")
@patch("reflector.views.rooms.get_async_redis_client")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_joined_with_session_id_creates_session(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_redis_client,
mock_delete_pending,
mock_poll_task,
mock_room,
mock_meeting,
mock_redis,
):
"""session_id in /joined -> create session + update num_clients."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
mock_redis_client.return_value = mock_redis
mock_sessions_ctrl.batch_upsert_sessions = AsyncMock()
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(
return_value=[MagicMock()] # 1 active session
)
mock_meetings_ctrl.update_meeting = AsyncMock()
body = JoinedRequest(
connection_id="conn-1",
session_id="session-abc",
user_name="Alice",
)
result = await meeting_joined(
"test-room", "meeting-123", body, user={"sub": "user-1"}
)
assert result.status == "ok"
# Session created via upsert
mock_sessions_ctrl.batch_upsert_sessions.assert_called_once()
sessions = mock_sessions_ctrl.batch_upsert_sessions.call_args.args[0]
assert len(sessions) == 1
assert sessions[0].session_id == "session-abc"
assert sessions[0].meeting_id == "meeting-123"
assert sessions[0].room_id == "room-456"
assert sessions[0].user_name == "Alice"
assert sessions[0].user_id == "user-1"
assert sessions[0].id == "meeting-123:session-abc"
# num_clients updated
mock_meetings_ctrl.update_meeting.assert_called_once_with(
"meeting-123", num_clients=1
)
# Reconciliation poll still queued
mock_poll_task.apply_async.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.delete_pending_join")
@patch("reflector.views.rooms.get_async_redis_client")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_joined_without_session_id_backward_compat(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_redis_client,
mock_delete_pending,
mock_poll_task,
mock_room,
mock_meeting,
mock_redis,
):
"""No session_id in /joined -> no session create, still queues poll."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
mock_redis_client.return_value = mock_redis
body = JoinedRequest(connection_id="conn-1")
result = await meeting_joined(
"test-room", "meeting-123", body, user={"sub": "user-1"}
)
assert result.status == "ok"
mock_sessions_ctrl.batch_upsert_sessions.assert_not_called()
mock_poll_task.apply_async.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.delete_pending_join")
@patch("reflector.views.rooms.get_async_redis_client")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_joined_anonymous_user_sets_null_user_id(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_redis_client,
mock_delete_pending,
mock_poll_task,
mock_room,
mock_meeting,
mock_redis,
):
"""Anonymous user -> session.user_id is None, user_name defaults to 'Anonymous'."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
mock_redis_client.return_value = mock_redis
mock_sessions_ctrl.batch_upsert_sessions = AsyncMock()
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(return_value=[MagicMock()])
mock_meetings_ctrl.update_meeting = AsyncMock()
body = JoinedRequest(connection_id="conn-1", session_id="session-abc")
result = await meeting_joined("test-room", "meeting-123", body, user=None)
assert result.status == "ok"
sessions = mock_sessions_ctrl.batch_upsert_sessions.call_args.args[0]
assert sessions[0].user_id is None
assert sessions[0].user_name == "Anonymous"
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_leave_with_session_id_closes_session(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_room,
mock_meeting,
mock_request_with_session_id,
):
"""session_id in /leave -> close session + update num_clients."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
existing_session = DailyParticipantSession(
id="meeting-123:session-abc",
meeting_id="meeting-123",
room_id="room-456",
session_id="session-abc",
user_id="user-1",
user_name="Alice",
joined_at=datetime.now(timezone.utc) - timedelta(minutes=5),
left_at=None,
)
mock_sessions_ctrl.get_open_session = AsyncMock(return_value=existing_session)
mock_sessions_ctrl.batch_close_sessions = AsyncMock()
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(return_value=[])
mock_meetings_ctrl.update_meeting = AsyncMock()
result = await meeting_leave(
"test-room", "meeting-123", mock_request_with_session_id, user={"sub": "user-1"}
)
assert result.status == "ok"
# Session closed
mock_sessions_ctrl.batch_close_sessions.assert_called_once()
closed_ids = mock_sessions_ctrl.batch_close_sessions.call_args.args[0]
assert closed_ids == ["meeting-123:session-abc"]
# num_clients updated
mock_meetings_ctrl.update_meeting.assert_called_once_with(
"meeting-123", num_clients=0
)
# No poll — direct close is authoritative, poll would race with API latency
mock_poll_task.apply_async.assert_not_called()
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
async def test_leave_without_session_id_falls_back_to_poll(
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_room,
mock_meeting,
mock_request_empty_body,
):
"""No session_id in /leave -> just queues poll as before."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
result = await meeting_leave(
"test-room", "meeting-123", mock_request_empty_body, user=None
)
assert result.status == "ok"
mock_poll_task.apply_async.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.delete_pending_join")
@patch("reflector.views.rooms.get_async_redis_client")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_duplicate_joined_is_idempotent(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_redis_client,
mock_delete_pending,
mock_poll_task,
mock_room,
mock_meeting,
mock_redis,
):
"""Calling /joined twice with same session_id -> upsert both times, no error."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
mock_redis_client.return_value = mock_redis
mock_sessions_ctrl.batch_upsert_sessions = AsyncMock()
mock_sessions_ctrl.get_active_by_meeting = AsyncMock(return_value=[MagicMock()])
mock_meetings_ctrl.update_meeting = AsyncMock()
body = JoinedRequest(
connection_id="conn-1", session_id="session-abc", user_name="Alice"
)
await meeting_joined("test-room", "meeting-123", body, user={"sub": "user-1"})
await meeting_joined("test-room", "meeting-123", body, user={"sub": "user-1"})
assert mock_sessions_ctrl.batch_upsert_sessions.call_count == 2
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.meetings_controller")
@patch("reflector.views.rooms.rooms_controller")
@patch("reflector.views.rooms.daily_participant_sessions_controller")
async def test_leave_already_closed_session_is_noop(
mock_sessions_ctrl,
mock_rooms_ctrl,
mock_meetings_ctrl,
mock_poll_task,
mock_room,
mock_meeting,
mock_request_with_session_id,
):
"""/leave for already-closed session -> no close attempted, just poll."""
mock_rooms_ctrl.get_by_name = AsyncMock(return_value=mock_room)
mock_meetings_ctrl.get_by_id = AsyncMock(return_value=mock_meeting)
mock_sessions_ctrl.get_open_session = AsyncMock(return_value=None)
result = await meeting_leave(
"test-room", "meeting-123", mock_request_with_session_id, user=None
)
assert result.status == "ok"
mock_sessions_ctrl.batch_close_sessions.assert_not_called()
mock_meetings_ctrl.update_meeting.assert_not_called()
mock_poll_task.apply_async.assert_called_once()

View File

@@ -1,367 +0,0 @@
"""Integration tests for /joining and /joined endpoints.
Tests for the join intent tracking to prevent race conditions during
WebRTC handshake when users join meetings.
"""
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch
import pytest
from reflector.db.meetings import Meeting
from reflector.presence.pending_joins import PENDING_JOIN_PREFIX
TEST_CONNECTION_ID = "test-connection-uuid-12345"
@pytest.fixture
def mock_room():
"""Mock room object."""
from reflector.db.rooms import Room
return Room(
id="room-123",
name="test-room",
user_id="owner-user",
created_at=datetime.now(timezone.utc),
zulip_auto_post=False,
zulip_stream="",
zulip_topic="",
is_locked=False,
room_mode="normal",
recording_type="cloud",
recording_trigger="automatic",
is_shared=True,
platform="daily",
skip_consent=False,
)
@pytest.fixture
def mock_meeting():
"""Mock meeting object."""
now = datetime.now(timezone.utc)
return Meeting(
id="meeting-456",
room_id="room-123",
room_name="test-room-20251118120000",
room_url="https://daily.co/test-room-20251118120000",
host_room_url="https://daily.co/test-room-20251118120000?t=host",
platform="daily",
num_clients=0,
is_active=True,
start_date=now,
end_date=now + timedelta(hours=1),
)
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_joining_endpoint_creates_pending_join(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
authenticated_client,
):
"""Test that /joining endpoint creates pending join in Redis."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.setex = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
response = await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 200
assert response.json() == {"status": "ok"}
# Verify Redis setex was called with correct key pattern
mock_redis.setex.assert_called_once()
call_args = mock_redis.setex.call_args[0]
assert call_args[0].startswith(f"{PENDING_JOIN_PREFIX}:{mock_meeting.id}:")
assert TEST_CONNECTION_ID in call_args[0]
@pytest.mark.asyncio
@patch("reflector.views.rooms.poll_daily_room_presence_task")
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_joined_endpoint_deletes_pending_join(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_poll_task,
mock_room,
mock_meeting,
client,
authenticated_client,
):
"""Test that /joined endpoint deletes pending join from Redis."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.delete = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
response = await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joined",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 200
assert response.json() == {"status": "ok"}
# Verify Redis delete was called with correct key pattern
mock_redis.delete.assert_called_once()
call_args = mock_redis.delete.call_args[0]
assert call_args[0].startswith(f"{PENDING_JOIN_PREFIX}:{mock_meeting.id}:")
assert TEST_CONNECTION_ID in call_args[0]
# Verify presence poll was triggered for Daily meetings
mock_poll_task.delay.assert_called_once_with(mock_meeting.id)
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
async def test_joining_endpoint_room_not_found(
mock_get_room,
client,
authenticated_client,
):
"""Test that /joining returns 404 when room not found."""
mock_get_room.return_value = None
response = await client.post(
"/rooms/nonexistent-room/meetings/meeting-123/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 404
assert response.json()["detail"] == "Room not found"
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
async def test_joining_endpoint_meeting_not_found(
mock_get_meeting,
mock_get_room,
mock_room,
client,
authenticated_client,
):
"""Test that /joining returns 404 when meeting not found."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = None
response = await client.post(
f"/rooms/{mock_room.name}/meetings/nonexistent-meeting/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 404
assert response.json()["detail"] == "Meeting not found"
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
async def test_joining_endpoint_meeting_not_active(
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
authenticated_client,
):
"""Test that /joining returns 400 when meeting is not active."""
mock_get_room.return_value = mock_room
inactive_meeting = mock_meeting.model_copy(update={"is_active": False})
mock_get_meeting.return_value = inactive_meeting
response = await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 400
assert response.json()["detail"] == "Meeting is not active"
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_joining_endpoint_anonymous_user(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
):
"""Test that /joining works for anonymous users with unique connection_id."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.setex = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
response = await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
assert response.status_code == 200
assert response.json() == {"status": "ok"}
# Verify Redis setex was called with "anon:" prefix and connection_id
call_args = mock_redis.setex.call_args[0]
assert ":anon:" in call_args[0]
assert TEST_CONNECTION_ID in call_args[0]
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_joining_endpoint_redis_closed_on_success(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
authenticated_client,
):
"""Test that Redis connection is closed after successful operation."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.setex = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
mock_redis.aclose.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_joining_endpoint_redis_closed_on_error(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
authenticated_client,
):
"""Test that Redis connection is closed even when operation fails."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.setex = AsyncMock(side_effect=Exception("Redis error"))
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
with pytest.raises(Exception):
await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": TEST_CONNECTION_ID},
)
mock_redis.aclose.assert_called_once()
@pytest.mark.asyncio
async def test_joining_endpoint_requires_connection_id(
client,
):
"""Test that /joining returns 422 when connection_id is missing."""
response = await client.post(
"/rooms/test-room/meetings/meeting-123/joining",
json={},
)
assert response.status_code == 422 # Validation error
@pytest.mark.asyncio
async def test_joining_endpoint_rejects_empty_connection_id(
client,
):
"""Test that /joining returns 422 when connection_id is empty string."""
response = await client.post(
"/rooms/test-room/meetings/meeting-123/joining",
json={"connection_id": ""},
)
assert response.status_code == 422 # Validation error (NonEmptyString)
@pytest.mark.asyncio
@patch("reflector.views.rooms.rooms_controller.get_by_name")
@patch("reflector.views.rooms.meetings_controller.get_by_id")
@patch("reflector.views.rooms.get_async_redis_client")
async def test_different_connection_ids_create_different_keys(
mock_get_redis,
mock_get_meeting,
mock_get_room,
mock_room,
mock_meeting,
client,
):
"""Test that different connection_ids create different Redis keys."""
mock_get_room.return_value = mock_room
mock_get_meeting.return_value = mock_meeting
mock_redis = AsyncMock()
mock_redis.setex = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
# First connection
await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": "connection-1"},
)
key1 = mock_redis.setex.call_args[0][0]
mock_redis.setex.reset_mock()
# Second connection (different tab)
await client.post(
f"/rooms/{mock_room.name}/meetings/{mock_meeting.id}/joining",
json={"connection_id": "connection-2"},
)
key2 = mock_redis.setex.call_args[0][0]
# Keys should be different
assert key1 != key2
assert "connection-1" in key1
assert "connection-2" in key2

View File

@@ -1,153 +0,0 @@
"""Tests for pending joins Redis helper functions.
TDD tests for tracking join intent to prevent race conditions during
WebRTC handshake when users join meetings.
"""
from unittest.mock import AsyncMock
import pytest
from reflector.presence.pending_joins import (
PENDING_JOIN_PREFIX,
PENDING_JOIN_TTL,
create_pending_join,
delete_pending_join,
has_pending_joins,
)
@pytest.fixture
def mock_redis():
"""Mock async Redis client."""
redis = AsyncMock()
redis.setex = AsyncMock()
redis.delete = AsyncMock()
redis.scan = AsyncMock(return_value=(0, []))
return redis
@pytest.mark.asyncio
async def test_create_pending_join_sets_key_with_ttl(mock_redis):
"""Test that create_pending_join stores key with correct TTL."""
meeting_id = "meeting-123"
user_id = "user-456"
await create_pending_join(mock_redis, meeting_id, user_id)
expected_key = f"{PENDING_JOIN_PREFIX}:{meeting_id}:{user_id}"
mock_redis.setex.assert_called_once()
call_args = mock_redis.setex.call_args
assert call_args[0][0] == expected_key
assert call_args[0][1] == PENDING_JOIN_TTL
# Value should be a timestamp string
assert call_args[0][2] is not None
@pytest.mark.asyncio
async def test_delete_pending_join_removes_key(mock_redis):
"""Test that delete_pending_join removes the key."""
meeting_id = "meeting-123"
user_id = "user-456"
await delete_pending_join(mock_redis, meeting_id, user_id)
expected_key = f"{PENDING_JOIN_PREFIX}:{meeting_id}:{user_id}"
mock_redis.delete.assert_called_once_with(expected_key)
@pytest.mark.asyncio
async def test_has_pending_joins_returns_false_when_no_keys(mock_redis):
"""Test has_pending_joins returns False when no matching keys."""
mock_redis.scan.return_value = (0, [])
result = await has_pending_joins(mock_redis, "meeting-123")
assert result is False
mock_redis.scan.assert_called_once()
call_kwargs = mock_redis.scan.call_args.kwargs
assert call_kwargs["match"] == f"{PENDING_JOIN_PREFIX}:meeting-123:*"
@pytest.mark.asyncio
async def test_has_pending_joins_returns_true_when_keys_exist(mock_redis):
"""Test has_pending_joins returns True when matching keys found."""
mock_redis.scan.return_value = (0, [b"pending_join:meeting-123:user-1"])
result = await has_pending_joins(mock_redis, "meeting-123")
assert result is True
@pytest.mark.asyncio
async def test_has_pending_joins_scans_with_correct_pattern(mock_redis):
"""Test has_pending_joins uses correct scan pattern."""
meeting_id = "meeting-abc-def"
mock_redis.scan.return_value = (0, [])
await has_pending_joins(mock_redis, meeting_id)
expected_pattern = f"{PENDING_JOIN_PREFIX}:{meeting_id}:*"
mock_redis.scan.assert_called_once()
call_kwargs = mock_redis.scan.call_args.kwargs
assert call_kwargs["match"] == expected_pattern
assert call_kwargs["count"] == 100
@pytest.mark.asyncio
async def test_multiple_users_pending_joins(mock_redis):
"""Test that multiple users can have pending joins for same meeting."""
meeting_id = "meeting-123"
# Simulate two pending joins
mock_redis.scan.return_value = (
0,
[b"pending_join:meeting-123:user-1", b"pending_join:meeting-123:user-2"],
)
result = await has_pending_joins(mock_redis, meeting_id)
assert result is True
@pytest.mark.asyncio
async def test_pending_join_ttl_value():
"""Test that PENDING_JOIN_TTL has expected value."""
# 30 seconds should be enough for WebRTC handshake but not too long
assert PENDING_JOIN_TTL == 30
@pytest.mark.asyncio
async def test_pending_join_prefix_value():
"""Test that PENDING_JOIN_PREFIX has expected value."""
assert PENDING_JOIN_PREFIX == "pending_join"
@pytest.mark.asyncio
async def test_has_pending_joins_multi_iteration_scan_no_keys(mock_redis):
"""Test has_pending_joins iterates until cursor returns 0."""
# Simulate multi-iteration scan: cursor 100 -> cursor 50 -> cursor 0
mock_redis.scan.side_effect = [
(100, []), # First iteration, no keys, continue
(50, []), # Second iteration, no keys, continue
(0, []), # Third iteration, cursor 0, done
]
result = await has_pending_joins(mock_redis, "meeting-123")
assert result is False
assert mock_redis.scan.call_count == 3
@pytest.mark.asyncio
async def test_has_pending_joins_multi_iteration_finds_key_later(mock_redis):
"""Test has_pending_joins finds key on second iteration."""
# Simulate finding key on second scan iteration
mock_redis.scan.side_effect = [
(100, []), # First iteration, no keys
(0, [b"pending_join:meeting-123:user-1"]), # Second iteration, found key
]
result = await has_pending_joins(mock_redis, "meeting-123")
assert result is True
assert mock_redis.scan.call_count == 2
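Read together, these assertions essentially specify the helper module. A minimal sketch that satisfies them is below; the constants and call shapes come straight from the tests, while the timestamp payload and the untyped redis parameter are assumptions.

# Hedged sketch of reflector/presence/pending_joins, reconstructed from the tests above.
import time

PENDING_JOIN_PREFIX = "pending_join"
PENDING_JOIN_TTL = 30  # seconds: enough for a WebRTC handshake, short enough to self-heal


def _key(meeting_id: str, user_id: str) -> str:
    return f"{PENDING_JOIN_PREFIX}:{meeting_id}:{user_id}"


async def create_pending_join(redis, meeting_id: str, user_id: str) -> None:
    # Positional setex(key, ttl, value); the value is a timestamp string per the assertions.
    await redis.setex(_key(meeting_id, user_id), PENDING_JOIN_TTL, str(time.time()))


async def delete_pending_join(redis, meeting_id: str, user_id: str) -> None:
    await redis.delete(_key(meeting_id, user_id))


async def has_pending_joins(redis, meeting_id: str) -> bool:
    # SCAN in batches of 100 until the cursor wraps to 0; any match means a join is in flight.
    cursor = 0
    while True:
        cursor, keys = await redis.scan(
            cursor=cursor, match=f"{PENDING_JOIN_PREFIX}:{meeting_id}:*", count=100
        )
        if keys:
            return True
        if cursor == 0:
            return False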

View File

@@ -1,241 +0,0 @@
"""Tests for process_meetings pending joins check.
Tests that process_meetings correctly skips deactivation when
pending joins exist for a meeting.
"""
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch
import pytest
from reflector.db.meetings import Meeting
def _get_process_meetings_fn():
"""Get the underlying async function without Celery/asynctask decorators."""
from reflector.worker import process
fn = process.process_meetings
# Get through both decorator layers (@shared_task and @asynctask)
if hasattr(fn, "__wrapped__"):
fn = fn.__wrapped__
if hasattr(fn, "__wrapped__"):
fn = fn.__wrapped__
return fn
@pytest.fixture
def mock_active_meeting():
"""Mock an active meeting that should be considered for deactivation."""
now = datetime.now(timezone.utc)
return Meeting(
id="meeting-123",
room_id="room-456",
room_name="test-room-20251118120000",
room_url="https://daily.co/test-room-20251118120000",
host_room_url="https://daily.co/test-room-20251118120000?t=host",
platform="daily",
num_clients=0,
is_active=True,
start_date=now - timedelta(hours=1),
end_date=now - timedelta(minutes=30), # Already ended
)
@pytest.mark.asyncio
@patch("reflector.worker.process.meetings_controller.get_all_active")
@patch("reflector.worker.process.RedisAsyncLock")
@patch("reflector.worker.process.create_platform_client")
@patch("reflector.worker.process.get_async_redis_client")
@patch("reflector.worker.process.has_pending_joins")
@patch("reflector.worker.process.meetings_controller.update_meeting")
async def test_process_meetings_skips_deactivation_with_pending_joins(
mock_update_meeting,
mock_has_pending_joins,
mock_get_redis,
mock_create_client,
mock_redis_lock_class,
mock_get_all_active,
mock_active_meeting,
):
"""Test that process_meetings skips deactivation when pending joins exist."""
process_meetings = _get_process_meetings_fn()
mock_get_all_active.return_value = [mock_active_meeting]
# Mock lock acquired
mock_lock_instance = AsyncMock()
mock_lock_instance.acquired = True
mock_lock_instance.__aenter__ = AsyncMock(return_value=mock_lock_instance)
mock_lock_instance.__aexit__ = AsyncMock()
mock_redis_lock_class.return_value = mock_lock_instance
# Mock platform client - no active sessions, but had sessions (triggers deactivation)
mock_daily_client = AsyncMock()
mock_session = AsyncMock()
mock_session.ended_at = datetime.now(timezone.utc) # Session ended
mock_daily_client.get_room_sessions = AsyncMock(return_value=[mock_session])
mock_create_client.return_value = mock_daily_client
# Mock Redis client
mock_redis = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
# Mock pending joins exist
mock_has_pending_joins.return_value = True
await process_meetings()
# Verify has_pending_joins was called
mock_has_pending_joins.assert_called_once_with(mock_redis, mock_active_meeting.id)
# Verify meeting was NOT deactivated
mock_update_meeting.assert_not_called()
# Verify Redis was closed
mock_redis.aclose.assert_called_once()
@pytest.mark.asyncio
@patch("reflector.worker.process.meetings_controller.get_all_active")
@patch("reflector.worker.process.RedisAsyncLock")
@patch("reflector.worker.process.create_platform_client")
@patch("reflector.worker.process.get_async_redis_client")
@patch("reflector.worker.process.has_pending_joins")
@patch("reflector.worker.process.meetings_controller.update_meeting")
async def test_process_meetings_deactivates_without_pending_joins(
mock_update_meeting,
mock_has_pending_joins,
mock_get_redis,
mock_create_client,
mock_redis_lock_class,
mock_get_all_active,
mock_active_meeting,
):
"""Test that process_meetings deactivates when no pending joins."""
process_meetings = _get_process_meetings_fn()
mock_get_all_active.return_value = [mock_active_meeting]
# Mock lock acquired
mock_lock_instance = AsyncMock()
mock_lock_instance.acquired = True
mock_lock_instance.__aenter__ = AsyncMock(return_value=mock_lock_instance)
mock_lock_instance.__aexit__ = AsyncMock()
mock_redis_lock_class.return_value = mock_lock_instance
# Mock platform client - no active sessions, but had sessions
mock_daily_client = AsyncMock()
mock_session = AsyncMock()
mock_session.ended_at = datetime.now(timezone.utc)
mock_daily_client.get_room_sessions = AsyncMock(return_value=[mock_session])
mock_create_client.return_value = mock_daily_client
# Mock Redis client
mock_redis = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
# Mock no pending joins
mock_has_pending_joins.return_value = False
await process_meetings()
# Verify meeting was deactivated
mock_update_meeting.assert_called_once_with(mock_active_meeting.id, is_active=False)
@pytest.mark.asyncio
@patch("reflector.worker.process.meetings_controller.get_all_active")
@patch("reflector.worker.process.RedisAsyncLock")
@patch("reflector.worker.process.create_platform_client")
async def test_process_meetings_no_check_when_active_sessions(
mock_create_client,
mock_redis_lock_class,
mock_get_all_active,
mock_active_meeting,
):
"""Test that pending joins check is skipped when there are active sessions."""
process_meetings = _get_process_meetings_fn()
mock_get_all_active.return_value = [mock_active_meeting]
# Mock lock acquired
mock_lock_instance = AsyncMock()
mock_lock_instance.acquired = True
mock_lock_instance.__aenter__ = AsyncMock(return_value=mock_lock_instance)
mock_lock_instance.__aexit__ = AsyncMock()
mock_redis_lock_class.return_value = mock_lock_instance
# Mock platform client - has active session
mock_daily_client = AsyncMock()
mock_session = AsyncMock()
mock_session.ended_at = None # Still active
mock_daily_client.get_room_sessions = AsyncMock(return_value=[mock_session])
mock_create_client.return_value = mock_daily_client
with (
patch("reflector.worker.process.get_async_redis_client") as mock_get_redis,
patch("reflector.worker.process.has_pending_joins") as mock_has_pending_joins,
patch(
"reflector.worker.process.meetings_controller.update_meeting"
) as mock_update_meeting,
):
await process_meetings()
# Verify pending joins check was NOT called (no need - active sessions exist)
mock_has_pending_joins.assert_not_called()
# Verify meeting was NOT deactivated
mock_update_meeting.assert_not_called()
@pytest.mark.asyncio
@patch("reflector.worker.process.meetings_controller.get_all_active")
@patch("reflector.worker.process.RedisAsyncLock")
@patch("reflector.worker.process.create_platform_client")
@patch("reflector.worker.process.get_async_redis_client")
@patch("reflector.worker.process.has_pending_joins")
@patch("reflector.worker.process.meetings_controller.update_meeting")
async def test_process_meetings_closes_redis_even_on_continue(
mock_update_meeting,
mock_has_pending_joins,
mock_get_redis,
mock_create_client,
mock_redis_lock_class,
mock_get_all_active,
mock_active_meeting,
):
"""Test that Redis connection is always closed, even when skipping deactivation."""
process_meetings = _get_process_meetings_fn()
mock_get_all_active.return_value = [mock_active_meeting]
# Mock lock acquired
mock_lock_instance = AsyncMock()
mock_lock_instance.acquired = True
mock_lock_instance.__aenter__ = AsyncMock(return_value=mock_lock_instance)
mock_lock_instance.__aexit__ = AsyncMock()
mock_redis_lock_class.return_value = mock_lock_instance
# Mock platform client - no active sessions
mock_daily_client = AsyncMock()
mock_session = AsyncMock()
mock_session.ended_at = datetime.now(timezone.utc)
mock_daily_client.get_room_sessions = AsyncMock(return_value=[mock_session])
mock_create_client.return_value = mock_daily_client
# Mock Redis client
mock_redis = AsyncMock()
mock_redis.aclose = AsyncMock()
mock_get_redis.return_value = mock_redis
# Mock pending joins exist (will trigger continue)
mock_has_pending_joins.return_value = True
await process_meetings()
# Verify Redis was closed
mock_redis.aclose.assert_called_once()
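The three scenarios above fix the ordering of the guard: sessions first, then pending joins, then deactivation, with the Redis client always closed. A condensed sketch of that branch follows; _should_deactivate is a hypothetical helper for illustration, the argument passed to get_room_sessions is not asserted by the tests, and the surrounding RedisAsyncLock and never-started-room handling are left out.

# Hedged sketch of the deactivation guard exercised above, not the real process_meetings body.
async def _should_deactivate(meeting, daily_client) -> bool:
    sessions = await daily_client.get_room_sessions(meeting.room_name)
    if any(session.ended_at is None for session in sessions):
        return False  # participants still connected: no pending-join check, no deactivation
    redis = get_async_redis_client()
    try:
        if await has_pending_joins(redis, meeting.id):
            return False  # someone is mid-WebRTC-handshake: keep the meeting active for now
    finally:
        await redis.aclose()  # always closed, including on the early-return path
    return True

# Inside the loop over meetings_controller.get_all_active():
#     if await _should_deactivate(meeting, client):
#         await meetings_controller.update_meeting(meeting.id, is_active=False)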

View File

@@ -115,7 +115,9 @@ def appserver(tmpdir, setup_database, celery_session_app, celery_session_worker)
settings.DATA_DIR = DATA_DIR
# Using celery_includes from conftest.py which includes both pipelines
@pytest.fixture(scope="session")
def celery_includes():
return ["reflector.pipelines.main_live_pipeline"]
@pytest.mark.usefixtures("setup_database")

View File

@@ -56,12 +56,7 @@ def appserver_ws_user(setup_database):
if server_instance:
server_instance.should_exit = True
server_thread.join(timeout=2.0)
server_thread.join(timeout=30)
# Reset global singleton for test isolation
from reflector.ws_manager import reset_ws_manager
reset_ws_manager()
@pytest.fixture(autouse=True)
@@ -138,8 +133,6 @@ async def test_user_ws_accepts_valid_token_and_receives_events(appserver_ws_user
# Connect and then trigger an event via HTTP create
async with aconnect_ws(base_ws, subprotocols=subprotocols) as ws:
await asyncio.sleep(0.2)
# Emit an event to the user's room via a standard HTTP action
from httpx import AsyncClient
@@ -157,7 +150,6 @@ async def test_user_ws_accepts_valid_token_and_receives_events(appserver_ws_user
"email": "user-abc@example.com", "email": "user-abc@example.com",
} }
# Use in-memory client (global singleton makes it share ws_manager)
async with AsyncClient(app=app, base_url=f"http://{host}:{port}/v1") as ac:
# Create a transcript as this user so that the server publishes TRANSCRIPT_CREATED to user room
resp = await ac.post("/transcripts", json={"name": "WS Test"})

server/uv.lock generated
View File

@@ -159,20 +159,21 @@ wheels = [
[[package]]
name = "aiortc"
version = "1.14.0"
version = "1.13.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aioice" },
{ name = "av" },
{ name = "cffi" },
{ name = "cryptography" }, { name = "cryptography" },
{ name = "google-crc32c" }, { name = "google-crc32c" },
{ name = "pyee" }, { name = "pyee" },
{ name = "pylibsrtp" }, { name = "pylibsrtp" },
{ name = "pyopenssl" }, { name = "pyopenssl" },
] ]
sdist = { url = "https://files.pythonhosted.org/packages/51/9c/4e027bfe0195de0442da301e2389329496745d40ae44d2d7c4571c4290ce/aiortc-1.14.0.tar.gz", hash = "sha256:adc8a67ace10a085721e588e06a00358ed8eaf5f6b62f0a95358ff45628dd762", size = 1180864 } sdist = { url = "https://files.pythonhosted.org/packages/62/03/bc947d74c548e0c17cf94e5d5bdacaed0ee9e5b2bb7b8b8cf1ac7a7c01ec/aiortc-1.13.0.tar.gz", hash = "sha256:5d209975c22d0910fb5a0f0e2caa828f2da966c53580f7c7170ac3a16a871620", size = 1179894 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/57/ab/31646a49209568cde3b97eeade0d28bb78b400e6645c56422c101df68932/aiortc-1.14.0-py3-none-any.whl", hash = "sha256:4b244d7e482f4e1f67e685b3468269628eca1ec91fa5b329ab517738cfca086e", size = 93183 }, { url = "https://files.pythonhosted.org/packages/87/29/765633cab5f1888890f5f172d1d53009b9b14e079cdfa01a62d9896a9ea9/aiortc-1.13.0-py3-none-any.whl", hash = "sha256:9ccccec98796f6a96bd1c3dd437a06da7e0f57521c96bd56e4b965a91b03a0a0", size = 92910 },
]
[[package]]
@@ -326,24 +327,28 @@ wheels = [
[[package]]
name = "av"
version = "16.1.0"
version = "14.4.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/78/cd/3a83ffbc3cc25b39721d174487fb0d51a76582f4a1703f98e46170ce83d4/av-16.1.0.tar.gz", hash = "sha256:a094b4fd87a3721dacf02794d3d2c82b8d712c85b9534437e82a8a978c175ffd", size = 4285203 } sdist = { url = "https://files.pythonhosted.org/packages/86/f6/0b473dab52dfdea05f28f3578b1c56b6c796ce85e76951bab7c4e38d5a74/av-14.4.0.tar.gz", hash = "sha256:3ecbf803a7fdf67229c0edada0830d6bfaea4d10bfb24f0c3f4e607cd1064b42", size = 3892203 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/48/d0/b71b65d1b36520dcb8291a2307d98b7fc12329a45614a303ff92ada4d723/av-16.1.0-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:e88ad64ee9d2b9c4c5d891f16c22ae78e725188b8926eb88187538d9dd0b232f", size = 26927747 }, { url = "https://files.pythonhosted.org/packages/18/8a/d57418b686ffd05fabd5a0a9cfa97e63b38c35d7101af00e87c51c8cc43c/av-14.4.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:5b21d5586a88b9fce0ab78e26bd1c38f8642f8e2aad5b35e619f4d202217c701", size = 19965048 },
{ url = "https://files.pythonhosted.org/packages/2f/79/720a5a6ccdee06eafa211b945b0a450e3a0b8fc3d12922f0f3c454d870d2/av-16.1.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:cb296073fa6935724de72593800ba86ae49ed48af03960a4aee34f8a611f442b", size = 21492232 }, { url = "https://files.pythonhosted.org/packages/f5/aa/3f878b0301efe587e9b07bb773dd6b47ef44ca09a3cffb4af50c08a170f3/av-14.4.0-cp311-cp311-macosx_12_0_x86_64.whl", hash = "sha256:cf8762d90b0f94a20c9f6e25a94f1757db5a256707964dfd0b1d4403e7a16835", size = 23750064 },
{ url = "https://files.pythonhosted.org/packages/8e/4f/a1ba8d922f2f6d1a3d52419463ef26dd6c4d43ee364164a71b424b5ae204/av-16.1.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:720edd4d25aa73723c1532bb0597806d7b9af5ee34fc02358782c358cfe2f879", size = 39291737 }, { url = "https://files.pythonhosted.org/packages/9a/b4/6fe94a31f9ed3a927daa72df67c7151968587106f30f9f8fcd792b186633/av-14.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0ac9f08920c7bbe0795319689d901e27cb3d7870b9a0acae3f26fc9daa801a6", size = 33648775 },
{ url = "https://files.pythonhosted.org/packages/1a/31/fc62b9fe8738d2693e18d99f040b219e26e8df894c10d065f27c6b4f07e3/av-16.1.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:c7f2bc703d0df260a1fdf4de4253c7f5500ca9fc57772ea241b0cb241bcf972e", size = 40846822 }, { url = "https://files.pythonhosted.org/packages/6c/f3/7f3130753521d779450c935aec3f4beefc8d4645471159f27b54e896470c/av-14.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a56d9ad2afdb638ec0404e962dc570960aae7e08ae331ad7ff70fbe99a6cf40e", size = 32216915 },
{ url = "https://files.pythonhosted.org/packages/53/10/ab446583dbce730000e8e6beec6ec3c2753e628c7f78f334a35cad0317f4/av-16.1.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d69c393809babada7d54964d56099e4b30a3e1f8b5736ca5e27bd7be0e0f3c83", size = 40675604 }, { url = "https://files.pythonhosted.org/packages/f8/9a/8ffabfcafb42154b4b3a67d63f9b69e68fa8c34cb39ddd5cb813dd049ed4/av-14.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bed513cbcb3437d0ae47743edc1f5b4a113c0b66cdd4e1aafc533abf5b2fbf2", size = 35287279 },
{ url = "https://files.pythonhosted.org/packages/31/d7/1003be685277005f6d63fd9e64904ee222fe1f7a0ea70af313468bb597db/av-16.1.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:441892be28582356d53f282873c5a951592daaf71642c7f20165e3ddcb0b4c63", size = 42015955 }, { url = "https://files.pythonhosted.org/packages/ad/11/7023ba0a2ca94a57aedf3114ab8cfcecb0819b50c30982a4c5be4d31df41/av-14.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d030c2d3647931e53d51f2f6e0fcf465263e7acf9ec6e4faa8dbfc77975318c3", size = 36294683 },
{ url = "https://files.pythonhosted.org/packages/2f/4a/fa2a38ee9306bf4579f556f94ecbc757520652eb91294d2a99c7cf7623b9/av-16.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:273a3e32de64819e4a1cd96341824299fe06f70c46f2288b5dc4173944f0fd62", size = 31750339 }, { url = "https://files.pythonhosted.org/packages/3d/fa/b8ac9636bd5034e2b899354468bef9f4dadb067420a16d8a493a514b7817/av-14.4.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1cc21582a4f606271d8c2036ec7a6247df0831050306c55cf8a905701d0f0474", size = 34552391 },
{ url = "https://files.pythonhosted.org/packages/9c/84/2535f55edcd426cebec02eb37b811b1b0c163f26b8d3f53b059e2ec32665/av-16.1.0-cp312-cp312-macosx_11_0_x86_64.whl", hash = "sha256:640f57b93f927fba8689f6966c956737ee95388a91bd0b8c8b5e0481f73513d6", size = 26945785 }, { url = "https://files.pythonhosted.org/packages/fb/29/0db48079c207d1cba7a2783896db5aec3816e17de55942262c244dffbc0f/av-14.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ce7c9cd452153d36f1b1478f904ed5f9ab191d76db873bdd3a597193290805d4", size = 37265250 },
{ url = "https://files.pythonhosted.org/packages/b6/17/ffb940c9e490bf42e86db4db1ff426ee1559cd355a69609ec1efe4d3a9eb/av-16.1.0-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:ae3fb658eec00852ebd7412fdc141f17f3ddce8afee2d2e1cf366263ad2a3b35", size = 21481147 }, { url = "https://files.pythonhosted.org/packages/1c/55/715858c3feb7efa4d667ce83a829c8e6ee3862e297fb2b568da3f968639d/av-14.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:fd261e31cc6b43ca722f80656c39934199d8f2eb391e0147e704b6226acebc29", size = 27925845 },
{ url = "https://files.pythonhosted.org/packages/15/c1/e0d58003d2d83c3921887d5c8c9b8f5f7de9b58dc2194356a2656a45cfdc/av-16.1.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:27ee558d9c02a142eebcbe55578a6d817fedfde42ff5676275504e16d07a7f86", size = 39517197 }, { url = "https://files.pythonhosted.org/packages/a6/75/b8641653780336c90ba89e5352cac0afa6256a86a150c7703c0b38851c6d/av-14.4.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:a53e682b239dd23b4e3bc9568cfb1168fc629ab01925fdb2e7556eb426339e94", size = 19954125 },
{ url = "https://files.pythonhosted.org/packages/32/77/787797b43475d1b90626af76f80bfb0c12cfec5e11eafcfc4151b8c80218/av-16.1.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:7ae547f6d5fa31763f73900d43901e8c5fa6367bb9a9840978d57b5a7ae14ed2", size = 41174337 }, { url = "https://files.pythonhosted.org/packages/99/e6/37fe6fa5853a48d54d749526365780a63a4bc530be6abf2115e3a21e292a/av-14.4.0-cp312-cp312-macosx_12_0_x86_64.whl", hash = "sha256:5aa0b901751a32703fa938d2155d56ce3faf3630e4a48d238b35d2f7e49e5395", size = 23751479 },
{ url = "https://files.pythonhosted.org/packages/8e/ac/d90df7f1e3b97fc5554cf45076df5045f1e0a6adf13899e10121229b826c/av-16.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8cf065f9d438e1921dc31fc7aa045790b58aee71736897866420d80b5450f62a", size = 40817720 }, { url = "https://files.pythonhosted.org/packages/f7/75/9a5f0e6bda5f513b62bafd1cff2b495441a8b07ab7fb7b8e62f0c0d1683f/av-14.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a3b316fed3597675fe2aacfed34e25fc9d5bb0196dc8c0b014ae5ed4adda48de", size = 33801401 },
{ url = "https://files.pythonhosted.org/packages/80/6f/13c3a35f9dbcebafd03fe0c4cbd075d71ac8968ec849a3cfce406c35a9d2/av-16.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a345877a9d3cc0f08e2bc4ec163ee83176864b92587afb9d08dff50f37a9a829", size = 42267396 }, { url = "https://files.pythonhosted.org/packages/6a/c9/e4df32a2ad1cb7f3a112d0ed610c5e43c89da80b63c60d60e3dc23793ec0/av-14.4.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a587b5c5014c3c0e16143a0f8d99874e46b5d0c50db6111aa0b54206b5687c81", size = 32364330 },
{ url = "https://files.pythonhosted.org/packages/c8/b9/275df9607f7fb44317ccb1d4be74827185c0d410f52b6e2cd770fe209118/av-16.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:f49243b1d27c91cd8c66fdba90a674e344eb8eb917264f36117bf2b6879118fd", size = 31752045 }, { url = "https://files.pythonhosted.org/packages/ca/f0/64e7444a41817fde49a07d0239c033f7e9280bec4a4bb4784f5c79af95e6/av-14.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10d53f75e8ac1ec8877a551c0db32a83c0aaeae719d05285281eaaba211bbc30", size = 35519508 },
{ url = "https://files.pythonhosted.org/packages/c2/a8/a370099daa9033a3b6f9b9bd815304b3d8396907a14d09845f27467ba138/av-14.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:c8558cfde79dd8fc92d97c70e0f0fa8c94c7a66f68ae73afdf58598f0fe5e10d", size = 36448593 },
{ url = "https://files.pythonhosted.org/packages/27/bb/edb6ceff8fa7259cb6330c51dbfbc98dd1912bd6eb5f7bc05a4bb14a9d6e/av-14.4.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:455b6410dea0ab2d30234ffb28df7d62ca3cdf10708528e247bec3a4cdcced09", size = 34701485 },
{ url = "https://files.pythonhosted.org/packages/a7/8a/957da1f581aa1faa9a5dfa8b47ca955edb47f2b76b949950933b457bfa1d/av-14.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:1661efbe9d975f927b8512d654704223d936f39016fad2ddab00aee7c40f412c", size = 37521981 },
{ url = "https://files.pythonhosted.org/packages/28/76/3f1cf0568592f100fd68eb40ed8c491ce95ca3c1378cc2d4c1f6d1bd295d/av-14.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:fbbeef1f421a3461086853d6464ad5526b56ffe8ccb0ab3fd0a1f121dfbf26ad", size = 27925944 },
]
[[package]]
@@ -3262,7 +3267,7 @@ requires-dist = [
{ name = "aiohttp-cors", specifier = ">=0.7.0" }, { name = "aiohttp-cors", specifier = ">=0.7.0" },
{ name = "aiortc", specifier = ">=1.5.0" }, { name = "aiortc", specifier = ">=1.5.0" },
{ name = "alembic", specifier = ">=1.11.3" }, { name = "alembic", specifier = ">=1.11.3" },
{ name = "av", specifier = ">=15.0.0" }, { name = "av", specifier = ">=10.0.0" },
{ name = "celery", specifier = ">=5.3.4" }, { name = "celery", specifier = ">=5.3.4" },
{ name = "databases", extras = ["aiosqlite", "asyncpg"], specifier = ">=0.7.0" }, { name = "databases", extras = ["aiosqlite", "asyncpg"], specifier = ">=0.7.0" },
{ name = "fastapi", extras = ["standard"], specifier = ">=0.100.1" }, { name = "fastapi", extras = ["standard"], specifier = ">=0.100.1" },

View File

@@ -25,9 +25,6 @@ import { useConsentDialog } from "../../lib/consent";
import {
useRoomJoinMeeting,
useMeetingStartRecording,
useMeetingJoining,
useMeetingJoined,
buildMeetingLeaveUrl,
} from "../../lib/apiHooks"; } from "../../lib/apiHooks";
import { omit } from "remeda"; import { omit } from "remeda";
import { import {
@@ -91,7 +88,7 @@ const useFrame = (
cbs: {
onLeftMeeting: () => void;
onCustomButtonClick: (ev: DailyEventObjectCustomButtonClick) => void;
onJoinMeeting: (sessionId: string | null) => void;
onJoinMeeting: () => void;
},
) => {
const [{ frame, joined }, setState] = useState(USE_FRAME_INIT_STATE);
@@ -142,8 +139,7 @@ const useFrame = (
console.error("frame is null in joined-meeting callback"); console.error("frame is null in joined-meeting callback");
return; return;
} }
const local = frame.participants()?.local; cbs.onJoinMeeting();
cbs.onJoinMeeting(local?.session_id ?? null);
};
frame.on("joined-meeting", joinCb);
return () => {
@@ -191,14 +187,7 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
const [container, setContainer] = useState<HTMLDivElement | null>(null);
const joinMutation = useRoomJoinMeeting();
const startRecordingMutation = useMeetingStartRecording();
const joiningMutation = useMeetingJoining();
const joinedMutation = useMeetingJoined();
const [joinedMeeting, setJoinedMeeting] = useState<Meeting | null>(null);
const sessionIdRef = useRef<string | null>(null);
// Generate a stable connection ID for this component instance
// Used to track pending joins per browser tab (prevents key collision for anonymous users)
const connectionId = useMemo(() => crypto.randomUUID(), []);
// Generate deterministic instanceIds so all participants use SAME IDs
const cloudInstanceId = parseNonEmptyString(meeting.id);
@@ -245,34 +234,8 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
const roomUrl = joinedMeeting?.room_url;
const handleLeave = useCallback(() => {
if (meeting?.id && roomName) {
const payload = sessionIdRef.current
? { session_id: sessionIdRef.current }
: {};
navigator.sendBeacon(
buildMeetingLeaveUrl(roomName, meeting.id),
JSON.stringify(payload),
);
}
router.push("/browse"); router.push("/browse");
}, [router, roomName, meeting?.id]); }, [router]);
// Trigger presence recheck on dirty disconnects (tab close, navigation away)
useEffect(() => {
if (!meeting?.id || !roomName) return;
const handleBeforeUnload = () => {
// sendBeacon guarantees delivery even if tab closes mid-request
const url = buildMeetingLeaveUrl(roomName, meeting.id);
const payload = sessionIdRef.current
? { session_id: sessionIdRef.current }
: {};
navigator.sendBeacon(url, JSON.stringify(payload));
};
window.addEventListener("beforeunload", handleBeforeUnload);
return () => window.removeEventListener("beforeunload", handleBeforeUnload);
}, [meeting?.id, roomName]);
const handleCustomButtonClick = useCallback(
(ev: DailyEventObjectCustomButtonClick) => {
@@ -285,106 +248,72 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
],
);
const handleFrameJoinMeeting = useCallback( const handleFrameJoinMeeting = useCallback(() => {
(sessionId: string | null) => { if (meeting.recording_type === "cloud") {
sessionIdRef.current = sessionId; console.log("Starting dual recording via REST API", {
cloudInstanceId,
rawTracksInstanceId,
});
// Signal that WebRTC connection is established // Start both cloud and raw-tracks via backend REST API (with retry on 404)
// This clears the pending join intent and creates session record directly // Daily.co needs time to register call as "hosting" for REST API
joinedMutation.mutate( const startRecordingWithRetry = (
{ type: DailyRecordingType,
params: { instanceId: NonEmptyString,
path: { attempt: number = 1,
room_name: roomName, ) => {
meeting_id: meeting.id, setTimeout(() => {
startRecordingMutation.mutate(
{
params: {
path: {
meeting_id: meeting.id,
},
},
body: {
type,
instanceId,
},
}, },
}, {
body: { onError: (error: any) => {
connection_id: connectionId, const errorText = error?.detail || error?.message || "";
session_id: sessionId, const is404NotHosting = errorText.includes(
}, "does not seem to be hosting a call",
}, );
{ const isActiveStream = errorText.includes(
onError: (error: unknown) => { "has an active stream",
// Non-blocking: log but don't fail - this is cleanup, not critical );
console.warn("Failed to signal joined:", error);
},
},
);
if (meeting.recording_type === "cloud") { if (is404NotHosting && attempt < RECORDING_START_MAX_RETRIES) {
console.log("Starting dual recording via REST API", { console.log(
cloudInstanceId, `${type}: Call not hosting yet, retry ${attempt + 1}/${RECORDING_START_MAX_RETRIES} in ${RECORDING_START_DELAY_MS}ms...`,
rawTracksInstanceId,
});
// Start both cloud and raw-tracks via backend REST API (with retry on 404)
// Daily.co needs time to register call as "hosting" for REST API
const startRecordingWithRetry = (
type: DailyRecordingType,
instanceId: NonEmptyString,
attempt: number = 1,
) => {
setTimeout(() => {
startRecordingMutation.mutate(
{
params: {
path: {
meeting_id: meeting.id,
},
},
body: {
type,
instanceId,
},
},
{
onError: (error: any) => {
const errorText = error?.detail || error?.message || "";
const is404NotHosting = errorText.includes(
"does not seem to be hosting a call",
); );
const isActiveStream = errorText.includes( startRecordingWithRetry(type, instanceId, attempt + 1);
"has an active stream", } else if (isActiveStream) {
console.log(
`${type}: Recording already active (started by another participant)`,
); );
} else {
if ( console.error(`Failed to start ${type} recording:`, error);
is404NotHosting && }
attempt < RECORDING_START_MAX_RETRIES
) {
console.log(
`${type}: Call not hosting yet, retry ${attempt + 1}/${RECORDING_START_MAX_RETRIES} in ${RECORDING_START_DELAY_MS}ms...`,
);
startRecordingWithRetry(type, instanceId, attempt + 1);
} else if (isActiveStream) {
console.log(
`${type}: Recording already active (started by another participant)`,
);
} else {
console.error(`Failed to start ${type} recording:`, error);
}
},
}, },
); },
}, RECORDING_START_DELAY_MS); );
}; }, RECORDING_START_DELAY_MS);
};
// Start both recordings // Start both recordings
startRecordingWithRetry("cloud", cloudInstanceId); startRecordingWithRetry("cloud", cloudInstanceId);
startRecordingWithRetry("raw-tracks", rawTracksInstanceId); startRecordingWithRetry("raw-tracks", rawTracksInstanceId);
} }
}, }, [
[ meeting.recording_type,
meeting.recording_type, meeting.id,
meeting.id, startRecordingMutation,
roomName, cloudInstanceId,
connectionId, rawTracksInstanceId,
joinedMutation, ]);
startRecordingMutation,
cloudInstanceId,
rawTracksInstanceId,
],
);
const recordingIconUrl = useMemo(
() => new URL("/recording-icon.svg", window.location.origin),
@@ -399,28 +328,8 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
useEffect(() => {
if (!frame || !roomUrl) return;
frame
const joinRoom = async () => { .join({
// Signal intent to join before WebRTC handshake starts
// This prevents race condition where meeting is deactivated during handshake
try {
await joiningMutation.mutateAsync({
params: {
path: {
room_name: roomName,
meeting_id: meeting.id,
},
},
body: {
connection_id: connectionId,
},
});
} catch (error) {
// Non-blocking: log but continue with join
console.warn("Failed to signal joining intent:", error);
}
await frame.join({
url: roomUrl, url: roomUrl,
sendSettings: { sendSettings: {
video: { video: {
@@ -432,13 +341,9 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
}, },
// Note: screenVideo intentionally not configured to preserve full quality for screen shares // Note: screenVideo intentionally not configured to preserve full quality for screen shares
}, },
}); })
}; .catch(console.error.bind(console, "Failed to join daily room:"));
}, [frame, roomUrl]);
joinRoom().catch(console.error.bind(console, "Failed to join daily room:"));
// joiningMutation excluded from deps - it's a stable hook reference
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [frame, roomUrl, roomName, meeting.id, connectionId]);
useEffect(() => {
setCustomTrayButton(

View File

@@ -1,10 +1,9 @@
"use client"; "use client";
import { createFinalURL, createQuerySerializer } from "openapi-fetch"; import { $api } from "./apiClient";
import { $api, API_URL } from "./apiClient";
import { useError } from "../(errors)/errorContext";
import { QueryClient, useQueryClient } from "@tanstack/react-query";
import type { components, paths } from "../reflector-api";
import type { components } from "../reflector-api";
import { useAuth } from "./AuthProvider";
import { MeetingId } from "./types";
import { NonEmptyString } from "./utils";
@@ -15,10 +14,6 @@ import { NonEmptyString } from "./utils";
* or, limitation or incorrect usage of .d type generator from json schema
* */
/*
if you experience lack of expected endpoints on $api, try to regenerate
*/
export const useAuthReady = () => {
const auth = useAuth();
@@ -773,7 +768,6 @@ export function useRoomActiveMeetings(roomName: string | null) {
},
{
enabled: !!roomName,
refetchInterval: 5000,
},
);
}
@@ -813,47 +807,6 @@ export function useRoomJoinMeeting() {
);
}
// Presence race fix endpoints - signal join intent to prevent race conditions during WebRTC handshake
export function useMeetingJoining() {
return $api.useMutation(
"post",
"/v1/rooms/{room_name}/meetings/{meeting_id}/joining",
{},
);
}
export function useMeetingJoined() {
return $api.useMutation(
"post",
"/v1/rooms/{room_name}/meetings/{meeting_id}/joined",
{},
);
}
export function useMeetingLeave() {
return $api.useMutation(
"post",
"/v1/rooms/{room_name}/meetings/{meeting_id}/leave",
{},
);
}
/**
* Build absolute URL for /leave endpoint (for sendBeacon which can't use hooks).
*/
export function buildMeetingLeaveUrl(
roomName: string,
meetingId: string,
): string {
return createFinalURL("/v1/rooms/{room_name}/meetings/{meeting_id}/leave", {
baseUrl: API_URL,
params: {
path: { room_name: roomName, meeting_id: meetingId },
},
querySerializer: createQuerySerializer(),
});
}
export function useRoomIcsSync() {
const { setError } = useError();

View File

@@ -313,79 +313,6 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/rooms/{room_name}/meetings/{meeting_id}/joining": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Meeting Joining
* @description Signal intent to join meeting. Called before WebRTC handshake starts.
*
* This creates a pending join record that prevents the meeting from being
* deactivated while the WebRTC handshake is in progress. The record expires
* automatically after 30 seconds if the connection is not established.
*/
post: operations["v1_meeting_joining"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/rooms/{room_name}/meetings/{meeting_id}/joined": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Meeting Joined
* @description Signal that WebRTC connection is established.
*
* This clears the pending join record, confirming the user has successfully
* connected to the meeting. Safe to call even if meeting was deactivated
* during the handshake (idempotent cleanup).
*/
post: operations["v1_meeting_joined"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/rooms/{room_name}/meetings/{meeting_id}/leave": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Meeting Leave
* @description Trigger presence update when user leaves meeting.
*
* When session_id is provided in the body, closes the session directly
* for instant presence update. Falls back to polling when session_id
* is not available (e.g., sendBeacon without frame access).
* Called on tab close/navigation via sendBeacon().
*/
post: operations["v1_meeting_leave"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/transcripts": { "/v1/transcripts": {
parameters: { parameters: {
query?: never; query?: never;
@@ -1570,56 +1497,6 @@ export interface components {
/** Reason */
reason?: string | null;
};
/**
* JoinedRequest
* @description Request body for /joined endpoint (after WebRTC connection established).
*/
JoinedRequest: {
/**
* Connection Id
* @description A non-empty string
*/
connection_id: string;
/** Session Id */
session_id?: string | null;
/** User Name */
user_name?: string | null;
};
/** JoinedResponse */
JoinedResponse: {
/**
* Status
* @constant
*/
status: "ok";
};
/**
* JoiningRequest
* @description Request body for /joining endpoint (before WebRTC handshake).
*/
JoiningRequest: {
/**
* Connection Id
* @description A non-empty string
*/
connection_id: string;
};
/** JoiningResponse */
JoiningResponse: {
/**
* Status
* @constant
*/
status: "ok";
};
/** LeaveResponse */
LeaveResponse: {
/**
* Status
* @constant
*/
status: "ok";
};
/** Meeting */
Meeting: {
/** Id */
@@ -2810,110 +2687,6 @@ export interface operations {
};
};
};
v1_meeting_joining: {
parameters: {
query?: never;
header?: never;
path: {
room_name: string;
meeting_id: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["JoiningRequest"];
};
};
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["JoiningResponse"];
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_meeting_joined: {
parameters: {
query?: never;
header?: never;
path: {
room_name: string;
meeting_id: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["JoinedRequest"];
};
};
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["JoinedResponse"];
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_meeting_leave: {
parameters: {
query?: never;
header?: never;
path: {
room_name: string;
meeting_id: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["LeaveResponse"];
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_transcripts_list: {
parameters: {
query?: {