Mirror of https://github.com/Monadical-SAS/reflector.git (synced 2026-03-22 07:06:47 +00:00)

Compare commits: 10 commits on branch `local-llm-...hypothesis`
| SHA1 |
|------|
| d9dfa17b5a |
| 2476fd48ac |
| ad64f43202 |
| 15aa121727 |
| 08f4294b36 |
| 485b455c69 |
| 74c9ec2ff1 |
| aac89e8d03 |
| 13088e72f8 |
| 775c9b667d |
@@ -4,4 +4,5 @@ docs/docs/installation/daily-setup.md:curl-auth-header:277
gpu/self_hosted/DEV_SETUP.md:curl-auth-header:74
gpu/self_hosted/DEV_SETUP.md:curl-auth-header:83
server/reflector/worker/process.py:generic-api-key:465
server/tests/test_recording_request_flow.py:generic-api-key:121
server/reflector/worker/process.py:generic-api-key:594
CHANGELOG.md (12 changes)

@@ -1,17 +1,5 @@
# Changelog

## [0.33.0](https://github.com/Monadical-SAS/reflector/compare/v0.32.2...v0.33.0) (2026-02-05)

### Features

* Daily+hatchet default ([#846](https://github.com/Monadical-SAS/reflector/issues/846)) ([15ab2e3](https://github.com/Monadical-SAS/reflector/commit/15ab2e306eacf575494b4b5d2b2ad779d44a1c7f))

### Bug Fixes

* websocket tests ([#825](https://github.com/Monadical-SAS/reflector/issues/825)) ([1ce1c7a](https://github.com/Monadical-SAS/reflector/commit/1ce1c7a910b6c374115d2437b17f9d288ef094dc))

## [0.32.2](https://github.com/Monadical-SAS/reflector/compare/v0.32.1...v0.32.2) (2026-02-03)
@@ -1,10 +0,0 @@
# Standalone Compose: Remaining Production Work

## Server/worker/beat: remove host network mode + bind mounts

Currently `server` uses `network_mode: host` and all three services bind-mount `./server/:/app/`. For full standalone prod:

- Remove `network_mode: host` from server
- Remove bind-mount volumes from server, worker, beat (use built image only)
- Update `compose_cmd` in `setup-standalone.sh` to not rely on host network
- Change `SERVER_API_URL` from `http://host.docker.internal:1250` to `http://server:1250` (server reachable via Docker network once off host mode)
PRESENCE_RACE_DESIGN_DOC.md (new file, 414 lines)

@@ -0,0 +1,414 @@
# Presence System Race Condition: Design Document

## Executive Summary

Users in the same Reflector room can end up in **different Daily.co rooms** due to race conditions in meeting lifecycle management. This document details the root cause, why current mitigations are insufficient, and proposes a solution that eliminates the race by design.

---

## Problem Statement

When a user quickly leaves and rejoins a meeting (e.g., closes tab and reopens within seconds), they may find themselves in a different Daily.co room than other participants in the same Reflector room. This breaks the core assumption that all users in a Reflector room share the same video call.

### Symptoms
- User A and User B are in the same Reflector room but different Daily.co rooms
- User reports "I can't see/hear the other participant"
- Meeting appears active but users are isolated

---

## Evidence: Hypothesis Simulation

A simulation was built to model the presence system and find race conditions through randomized action sequences.

**Location**: `server/tests/simulation/`

```bash
cd server

# Current system config - finds race conditions
uv run pytest tests/simulation/test_presence_race.py::test_presence_race_conditions_current_system
# Result: XFAIL (expected failure - race found)

# Fixed system config - no race conditions
uv run pytest tests/simulation/test_presence_race.py::test_presence_no_race_conditions_fixed_system
# Result: PASS
```

The simulation models:
- Discrete time clock for deterministic replay
- Daily.co rooms, participants, presence API with configurable lag
- Reflector meetings, sessions, webhooks
- User state machine: `idle → joining → handshaking → connected → leaving → idle` (sketched below)
- Background tasks: `poll_daily_room_presence`, `process_meetings`
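
As a rough illustration, the user state machine reduces to a small transition table (a sketch only; the shared model actually lives in `server/reflector/presence/model.py` and the class names here are illustrative):

```python
# Minimal sketch of the simulated user state machine; names are
# illustrative, the real model lives in server/reflector/presence/model.py.
from enum import Enum


class UserState(str, Enum):
    IDLE = "idle"
    JOINING = "joining"
    HANDSHAKING = "handshaking"
    CONNECTED = "connected"
    LEAVING = "leaving"


# Allowed transitions; anything else is an invalid-transition error.
TRANSITIONS: dict[UserState, set[UserState]] = {
    UserState.IDLE: {UserState.JOINING},
    UserState.JOINING: {UserState.HANDSHAKING, UserState.IDLE},
    UserState.HANDSHAKING: {UserState.CONNECTED, UserState.IDLE},
    UserState.CONNECTED: {UserState.LEAVING},
    UserState.LEAVING: {UserState.IDLE},
}


def transition(current: UserState, target: UserState) -> UserState:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"invalid transition {current} -> {target}")
    return target
```

The key property exercised by the randomized tests is that a user in `handshaking` is invisible to the presence API, which is exactly where the race lives.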

### Key Finding

The simulation proves that **even with the Daily API call**, a race window exists during WebRTC handshake when users are invisible to the presence API.

---

## Current System Analysis

### Architecture Overview

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Frontend   │────▶│   Backend   │────▶│  Daily.co   │
│  (Next.js)  │     │  (FastAPI)  │     │     API     │
└─────────────┘     └─────────────┘     └─────────────┘
                          │
                          ▼
                   ┌─────────────┐
                   │  Database   │
                   │ (Sessions)  │
                   └─────────────┘
```

### Relevant Code Paths

#### 1. Meeting Join Flow
- **File**: `server/reflector/views/rooms.py`
- **Endpoint**: `POST /rooms/{room_name}/meeting`
- Returns existing active meeting or creates new one
- User then connects to Daily.co via WebRTC (frontend)

#### 2. Presence Polling
- **File**: `server/reflector/worker/process.py:642`
- **Function**: `poll_daily_room_presence()`
- Called by webhooks (`participant.joined`, `participant.left`) and `/joined`, `/leave` endpoints
- Queries Daily API for current participants
- Updates `daily_participant_sessions` table in database

#### 3. Meeting Deactivation
- **File**: `server/reflector/worker/process.py:754`
- **Function**: `process_meetings()`
- Runs periodically (every 60s via Celery beat)
- Checks if meetings should be deactivated

**Current implementation** (lines 806-833):
```python
if meeting.platform == "daily":
    try:
        presence = await client.get_room_presence(meeting.room_name)
        has_active_sessions = presence.total_count > 0
        # ...
    except Exception:
        logger_.warning("Daily.co presence API failed, falling back to DB sessions")
        room_sessions = await client.get_room_sessions(meeting.room_name)
        has_active_sessions = bool(
            room_sessions and any(s.ended_at is None for s in room_sessions)
        )
```

**Key observation**: The code already uses the Daily API (`get_room_presence`), not just the database. The race condition persists despite this.

### Endpoints from feature-leave-endpoint Branch

The `feature-leave-endpoint` branch added explicit leave/join notifications:

| Endpoint | Purpose | Trigger |
|----------|---------|---------|
| `POST /rooms/{room_name}/meetings/{meeting_id}/join` | Get meeting info | User navigates to room |
| `POST /rooms/{room_name}/meetings/{meeting_id}/joined` | Signal connection complete | After WebRTC connects |
| `POST /rooms/{room_name}/meetings/{meeting_id}/leave` | Signal user leaving | Tab close via sendBeacon |

These endpoints trigger `poll_daily_room_presence_task` to update session state faster than waiting for webhooks.

---

## Race Condition: Detailed Analysis

### The Fundamental Problem

**The backend has no knowledge of users who are in the process of joining (WebRTC handshake phase).**

Data sources available to backend:

| Source | What it knows | Limitation |
|--------|---------------|------------|
| Daily Presence API | Currently connected users | 0-500ms lag; doesn't see handshaking users |
| Database sessions | Historical join/leave events | Stale; updated by polls |
| Webhooks | Join/leave events | Delayed; can fail |

**Gap**: No source knows about users between "decided to join" and "WebRTC handshake complete".

### Race Scenario Timeline

```
T+0ms:    User A connected to Meeting M1, visible in Daily presence
T+1000ms: User A closes browser tab
T+1050ms: participant.left webhook fires → poll_daily_room_presence queued
T+1500ms: User A reopens tab (quick rejoin)
T+1600ms: POST /meeting returns M1 (still active)
T+1700ms: Frontend starts WebRTC handshake
T+2000ms: User A in handshake - NOT visible to Daily presence API
T+2100ms: poll runs → sees 0 participants → sets left_at on the session
T+3000ms: process_meetings runs
T+3100ms: Daily API returns 0 participants (user still handshaking)
T+3200ms: has_active_sessions=False, has_had_sessions=True
T+3300ms: Meeting deactivated, Daily room deleted
T+4000ms: User A WebRTC completes → Daily room is gone!
T+5000ms: User B joins same Reflector room → new Meeting M2 created

RESULT: User A orphaned, User B in different Daily room
```

### Why Current Mitigations Are Insufficient

#### 1. Using Daily API (already implemented)
The code already calls `get_room_presence()` instead of relying solely on database sessions. **This doesn't help** because the Daily presence API itself doesn't see users during WebRTC handshake (0-500ms consistency lag + handshake duration of 500-3000ms).

#### 2. Fallback to Database
When Daily API fails, the code falls back to database sessions. This is **worse** because the database is even more stale than the API.

#### 3. Leave/Join Endpoints
The `/joined` and `/leave` endpoints trigger immediate polls, reducing the window but **not eliminating it**. The poll still only sees what the Daily presence API reports.

---

## Proposed Solutions

### Option A: Grace Period (Not Recommended)

Add a time-based buffer before deactivation.

```python
GRACE_PERIOD_SECONDS = 10

if not has_active_sessions and has_had_sessions:
    recent_activity = await get_recent_activity(meeting_id, within_seconds=GRACE_PERIOD_SECONDS)
    if recent_activity:
        continue  # Skip deactivation
```

**Pros:**
- Simple to implement
- Low risk

**Cons:**
- Arbitrary timeout value (why 10s? why not 5s or 30s?)
- Feels like a hack ("setTimeout solution")
- Delays legitimate deactivation
- Doesn't eliminate race, just makes it less likely

### Option B: Track "Intent to Join" (Recommended)

Add explicit state tracking for users who are in the process of joining.

**New endpoint**: `POST /rooms/{room_name}/meetings/{meeting_id}/joining`

**Flow change**:
```
Current:
1. POST /join → get meeting info
2. Render Daily iframe (start WebRTC)
3. POST /joined (after connected)

Proposed:
1. POST /join → get meeting info
2. POST /joining → "I'm about to connect"  ← NEW (wait for 200 OK)
3. Render Daily iframe (start WebRTC)
4. POST /joined (after connected)
```

**Backend tracking**:
```python
# On /joining endpoint
await pending_joins.create(meeting_id=meeting_id, user_id=user_id, created_at=now())

# In process_meetings
pending = await pending_joins.get_recent(meeting_id, max_age_seconds=30)
if pending:
    logger.info("Meeting has pending joins, skipping deactivation")
    continue

# On /joined endpoint or timeout
await pending_joins.delete(meeting_id=meeting_id, user_id=user_id)
```

**Pros:**
- Eliminates race by design (backend knows before Daily does)
- Explicit state machine, not time-based guessing
- Clear semantics

**Cons:**
- Adds ~50-200ms latency (one round-trip before iframe renders)
- Requires frontend changes
- Needs cleanup mechanism for abandoned joins (user closes tab during handshake)

### Option C: Optimistic Locking with Version

Track meeting "version" that must match for deactivation.

**Concept**: Each join attempt increments a version. Deactivation only proceeds if version hasn't changed since presence check.
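
A hypothetical sketch of the idea (none of these names exist in the codebase; assume a `version` column on meetings that every join attempt increments):

```python
# Hypothetical sketch of version-guarded deactivation; assumes a
# `version` column that the join endpoint bumps on every join attempt.
async def try_deactivate(db, daily_client, meeting) -> bool:
    # Snapshot the version before the presence check
    seen = await db.fetch_val(
        "SELECT version FROM meeting WHERE id = :id", {"id": meeting.id}
    )
    presence = await daily_client.get_room_presence(meeting.room_name)
    if presence.total_count > 0:
        return False
    # Compare-and-set: deactivate only if no join bumped the version
    # between the snapshot and now; otherwise a join is in flight.
    updated = await db.execute(
        "UPDATE meeting SET is_active = FALSE WHERE id = :id AND version = :v",
        {"id": meeting.id, "v": seen},
    )
    return updated == 1
```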

**Cons:**
- Complex to implement correctly
- Still has edge cases with concurrent joins

---

## Recommended Approach: Option B

**Track "Intent to Join"** is the cleanest solution because it:

1. **Eliminates the race by design** - no timing windows
2. **Makes state explicit** - joining/connected/leaving are tracked, not inferred
3. **Aligns with existing patterns** - similar to `/joined` and `/leave` endpoints
4. **No arbitrary timeouts** - unlike grace period

### Data Model Change

Add tracking for pending joins. Options:

| Storage | Pros | Cons |
|---------|------|------|
| Redis key | Fast, auto-expire | Lost on Redis restart |
| Database table | Persistent, queryable | Slightly slower |
| In-memory | Fastest | Lost on server restart |

**Recommendation**: Redis with TTL (30s expiry) for simplicity. Pending joins are ephemeral - if Redis restarts, the worst case is that the protection briefly lapses for joins in flight at that moment.

```python
# Redis key format
pending_join:{meeting_id}:{user_id} = {timestamp}
# TTL: 30 seconds
```
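
A minimal sketch of the Redis helpers, assuming `redis.asyncio` from redis-py; the function names are illustrative, but the keys follow the format above:

```python
# Minimal sketch assuming redis.asyncio; function names are illustrative,
# keys follow the pending_join:{meeting_id}:{user_id} format above.
import time

import redis.asyncio as redis

PENDING_TTL_SECONDS = 30
r = redis.Redis(host="redis")


async def mark_pending_join(meeting_id: str, user_id: str) -> None:
    # SET with EX gives us the auto-expiring "reservation"
    await r.set(
        f"pending_join:{meeting_id}:{user_id}", time.time(), ex=PENDING_TTL_SECONDS
    )


async def has_pending_joins(meeting_id: str) -> bool:
    # SCAN instead of KEYS to avoid blocking Redis on large keyspaces
    async for _ in r.scan_iter(match=f"pending_join:{meeting_id}:*"):
        return True
    return False


async def clear_pending_join(meeting_id: str, user_id: str) -> None:
    await r.delete(f"pending_join:{meeting_id}:{user_id}")
```

The TTL doubles as the cleanup mechanism for abandoned joins: a user who closes the tab mid-handshake simply lets the key expire.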

### Implementation Checklist

1. **Backend: Add `/joining` endpoint**
   - File: `server/reflector/views/rooms.py`
   - Creates Redis key with 30s TTL
   - Returns 200 OK (a minimal sketch follows this checklist)

2. **Backend: Modify `process_meetings()`**
   - File: `server/reflector/worker/process.py`
   - Before deactivation, check for pending joins
   - If any exist, skip deactivation

3. **Backend: Modify `/joined` endpoint**
   - Clear pending join on successful connection

4. **Frontend: Call `/joining` before WebRTC**
   - File: `www/app/[roomName]/components/DailyRoom.tsx`
   - Await response before rendering Daily iframe

5. **Update simulation**
   - Add `joining` state tracking to match new design
   - Verify race condition is eliminated

6. **Integration tests**
   - Test quick rejoin scenario
   - Test abandoned join (user closes during handshake)
   - Test concurrent joins from multiple users
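
A hedged sketch of the endpoint from item 1, reusing the Redis helper sketched earlier; the route path matches the proposal, but the router wiring and the `user_id` parameter are illustrative (real code would derive the user from the auth layer):

```python
# Sketch only: router and user wiring are illustrative, not the actual
# rooms.py structure. Reuses mark_pending_join from the earlier sketch.
from fastapi import APIRouter

router = APIRouter()


@router.post("/rooms/{room_name}/meetings/{meeting_id}/joining")
async def meeting_joining(room_name: str, meeting_id: str, user_id: str):
    # Record intent to join before the WebRTC handshake starts, so
    # process_meetings() sees the reservation and skips deactivation.
    await mark_pending_join(meeting_id, user_id)
    return {"status": "ok"}
```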

---

## Files Reference

### Core Files to Modify

| File | Purpose |
|------|---------|
| `server/reflector/views/rooms.py` | Add `/joining` endpoint |
| `server/reflector/worker/process.py` | Check pending joins before deactivation |
| `www/app/[roomName]/components/DailyRoom.tsx` | Call `/joining` before WebRTC |

### Reference Files

| File | Contains |
|------|----------|
| `server/reflector/video_platforms/daily.py:128` | `get_room_presence()` - Daily API call |
| `server/reflector/worker/process.py:642` | `poll_daily_room_presence()` - presence polling |
| `server/reflector/views/daily.py:125` | Webhook handlers |
| `server/tests/simulation/` | Hypothesis simulation proving the race |
| `server/tests/test_daily_presence_deactivation.py` | Unit tests for presence logic |

### Simulation Files

| File | Purpose |
|------|---------|
| `tests/simulation/system.py` | Main simulation engine |
| `tests/simulation/config.py` | Current vs fixed system configs |
| `tests/simulation/state.py` | State dataclasses |
| `tests/simulation/test_presence_race.py` | Hypothesis stateful tests |
| `tests/simulation/test_targeted_scenarios.py` | Specific race scenarios |
| `server/reflector/presence/model.py` | Shared state machine model |

---

## Alternative Considered: Remove DB Fallback

One simpler change discussed: remove the database fallback when Daily API fails, and "fail loudly" instead.

```python
# Current (with fallback)
try:
    presence = await client.get_room_presence(meeting.room_name)
    has_active_sessions = presence.total_count > 0
except Exception:
    # Fallback to stale DB
    room_sessions = await client.get_room_sessions(meeting.room_name)
    has_active_sessions = bool(room_sessions and any(s.ended_at is None for s in room_sessions))

# Proposed (fail loudly)
try:
    presence = await client.get_room_presence(meeting.room_name)
    has_active_sessions = presence.total_count > 0
except Exception:
    logger.error("Daily API failed, skipping deactivation check for this meeting")
    continue  # Don't deactivate if we can't verify
```

**This helps but doesn't eliminate the race** - it only removes one failure mode (stale DB). The core race (handshake invisibility) remains.

---

## Conclusion

The presence system race condition is a **data model gap**, not a timing issue that can be solved with grace periods. The backend needs explicit knowledge of users who intend to join, before they become visible to the Daily presence API.

The recommended fix is to add a `/joining` endpoint that the frontend calls before starting WebRTC. This creates a "reservation" that prevents premature meeting deactivation during the handshake window.

This approach:
- Eliminates the race by design
- Adds minimal latency (~50-200ms)
- Follows explicit state machine principles
- Avoids arbitrary timeout hacks

---

## Appendix: Simulation Test Results

```
$ uv run pytest tests/simulation/ -v

tests/simulation/test_model_conformance.py::TestModelConformance::test_simulation_uses_model_states PASSED
tests/simulation/test_model_conformance.py::TestModelConformance::test_simulation_respects_transitions PASSED
tests/simulation/test_model_conformance.py::TestModelConformance::test_simulation_invalid_transitions_checked PASSED
tests/simulation/test_model_conformance.py::TestModelConformance::test_simulation_implements_protocols PASSED
tests/simulation/test_model_conformance.py::TestModelConformance::test_simulation_uses_shared_invariants PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_state_machine_has_all_states PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_state_machine_valid_transitions PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_state_machine_invalid_transitions_raise PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_guarded_user_state_transitions PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_guarded_user_state_rejects_invalid PASSED
tests/simulation/test_model_conformance.py::TestProductionStateMachine::test_guarded_user_state_tracks_history PASSED
tests/simulation/test_model_conformance.py::TestInvariantConsistency::test_invariants_same_between_model_and_simulation PASSED
tests/simulation/test_model_conformance.py::test_quick_conformance_check PASSED
tests/simulation/test_presence_race.py::TestPresenceRaceFixed::runTest PASSED
tests/simulation/test_presence_race.py::test_presence_race_conditions_current_system XFAIL
tests/simulation/test_presence_race.py::test_presence_no_race_conditions_fixed_system PASSED
tests/simulation/test_presence_race.py::test_smoke_presence_simulation PASSED
tests/simulation/test_targeted_scenarios.py::TestQuickRejoinRace::test_quick_rejoin_causes_split PASSED
tests/simulation/test_targeted_scenarios.py::TestQuickRejoinRace::test_quick_rejoin_fixed_system PASSED
tests/simulation/test_targeted_scenarios.py::TestSimultaneousJoins::test_two_users_join_simultaneously PASSED
tests/simulation/test_targeted_scenarios.py::TestProcessMeetingsRace::test_process_meetings_during_handshake PASSED
tests/simulation/test_targeted_scenarios.py::TestPresenceLagRace::test_presence_lag_causes_incorrect_count PASSED
tests/simulation/test_targeted_scenarios.py::TestMeetingDeactivationEdgeCases::test_deactivation_with_no_sessions PASSED
tests/simulation/test_targeted_scenarios.py::TestMeetingDeactivationEdgeCases::test_deactivation_requires_had_sessions PASSED
tests/simulation/test_targeted_scenarios.py::TestEventLogTracing::test_event_log_captures_flow PASSED
tests/simulation/test_targeted_scenarios.py::test_config_presets PASSED

================== 25 passed, 1 xfailed ==================
```

The `xfail` test (`test_presence_race_conditions_current_system`) demonstrates that the current system configuration has race conditions that can be found through randomized testing.
@@ -1,129 +0,0 @@
# Standalone services for fully local deployment (no external dependencies).
# Usage: docker compose -f docker-compose.yml -f docker-compose.standalone.yml up -d
#
# On Linux with NVIDIA GPU, also pass: --profile ollama-gpu
# On Linux without GPU: --profile ollama-cpu
# On Mac: Ollama runs natively (Metal GPU) — no profile needed, services here unused.

services:
  garage:
    image: dxflrs/garage:v1.1.0
    ports:
      - "3900:3900" # S3 API
      - "3903:3903" # Admin API
    volumes:
      - garage_data:/var/lib/garage/data
      - garage_meta:/var/lib/garage/meta
      - ./data/garage.toml:/etc/garage.toml:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/garage", "stats"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s

  ollama:
    image: ollama/ollama:latest
    profiles: ["ollama-gpu"]
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
      interval: 10s
      timeout: 5s
      retries: 5

  ollama-cpu:
    image: ollama/ollama:latest
    profiles: ["ollama-cpu"]
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Override server/worker/beat to use self-hosted GPU service for transcription+diarization.
  # compose `environment:` overrides values from `env_file:` — no need to edit server/.env.
  server:
    environment:
      TRANSCRIPT_BACKEND: modal
      TRANSCRIPT_URL: http://localhost:8100
      TRANSCRIPT_MODAL_API_KEY: local
      DIARIZATION_BACKEND: modal
      DIARIZATION_URL: http://localhost:8100

  worker:
    environment:
      TRANSCRIPT_BACKEND: modal
      TRANSCRIPT_URL: http://cpu:8000
      TRANSCRIPT_MODAL_API_KEY: local
      DIARIZATION_BACKEND: modal
      DIARIZATION_URL: http://cpu:8000

  web:
    image: reflector-frontend-standalone
    build:
      context: ./www
    command: ["node", "server.js"]
    volumes: !reset []
    environment:
      NODE_ENV: production

  cpu:
    build:
      context: ./gpu/self_hosted
      dockerfile: Dockerfile.cpu
    ports:
      - "8100:8000"
    volumes:
      - gpu_cache:/root/.cache
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 120s

  gpu-nvidia:
    build:
      context: ./gpu/self_hosted
    profiles: ["gpu-nvidia"]
    volumes:
      - gpu_cache:/root/.cache
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
      interval: 15s
      timeout: 5s
      retries: 10
      start_period: 120s

volumes:
  garage_data:
  garage_meta:
  ollama_data:
  gpu_cache:
@@ -2,7 +2,8 @@ services:
  server:
    build:
      context: server
    network_mode: host
    ports:
      - 1250:1250
    volumes:
      - ./server/:/app/
      - /app/.venv
@@ -10,12 +11,6 @@ services:
      - ./server/.env
    environment:
      ENTRYPOINT: server
      DATABASE_URL: postgresql+asyncpg://reflector:reflector@localhost:5432/reflector
      REDIS_HOST: localhost
      CELERY_BROKER_URL: redis://localhost:6379/1
      CELERY_RESULT_BACKEND: redis://localhost:6379/1
      HATCHET_CLIENT_SERVER_URL: http://localhost:8889
      HATCHET_CLIENT_HOST_PORT: localhost:7078

  worker:
    build:
@@ -27,11 +22,6 @@ services:
      - ./server/.env
    environment:
      ENTRYPOINT: worker
      HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
      HATCHET_CLIENT_HOST_PORT: hatchet:7077
    depends_on:
      redis:
        condition: service_started

  beat:
    build:
@@ -43,9 +33,6 @@ services:
      - ./server/.env
    environment:
      ENTRYPOINT: beat
    depends_on:
      redis:
        condition: service_started

  hatchet-worker-cpu:
    build:
@@ -57,8 +44,6 @@ services:
      - ./server/.env
    environment:
      ENTRYPOINT: hatchet-worker-cpu
      HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
      HATCHET_CLIENT_HOST_PORT: hatchet:7077
    depends_on:
      hatchet:
        condition: service_healthy
@@ -72,8 +57,6 @@ services:
      - ./server/.env
    environment:
      ENTRYPOINT: hatchet-worker-llm
      HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
      HATCHET_CLIENT_HOST_PORT: hatchet:7077
    depends_on:
      hatchet:
        condition: service_healthy
@@ -92,16 +75,10 @@ services:
    volumes:
      - ./www:/app/
      - /app/node_modules
      - next_cache:/app/.next
    env_file:
      - ./www/.env.local
    environment:
      - NODE_ENV=development
      - SERVER_API_URL=http://host.docker.internal:1250
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - server

  postgres:
    image: postgres:17
@@ -117,14 +94,13 @@ services:
      - ./server/docker/init-hatchet-db.sql:/docker-entrypoint-initdb.d/init-hatchet-db.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d reflector -U reflector"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 15s
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 10s

  hatchet:
    image: ghcr.io/hatchet-dev/hatchet/hatchet-lite:latest
    restart: on-failure
    ports:
      - "8889:8888"
      - "7078:7077"
@@ -132,7 +108,7 @@ services:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable&connect_timeout=30"
      DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable"
      SERVER_AUTH_COOKIE_DOMAIN: localhost
      SERVER_AUTH_COOKIE_INSECURE: "t"
      SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
@@ -152,5 +128,6 @@ services:
      retries: 5
      start_period: 30s

volumes:
  next_cache:
networks:
  default:
    attachable: true
@@ -1,214 +0,0 @@
---
sidebar_position: 2
title: Standalone Local Setup
---

# Standalone Local Setup

**The goal**: a clueless user clones the repo, runs one script, and has a working Reflector instance locally. No cloud accounts, no API keys, no manual env file editing.

```bash
git clone https://github.com/monadical-sas/reflector.git
cd reflector
./scripts/setup-standalone.sh
```

The script is idempotent — safe to re-run at any time. It detects what's already set up and skips completed steps.

## Prerequisites

- Docker / OrbStack / Docker Desktop (any)
- Mac (Apple Silicon) or Linux
- 16GB+ RAM (32GB recommended for 14B LLM models)
- **Mac only**: [Ollama](https://ollama.com/download) installed (`brew install ollama`)

## What the script does

### 1. LLM inference via Ollama

**Mac**: starts Ollama natively (Metal GPU acceleration). Pulls the LLM model. Docker containers reach it via `host.docker.internal:11434`.

**Linux**: starts containerized Ollama via `docker-compose.standalone.yml` profile (`ollama-gpu` with NVIDIA, `ollama-cpu` without). Pulls model inside the container.
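
Either way, the result is an OpenAI-compatible endpoint. A quick sanity check from the host (a sketch assuming the `openai` Python package is installed; the model name must match your `LLM_MODEL`, which defaults to `qwen2.5:14b`):

```python
# Quick sanity check of the Ollama OpenAI-compatible endpoint; assumes
# the `openai` package and that the model is already pulled.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="qwen2.5:14b",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)
```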

### 2. Environment files

Generates `server/.env` and `www/.env.local` with standalone defaults:

**`server/.env`** — key settings:

| Variable | Value | Why |
|----------|-------|-----|
| `DATABASE_URL` | `postgresql+asyncpg://...@postgres:5432/reflector` | Docker-internal hostname |
| `REDIS_HOST` | `redis` | Docker-internal hostname |
| `CELERY_BROKER_URL` | `redis://redis:6379/1` | Docker-internal hostname |
| `AUTH_BACKEND` | `none` | No Authentik in standalone |
| `TRANSCRIPT_BACKEND` | `modal` | HTTP API to self-hosted CPU service |
| `TRANSCRIPT_URL` | `http://cpu:8000` | Docker-internal CPU service |
| `DIARIZATION_BACKEND` | `modal` | HTTP API to self-hosted CPU service |
| `DIARIZATION_URL` | `http://cpu:8000` | Docker-internal CPU service |
| `TRANSLATION_BACKEND` | `passthrough` | No Modal |
| `LLM_URL` | `http://host.docker.internal:11434/v1` (Mac) | Ollama endpoint |

**`www/.env.local`** — key settings:

| Variable | Value |
|----------|-------|
| `API_URL` | `http://localhost:1250` |
| `SERVER_API_URL` | `http://server:1250` |
| `WEBSOCKET_URL` | `ws://localhost:1250` |
| `FEATURE_REQUIRE_LOGIN` | `false` |
| `NEXTAUTH_SECRET` | `standalone-dev-secret-not-for-production` |

If env files already exist (including symlinks from worktree setup), the script resolves symlinks and ensures all standalone-critical vars are set. Existing vars not related to standalone are preserved.

### 3. Object storage (Garage)

Standalone uses [Garage](https://garagehq.deuxfleurs.fr/) — a lightweight S3-compatible object store running in Docker. The setup script starts Garage, initializes the layout, creates a bucket and access key, and writes the credentials to `server/.env`.

**`server/.env`** — storage settings added by the script:

| Variable | Value | Why |
|----------|-------|-----|
| `TRANSCRIPT_STORAGE_BACKEND` | `aws` | Uses the S3-compatible storage driver |
| `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL` | `http://garage:3900` | Docker-internal Garage S3 API |
| `TRANSCRIPT_STORAGE_AWS_BUCKET_NAME` | `reflector-media` | Created by the script |
| `TRANSCRIPT_STORAGE_AWS_REGION` | `garage` | Must match Garage config |
| `TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID` | *(auto-generated)* | Created by `garage key create` |
| `TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY` | *(auto-generated)* | Created by `garage key create` |

The `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL` setting enables S3-compatible backends. When set, the storage driver uses path-style addressing and routes all requests to the custom endpoint. When unset (production AWS), behavior is unchanged.
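
For illustration, this is roughly what that configuration amounts to on the client side (a sketch assuming boto3; the actual storage driver code is not shown here, and the credentials are the script-generated placeholders):

```python
# Sketch of an S3 client pointed at Garage, assuming boto3; the real
# storage driver builds its client from the env vars in the table above.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://garage:3900",   # TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL
    region_name="garage",                # TRANSCRIPT_STORAGE_AWS_REGION
    aws_access_key_id="<generated key id>",
    aws_secret_access_key="<generated secret>",
    # Path-style addressing, since Garage buckets are not DNS subdomains
    config=Config(s3={"addressing_style": "path"}),
)
print(s3.list_objects_v2(Bucket="reflector-media").get("KeyCount", 0))
```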

Garage config template lives at `scripts/garage.toml`. The setup script generates `data/garage.toml` (gitignored) with a random RPC secret and mounts it read-only into the container. Single-node, `replication_factor=1`.

> **Note**: Presigned URLs embed the Garage Docker hostname (`http://garage:3900`). This is fine — the server proxies S3 responses to the browser. Modal GPU workers cannot reach internal Garage, but standalone doesn't use Modal.

### 4. Transcription and diarization

Standalone runs the self-hosted ML service (`gpu/self_hosted/`) in a CPU-only Docker container named `cpu`. This is the same FastAPI service used for Modal.com GPU deployments, but built with `Dockerfile.cpu` (no NVIDIA CUDA dependencies). The compose service is named `cpu` (not `gpu`) to make clear it runs without GPU acceleration; the source code lives in `gpu/self_hosted/` because it's shared with the GPU deployment.

The `modal` backend name is reused — it just means "HTTP API client". Setting `TRANSCRIPT_URL` / `DIARIZATION_URL` to `http://cpu:8000` routes requests to the local container instead of Modal.com.

On first start, the service downloads pyannote speaker diarization models (~1GB) from a public S3 bundle. Models are cached in a Docker volume (`gpu_cache`) so subsequent starts are fast. No HuggingFace token or API key needed.

> **Performance**: CPU-only transcription and diarization work but are slow (~15 min for a 3 min file). For faster processing on Linux with NVIDIA GPU, use `--profile gpu-nvidia` instead (see `docker-compose.standalone.yml`).

### 5. Docker services

```bash
docker compose up -d postgres redis garage cpu server worker beat web
```

All services start in a single command. Garage and `cpu` are already started by earlier steps but included for idempotency. No Hatchet in standalone mode — LLM processing (summaries, topics, titles) runs via Celery tasks.

### 6. Database migrations

Run automatically by the `server` container on startup (`runserver.sh` calls `alembic upgrade head`). No manual step needed.

### 7. Health check

Verifies:
- CPU service responds (transcription + diarization ready)
- Server responds at `http://localhost:1250/health`
- Frontend serves at `http://localhost:3000`
- LLM endpoint reachable from inside containers

## Services

| Service | Port | Purpose |
|---------|------|---------|
| `server` | 1250 | FastAPI backend (runs migrations on start) |
| `web` | 3000 | Next.js frontend |
| `postgres` | 5432 | PostgreSQL database |
| `redis` | 6379 | Cache + Celery broker |
| `garage` | 3900, 3903 | S3-compatible object storage (S3 API + admin API) |
| `cpu` | — | Self-hosted transcription + diarization (CPU-only) |
| `worker` | — | Celery worker (live pipeline post-processing) |
| `beat` | — | Celery beat (scheduled tasks) |

## Testing programmatically

After the setup script completes, verify the full pipeline (upload, transcription, diarization, LLM summary) via the API:

```bash
# 1. Create a transcript
TRANSCRIPT_ID=$(curl -s -X POST 'http://localhost:1250/v1/transcripts' \
  -H 'Content-Type: application/json' \
  -d '{"name":"test-upload"}' | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")
echo "Created: $TRANSCRIPT_ID"

# 2. Upload an audio file (single-chunk upload)
curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}/record/upload?chunk_number=0&total_chunks=1" \
  -X POST -F "chunk=@/path/to/audio.mp3"

# 3. Poll until processing completes (status: ended or error)
while true; do
  STATUS=$(curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}" \
    | python3 -c "import sys,json; print(json.load(sys.stdin)['status'])")
  echo "Status: $STATUS"
  case "$STATUS" in ended|error) break;; esac
  sleep 10
done

# 4. Check the result
curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}" | python3 -m json.tool
```

Expected result: status `ended`, auto-generated `title`, `short_summary`, `long_summary`, and `transcript` text with `Speaker 0` / `Speaker 1` labels.

CPU-only processing is slow (~15 min for a 3 min audio file). Diarization finishes in ~3 min, transcription takes the rest.

## Troubleshooting

### Port conflicts (most common issue)

If the frontend or backend behaves unexpectedly (e.g., env vars seem ignored, changes don't take effect), **check for port conflicts first**:

```bash
# Check what's listening on key ports
lsof -i :3000   # frontend
lsof -i :1250   # backend
lsof -i :5432   # postgres
lsof -i :3900   # Garage S3 API
lsof -i :6379   # Redis

# Kill stale processes on a port
lsof -ti :3000 | xargs kill
```

Common causes:
- A stale `next dev` or `pnpm dev` process from another terminal/worktree
- Another Docker Compose project (different worktree) with containers on the same ports — the setup script only manages its own project; containers from other projects must be stopped manually (`docker ps` to find them, `docker stop` to kill them)

The setup script checks ports 3000, 1250, 5432, 6379, 3900, 3903 for conflicts before starting services. It ignores OrbStack/Docker Desktop port forwarding processes (which always bind these ports but are not real conflicts).

### OrbStack false port-conflict warnings (Mac)

If you use OrbStack as your Docker runtime, `lsof` will show OrbStack binding ports like 3000, 1250, etc. even when no containers are running. This is OrbStack's port forwarding mechanism — not a real conflict. The setup script filters these out automatically.

### Re-enabling authentication

Standalone runs without authentication (`FEATURE_REQUIRE_LOGIN=false`, `AUTH_BACKEND=none`). To re-enable:

1. In `www/.env.local`: set `FEATURE_REQUIRE_LOGIN=true`, uncomment `AUTHENTIK_ISSUER` and `AUTHENTIK_REFRESH_TOKEN_URL`
2. In `server/.env`: set `AUTH_BACKEND=authentik` (or your backend), configure `AUTH_JWT_AUDIENCE`
3. Restart: `docker compose -f docker-compose.yml -f docker-compose.standalone.yml up -d --force-recreate web server`

## What's NOT covered

These require external accounts and infrastructure that can't be scripted:

- **Live meeting rooms** — requires Daily.co account, S3 bucket, IAM roles
- **Authentication** — requires Authentik deployment and OAuth configuration
- **Hatchet workflows** — requires separate Hatchet setup for multitrack processing
- **Production deployment** — see [Deployment Guide](./overview)

## Current status

All steps implemented. The setup script handles everything end-to-end:

- Step 1 (Ollama/LLM) — implemented
- Step 2 (environment files) — implemented
- Step 3 (object storage / Garage) — implemented
- Step 4 (transcription/diarization) — implemented (self-hosted GPU service)
- Steps 5-7 (Docker, migrations, health) — implemented
- **Unified script**: `scripts/setup-standalone.sh`
@@ -1,39 +0,0 @@
FROM python:3.12-slim

ENV PYTHONUNBUFFERED=1 \
    UV_LINK_MODE=copy \
    UV_NO_CACHE=1

WORKDIR /tmp
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update \
    && apt-get install -y \
        ffmpeg \
        curl \
        ca-certificates \
        gnupg \
        wget
ADD https://astral.sh/uv/install.sh /uv-installer.sh
RUN sh /uv-installer.sh && rm /uv-installer.sh
ENV PATH="/root/.local/bin/:$PATH"

RUN mkdir -p /app
WORKDIR /app
COPY pyproject.toml uv.lock /app/

COPY ./app /app/app
COPY ./main.py /app/
COPY ./runserver.sh /app/

# prevent uv failing with too many open files on big cpus
ENV UV_CONCURRENT_INSTALLS=16

# first install
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --compile-bytecode --locked

EXPOSE 8000

CMD ["sh", "/app/runserver.sh"]
@@ -3,14 +3,14 @@ import os
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")


def apikey_auth(apikey: str | None = Depends(oauth2_scheme)):
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
    required_key = os.environ.get("REFLECTOR_GPU_APIKEY")
    if not required_key:
        return
    if apikey and apikey == required_key:
    if apikey == required_key:
        return
    raise HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
@@ -1,65 +1,10 @@
import logging
import os
import tarfile
import threading
from pathlib import Path
from urllib.request import urlopen

import torch
import torchaudio
import yaml
from pyannote.audio import Pipeline

logger = logging.getLogger(__name__)

S3_BUNDLE_URL = "https://reflector-public.s3.us-east-1.amazonaws.com/pyannote-speaker-diarization-3.1.tar.gz"
BUNDLE_CACHE_DIR = Path("/root/.cache/pyannote-bundle")


def _ensure_model(cache_dir: Path) -> str:
    """Download and extract S3 model bundle if not cached."""
    model_dir = cache_dir / "pyannote-speaker-diarization-3.1"
    config_path = model_dir / "config.yaml"

    if config_path.exists():
        logger.info("Using cached model bundle at %s", model_dir)
        return str(model_dir)

    cache_dir.mkdir(parents=True, exist_ok=True)
    tarball_path = cache_dir / "model.tar.gz"

    logger.info("Downloading model bundle from %s", S3_BUNDLE_URL)
    with urlopen(S3_BUNDLE_URL) as response, open(tarball_path, "wb") as f:
        while chunk := response.read(8192):
            f.write(chunk)

    logger.info("Extracting model bundle")
    with tarfile.open(tarball_path, "r:gz") as tar:
        tar.extractall(path=cache_dir, filter="data")
    tarball_path.unlink()

    _patch_config(model_dir, cache_dir)
    return str(model_dir)


def _patch_config(model_dir: Path, cache_dir: Path) -> None:
    """Rewrite config.yaml to reference local pytorch_model.bin paths."""
    config_path = model_dir / "config.yaml"
    with open(config_path) as f:
        config = yaml.safe_load(f)

    config["pipeline"]["params"]["segmentation"] = str(
        cache_dir / "pyannote-segmentation-3.0" / "pytorch_model.bin"
    )
    config["pipeline"]["params"]["embedding"] = str(
        cache_dir / "pyannote-wespeaker-voxceleb-resnet34-LM" / "pytorch_model.bin"
    )

    with open(config_path, "w") as f:
        yaml.dump(config, f)

    logger.info("Patched config.yaml with local model paths")


class PyannoteDiarizationService:
    def __init__(self):
@@ -69,20 +14,10 @@ class PyannoteDiarizationService:

    def load(self):
        self._device = "cuda" if torch.cuda.is_available() else "cpu"
        hf_token = os.environ.get("HF_TOKEN")

        if hf_token:
            logger.info("Loading pyannote model from HuggingFace (HF_TOKEN set)")
            self._pipeline = Pipeline.from_pretrained(
                "pyannote/speaker-diarization-3.1",
                use_auth_token=hf_token,
            )
        else:
            logger.info("HF_TOKEN not set — loading model from S3 bundle")
            model_path = _ensure_model(BUNDLE_CACHE_DIR)
            config_path = Path(model_path) / "config.yaml"
            self._pipeline = Pipeline.from_pretrained(str(config_path))

        self._pipeline = Pipeline.from_pretrained(
            "pyannote/speaker-diarization-3.1",
            use_auth_token=os.environ.get("HF_TOKEN"),
        )
        self._pipeline.to(torch.device(self._device))

    def diarize_file(self, file_path: str, timestamp: float = 0.0) -> dict:
@@ -1,14 +0,0 @@
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
replication_factor = 1

rpc_secret = "__GARAGE_RPC_SECRET__"
rpc_bind_addr = "[::]:3901"

[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "garage"
root_domain = ".s3.garage.localhost"

[admin]
api_bind_addr = "[::]:3903"
@@ -1,417 +0,0 @@
#!/usr/bin/env bash
#
# Standalone local development setup for Reflector.
# Takes a fresh clone to a working instance — no cloud accounts, no API keys.
#
# Usage:
#   ./scripts/setup-standalone.sh
#
# Idempotent — safe to re-run at any time.
#
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"

SERVER_ENV="$ROOT_DIR/server/.env"
WWW_ENV="$ROOT_DIR/www/.env.local"

MODEL="${LLM_MODEL:-qwen2.5:14b}"
OLLAMA_PORT="${OLLAMA_PORT:-11434}"

OS="$(uname -s)"

# --- Colors ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'

info() { echo -e "${CYAN}==>${NC} $*"; }
ok()   { echo -e "${GREEN} ✓${NC} $*"; }
warn() { echo -e "${YELLOW} !${NC} $*"; }
err()  { echo -e "${RED} ✗${NC} $*" >&2; }

# --- Helpers ---

wait_for_url() {
    local url="$1" label="$2" retries="${3:-30}" interval="${4:-2}"
    for i in $(seq 1 "$retries"); do
        if curl -sf "$url" > /dev/null 2>&1; then
            return 0
        fi
        echo -ne "\r  Waiting for $label... ($i/$retries)"
        sleep "$interval"
    done
    echo ""
    err "$label not responding at $url after $retries attempts"
    return 1
}

env_has_key() {
    local file="$1" key="$2"
    grep -q "^${key}=" "$file" 2>/dev/null
}

env_set() {
    local file="$1" key="$2" value="$3"
    if env_has_key "$file" "$key"; then
        # Replace existing value (portable sed)
        if [[ "$OS" == "Darwin" ]]; then
            sed -i '' "s|^${key}=.*|${key}=${value}|" "$file"
        else
            sed -i "s|^${key}=.*|${key}=${value}|" "$file"
        fi
    else
        echo "${key}=${value}" >> "$file"
    fi
}

resolve_symlink() {
    local file="$1"
    if [[ -L "$file" ]]; then
        warn "$(basename "$file") is a symlink — creating standalone copy"
        cp -L "$file" "$file.tmp"
        rm "$file"
        mv "$file.tmp" "$file"
    fi
}

compose_cmd() {
    local compose_files="-f $ROOT_DIR/docker-compose.yml -f $ROOT_DIR/docker-compose.standalone.yml"
    if [[ "$OS" == "Linux" ]] && [[ -n "${OLLAMA_PROFILE:-}" ]]; then
        docker compose $compose_files --profile "$OLLAMA_PROFILE" "$@"
    else
        docker compose $compose_files "$@"
    fi
}

# =========================================================
# Step 1: LLM / Ollama
# =========================================================
step_llm() {
    info "Step 1: LLM setup (Ollama + $MODEL)"

    case "$OS" in
        Darwin)
            if ! command -v ollama &> /dev/null; then
                err "Ollama not found. Install it:"
                err "  brew install ollama"
                err "  # or https://ollama.com/download"
                exit 1
            fi

            # Start if not running
            if ! curl -sf "http://localhost:$OLLAMA_PORT/api/tags" > /dev/null 2>&1; then
                info "Starting Ollama..."
                ollama serve &
                disown
            fi

            wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
            echo ""

            # Pull model if not already present
            if ollama list 2>/dev/null | awk '{print $1}' | grep -qx "$MODEL"; then
                ok "Model $MODEL already pulled"
            else
                info "Pulling model $MODEL (this may take a while)..."
                ollama pull "$MODEL"
            fi

            LLM_URL_VALUE="http://host.docker.internal:$OLLAMA_PORT/v1"
            ;;

        Linux)
            if command -v nvidia-smi &> /dev/null && nvidia-smi > /dev/null 2>&1; then
                ok "NVIDIA GPU detected — using ollama-gpu profile"
                OLLAMA_PROFILE="ollama-gpu"
                OLLAMA_SVC="ollama"
                LLM_URL_VALUE="http://ollama:$OLLAMA_PORT/v1"
            else
                warn "No NVIDIA GPU — using ollama-cpu profile"
                OLLAMA_PROFILE="ollama-cpu"
                OLLAMA_SVC="ollama-cpu"
                LLM_URL_VALUE="http://ollama-cpu:$OLLAMA_PORT/v1"
            fi

            info "Starting Ollama container..."
            compose_cmd up -d

            wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
            echo ""

            # Pull model inside container
            if compose_cmd exec "$OLLAMA_SVC" ollama list 2>/dev/null | awk '{print $1}' | grep -qx "$MODEL"; then
                ok "Model $MODEL already pulled"
            else
                info "Pulling model $MODEL inside container (this may take a while)..."
                compose_cmd exec "$OLLAMA_SVC" ollama pull "$MODEL"
            fi
            ;;

        *)
            err "Unsupported OS: $OS"
            exit 1
            ;;
    esac

    ok "LLM ready ($MODEL via Ollama)"
}

# =========================================================
# Step 2: Generate server/.env
# =========================================================
step_server_env() {
    info "Step 2: Generating server/.env"

    resolve_symlink "$SERVER_ENV"

    if [[ -f "$SERVER_ENV" ]]; then
        ok "server/.env already exists — ensuring standalone vars"
    else
        cat > "$SERVER_ENV" << 'ENVEOF'
# Generated by setup-standalone.sh — standalone local development
# Source of truth for settings: server/reflector/settings.py
ENVEOF
        ok "Created server/.env"
    fi

    # Ensure all standalone-critical vars (appends if missing, replaces if present)
    env_set "$SERVER_ENV" "DATABASE_URL" "postgresql+asyncpg://reflector:reflector@postgres:5432/reflector"
    env_set "$SERVER_ENV" "REDIS_HOST" "redis"
    env_set "$SERVER_ENV" "CELERY_BROKER_URL" "redis://redis:6379/1"
    env_set "$SERVER_ENV" "CELERY_RESULT_BACKEND" "redis://redis:6379/1"
    env_set "$SERVER_ENV" "AUTH_BACKEND" "none"
    env_set "$SERVER_ENV" "PUBLIC_MODE" "true"
    # TRANSCRIPT_BACKEND, TRANSCRIPT_URL, DIARIZATION_BACKEND, DIARIZATION_URL
    # are set via docker-compose.standalone.yml `environment:` overrides — not written here
    # so we don't clobber the user's server/.env for non-standalone use.
    env_set "$SERVER_ENV" "TRANSLATION_BACKEND" "passthrough"
    env_set "$SERVER_ENV" "LLM_URL" "$LLM_URL_VALUE"
    env_set "$SERVER_ENV" "LLM_MODEL" "$MODEL"
    env_set "$SERVER_ENV" "LLM_API_KEY" "not-needed"

    ok "Standalone vars set (LLM_URL=$LLM_URL_VALUE)"
}

# =========================================================
# Step 3: Object storage (Garage)
# =========================================================
step_storage() {
    info "Step 3: Object storage (Garage)"

    # Generate garage.toml from template (fill in RPC secret)
    GARAGE_TOML="$ROOT_DIR/scripts/garage.toml"
    GARAGE_TOML_RUNTIME="$ROOT_DIR/data/garage.toml"
    if [[ ! -f "$GARAGE_TOML_RUNTIME" ]]; then
        mkdir -p "$ROOT_DIR/data"
        RPC_SECRET=$(openssl rand -hex 32)
        sed "s|__GARAGE_RPC_SECRET__|${RPC_SECRET}|" "$GARAGE_TOML" > "$GARAGE_TOML_RUNTIME"
    fi

    compose_cmd up -d garage

    wait_for_url "http://localhost:3903/health" "Garage admin API"
    echo ""

    # Layout: get node ID, assign, apply (skip if already applied)
    NODE_ID=$(compose_cmd exec -T garage /garage node id -q 2>/dev/null | tr -d '[:space:]')
    LAYOUT_STATUS=$(compose_cmd exec -T garage /garage layout show 2>&1 || true)
    if echo "$LAYOUT_STATUS" | grep -q "No nodes"; then
        compose_cmd exec -T garage /garage layout assign "$NODE_ID" -c 1G -z dc1
        compose_cmd exec -T garage /garage layout apply --version 1
    fi

    # Create bucket (idempotent — skip if exists)
    if ! compose_cmd exec -T garage /garage bucket info reflector-media &>/dev/null; then
        compose_cmd exec -T garage /garage bucket create reflector-media
    fi

    # Create key (idempotent — skip if exists)
    CREATED_KEY=false
    if compose_cmd exec -T garage /garage key info reflector &>/dev/null; then
        ok "Key 'reflector' already exists"
    else
        KEY_OUTPUT=$(compose_cmd exec -T garage /garage key create reflector)
        CREATED_KEY=true
    fi

    # Grant bucket permissions (idempotent)
    compose_cmd exec -T garage /garage bucket allow reflector-media --read --write --key reflector

    # Set env vars (only parse key on first create — key info redacts the secret)
    env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_BACKEND" "aws"
    env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL" "http://garage:3900"
    env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_BUCKET_NAME" "reflector-media"
    env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_REGION" "garage"
    if [[ "$CREATED_KEY" == "true" ]]; then
        KEY_ID=$(echo "$KEY_OUTPUT" | grep -i "key id" | awk '{print $NF}')
        KEY_SECRET=$(echo "$KEY_OUTPUT" | grep -i "secret key" | awk '{print $NF}')
        env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID" "$KEY_ID"
        env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY" "$KEY_SECRET"
    fi

    ok "Object storage ready (Garage)"
}

# =========================================================
# Step 4: Generate www/.env.local
# =========================================================
step_www_env() {
    info "Step 4: Generating www/.env.local"

    resolve_symlink "$WWW_ENV"

    if [[ -f "$WWW_ENV" ]]; then
        ok "www/.env.local already exists — ensuring standalone vars"
    else
        cat > "$WWW_ENV" << 'ENVEOF'
# Generated by setup-standalone.sh — standalone local development
ENVEOF
        ok "Created www/.env.local"
    fi

    env_set "$WWW_ENV" "SITE_URL" "http://localhost:3000"
    env_set "$WWW_ENV" "NEXTAUTH_URL" "http://localhost:3000"
    env_set "$WWW_ENV" "NEXTAUTH_SECRET" "standalone-dev-secret-not-for-production"
    env_set "$WWW_ENV" "API_URL" "http://localhost:1250"
    env_set "$WWW_ENV" "WEBSOCKET_URL" "ws://localhost:1250"
    env_set "$WWW_ENV" "SERVER_API_URL" "http://server:1250"
    env_set "$WWW_ENV" "FEATURE_REQUIRE_LOGIN" "false"

    ok "Standalone www vars set"
}

# =========================================================
# Step 5: Start all services
# =========================================================
step_services() {
    info "Step 5: Starting Docker services"

    # Check for port conflicts — stale processes silently shadow Docker port mappings.
    # OrbStack/Docker Desktop bind ports for forwarding; ignore those PIDs.
    local ports_ok=true
    for port in 3000 1250 5432 6379 3900 3903; do
        local pids
        pids=$(lsof -ti :"$port" 2>/dev/null || true)
        for pid in $pids; do
            local pname
            pname=$(ps -p "$pid" -o comm= 2>/dev/null || true)
            # OrbStack and Docker Desktop own port forwarding — not real conflicts
            if [[ "$pname" == *"OrbStack"* ]] || [[ "$pname" == *"com.docker"* ]] || [[ "$pname" == *"vpnkit"* ]]; then
                continue
            fi
            warn "Port $port already in use by PID $pid ($pname)"
            warn "Kill it with: lsof -ti :$port | xargs kill"
            ports_ok=false
        done
    done
    if [[ "$ports_ok" == "false" ]]; then
        warn "Port conflicts detected — Docker containers may not be reachable"
        warn "Continuing anyway (services will start but may be shadowed)"
    fi

    # server runs alembic migrations on startup automatically (see runserver.sh)
    compose_cmd up -d postgres redis garage cpu server worker beat web
    ok "Containers started"
    info "Server is running migrations (alembic upgrade head)..."
}

# =========================================================
# Step 6: Health checks
# =========================================================
step_health() {
    info "Step 6: Health checks"

    # CPU service may take a while on first start (model download + load).
    # No host port exposed — check via docker exec.
    info "Waiting for CPU service (first start downloads ~1GB of models)..."
    local cpu_ok=false
    for i in $(seq 1 120); do
        if compose_cmd exec -T cpu curl -sf http://localhost:8000/docs > /dev/null 2>&1; then
            cpu_ok=true
            break
        fi
        echo -ne "\r  Waiting for CPU service... ($i/120)"
        sleep 5
    done
    echo ""
    if [[ "$cpu_ok" == "true" ]]; then
        ok "CPU service healthy (transcription + diarization)"
    else
        warn "CPU service not ready yet — it will keep loading in the background"
        warn "Check with: docker compose logs cpu"
    fi

    wait_for_url "http://localhost:1250/health" "Server API" 60 3
    echo ""
    ok "Server API healthy"

    wait_for_url "http://localhost:3000" "Frontend" 90 3
    echo ""
    ok "Frontend responding"

    # Check LLM reachability from inside a container
    if compose_cmd exec -T server \
        curl -sf "$LLM_URL_VALUE/models" > /dev/null 2>&1; then
        ok "LLM reachable from containers"
    else
        warn "LLM not reachable from containers at $LLM_URL_VALUE"
        warn "Summaries/topics/titles won't work until LLM is accessible"
    fi
}

# =========================================================
# Main
# =========================================================
main() {
    echo ""
    echo "=========================================="
    echo "  Reflector — Standalone Local Setup"
    echo "=========================================="
    echo ""

    # Ensure we're in the repo root
    if [[ ! -f "$ROOT_DIR/docker-compose.yml" ]]; then
        err "docker-compose.yml not found in $ROOT_DIR"
        err "Run this script from the repo root: ./scripts/setup-standalone.sh"
        exit 1
    fi

    # LLM_URL_VALUE is set by step_llm, used by later steps
    LLM_URL_VALUE=""
    OLLAMA_PROFILE=""

    # docker-compose.yml may reference env_files that don't exist yet;
    # touch them so compose_cmd works before the steps that populate them.
|
||||
touch "$SERVER_ENV" "$WWW_ENV"
|
||||
|
||||
step_llm
|
||||
echo ""
|
||||
step_server_env
|
||||
echo ""
|
||||
step_storage
|
||||
echo ""
|
||||
step_www_env
|
||||
echo ""
|
||||
step_services
|
||||
echo ""
|
||||
step_health
|
||||
|
||||
echo ""
|
||||
echo "=========================================="
|
||||
echo -e " ${GREEN}Reflector is running!${NC}"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
echo " Frontend: http://localhost:3000"
|
||||
echo " API: http://localhost:1250"
|
||||
echo ""
|
||||
echo " To stop: docker compose down"
|
||||
echo " To re-run: ./scripts/setup-standalone.sh"
|
||||
echo ""
|
||||
}
|
||||
|
||||
main "$@"
|
||||
@@ -66,22 +66,15 @@ TRANSLATE_URL=https://monadical-sas--reflector-translator-web.modal.run
## LLM backend (Required)
##
## Responsible for generating titles, summaries, and topic detection
## Supports any OpenAI-compatible endpoint.
## Requires OpenAI API key
## =======================================================

## --- Option A: Local LLM via Ollama (recommended for dev) ---
## Setup: ./scripts/setup-standalone.sh
## Mac: Ollama runs natively (Metal GPU). Containers reach it via host.docker.internal.
## Linux: docker compose --profile ollama-gpu up -d (or ollama-cpu for no GPU)
LLM_URL=http://host.docker.internal:11434/v1
LLM_MODEL=qwen2.5:14b
LLM_API_KEY=not-needed
## Linux with containerized Ollama: LLM_URL=http://ollama:11434/v1
## OpenAI API key - get from https://platform.openai.com/account/api-keys
LLM_API_KEY=sk-your-openai-api-key
LLM_MODEL=gpt-4o-mini

## --- Option B: Remote/cloud LLM ---
#LLM_API_KEY=sk-your-openai-api-key
#LLM_MODEL=gpt-4o-mini
## LLM_URL defaults to OpenAI when unset
## Optional: Custom endpoint (defaults to OpenAI)
# LLM_URL=https://api.openai.com/v1

## Context size for summary generation (tokens)
LLM_CONTEXT_WINDOW=16000

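Whichever option is active, the endpoint can be sanity-checked the same way the setup script's health step does from inside a container (a hedged example; adjust host and port to match your LLM_URL — here the Ollama default from Option A is assumed):

```
# From the host; /v1/models is part of the OpenAI-compatible surface
curl -sf http://localhost:11434/v1/models
```
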
@@ -86,7 +86,7 @@ Daily.co Room: "daily-private-igor-20260110042117"
| **Purpose** | Tracks active session state | Links recordings, transcripts, participants |
| **Scope** | Per room instance | Per Reflector room + timestamp |

**Critical Limitation:** Daily.co's recordings API often does NOT return `mtgSessionId`, requiring time-based matching (see [Time-Based Matching](#time-based-matching)).
**Critical Limitation:** Daily.co's recordings API often does NOT return `mtgSessionId` (can be null), requiring time-based matching (see [Time-Based Matching](#time-based-matching)).

### Recording

@@ -101,6 +101,30 @@ Daily.co Room: "daily-private-igor-20260110042117"

**Critical Behavior:** Recording **stops/restarts** create **separate recording objects** with unique IDs.

### instanceId (Reflector-Generated)

**Definition:** UUID we generate and send when starting recording via REST API.

**Generation:** Deterministic from meeting_id (see the sketch below)
- Cloud: `instanceId = meeting_id` directly
- Raw-tracks: `instanceId = UUIDv5(meeting_id, namespace)`

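A minimal sketch of the raw-tracks derivation, using Python's standard `uuid` module; the namespace constant here is illustrative, not the value Reflector actually uses:

```python
import uuid

# Hypothetical namespace constant — illustrative only
RAW_TRACKS_NAMESPACE = uuid.UUID("11111111-2222-3333-4444-555555555555")

def raw_tracks_instance_id(meeting_id: str) -> uuid.UUID:
    # UUIDv5 is deterministic: the same meeting_id always yields the same instanceId
    return uuid.uuid5(RAW_TRACKS_NAMESPACE, meeting_id)
```
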
**Key behaviors:**
- ✅ **Reuse allowed:** Same instanceId can be used after stop (validated 2026-01-20)
- ❌ **Not returned:** Daily.co does NOT echo instanceId back in GET /recordings response
- ✅ **Present in error webhooks:** `recording.error` webhook includes instanceId
- **Purpose:** Allows multiple concurrent recordings (cloud + raw-tracks) in same room

**Stop/restart example:**
```
Recording 1: POST /start with instanceId="779e6376..." → recording_id="ee00c4e8..."
Stop recording
Recording 2: POST /start with instanceId="779e6376..." (SAME) → recording_id="b702f509..." (DIFFERENT)
✅ Both succeed, different recording_ids returned
```

**Implication:** Cannot match recordings by instanceId (not in response) - must use recording_id.

---

## Entity Relationships
@@ -196,6 +220,19 @@ Daily.co Room: "daily-private-igor-20260110042117"

`mtgSessionId` identifies a **Daily.co meeting session** (not individual participants, not a room).

**Reliability:** Can be null or present in GET /recordings response (unreliable).

**When present:** Multiple recordings from same session (stop/restart with participants connected) share same mtgSessionId.

**Example (validated 2026-01-20):**
```json
Recording 1: {"id": "ee00c4e8...", "mtgSessionId": "92c4136a-a8da-41c5-9c45-e9a2baae6bd6"}
Recording 2: {"id": "b702f509...", "mtgSessionId": "92c4136a-a8da-41c5-9c45-e9a2baae6bd6"}
// Same mtgSessionId (stop/restart in same session)
```

**When null:** Common - Daily.co API does not reliably populate this field.

### session_id (Per-Participant)

**Different concept:** Per-participant connection identifier from webhooks.
@@ -220,16 +257,24 @@ TABLE daily_participant_session (

Daily.co's recordings API does not reliably return `mtgSessionId`, making it impossible to directly link recordings to meetings via Daily.co's identifiers.

**Example API response:**
**Example API response (mtgSessionId can be null OR present):**
```json
{
  "id": "recording-uuid",
  "room_name": "daily-private-igor-20260110042117",
  "start_ts": 1768018896,
  "mtgSessionId": null  ← Missing!
  "mtgSessionId": null  // ← Often null (unreliable)
}

// OR (when present):
{
  "id": "recording-uuid",
  "mtgSessionId": "92c4136a-a8da-41c5-9c45-e9a2baae6bd6"  // ← Sometimes present
}
```

**Key insight:** Cannot rely on mtgSessionId for matching (unreliable). instanceId also not returned. Only reliable identifier is recording.id.

### Solution: Time-Based Matching

**Implementation:** `reflector/db/meetings.py:get_by_room_name_and_time()`
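Illustratively, a caller resolving a recording to its meeting would look like this (a sketch; the surrounding handler context is hypothetical, only `get_by_room_name_and_time()` comes from the implementation referenced above):

```python
from datetime import datetime, timezone

from reflector.db.meetings import meetings_controller

async def resolve_meeting(rec: dict):
    # `rec` is one entry from GET /recordings (hypothetical caller context)
    recording_start = datetime.fromtimestamp(rec["start_ts"], tz=timezone.utc)
    # Closest meeting in the same Daily.co room within the 1-week window
    return await meetings_controller.get_by_room_name_and_time(
        room_name=rec["room_name"],
        recording_start=recording_start,
    )
```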
@@ -491,6 +536,10 @@ UI: User sees 3 separate transcripts


---
**Document Version:** 1.0
**Last Verified:** 2026-01-15
**Data Source:** Production database + Daily.co API inspection
**Document Version:** 1.1
**Last Updated:** 2026-01-20
**Data Source:** Production database + Daily.co API inspection + empirical testing
**Changes in 1.1:**
- Added instanceId behavior documentation (reuse allowed, not returned in API)
- Clarified mtgSessionId reliability (can be null or present)
- Added empirical validation of stop/restart behavior

@@ -1,35 +0,0 @@
"""drop_use_celery_column

Revision ID: 3aa20b96d963
Revises: e69f08ead8ea
Create Date: 2026-02-05 10:12:44.065279

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "3aa20b96d963"
down_revision: Union[str, None] = "e69f08ead8ea"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    with op.batch_alter_table("room", schema=None) as batch_op:
        batch_op.drop_column("use_celery")


def downgrade() -> None:
    with op.batch_alter_table("room", schema=None) as batch_op:
        batch_op.add_column(
            sa.Column(
                "use_celery",
                sa.Boolean(),
                server_default=sa.text("false"),
                nullable=False,
            )
        )
@@ -0,0 +1,67 @@
"""add_daily_recording_requests

Revision ID: f5b008fa8a14
Revises: 1b1e6a6fc465
Create Date: 2026-01-20 22:32:06.697144

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "f5b008fa8a14"
down_revision: Union[str, None] = "1b1e6a6fc465"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.create_table(
        "daily_recording_request",
        sa.Column("recording_id", sa.String(), nullable=False),
        sa.Column("meeting_id", sa.String(), nullable=False),
        sa.Column("instance_id", sa.String(), nullable=False),
        sa.Column("type", sa.String(), nullable=False),
        sa.Column("requested_at", sa.DateTime(timezone=True), nullable=False),
        sa.ForeignKeyConstraint(["meeting_id"], ["meeting.id"], ondelete="CASCADE"),
        sa.PrimaryKeyConstraint("recording_id"),
    )
    op.create_index("idx_meeting_id", "daily_recording_request", ["meeting_id"])
    op.create_index("idx_instance_id", "daily_recording_request", ["instance_id"])

    # Clean up orphaned recordings before adding FK constraint
    op.execute("""
        UPDATE recording SET status = 'orphan', meeting_id = NULL
        WHERE meeting_id IS NOT NULL
          AND meeting_id NOT IN (SELECT id FROM meeting)
    """)

    # Add FK constraint to recording table (cascade delete recordings when meeting deleted)
    op.execute("""
        ALTER TABLE recording ADD CONSTRAINT fk_recording_meeting
        FOREIGN KEY (meeting_id) REFERENCES meeting(id) ON DELETE CASCADE
    """)

    # Add CHECK constraints to enforce orphan invariants
    op.execute("""
        ALTER TABLE recording ADD CONSTRAINT chk_orphan_no_meeting
        CHECK (status != 'orphan' OR meeting_id IS NULL)
    """)
    op.execute("""
        ALTER TABLE recording ADD CONSTRAINT chk_non_orphan_has_meeting
        CHECK (status = 'orphan' OR meeting_id IS NOT NULL)
    """)


def downgrade() -> None:
    op.execute("ALTER TABLE recording DROP CONSTRAINT IF EXISTS chk_orphan_no_meeting")
    op.execute(
        "ALTER TABLE recording DROP CONSTRAINT IF EXISTS chk_non_orphan_has_meeting"
    )
    op.execute("ALTER TABLE recording DROP CONSTRAINT IF EXISTS fk_recording_meeting")
    op.drop_index("idx_instance_id", table_name="daily_recording_request")
    op.drop_index("idx_meeting_id", table_name="daily_recording_request")
    op.drop_table("daily_recording_request")
@@ -68,6 +68,7 @@ evaluation = [
    "pydantic>=2.1.1",
]
local = [
    "pyannote-audio>=3.3.2",
    "faster-whisper>=0.10.0",
]
silero-vad = [

@@ -22,8 +22,6 @@ def asynctask(f):
        await database.disconnect()

    coro = run_with_db()
    if current_task:
        return asyncio.run(coro)
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:

@@ -1,5 +1,11 @@
from typing import Annotated

from fastapi import Depends
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)


class UserInfo(BaseModel):
    sub: str
@@ -9,13 +15,13 @@ class AccessTokenInfo(BaseModel):
    pass


def authenticated():
def authenticated(token: Annotated[str, Depends(oauth2_scheme)]):
    return None


def current_user():
def current_user(token: Annotated[str, Depends(oauth2_scheme)]):
    return None


def current_user_optional():
def current_user_optional(token: Annotated[str, Depends(oauth2_scheme)]):
    return None

@@ -146,8 +146,6 @@ class DailyApiClient:
            )
            raise DailyApiError(operation, response)

        if not response.content:
            return {}
        return response.json()

    # ============================================================================

server/reflector/dailyco_api/recording_orphans.py (new file, 56 lines)
@@ -0,0 +1,56 @@
"""Utility for creating orphan recordings."""

import os
from datetime import datetime, timezone

from reflector.db.recordings import Recording, recordings_controller
from reflector.logger import logger
from reflector.utils.string import NonEmptyString


async def create_and_log_orphan(
    recording_id: NonEmptyString,
    bucket_name: str,
    room_name: str,
    start_ts: int,
    track_keys: list[str] | None,
    source: str,
) -> bool:
    """Create orphan recording and log if first occurrence.

    Args:
        recording_id: Daily.co recording ID
        bucket_name: S3 bucket (empty string for cloud recordings)
        room_name: Daily.co room name
        start_ts: Unix timestamp
        track_keys: Track keys for raw-tracks, None for cloud
        source: "webhook" or "polling" for logging

    Returns:
        True if created (first poller), False if already exists
    """
    if track_keys:
        object_key = os.path.dirname(track_keys[0]) if track_keys else room_name
    else:
        object_key = room_name

    created = await recordings_controller.create_orphan(
        Recording(
            id=recording_id,
            bucket_name=bucket_name,
            object_key=object_key,
            recorded_at=datetime.fromtimestamp(start_ts, tz=timezone.utc),
            track_keys=track_keys,
            meeting_id=None,
            status="orphan",
        )
    )

    if created:
        logger.error(
            f"Orphan recording ({source})",
            recording_id=recording_id,
            room_name=room_name,
        )

    return created
@@ -99,7 +99,7 @@ def extract_room_name(event: DailyWebhookEvent) -> str | None:
    >>> event = DailyWebhookEvent(**webhook_payload)
    >>> room_name = extract_room_name(event)
    """
    room = event.payload.get("room_name") or event.payload.get("room")
    room = event.payload.get("room_name")
    # Ensure we return a string, not any falsy value that might be in payload
    return room if isinstance(room, str) else None


@@ -6,7 +6,7 @@ Reference: https://docs.daily.co/reference/rest-api/webhooks

from typing import Annotated, Any, Dict, Literal, Union

from pydantic import AliasChoices, BaseModel, ConfigDict, Field, field_validator
from pydantic import BaseModel, Field, field_validator

from reflector.utils.string import NonEmptyString

@@ -41,8 +41,6 @@ class DailyTrack(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/recordings
    """

    model_config = ConfigDict(extra="ignore")

    type: Literal["audio", "video"]
    s3Key: NonEmptyString = Field(description="S3 object key for the track file")
    size: int = Field(description="File size in bytes")
@@ -56,8 +54,6 @@ class DailyWebhookEvent(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks
    """

    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString = Field(
        description="Represents the version of the event. This uses semantic versioning to inform a consumer if the payload has introduced any breaking changes"
    )
@@ -86,13 +82,7 @@ class ParticipantJoinedPayload(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-joined
    """

    model_config = ConfigDict(extra="ignore")

    room_name: NonEmptyString | None = Field(
        None,
        description="Daily.co room name",
        validation_alias=AliasChoices("room_name", "room"),
    )
    room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
    session_id: NonEmptyString = Field(description="Daily.co session identifier")
    user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
    user_name: NonEmptyString | None = Field(None, description="User display name")
@@ -110,13 +100,7 @@ class ParticipantLeftPayload(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-left
    """

    model_config = ConfigDict(extra="ignore")

    room_name: NonEmptyString | None = Field(
        None,
        description="Daily.co room name",
        validation_alias=AliasChoices("room_name", "room"),
    )
    room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
    session_id: NonEmptyString = Field(description="Daily.co session identifier")
    user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
    user_name: NonEmptyString | None = Field(None, description="User display name")
@@ -128,9 +112,6 @@ class ParticipantLeftPayload(BaseModel):
    _normalize_joined_at = field_validator("joined_at", mode="before")(
        normalize_timestamp_to_int
    )
    _normalize_duration = field_validator("duration", mode="before")(
        normalize_timestamp_to_int
    )


class RecordingStartedPayload(BaseModel):
@@ -140,8 +121,6 @@ class RecordingStartedPayload(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-started
    """

    model_config = ConfigDict(extra="ignore")

    room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
    recording_id: NonEmptyString = Field(description="Recording identifier")
    start_ts: int | None = Field(None, description="Recording start timestamp")
@@ -159,9 +138,7 @@ class RecordingReadyToDownloadPayload(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-ready-to-download
    """

    model_config = ConfigDict(extra="ignore")

    type: Literal["cloud", "cloud-audio-only", "raw-tracks"] = Field(
    type: Literal["cloud", "raw-tracks"] = Field(
        description="The type of recording that was generated"
    )
    recording_id: NonEmptyString = Field(
@@ -176,9 +153,8 @@ class RecordingReadyToDownloadPayload(BaseModel):
    status: Literal["finished"] = Field(
        description="The status of the given recording (always 'finished' in ready-to-download webhook, see RecordingStatus in responses.py for full API statuses)"
    )
    max_participants: int | None = Field(
        None,
        description="The number of participants on the call that were recorded (optional; Daily may omit it in some webhook versions)",
    max_participants: int = Field(
        description="The number of participants on the call that were recorded"
    )
    duration: int = Field(description="The duration in seconds of the call")
    s3_key: NonEmptyString = Field(
@@ -204,8 +180,6 @@ class RecordingErrorPayload(BaseModel):
    Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-error
    """

    model_config = ConfigDict(extra="ignore")

    action: Literal["clourd-recording-err", "cloud-recording-error"] = Field(
        description="A string describing the event that was emitted (both variants are documented)"
    )
@@ -226,8 +200,6 @@ class RecordingErrorPayload(BaseModel):


class ParticipantJoinedEvent(BaseModel):
    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString
    type: Literal["participant.joined"]
    id: NonEmptyString
@@ -240,8 +212,6 @@ class ParticipantJoinedEvent(BaseModel):


class ParticipantLeftEvent(BaseModel):
    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString
    type: Literal["participant.left"]
    id: NonEmptyString
@@ -254,8 +224,6 @@ class ParticipantLeftEvent(BaseModel):


class RecordingStartedEvent(BaseModel):
    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString
    type: Literal["recording.started"]
    id: NonEmptyString
@@ -268,8 +236,6 @@ class RecordingStartedEvent(BaseModel):


class RecordingReadyEvent(BaseModel):
    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString
    type: Literal["recording.ready-to-download"]
    id: NonEmptyString
@@ -282,8 +248,6 @@ class RecordingReadyEvent(BaseModel):


class RecordingErrorEvent(BaseModel):
    model_config = ConfigDict(extra="ignore")

    version: NonEmptyString
    type: Literal["recording.error"]
    id: NonEmptyString

@@ -26,6 +26,7 @@ def get_database() -> databases.Database:
    # import models
    import reflector.db.calendar_events  # noqa
    import reflector.db.daily_participant_sessions  # noqa
    import reflector.db.daily_recording_requests  # noqa
    import reflector.db.meetings  # noqa
    import reflector.db.recordings  # noqa
    import reflector.db.rooms  # noqa

server/reflector/db/daily_recording_requests.py (new file, 111 lines)
@@ -0,0 +1,111 @@
from datetime import datetime
from typing import Literal
from uuid import UUID

import sqlalchemy as sa
from pydantic import BaseModel
from sqlalchemy.dialects.postgresql import insert

from reflector.db import get_database, metadata
from reflector.utils.string import NonEmptyString

daily_recording_requests = sa.Table(
    "daily_recording_request",
    metadata,
    sa.Column("recording_id", sa.String, primary_key=True),
    sa.Column(
        "meeting_id",
        sa.String,
        sa.ForeignKey("meeting.id", ondelete="CASCADE"),
        nullable=False,
    ),
    sa.Column("instance_id", sa.String, nullable=False),
    sa.Column("type", sa.String, nullable=False),
    sa.Column("requested_at", sa.DateTime(timezone=True), nullable=False),
    sa.Index("idx_meeting_id", "meeting_id"),
    sa.Index("idx_instance_id", "instance_id"),
)


class DailyRecordingRequest(BaseModel):
    recording_id: NonEmptyString
    meeting_id: NonEmptyString
    instance_id: UUID
    type: Literal["cloud", "raw-tracks"]
    requested_at: datetime


class DailyRecordingRequestsController:
    async def create(self, request: DailyRecordingRequest) -> None:
        stmt = insert(daily_recording_requests).values(
            recording_id=request.recording_id,
            meeting_id=request.meeting_id,
            instance_id=str(request.instance_id),
            type=request.type,
            requested_at=request.requested_at,
        )
        stmt = stmt.on_conflict_do_nothing(index_elements=["recording_id"])
        await get_database().execute(stmt)

    async def find_by_recording_id(
        self,
        recording_id: NonEmptyString,
    ) -> tuple[NonEmptyString, Literal["cloud", "raw-tracks"]] | None:
        query = daily_recording_requests.select().where(
            daily_recording_requests.c.recording_id == recording_id
        )
        result = await get_database().fetch_one(query)

        if not result:
            return None

        req = DailyRecordingRequest(
            recording_id=result["recording_id"],
            meeting_id=result["meeting_id"],
            instance_id=UUID(result["instance_id"]),
            type=result["type"],
            requested_at=result["requested_at"],
        )
        return (req.meeting_id, req.type)

    async def find_by_instance_id(
        self,
        instance_id: UUID,
    ) -> list[DailyRecordingRequest]:
        """Multiple recordings can have same instance_id (stop/restart)."""
        query = daily_recording_requests.select().where(
            daily_recording_requests.c.instance_id == str(instance_id)
        )
        results = await get_database().fetch_all(query)
        return [
            DailyRecordingRequest(
                recording_id=r["recording_id"],
                meeting_id=r["meeting_id"],
                instance_id=UUID(r["instance_id"]),
                type=r["type"],
                requested_at=r["requested_at"],
            )
            for r in results
        ]

    async def get_by_meeting_id(
        self,
        meeting_id: NonEmptyString,
    ) -> list[DailyRecordingRequest]:
        query = daily_recording_requests.select().where(
            daily_recording_requests.c.meeting_id == meeting_id
        )
        results = await get_database().fetch_all(query)
        return [
            DailyRecordingRequest(
                recording_id=r["recording_id"],
                meeting_id=r["meeting_id"],
                instance_id=UUID(r["instance_id"]),
                type=r["type"],
                requested_at=r["requested_at"],
            )
            for r in results
        ]


daily_recording_requests_controller = DailyRecordingRequestsController()
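For orientation, a hedged sketch of how a recording handler might consult this table (the surrounding handler is hypothetical; the controller calls are the ones defined above):

```python
# Hypothetical webhook/polling handler fragment
match = await daily_recording_requests_controller.find_by_recording_id(recording_id)
if match is None:
    # Recording we never requested — falls through to the orphan path
    ...
else:
    meeting_id, recording_type = match  # recording_type: "cloud" | "raw-tracks"
```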
@@ -1,4 +1,4 @@
from datetime import datetime, timedelta
from datetime import datetime
from typing import Any, Literal

import sqlalchemy as sa
@@ -183,84 +183,6 @@ class MeetingController:
        results = await get_database().fetch_all(query)
        return [Meeting(**r) for r in results]

    async def get_by_room_name_and_time(
        self,
        room_name: NonEmptyString,
        recording_start: datetime,
        time_window_hours: int = 168,
    ) -> Meeting | None:
        """
        Get meeting by room name closest to recording timestamp.

        HACK ALERT: Daily.co doesn't return instanceId in recordings API response,
        and mtgSessionId is separate from our instanceId. Time-based matching is
        the least-bad workaround.

        This handles edge case of duplicate room_name values in DB (race conditions,
        double-clicks, etc.) by matching based on temporal proximity.

        Algorithm:
        1. Find meetings within time_window_hours of recording_start
        2. Return meeting with start_date closest to recording_start
        3. If tie, return first by meeting.id (deterministic)

        Args:
            room_name: Daily.co room name from recording
            recording_start: Timezone-aware datetime from recording.start_ts
            time_window_hours: Search window (default 168 = 1 week)

        Returns:
            Meeting closest to recording timestamp, or None if no matches

        Failure modes:
        - Multiple meetings in same room within ~5 minutes: picks closest
        - All meetings outside time window: returns None
        - Clock skew between Daily.co and DB: 1-week window tolerates this

        Why 1 week window:
        - Handles webhook failures (recording discovered days later)
        - Tolerates clock skew
        - Rejects unrelated meetings from weeks ago

        """
        # Validate timezone-aware datetime
        if recording_start.tzinfo is None:
            raise ValueError(
                f"recording_start must be timezone-aware, got naive datetime: {recording_start}"
            )

        window_start = recording_start - timedelta(hours=time_window_hours)
        window_end = recording_start + timedelta(hours=time_window_hours)

        query = (
            meetings.select()
            .where(
                sa.and_(
                    meetings.c.room_name == room_name,
                    meetings.c.start_date >= window_start,
                    meetings.c.start_date <= window_end,
                )
            )
            .order_by(meetings.c.start_date)
        )

        results = await get_database().fetch_all(query)
        if not results:
            return None

        candidates = [Meeting(**r) for r in results]

        # Find meeting with start_date closest to recording_start
        closest = min(
            candidates,
            key=lambda m: (
                abs((m.start_date - recording_start).total_seconds()),
                m.id,  # Tie-breaker: deterministic by UUID
            ),
        )

        return closest

    async def get_active(self, room: Room, current_time: datetime) -> Meeting | None:
        """
        Get latest active meeting for a room.
@@ -350,44 +272,6 @@ class MeetingController:
        query = meetings.update().where(meetings.c.id == meeting_id).values(**kwargs)
        await get_database().execute(query)

    async def set_cloud_recording_if_missing(
        self,
        meeting_id: NonEmptyString,
        s3_key: NonEmptyString,
        duration: int,
    ) -> bool:
        """
        Set cloud recording only if not already set.

        Returns True if updated, False if already set.
        Prevents webhook/polling race condition via atomic WHERE clause.
        """
        # Check current value before update to detect actual change
        meeting_before = await self.get_by_id(meeting_id)
        if not meeting_before:
            return False

        was_null = meeting_before.daily_composed_video_s3_key is None

        query = (
            meetings.update()
            .where(
                sa.and_(
                    meetings.c.id == meeting_id,
                    meetings.c.daily_composed_video_s3_key.is_(None),
                )
            )
            .values(
                daily_composed_video_s3_key=s3_key,
                daily_composed_video_duration=duration,
            )
        )
        await get_database().execute(query)

        # Return True only if value was NULL before (actual update occurred)
        # If was_null=False, the WHERE clause prevented the update
        return was_null

    async def increment_num_clients(self, meeting_id: str) -> None:
        """Atomically increment participant count."""
        query = (
@@ -467,6 +351,27 @@ class MeetingConsentController:
        result = await get_database().fetch_one(query)
        return result is not None

    async def set_cloud_recording_if_missing(
        self,
        meeting_id: NonEmptyString,
        s3_key: NonEmptyString,
        duration: int,
    ) -> bool:
        """Returns True if updated, False if already set."""
        query = (
            meetings.update()
            .where(
                meetings.c.id == meeting_id,
                meetings.c.daily_composed_video_s3_key.is_(None),
            )
            .values(
                daily_composed_video_s3_key=s3_key,
                daily_composed_video_duration=duration,
            )
        )
        result = await get_database().execute(query)
        return result.rowcount > 0


meetings_controller = MeetingController()
meeting_consent_controller = MeetingConsentController()

@@ -4,10 +4,10 @@ from typing import Literal
import sqlalchemy as sa
from pydantic import BaseModel, Field
from sqlalchemy import or_
from sqlalchemy.dialects.postgresql import insert

from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
from reflector.utils.string import NonEmptyString

recordings = sa.Table(
    "recording",
@@ -31,14 +31,13 @@ recordings = sa.Table(
class Recording(BaseModel):
    id: str = Field(default_factory=generate_uuid4)
    bucket_name: str
    # for single-track
    object_key: str
    recorded_at: datetime
    status: Literal["pending", "processing", "completed", "failed"] = "pending"
    status: Literal["pending", "processing", "completed", "failed", "orphan"] = (
        "pending"
    )
    meeting_id: str | None = None
    # for multitrack reprocessing
    # track_keys can be empty list [] if recording finished but no audio was captured (silence/muted)
    # None means not a multitrack recording, [] means multitrack with no tracks
    # None = single-track, [] = multitrack with no audio, [keys...] = multitrack with audio
    track_keys: list[str] | None = None

    @property
@@ -72,20 +71,6 @@ class RecordingController:
        query = recordings.delete().where(recordings.c.id == id)
        await get_database().execute(query)

    async def set_meeting_id(
        self,
        recording_id: NonEmptyString,
        meeting_id: NonEmptyString,
    ) -> None:
        """Link recording to meeting."""
        query = (
            recordings.update()
            .where(recordings.c.id == recording_id)
            .values(meeting_id=meeting_id)
        )
        await get_database().execute(query)

    # no check for existence
    async def get_by_ids(self, recording_ids: list[str]) -> list[Recording]:
        if not recording_ids:
            return []
@@ -104,9 +89,12 @@ class RecordingController:

        This is more efficient than fetching all recordings and filtering in Python.
        """
        from reflector.db.transcripts import (
            transcripts,  # noqa: PLC0415 cyclic import
        )
        # INLINE IMPORT REQUIRED: Circular dependency
        # - recordings.py needs transcripts table for JOIN query
        # - transcripts.py imports recordings_controller
        # - db/__init__.py loads recordings before transcripts (line 31 vs 33)
        # - Top-level import would fail during module initialization
        from reflector.db.transcripts import transcripts

        query = (
            recordings.select()
@@ -124,5 +112,27 @@ class RecordingController:
        recordings_list = [Recording(**row) for row in results]
        return [r for r in recordings_list if r.is_multitrack]

    async def try_create_with_meeting(self, recording: Recording) -> bool:
        """Returns True if created, False if already exists."""
        assert recording.meeting_id is not None, "meeting_id required for non-orphan"
        assert recording.status != "orphan", "use create_orphan for orphans"

        stmt = insert(recordings).values(**recording.model_dump())
        stmt = stmt.on_conflict_do_nothing(index_elements=["id"])
        result = await get_database().execute(stmt)

        return result.rowcount > 0

    async def create_orphan(self, recording: Recording) -> bool:
        """Returns True if created, False if already exists."""
        assert recording.status == "orphan", "status must be 'orphan'"
        assert recording.meeting_id is None, "meeting_id must be NULL for orphan"

        stmt = insert(recordings).values(**recording.model_dump())
        stmt = stmt.on_conflict_do_nothing(index_elements=["id"])
        result = await get_database().execute(stmt)

        return result.rowcount > 0


recordings_controller = RecordingController()

@@ -57,6 +57,12 @@ rooms = sqlalchemy.Table(
        sqlalchemy.String,
        nullable=False,
    ),
    sqlalchemy.Column(
        "use_celery",
        sqlalchemy.Boolean,
        nullable=False,
        server_default=false(),
    ),
    sqlalchemy.Column(
        "skip_consent",
        sqlalchemy.Boolean,
@@ -91,6 +97,7 @@ class Room(BaseModel):
    ics_last_sync: datetime | None = None
    ics_last_etag: str | None = None
    platform: Platform = Field(default_factory=lambda: settings.DEFAULT_VIDEO_PLATFORM)
    use_celery: bool = False
    skip_consent: bool = False


@@ -26,7 +26,6 @@ from reflector.db.rooms import rooms
from reflector.db.transcripts import SourceKind, TranscriptStatus, transcripts
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.settings import settings
from reflector.utils.string import NonEmptyString, try_parse_non_empty_string

DEFAULT_SEARCH_LIMIT = 20
@@ -397,7 +396,7 @@ class SearchController:
                    transcripts.c.user_id == params.user_id, rooms.c.is_shared
                )
            )
        elif not settings.PUBLIC_MODE:
        else:
            base_query = base_query.where(rooms.c.is_shared)
        if params.room_id:
            base_query = base_query.where(transcripts.c.room_id == params.room_id)

@@ -406,7 +406,7 @@ class TranscriptController:
            query = query.where(
                or_(transcripts.c.user_id == user_id, rooms.c.is_shared)
            )
        elif not settings.PUBLIC_MODE:
        else:
            query = query.where(rooms.c.is_shared)

        if source_kind:

@@ -12,9 +12,7 @@ import threading

from hatchet_sdk import ClientConfig, Hatchet
from hatchet_sdk.clients.rest.models import V1TaskStatus
from hatchet_sdk.rate_limit import RateLimitDuration

from reflector.hatchet.constants import LLM_RATE_LIMIT_KEY, LLM_RATE_LIMIT_PER_SECOND
from reflector.logger import logger
from reflector.settings import settings

@@ -115,26 +113,3 @@ class HatchetClientManager:
        """Reset the client instance (for testing)."""
        with cls._lock:
            cls._instance = None

    @classmethod
    async def ensure_rate_limit(cls) -> None:
        """Ensure the LLM rate limit exists in Hatchet.

        Uses the Hatchet SDK rate_limits client (aio_put). See:
        https://docs.hatchet.run/sdks/python/feature-clients/rate_limits
        """
        logger.info(
            "[Hatchet] Ensuring rate limit exists",
            rate_limit_key=LLM_RATE_LIMIT_KEY,
            limit=LLM_RATE_LIMIT_PER_SECOND,
        )
        client = cls.get_client()
        await client.rate_limits.aio_put(
            key=LLM_RATE_LIMIT_KEY,
            limit=LLM_RATE_LIMIT_PER_SECOND,
            duration=RateLimitDuration.SECOND,
        )
        logger.info(
            "[Hatchet] Rate limit put successfully",
            rate_limit_key=LLM_RATE_LIMIT_KEY,
        )

@@ -3,8 +3,6 @@ LLM/I/O worker pool for all non-CPU tasks.
Handles: all tasks except mixdown_tracks (transcription, LLM inference, orchestration)
"""

import asyncio

from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
    daily_multitrack_pipeline,
@@ -22,15 +20,6 @@ POOL = "llm-io"
def main():
    hatchet = HatchetClientManager.get_client()

    try:
        asyncio.run(HatchetClientManager.ensure_rate_limit())
    except Exception as e:
        logger.warning(
            "[Hatchet] Rate limit initialization failed, but continuing. "
            "If workflows fail to register, rate limits may need to be created manually.",
            error=str(e),
        )

    logger.info(
        "Starting Hatchet LLM worker pool (all tasks except mixdown)",
        worker_name=WORKER_NAME,

@@ -171,13 +171,11 @@ async def set_workflow_error_status(transcript_id: NonEmptyString) -> bool:

def _spawn_storage():
    """Create fresh storage instance."""
    # TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
    return AwsStorage(
        aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
        aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
        aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
        aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
    )


@@ -49,13 +49,11 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
    from reflector.settings import settings  # noqa: PLC0415
    from reflector.storage.storage_aws import AwsStorage  # noqa: PLC0415

    # TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
    storage = AwsStorage(
        aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
        aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
        aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
        aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
    )

    source_url = await storage.get_file_url(

@@ -60,7 +60,6 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:

    try:
        # Create fresh storage instance to avoid aioboto3 fork issues
        # TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
        from reflector.settings import settings  # noqa: PLC0415
        from reflector.storage.storage_aws import AwsStorage  # noqa: PLC0415

@@ -69,7 +68,6 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
        aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
        aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
        aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
    )

    source_url = await storage.get_file_url(
@@ -161,7 +159,6 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
        raise ValueError("Missing padded_key from pad_track")

    # Presign URL on demand (avoids stale URLs on workflow replay)
    # TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
    from reflector.settings import settings  # noqa: PLC0415
    from reflector.storage.storage_aws import AwsStorage  # noqa: PLC0415

@@ -170,7 +167,6 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
        aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
        aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
        aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
    )

    audio_url = await storage.get_file_url(

@@ -144,18 +144,7 @@ class StructuredOutputWorkflow(Workflow, Generic[OutputT]):
        )

        # Network retries handled by OpenAILike (max_retries=3)
        # response_format enables grammar-based constrained decoding on backends
        # that support it (DMR/llama.cpp, vLLM, Ollama, OpenAI).
        response = await Settings.llm.acomplete(
            json_prompt,
            response_format={
                "type": "json_schema",
                "json_schema": {
                    "name": self.output_cls.__name__,
                    "schema": self.output_cls.model_json_schema(),
                },
            },
        )
        response = await Settings.llm.acomplete(json_prompt)
        return ExtractionDone(output=response.text)

    @step

server/reflector/processors/audio_diarization_pyannote.py (new file, 74 lines)
@@ -0,0 +1,74 @@
|
||||
import os
|
||||
|
||||
import torch
|
||||
import torchaudio
|
||||
from pyannote.audio import Pipeline
|
||||
|
||||
from reflector.processors.audio_diarization import AudioDiarizationProcessor
|
||||
from reflector.processors.audio_diarization_auto import AudioDiarizationAutoProcessor
|
||||
from reflector.processors.types import AudioDiarizationInput, DiarizationSegment
|
||||
|
||||
|
||||
class AudioDiarizationPyannoteProcessor(AudioDiarizationProcessor):
|
||||
"""Local diarization processor using pyannote.audio library"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
model_name: str = "pyannote/speaker-diarization-3.1",
|
||||
pyannote_auth_token: str | None = None,
|
||||
device: str | None = None,
|
||||
**kwargs,
|
||||
):
|
||||
super().__init__(**kwargs)
|
||||
self.model_name = model_name
|
||||
self.auth_token = pyannote_auth_token or os.environ.get("HF_TOKEN")
|
||||
self.device = device
|
||||
|
||||
if device is None:
|
||||
self.device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||
|
||||
self.logger.info(f"Loading pyannote diarization model: {self.model_name}")
|
||||
self.diarization_pipeline = Pipeline.from_pretrained(
|
||||
self.model_name, use_auth_token=self.auth_token
|
||||
)
|
||||
self.diarization_pipeline.to(torch.device(self.device))
|
||||
self.logger.info(f"Diarization model loaded on device: {self.device}")
|
||||
|
||||
async def _diarize(self, data: AudioDiarizationInput) -> list[DiarizationSegment]:
|
||||
try:
|
||||
# Load audio file (audio_url is assumed to be a local file path)
|
||||
self.logger.info(f"Loading local audio file: {data.audio_url}")
|
||||
waveform, sample_rate = torchaudio.load(data.audio_url)
|
||||
audio_input = {"waveform": waveform, "sample_rate": sample_rate}
|
||||
self.logger.info("Running speaker diarization")
|
||||
diarization = self.diarization_pipeline(audio_input)
|
||||
|
||||
# Convert pyannote diarization output to our format
|
||||
segments = []
|
||||
for segment, _, speaker in diarization.itertracks(yield_label=True):
|
||||
# Extract speaker number from label (e.g., "SPEAKER_00" -> 0)
|
||||
speaker_id = 0
|
||||
if speaker.startswith("SPEAKER_"):
|
||||
try:
|
||||
speaker_id = int(speaker.split("_")[-1])
|
||||
except (ValueError, IndexError):
|
||||
# Fallback to hash-based ID if parsing fails
|
||||
speaker_id = hash(speaker) % 1000
|
||||
|
||||
segments.append(
|
||||
{
|
||||
"start": round(segment.start, 3),
|
||||
"end": round(segment.end, 3),
|
||||
"speaker": speaker_id,
|
||||
}
|
||||
)
|
||||
|
||||
self.logger.info(f"Diarization completed with {len(segments)} segments")
|
||||
return segments
|
||||
|
||||
except Exception as e:
|
||||
self.logger.exception(f"Diarization failed: {e}")
|
||||
raise
|
||||
|
||||
|
||||
AudioDiarizationAutoProcessor.register("pyannote", AudioDiarizationPyannoteProcessor)
|
||||
@@ -15,10 +15,14 @@ from hatchet_sdk.clients.rest.exceptions import ApiException, NotFoundException
|
||||
from hatchet_sdk.clients.rest.models import V1TaskStatus
|
||||
|
||||
from reflector.db.recordings import recordings_controller
|
||||
from reflector.db.rooms import rooms_controller
|
||||
from reflector.db.transcripts import Transcript, transcripts_controller
|
||||
from reflector.hatchet.client import HatchetClientManager
|
||||
from reflector.logger import logger
|
||||
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
|
||||
from reflector.pipelines.main_multitrack_pipeline import (
|
||||
task_pipeline_multitrack_process,
|
||||
)
|
||||
from reflector.utils.string import NonEmptyString
|
||||
|
||||
|
||||
@@ -97,11 +101,8 @@ async def validate_transcript_for_processing(
|
||||
if transcript.locked:
|
||||
return ValidationLocked(detail="Recording is locked")
|
||||
|
||||
if (
|
||||
transcript.status == "idle"
|
||||
and not transcript.workflow_run_id
|
||||
and not transcript.recording_id
|
||||
):
|
||||
# Check if recording is ready for processing
|
||||
if transcript.status == "idle" and not transcript.workflow_run_id:
|
||||
return ValidationNotReady(detail="Recording is not ready for processing")
|
||||
|
||||
# Check Celery tasks
|
||||
@@ -180,98 +181,124 @@ async def dispatch_transcript_processing(
|
||||
Returns AsyncResult for Celery tasks, None for Hatchet workflows.
|
||||
"""
|
||||
if isinstance(config, MultitrackProcessingConfig):
|
||||
# Multitrack processing always uses Hatchet (no Celery fallback)
|
||||
# First check if we can replay (outside transaction since it's read-only)
|
||||
transcript = await transcripts_controller.get_by_id(config.transcript_id)
|
||||
if transcript and transcript.workflow_run_id and not force:
|
||||
can_replay = await HatchetClientManager.can_replay(
|
||||
transcript.workflow_run_id
|
||||
use_celery = False
|
||||
if config.room_id:
|
||||
room = await rooms_controller.get_by_id(config.room_id)
|
||||
use_celery = room.use_celery if room else False
|
||||
|
||||
use_hatchet = not use_celery
|
||||
|
||||
if use_celery:
|
||||
logger.info(
|
||||
"Room uses legacy Celery processing",
|
||||
room_id=config.room_id,
|
||||
transcript_id=config.transcript_id,
|
||||
)
|
||||
if can_replay:
|
||||
await HatchetClientManager.replay_workflow(transcript.workflow_run_id)
|
||||
logger.info(
|
||||
"Replaying Hatchet workflow",
|
||||
workflow_id=transcript.workflow_run_id,
|
||||
|
||||
if use_hatchet:
|
||||
# First check if we can replay (outside transaction since it's read-only)
|
||||
transcript = await transcripts_controller.get_by_id(config.transcript_id)
|
||||
if transcript and transcript.workflow_run_id and not force:
|
||||
can_replay = await HatchetClientManager.can_replay(
|
||||
transcript.workflow_run_id
|
||||
)
|
||||
return None
|
||||
else:
|
||||
# Workflow can't replay (CANCELLED, COMPLETED, or 404 deleted)
|
||||
# Log and proceed to start new workflow
|
||||
if can_replay:
|
||||
await HatchetClientManager.replay_workflow(
|
||||
transcript.workflow_run_id
|
||||
)
|
||||
logger.info(
|
||||
"Replaying Hatchet workflow",
|
||||
workflow_id=transcript.workflow_run_id,
|
||||
)
|
||||
return None
|
||||
else:
|
||||
# Workflow can't replay (CANCELLED, COMPLETED, or 404 deleted)
|
||||
# Log and proceed to start new workflow
|
||||
try:
|
||||
status = await HatchetClientManager.get_workflow_run_status(
|
||||
transcript.workflow_run_id
|
||||
)
|
||||
logger.info(
|
||||
"Old workflow not replayable, starting new",
|
||||
old_workflow_id=transcript.workflow_run_id,
|
||||
old_status=status.value,
|
||||
)
|
||||
except NotFoundException:
|
||||
# Workflow deleted from Hatchet but ID still in DB
logger.info(
"Old workflow not found in Hatchet, starting new",
old_workflow_id=transcript.workflow_run_id,
)

# Force: cancel old workflow if exists
if force and transcript and transcript.workflow_run_id:
try:
await HatchetClientManager.cancel_workflow(
transcript.workflow_run_id
)
logger.info(
"Cancelled old workflow (--force)",
workflow_id=transcript.workflow_run_id,
)
except NotFoundException:
logger.info(
"Old workflow already deleted (--force)",
workflow_id=transcript.workflow_run_id,
)
await transcripts_controller.update(
transcript, {"workflow_run_id": None}
)

# Re-fetch and check for concurrent dispatch (optimistic approach).
# No database lock - worst case is duplicate dispatch, but Hatchet
# workflows are idempotent so this is acceptable.
transcript = await transcripts_controller.get_by_id(config.transcript_id)
if transcript and transcript.workflow_run_id:
# Another process started a workflow between validation and now
try:
status = await HatchetClientManager.get_workflow_run_status(
transcript.workflow_run_id
)
logger.info(
"Old workflow not replayable, starting new",
old_workflow_id=transcript.workflow_run_id,
old_status=status.value,
)
except NotFoundException:
# Workflow deleted from Hatchet but ID still in DB
logger.info(
"Old workflow not found in Hatchet, starting new",
old_workflow_id=transcript.workflow_run_id,
)
if status in (V1TaskStatus.RUNNING, V1TaskStatus.QUEUED):
logger.info(
"Concurrent workflow detected, skipping dispatch",
workflow_id=transcript.workflow_run_id,
)
return None
except ApiException:
# Workflow might be gone (404) or API issue - proceed with new workflow
pass

# Force: cancel old workflow if exists
if force and transcript and transcript.workflow_run_id:
try:
await HatchetClientManager.cancel_workflow(transcript.workflow_run_id)
logger.info(
"Cancelled old workflow (--force)",
workflow_id=transcript.workflow_run_id,
)
except NotFoundException:
logger.info(
"Old workflow already deleted (--force)",
workflow_id=transcript.workflow_run_id,
)
await transcripts_controller.update(transcript, {"workflow_run_id": None})

# Re-fetch and check for concurrent dispatch (optimistic approach).
# No database lock - worst case is duplicate dispatch, but Hatchet
# workflows are idempotent so this is acceptable.
transcript = await transcripts_controller.get_by_id(config.transcript_id)
if transcript and transcript.workflow_run_id:
# Another process started a workflow between validation and now
try:
status = await HatchetClientManager.get_workflow_run_status(
transcript.workflow_run_id
)
if status in (V1TaskStatus.RUNNING, V1TaskStatus.QUEUED):
logger.info(
"Concurrent workflow detected, skipping dispatch",
workflow_id=transcript.workflow_run_id,
)
return None
except ApiException:
# Workflow might be gone (404) or API issue - proceed with new workflow
pass

workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": config.recording_id,
"tracks": [{"s3_key": k} for k in config.track_keys],
"bucket_name": config.bucket_name,
"transcript_id": config.transcript_id,
"room_id": config.room_id,
},
additional_metadata={
"transcript_id": config.transcript_id,
"recording_id": config.recording_id,
"daily_recording_id": config.recording_id,
},
)

if transcript:
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": config.recording_id,
"tracks": [{"s3_key": k} for k in config.track_keys],
"bucket_name": config.bucket_name,
"transcript_id": config.transcript_id,
"room_id": config.room_id,
},
additional_metadata={
"transcript_id": config.transcript_id,
"recording_id": config.recording_id,
"daily_recording_id": config.recording_id,
},
)

logger.info("Hatchet workflow dispatched", workflow_id=workflow_id)
return None
if transcript:
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)

logger.info("Hatchet workflow dispatched", workflow_id=workflow_id)
return None

# Celery pipeline (durable workflows disabled)
return task_pipeline_multitrack_process.delay(
transcript_id=config.transcript_id,
bucket_name=config.bucket_name,
track_keys=config.track_keys,
)
elif isinstance(config, FileProcessingConfig):
return task_pipeline_file_process.delay(transcript_id=config.transcript_id)
else:
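Aside from the diff itself, the optimistic re-fetch guard above is worth isolating. A minimal sketch, assuming an idempotent downstream workflow (the `repo` and `engine` helpers are illustrative, not the repo's actual API):

```python
# Minimal sketch of the optimistic dispatch guard: re-read state just before
# dispatch and skip if another process already started a workflow. Duplicate
# dispatches are tolerated because the workflow is assumed idempotent.
async def dispatch_once(transcript_id: str, repo, engine) -> str | None:
    transcript = await repo.get_by_id(transcript_id)
    if transcript and transcript.workflow_run_id:
        status = await engine.get_status(transcript.workflow_run_id)
        if status in ("RUNNING", "QUEUED"):
            return None  # concurrent dispatch won the race
    workflow_id = await engine.start("DiarizationPipeline", {"transcript_id": transcript_id})
    if transcript:
        await repo.update(transcript, {"workflow_run_id": workflow_id})
    return workflow_id
```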
@@ -1,7 +1,7 @@
from pydantic.types import PositiveInt
from pydantic_settings import BaseSettings, SettingsConfigDict

from reflector.schemas.platform import DAILY_PLATFORM, Platform
from reflector.schemas.platform import WHEREBY_PLATFORM, Platform
from reflector.utils.string import NonEmptyString


@@ -49,7 +49,6 @@ class Settings(BaseSettings):
TRANSCRIPT_STORAGE_AWS_REGION: str = "us-east-1"
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None
TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL: str | None = None

# Platform-specific recording storage (follows {PREFIX}_STORAGE_AWS_{CREDENTIAL} pattern)
# Whereby storage configuration
@@ -85,7 +84,9 @@ class Settings(BaseSettings):
)

# Diarization
# backend: modal — HTTP API client (works with Modal.com OR self-hosted gpu/self_hosted/)
# backends:
# - pyannote: in-process model loading (no HTTP, runs in same process)
# - modal: HTTP API client (works with Modal.com OR self-hosted gpu/self_hosted/)
DIARIZATION_ENABLED: bool = True
DIARIZATION_BACKEND: str = "modal"
DIARIZATION_URL: str | None = None
@@ -94,6 +95,9 @@ class Settings(BaseSettings):
# Diarization: modal backend
DIARIZATION_MODAL_API_KEY: str | None = None

# Diarization: local pyannote.audio
DIARIZATION_PYANNOTE_AUTH_TOKEN: str | None = None

# Audio Padding (Modal.com backend)
PADDING_URL: str | None = None
PADDING_MODAL_API_KEY: str | None = None
@@ -151,7 +155,7 @@ class Settings(BaseSettings):
None # Webhook UUID for this environment. Not used by production code
)
# Platform Configuration
DEFAULT_VIDEO_PLATFORM: Platform = DAILY_PLATFORM
DEFAULT_VIDEO_PLATFORM: Platform = WHEREBY_PLATFORM

# Zulip integration
ZULIP_REALM: str | None = None

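For readers unfamiliar with pydantic-settings, a minimal sketch of how defaults like these resolve (field set shortened and names reused only for illustration, not the full Settings class):

```python
# Minimal sketch: pydantic-settings reads matching environment variables first
# and falls back to the class default, so the platform flip above changes the
# out-of-the-box behavior but can still be overridden per deployment.
from pydantic_settings import BaseSettings

class MiniSettings(BaseSettings):
    DEFAULT_VIDEO_PLATFORM: str = "whereby"  # env var of the same name wins
    DIARIZATION_BACKEND: str = "modal"

settings = MiniSettings()  # DEFAULT_VIDEO_PLATFORM=daily in the env flips it back
```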
@@ -53,7 +53,6 @@ class AwsStorage(Storage):
aws_access_key_id: str | None = None,
aws_secret_access_key: str | None = None,
aws_role_arn: str | None = None,
aws_endpoint_url: str | None = None,
):
if not aws_bucket_name:
raise ValueError("Storage `aws_storage` require `aws_bucket_name`")
@@ -74,26 +73,17 @@ class AwsStorage(Storage):
self._access_key_id = aws_access_key_id
self._secret_access_key = aws_secret_access_key
self._role_arn = aws_role_arn
self._endpoint_url = aws_endpoint_url

self.aws_folder = ""
if "/" in aws_bucket_name:
self._bucket_name, self.aws_folder = aws_bucket_name.split("/", 1)

config_kwargs: dict = {"retries": {"max_attempts": 3, "mode": "adaptive"}}
if aws_endpoint_url:
config_kwargs["s3"] = {"addressing_style": "path"}
self.boto_config = Config(**config_kwargs)

self.boto_config = Config(retries={"max_attempts": 3, "mode": "adaptive"})
self.session = aioboto3.Session(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
region_name=aws_region,
)
if aws_endpoint_url:
self.base_url = f"{aws_endpoint_url}/{self._bucket_name}/"
else:
self.base_url = f"https://{self._bucket_name}.s3.amazonaws.com/"
self.base_url = f"https://{self._bucket_name}.s3.amazonaws.com/"

# Implement credential properties
@property
@@ -149,9 +139,7 @@ class AwsStorage(Storage):
s3filename = f"{folder}/{filename}" if folder else filename
logger.info(f"Uploading {filename} to S3 {actual_bucket}/{folder}")

async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
if isinstance(data, bytes):
await client.put_object(Bucket=actual_bucket, Key=s3filename, Body=data)
else:
@@ -174,9 +162,7 @@ class AwsStorage(Storage):
actual_bucket = bucket or self._bucket_name
folder = self.aws_folder
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
presigned_url = await client.generate_presigned_url(
operation,
Params={"Bucket": actual_bucket, "Key": s3filename},
@@ -191,9 +177,7 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Deleting {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
await client.delete_object(Bucket=actual_bucket, Key=s3filename)

@handle_s3_client_errors("download")
@@ -202,9 +186,7 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Downloading {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
response = await client.get_object(Bucket=actual_bucket, Key=s3filename)
return await response["Body"].read()

@@ -219,9 +201,7 @@ class AwsStorage(Storage):
logger.info(f"Listing objects from S3 {actual_bucket} with prefix '{s3prefix}'")

keys = []
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
paginator = client.get_paginator("list_objects_v2")
async for page in paginator.paginate(Bucket=actual_bucket, Prefix=s3prefix):
if "Contents" in page:
@@ -247,9 +227,7 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Streaming {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
async with self.session.client("s3", config=self.boto_config) as client:
await client.download_fileobj(
Bucket=actual_bucket, Key=s3filename, Fileobj=fileobj
)

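The recurring change in this file threads a custom endpoint through every S3 client. A minimal self-contained sketch of the pattern, with placeholder bucket/endpoint values (error handling omitted):

```python
# Minimal sketch of the endpoint_url + path-style addressing pattern above.
# Non-AWS S3 endpoints (e.g., MinIO) usually require path-style addressing.
import aioboto3
from botocore.config import Config

async def list_keys(bucket: str, endpoint_url: str | None = None) -> list[str]:
    config_kwargs: dict = {"retries": {"max_attempts": 3, "mode": "adaptive"}}
    if endpoint_url:
        config_kwargs["s3"] = {"addressing_style": "path"}
    session = aioboto3.Session()
    async with session.client(
        "s3", config=Config(**config_kwargs), endpoint_url=endpoint_url
    ) as s3:
        resp = await s3.list_objects_v2(Bucket=bucket)
        return [obj["Key"] for obj in resp.get("Contents", [])]
```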
@@ -129,6 +129,10 @@ class DailyClient(VideoPlatformClient):
"""Get room presence/session data for a Daily.co room."""
return await self._api_client.get_room_presence(room_name)

async def delete_room(self, room_name: str) -> None:
"""Delete a Daily.co room (idempotent - succeeds even if room doesn't exist)."""
return await self._api_client.delete_room(room_name)

async def get_meeting_participants(
self, meeting_id: str
) -> MeetingParticipantsResponse:

@@ -1,4 +1,6 @@
import json
import os
from datetime import datetime, timezone
from typing import assert_never

from fastapi import APIRouter, HTTPException, Request
@@ -12,7 +14,10 @@ from reflector.dailyco_api import (
RecordingReadyEvent,
RecordingStartedEvent,
)
from reflector.dailyco_api.recording_orphans import create_and_log_orphan
from reflector.db.daily_recording_requests import daily_recording_requests_controller
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import Recording, recordings_controller
from reflector.logger import logger as _logger
from reflector.settings import settings
from reflector.video_platforms.factory import create_platform_client
@@ -80,14 +85,7 @@ async def webhook(request: Request):
try:
event = event_adapter.validate_python(body_json)
except Exception as e:
err_detail = str(e)
if hasattr(e, "errors"):
err_detail = f"{err_detail}; errors={e.errors()!r}"
logger.error(
"Failed to parse webhook event",
error=err_detail,
body=body.decode(),
)
logger.error("Failed to parse webhook event", error=str(e), body=body.decode())
raise HTTPException(status_code=422, detail="Invalid event format")

match event:
@@ -219,10 +217,73 @@ async def _handle_recording_ready(event: RecordingReadyEvent):

track_keys = [t.s3Key for t in tracks if t.type == "audio"]

# Lookup request
match = await daily_recording_requests_controller.find_by_recording_id(
recording_id
)

if not match:
await create_and_log_orphan(
recording_id=recording_id,
bucket_name=bucket_name,
room_name=room_name,
start_ts=event.payload.start_ts,
track_keys=track_keys,
source="webhook",
)
return

meeting_id, _ = match

# Verify meeting exists
meeting = await meetings_controller.get_by_id(meeting_id)
if not meeting:
logger.error(
"Meeting not found (webhook)",
recording_id=recording_id,
meeting_id=meeting_id,
)
await create_and_log_orphan(
recording_id=recording_id,
bucket_name=bucket_name,
room_name=room_name,
start_ts=event.payload.start_ts,
track_keys=track_keys,
source="webhook",
)
return

# Create recording atomically
created = await recordings_controller.try_create_with_meeting(
Recording(
id=recording_id,
bucket_name=bucket_name,
object_key=(
os.path.dirname(track_keys[0]) if track_keys else room_name
),
recorded_at=datetime.fromtimestamp(
event.payload.start_ts, tz=timezone.utc
),
track_keys=track_keys,
meeting_id=meeting_id,
status="pending",
)
)

if not created:
# Already created (polling got it first)
logger.debug(
"Recording already exists (webhook late)",
recording_id=recording_id,
meeting_id=meeting_id,
)
return

logger.info(
"Raw-tracks recording queuing processing",
"Raw-tracks recording queuing processing (webhook)",
recording_id=recording_id,
room_name=room_name,
meeting_id=meeting_id,
num_tracks=len(track_keys),
)


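The webhook/polling race here is resolved by a single atomic insert. A minimal sketch of that dedup, assuming a unique primary key on the recording id (the SQLAlchemy-style code and helper names are illustrative, not the repo's controller):

```python
# Minimal sketch of atomic create-or-skip dedup between webhook and polling:
# whichever writer inserts first wins; the other sees zero affected rows.
from sqlalchemy.dialects.postgresql import insert

async def try_create_with_meeting(db, recordings_table, recording_row: dict) -> bool:
    stmt = (
        insert(recordings_table)
        .values(**recording_row)
        .on_conflict_do_nothing(index_elements=["id"])  # loser inserts nothing
    )
    result = await db.execute(stmt)
    return result.rowcount == 1  # True only for the writer that won the race
```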
@@ -1,4 +1,5 @@
import json
import logging
from datetime import datetime, timezone
from typing import Annotated, Any, Optional
from uuid import UUID
@@ -9,16 +10,21 @@ from pydantic import BaseModel
import reflector.auth as auth
from reflector.dailyco_api import RecordingType
from reflector.dailyco_api.client import DailyApiError
from reflector.db.daily_recording_requests import (
DailyRecordingRequest,
daily_recording_requests_controller,
)
from reflector.db.meetings import (
MeetingConsent,
meeting_consent_controller,
meetings_controller,
)
from reflector.db.rooms import rooms_controller
from reflector.logger import logger
from reflector.utils.string import NonEmptyString
from reflector.video_platforms.factory import create_platform_client

logger = logging.getLogger(__name__)

router = APIRouter()


@@ -102,13 +108,6 @@ async def start_recording(
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")

log = logger.bind(
meeting_id=meeting_id,
room_name=meeting.room_name,
recording_type=body.type,
instance_id=body.instanceId,
)

try:
client = create_platform_client("daily")
result = await client.start_recording(
@@ -117,9 +116,30 @@ async def start_recording(
instance_id=body.instanceId,
)

log.info(f"Started {body.type} recording via REST API")
recording_id = result["id"]

return {"status": "ok", "result": result}
await daily_recording_requests_controller.create(
DailyRecordingRequest(
recording_id=recording_id,
meeting_id=meeting_id,
instance_id=body.instanceId,
type=body.type,
requested_at=datetime.now(timezone.utc),
)
)

logger.info(
f"Started {body.type} recording via REST API",
extra={
"meeting_id": meeting_id,
"room_name": meeting.room_name,
"recording_type": body.type,
"instance_id": body.instanceId,
"recording_id": recording_id,
},
)

return {"status": "ok", "recording_id": recording_id}

except DailyApiError as e:
# Parse Daily.co error response to detect "has an active stream"
@@ -130,22 +150,42 @@ async def start_recording(
# "has an active stream" means recording already started by another participant
# This is SUCCESS from business logic perspective - return 200
if "has an active stream" in error_info:
log.info(
f"{body.type} recording already active (started by another participant)"
logger.info(
f"{body.type} recording already active (started by another participant)",
extra={
"meeting_id": meeting_id,
"room_name": meeting.room_name,
"recording_type": body.type,
"instance_id": body.instanceId,
},
)
return {"status": "already_active", "instanceId": str(body.instanceId)}
except (json.JSONDecodeError, KeyError):
pass # Fall through to error handling

# All other Daily.co API errors
log.error(f"Failed to start {body.type} recording", error=str(e))
logger.error(
f"Failed to start {body.type} recording",
extra={
"meeting_id": meeting_id,
"recording_type": body.type,
"error": str(e),
},
)
raise HTTPException(
status_code=500, detail=f"Failed to start recording: {str(e)}"
)

except Exception as e:
# Non-Daily.co errors
log.error(f"Failed to start {body.type} recording", error=str(e))
logger.error(
f"Failed to start {body.type} recording",
extra={
"meeting_id": meeting_id,
"recording_type": body.type,
"error": str(e),
},
)
raise HTTPException(
status_code=500, detail=f"Failed to start recording: {str(e)}"
)

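The key business rule above is that one specific Daily.co error is success, not failure. A minimal sketch of that classification in isolation (the error-body shape and "has an active stream" text are taken from the diff; everything else is illustrative):

```python
# Minimal sketch: map the Daily.co "already recording" error to success,
# and everything else to an error outcome.
import json

def classify_start_recording_error(raw_body: str) -> str:
    try:
        error_info = json.loads(raw_body).get("info", "")
    except (json.JSONDecodeError, AttributeError):
        return "error"
    # Another participant already started this recording instance.
    return "already_active" if "has an active stream" in error_info else "error"
```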
@@ -20,6 +20,7 @@ from reflector.services.ics_sync import ics_sync_service
from reflector.settings import settings
from reflector.utils.url import add_query_param
from reflector.video_platforms.factory import create_platform_client
from reflector.worker.process import poll_daily_room_presence_task
from reflector.worker.webhook import test_webhook

logger = logging.getLogger(__name__)
@@ -365,6 +366,53 @@ async def rooms_create_meeting(
return meeting


@router.post("/rooms/{room_name}/meetings/{meeting_id}/joined")
async def rooms_joined_meeting(
room_name: str,
meeting_id: str,
):
"""Trigger presence poll (ideally when user actually joins meeting in Daily iframe)"""
room = await rooms_controller.get_by_name(room_name)
if not room:
raise HTTPException(status_code=404, detail="Room not found")

meeting = await meetings_controller.get_by_id(meeting_id, room=room)
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")

if meeting.platform == "daily":
poll_daily_room_presence_task.delay(meeting_id)

return {"status": "ok"}


@router.post("/rooms/{room_name}/meetings/{meeting_id}/leave")
async def rooms_leave_meeting(
room_name: str,
meeting_id: str,
delay_seconds: int = 2,
):
"""Trigger presence recheck when user leaves meeting (e.g., tab close/navigation).

Queues presence poll with optional delay to allow Daily.co to detect disconnect.
"""
room = await rooms_controller.get_by_name(room_name)
if not room:
raise HTTPException(status_code=404, detail="Room not found")

meeting = await meetings_controller.get_by_id(meeting_id, room=room)
if not meeting:
raise HTTPException(status_code=404, detail="Meeting not found")

if meeting.platform == "daily":
poll_daily_room_presence_task.apply_async(
args=[meeting_id],
countdown=delay_seconds,
)

return {"status": "ok"}


@router.post("/rooms/{room_id}/webhook/test", response_model=WebhookTestResult)
async def rooms_test_webhook(
room_id: str,

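A minimal sketch of how a client might drive the two new endpoints above (the real frontend is a browser app; httpx, the base URL, and the call sites here are illustrative):

```python
# Minimal sketch of the client-side presence hooks: poll immediately on join,
# and poll again shortly after leave so Daily.co can register the disconnect.
import httpx

async def notify_presence(base_url: str, room_name: str, meeting_id: str) -> None:
    async with httpx.AsyncClient(base_url=base_url) as http:
        # On iframe join: trigger an immediate presence poll.
        await http.post(f"/rooms/{room_name}/meetings/{meeting_id}/joined")
        # On tab close/navigation: recheck after a short grace period.
        await http.post(
            f"/rooms/{room_name}/meetings/{meeting_id}/leave",
            params={"delay_seconds": 2},
        )
```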
@@ -5,7 +5,7 @@ from fastapi import APIRouter, Depends, HTTPException, UploadFile
from pydantic import BaseModel

import reflector.auth as auth
from reflector.db.transcripts import SourceKind, transcripts_controller
from reflector.db.transcripts import transcripts_controller
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process

router = APIRouter()
@@ -88,10 +88,8 @@ async def transcript_record_upload(
finally:
container.close()

# set the status to "uploaded" and mark as file source
await transcripts_controller.update(
transcript, {"status": "uploaded", "source_kind": SourceKind.FILE}
)
# set the status to "uploaded"
await transcripts_controller.update(transcript, {"status": "uploaded"})

# launch a background task to process the file
task_pipeline_file_process.delay(transcript_id=transcript_id)

@@ -1,6 +1,6 @@
from typing import Optional

from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from fastapi import APIRouter, WebSocket

from reflector.auth.auth_jwt import JWTAuth # type: ignore
from reflector.db.users import user_controller
@@ -60,8 +60,6 @@ async def user_events_websocket(websocket: WebSocket):
try:
while True:
await websocket.receive()
except (RuntimeError, WebSocketDisconnect):
pass
finally:
if room_id:
await ws_manager.remove_user_from_room(room_id, websocket)

@@ -1,6 +1,5 @@
import json
import os
import re
from datetime import datetime, timezone
from typing import List, Literal
from urllib.parse import unquote
@@ -13,10 +12,12 @@ from celery.utils.log import get_task_logger
from pydantic import ValidationError

from reflector.dailyco_api import FinishedRecordingResponse, RecordingResponse
from reflector.dailyco_api.recording_orphans import create_and_log_orphan
from reflector.db.daily_participant_sessions import (
DailyParticipantSession,
daily_participant_sessions_controller,
)
from reflector.db.daily_recording_requests import daily_recording_requests_controller
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import Recording, recordings_controller
from reflector.db.rooms import rooms_controller
@@ -27,6 +28,9 @@ from reflector.db.transcripts import (
from reflector.hatchet.client import HatchetClientManager
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
from reflector.pipelines.main_live_pipeline import asynctask
from reflector.pipelines.main_multitrack_pipeline import (
task_pipeline_multitrack_process,
)
from reflector.pipelines.topic_processing import EmptyPipeline
from reflector.processors import AudioFileWriterProcessor
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
@@ -227,79 +231,44 @@ async def _process_multitrack_recording_inner(
recording_start_ts: int,
):
"""
Process multitrack recording (first time or reprocessing).
Process multitrack recording.

For first processing (webhook/polling):
- Uses recording_start_ts for time-based meeting matching (no instanceId available)

For reprocessing:
- Uses recording.meeting_id directly (already linked during first processing)
- recording_start_ts is ignored
Recording must already exist with meeting_id set (created by webhook/polling before queueing).
"""

tz = timezone.utc
recorded_at = datetime.now(tz)
try:
if track_keys:
folder = os.path.basename(os.path.dirname(track_keys[0]))
ts_match = re.search(r"(\d{14})$", folder)
if ts_match:
ts = ts_match.group(1)
recorded_at = datetime.strptime(ts, "%Y%m%d%H%M%S").replace(tzinfo=tz)
except Exception as e:
logger.warning(
f"Could not parse recorded_at from keys, using now() {recorded_at}",
e,
exc_info=True,
)

# Check if recording already exists (reprocessing path)
# Get recording (must exist - created by webhook/polling)
recording = await recordings_controller.get_by_id(recording_id)

if recording and recording.meeting_id:
# Reprocessing: recording exists with meeting already linked
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting:
logger.error(
"Reprocessing: meeting not found for recording - skipping",
meeting_id=recording.meeting_id,
recording_id=recording_id,
)
return
if not recording:
logger.error(
"Recording not found - should have been created by webhook/polling",
recording_id=recording_id,
)
return

logger.info(
"Reprocessing: using existing recording.meeting_id",
if not recording.meeting_id:
logger.error(
"Recording has no meeting_id - orphan should not be queued",
recording_id=recording_id,
meeting_id=meeting.id,
room_name=daily_room_name,
)
else:
# First processing: recording doesn't exist, need time-based matching
# (Daily.co doesn't return instanceId in API, must match by timestamp)
recording_start = datetime.fromtimestamp(recording_start_ts, tz=timezone.utc)
meeting = await meetings_controller.get_by_room_name_and_time(
room_name=daily_room_name,
recording_start=recording_start,
time_window_hours=168, # 1 week
)
if not meeting:
logger.error(
"Raw-tracks: no meeting found within 1-week window (time-based match) - skipping",
recording_id=recording_id,
room_name=daily_room_name,
recording_start_ts=recording_start_ts,
recording_start=recording_start.isoformat(),
)
return # Skip processing, will retry on next poll
logger.info(
"First processing: found meeting via time-based matching",
meeting_id=meeting.id,
room_name=daily_room_name,
return

# Get meeting
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting:
logger.error(
"Meeting not found for recording",
meeting_id=recording.meeting_id,
recording_id=recording_id,
time_delta_seconds=abs(
(meeting.start_date - recording_start).total_seconds()
),
)
return

logger.info(
"Processing multitrack recording",
recording_id=recording_id,
meeting_id=meeting.id,
room_name=daily_room_name,
)

room_name_base = extract_base_room_name(daily_room_name)

@@ -307,33 +276,6 @@ async def _process_multitrack_recording_inner(
if not room:
raise Exception(f"Room not found: {room_name_base}")

if not recording:
# Create recording (only happens during first processing)
object_key_dir = os.path.dirname(track_keys[0]) if track_keys else ""
recording = await recordings_controller.create(
Recording(
id=recording_id,
bucket_name=bucket_name,
object_key=object_key_dir,
recorded_at=recorded_at,
meeting_id=meeting.id,
track_keys=track_keys,
)
)
elif not recording.meeting_id:
# Recording exists but meeting_id is null (failed first processing)
# Update with meeting from time-based matching
await recordings_controller.set_meeting_id(
recording_id=recording.id,
meeting_id=meeting.id,
)
recording.meeting_id = meeting.id
logger.info(
"Updated existing recording with meeting_id",
recording_id=recording.id,
meeting_id=meeting.id,
)

transcript = await transcripts_controller.get_by_recording_id(recording.id)
if not transcript:
transcript = await transcripts_controller.add(
@@ -348,29 +290,49 @@
room_id=room.id,
)

# Multitrack processing always uses Hatchet (no Celery fallback)
workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": recording_id,
"tracks": [{"s3_key": k} for k in filter_cam_audio_tracks(track_keys)],
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id,
},
additional_metadata={
"transcript_id": transcript.id,
"recording_id": recording_id,
"daily_recording_id": recording_id,
},
)
logger.info(
"Started Hatchet workflow",
workflow_id=workflow_id,
transcript_id=transcript.id,
)
use_celery = room and room.use_celery
use_hatchet = not use_celery

await transcripts_controller.update(transcript, {"workflow_run_id": workflow_id})
if use_celery:
logger.info(
"Room uses legacy Celery processing",
room_id=room.id,
transcript_id=transcript.id,
)

if use_hatchet:
workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": recording_id,
"tracks": [{"s3_key": k} for k in filter_cam_audio_tracks(track_keys)],
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id,
},
additional_metadata={
"transcript_id": transcript.id,
"recording_id": recording_id,
"daily_recording_id": recording_id,
},
)
logger.info(
"Started Hatchet workflow",
workflow_id=workflow_id,
transcript_id=transcript.id,
)

await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)
return

# Celery pipeline (runs when durable workflows disabled)
task_pipeline_multitrack_process.delay(
transcript_id=transcript.id,
bucket_name=bucket_name,
track_keys=filter_cam_audio_tracks(track_keys),
)


@shared_task
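The hunk above replaces a Hatchet-only path with per-room routing. A minimal sketch of the decision, assuming the `room.use_celery` flag from the diff (the dispatch callables are illustrative):

```python
# Minimal sketch of per-room engine routing: rooms can opt back into the
# legacy Celery pipeline via a flag; Hatchet is the default path.
async def dispatch_processing(room, transcript_id: str, start_hatchet, start_celery) -> None:
    if room is not None and room.use_celery:
        start_celery(transcript_id=transcript_id)  # legacy per-room opt-out
        return
    # Persisting the returned workflow id on the transcript enables replay/cancel.
    await start_hatchet(transcript_id=transcript_id)
```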
@@ -499,7 +461,7 @@ async def store_cloud_recording(
Store cloud recording reference in meeting table.

Common function for both webhook and polling code paths.
Uses time-based matching to handle duplicate room_name values.
Uses direct recording_id lookup via daily_recording_requests table.

Args:
recording_id: Daily.co recording ID
@@ -512,155 +474,170 @@ async def store_cloud_recording(
Returns:
True if stored, False if skipped/failed
"""
recording_start = datetime.fromtimestamp(start_ts, tz=timezone.utc)
# Lookup request
match = await daily_recording_requests_controller.find_by_recording_id(recording_id)

meeting = await meetings_controller.get_by_room_name_and_time(
room_name=room_name,
recording_start=recording_start,
time_window_hours=168, # 1 week
)

if not meeting:
logger.warning(
f"Cloud recording ({source}): no meeting found within 1-week window",
if not match:
# ORPHAN: No request found (pre-migration recording or failed request creation)
await create_and_log_orphan(
recording_id=recording_id,
bucket_name="",
room_name=room_name,
recording_start_ts=start_ts,
recording_start=recording_start.isoformat(),
start_ts=start_ts,
track_keys=None,
source=source,
)
return False

meeting_id, _ = match

success = await meetings_controller.set_cloud_recording_if_missing(
meeting_id=meeting.id,
meeting_id=meeting_id,
s3_key=s3_key,
duration=duration,
)

if not success:
logger.debug(
f"Cloud recording ({source}): already set (race lost)",
f"Cloud recording ({source}): already set (stop/restart?)",
recording_id=recording_id,
room_name=room_name,
meeting_id=meeting.id,
meeting_id=meeting_id,
)
return False

logger.info(
f"Cloud recording stored via {source} (time-based match)",
meeting_id=meeting.id,
f"Cloud recording stored via {source}",
meeting_id=meeting_id,
recording_id=recording_id,
s3_key=s3_key,
duration=duration,
time_delta_seconds=abs((meeting.start_date - recording_start).total_seconds()),
)
return True


async def _poll_cloud_recordings(cloud_recordings: List[FinishedRecordingResponse]):
"""
Store cloud recordings missing from meeting table via polling.
"""Process cloud recordings (database deduplication, worker-agnostic).

Uses time-based matching via store_cloud_recording().
Cloud recordings stored in meeting.daily_composed_video_s3_key, not recording table.
Only first cloud recording per meeting is kept (existing behavior).
"""
if not cloud_recordings:
return

stored_count = 0
for recording in cloud_recordings:
# Extract S3 key from recording (cloud recordings use s3key field)
s3_key = recording.s3key or (recording.s3.key if recording.s3 else None)
if not s3_key:
logger.warning(
"Cloud recording: missing S3 key",
recording_id=recording.id,
room_name=recording.room_name,
for rec in cloud_recordings:
# Lookup request
match = await daily_recording_requests_controller.find_by_recording_id(rec.id)

if not match:
await create_and_log_orphan(
recording_id=rec.id,
bucket_name="",
room_name=rec.room_name,
start_ts=rec.start_ts,
track_keys=None,
source="polling",
)
continue

stored = await store_cloud_recording(
recording_id=recording.id,
room_name=recording.room_name,
s3_key=s3_key,
duration=recording.duration,
start_ts=recording.start_ts,
source="polling",
)
if stored:
stored_count += 1
meeting_id, _ = match

logger.info(
"Cloud recording polling complete",
total=len(cloud_recordings),
stored=stored_count,
)
if not rec.s3key:
logger.error("Cloud recording missing s3_key", recording_id=rec.id)
continue

# Store in meeting table (atomic, only if not already set)
success = await meetings_controller.set_cloud_recording_if_missing(
meeting_id=meeting_id,
s3_key=rec.s3key,
duration=rec.duration,
)

if success:
logger.info(
"Stored cloud recording", recording_id=rec.id, meeting_id=meeting_id
)
else:
logger.warning(
"Cloud recording already exists for meeting (stop/restart?)",
recording_id=rec.id,
meeting_id=meeting_id,
)


async def _poll_raw_tracks_recordings(
raw_tracks_recordings: List[FinishedRecordingResponse],
bucket_name: str,
):
"""Queue raw-tracks recordings missing from DB (existing logic)."""
bucket_name: NonEmptyString,
) -> None:
"""Process raw-tracks (database deduplication, worker-agnostic)."""
if not raw_tracks_recordings:
return

recording_ids = [rec.id for rec in raw_tracks_recordings]
existing_recordings = await recordings_controller.get_by_ids(recording_ids)
existing_ids = {rec.id for rec in existing_recordings}
for rec in raw_tracks_recordings:
# Lookup request FIRST (before any DB writes)
match = await daily_recording_requests_controller.find_by_recording_id(rec.id)

missing_recordings = [
rec for rec in raw_tracks_recordings if rec.id not in existing_ids
]

if not missing_recordings:
logger.debug(
"All raw-tracks recordings already in DB",
api_count=len(raw_tracks_recordings),
existing_count=len(existing_recordings),
)
return

logger.info(
"Found raw-tracks recordings missing from DB",
missing_count=len(missing_recordings),
total_api_count=len(raw_tracks_recordings),
existing_count=len(existing_recordings),
)

for recording in missing_recordings:
if not recording.tracks:
logger.warning(
"Finished raw-tracks recording has no tracks (no audio captured)",
recording_id=recording.id,
room_name=recording.room_name,
if not match:
await create_and_log_orphan(
recording_id=rec.id,
bucket_name=bucket_name,
room_name=rec.room_name,
start_ts=rec.start_ts,
track_keys=[t.s3Key for t in rec.tracks if t.type == "audio"],
source="polling",
)
continue

track_keys = [t.s3Key for t in recording.tracks if t.type == "audio"]
meeting_id, _ = match

if not track_keys:
logger.warning(
"No audio tracks found in raw-tracks recording",
recording_id=recording.id,
room_name=recording.room_name,
total_tracks=len(recording.tracks),
# Verify meeting exists
meeting = await meetings_controller.get_by_id(meeting_id)
if not meeting:
logger.error(
"Meeting not found", recording_id=rec.id, meeting_id=meeting_id
)
await create_and_log_orphan(
recording_id=rec.id,
bucket_name=bucket_name,
room_name=rec.room_name,
start_ts=rec.start_ts,
track_keys=[t.s3Key for t in rec.tracks if t.type == "audio"],
source="polling",
)
continue

logger.info(
"Queueing missing raw-tracks recording for processing",
recording_id=recording.id,
room_name=recording.room_name,
track_count=len(track_keys),
# DEDUPLICATION: Atomically create recording (single operation, no race window)
# ON CONFLICT → concurrent poller already got it, skip entire logic
track_keys = [t.s3Key for t in rec.tracks if t.type == "audio"]

created = await recordings_controller.try_create_with_meeting(
Recording(
id=rec.id,
bucket_name=bucket_name,
object_key=os.path.dirname(track_keys[0]) if track_keys else "",
recorded_at=datetime.fromtimestamp(rec.start_ts, tz=timezone.utc),
track_keys=track_keys,
meeting_id=meeting_id, # Set at creation (constraint-safe)
status="pending",
)
)

if not created:
# Conflict: another poller already created/queued this
# Skip all remaining logic (match already done by winner)
continue

# Only winner reaches here - queue processing (works with Celery or Hatchet)
process_multitrack_recording.delay(
recording_id=rec.id,
daily_room_name=rec.room_name,
recording_start_ts=rec.start_ts,
bucket_name=bucket_name,
daily_room_name=recording.room_name,
recording_id=recording.id,
track_keys=track_keys,
recording_start_ts=recording.start_ts,
)

logger.info("Queued recording", recording_id=rec.id, meeting_id=meeting_id)


async def poll_daily_room_presence(meeting_id: str) -> None:
"""Poll Daily.co room presence and reconcile with DB sessions. New presence is added, old presence is marked as closed.
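The structural change in this file is replacing fuzzy time-window matching with a direct lookup keyed on the recording id captured at start-recording time. A minimal sketch of the new resolution path (helper names are illustrative):

```python
# Minimal sketch: resolve a finished recording to its meeting via the
# daily_recording_requests table instead of room name + 1-week time window.
async def resolve_meeting_id(requests_controller, recording_id: str) -> str | None:
    match = await requests_controller.find_by_recording_id(recording_id)
    if not match:
        return None  # orphan: pre-migration recording or failed request creation
    meeting_id, _ = match
    return meeting_id
```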
@@ -822,15 +799,47 @@ async def process_meetings():
end_date = end_date.replace(tzinfo=timezone.utc)

client = create_platform_client(meeting.platform)
room_sessions = await client.get_room_sessions(meeting.room_name)
has_active_sessions = False
has_had_sessions = False

has_active_sessions = bool(
room_sessions and any(s.ended_at is None for s in room_sessions)
)
has_had_sessions = bool(room_sessions)
logger_.info(
f"has_active_sessions={has_active_sessions}, has_had_sessions={has_had_sessions}"
)
if meeting.platform == "daily":
try:
presence = await client.get_room_presence(meeting.room_name)
has_active_sessions = presence.total_count > 0

room_sessions = await client.get_room_sessions(
meeting.room_name
)
has_had_sessions = bool(room_sessions)

logger_.info(
"Daily.co presence check",
has_active_sessions=has_active_sessions,
has_had_sessions=has_had_sessions,
presence_count=presence.total_count,
)
except Exception:
logger_.warning(
"Daily.co presence API failed, falling back to DB sessions",
exc_info=True,
)
room_sessions = await client.get_room_sessions(
meeting.room_name
)
has_active_sessions = bool(
room_sessions
and any(s.ended_at is None for s in room_sessions)
)
has_had_sessions = bool(room_sessions)
else:
room_sessions = await client.get_room_sessions(meeting.room_name)
has_active_sessions = bool(
room_sessions and any(s.ended_at is None for s in room_sessions)
)
has_had_sessions = bool(room_sessions)
logger_.info(
f"has_active_sessions={has_active_sessions}, has_had_sessions={has_had_sessions}"
)

if has_active_sessions:
logger_.debug("Meeting still has active sessions, keep it")
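The liveness decision above is presence-first: the Daily.co presence API is the source of truth, and DB-derived sessions are only a fallback when that call fails. A minimal sketch of the same decision in isolation (client methods are the ones shown in this diff):

```python
# Minimal sketch of the presence-first liveness check with DB-session fallback.
async def meeting_is_live(client, room_name: str) -> bool:
    try:
        presence = await client.get_room_presence(room_name)
        return presence.total_count > 0  # real-time truth
    except Exception:
        # Fallback: a session with no ended_at counts as active, even if stale.
        sessions = await client.get_room_sessions(room_name)
        return bool(sessions) and any(s.ended_at is None for s in sessions)
```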
@@ -849,7 +858,20 @@ async def process_meetings():
await meetings_controller.update_meeting(
meeting.id, is_active=False
)
logger_.info("Meeting is deactivated")
logger_.info("Meeting deactivated in database")

if meeting.platform == "daily":
try:
await client.delete_room(meeting.room_name)
logger_.info(
"Daily.co room deleted", room_name=meeting.room_name
)
except Exception:
logger_.warning(
"Failed to delete Daily.co room",
room_name=meeting.room_name,
exc_info=True,
)

processed_count += 1
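This hunk is the core of the split-room fix: once a meeting is deactivated, the Daily.co room is deleted so a quick rejoin cannot land in the stale room while a fresh meeting is created. A minimal sketch of the combined step (controller call simplified; `delete_room` is the idempotent client method from this diff):

```python
# Minimal sketch: deactivate in the DB, then delete the Daily.co room so a
# rejoining user cannot be split between the old and new rooms.
async def deactivate_meeting(meeting, client, meetings_controller, log) -> None:
    await meetings_controller.update_meeting(meeting.id, is_active=False)
    if meeting.platform == "daily":
        try:
            await client.delete_room(meeting.room_name)  # idempotent on 404
        except Exception:
            log.warning("Failed to delete Daily.co room", exc_info=True)
```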
@@ -1049,43 +1071,66 @@ async def reprocess_failed_daily_recordings():
)
continue

# Multitrack reprocessing always uses Hatchet (no Celery fallback)
if not transcript:
logger.warning(
"No transcript for Hatchet reprocessing, skipping",
recording_id=recording.id,
use_celery = room and room.use_celery
use_hatchet = not use_celery

if use_hatchet:
if not transcript:
logger.warning(
"No transcript for Hatchet reprocessing, skipping",
recording_id=recording.id,
)
continue

workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": recording.id,
"tracks": [
{"s3_key": k}
for k in filter_cam_audio_tracks(recording.track_keys)
],
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id if room else None,
},
additional_metadata={
"transcript_id": transcript.id,
"recording_id": recording.id,
"reprocess": True,
},
)
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)
continue

workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": recording.id,
"tracks": [
{"s3_key": k}
for k in filter_cam_audio_tracks(recording.track_keys)
],
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id if room else None,
},
additional_metadata={
"transcript_id": transcript.id,
"recording_id": recording.id,
"reprocess": True,
},
)
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)
logger.info(
"Queued Daily recording for Hatchet reprocessing",
recording_id=recording.id,
workflow_id=workflow_id,
room_name=meeting.room_name,
track_count=len(recording.track_keys),
)
else:
logger.info(
"Queueing Daily recording for Celery reprocessing",
recording_id=recording.id,
room_name=meeting.room_name,
track_count=len(recording.track_keys),
transcript_status=transcript.status if transcript else None,
)

logger.info(
"Queued Daily recording for Hatchet reprocessing",
recording_id=recording.id,
workflow_id=workflow_id,
room_name=meeting.room_name,
track_count=len(recording.track_keys),
)
# For reprocessing, pass actual recording time (though it's ignored - see _process_multitrack_recording_inner)
# Reprocessing uses recording.meeting_id directly instead of time-based matching
recording_start_ts = int(recording.recorded_at.timestamp())

process_multitrack_recording.delay(
bucket_name=bucket_name,
daily_room_name=meeting.room_name,
recording_id=recording.id,
track_keys=recording.track_keys,
recording_start_ts=recording_start_ts,
)

reprocessed_count += 1


@@ -48,15 +48,7 @@ class RedisPubSubManager:
if not self.redis_connection:
await self.connect()
message = json.dumps(message)
try:
await self.redis_connection.publish(room_id, message)
except RuntimeError:
# Celery workers run each task in a new event loop (asyncio.run),
# which closes the previous loop. Cached Redis connection is dead.
# Reconnect on the current loop and retry.
self.redis_connection = None
await self.connect()
await self.redis_connection.publish(room_id, message)
await self.redis_connection.publish(room_id, message)

async def subscribe(self, room_id: str) -> redis.Redis:
await self.pubsub.subscribe(room_id)
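For context on the publish variants in this hunk: a cached asyncio Redis connection can be bound to an event loop that Celery has already closed, which is what the reconnect-and-retry variant guards against. A minimal self-contained sketch of that pattern (connection handling simplified):

```python
# Minimal sketch of reconnect-on-dead-loop publish: retry once on a fresh
# connection when the cached one belongs to a closed event loop.
import json
import redis.asyncio as redis

class PubSub:
    def __init__(self, url: str) -> None:
        self.url = url
        self.conn: redis.Redis | None = None

    async def connect(self) -> None:
        self.conn = redis.from_url(self.url)

    async def publish(self, channel: str, payload: dict) -> None:
        if not self.conn:
            await self.connect()
        message = json.dumps(payload)
        try:
            await self.conn.publish(channel, message)
        except RuntimeError:  # stale connection from a closed event loop
            self.conn = None
            await self.connect()
            await self.conn.publish(channel, message)
```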
@@ -15,7 +15,8 @@ from reflector.settings import settings

async def setup_webhook(webhook_url: str):
"""
Create Daily.co webhook. Deletes any existing webhooks first, then creates the new one.
Create or update Daily.co webhook for this environment using dailyco_api module.
Uses DAILY_WEBHOOK_UUID to identify existing webhook.
"""
if not settings.DAILY_API_KEY:
print("Error: DAILY_API_KEY not set")
@@ -34,37 +35,79 @@ async def setup_webhook(webhook_url: str):
]

async with DailyApiClient(api_key=settings.DAILY_API_KEY) as client:
webhooks = await client.list_webhooks()
for wh in webhooks:
await client.delete_webhook(wh.uuid)
print(f"Deleted webhook {wh.uuid}")
webhook_uuid = settings.DAILY_WEBHOOK_UUID

request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)
webhook_uuid = result.uuid
if webhook_uuid:
print(f"Updating existing webhook {webhook_uuid}...")
try:
# Note: Daily.co doesn't support PATCH well, so we delete + recreate
await client.delete_webhook(webhook_uuid)
print(f"Deleted old webhook {webhook_uuid}")

print(f"✓ Created webhook {webhook_uuid} (state: {result.state})")
print(f" URL: {result.url}")
request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)

env_file = Path(__file__).parent.parent / ".env"
if env_file.exists():
lines = env_file.read_text().splitlines()
updated = False
for i, line in enumerate(lines):
if line.startswith("DAILY_WEBHOOK_UUID="):
lines[i] = f"DAILY_WEBHOOK_UUID={webhook_uuid}"
updated = True
break
if not updated:
lines.append(f"DAILY_WEBHOOK_UUID={webhook_uuid}")
env_file.write_text("\n".join(lines) + "\n")
print("✓ Saved DAILY_WEBHOOK_UUID to .env")
print(
f"✓ Created replacement webhook {result.uuid} (state: {result.state})"
)
print(f" URL: {result.url}")

return 0
webhook_uuid = result.uuid

except Exception as e:
if hasattr(e, "response") and e.response.status_code == 404:
print(f"Webhook {webhook_uuid} not found, creating new one...")
webhook_uuid = None # Fall through to creation
else:
print(f"Error updating webhook: {e}")
return 1

if not webhook_uuid:
print("Creating new webhook...")
request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)
webhook_uuid = result.uuid

print(f"✓ Created webhook {webhook_uuid} (state: {result.state})")
print(f" URL: {result.url}")
print()
print("=" * 60)
print("IMPORTANT: Add this to your environment variables:")
print("=" * 60)
print(f"DAILY_WEBHOOK_UUID: {webhook_uuid}")
print("=" * 60)
print()

# Try to write UUID to .env file
env_file = Path(__file__).parent.parent / ".env"
if env_file.exists():
lines = env_file.read_text().splitlines()
updated = False

# Update existing DAILY_WEBHOOK_UUID line or add it
for i, line in enumerate(lines):
if line.startswith("DAILY_WEBHOOK_UUID="):
lines[i] = f"DAILY_WEBHOOK_UUID={webhook_uuid}"
updated = True
break

if not updated:
lines.append(f"DAILY_WEBHOOK_UUID={webhook_uuid}")

env_file.write_text("\n".join(lines) + "\n")
print(f"✓ Also saved to local .env file")
else:
print(f"⚠ Local .env file not found - please add manually")

return 0


if __name__ == "__main__":
@@ -74,7 +117,11 @@ if __name__ == "__main__":
"Example: python recreate_daily_webhook.py https://example.com/v1/daily/webhook"
)
print()
print("Deletes all existing webhooks, then creates a new one.")
print("Behavior:")
print(" - If DAILY_WEBHOOK_UUID set: Deletes old webhook, creates new one")
print(
" - If DAILY_WEBHOOK_UUID empty: Creates new webhook, saves UUID to .env"
)
sys.exit(1)

sys.exit(asyncio.run(setup_webhook(sys.argv[1])))

server/test_daily_api_recordings.py (new file, 39 lines)
@@ -0,0 +1,39 @@
#!/usr/bin/env python3
"""Test script to fetch Daily.co recordings for a specific room and show raw API response."""

import asyncio
import json

from reflector.video_platforms.factory import create_platform_client


async def main():
    room_name = "daily-private-igor-20260110042117"

    print(f"\n=== Fetching recordings for room: {room_name} ===\n")

    async with create_platform_client("daily") as client:
        recordings = await client.list_recordings(room_name=room_name)

        print(f"Found {len(recordings)} recording objects from Daily.co API\n")

        for i, rec in enumerate(recordings, 1):
            print(f"--- Recording #{i} ---")
            print(f"ID: {rec.id}")
            print(f"Room: {rec.room_name}")
            print(f"Start TS: {rec.start_ts}")
            print(f"Status: {rec.status}")
            print(f"Duration: {rec.duration}")
            print(f"Type: {rec.type}")
            print(f"Tracks count: {len(rec.tracks)}")

            if rec.tracks:
                print(f"Tracks:")
                for j, track in enumerate(rec.tracks, 1):
                    print(f" Track {j}: {track.s3Key}")

            print(f"\nRaw JSON:\n{json.dumps(rec.model_dump(), indent=2, default=str)}\n")


if __name__ == "__main__":
    asyncio.run(main())
@@ -4,7 +4,7 @@ from unittest.mock import patch

import pytest

from reflector.schemas.platform import DAILY_PLATFORM, WHEREBY_PLATFORM
from reflector.schemas.platform import WHEREBY_PLATFORM


@pytest.fixture(scope="session", autouse=True)
@@ -14,7 +14,6 @@ def register_mock_platform():
from reflector.video_platforms.registry import register_platform

register_platform(WHEREBY_PLATFORM, MockPlatformClient)
register_platform(DAILY_PLATFORM, MockPlatformClient)
yield


server/tests/test_daily_presence_deactivation.py (new file, 286 lines)
@@ -0,0 +1,286 @@
"""Unit tests for Daily.co presence-based meeting deactivation logic.

Tests the fix for split room race condition by verifying:
1. Real-time presence checking via Daily.co API
2. Room deletion when meetings deactivate
"""

from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch

import pytest

from reflector.dailyco_api.responses import (
    RoomPresenceParticipant,
    RoomPresenceResponse,
)
from reflector.db.daily_participant_sessions import (
    DailyParticipantSession,
    daily_participant_sessions_controller,
)
from reflector.db.meetings import meetings_controller
from reflector.db.rooms import rooms_controller
from reflector.video_platforms.daily import DailyClient


@pytest.fixture
async def daily_room_and_meeting():
    """Create test room and meeting for Daily platform."""
    room = await rooms_controller.add(
        name="test-daily",
        user_id="test-user",
        platform="daily",
        zulip_auto_post=False,
        zulip_stream="",
        zulip_topic="",
        is_locked=False,
        room_mode="normal",
        recording_type="cloud",
        recording_trigger="automatic-2nd-participant",
        is_shared=False,
    )

    current_time = datetime.now(timezone.utc)
    end_time = current_time + timedelta(hours=2)

    meeting = await meetings_controller.create(
        id="test-meeting-id",
        room_name="test-daily-20260129120000",
        room_url="https://daily.co/test",
        host_room_url="https://daily.co/test",
        start_date=current_time,
        end_date=end_time,
        room=room,
    )

    return room, meeting


@pytest.mark.asyncio
async def test_daily_client_has_delete_room_method():
    """Verify DailyClient has delete_room method for cleanup."""
    # Create a mock DailyClient
    with patch("reflector.dailyco_api.client.DailyApiClient"):
        from reflector.video_platforms.models import VideoPlatformConfig

        config = VideoPlatformConfig(api_key="test-key", webhook_secret="test-secret")
        client = DailyClient(config)

        # Verify delete_room method exists
        assert hasattr(client, "delete_room")
        assert callable(getattr(client, "delete_room"))


@pytest.mark.asyncio
async def test_get_room_presence_returns_realtime_data(daily_room_and_meeting):
    """Test that get_room_presence returns real-time participant data."""
    room, meeting = daily_room_and_meeting

    # Mock Daily.co API response
    mock_presence = RoomPresenceResponse(
        total_count=2,
        data=[
            RoomPresenceParticipant(
                room=meeting.room_name,
                id="session-1",
                userId="user-1",
                userName="User One",
                joinTime="2026-01-29T12:00:00.000Z",
                duration=120,
            ),
            RoomPresenceParticipant(
                room=meeting.room_name,
                id="session-2",
                userId="user-2",
                userName="User Two",
                joinTime="2026-01-29T12:05:00.000Z",
                duration=60,
            ),
        ],
    )

    with patch("reflector.dailyco_api.client.DailyApiClient") as mock_api:
        from reflector.video_platforms.models import VideoPlatformConfig

        config = VideoPlatformConfig(api_key="test-key", webhook_secret="test-secret")
        client = DailyClient(config)

        # Mock the API client method
        client._api_client.get_room_presence = AsyncMock(return_value=mock_presence)

        # Call get_room_presence
        result = await client.get_room_presence(meeting.room_name)

        # Verify it calls Daily.co API
        client._api_client.get_room_presence.assert_called_once_with(meeting.room_name)

        # Verify result contains real-time data
        assert result.total_count == 2
        assert len(result.data) == 2
        assert result.data[0].id == "session-1"
        assert result.data[1].id == "session-2"


@pytest.mark.asyncio
async def test_presence_shows_active_even_when_db_stale(daily_room_and_meeting):
    """Test that Daily.co presence API is source of truth, not stale DB sessions."""
    room, meeting = daily_room_and_meeting
    current_time = datetime.now(timezone.utc)

    # Create stale DB session (left_at=NULL but user actually left)
    session_id = f"{meeting.id}:stale-user:{int((current_time - timedelta(minutes=5)).timestamp() * 1000)}"
    await daily_participant_sessions_controller.upsert_joined(
        DailyParticipantSession(
            id=session_id,
            meeting_id=meeting.id,
            room_id=room.id,
            session_id="stale-daily-session",
            user_name="Stale User",
            user_id="stale-user",
            joined_at=current_time - timedelta(minutes=5),
            left_at=None, # Stale - shows active but user left
        )
    )

    # Verify DB shows active session
    db_sessions = await daily_participant_sessions_controller.get_active_by_meeting(
        meeting.id
    )
    assert len(db_sessions) == 1

    # But Daily.co API shows room is empty
    mock_presence = RoomPresenceResponse(total_count=0, data=[])

    with patch("reflector.dailyco_api.client.DailyApiClient"):
        from reflector.video_platforms.models import VideoPlatformConfig

        config = VideoPlatformConfig(api_key="test-key", webhook_secret="test-secret")
        client = DailyClient(config)
        client._api_client.get_room_presence = AsyncMock(return_value=mock_presence)

        # Get real-time presence
        presence = await client.get_room_presence(meeting.room_name)

        # Real-time API shows no participants (truth)
        assert presence.total_count == 0
        assert len(presence.data) == 0

        # DB shows 1 participant (stale)
        assert len(db_sessions) == 1

        # Implementation should trust presence API, not DB


@pytest.mark.asyncio
async def test_meeting_deactivation_logic_with_presence_empty():
    """Test the core deactivation decision logic when presence shows room empty."""
    # This tests the logic that will be in process_meetings

    # Simulate: DB shows stale active session
    has_active_db_sessions = True # DB is stale

    # Simulate: Daily.co presence API shows room empty
    presence_count = 0 # Real-time truth

    # Simulate: Meeting has been used before
    has_had_sessions = True

    # Decision logic (what process_meetings should do):
    # - If presence API available: trust it
    # - If presence shows empty AND has_had_sessions: deactivate

    if presence_count == 0 and has_had_sessions:
        should_deactivate = True
    else:
        should_deactivate = False

    assert should_deactivate is True # Should deactivate despite stale DB


@pytest.mark.asyncio
async def test_meeting_deactivation_logic_with_presence_active():
    """Test that meetings stay active when presence shows participants."""
    # Simulate: DB shows no sessions (not yet updated)
    has_active_db_sessions = False # DB hasn't caught up

    # Simulate: Daily.co presence API shows active participant
presence_count = 1 # Real-time truth
|
||||
|
||||
# Decision logic: presence shows activity, keep meeting active
|
||||
if presence_count > 0:
|
||||
should_deactivate = False
|
||||
else:
|
||||
should_deactivate = True
|
||||
|
||||
assert should_deactivate is False # Should stay active
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delete_room_called_on_deactivation(daily_room_and_meeting):
|
||||
"""Test that Daily.co room is deleted when meeting deactivates."""
|
||||
room, meeting = daily_room_and_meeting
|
||||
|
||||
with patch("reflector.dailyco_api.client.DailyApiClient"):
|
||||
from reflector.video_platforms.models import VideoPlatformConfig
|
||||
|
||||
config = VideoPlatformConfig(api_key="test-key", webhook_secret="test-secret")
|
||||
client = DailyClient(config)
|
||||
|
||||
# Mock delete_room API call
|
||||
client._api_client.delete_room = AsyncMock()
|
||||
|
||||
# Simulate deactivation - should delete room
|
||||
await client._api_client.delete_room(meeting.room_name)
|
||||
|
||||
# Verify delete was called
|
||||
client._api_client.delete_room.assert_called_once_with(meeting.room_name)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_delete_room_idempotent_on_404():
|
||||
"""Test that room deletion is idempotent (succeeds even if room doesn't exist)."""
|
||||
from reflector.dailyco_api.client import DailyApiClient
|
||||
|
||||
# Create real client to test delete_room logic
|
||||
client = DailyApiClient(api_key="test-key")
|
||||
|
||||
# Mock the HTTP client
|
||||
mock_http_client = AsyncMock()
|
||||
mock_response = AsyncMock()
|
||||
mock_response.status_code = 404 # Room not found
|
||||
mock_http_client.delete = AsyncMock(return_value=mock_response)
|
||||
|
||||
# Mock _get_client to return our mock
|
||||
async def mock_get_client():
|
||||
return mock_http_client
|
||||
|
||||
client._get_client = mock_get_client
|
||||
|
||||
# delete_room should succeed even on 404 (idempotent)
|
||||
await client.delete_room("nonexistent-room")
|
||||
|
||||
# Verify delete was attempted
|
||||
mock_http_client.delete.assert_called_once()
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_api_failure_fallback_to_db_sessions():
|
||||
"""Test that system falls back to DB sessions if Daily.co API fails."""
|
||||
# Simulate: Daily.co API throws exception
|
||||
api_exception = Exception("API unavailable")
|
||||
|
||||
# Simulate: DB shows active session
|
||||
has_active_db_sessions = True
|
||||
|
||||
# Decision logic with fallback:
|
||||
try:
|
||||
presence_count = None
|
||||
raise api_exception # Simulating API failure
|
||||
except Exception:
|
||||
# Fallback: use DB sessions (conservative - don't deactivate if unsure)
|
||||
if has_active_db_sessions:
|
||||
should_deactivate = False
|
||||
else:
|
||||
should_deactivate = True
|
||||
|
||||
assert should_deactivate is False # Conservative: keep active on API failure
|
||||
258
server/tests/test_daily_recording_requests.py
Normal file
@@ -0,0 +1,258 @@
from datetime import datetime, timezone
from uuid import UUID

import pytest

from reflector.db.daily_recording_requests import (
    DailyRecordingRequest,
    daily_recording_requests_controller,
)
from reflector.db.meetings import Meeting, meetings_controller
from reflector.db.recordings import Recording, recordings_controller
from reflector.db.rooms import Room, rooms_controller


@pytest.mark.asyncio
async def test_create_request():
    """Test creating a recording request."""
    # Create meeting first
    room = Room(id="test-room", name="Test Room", slug="test-room", user_id="test-user")
    await rooms_controller.create(room)

    meeting = Meeting(
        id="meeting-123",
        room_name="test-room",
        start_date=datetime.now(timezone.utc),
        end_date=None,
        recording_type="cloud",
    )
    await meetings_controller.create(meeting)

    request = DailyRecordingRequest(
        recording_id="rec-1",
        meeting_id="meeting-123",
        instance_id=UUID("a1b2c3d4-e5f6-7890-abcd-ef1234567890"),
        type="cloud",
        requested_at=datetime.now(timezone.utc),
    )

    await daily_recording_requests_controller.create(request)

    result = await daily_recording_requests_controller.find_by_recording_id("rec-1")
    assert result is not None
    assert result[0] == "meeting-123"
    assert result[1] == "cloud"


@pytest.mark.asyncio
async def test_multiple_recordings_same_meeting():
    """Test stop/restart creates multiple request rows."""
    # Create room and meeting
    room = Room(
        id="test-room-2", name="Test Room 2", slug="test-room-2", user_id="test-user"
    )
    await rooms_controller.create(room)

    meeting_id = "meeting-456"
    meeting = Meeting(
        id=meeting_id,
        room_name="test-room-2",
        start_date=datetime.now(timezone.utc),
        end_date=None,
        recording_type="cloud",
    )
    await meetings_controller.create(meeting)

    instance_id = UUID("b1c2d3e4-f5a6-7890-abcd-ef1234567890")

    # First recording
    await daily_recording_requests_controller.create(
        DailyRecordingRequest(
            recording_id="rec-1",
            meeting_id=meeting_id,
            instance_id=instance_id,
            type="cloud",
            requested_at=datetime.now(timezone.utc),
        )
    )

    # Stop, then restart (new recording_id, same instance_id)
    await daily_recording_requests_controller.create(
        DailyRecordingRequest(
            recording_id="rec-2",  # DIFFERENT
            meeting_id=meeting_id,
            instance_id=instance_id,  # SAME
            type="cloud",
            requested_at=datetime.now(timezone.utc),
        )
    )

    # Both exist
    requests = await daily_recording_requests_controller.get_by_meeting_id(meeting_id)
    assert len(requests) == 2
    assert {r.recording_id for r in requests} == {"rec-1", "rec-2"}


@pytest.mark.asyncio
async def test_deduplication_via_database():
    """Test concurrent pollers use database for deduplication."""
    # Create room and meeting
    room = Room(
        id="test-room-3", name="Test Room 3", slug="test-room-3", user_id="test-user"
    )
    await rooms_controller.create(room)

    meeting = Meeting(
        id="meeting-789",
        room_name="test-room-3",
        start_date=datetime.now(timezone.utc),
        end_date=None,
        recording_type="raw-tracks",
    )
    await meetings_controller.create(meeting)

    recording_id = "rec-123"

    # Poller 1
    created1 = await recordings_controller.try_create_with_meeting(
        Recording(
            id=recording_id,
            bucket_name="test-bucket",
            object_key="test-key",
            recorded_at=datetime.now(timezone.utc),
            meeting_id="meeting-789",
            status="pending",
            track_keys=["track1.webm", "track2.webm"],
        )
    )
    assert created1 is True  # First wins

    # Poller 2 (concurrent)
    created2 = await recordings_controller.try_create_with_meeting(
        Recording(
            id=recording_id,
            bucket_name="test-bucket",
            object_key="test-key",
            recorded_at=datetime.now(timezone.utc),
            meeting_id="meeting-789",
            status="pending",
            track_keys=["track1.webm", "track2.webm"],
        )
    )
    assert created2 is False  # Conflict, skip


@pytest.mark.asyncio
async def test_orphan_logged_once():
    """Test orphan marked once, skipped on re-poll."""
    # First poll
    created1 = await recordings_controller.create_orphan(
        Recording(
            id="orphan-123",
            bucket_name="test-bucket",
            object_key="orphan-key",
            recorded_at=datetime.now(timezone.utc),
            meeting_id=None,
            status="orphan",
            track_keys=None,
        )
    )
    assert created1 is True

    # Second poll (same orphan discovered again)
    created2 = await recordings_controller.create_orphan(
        Recording(
            id="orphan-123",
            bucket_name="test-bucket",
            object_key="orphan-key",
            recorded_at=datetime.now(timezone.utc),
            meeting_id=None,
            status="orphan",
            track_keys=None,
        )
    )
    assert created2 is False  # Already exists

    # Verify it exists
    existing = await recordings_controller.get_by_id("orphan-123")
    assert existing is not None
    assert existing.status == "orphan"


@pytest.mark.asyncio
async def test_orphan_constraints():
    """Test orphan invariants are enforced."""
    # Can't create orphan with meeting_id
    with pytest.raises(AssertionError, match="meeting_id must be NULL"):
        await recordings_controller.create_orphan(
            Recording(
                id="bad-orphan-1",
                bucket_name="test",
                object_key="test",
                recorded_at=datetime.now(timezone.utc),
                meeting_id="meeting-123",  # Should be None
                status="orphan",
                track_keys=None,
            )
        )

    # Can't create orphan with wrong status
    with pytest.raises(AssertionError, match="status must be 'orphan'"):
        await recordings_controller.create_orphan(
            Recording(
                id="bad-orphan-2",
                bucket_name="test",
                object_key="test",
                recorded_at=datetime.now(timezone.utc),
                meeting_id=None,
                status="pending",  # Should be "orphan"
                track_keys=None,
            )
        )


@pytest.mark.asyncio
async def test_try_create_with_meeting_constraints():
    """Test try_create_with_meeting enforces constraints."""
    # Create room and meeting
    room = Room(
        id="test-room-4", name="Test Room 4", slug="test-room-4", user_id="test-user"
    )
    await rooms_controller.create(room)

    meeting = Meeting(
        id="meeting-999",
        room_name="test-room-4",
        start_date=datetime.now(timezone.utc),
        end_date=None,
        recording_type="cloud",
    )
    await meetings_controller.create(meeting)

    # Can't create with orphan status
    with pytest.raises(AssertionError, match="use create_orphan"):
        await recordings_controller.try_create_with_meeting(
            Recording(
                id="bad-rec-1",
                bucket_name="test",
                object_key="test",
                recorded_at=datetime.now(timezone.utc),
                meeting_id="meeting-999",
                status="orphan",  # Should not be orphan
                track_keys=None,
            )
        )

    # Can't create without meeting_id
    with pytest.raises(AssertionError, match="meeting_id required"):
        await recordings_controller.try_create_with_meeting(
            Recording(
                id="bad-rec-2",
                bucket_name="test",
                object_key="test",
                recorded_at=datetime.now(timezone.utc),
                meeting_id=None,  # Should have meeting_id
                status="pending",
                track_keys=None,
            )
        )
@@ -255,7 +255,7 @@ async def test_validation_locked_transcript():
 @pytest.mark.usefixtures("setup_database")
 @pytest.mark.asyncio
 async def test_validation_idle_transcript():
-    """Test that validation rejects idle transcripts without recording (file upload not ready)."""
+    """Test that validation rejects idle transcripts (not ready)."""
     from reflector.services.transcript_process import (
         ValidationNotReady,
         validate_transcript_for_processing,
@@ -274,34 +274,6 @@ async def test_validation_idle_transcript():
     assert "not ready" in result.detail.lower()


-@pytest.mark.usefixtures("setup_database")
-@pytest.mark.asyncio
-async def test_validation_idle_transcript_with_recording_allowed():
-    """Test that validation allows idle transcripts with recording_id (multitrack ready/retry)."""
-    from reflector.services.transcript_process import (
-        ValidationOk,
-        validate_transcript_for_processing,
-    )
-
-    mock_transcript = Transcript(
-        id="test-transcript-id",
-        name="Test",
-        status="idle",
-        source_kind="room",
-        recording_id="test-recording-id",
-    )
-
-    with patch(
-        "reflector.services.transcript_process.task_is_scheduled_or_active"
-    ) as mock_celery_check:
-        mock_celery_check.return_value = False
-
-        result = await validate_transcript_for_processing(mock_transcript)
-
-        assert isinstance(result, ValidationOk)
-        assert result.recording_id == "test-recording-id"
-
-
 @pytest.mark.usefixtures("setup_database")
 @pytest.mark.asyncio
 async def test_prepare_multitrack_config():
300
server/tests/test_recording_request_flow.py
Normal file
@@ -0,0 +1,300 @@
"""
|
||||
Integration tests for recording request flow.
|
||||
|
||||
These tests verify the end-to-end flow of:
|
||||
1. Starting a recording (creates request)
|
||||
2. Webhook/polling discovering recording (matches via request)
|
||||
3. Recording processing (uses existing meeting_id)
|
||||
"""
|
||||
|
||||
from datetime import datetime, timezone
|
||||
from uuid import UUID, uuid4
|
||||
|
||||
import pytest
|
||||
|
||||
from reflector.db.daily_recording_requests import (
|
||||
DailyRecordingRequest,
|
||||
daily_recording_requests_controller,
|
||||
)
|
||||
from reflector.db.meetings import Meeting, meetings_controller
|
||||
from reflector.db.recordings import Recording, recordings_controller
|
||||
from reflector.db.rooms import Room, rooms_controller
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_recording_request_flow_cloud(client):
|
||||
"""Test full cloud recording flow: start -> webhook -> match"""
|
||||
# Create room and meeting
|
||||
room = Room(id="test-room", name="Test Room", slug="test-room", user_id="test-user")
|
||||
await rooms_controller.create(room)
|
||||
|
||||
meeting_id = f"meeting-{uuid4()}"
|
||||
meeting = Meeting(
|
||||
id=meeting_id,
|
||||
room_name="test-room",
|
||||
start_date=datetime.now(timezone.utc),
|
||||
end_date=None,
|
||||
recording_type="cloud",
|
||||
)
|
||||
await meetings_controller.create(meeting)
|
||||
|
||||
# Simulate recording start (what endpoint does)
|
||||
recording_id = "rec-cloud-123"
|
||||
instance_id = UUID("a1b2c3d4-e5f6-7890-abcd-ef1234567890")
|
||||
|
||||
request = DailyRecordingRequest(
|
||||
recording_id=recording_id,
|
||||
meeting_id=meeting_id,
|
||||
instance_id=instance_id,
|
||||
type="cloud",
|
||||
requested_at=datetime.now(timezone.utc),
|
||||
)
|
||||
await daily_recording_requests_controller.create(request)
|
||||
|
||||
# Verify request exists
|
||||
match = await daily_recording_requests_controller.find_by_recording_id(recording_id)
|
||||
assert match is not None
|
||||
assert match[0] == meeting_id
|
||||
assert match[1] == "cloud"
|
||||
|
||||
# Simulate webhook/polling storing cloud recording
|
||||
success = await meetings_controller.set_cloud_recording_if_missing(
|
||||
meeting_id=meeting_id,
|
||||
s3_key="s3://bucket/recording.mp4",
|
||||
duration=120,
|
||||
)
|
||||
assert success is True
|
||||
|
||||
# Verify meeting updated
|
||||
updated_meeting = await meetings_controller.get_by_id(meeting_id)
|
||||
assert updated_meeting.daily_composed_video_s3_key == "s3://bucket/recording.mp4"
|
||||
assert updated_meeting.daily_composed_video_duration == 120
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_recording_request_flow_raw_tracks(client):
|
||||
"""Test full raw-tracks recording flow: start -> webhook/polling -> process"""
|
||||
# Create room and meeting
|
||||
room = Room(
|
||||
id="test-room-2",
|
||||
name="Test Room 2",
|
||||
slug="test-room-2",
|
||||
user_id="test-user",
|
||||
)
|
||||
await rooms_controller.create(room)
|
||||
|
||||
meeting_id = f"meeting-{uuid4()}"
|
||||
meeting = Meeting(
|
||||
id=meeting_id,
|
||||
room_name="test-room-2",
|
||||
start_date=datetime.now(timezone.utc),
|
||||
end_date=None,
|
||||
recording_type="raw-tracks",
|
||||
)
|
||||
await meetings_controller.create(meeting)
|
||||
|
||||
# Simulate recording start
|
||||
recording_id = "rec-raw-456"
|
||||
instance_id = UUID("b1c2d3e4-f5a6-7890-abcd-ef1234567890")
|
||||
|
||||
request = DailyRecordingRequest(
|
||||
recording_id=recording_id,
|
||||
meeting_id=meeting_id,
|
||||
instance_id=instance_id,
|
||||
type="raw-tracks",
|
||||
requested_at=datetime.now(timezone.utc),
|
||||
)
|
||||
await daily_recording_requests_controller.create(request)
|
||||
|
||||
# Simulate webhook/polling discovering recording
|
||||
match = await daily_recording_requests_controller.find_by_recording_id(recording_id)
|
||||
assert match is not None
|
||||
found_meeting_id, recording_type = match
|
||||
assert found_meeting_id == meeting_id
|
||||
assert recording_type == "raw-tracks"
|
||||
|
||||
# Create recording (what webhook/polling does)
|
||||
created = await recordings_controller.try_create_with_meeting(
|
||||
Recording(
|
||||
id=recording_id,
|
||||
bucket_name="test-bucket",
|
||||
object_key="recordings/20260120/",
|
||||
recorded_at=datetime.now(timezone.utc),
|
||||
track_keys=["track1.webm", "track2.webm"],
|
||||
meeting_id=meeting_id,
|
||||
status="pending",
|
||||
)
|
||||
)
|
||||
assert created is True
|
||||
|
||||
# Verify recording exists with meeting_id
|
||||
recording = await recordings_controller.get_by_id(recording_id)
|
||||
assert recording is not None
|
||||
assert recording.meeting_id == meeting_id
|
||||
assert recording.status == "pending"
|
||||
assert len(recording.track_keys) == 2
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stop_restart_creates_multiple_requests(client):
|
||||
"""Test stop/restart creates multiple request rows with same instance_id"""
|
||||
# Create room and meeting
|
||||
room = Room(
|
||||
id="test-room-3",
|
||||
name="Test Room 3",
|
||||
slug="test-room-3",
|
||||
user_id="test-user",
|
||||
)
|
||||
await rooms_controller.create(room)
|
||||
|
||||
meeting_id = f"meeting-{uuid4()}"
|
||||
meeting = Meeting(
|
||||
id=meeting_id,
|
||||
room_name="test-room-3",
|
||||
start_date=datetime.now(timezone.utc),
|
||||
end_date=None,
|
||||
recording_type="cloud",
|
||||
)
|
||||
await meetings_controller.create(meeting)
|
||||
|
||||
instance_id = UUID("c1d2e3f4-a5b6-7890-abcd-ef1234567890")
|
||||
|
||||
# First recording
|
||||
await daily_recording_requests_controller.create(
|
||||
DailyRecordingRequest(
|
||||
recording_id="rec-first",
|
||||
meeting_id=meeting_id,
|
||||
instance_id=instance_id,
|
||||
type="cloud",
|
||||
requested_at=datetime.now(timezone.utc),
|
||||
)
|
||||
)
|
||||
|
||||
# Stop, then restart (new recording_id, same instance_id)
|
||||
await daily_recording_requests_controller.create(
|
||||
DailyRecordingRequest(
|
||||
recording_id="rec-second", # DIFFERENT
|
||||
meeting_id=meeting_id,
|
||||
instance_id=instance_id, # SAME
|
||||
type="cloud",
|
||||
requested_at=datetime.now(timezone.utc),
|
||||
)
|
||||
)
|
||||
|
||||
# Both exist
|
||||
requests = await daily_recording_requests_controller.get_by_meeting_id(meeting_id)
|
||||
assert len(requests) == 2
|
||||
assert {r.recording_id for r in requests} == {"rec-first", "rec-second"}
|
||||
assert all(r.instance_id == instance_id for r in requests)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_orphan_recording_no_request(client):
|
||||
"""Test orphan recording (no request found)"""
|
||||
# Simulate polling discovering recording with no request
|
||||
recording_id = "rec-orphan"
|
||||
|
||||
match = await daily_recording_requests_controller.find_by_recording_id(recording_id)
|
||||
assert match is None # No request
|
||||
|
||||
# Mark as orphan
|
||||
created = await recordings_controller.create_orphan(
|
||||
Recording(
|
||||
id=recording_id,
|
||||
bucket_name="test-bucket",
|
||||
object_key="orphan-key",
|
||||
recorded_at=datetime.now(timezone.utc),
|
||||
meeting_id=None,
|
||||
status="orphan",
|
||||
track_keys=None,
|
||||
)
|
||||
)
|
||||
assert created is True
|
||||
|
||||
# Verify orphan exists
|
||||
recording = await recordings_controller.get_by_id(recording_id)
|
||||
assert recording is not None
|
||||
assert recording.status == "orphan"
|
||||
assert recording.meeting_id is None
|
||||
|
||||
# Second poll - already exists
|
||||
created_again = await recordings_controller.create_orphan(
|
||||
Recording(
|
||||
id=recording_id,
|
||||
bucket_name="test-bucket",
|
||||
object_key="orphan-key",
|
||||
recorded_at=datetime.now(timezone.utc),
|
||||
meeting_id=None,
|
||||
status="orphan",
|
||||
track_keys=None,
|
||||
)
|
||||
)
|
||||
assert created_again is False # Already exists
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_concurrent_polling_deduplication(client):
|
||||
"""Test concurrent pollers only queue once"""
|
||||
# Create room and meeting
|
||||
room = Room(
|
||||
id="test-room-4",
|
||||
name="Test Room 4",
|
||||
slug="test-room-4",
|
||||
user_id="test-user",
|
||||
)
|
||||
await rooms_controller.create(room)
|
||||
|
||||
meeting_id = f"meeting-{uuid4()}"
|
||||
meeting = Meeting(
|
||||
id=meeting_id,
|
||||
room_name="test-room-4",
|
||||
start_date=datetime.now(timezone.utc),
|
||||
end_date=None,
|
||||
recording_type="raw-tracks",
|
||||
)
|
||||
await meetings_controller.create(meeting)
|
||||
|
||||
# Create request
|
||||
recording_id = "rec-concurrent"
|
||||
await daily_recording_requests_controller.create(
|
||||
DailyRecordingRequest(
|
||||
recording_id=recording_id,
|
||||
meeting_id=meeting_id,
|
||||
instance_id=UUID("d1e2f3a4-b5c6-7890-abcd-ef1234567890"),
|
||||
type="raw-tracks",
|
||||
requested_at=datetime.now(timezone.utc),
|
||||
)
|
||||
)
|
||||
|
||||
# Poller 1
|
||||
created1 = await recordings_controller.try_create_with_meeting(
|
||||
Recording(
|
||||
id=recording_id,
|
||||
bucket_name="test-bucket",
|
||||
object_key="test-key",
|
||||
recorded_at=datetime.now(timezone.utc),
|
||||
meeting_id=meeting_id,
|
||||
status="pending",
|
||||
track_keys=["track1.webm"],
|
||||
)
|
||||
)
|
||||
assert created1 is True # First wins
|
||||
|
||||
# Poller 2 (concurrent)
|
||||
created2 = await recordings_controller.try_create_with_meeting(
|
||||
Recording(
|
||||
id=recording_id,
|
||||
bucket_name="test-bucket",
|
||||
object_key="test-key",
|
||||
recorded_at=datetime.now(timezone.utc),
|
||||
meeting_id=meeting_id,
|
||||
status="pending",
|
||||
track_keys=["track1.webm"],
|
||||
)
|
||||
)
|
||||
assert created2 is False # Conflict, skip
|
||||
|
||||
# Only one recording exists
|
||||
recording = await recordings_controller.get_by_id(recording_id)
|
||||
assert recording is not None
|
||||
assert recording.meeting_id == meeting_id
|
||||
@@ -319,51 +319,3 @@ def test_aws_storage_constructor_rejects_mixed_auth():
             aws_secret_access_key="test-secret",
             aws_role_arn="arn:aws:iam::123456789012:role/test-role",
         )
-
-
-@pytest.mark.asyncio
-async def test_aws_storage_custom_endpoint_url():
-    """Test that custom endpoint_url configures path-style addressing and passes endpoint to client."""
-    storage = AwsStorage(
-        aws_bucket_name="reflector-media",
-        aws_region="garage",
-        aws_access_key_id="GKtest",
-        aws_secret_access_key="secret",
-        aws_endpoint_url="http://garage:3900",
-    )
-    assert storage._endpoint_url == "http://garage:3900"
-    assert storage.boto_config.s3["addressing_style"] == "path"
-    assert storage.base_url == "http://garage:3900/reflector-media/"
-    # retries config preserved (merge, not replace)
-    assert storage.boto_config.retries["max_attempts"] == 3
-
-    mock_client = AsyncMock()
-    mock_client.put_object = AsyncMock()
-    mock_client.__aenter__ = AsyncMock(return_value=mock_client)
-    mock_client.__aexit__ = AsyncMock(return_value=None)
-    mock_client.generate_presigned_url = AsyncMock(
-        return_value="http://garage:3900/reflector-media/test.txt"
-    )
-
-    with patch.object(
-        storage.session, "client", return_value=mock_client
-    ) as mock_session_client:
-        await storage.put_file("test.txt", b"data")
-        mock_session_client.assert_called_with(
-            "s3", config=storage.boto_config, endpoint_url="http://garage:3900"
-        )
-
-
-@pytest.mark.asyncio
-async def test_aws_storage_none_endpoint_url():
-    """Test that None endpoint preserves current AWS behavior."""
-    storage = AwsStorage(
-        aws_bucket_name="reflector-bucket",
-        aws_region="us-east-1",
-        aws_access_key_id="AKIAtest",
-        aws_secret_access_key="secret",
-    )
-    assert storage._endpoint_url is None
-    assert storage.base_url == "https://reflector-bucket.s3.amazonaws.com/"
-    # No s3 addressing_style override — boto_config should only have retries
-    assert not hasattr(storage.boto_config, "s3") or storage.boto_config.s3 is None
@@ -1,374 +0,0 @@
"""
Integration tests for time-based meeting-to-recording matching.

Tests the critical path for matching Daily.co recordings to meetings when
API doesn't return instanceId.
"""

from datetime import datetime, timedelta, timezone

import pytest

from reflector.db.meetings import meetings_controller
from reflector.db.rooms import rooms_controller


@pytest.fixture
async def test_room():
    """Create a test room for meetings."""
    room = await rooms_controller.add(
        name="test-room-time",
        user_id="test-user-id",
        zulip_auto_post=False,
        zulip_stream="",
        zulip_topic="",
        is_locked=False,
        room_mode="normal",
        recording_type="cloud",
        recording_trigger="automatic",
        is_shared=False,
        platform="daily",
    )
    return room


@pytest.fixture
def base_time():
    """Fixed timestamp for deterministic tests."""
    return datetime(2026, 1, 14, 9, 0, 0, tzinfo=timezone.utc)


class TestTimeBasedMatching:
    """Test get_by_room_name_and_time() matching logic."""

    async def test_exact_time_match(self, test_room, base_time):
        """Recording timestamp exactly matches meeting start_date."""
        meeting = await meetings_controller.create(
            id="meeting-exact",
            room_name="daily-test-20260114090000",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-20260114090000",
            recording_start=base_time,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == meeting.id

    async def test_recording_slightly_after_meeting_start(self, test_room, base_time):
        """Recording started 1 minute after meeting (participants joined late)."""
        meeting = await meetings_controller.create(
            id="meeting-late",
            room_name="daily-test-20260114090100",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        recording_start = base_time + timedelta(minutes=1)

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-20260114090100",
            recording_start=recording_start,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == meeting.id

    async def test_duplicate_room_names_picks_closest(self, test_room, base_time):
        """
        Two meetings with same room_name (duplicate/race condition).
        Should pick closest by timestamp.
        """
        meeting1 = await meetings_controller.create(
            id="meeting-1-first",
            room_name="daily-duplicate-room",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        meeting2 = await meetings_controller.create(
            id="meeting-2-second",
            room_name="daily-duplicate-room",  # Same room_name!
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time + timedelta(seconds=0.99),  # 0.99s later
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        # Recording started 0.5s after meeting1
        # Distance: meeting1 = 0.5s, meeting2 = 0.49s → meeting2 is closer
        recording_start = base_time + timedelta(seconds=0.5)

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-duplicate-room",
            recording_start=recording_start,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == meeting2.id  # meeting2 is closer (0.49s vs 0.5s)

    async def test_outside_time_window_returns_none(self, test_room, base_time):
        """Recording outside 1-week window returns None."""
        await meetings_controller.create(
            id="meeting-old",
            room_name="daily-test-old",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        # Recording 8 days later (outside 7-day window)
        recording_start = base_time + timedelta(days=8)

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-old",
            recording_start=recording_start,
            time_window_hours=168,
        )

        assert result is None

    async def test_tie_breaker_deterministic(self, test_room, base_time):
        """When time delta identical, tie-breaker by meeting.id is deterministic."""
        meeting_z = await meetings_controller.create(
            id="zzz-last-uuid",
            room_name="daily-test-tie",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        meeting_a = await meetings_controller.create(
            id="aaa-first-uuid",
            room_name="daily-test-tie",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,  # Exact same start_date
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-tie",
            recording_start=base_time,
            time_window_hours=168,
        )

        assert result is not None
        # Tie-breaker: lexicographically first UUID
        assert result.id == "aaa-first-uuid"

    async def test_timezone_naive_datetime_raises(self, test_room, base_time):
        """Timezone-naive datetime raises ValueError."""
        await meetings_controller.create(
            id="meeting-tz",
            room_name="daily-test-tz",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        # Naive datetime (no timezone)
        naive_dt = datetime(2026, 1, 14, 9, 0, 0)

        with pytest.raises(ValueError, match="timezone-aware"):
            await meetings_controller.get_by_room_name_and_time(
                room_name="daily-test-tz",
                recording_start=naive_dt,
                time_window_hours=168,
            )

    async def test_one_week_boundary_after_included(self, test_room, base_time):
        """Meeting 1-week AFTER recording is included (window_end boundary)."""
        meeting_time = base_time + timedelta(hours=168)

        await meetings_controller.create(
            id="meeting-boundary-after",
            room_name="daily-test-boundary-after",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=meeting_time,
            end_date=meeting_time + timedelta(hours=1),
            room=test_room,
        )

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-boundary-after",
            recording_start=base_time,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == "meeting-boundary-after"

    async def test_one_week_boundary_before_included(self, test_room, base_time):
        """Meeting 1-week BEFORE recording is included (window_start boundary)."""
        meeting_time = base_time - timedelta(hours=168)

        await meetings_controller.create(
            id="meeting-boundary-before",
            room_name="daily-test-boundary-before",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=meeting_time,
            end_date=meeting_time + timedelta(hours=1),
            room=test_room,
        )

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-boundary-before",
            recording_start=base_time,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == "meeting-boundary-before"

    async def test_recording_before_meeting_start(self, test_room, base_time):
        """Recording started before meeting (clock skew or early join)."""
        await meetings_controller.create(
            id="meeting-early",
            room_name="daily-test-early",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        recording_start = base_time - timedelta(minutes=2)

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-early",
            recording_start=recording_start,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == "meeting-early"

    async def test_mixed_inside_outside_window(self, test_room, base_time):
        """Multiple meetings, only one inside window - returns the inside one."""
        await meetings_controller.create(
            id="meeting-old",
            room_name="daily-test-mixed",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time - timedelta(days=10),
            end_date=base_time - timedelta(days=10, hours=-1),
            room=test_room,
        )

        await meetings_controller.create(
            id="meeting-inside",
            room_name="daily-test-mixed",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time - timedelta(days=2),
            end_date=base_time - timedelta(days=2, hours=-1),
            room=test_room,
        )

        await meetings_controller.create(
            id="meeting-future",
            room_name="daily-test-mixed",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time + timedelta(days=10),
            end_date=base_time + timedelta(days=10, hours=1),
            room=test_room,
        )

        result = await meetings_controller.get_by_room_name_and_time(
            room_name="daily-test-mixed",
            recording_start=base_time,
            time_window_hours=168,
        )

        assert result is not None
        assert result.id == "meeting-inside"


class TestAtomicCloudRecordingUpdate:
    """Test atomic update prevents race conditions."""

    async def test_first_update_succeeds(self, test_room, base_time):
        """First call to set_cloud_recording_if_missing succeeds."""
        meeting = await meetings_controller.create(
            id="meeting-atomic-1",
            room_name="daily-test-atomic",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        success = await meetings_controller.set_cloud_recording_if_missing(
            meeting_id=meeting.id,
            s3_key="first-s3-key",
            duration=100,
        )

        assert success is True

        updated = await meetings_controller.get_by_id(meeting.id)
        assert updated.daily_composed_video_s3_key == "first-s3-key"
        assert updated.daily_composed_video_duration == 100

    async def test_second_update_fails_atomically(self, test_room, base_time):
        """Second call to update same meeting doesn't overwrite (atomic check)."""
        meeting = await meetings_controller.create(
            id="meeting-atomic-2",
            room_name="daily-test-atomic2",
            room_url="https://example.daily.co/test",
            host_room_url="https://example.daily.co/test?t=host",
            start_date=base_time,
            end_date=base_time + timedelta(hours=1),
            room=test_room,
        )

        success1 = await meetings_controller.set_cloud_recording_if_missing(
            meeting_id=meeting.id,
            s3_key="first-s3-key",
            duration=100,
        )

        assert success1 is True

        after_first = await meetings_controller.get_by_id(meeting.id)
        assert after_first.daily_composed_video_s3_key == "first-s3-key"

        success2 = await meetings_controller.set_cloud_recording_if_missing(
            meeting_id=meeting.id,
            s3_key="bucket/path/should-not-overwrite",
            duration=200,
        )

        assert success2 is False

        final = await meetings_controller.get_by_id(meeting.id)
        assert final.daily_composed_video_s3_key == "first-s3-key"
        assert final.daily_composed_video_duration == 100
@@ -1,6 +1,6 @@
 import asyncio
 import time
-from unittest.mock import AsyncMock, patch
+from unittest.mock import patch

 import pytest
 from httpx import ASGITransport, AsyncClient
@@ -142,17 +142,17 @@ async def test_whereby_recording_uses_file_pipeline(client):
             "reflector.services.transcript_process.task_pipeline_file_process"
         ) as mock_file_pipeline,
         patch(
-            "reflector.services.transcript_process.HatchetClientManager"
-        ) as mock_hatchet,
+            "reflector.services.transcript_process.task_pipeline_multitrack_process"
+        ) as mock_multitrack_pipeline,
     ):
         response = await client.post(f"/transcripts/{transcript.id}/process")

         assert response.status_code == 200
         assert response.json()["status"] == "ok"

-        # Whereby recordings should use file pipeline, not Hatchet
+        # Whereby recordings should use file pipeline
         mock_file_pipeline.delay.assert_called_once_with(transcript_id=transcript.id)
-        mock_hatchet.start_workflow.assert_not_called()
+        mock_multitrack_pipeline.delay.assert_not_called()
@@ -177,6 +177,8 @@ async def test_dailyco_recording_uses_multitrack_pipeline(client):
         recording_trigger="automatic-2nd-participant",
         is_shared=False,
     )
+    # Force Celery backend for test
+    await rooms_controller.update(room, {"use_celery": True})

     transcript = await transcripts_controller.add(
         "",
@@ -211,23 +213,18 @@ async def test_dailyco_recording_uses_multitrack_pipeline(client):
             "reflector.services.transcript_process.task_pipeline_file_process"
         ) as mock_file_pipeline,
         patch(
-            "reflector.services.transcript_process.HatchetClientManager"
-        ) as mock_hatchet,
+            "reflector.services.transcript_process.task_pipeline_multitrack_process"
+        ) as mock_multitrack_pipeline,
     ):
-        mock_hatchet.start_workflow = AsyncMock(return_value="test-workflow-id")
-
         response = await client.post(f"/transcripts/{transcript.id}/process")

         assert response.status_code == 200
         assert response.json()["status"] == "ok"

-        # Daily.co multitrack recordings should use Hatchet workflow
-        mock_hatchet.start_workflow.assert_called_once()
-        call_kwargs = mock_hatchet.start_workflow.call_args.kwargs
-        assert call_kwargs["workflow_name"] == "DiarizationPipeline"
-        assert call_kwargs["input_data"]["transcript_id"] == transcript.id
-        assert call_kwargs["input_data"]["bucket_name"] == "daily-bucket"
-        assert call_kwargs["input_data"]["tracks"] == [
-            {"s3_key": k} for k in track_keys
-        ]
+        # Daily.co multitrack recordings should use multitrack pipeline
+        mock_multitrack_pipeline.delay.assert_called_once_with(
+            transcript_id=transcript.id,
+            bucket_name="daily-bucket",
+            track_keys=track_keys,
+        )
         mock_file_pipeline.delay.assert_not_called()
676
server/uv.lock
generated
@@ -235,6 +235,12 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
 ]

+[[package]]
+name = "antlr4-python3-runtime"
+version = "4.9.3"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/3e/38/7859ff46355f76f8d19459005ca000b6e7012f2f1ca597746cbcd1fbfe5e/antlr4-python3-runtime-4.9.3.tar.gz", hash = "sha256:f224469b4168294902bb1efa80a8bf7855f24c99aef99cbefc1bcd3cce77881b", size = 117034 }
+
 [[package]]
 name = "anyio"
 version = "4.9.0"
@@ -261,6 +267,21 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/2f/f5/c36551e93acba41a59939ae6a0fb77ddb3f2e8e8caa716410c65f7341f72/asgi_lifespan-2.1.0-py3-none-any.whl", hash = "sha256:ed840706680e28428c01e14afb3875d7d76d3206f3d5b2f2294e059b5c23804f", size = 10895 },
 ]

+[[package]]
+name = "asteroid-filterbanks"
+version = "0.4.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "numpy" },
+    { name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
+    { name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/90/fa/5c2be1f96dc179f83cdd3bb267edbd1f47d08f756785c016d5c2163901a7/asteroid-filterbanks-0.4.0.tar.gz", hash = "sha256:415f89d1dcf2b13b35f03f7a9370968ac4e6fa6800633c522dac992b283409b9", size = 24599 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/c5/7c/83ff6046176a675e6a1e8aeefed8892cd97fe7c46af93cc540d1b24b8323/asteroid_filterbanks-0.4.0-py3-none-any.whl", hash = "sha256:4932ac8b6acc6e08fb87cbe8ece84215b5a74eee284fe83acf3540a72a02eaf5", size = 29912 },
+]
+
 [[package]]
 name = "async-timeout"
 version = "5.0.1"
@@ -582,6 +603,56 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/a7/06/3d6badcf13db419e25b07041d9c7b4a2c331d3f4e7134445ec5df57714cd/coloredlogs-15.0.1-py2.py3-none-any.whl", hash = "sha256:612ee75c546f53e92e70049c9dbfcc18c935a2b9a53b66085ce9ef6a6e5c0934", size = 46018 },
 ]

+[[package]]
+name = "colorlog"
+version = "6.9.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "colorama", marker = "sys_platform == 'win32'" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/d3/7a/359f4d5df2353f26172b3cc39ea32daa39af8de522205f512f458923e677/colorlog-6.9.0.tar.gz", hash = "sha256:bfba54a1b93b94f54e1f4fe48395725a3d92fd2a4af702f6bd70946bdc0c6ac2", size = 16624 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/e3/51/9b208e85196941db2f0654ad0357ca6388ab3ed67efdbfc799f35d1f83aa/colorlog-6.9.0-py3-none-any.whl", hash = "sha256:5906e71acd67cb07a71e779c47c4bcb45fb8c2993eebe9e5adcd6a6f1b283eff", size = 11424 },
+]
+
+[[package]]
+name = "contourpy"
+version = "1.3.3"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "numpy" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/58/01/1253e6698a07380cd31a736d248a3f2a50a7c88779a1813da27503cadc2a/contourpy-1.3.3.tar.gz", hash = "sha256:083e12155b210502d0bca491432bb04d56dc3432f95a979b429f2848c3dbe880", size = 13466174 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/91/2e/c4390a31919d8a78b90e8ecf87cd4b4c4f05a5b48d05ec17db8e5404c6f4/contourpy-1.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:709a48ef9a690e1343202916450bc48b9e51c049b089c7f79a267b46cffcdaa1", size = 288773 },
+    { url = "https://files.pythonhosted.org/packages/0d/44/c4b0b6095fef4dc9c420e041799591e3b63e9619e3044f7f4f6c21c0ab24/contourpy-1.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:23416f38bfd74d5d28ab8429cc4d63fa67d5068bd711a85edb1c3fb0c3e2f381", size = 270149 },
+    { url = "https://files.pythonhosted.org/packages/30/2e/dd4ced42fefac8470661d7cb7e264808425e6c5d56d175291e93890cce09/contourpy-1.3.3-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:929ddf8c4c7f348e4c0a5a3a714b5c8542ffaa8c22954862a46ca1813b667ee7", size = 329222 },
+    { url = "https://files.pythonhosted.org/packages/f2/74/cc6ec2548e3d276c71389ea4802a774b7aa3558223b7bade3f25787fafc2/contourpy-1.3.3-cp311-cp311-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:9e999574eddae35f1312c2b4b717b7885d4edd6cb46700e04f7f02db454e67c1", size = 377234 },
+    { url = "https://files.pythonhosted.org/packages/03/b3/64ef723029f917410f75c09da54254c5f9ea90ef89b143ccadb09df14c15/contourpy-1.3.3-cp311-cp311-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf67e0e3f482cb69779dd3061b534eb35ac9b17f163d851e2a547d56dba0a3a", size = 380555 },
+    { url = "https://files.pythonhosted.org/packages/5f/4b/6157f24ca425b89fe2eb7e7be642375711ab671135be21e6faa100f7448c/contourpy-1.3.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:51e79c1f7470158e838808d4a996fa9bac72c498e93d8ebe5119bc1e6becb0db", size = 355238 },
+    { url = "https://files.pythonhosted.org/packages/98/56/f914f0dd678480708a04cfd2206e7c382533249bc5001eb9f58aa693e200/contourpy-1.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:598c3aaece21c503615fd59c92a3598b428b2f01bfb4b8ca9c4edeecc2438620", size = 1326218 },
+    { url = "https://files.pythonhosted.org/packages/fb/d7/4a972334a0c971acd5172389671113ae82aa7527073980c38d5868ff1161/contourpy-1.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:322ab1c99b008dad206d406bb61d014cf0174df491ae9d9d0fac6a6fda4f977f", size = 1392867 },
+    { url = "https://files.pythonhosted.org/packages/75/3e/f2cc6cd56dc8cff46b1a56232eabc6feea52720083ea71ab15523daab796/contourpy-1.3.3-cp311-cp311-win32.whl", hash = "sha256:fd907ae12cd483cd83e414b12941c632a969171bf90fc937d0c9f268a31cafff", size = 183677 },
+    { url = "https://files.pythonhosted.org/packages/98/4b/9bd370b004b5c9d8045c6c33cf65bae018b27aca550a3f657cdc99acdbd8/contourpy-1.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:3519428f6be58431c56581f1694ba8e50626f2dd550af225f82fb5f5814d2a42", size = 225234 },
+    { url = "https://files.pythonhosted.org/packages/d9/b6/71771e02c2e004450c12b1120a5f488cad2e4d5b590b1af8bad060360fe4/contourpy-1.3.3-cp311-cp311-win_arm64.whl", hash = "sha256:15ff10bfada4bf92ec8b31c62bf7c1834c244019b4a33095a68000d7075df470", size = 193123 },
+    { url = "https://files.pythonhosted.org/packages/be/45/adfee365d9ea3d853550b2e735f9d66366701c65db7855cd07621732ccfc/contourpy-1.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b08a32ea2f8e42cf1d4be3169a98dd4be32bafe4f22b6c4cb4ba810fa9e5d2cb", size = 293419 },
+    { url = "https://files.pythonhosted.org/packages/53/3e/405b59cfa13021a56bba395a6b3aca8cec012b45bf177b0eaf7a202cde2c/contourpy-1.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:556dba8fb6f5d8742f2923fe9457dbdd51e1049c4a43fd3986a0b14a1d815fc6", size = 273979 },
+    { url = "https://files.pythonhosted.org/packages/d4/1c/a12359b9b2ca3a845e8f7f9ac08bdf776114eb931392fcad91743e2ea17b/contourpy-1.3.3-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92d9abc807cf7d0e047b95ca5d957cf4792fcd04e920ca70d48add15c1a90ea7", size = 332653 },
+    { url = "https://files.pythonhosted.org/packages/63/12/897aeebfb475b7748ea67b61e045accdfcf0d971f8a588b67108ed7f5512/contourpy-1.3.3-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2e8faa0ed68cb29af51edd8e24798bb661eac3bd9f65420c1887b6ca89987c8", size = 379536 },
+    { url = "https://files.pythonhosted.org/packages/43/8a/a8c584b82deb248930ce069e71576fc09bd7174bbd35183b7943fb1064fd/contourpy-1.3.3-cp312-cp312-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:626d60935cf668e70a5ce6ff184fd713e9683fb458898e4249b63be9e28286ea", size = 384397 },
+    { url = "https://files.pythonhosted.org/packages/cc/8f/ec6289987824b29529d0dfda0d74a07cec60e54b9c92f3c9da4c0ac732de/contourpy-1.3.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4d00e655fcef08aba35ec9610536bfe90267d7ab5ba944f7032549c55a146da1", size = 362601 },
+    { url = "https://files.pythonhosted.org/packages/05/0a/a3fe3be3ee2dceb3e615ebb4df97ae6f3828aa915d3e10549ce016302bd1/contourpy-1.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:451e71b5a7d597379ef572de31eeb909a87246974d960049a9848c3bc6c41bf7", size = 1331288 },
+    { url = "https://files.pythonhosted.org/packages/33/1d/acad9bd4e97f13f3e2b18a3977fe1b4a37ecf3d38d815333980c6c72e963/contourpy-1.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:459c1f020cd59fcfe6650180678a9993932d80d44ccde1fa1868977438f0b411", size = 1403386 },
+    { url = "https://files.pythonhosted.org/packages/cf/8f/5847f44a7fddf859704217a99a23a4f6417b10e5ab1256a179264561540e/contourpy-1.3.3-cp312-cp312-win32.whl", hash = "sha256:023b44101dfe49d7d53932be418477dba359649246075c996866106da069af69", size = 185018 },
+    { url = "https://files.pythonhosted.org/packages/19/e8/6026ed58a64563186a9ee3f29f41261fd1828f527dd93d33b60feca63352/contourpy-1.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:8153b8bfc11e1e4d75bcb0bff1db232f9e10b274e0929de9d608027e0d34ff8b", size = 226567 },
+    { url = "https://files.pythonhosted.org/packages/d1/e2/f05240d2c39a1ed228d8328a78b6f44cd695f7ef47beb3e684cf93604f86/contourpy-1.3.3-cp312-cp312-win_arm64.whl", hash = "sha256:07ce5ed73ecdc4a03ffe3e1b3e3c1166db35ae7584be76f65dbbe28a7791b0cc", size = 193655 },
+    { url = "https://files.pythonhosted.org/packages/a5/29/8dcfe16f0107943fa92388c23f6e05cff0ba58058c4c95b00280d4c75a14/contourpy-1.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cd5dfcaeb10f7b7f9dc8941717c6c2ade08f587be2226222c12b25f0483ed497", size = 278809 },
+    { url = "https://files.pythonhosted.org/packages/85/a9/8b37ef4f7dafeb335daee3c8254645ef5725be4d9c6aa70b50ec46ef2f7e/contourpy-1.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:0c1fc238306b35f246d61a1d416a627348b5cf0648648a031e14bb8705fcdfe8", size = 261593 },
+    { url = "https://files.pythonhosted.org/packages/0a/59/ebfb8c677c75605cc27f7122c90313fd2f375ff3c8d19a1694bda74aaa63/contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70f9aad7de812d6541d29d2bbf8feb22ff7e1c299523db288004e3157ff4674e", size = 302202 },
+    { url = "https://files.pythonhosted.org/packages/3c/37/21972a15834d90bfbfb009b9d004779bd5a07a0ec0234e5ba8f64d5736f4/contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ed3657edf08512fc3fe81b510e35c2012fbd3081d2e26160f27ca28affec989", size = 329207 },
+    { url = "https://files.pythonhosted.org/packages/0c/58/bd257695f39d05594ca4ad60df5bcb7e32247f9951fd09a9b8edb82d1daa/contourpy-1.3.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:3d1a3799d62d45c18bafd41c5fa05120b96a28079f2393af559b843d1a966a77", size = 225315 },
+]
+
 [[package]]
 name = "coverage"
 version = "7.9.2"
@@ -682,6 +753,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/ec/4c/0ecd260233290bee4b2facec4d8e755e57d8781d68f276e1248433993c9f/ctranslate2-4.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:511cdf810a5bf6a2cec735799e5cd47966e63f8f7688fdee1b97fed621abda00", size = 19470040 },
 ]

+[[package]]
+name = "cycler"
+version = "0.12.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/a9/95/a3dbbb5028f35eafb79008e7522a75244477d2838f38cbb722248dabc2a8/cycler-0.12.1.tar.gz", hash = "sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c", size = 7615 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321 },
+]
+
 [[package]]
 name = "databases"
 version = "0.8.0"
@@ -794,6 +874,12 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/e3/26/57c6fb270950d476074c087527a558ccb6f4436657314bfb6cdf484114c4/docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0", size = 147774 },
 ]

+[[package]]
+name = "docopt"
+version = "0.6.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz", hash = "sha256:49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491", size = 25901 }
+
 [[package]]
 name = "ecdsa"
 version = "0.19.1"
@@ -806,6 +892,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/cb/a3/460c57f094a4a165c84a1341c373b0a4f5ec6ac244b998d5021aade89b77/ecdsa-0.19.1-py2.py3-none-any.whl", hash = "sha256:30638e27cf77b7e15c4c4cc1973720149e1033827cfd00661ca5c8cc0cdb24c3", size = 150607 },
 ]

+[[package]]
+name = "einops"
+version = "0.8.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/e5/81/df4fbe24dff8ba3934af99044188e20a98ed441ad17a274539b74e82e126/einops-0.8.1.tar.gz", hash = "sha256:de5d960a7a761225532e0f1959e5315ebeafc0cd43394732f103ca44b9837e84", size = 54805 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/87/62/9773de14fe6c45c23649e98b83231fffd7b9892b6cf863251dc2afa73643/einops-0.8.1-py3-none-any.whl", hash = "sha256:919387eb55330f5757c6bea9165c5ff5cfe63a642682ea788a6d472576d81737", size = 64359 },
+]
+
 [[package]]
 name = "email-validator"
 version = "2.2.0"
@@ -939,6 +1034,31 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/b8/25/155f9f080d5e4bc0082edfda032ea2bc2b8fab3f4d25d46c1e9dd22a1a89/flatbuffers-25.2.10-py2.py3-none-any.whl", hash = "sha256:ebba5f4d5ea615af3f7fd70fc310636fbb2bbd1f566ac0a23d98dd412de50051", size = 30953 },
 ]

+[[package]]
|
||||
name = "fonttools"
|
||||
version = "4.59.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/0d/a5/fba25f9fbdab96e26dedcaeeba125e5f05a09043bf888e0305326e55685b/fonttools-4.59.2.tar.gz", hash = "sha256:e72c0749b06113f50bcb80332364c6be83a9582d6e3db3fe0b280f996dc2ef22", size = 3540889 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/f8/53/742fcd750ae0bdc74de4c0ff923111199cc2f90a4ee87aaddad505b6f477/fonttools-4.59.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:511946e8d7ea5c0d6c7a53c4cb3ee48eda9ab9797cd9bf5d95829a398400354f", size = 2774961 },
|
||||
{ url = "https://files.pythonhosted.org/packages/57/2a/976f5f9fa3b4dd911dc58d07358467bec20e813d933bc5d3db1a955dd456/fonttools-4.59.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8e5e2682cf7be766d84f462ba8828d01e00c8751a8e8e7ce12d7784ccb69a30d", size = 2344690 },
|
||||
{ url = "https://files.pythonhosted.org/packages/c1/8f/b7eefc274fcf370911e292e95565c8253b0b87c82a53919ab3c795a4f50e/fonttools-4.59.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5729e12a982dba3eeae650de48b06f3b9ddb51e9aee2fcaf195b7d09a96250e2", size = 5026910 },
|
||||
{ url = "https://files.pythonhosted.org/packages/69/95/864726eaa8f9d4e053d0c462e64d5830ec7c599cbdf1db9e40f25ca3972e/fonttools-4.59.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c52694eae5d652361d59ecdb5a2246bff7cff13b6367a12da8499e9df56d148d", size = 4971031 },
|
||||
{ url = "https://files.pythonhosted.org/packages/24/4c/b8c4735ebdea20696277c70c79e0de615dbe477834e5a7c2569aa1db4033/fonttools-4.59.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f1f1bbc23ba1312bd8959896f46f667753b90216852d2a8cfa2d07e0cb234144", size = 5006112 },
|
||||
{ url = "https://files.pythonhosted.org/packages/3b/23/f9ea29c292aa2fc1ea381b2e5621ac436d5e3e0a5dee24ffe5404e58eae8/fonttools-4.59.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1a1bfe5378962825dabe741720885e8b9ae9745ec7ecc4a5ec1f1ce59a6062bf", size = 5117671 },
|
||||
{ url = "https://files.pythonhosted.org/packages/ba/07/cfea304c555bf06e86071ff2a3916bc90f7c07ec85b23bab758d4908c33d/fonttools-4.59.2-cp311-cp311-win32.whl", hash = "sha256:e937790f3c2c18a1cbc7da101550a84319eb48023a715914477d2e7faeaba570", size = 2218157 },
|
||||
{ url = "https://files.pythonhosted.org/packages/d7/de/35d839aa69db737a3f9f3a45000ca24721834d40118652a5775d5eca8ebb/fonttools-4.59.2-cp311-cp311-win_amd64.whl", hash = "sha256:9836394e2f4ce5f9c0a7690ee93bd90aa1adc6b054f1a57b562c5d242c903104", size = 2265846 },
|
||||
{ url = "https://files.pythonhosted.org/packages/ba/3d/1f45db2df51e7bfa55492e8f23f383d372200be3a0ded4bf56a92753dd1f/fonttools-4.59.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:82906d002c349cad647a7634b004825a7335f8159d0d035ae89253b4abf6f3ea", size = 2769711 },
|
||||
{ url = "https://files.pythonhosted.org/packages/29/df/cd236ab32a8abfd11558f296e064424258db5edefd1279ffdbcfd4fd8b76/fonttools-4.59.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a10c1bd7644dc58f8862d8ba0cf9fb7fef0af01ea184ba6ce3f50ab7dfe74d5a", size = 2340225 },
|
||||
{ url = "https://files.pythonhosted.org/packages/98/12/b6f9f964fe6d4b4dd4406bcbd3328821c3de1f909ffc3ffa558fe72af48c/fonttools-4.59.2-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:738f31f23e0339785fd67652a94bc69ea49e413dfdb14dcb8c8ff383d249464e", size = 4912766 },
|
||||
{ url = "https://files.pythonhosted.org/packages/73/78/82bde2f2d2c306ef3909b927363170b83df96171f74e0ccb47ad344563cd/fonttools-4.59.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ec99f9bdfee9cdb4a9172f9e8fd578cce5feb231f598909e0aecf5418da4f25", size = 4955178 },
|
||||
{ url = "https://files.pythonhosted.org/packages/92/77/7de766afe2d31dda8ee46d7e479f35c7d48747e558961489a2d6e3a02bd4/fonttools-4.59.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0476ea74161322e08c7a982f83558a2b81b491509984523a1a540baf8611cc31", size = 4897898 },
|
||||
{ url = "https://files.pythonhosted.org/packages/c5/77/ce0e0b905d62a06415fda9f2b2e109a24a5db54a59502b769e9e297d2242/fonttools-4.59.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:95922a922daa1f77cc72611747c156cfb38030ead72436a2c551d30ecef519b9", size = 5049144 },
|
||||
{ url = "https://files.pythonhosted.org/packages/d9/ea/870d93aefd23fff2e07cbeebdc332527868422a433c64062c09d4d5e7fe6/fonttools-4.59.2-cp312-cp312-win32.whl", hash = "sha256:39ad9612c6a622726a6a130e8ab15794558591f999673f1ee7d2f3d30f6a3e1c", size = 2206473 },
|
||||
{ url = "https://files.pythonhosted.org/packages/61/c4/e44bad000c4a4bb2e9ca11491d266e857df98ab6d7428441b173f0fe2517/fonttools-4.59.2-cp312-cp312-win_amd64.whl", hash = "sha256:980fd7388e461b19a881d35013fec32c713ffea1fc37aef2f77d11f332dfd7da", size = 2254706 },
|
||||
{ url = "https://files.pythonhosted.org/packages/65/a4/d2f7be3c86708912c02571db0b550121caab8cd88a3c0aacb9cfa15ea66e/fonttools-4.59.2-py3-none-any.whl", hash = "sha256:8bd0f759020e87bb5d323e6283914d9bf4ae35a7307dafb2cbd1e379e720ad37", size = 1132315 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "frozenlist"
|
||||
version = "1.7.0"
|
||||
@@ -991,6 +1111,11 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/2f/e0/014d5d9d7a4564cf1c40b5039bc882db69fd881111e03ab3657ac0b218e2/fsspec-2025.7.0-py3-none-any.whl", hash = "sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21", size = 199597 },
|
||||
]
|
||||
|
||||
[package.optional-dependencies]
|
||||
http = [
|
||||
{ name = "aiohttp" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "google-crc32c"
|
||||
version = "1.7.1"
|
||||
@@ -1255,6 +1380,19 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/f0/0f/310fb31e39e2d734ccaa2c0fb981ee41f7bd5056ce9bc29b2248bd569169/humanfriendly-10.0-py2.py3-none-any.whl", hash = "sha256:1697e1a8a8f550fd43c2865cd84542fc175a61dcb779b6fee18cf6b6ccba1477", size = 86794 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "hyperpyyaml"
|
||||
version = "1.2.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pyyaml" },
|
||||
{ name = "ruamel-yaml" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/52/e3/3ac46d9a662b037f699a6948b39c8d03bfcff0b592335d5953ba0c55d453/HyperPyYAML-1.2.2.tar.gz", hash = "sha256:bdb734210d18770a262f500fe5755c7a44a5d3b91521b06e24f7a00a36ee0f87", size = 17085 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/33/c9/751b6401887f4b50f9307cc1e53d287b3dc77c375c126aeb6335aff73ccb/HyperPyYAML-1.2.2-py3-none-any.whl", hash = "sha256:3c5864bdc8864b2f0fbd7bc495e7e8fdf2dfd5dd80116f72da27ca96a128bdeb", size = 16118 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "icalendar"
|
||||
version = "6.3.1"
|
||||
@@ -1397,6 +1535,55 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/01/0e/b27cdbaccf30b890c40ed1da9fd4a3593a5cf94dae54fb34f8a4b74fcd3f/jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af", size = 18437 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "julius"
|
||||
version = "0.2.7"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
|
||||
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/a1/19/c9e1596b5572c786b93428d0904280e964c930fae7e6c9368ed9e1b63922/julius-0.2.7.tar.gz", hash = "sha256:3c0f5f5306d7d6016fcc95196b274cae6f07e2c9596eed314e4e7641554fbb08", size = 59640 }
|
||||
|
||||
[[package]]
|
||||
name = "kiwisolver"
|
||||
version = "1.4.9"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/5c/3c/85844f1b0feb11ee581ac23fe5fce65cd049a200c1446708cc1b7f922875/kiwisolver-1.4.9.tar.gz", hash = "sha256:c3b22c26c6fd6811b0ae8363b95ca8ce4ea3c202d3d0975b2914310ceb1bcc4d", size = 97564 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/6f/ab/c80b0d5a9d8a1a65f4f815f2afff9798b12c3b9f31f1d304dd233dd920e2/kiwisolver-1.4.9-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:eb14a5da6dc7642b0f3a18f13654847cd8b7a2550e2645a5bda677862b03ba16", size = 124167 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/c0/27fe1a68a39cf62472a300e2879ffc13c0538546c359b86f149cc19f6ac3/kiwisolver-1.4.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:39a219e1c81ae3b103643d2aedb90f1ef22650deb266ff12a19e7773f3e5f089", size = 66579 },
|
||||
{ url = "https://files.pythonhosted.org/packages/31/a2/a12a503ac1fd4943c50f9822678e8015a790a13b5490354c68afb8489814/kiwisolver-1.4.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2405a7d98604b87f3fc28b1716783534b1b4b8510d8142adca34ee0bc3c87543", size = 65309 },
|
||||
{ url = "https://files.pythonhosted.org/packages/66/e1/e533435c0be77c3f64040d68d7a657771194a63c279f55573188161e81ca/kiwisolver-1.4.9-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dc1ae486f9abcef254b5618dfb4113dd49f94c68e3e027d03cf0143f3f772b61", size = 1435596 },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/1e/51b73c7347f9aabdc7215aa79e8b15299097dc2f8e67dee2b095faca9cb0/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8a1f570ce4d62d718dce3f179ee78dac3b545ac16c0c04bb363b7607a949c0d1", size = 1246548 },
|
||||
{ url = "https://files.pythonhosted.org/packages/21/aa/72a1c5d1e430294f2d32adb9542719cfb441b5da368d09d268c7757af46c/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:cb27e7b78d716c591e88e0a09a2139c6577865d7f2e152488c2cc6257f460872", size = 1263618 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a3/af/db1509a9e79dbf4c260ce0cfa3903ea8945f6240e9e59d1e4deb731b1a40/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:15163165efc2f627eb9687ea5f3a28137217d217ac4024893d753f46bce9de26", size = 1317437 },
|
||||
{ url = "https://files.pythonhosted.org/packages/e0/f2/3ea5ee5d52abacdd12013a94130436e19969fa183faa1e7c7fbc89e9a42f/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bdee92c56a71d2b24c33a7d4c2856bd6419d017e08caa7802d2963870e315028", size = 2195742 },
|
||||
{ url = "https://files.pythonhosted.org/packages/6f/9b/1efdd3013c2d9a2566aa6a337e9923a00590c516add9a1e89a768a3eb2fc/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:412f287c55a6f54b0650bd9b6dce5aceddb95864a1a90c87af16979d37c89771", size = 2290810 },
|
||||
{ url = "https://files.pythonhosted.org/packages/fb/e5/cfdc36109ae4e67361f9bc5b41323648cb24a01b9ade18784657e022e65f/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:2c93f00dcba2eea70af2be5f11a830a742fe6b579a1d4e00f47760ef13be247a", size = 2461579 },
|
||||
{ url = "https://files.pythonhosted.org/packages/62/86/b589e5e86c7610842213994cdea5add00960076bef4ae290c5fa68589cac/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f117e1a089d9411663a3207ba874f31be9ac8eaa5b533787024dc07aeb74f464", size = 2268071 },
|
||||
{ url = "https://files.pythonhosted.org/packages/3b/c6/f8df8509fd1eee6c622febe54384a96cfaf4d43bf2ccec7a0cc17e4715c9/kiwisolver-1.4.9-cp311-cp311-win_amd64.whl", hash = "sha256:be6a04e6c79819c9a8c2373317d19a96048e5a3f90bec587787e86a1153883c2", size = 73840 },
|
||||
{ url = "https://files.pythonhosted.org/packages/e2/2d/16e0581daafd147bc11ac53f032a2b45eabac897f42a338d0a13c1e5c436/kiwisolver-1.4.9-cp311-cp311-win_arm64.whl", hash = "sha256:0ae37737256ba2de764ddc12aed4956460277f00c4996d51a197e72f62f5eec7", size = 65159 },
|
||||
{ url = "https://files.pythonhosted.org/packages/86/c9/13573a747838aeb1c76e3267620daa054f4152444d1f3d1a2324b78255b5/kiwisolver-1.4.9-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ac5a486ac389dddcc5bef4f365b6ae3ffff2c433324fb38dd35e3fab7c957999", size = 123686 },
|
||||
{ url = "https://files.pythonhosted.org/packages/51/ea/2ecf727927f103ffd1739271ca19c424d0e65ea473fbaeea1c014aea93f6/kiwisolver-1.4.9-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2ba92255faa7309d06fe44c3a4a97efe1c8d640c2a79a5ef728b685762a6fd2", size = 66460 },
|
||||
{ url = "https://files.pythonhosted.org/packages/5b/5a/51f5464373ce2aeb5194508298a508b6f21d3867f499556263c64c621914/kiwisolver-1.4.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4a2899935e724dd1074cb568ce7ac0dce28b2cd6ab539c8e001a8578eb106d14", size = 64952 },
|
||||
{ url = "https://files.pythonhosted.org/packages/70/90/6d240beb0f24b74371762873e9b7f499f1e02166a2d9c5801f4dbf8fa12e/kiwisolver-1.4.9-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f6008a4919fdbc0b0097089f67a1eb55d950ed7e90ce2cc3e640abadd2757a04", size = 1474756 },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/42/f36816eaf465220f683fb711efdd1bbf7a7005a2473d0e4ed421389bd26c/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:67bb8b474b4181770f926f7b7d2f8c0248cbcb78b660fdd41a47054b28d2a752", size = 1276404 },
|
||||
{ url = "https://files.pythonhosted.org/packages/2e/64/bc2de94800adc830c476dce44e9b40fd0809cddeef1fde9fcf0f73da301f/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2327a4a30d3ee07d2fbe2e7933e8a37c591663b96ce42a00bc67461a87d7df77", size = 1294410 },
|
||||
{ url = "https://files.pythonhosted.org/packages/5f/42/2dc82330a70aa8e55b6d395b11018045e58d0bb00834502bf11509f79091/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:7a08b491ec91b1d5053ac177afe5290adacf1f0f6307d771ccac5de30592d198", size = 1343631 },
|
||||
{ url = "https://files.pythonhosted.org/packages/22/fd/f4c67a6ed1aab149ec5a8a401c323cee7a1cbe364381bb6c9c0d564e0e20/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d8fc5c867c22b828001b6a38d2eaeb88160bf5783c6cb4a5e440efc981ce286d", size = 2224963 },
|
||||
{ url = "https://files.pythonhosted.org/packages/45/aa/76720bd4cb3713314677d9ec94dcc21ced3f1baf4830adde5bb9b2430a5f/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:3b3115b2581ea35bb6d1f24a4c90af37e5d9b49dcff267eeed14c3893c5b86ab", size = 2321295 },
|
||||
{ url = "https://files.pythonhosted.org/packages/80/19/d3ec0d9ab711242f56ae0dc2fc5d70e298bb4a1f9dfab44c027668c673a1/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:858e4c22fb075920b96a291928cb7dea5644e94c0ee4fcd5af7e865655e4ccf2", size = 2487987 },
|
||||
{ url = "https://files.pythonhosted.org/packages/39/e9/61e4813b2c97e86b6fdbd4dd824bf72d28bcd8d4849b8084a357bc0dd64d/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ed0fecd28cc62c54b262e3736f8bb2512d8dcfdc2bcf08be5f47f96bf405b145", size = 2291817 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/41/85d82b0291db7504da3c2defe35c9a8a5c9803a730f297bd823d11d5fb77/kiwisolver-1.4.9-cp312-cp312-win_amd64.whl", hash = "sha256:f68208a520c3d86ea51acf688a3e3002615a7f0238002cccc17affecc86a8a54", size = 73895 },
|
||||
{ url = "https://files.pythonhosted.org/packages/e2/92/5f3068cf15ee5cb624a0c7596e67e2a0bb2adee33f71c379054a491d07da/kiwisolver-1.4.9-cp312-cp312-win_arm64.whl", hash = "sha256:2c1a4f57df73965f3f14df20b80ee29e6a7930a57d2d9e8491a25f676e197c60", size = 64992 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a3/0f/36d89194b5a32c054ce93e586d4049b6c2c22887b0eb229c61c68afd3078/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:720e05574713db64c356e86732c0f3c5252818d05f9df320f0ad8380641acea5", size = 60104 },
|
||||
{ url = "https://files.pythonhosted.org/packages/52/ba/4ed75f59e4658fd21fe7dde1fee0ac397c678ec3befba3fe6482d987af87/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:17680d737d5335b552994a2008fab4c851bcd7de33094a82067ef3a576ff02fa", size = 58592 },
|
||||
{ url = "https://files.pythonhosted.org/packages/33/01/a8ea7c5ea32a9b45ceeaee051a04c8ed4320f5add3c51bfa20879b765b70/kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:85b5352f94e490c028926ea567fc569c52ec79ce131dadb968d3853e809518c2", size = 80281 },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/e3/dbd2ecdce306f1d07a1aaf324817ee993aab7aee9db47ceac757deabafbe/kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:464415881e4801295659462c49461a24fb107c140de781d55518c4b80cb6790f", size = 78009 },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/e9/0d4add7873a73e462aeb45c036a2dead2562b825aa46ba326727b3f31016/kiwisolver-1.4.9-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:fb940820c63a9590d31d88b815e7a3aa5915cad3ce735ab45f0c730b39547de1", size = 73929 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "kombu"
|
||||
version = "5.5.4"
|
||||
@@ -1459,6 +1646,41 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/dc/1e/408fd10217eac0e43aea0604be22b4851a09e03d761d44d4ea12089dd70e/levenshtein-0.27.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:7987ef006a3cf56a4532bd4c90c2d3b7b4ca9ad3bf8ae1ee5713c4a3bdfda913", size = 98045 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "lightning"
|
||||
version = "2.5.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "fsspec", extra = ["http"] },
|
||||
{ name = "lightning-utilities" },
|
||||
{ name = "packaging" },
|
||||
{ name = "pytorch-lightning" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
|
||||
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
|
||||
{ name = "torchmetrics" },
|
||||
{ name = "tqdm" },
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/0f/dd/86bb3bebadcdbc6e6e5a63657f0a03f74cd065b5ea965896679f76fec0b4/lightning-2.5.5.tar.gz", hash = "sha256:4d3d66c5b1481364a7e6a1ce8ddde1777a04fa740a3145ec218a9941aed7dd30", size = 640770 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/2e/d0/4b4fbafc3b18df91207a6e46782d9fd1905f9f45cb2c3b8dfbb239aef781/lightning-2.5.5-py3-none-any.whl", hash = "sha256:69eb248beadd7b600bf48eff00a0ec8af171ec7a678d23787c4aedf12e225e8f", size = 828490 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "lightning-utilities"
|
||||
version = "0.15.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "packaging" },
|
||||
{ name = "setuptools" },
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/b8/39/6fc58ca81492db047149b4b8fd385aa1bfb8c28cd7cacb0c7eb0c44d842f/lightning_utilities-0.15.2.tar.gz", hash = "sha256:cdf12f530214a63dacefd713f180d1ecf5d165338101617b4742e8f22c032e24", size = 31090 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/de/73/3d757cb3fc16f0f9794dd289bcd0c4a031d9cf54d8137d6b984b2d02edf3/lightning_utilities-0.15.2-py3-none-any.whl", hash = "sha256:ad3ab1703775044bbf880dbf7ddaaac899396c96315f3aa1779cec9d618a9841", size = 29431 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "llama-cloud"
|
||||
version = "0.1.32"
|
||||
@@ -1806,6 +2028,42 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/34/75/51952c7b2d3873b44a0028b1bd26a25078c18f92f256608e8d1dc61b39fd/marshmallow-3.26.1-py3-none-any.whl", hash = "sha256:3350409f20a70a7e4e11a27661187b77cdcaeb20abca41c1454fe33636bea09c", size = 50878 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "matplotlib"
|
||||
version = "3.10.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "contourpy" },
|
||||
{ name = "cycler" },
|
||||
{ name = "fonttools" },
|
||||
{ name = "kiwisolver" },
|
||||
{ name = "numpy" },
|
||||
{ name = "packaging" },
|
||||
{ name = "pillow" },
|
||||
{ name = "pyparsing" },
|
||||
{ name = "python-dateutil" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/a0/59/c3e6453a9676ffba145309a73c462bb407f4400de7de3f2b41af70720a3c/matplotlib-3.10.6.tar.gz", hash = "sha256:ec01b645840dd1996df21ee37f208cd8ba57644779fa20464010638013d3203c", size = 34804264 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/80/d6/5d3665aa44c49005aaacaa68ddea6fcb27345961cd538a98bb0177934ede/matplotlib-3.10.6-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:905b60d1cb0ee604ce65b297b61cf8be9f4e6cfecf95a3fe1c388b5266bc8f4f", size = 8257527 },
|
||||
{ url = "https://files.pythonhosted.org/packages/8c/af/30ddefe19ca67eebd70047dabf50f899eaff6f3c5e6a1a7edaecaf63f794/matplotlib-3.10.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7bac38d816637343e53d7185d0c66677ff30ffb131044a81898b5792c956ba76", size = 8119583 },
|
||||
{ url = "https://files.pythonhosted.org/packages/d3/29/4a8650a3dcae97fa4f375d46efcb25920d67b512186f8a6788b896062a81/matplotlib-3.10.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:942a8de2b5bfff1de31d95722f702e2966b8a7e31f4e68f7cd963c7cd8861cf6", size = 8692682 },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/d3/b793b9cb061cfd5d42ff0f69d1822f8d5dbc94e004618e48a97a8373179a/matplotlib-3.10.6-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3276c85370bc0dfca051ec65c5817d1e0f8f5ce1b7787528ec8ed2d524bbc2f", size = 9521065 },
|
||||
{ url = "https://files.pythonhosted.org/packages/f7/c5/53de5629f223c1c66668d46ac2621961970d21916a4bc3862b174eb2a88f/matplotlib-3.10.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9df5851b219225731f564e4b9e7f2ac1e13c9e6481f941b5631a0f8e2d9387ce", size = 9576888 },
|
||||
{ url = "https://files.pythonhosted.org/packages/fc/8e/0a18d6d7d2d0a2e66585032a760d13662e5250c784d53ad50434e9560991/matplotlib-3.10.6-cp311-cp311-win_amd64.whl", hash = "sha256:abb5d9478625dd9c9eb51a06d39aae71eda749ae9b3138afb23eb38824026c7e", size = 8115158 },
|
||||
{ url = "https://files.pythonhosted.org/packages/07/b3/1a5107bb66c261e23b9338070702597a2d374e5aa7004b7adfc754fbed02/matplotlib-3.10.6-cp311-cp311-win_arm64.whl", hash = "sha256:886f989ccfae63659183173bb3fced7fd65e9eb793c3cc21c273add368536951", size = 7992444 },
|
||||
{ url = "https://files.pythonhosted.org/packages/ea/1a/7042f7430055d567cc3257ac409fcf608599ab27459457f13772c2d9778b/matplotlib-3.10.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:31ca662df6a80bd426f871105fdd69db7543e28e73a9f2afe80de7e531eb2347", size = 8272404 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a9/5d/1d5f33f5b43f4f9e69e6a5fe1fb9090936ae7bc8e2ff6158e7a76542633b/matplotlib-3.10.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1678bb61d897bb4ac4757b5ecfb02bfb3fddf7f808000fb81e09c510712fda75", size = 8128262 },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/c3/135fdbbbf84e0979712df58e5e22b4f257b3f5e52a3c4aacf1b8abec0d09/matplotlib-3.10.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:56cd2d20842f58c03d2d6e6c1f1cf5548ad6f66b91e1e48f814e4fb5abd1cb95", size = 8697008 },
|
||||
{ url = "https://files.pythonhosted.org/packages/9c/be/c443ea428fb2488a3ea7608714b1bd85a82738c45da21b447dc49e2f8e5d/matplotlib-3.10.6-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:662df55604a2f9a45435566d6e2660e41efe83cd94f4288dfbf1e6d1eae4b0bb", size = 9530166 },
|
||||
{ url = "https://files.pythonhosted.org/packages/a9/35/48441422b044d74034aea2a3e0d1a49023f12150ebc58f16600132b9bbaf/matplotlib-3.10.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08f141d55148cd1fc870c3387d70ca4df16dee10e909b3b038782bd4bda6ea07", size = 9593105 },
|
||||
{ url = "https://files.pythonhosted.org/packages/45/c3/994ef20eb4154ab84cc08d033834555319e4af970165e6c8894050af0b3c/matplotlib-3.10.6-cp312-cp312-win_amd64.whl", hash = "sha256:590f5925c2d650b5c9d813c5b3b5fc53f2929c3f8ef463e4ecfa7e052044fb2b", size = 8122784 },
|
||||
{ url = "https://files.pythonhosted.org/packages/57/b8/5c85d9ae0e40f04e71bedb053aada5d6bab1f9b5399a0937afb5d6b02d98/matplotlib-3.10.6-cp312-cp312-win_arm64.whl", hash = "sha256:f44c8d264a71609c79a78d50349e724f5d5fc3684ead7c2a473665ee63d868aa", size = 7992823 },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/bb/02c35a51484aae5f49bd29f091286e7af5f3f677a9736c58a92b3c78baeb/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:f2d684c3204fa62421bbf770ddfebc6b50130f9cad65531eeba19236d73bb488", size = 8252296 },
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/85/41701e3092005aee9a2445f5ee3904d9dbd4a7df7a45905ffef29b7ef098/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6f4a69196e663a41d12a728fab8751177215357906436804217d6d9cf0d4d6cf", size = 8116749 },
|
||||
{ url = "https://files.pythonhosted.org/packages/16/53/8d8fa0ea32a8c8239e04d022f6c059ee5e1b77517769feccd50f1df43d6d/matplotlib-3.10.6-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d6ca6ef03dfd269f4ead566ec6f3fb9becf8dab146fb999022ed85ee9f6b3eb", size = 8693933 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "mdurl"
|
||||
version = "0.1.2"
|
||||
@@ -1947,6 +2205,19 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/48/6b/1c6b515a83d5564b1698a61efa245727c8feecf308f4091f565988519d20/numpy-2.3.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb", size = 12927246 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "omegaconf"
|
||||
version = "2.3.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "antlr4-python3-runtime" },
|
||||
{ name = "pyyaml" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/09/48/6388f1bb9da707110532cb70ec4d2822858ddfb44f1cdf1233c20a80ea4b/omegaconf-2.3.0.tar.gz", hash = "sha256:d5d4b6d29955cc50ad50c46dc269bcd92c6e00f5f90d23ab5fee7bfca4ba4cc7", size = 3298120 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/e3/94/1843518e420fa3ed6919835845df698c7e27e183cb997394e4a670973a65/omegaconf-2.3.0-py3-none-any.whl", hash = "sha256:7b4df175cdb08ba400f45cae3bdcae7ba8365db4d165fc65fd04b050ab63b46b", size = 79500 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "onnxruntime"
|
||||
version = "1.22.1"
|
||||
@@ -1989,6 +2260,24 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/8a/91/1f1cf577f745e956b276a8b1d3d76fa7a6ee0c2b05db3b001b900f2c71db/openai-1.97.0-py3-none-any.whl", hash = "sha256:a1c24d96f4609f3f7f51c9e1c2606d97cc6e334833438659cfd687e9c972c610", size = 764953 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "optuna"
|
||||
version = "4.5.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "alembic" },
|
||||
{ name = "colorlog" },
|
||||
{ name = "numpy" },
|
||||
{ name = "packaging" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "sqlalchemy" },
|
||||
{ name = "tqdm" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/53/a3/bcd1e5500de6ec794c085a277e5b624e60b4fac1790681d7cdbde25b93a2/optuna-4.5.0.tar.gz", hash = "sha256:264844da16dad744dea295057d8bc218646129c47567d52c35a201d9f99942ba", size = 472338 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/7f/12/cba81286cbaf0f0c3f0473846cfd992cb240bdcea816bf2ef7de8ed0f744/optuna-4.5.0-py3-none-any.whl", hash = "sha256:5b8a783e84e448b0742501bc27195344a28d2c77bd2feef5b558544d954851b0", size = 400872 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "packaging"
|
||||
version = "25.0"
|
||||
@@ -2090,6 +2379,15 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "primepy"
|
||||
version = "1.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/35/77/0cfa1b4697cfb5336f3a96e8bc73327f64610be3a64c97275f1801afb395/primePy-1.3.tar.gz", hash = "sha256:25fd7e25344b0789a5984c75d89f054fcf1f180bef20c998e4befbac92de4669", size = 3914 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/74/c1/bb7e334135859c3a92ec399bc89293ea73f28e815e35b43929c8db6af030/primePy-1.3-py3-none-any.whl", hash = "sha256:5ed443718765be9bf7e2ff4c56cdff71b42140a15b39d054f9d99f0009e2317a", size = 4040 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "prometheus-client"
|
||||
version = "0.22.1"
|
||||
@@ -2226,6 +2524,109 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/92/29/06261ea000e2dc1e22907dbbc483a1093665509ea586b29b8986a0e56733/psycopg2_binary-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:18c5ee682b9c6dd3696dad6e54cc7ff3a1a9020df6a5c0f861ef8bfd338c3ca0", size = 1164031 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyannote-audio"
|
||||
version = "3.3.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "asteroid-filterbanks" },
|
||||
{ name = "einops" },
|
||||
{ name = "huggingface-hub" },
|
||||
{ name = "lightning" },
|
||||
{ name = "omegaconf" },
|
||||
{ name = "pyannote-core" },
|
||||
{ name = "pyannote-database" },
|
||||
{ name = "pyannote-metrics" },
|
||||
{ name = "pyannote-pipeline" },
|
||||
{ name = "pytorch-metric-learning" },
|
||||
{ name = "rich" },
|
||||
{ name = "semver" },
|
||||
{ name = "soundfile" },
|
||||
{ name = "speechbrain" },
|
||||
{ name = "tensorboardx" },
|
||||
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
|
||||
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
|
||||
{ name = "torch-audiomentations" },
|
||||
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
|
||||
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
{ name = "torchmetrics" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/e9/00/3b96ca7ad0641e4f64cfaa2af153dc7da0998ff972280e1c1681b1fcc243/pyannote_audio-3.3.2.tar.gz", hash = "sha256:b2115e86b0db5faedb9f36ee1a150cebd07f7758e65e815accdac1a12ca9c777", size = 13664309 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/17/e6/76049470d90217f9a15a34abf3e92d782cabc3fb4ab27515c9baaa5495d1/pyannote.audio-3.3.2-py2.py3-none-any.whl", hash = "sha256:599c694acd5d193215147ff82d0bf638bb191204ed502bd9fde8ff582e20aa1c", size = 898707 },
|
||||
{ url = "https://files.pythonhosted.org/packages/b7/9a/98a8992727e762b031ed30451d5726ece46cf8bb7b872a9dba5cef011e5d/pyannote_audio-3.3.2-py2.py3-none-any.whl", hash = "sha256:23e0dcedda920cb2e154e146bcd9663289ee7942d0e012663dad76f2e571ebeb", size = 897827 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyannote-core"
|
||||
version = "5.0.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "numpy" },
|
||||
{ name = "scipy" },
|
||||
{ name = "sortedcontainers" },
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/65/03/feaf7534206f02c75baf151ce4b8c322b402a6f477c2be82f69d9269cbe6/pyannote.core-5.0.0.tar.gz", hash = "sha256:1a55bcc8bd680ba6be5fa53efa3b6f3d2cdd67144c07b6b4d8d66d5cb0d2096f", size = 59247 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/84/c4/370bc8ba66815a5832ece753a1009388bb07ea353d21c83f2d5a1a436f2c/pyannote.core-5.0.0-py3-none-any.whl", hash = "sha256:04920a6754492242ce0dc6017545595ab643870fe69a994f20c1a5f2da0544d0", size = 58475 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyannote-database"
|
||||
version = "5.1.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pandas" },
|
||||
{ name = "pyannote-core" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "typer" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/a9/ae/de36413d69a46be87cb612ebbcdc4eacbeebce3bc809124603e44a88fe26/pyannote.database-5.1.3.tar.gz", hash = "sha256:0eaf64c1cc506718de60d2d702f1359b1ae7ff252ee3e4799f1c5e378cd52c31", size = 49957 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/a1/64/92d51a3a05615ba58be8ba62a43f9f9f952d9f3646f7e4fb7826e5a3a24e/pyannote.database-5.1.3-py3-none-any.whl", hash = "sha256:37887844c7dfbcc075cb591eddc00aff45fae1ed905344e1f43e0090e63bd40a", size = 48127 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyannote-metrics"
|
||||
version = "3.2.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "docopt" },
|
||||
{ name = "matplotlib" },
|
||||
{ name = "numpy" },
|
||||
{ name = "pandas" },
|
||||
{ name = "pyannote-core" },
|
||||
{ name = "pyannote-database" },
|
||||
{ name = "scikit-learn" },
|
||||
{ name = "scipy" },
|
||||
{ name = "sympy" },
|
||||
{ name = "tabulate" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/39/2b/6c5f01d3c49aa1c160765946e23782ca6436ae8b9bc514b56319ff5f16e7/pyannote.metrics-3.2.1.tar.gz", hash = "sha256:08024255a3550e96a8e9da4f5f4af326886548480de891414567c8900920ee5c", size = 49086 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/6c/7d/035b370ab834b30e849fe9cd092b7bd7f321fcc4a2c56b84e96476b7ede5/pyannote.metrics-3.2.1-py3-none-any.whl", hash = "sha256:46be797cdade26c82773e5018659ae610145260069c7c5bf3d3c8a029ade8e22", size = 51386 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyannote-pipeline"
|
||||
version = "3.0.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "docopt" },
|
||||
{ name = "filelock" },
|
||||
{ name = "optuna" },
|
||||
{ name = "pyannote-core" },
|
||||
{ name = "pyannote-database" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "scikit-learn" },
|
||||
{ name = "tqdm" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/35/04/4bcfe0dd588577a188328b806f3a7213d8cead0ce5fe5784d01fd57df93f/pyannote.pipeline-3.0.1.tar.gz", hash = "sha256:021794e26a2cf5d8fb5bb1835951e71f5fac33eb14e23dfb7468e16b1b805151", size = 34486 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/83/42/1bf7cbf061ed05c580bfb63bffdd3f3474cbd5c02bee4fac518eea9e9d9e/pyannote.pipeline-3.0.1-py3-none-any.whl", hash = "sha256:819bde4c4dd514f740f2373dfec794832b9fc8e346a35e43a7681625ee187393", size = 31517 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyasn1"
|
||||
version = "0.6.1"
|
||||
@@ -2405,6 +2806,15 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/80/28/2659c02301b9500751f8d42f9a6632e1508aa5120de5e43042b8b30f8d5d/pyopenssl-25.1.0-py3-none-any.whl", hash = "sha256:2b11f239acc47ac2e5aca04fd7fa829800aeee22a2eb30d744572a157bd8a1ab", size = 56771 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pyparsing"
|
||||
version = "3.2.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/bb/22/f1129e69d94ffff626bdb5c835506b3a5b4f3d070f17ea295e12c2c6f60f/pyparsing-3.2.3.tar.gz", hash = "sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be", size = 1088608 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/05/e7/df2285f3d08fee213f2d041540fa4fc9ca6c2d44cf36d3a035bf2a8d2bcc/pyparsing-3.2.3-py3-none-any.whl", hash = "sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf", size = 111120 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pypdf"
|
||||
version = "5.8.0"
|
||||
@@ -2612,6 +3022,42 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/45/58/38b5afbc1a800eeea951b9285d3912613f2603bdf897a4ab0f4bd7f405fc/python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104", size = 24546 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytorch-lightning"
|
||||
version = "2.5.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "fsspec", extra = ["http"] },
|
||||
{ name = "lightning-utilities" },
|
||||
{ name = "packaging" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
|
||||
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
|
||||
{ name = "torchmetrics" },
|
||||
{ name = "tqdm" },
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/16/78/bce84aab9a5b3b2e9d087d4f1a6be9b481adbfaac4903bc9daaaf09d49a3/pytorch_lightning-2.5.5.tar.gz", hash = "sha256:d6fc8173d1d6e49abfd16855ea05d2eb2415e68593f33d43e59028ecb4e64087", size = 643703 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/04/f6/99a5c66478f469598dee25b0e29b302b5bddd4e03ed0da79608ac964056e/pytorch_lightning-2.5.5-py3-none-any.whl", hash = "sha256:0b533991df2353c0c6ea9ca10a7d0728b73631fd61f5a15511b19bee2aef8af0", size = 832431 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytorch-metric-learning"
|
||||
version = "2.9.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "numpy" },
|
||||
{ name = "scikit-learn" },
|
||||
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
|
||||
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
|
||||
{ name = "tqdm" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/9b/80/6e61b1a91debf4c1b47d441f9a9d7fe2aabcdd9575ed70b2811474eb95c3/pytorch-metric-learning-2.9.0.tar.gz", hash = "sha256:27a626caf5e2876a0fd666605a78cb67ef7597e25d7a68c18053dd503830701f", size = 84530 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/46/7d/73ef5052f57b7720cad00e16598db3592a5ef4826745ffca67a2f085d4dc/pytorch_metric_learning-2.9.0-py3-none-any.whl", hash = "sha256:d51646006dc87168f00cf954785db133a4c5aac81253877248737aa42ef6432a", size = 127801 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pytz"
|
||||
version = "2025.2"
|
||||
@@ -2788,6 +3234,7 @@ evaluation = [
|
||||
]
|
||||
local = [
|
||||
{ name = "faster-whisper" },
|
||||
{ name = "pyannote-audio" },
|
||||
]
|
||||
silero-vad = [
|
||||
{ name = "silero-vad" },
|
||||
@@ -2860,7 +3307,10 @@ evaluation = [
|
||||
{ name = "pydantic", specifier = ">=2.1.1" },
|
||||
{ name = "tqdm", specifier = ">=4.66.0" },
|
||||
]
|
||||
local = [{ name = "faster-whisper", specifier = ">=0.10.0" }]
|
||||
local = [
|
||||
{ name = "faster-whisper", specifier = ">=0.10.0" },
|
||||
{ name = "pyannote-audio", specifier = ">=3.3.2" },
|
||||
]
|
||||
silero-vad = [
|
||||
{ name = "silero-vad", specifier = ">=5.1.2" },
|
||||
{ name = "torch", specifier = ">=2.8.0", index = "https://download.pytorch.org/whl/cpu" },
|
||||
@@ -3064,6 +3514,44 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "ruamel-yaml"
|
||||
version = "0.18.15"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "ruamel-yaml-clib", marker = "platform_python_implementation == 'CPython'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/3e/db/f3950f5e5031b618aae9f423a39bf81a55c148aecd15a34527898e752cf4/ruamel.yaml-0.18.15.tar.gz", hash = "sha256:dbfca74b018c4c3fba0b9cc9ee33e53c371194a9000e694995e620490fd40700", size = 146865 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/e5/f2a0621f1781b76a38194acae72f01e37b1941470407345b6e8653ad7640/ruamel.yaml-0.18.15-py3-none-any.whl", hash = "sha256:148f6488d698b7a5eded5ea793a025308b25eca97208181b6a026037f391f701", size = 119702 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "ruamel-yaml-clib"
|
||||
version = "0.2.12"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/20/84/80203abff8ea4993a87d823a5f632e4d92831ef75d404c9fc78d0176d2b5/ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f", size = 225315 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/fb/8f/683c6ad562f558cbc4f7c029abcd9599148c51c54b5ef0f24f2638da9fbb/ruamel.yaml.clib-0.2.12-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:4a6679521a58256a90b0d89e03992c15144c5f3858f40d7c18886023d7943db6", size = 132224 },
|
||||
{ url = "https://files.pythonhosted.org/packages/3c/d2/b79b7d695e2f21da020bd44c782490578f300dd44f0a4c57a92575758a76/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:d84318609196d6bd6da0edfa25cedfbabd8dbde5140a0a23af29ad4b8f91fb1e", size = 641480 },
|
||||
{ url = "https://files.pythonhosted.org/packages/68/6e/264c50ce2a31473a9fdbf4fa66ca9b2b17c7455b31ef585462343818bd6c/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb43a269eb827806502c7c8efb7ae7e9e9d0573257a46e8e952f4d4caba4f31e", size = 739068 },
|
||||
{ url = "https://files.pythonhosted.org/packages/86/29/88c2567bc893c84d88b4c48027367c3562ae69121d568e8a3f3a8d363f4d/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:811ea1594b8a0fb466172c384267a4e5e367298af6b228931f273b111f17ef52", size = 703012 },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/46/879763c619b5470820f0cd6ca97d134771e502776bc2b844d2adb6e37753/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cf12567a7b565cbf65d438dec6cfbe2917d3c1bdddfce84a9930b7d35ea59642", size = 704352 },
|
||||
{ url = "https://files.pythonhosted.org/packages/02/80/ece7e6034256a4186bbe50dee28cd032d816974941a6abf6a9d65e4228a7/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7dd5adc8b930b12c8fc5b99e2d535a09889941aa0d0bd06f4749e9a9397c71d2", size = 737344 },
|
||||
{ url = "https://files.pythonhosted.org/packages/f0/ca/e4106ac7e80efbabdf4bf91d3d32fc424e41418458251712f5672eada9ce/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1492a6051dab8d912fc2adeef0e8c72216b24d57bd896ea607cb90bb0c4981d3", size = 714498 },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/58/b1f60a1d591b771298ffa0428237afb092c7f29ae23bad93420b1eb10703/ruamel.yaml.clib-0.2.12-cp311-cp311-win32.whl", hash = "sha256:bd0a08f0bab19093c54e18a14a10b4322e1eacc5217056f3c063bd2f59853ce4", size = 100205 },
|
||||
{ url = "https://files.pythonhosted.org/packages/b4/4f/b52f634c9548a9291a70dfce26ca7ebce388235c93588a1068028ea23fcc/ruamel.yaml.clib-0.2.12-cp311-cp311-win_amd64.whl", hash = "sha256:a274fb2cb086c7a3dea4322ec27f4cb5cc4b6298adb583ab0e211a4682f241eb", size = 118185 },
|
||||
{ url = "https://files.pythonhosted.org/packages/48/41/e7a405afbdc26af961678474a55373e1b323605a4f5e2ddd4a80ea80f628/ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632", size = 133433 },
|
||||
{ url = "https://files.pythonhosted.org/packages/ec/b0/b850385604334c2ce90e3ee1013bd911aedf058a934905863a6ea95e9eb4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:943f32bc9dedb3abff9879edc134901df92cfce2c3d5c9348f172f62eb2d771d", size = 647362 },
|
||||
{ url = "https://files.pythonhosted.org/packages/44/d0/3f68a86e006448fb6c005aee66565b9eb89014a70c491d70c08de597f8e4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95c3829bb364fdb8e0332c9931ecf57d9be3519241323c5274bd82f709cebc0c", size = 754118 },
|
||||
{ url = "https://files.pythonhosted.org/packages/52/a9/d39f3c5ada0a3bb2870d7db41901125dbe2434fa4f12ca8c5b83a42d7c53/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd", size = 706497 },
|
||||
{ url = "https://files.pythonhosted.org/packages/b0/fa/097e38135dadd9ac25aecf2a54be17ddf6e4c23e43d538492a90ab3d71c6/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31", size = 698042 },
|
||||
{ url = "https://files.pythonhosted.org/packages/ec/d5/a659ca6f503b9379b930f13bc6b130c9f176469b73b9834296822a83a132/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680", size = 745831 },
|
||||
{ url = "https://files.pythonhosted.org/packages/db/5d/36619b61ffa2429eeaefaab4f3374666adf36ad8ac6330d855848d7d36fd/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d", size = 715692 },
|
||||
{ url = "https://files.pythonhosted.org/packages/b1/82/85cb92f15a4231c89b95dfe08b09eb6adca929ef7df7e17ab59902b6f589/ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5", size = 98777 },
|
||||
{ url = "https://files.pythonhosted.org/packages/d7/8f/c3654f6f1ddb75daf3922c3d8fc6005b1ab56671ad56ffb874d908bfa668/ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4", size = 115523 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "s3transfer"
|
||||
version = "0.13.0"
|
||||
@@ -3098,6 +3586,68 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/69/e2/b011c38e5394c4c18fb5500778a55ec43ad6106126e74723ffaee246f56e/safetensors-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11", size = 308878 },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "scikit-learn"
|
||||
version = "1.7.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "joblib" },
|
||||
{ name = "numpy" },
|
||||
{ name = "scipy" },
|
||||
{ name = "threadpoolctl" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/41/84/5f4af978fff619706b8961accac84780a6d298d82a8873446f72edb4ead0/scikit_learn-1.7.1.tar.gz", hash = "sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802", size = 7190445 }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/b4/bd/a23177930abd81b96daffa30ef9c54ddbf544d3226b8788ce4c3ef1067b4/scikit_learn-1.7.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b", size = 9334838 },
|
||||
{ url = "https://files.pythonhosted.org/packages/8d/a1/d3a7628630a711e2ac0d1a482910da174b629f44e7dd8cfcd6924a4ef81a/scikit_learn-1.7.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518", size = 8651241 },
|
||||
{ url = "https://files.pythonhosted.org/packages/26/92/85ec172418f39474c1cd0221d611345d4f433fc4ee2fc68e01f524ccc4e4/scikit_learn-1.7.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8", size = 9718677 },
|
||||
{ url = "https://files.pythonhosted.org/packages/df/ce/abdb1dcbb1d2b66168ec43b23ee0cee356b4cc4100ddee3943934ebf1480/scikit_learn-1.7.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7", size = 9511189 },
|
||||
{ url = "https://files.pythonhosted.org/packages/b2/3b/47b5eaee01ef2b5a80ba3f7f6ecf79587cb458690857d4777bfd77371c6f/scikit_learn-1.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650", size = 8914794 },
|
||||
{ url = "https://files.pythonhosted.org/packages/cb/16/57f176585b35ed865f51b04117947fe20f130f78940c6477b6d66279c9c2/scikit_learn-1.7.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3cee419b49b5bbae8796ecd690f97aa412ef1674410c23fc3257c6b8b85b8087", size = 9260431 },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/4e/899317092f5efcab0e9bc929e3391341cec8fb0e816c4789686770024580/scikit_learn-1.7.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:2fd8b8d35817b0d9ebf0b576f7d5ffbbabdb55536b0655a8aaae629d7ffd2e1f", size = 8637191 },
{ url = "https://files.pythonhosted.org/packages/f3/1b/998312db6d361ded1dd56b457ada371a8d8d77ca2195a7d18fd8a1736f21/scikit_learn-1.7.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:588410fa19a96a69763202f1d6b7b91d5d7a5d73be36e189bc6396bfb355bd87", size = 9486346 },
{ url = "https://files.pythonhosted.org/packages/ad/09/a2aa0b4e644e5c4ede7006748f24e72863ba2ae71897fecfd832afea01b4/scikit_learn-1.7.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e3142f0abe1ad1d1c31a2ae987621e41f6b578144a911ff4ac94781a583adad7", size = 9290988 },
{ url = "https://files.pythonhosted.org/packages/15/fa/c61a787e35f05f17fc10523f567677ec4eeee5f95aa4798dbbbcd9625617/scikit_learn-1.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3ddd9092c1bd469acab337d87930067c87eac6bd544f8d5027430983f1e1ae88", size = 8735568 },
]

[[package]]
name = "scipy"
version = "1.16.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f5/4a/b927028464795439faec8eaf0b03b011005c487bb2d07409f28bf30879c4/scipy-1.16.1.tar.gz", hash = "sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3", size = 30580861 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/da/91/812adc6f74409b461e3a5fa97f4f74c769016919203138a3bf6fc24ba4c5/scipy-1.16.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030", size = 36552519 },
{ url = "https://files.pythonhosted.org/packages/47/18/8e355edcf3b71418d9e9f9acd2708cc3a6c27e8f98fde0ac34b8a0b45407/scipy-1.16.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7", size = 28638010 },
{ url = "https://files.pythonhosted.org/packages/d9/eb/e931853058607bdfbc11b86df19ae7a08686121c203483f62f1ecae5989c/scipy-1.16.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77", size = 20909790 },
{ url = "https://files.pythonhosted.org/packages/45/0c/be83a271d6e96750cd0be2e000f35ff18880a46f05ce8b5d3465dc0f7a2a/scipy-1.16.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe", size = 23513352 },
{ url = "https://files.pythonhosted.org/packages/7c/bf/fe6eb47e74f762f933cca962db7f2c7183acfdc4483bd1c3813cfe83e538/scipy-1.16.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b", size = 33534643 },
{ url = "https://files.pythonhosted.org/packages/bb/ba/63f402e74875486b87ec6506a4f93f6d8a0d94d10467280f3d9d7837ce3a/scipy-1.16.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7", size = 35376776 },
{ url = "https://files.pythonhosted.org/packages/c3/b4/04eb9d39ec26a1b939689102da23d505ea16cdae3dbb18ffc53d1f831044/scipy-1.16.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958", size = 35698906 },
{ url = "https://files.pythonhosted.org/packages/04/d6/bb5468da53321baeb001f6e4e0d9049eadd175a4a497709939128556e3ec/scipy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39", size = 38129275 },
{ url = "https://files.pythonhosted.org/packages/c4/94/994369978509f227cba7dfb9e623254d0d5559506fe994aef4bea3ed469c/scipy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596", size = 38644572 },
{ url = "https://files.pythonhosted.org/packages/f8/d9/ec4864f5896232133f51382b54a08de91a9d1af7a76dfa372894026dfee2/scipy-1.16.1-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:81b433bbeaf35728dad619afc002db9b189e45eebe2cd676effe1fb93fef2b9c", size = 36575194 },
{ url = "https://files.pythonhosted.org/packages/5c/6d/40e81ecfb688e9d25d34a847dca361982a6addf8e31f0957b1a54fbfa994/scipy-1.16.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:886cc81fdb4c6903a3bb0464047c25a6d1016fef77bb97949817d0c0d79f9e04", size = 28594590 },
{ url = "https://files.pythonhosted.org/packages/0e/37/9f65178edfcc629377ce9a64fc09baebea18c80a9e57ae09a52edf84880b/scipy-1.16.1-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:15240c3aac087a522b4eaedb09f0ad061753c5eebf1ea430859e5bf8640d5919", size = 20866458 },
{ url = "https://files.pythonhosted.org/packages/2c/7b/749a66766871ea4cb1d1ea10f27004db63023074c22abed51f22f09770e0/scipy-1.16.1-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:65f81a25805f3659b48126b5053d9e823d3215e4a63730b5e1671852a1705921", size = 23539318 },
{ url = "https://files.pythonhosted.org/packages/c4/db/8d4afec60eb833a666434d4541a3151eedbf2494ea6d4d468cbe877f00cd/scipy-1.16.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:6c62eea7f607f122069b9bad3f99489ddca1a5173bef8a0c75555d7488b6f725", size = 33292899 },
{ url = "https://files.pythonhosted.org/packages/51/1e/79023ca3bbb13a015d7d2757ecca3b81293c663694c35d6541b4dca53e98/scipy-1.16.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f965bbf3235b01c776115ab18f092a95aa74c271a52577bcb0563e85738fd618", size = 35162637 },
{ url = "https://files.pythonhosted.org/packages/b6/49/0648665f9c29fdaca4c679182eb972935b3b4f5ace41d323c32352f29816/scipy-1.16.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f006e323874ffd0b0b816d8c6a8e7f9a73d55ab3b8c3f72b752b226d0e3ac83d", size = 35490507 },
{ url = "https://files.pythonhosted.org/packages/62/8f/66cbb9d6bbb18d8c658f774904f42a92078707a7c71e5347e8bf2f52bb89/scipy-1.16.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e8fd15fc5085ab4cca74cb91fe0a4263b1f32e4420761ddae531ad60934c2119", size = 37923998 },
{ url = "https://files.pythonhosted.org/packages/14/c3/61f273ae550fbf1667675701112e380881905e28448c080b23b5a181df7c/scipy-1.16.1-cp312-cp312-win_amd64.whl", hash = "sha256:f7b8013c6c066609577d910d1a2a077021727af07b6fab0ee22c2f901f22352a", size = 38508060 },
]

[[package]]
name = "semver"
version = "3.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/d1/d3159231aec234a59dd7d601e9dd9fe96f3afff15efd33c1070019b26132/semver-3.0.4.tar.gz", hash = "sha256:afc7d8c584a5ed0a11033af086e8af226a9c0b206f313e0301f8dd7b6b589602", size = 269730 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/24/4d91e05817e92e3a61c8a21e08fd0f390f5301f1c448b137c57c4bc6e543/semver-3.0.4-py3-none-any.whl", hash = "sha256:9c824d87ba7f7ab4a1890799cec8596f15c1241cb473404ea1cb0c55e4b04746", size = 17912 },
]

[[package]]
name = "sentencepiece"
version = "0.2.0"
@@ -3201,6 +3751,25 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/32/46/9cb0e58b2deb7f82b84065f37f3bffeb12413f947f9388e4cac22c4621ce/sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0", size = 29575 },
]

[[package]]
name = "soundfile"
version = "0.13.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi" },
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/41/9b873a8c055582859b239be17902a85339bec6a30ad162f98c9b0288a2cc/soundfile-0.13.1.tar.gz", hash = "sha256:b2c68dab1e30297317080a5b43df57e302584c49e2942defdde0acccc53f0e5b", size = 46156 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/64/28/e2a36573ccbcf3d57c00626a21fe51989380636e821b341d36ccca0c1c3a/soundfile-0.13.1-py2.py3-none-any.whl", hash = "sha256:a23c717560da2cf4c7b5ae1142514e0fd82d6bbd9dfc93a50423447142f2c445", size = 25751 },
{ url = "https://files.pythonhosted.org/packages/ea/ab/73e97a5b3cc46bba7ff8650a1504348fa1863a6f9d57d7001c6b67c5f20e/soundfile-0.13.1-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:82dc664d19831933fe59adad199bf3945ad06d84bc111a5b4c0d3089a5b9ec33", size = 1142250 },
{ url = "https://files.pythonhosted.org/packages/a0/e5/58fd1a8d7b26fc113af244f966ee3aecf03cb9293cb935daaddc1e455e18/soundfile-0.13.1-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:743f12c12c4054921e15736c6be09ac26b3b3d603aef6fd69f9dde68748f2593", size = 1101406 },
{ url = "https://files.pythonhosted.org/packages/58/ae/c0e4a53d77cf6e9a04179535766b3321b0b9ced5f70522e4caf9329f0046/soundfile-0.13.1-py2.py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:9c9e855f5a4d06ce4213f31918653ab7de0c5a8d8107cd2427e44b42df547deb", size = 1235729 },
{ url = "https://files.pythonhosted.org/packages/57/5e/70bdd9579b35003a489fc850b5047beeda26328053ebadc1fb60f320f7db/soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl", hash = "sha256:03267c4e493315294834a0870f31dbb3b28a95561b80b134f0bd3cf2d5f0e618", size = 1313646 },
{ url = "https://files.pythonhosted.org/packages/fe/df/8c11dc4dfceda14e3003bb81a0d0edcaaf0796dd7b4f826ea3e532146bba/soundfile-0.13.1-py2.py3-none-win32.whl", hash = "sha256:c734564fab7c5ddf8e9be5bf70bab68042cd17e9c214c06e365e20d64f9a69d5", size = 899881 },
{ url = "https://files.pythonhosted.org/packages/14/e9/6b761de83277f2f02ded7e7ea6f07828ec78e4b229b80e4ca55dd205b9dc/soundfile-0.13.1-py2.py3-none-win_amd64.whl", hash = "sha256:1e70a05a0626524a69e9f0f4dd2ec174b4e9567f4d8b6c11d38b5c289be36ee9", size = 1019162 },
]

[[package]]
name = "soupsieve"
version = "2.7"
@@ -3210,6 +3779,29 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/9c/0e6afc12c269578be5c0c1c9f4b49a8d32770a080260c333ac04cc1c832d/soupsieve-2.7-py3-none-any.whl", hash = "sha256:6e60cc5c1ffaf1cebcc12e8188320b72071e922c2e897f737cadce79ad5d30c4", size = 36677 },
]

[[package]]
name = "speechbrain"
version = "1.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "huggingface-hub" },
{ name = "hyperpyyaml" },
{ name = "joblib" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "scipy" },
{ name = "sentencepiece" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ab/10/87e666544a4e0cec7cbdc09f26948994831ae0f8bbc58de3bf53b68285ff/speechbrain-1.0.3.tar.gz", hash = "sha256:fcab3c6e90012cecb1eed40ea235733b550137e73da6bfa2340ba191ec714052", size = 747735 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/58/13/e61f1085aebee17d5fc2df19fcc5177c10379be52578afbecdd615a831c9/speechbrain-1.0.3-py3-none-any.whl", hash = "sha256:9859d4c1b1fb3af3b85523c0c89f52e45a04f305622ed55f31aa32dd2fba19e9", size = 864091 },
]

[[package]]
name = "sqlalchemy"
version = "1.4.54"
@@ -3291,6 +3883,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a2/09/77d55d46fd61b4a135c444fc97158ef34a095e5681d0a6c10b75bf356191/sympy-1.14.0-py3-none-any.whl", hash = "sha256:e091cc3e99d2141a0ba2847328f5479b05d94a6635cb96148ccb3f34671bd8f5", size = 6299353 },
]

[[package]]
name = "tabulate"
version = "0.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252 },
]

[[package]]
name = "tenacity"
version = "9.1.2"
@@ -3300,6 +3901,29 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/30/643397144bfbfec6f6ef821f36f33e57d35946c44a2352d3c9f0ae847619/tenacity-9.1.2-py3-none-any.whl", hash = "sha256:f77bf36710d8b73a50b2dd155c97b870017ad21afe6ab300326b0371b3b05138", size = 28248 },
]

[[package]]
name = "tensorboardx"
version = "2.6.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "packaging" },
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2b/c5/d4cc6e293fb837aaf9f76dd7745476aeba8ef7ef5146c3b3f9ee375fe7a5/tensorboardx-2.6.4.tar.gz", hash = "sha256:b163ccb7798b31100b9f5fa4d6bc22dad362d7065c2f24b51e50731adde86828", size = 4769801 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/1d/b5d63f1a6b824282b57f7b581810d20b7a28ca951f2d5b59f1eb0782c12b/tensorboardx-2.6.4-py3-none-any.whl", hash = "sha256:5970cf3a1f0a6a6e8b180ccf46f3fe832b8a25a70b86e5a237048a7c0beb18e2", size = 87201 },
]

[[package]]
name = "threadpoolctl"
version = "3.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b7/4d/08c89e34946fce2aec4fbb45c9016efd5f4d7f24af8e5d93296e935631d8/threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e", size = 21274 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/d5/f9a850d79b0851d1d4ef6456097579a9005b31fea68726a4ae5f2d82ddd9/threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb", size = 18638 },
]

[[package]]
name = "tiktoken"
version = "0.9.0"
@@ -3440,6 +4064,40 @@ wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_arm64.whl", hash = "sha256:99fc421a5d234580e45957a7b02effbf3e1c884a5dd077afc85352c77bf41434" },
]

[[package]]
name = "torch-audiomentations"
version = "0.12.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "julius" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torch-pitch-shift" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
sdist = { url = "https://files.pythonhosted.org/packages/31/8d/2f8fd7e34c75f5ee8de4310c3bd3f22270acd44d1f809e2fe7c12fbf35f8/torch_audiomentations-0.12.0.tar.gz", hash = "sha256:b02d4c5eb86376986a53eb405cca5e34f370ea9284411237508e720c529f7888", size = 52094 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/21/9d/1ee04f49c15d2d632f6f7102061d7c07652858e6d91b58a091531034e84f/torch_audiomentations-0.12.0-py3-none-any.whl", hash = "sha256:1b80b91d2016ccf83979622cac8f702072a79b7dcc4c2bee40f00b26433a786b", size = 48506 },
]

[[package]]
name = "torch-pitch-shift"
version = "1.2.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
{ name = "primepy" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
sdist = { url = "https://files.pythonhosted.org/packages/79/a6/722a832bca75d5079f6731e005b3d0c2eec7c6c6863d030620952d143d57/torch_pitch_shift-1.2.5.tar.gz", hash = "sha256:6e1c7531f08d0f407a4c55e5ff8385a41355c5c5d27ab7fa08632e51defbd0ed", size = 4725 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/27/4c/96ac2a09efb56cc3c41fb3ce9b6f4d8c0604499f7481d4a13a7b03e21382/torch_pitch_shift-1.2.5-py3-none-any.whl", hash = "sha256:6f8500cbc13f1c98b11cde1805ce5084f82cdd195c285f34287541f168a7c6a7", size = 5005 },
]

[[package]]
name = "torchaudio"
version = "2.8.0"
@@ -3487,6 +4145,22 @@ wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torchaudio-2.8.0%2Bcpu-cp312-cp312-win_amd64.whl", hash = "sha256:9b302192b570657c1cc787a4d487ae4bbb7f2aab1c01b1fcc46757e7f86f391e" },
]

[[package]]
name = "torchmetrics"
version = "1.8.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "lightning-utilities" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/85/2e/48a887a59ecc4a10ce9e8b35b3e3c5cef29d902c4eac143378526e7485cb/torchmetrics-1.8.2.tar.gz", hash = "sha256:cf64a901036bf107f17a524009eea7781c9c5315d130713aeca5747a686fe7a5", size = 580679 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/02/21/aa0f434434c48490f91b65962b1ce863fdcce63febc166ca9fe9d706c2b6/torchmetrics-1.8.2-py3-none-any.whl", hash = "sha256:08382fd96b923e39e904c4d570f3d49e2cc71ccabd2a94e0f895d1f0dac86242", size = 983161 },
]

[[package]]
name = "tqdm"
version = "4.67.1"

@@ -11,7 +11,6 @@ import {
import { useRouter } from "next/navigation";
import { useTranscriptGet } from "../../../../lib/apiHooks";
import { parseNonEmptyString } from "../../../../lib/utils";
import { useWebSockets } from "../../useWebSockets";

type TranscriptProcessing = {
params: Promise<{
@@ -25,7 +24,6 @@ export default function TranscriptProcessing(details: TranscriptProcessing) {
const router = useRouter();

const transcript = useTranscriptGet(transcriptId);
useWebSockets(transcriptId);

useEffect(() => {
const status = transcript.data?.status;

@@ -23,16 +23,7 @@ const useWebRTC = (
let p: Peer;

try {
p = new Peer({
initiator: true,
stream: stream,
// Disable trickle ICE: single SDP exchange (offer + answer) with all candidates.
// Required for HTTP-based signaling; trickle needs WebSocket for candidate exchange.
trickle: false,
config: {
iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
},
});
p = new Peer({ initiator: true, stream: stream });
} catch (error) {
setError(error as Error, "Error creating WebRTC");
return;

@@ -431,7 +431,6 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
);
}
setStatus(message.data);
invalidateTranscript(queryClient, transcriptId as NonEmptyString);
if (message.data.value === "ended") {
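// "ended" is terminal for this transcript: close the socket here rather
// than waiting for the server to drop the connection.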
ws.close();
}

@@ -24,15 +24,24 @@ import { useAuth } from "../../lib/AuthProvider";
import { useConsentDialog } from "../../lib/consent";
import {
useRoomJoinMeeting,
useRoomJoinedMeeting,
useRoomLeaveMeeting,
useMeetingStartRecording,
leaveRoomPostUrl,
LeaveRoomBody,
} from "../../lib/apiHooks";
import { omit } from "remeda";
import {
assertExists,
assertExistsAndNonEmptyString,
NonEmptyString,
parseNonEmptyString,
} from "../../lib/utils";
import { assertMeetingId, DailyRecordingType } from "../../lib/types";
import {
assertMeetingId,
DailyRecordingType,
MeetingId,
} from "../../lib/types";
import { useUuidV5 } from "react-uuid-hook";

const CONSENT_BUTTON_ID = "recording-consent";
@@ -179,6 +188,58 @@ const useFrame = (
] as const;
};

const leaveDaily = () => {
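// getCallInstance() returns the page's singleton Daily call frame, if one
// exists; with optional chaining this is a safe no-op when no call is active.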
const frame = DailyIframe.getCallInstance();
frame?.leave();
};
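
// "Dirty" disconnects: tab close or hard navigation, where React effect
// cleanup may never run. The beforeunload handler below is the only
// reliable signal the server gets in that case.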
const useDirtyDisconnects = (
meetingId: NonEmptyString,
roomName: NonEmptyString,
) => {
useEffect(() => {
if (!meetingId || !roomName) return;

const handleBeforeUnload = () => {
leaveDaily();
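// sendBeacon, not fetch: the browser queues the request and delivers it
// even after the page unloads, which an async fetch started from a
// beforeunload handler cannot guarantee.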
navigator.sendBeacon(
leaveRoomPostUrl(
{
room_name: roomName,
meeting_id: meetingId,
},
{
delay_seconds: 5,
},
),
undefined satisfies LeaveRoomBody,
);
};
window.addEventListener("beforeunload", handleBeforeUnload);
return () => window.removeEventListener("beforeunload", handleBeforeUnload);
}, [meetingId, roomName]);
};

const useDisconnects = (
meetingId: NonEmptyString,
roomName: NonEmptyString,
leaveMutation: ReturnType<typeof useRoomLeaveMeeting>,
) => {
useDirtyDisconnects(meetingId, roomName);

useEffect(() => {
return () => {
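// Clean unmount (in-app navigation): leave the call and notify the
// server, with delay_seconds giving Daily time to register the
// disconnect before the presence poll runs.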
leaveDaily();
leaveMutation.mutate({
params: {
path: { meeting_id: meetingId, room_name: roomName },
query: { delay_seconds: 5 },
},
});
};
}, [meetingId, roomName]);
};

export default function DailyRoom({ meeting, room }: DailyRoomProps) {
const router = useRouter();
const params = useParams();
@@ -186,6 +247,8 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
const authLastUserId = auth.lastUserId;
const [container, setContainer] = useState<HTMLDivElement | null>(null);
const joinMutation = useRoomJoinMeeting();
const joinedMutation = useRoomJoinedMeeting();
const leaveMutation = useRoomLeaveMeeting();
const startRecordingMutation = useMeetingStartRecording();
const [joinedMeeting, setJoinedMeeting] = useState<Meeting | null>(null);

@@ -195,7 +258,9 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
useUuidV5(meeting.id, RAW_TRACKS_NAMESPACE)[0],
);

const roomName = params?.roomName as string;
if (typeof params.roomName === "object")
throw new Error(`Invalid room name in params. array? ${params.roomName}`);
const roomName = assertExistsAndNonEmptyString(params.roomName);

const {
showConsentModal,
@@ -237,6 +302,8 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
router.push("/browse");
}, [router]);

useDisconnects(meeting.id as MeetingId, roomName, leaveMutation);

const handleCustomButtonClick = useCallback(
(ev: DailyEventObjectCustomButtonClick) => {
if (ev.button_id === CONSENT_BUTTON_ID) {
@@ -249,6 +316,15 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
);

const handleFrameJoinMeeting = useCallback(() => {
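// Fires once the Daily iframe reports the user actually joined; the
// /joined endpoint uses this to trigger a presence poll on the server.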
joinedMutation.mutate({
params: {
path: {
room_name: roomName,
meeting_id: meeting.id,
},
},
});

if (meeting.recording_type === "cloud") {
console.log("Starting dual recording via REST API", {
cloudInstanceId,
@@ -308,8 +384,10 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
startRecordingWithRetry("raw-tracks", rawTracksInstanceId);
}
}, [
meeting.recording_type,
joinedMutation,
roomName,
meeting.id,
meeting.recording_type,
startRecordingMutation,
cloudInstanceId,
rawTracksInstanceId,

@@ -1,12 +1,13 @@
"use client";

import { $api } from "./apiClient";
import { $api, API_URL } from "./apiClient";
import { useError } from "../(errors)/errorContext";
import { QueryClient, useQueryClient } from "@tanstack/react-query";
import type { components } from "../reflector-api";
import type { components, operations } from "../reflector-api";
import { useAuth } from "./AuthProvider";
import { MeetingId } from "./types";
import { NonEmptyString } from "./utils";
import { createFinalURL, createQuerySerializer } from "openapi-fetch";

/*
* XXX error types returned from the hooks are not always correct; declared types are ValidationError but real type could be string or any other
@@ -807,6 +808,44 @@ export function useRoomJoinMeeting() {
);
}

export const LEAVE_ROOM_POST_URL_TEMPLATE =
"/v1/rooms/{room_name}/meetings/{meeting_id}/leave" as const;
export const leaveRoomPostUrl = (
path: operations["v1_rooms_leave_meeting"]["parameters"]["path"],
query?: operations["v1_rooms_leave_meeting"]["parameters"]["query"],
): string =>
createFinalURL(LEAVE_ROOM_POST_URL_TEMPLATE, {
baseUrl: API_URL,
params: { path, query },
querySerializer: createQuerySerializer(),
});

export type LeaveRoomBody = operations["v1_rooms_leave_meeting"]["requestBody"];

export function useRoomLeaveMeeting() {
return $api.useMutation("post", LEAVE_ROOM_POST_URL_TEMPLATE);
}

export const JOINED_ROOM_POST_URL_TEMPLATE =
"/v1/rooms/{room_name}/meetings/{meeting_id}/joined" as const;

export const joinedRoomPostUrl = (
params: operations["v1_rooms_joined_meeting"]["parameters"]["path"],
): string =>
createFinalURL(JOINED_ROOM_POST_URL_TEMPLATE, {
baseUrl: API_URL,
params: { path: params },
querySerializer: () => "",
});

export type JoinedRoomBody =
operations["v1_rooms_joined_meeting"]["requestBody"];

export function useRoomJoinedMeeting() {
return $api.useMutation("post", JOINED_ROOM_POST_URL_TEMPLATE);
}

export function useRoomIcsSync() {
const { setError } = useError();

108
www/app/reflector-api.d.ts
vendored
@@ -171,6 +171,48 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/rooms/{room_name}/meetings/{meeting_id}/joined": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Rooms Joined Meeting
* @description Trigger presence poll (ideally when user actually joins meeting in Daily iframe)
*/
post: operations["v1_rooms_joined_meeting"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/rooms/{room_name}/meetings/{meeting_id}/leave": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Rooms Leave Meeting
* @description Trigger presence recheck when user leaves meeting (e.g., tab close/navigation).
*
* Queues presence poll with optional delay to allow Daily.co to detect disconnect.
*/
post: operations["v1_rooms_leave_meeting"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/rooms/{room_id}/webhook/test": {
parameters: {
query?: never;
@@ -2435,6 +2477,72 @@ export interface operations {
};
};
};
v1_rooms_joined_meeting: {
parameters: {
query?: never;
header?: never;
path: {
room_name: string;
meeting_id: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": unknown;
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_rooms_leave_meeting: {
parameters: {
query?: {
delay_seconds?: number;
};
header?: never;
path: {
room_name: string;
meeting_id: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": unknown;
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_rooms_test_webhook: {
parameters: {
query?: never;