Compare commits


105 Commits

Author SHA1 Message Date
dabf7251db chore(main): release 0.22.2 (#756) 2025-12-01 23:39:32 -05:00
Igor Monadical
b51b7aa917 fix: Skip mixdown for multitrack (#760)
* multitrack mixdown optimisation

* skip mixdown for multitrack

* skip mixdown for multitrack

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-01 23:35:12 -05:00
Igor Monadical
a8983b4e7e daily auth hotfix (#757)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-28 14:52:59 -05:00
Igor Monadical
fe47c46489 fix: daily auto refresh fix (#755)
* daily auto refresh fix

* Update www/app/lib/AuthProvider.tsx

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* Update www/app/[roomName]/components/DailyRoom.tsx

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* fix bot lint

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-11-27 18:31:03 -05:00
a2bb6a27d6 chore(main): release 0.22.1 (#750) 2025-11-27 16:55:08 +01:00
7f0b728991 fix: participants update from daily (#749)
* Fix participants update from daily

* Use track keys from params
2025-11-27 16:53:26 +01:00
692895c859 chore(main): release 0.22.0 (#748) 2025-11-26 16:53:27 -05:00
Igor Monadical
d63040e2fd feat: Multitrack segmentation (#747)
* segmentation multitrack (no-mistakes)

* segmentation multitrack (no-mistakes)

* self review

* self review

* recording poll daily doc

* filter cam_audio tracks to remove screensharing from daily processing

* pr review

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-26 16:21:32 -05:00
8d696aa775 chore(main): release 0.21.0 (#746) 2025-11-26 19:12:02 +01:00
f6ca07505f feat: add transcript format parameter to GET endpoint (#709)
* feat: add transcript format parameter to GET endpoint

Add transcript_format query parameter to /v1/transcripts/{id} endpoint
with support for multiple output formats using discriminated unions.

Formats supported:
- text: Plain speaker dialogue (default)
- text-timestamped: Dialogue with [MM:SS] timestamps
- webvtt-named: WebVTT subtitles with participant names
- json: Structured segments with full metadata

Response models use Pydantic discriminated unions with transcript_format
as discriminator field. POST/PATCH endpoints return GetTranscriptWithParticipants
for minimal responses. GET endpoint returns format-specific models.
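
A rough sketch of what such discriminated-union response models can look like; the class and field names here are illustrative, not the PR's actual ones:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field


class TextTranscript(BaseModel):
    transcript_format: Literal["text"] = "text"
    text: str  # plain speaker dialogue


class TextTimestampedTranscript(BaseModel):
    transcript_format: Literal["text-timestamped"] = "text-timestamped"
    text: str  # dialogue prefixed with [MM:SS] timestamps


class WebVTTNamedTranscript(BaseModel):
    transcript_format: Literal["webvtt-named"] = "webvtt-named"
    webvtt: str  # WebVTT subtitles with participant names


class JSONTranscript(BaseModel):
    transcript_format: Literal["json"] = "json"
    segments: list[dict]  # structured segments with full metadata


# Pydantic selects the concrete class from the transcript_format field,
# so a FastAPI endpoint can declare this union as its response model.
TranscriptResponse = Annotated[
    Union[
        TextTranscript,
        TextTimestampedTranscript,
        WebVTTNamedTranscript,
        JSONTranscript,
    ],
    Field(discriminator="transcript_format"),
]
```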

* Copy transcript format

* Regenerate types

* Fix transcript formats

* Don't throw inside try

* Remove any type

* Toast share copy errors

* transcript_format exhaustiveness and python idiomatic assert_never

* format_timestamp_mmss clear type definition

* Rename seconds_to_timestamp

* Test transcript format with overlapping speakers

* exact match for vtt multispeaker test
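
The exhaustiveness commit above suggests a pattern like this sketch; the format names come from the PR description, while the function and its return values are hypothetical:

```python
from typing import Literal, assert_never

TranscriptFormat = Literal["text", "text-timestamped", "webvtt-named", "json"]


def content_type(fmt: TranscriptFormat) -> str:
    # adding a new format literal without handling it here becomes a type error
    match fmt:
        case "text" | "text-timestamped":
            return "text/plain"
        case "webvtt-named":
            return "text/vtt"
        case "json":
            return "application/json"
        case _:
            assert_never(fmt)
```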

---------

Co-authored-by: Sergey Mankovsky <sergey@monadical.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-26 18:51:14 +01:00
Igor Monadical
3aef926203 room creation hotfix (#744)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 22:42:09 -05:00
Igor Monadical
0b2c82227d is_owner pass for dailyco (#745)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 22:41:54 -05:00
Igor Monadical
689c8075cc transcription reprocess doc (#743)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 17:05:46 -05:00
201671368a chore(main): release 0.20.0 (#740) 2025-11-25 16:32:49 -05:00
Igor Monadical
86d5e26224 feat: transcript restart script (#742)
* transcript restart script

* fix tests?

* remove useless comment

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 16:28:43 -05:00
9bec39808f feat: link transcript participants (#737)
* Sync authentik users

* Migrate user_id from uid to id

* Fix auth user id

* Fix ci migration test

* Fix meeting token creation

* Move user id migration to a script

* Add user on first login

* Fix migration chain

* Rename uid column to authentik_uid

* Fix broken ws test
2025-11-25 19:13:19 +01:00
86ac23868b chore(main): release 0.19.0 (#727) 2025-11-25 12:02:33 -05:00
Igor Monadical
c442a62787 fix: default platform fix (#736)
* default platform fix

* default platform fix

* default platform fix

* Update server/reflector/db/rooms.py

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* default platform fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-11-24 23:10:34 -05:00
Igor Monadical
8e438ca285 feat: dailyco poll (#730)
* dailyco api module (no-mistakes)

* daily co library self-review

* uncurse

* self-review: daily resource leak, uniform types, enable_recording bomb, daily custom error, video_platforms/daily typing, daily timestamp dry

* dailyco docs parser

* phase 1-2 of daily poll

* dailyco poll (no-mistakes)

* poll docs

* fix tests

* forgotten utils file

* remove generated daily docs

* pr comments

* dailyco poll pr review and self-review

* daily recording poll api fix

* daily recording poll api fix

* review

* review

* fix tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-24 22:24:03 -05:00
Igor Monadical
11731c9d38 feat: multitrack cli (#735)
* multitrack cli prd

* prd/todo (no-mistakes)

* multitrack cli (no-mistakes)

* multitrack cli (no-mistakes)

* multitrack cli (no-mistakes)

* multitrack cli (no-mistakes)

* remove the most worthless multitrack tests

* useless comments away

* useless comments away

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-24 10:35:06 -05:00
Igor Monadical
4287f8b8ae feat: dailyco api module (#725)
* dailyco api module (no-mistakes)

* daily co library self-review

* uncurse

* self-review: daily resource leak, uniform types, enable_recording bomb, daily custom error, video_platforms/daily typing, daily timestamp dry

* dailyco docs parser

* remove generated daily docs

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-21 10:24:04 -05:00
3e47c2c057 fix: start raw tracks recording (#729)
* Start raw tracks recording

* Bring back recording properties
2025-11-18 21:04:32 +01:00
Igor Monadical
616092a9bb keep only debug log for tracks with no words (#724)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-18 10:40:46 -05:00
18ed713369 fix: parakeet vad not getting the end timestamp (#728) 2025-11-18 09:15:29 -06:00
2801ab3643 chore(main): release 0.18.0 (#722) 2025-11-14 16:10:26 -05:00
Igor Monadical
b20cad76e6 feat: daily QOL: participants dictionary (#721)
* daily QOL: participants dictionary

* meeting deactivation fix

* meeting deactivation fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-14 14:31:52 -05:00
28a7258e45 fix: add processing page to file upload and reprocessing (#650) 2025-11-14 14:28:39 +01:00
a9a4f32324 fix: copy transcript (#674)
* Copy transcript

* Fix share copy transcript

* Move copy button above transcript
2025-11-14 13:36:25 +01:00
Igor Monadical
857e035562 fix whereby reprocess logic branch (#720)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-13 11:35:29 -05:00
34a3f5618c chore(main): release 0.17.0 (#717) 2025-11-12 21:25:59 -05:00
Igor Monadical
1473fd82dc feat: daily.co support as alternative to whereby (#691)
* llm instructions

* vibe dailyco

* vibe dailyco

* doc update (vibe)

* dont show recording ui on call

* stub processor (vibe)

* stub processor (vibe) self-review

* stub processor (vibe) self-review

* chore(main): release 0.14.0 (#670)

* Add multitrack pipeline

* Mixdown audio tracks

* Mixdown with pyav filter graph

* Trigger multitrack processing for daily recordings

* apply platform from envs in priority: non-dry

* Use explicit track keys for processing

* Align tracks of a multitrack recording

* Generate waveforms for the mixed audio

* Emit multitrack pipeline events

* Fix multitrack pipeline track alignment

* dailyco docs

* Enable multitrack reprocessing

* modal temp files uniform names, cleanup. remove llm temporary docs

* docs cleanup

* dont proceed with raw recordings if any of the downloads fail

* dry transcription pipelines

* remove is_multitrack

* comments

* explicit dailyco room name

* docs

* remove stub data/method

* frontend daily/whereby code self-review (no-mistakes)

* frontend daily/whereby code self-review (no-mistakes)

* frontend daily/whereby code self-review (no-mistakes)

* consent cleanup for multitrack (no-mistakes)

* llm fun

* remove extra comments

* fix tests

* merge migrations

* Store participant names

* Get participants by meeting session id

* pop back main branch migration

* s3 paddington (no-mistakes)

* comment

* pr comments

* pr comments

* pr comments

* platform / meeting cleanup

* Use participant names in summary generation

* platform assignment to meeting at controller level

* pr comment

* room platform properly default none

* room platform properly default none

* restore migration lost

* streaming WIP

* extract storage / use common storage / proper env vars for storage

* fix mocks tests

* remove fall back

* streaming for multifile

* central storage abstraction (no-mistakes)

* remove dead code / vars

* Set participant user id for authenticated users

* whereby recording name parsing fix

* whereby recording name parsing fix

* more file stream

* storage dry + tests

* remove homemade boto3 streaming and use proper boto

* update migration guide

* webhook creation script - print uuid

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>
Co-authored-by: Sergey Mankovsky <sergey@monadical.com>
2025-11-12 21:21:16 -05:00
372202b0e1 feat: add API key management UI (#716)
* feat: add API key management UI

- Created settings page for users to create, view, and delete API keys
- Added Settings link to app navigation header
- Fixed delete operation return value handling in backend to properly handle asyncpg's None response

* feat: replace browser confirm with dialog for API key deletion

- Added Chakra UI Dialog component for better UX when confirming API key deletion
- Implemented proper focus management with cancelRef for accessibility
- Replaced native browser confirm() with controlled dialog state

* style: format API keys page with consistent line breaks

* feat: auto-select API key text for easier copying

- Added automatic text selection after API key creation to streamline the copy workflow
- Applied className to Code component for DOM targeting

* feat: improve API keys page layout and responsiveness

- Reduced max width from 1200px to 800px for better readability
- Added explicit width constraint to ensure consistent sizing across viewports

* refactor: remove redundant comments from API keys page
2025-11-10 18:25:08 -05:00
Igor Monadical
d20aac66c4 ui search pagination 2+page re-search fix (#714)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-10 14:18:41 -05:00
dc4b737daa chore(main): release 0.16.0 (#711) 2025-10-24 16:18:49 -06:00
Igor Monadical
0baff7abf7 transcript ui copy button placement (#712)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-24 16:52:02 -04:00
Igor Monadical
962c40e2b6 feat: search date filter (#710)
* search date filter

* search date filter

* search date filter

* search date filter

* pr comment

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-23 20:16:43 -04:00
Igor Monadical
3c4b9f2103 chore: error reporting and naming (#708)
* chore: error reporting and naming

* chore: error reporting and naming

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-22 13:45:08 -04:00
Igor Monadical
c6c035aacf removal of email-verified from /me (#707)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-21 14:49:33 -04:00
c086b91445 chore(main): release 0.15.0 (#706) 2025-10-21 08:30:22 -06:00
Igor Monadical
9a258abc02 feat: api tokens (#705)
* feat: api tokens (vibe)

* self-review

* remove token terminology + pr comments (vibe)

* return email_verified

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-20 12:55:25 -04:00
af86c47f1d chore(main): release 0.14.0 (#670) 2025-10-08 14:57:31 -06:00
5f6910e513 feat: Add calendar event data to transcript webhook payload (#689)
* feat: add calendar event data to transcript webhook payload and implement get_by_id method

* Update server/reflector/worker/webhook.py

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* Update server/reflector/worker/webhook.py

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* style: format conditional time fields with line breaks for better readability

* docs: add calendar event fields to transcript.completed webhook payload schema

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-10-08 11:11:57 -05:00
9a71af145e fix: update transcript list on reprocess (#676)
* Update transcript list on reprocess

* Fix transcript create

* Fix multiple sockets issue

* Pass token in sec websocket protocol

* userEvent parse example

* transcript list invalidation non-abstraction

* Emit only relevant events to the user room

* Add ws close code const

* Refactor user websocket endpoint

* Refactor user events provider

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-07 19:11:30 +02:00
eef6dc3903 fix: upgrade nemo toolkit (#678) 2025-10-07 16:45:02 +02:00
Igor Monadical
1dee255fed parakeet endpoint doc (#679)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-10-07 10:41:01 -04:00
5d98754305 fix: security review (#656)
* Add security review doc

* Add tests to reproduce security issues

* Fix security issues

* Fix tests

* Set auth backend for tests

* Fix ics api tests

* Fix transcript mutate check

* Update frontend env var names

* Remove permissions doc
2025-09-29 23:07:49 +02:00
Igor Monadical
969bd84fcc feat: container build for www / github (#672)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-24 12:27:45 -04:00
Igor Monadical
36608849ec fix: restore feature boolean logic (#671)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-24 11:57:49 -04:00
Igor Monadical
5bf64b5a41 feat: docker-compose for production frontend (#664)
* docker-compose for production frontend

* fix: Remove external Redis port mapping for Coolify compatibility

Redis should only be accessible within the internal Docker network in Coolify deployments to avoid port conflicts with other applications.

* fix: Remove external port mapping for web service in Coolify

Coolify handles port exposure through its proxy (Traefik), so services should not expose ports directly in the docker-compose file.

* server side client envs

* missing vars

* nextjs experimental

* fix claude 'fix'

* remove build env vars compose

* docker

* remove ports for coolify

* review

* cleanup

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-24 11:15:27 -04:00
0aaa42528a chore(main): release 0.13.1 (#668) 2025-09-22 16:47:44 -06:00
565a62900f fix: TypeError on not all arguments converted during string formatting in logger (#667) 2025-09-22 16:45:28 -06:00
Igor Monadical
27016e6051 minimum release age for npm (#665)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-22 13:38:23 -04:00
6ddfee0b4e chore(main): release 0.13.0 (#661) 2025-09-21 20:50:47 -06:00
Igor Monadical
47716f6e5d feat: room form edit with enter (#662)
* room form edit with enter

* mobile form enter do nothing

* restore overwritten older change

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-19 15:14:40 -04:00
0abcebfc94 fix: invalid cleanup call (#660) 2025-09-18 10:02:30 -06:00
Igor Monadical
2b723da08b rooms-page-calendar-ics-room-name-fix (#659)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-17 20:02:17 -04:00
6566e04300 chore(main): release 0.12.1 (#658) 2025-09-17 17:17:22 -06:00
870e860517 fix: production blocked by existing meetings with null room_id (#657) 2025-09-17 17:09:54 -06:00
396a95d5ce chore(main): release 0.12.0 (#654) 2025-09-17 16:44:11 -06:00
6f680b5795 feat: calendar integration (#608)
* feat: calendar integration

* feat: add ICS calendar API endpoints for room configuration and sync

* feat: add Celery background tasks for ICS sync

* feat: implement Phase 2 - Multiple active meetings per room with grace period

This commit adds support for multiple concurrent meetings per room, implementing
grace period logic and improved meeting lifecycle management for calendar integration.

## Database Changes
- Remove unique constraint preventing multiple active meetings per room
- Add last_participant_left_at field to track when meeting becomes empty
- Add grace_period_minutes field (default: 15) for configurable grace period

## Meeting Controller Enhancements
- Add get_all_active_for_room() to retrieve all active meetings for a room
- Add get_active_by_calendar_event() to find meetings by calendar event ID
- Maintain backward compatibility with existing get_active() method

## New API Endpoints
- GET /rooms/{room_name}/meetings/active - List all active meetings
- POST /rooms/{room_name}/meetings/{meeting_id}/join - Join specific meeting

## Meeting Lifecycle Improvements
- 15-minute grace period after last participant leaves
- Automatic reactivation when participant rejoins during grace period
- Force close calendar meetings 30 minutes after scheduled end time
- Update process_meetings task to handle multiple active meetings

## Whereby Integration
- Clear grace period when participants join via webhook events
- Track participant count for grace period management

## Testing
- Add comprehensive tests for multiple active meetings
- Test grace period behavior and participant rejoin scenarios
- Test calendar meeting force closure logic
- All 5 new tests passing

This enables proper calendar integration with overlapping meetings while
preventing accidental meeting closures through the grace period mechanism.

* feat: implement frontend for calendar integration (Phase 3 & 4)

- Created MeetingSelection component for choosing between multiple active meetings
- Shows both active meetings and upcoming calendar events (30 min ahead)
- Displays meeting metadata with privacy controls (owner-only details)
- Supports creation of unscheduled meetings alongside calendar meetings

- Added waiting page for users joining before scheduled start time
- Shows countdown timer until meeting begins
- Auto-transitions to meeting when calendar event becomes active
- Handles early joining with proper routing

- Created collapsible info panel showing meeting details
- Displays calendar metadata (title, description, attendees)
- Shows participant count and duration
- Privacy-aware: sensitive info only visible to room owners

- Integrated ICS settings into room configuration dialog
- Test connection functionality with immediate feedback
- Manual sync trigger with detailed results
- Shows last sync time and ETag for monitoring
- Configurable sync intervals (1 min to 1 hour)

- New /room/{roomName} route for meeting selection
- Waiting room at /room/{roomName}/wait?eventId={id}
- Classic room page at /{roomName} with meeting info
- Uses sessionStorage to pass selected meeting between pages

- Added new endpoints for active/upcoming meetings
- Regenerated TypeScript client with latest OpenAPI spec
- Proper error handling and loading states
- Auto-refresh every 30 seconds for live updates

- Color-coded badges for meeting status
- Attendee status indicators (accepted/declined/tentative)
- Responsive design with Chakra UI components
- Clear visual hierarchy between active and upcoming meetings
- Smart truncation for long attendee lists

This completes the frontend implementation for calendar integration,
enabling users to seamlessly join scheduled meetings from their
calendar applications.

* WIP: Migrate calendar integration frontend to React Query

- Migrate all calendar components from useApi to React Query hooks
- Fix Chakra UI v3 compatibility issues (Card, Progress, spacing props, leftIcon)
- Update backend Meeting model to include calendar fields
- Replace imperative API calls with declarative React Query patterns
- Remove old OpenAPI generated files that conflict with new structure

* fix: alembic migrations

* feat: add calendar migration

* feat: update ics, first version working

* feat: implement tabbed interface for room edit dialog

- Add General, Calendar, and Share tabs to organize room settings
- Move ICS settings to dedicated Calendar tab
- Move Zulip configuration to Share tab
- Keep basic room settings and webhooks in General tab
- Remove redundant migration file
- Fix Chakra UI v3 compatibility issues in calendar components

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: infinite loop

* feat: improve ICS calendar sync UX and fix room URL matching

- Replace "Test Connection" button with "Force Sync" button (Edit Room only)
- Show detailed sync results: total events downloaded vs room matches
- Remove emoticons and auto-hide timeout for cleaner UX
- Fix room URL matching to use UI_BASE_URL instead of BASE_URL
- Replace FaSync icon with LuRefreshCw for consistency
- Clear sync results when dialog closes or Force Sync pressed
- Update tests to reflect UI_BASE_URL change and exact URL matching

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: reorganize room edit dialog and fix Force Sync button

- Move WebHook configuration from General to dedicated WebHook tab
- Add WebHook tab after Share tab in room edit dialog
- Fix Force Sync button not appearing by adding missing isEditing prop
- Fix indentation issues in MeetingSelection component

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: complete calendar integration with UI improvements and code cleanup

Calendar Integration Tasks:
- Update upcoming meetings window from 30 to 120 minutes
- Include currently happening events in upcoming meetings API
- Create shared time utility functions (formatDateTime, formatCountdown, formatStartedAgo)
- Improve ongoing meetings UI logic with proper time detection
- Fix backend code organization and remove excessive documentation

UI/UX Improvements:
- Restructure room page layout using MinimalHeader pattern
- Remove borders from header and footer elements
- Change button text from "Leave Meeting" to "Leave Room"
- Remove "Back to Reflector" footer for cleaner design
- Extract WaitPageClient component for better separation

Backend Changes:
- calendar_events.py: Fix import organization and extend timing window
- rooms.py: Update API default from 30 to 120 minutes
- Enhanced test coverage for ongoing meeting scenarios

Frontend Changes:
- MinimalHeader: Add onLeave prop for custom navigation
- MeetingSelection: Complete layout restructure with shared utilities
- timeUtils: New shared utility file for consistent time formatting

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: remove wait page and simplify Join button with 5-minute disable logic

- Remove entire wait page directory and associated files
- Update handleJoinUpcoming to create unscheduled meeting directly
- Simplify Join button to single state:
  - Always shows "Join" text
  - Blue when meeting can be joined (ongoing or within 5 minutes)
  - Gray/disabled when more than 5 minutes away
- Remove confusing "Join Now", "Join Early" text variations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: improve calendar integration and meeting UI

- Refactor ICS sync tasks to use @asynctask decorator for cleaner async handling
- Extract meeting creation logic into reusable function
- Improve meeting selection UI with distinct current/upcoming sections
- Add early join functionality for upcoming meetings within 5-minute window
- Simplify non-ICS room workflow with direct Whereby embed
- Fix import paths and component organization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: restore original recording consent functionality

- Remove custom ConsentDialogButton and WherebyEmbed components
- Merge RoomClient logic back into main room page
- Restore original consent UI: blue button with toast modal
- Maintain calendar integration features for ICS-enabled rooms
- Add consent-handler.md documentation of original functionality
- Preserve focus management and accessibility features

* fix: redirect Join Now button to local meeting page

- Change handleJoinDirect to use onMeetingSelect instead of opening external URL
- Join Now button now navigates to /{roomName}/{meetingId} instead of whereby.com
- Maintains proper routing within the application

* feat: remove restrictive message for non-owners in private rooms

- Remove confusing message about room owner permissions
- Cleaner UI for all users regardless of ownership status
- Users will only see available meetings and join options

* feat: improve meeting selection UI for better readability

- Limit page content to max 800px width for better 4K display readability
- Remove LIVE tag badge for cleaner interface
- Remove shadow from main live meeting box
- Remove blue border and hover effects for minimal design
- Change background to neutral gray for less visual noise

* feat: add room by name endpoint for non-authenticated access

- Add GET /rooms/name/{room_name} backend endpoint
- Endpoint supports non-authenticated access for public rooms
- Returns RoomDetails with webhook fields hidden for non-owners
- Update useRoomGetByName hook to use new direct endpoint
- Remove authentication requirement from frontend hook
- Regenerate API client types

Fixes: Non-authenticated users can now access room lobbies

* feat: add friendly message when no meetings are ongoing

- Show centered message with calendar icon when no meetings are active
- Message text: 'No meetings right now' with helpful description
- Contextual text for owners/shared rooms mentioning quick meeting option
- Consistent gray styling matching the rest of the interface
- Only displays when both currentMeetings and upcomingMeetings are empty

* style: center no meetings message and remove background

- Change from Box to Flex with flex=1 for vertical centering
- Remove gray background, border radius, and padding
- Message now appears cleanly centered in available space
- Maintains horizontal and vertical centering

* feat: move Create Meeting button to header

- Remove 'Start a Quick Meeting' box from main content area
- Add showCreateButton and onCreateMeeting props to MinimalHeader
- Create Meeting button now appears in header left of Leave Room
- Only shows for room owners or shared room users
- Update no meetings message to remove reference to quick meeting below
- Cleaner, more accessible UI with actions in the header

* style: change room title and no meetings text to pure black

- Update room title in MinimalHeader from gray.700 to black
- Update 'No meetings right now' text from gray.700 to black
- Improves visual hierarchy and readability
- Consistent with other pages' styling

* style: linting

* fix: remove plan files

* fix: alembic migration with named foreign keys

* feat: add SyncStatus enum and refactor ICS sync to use rooms controller

- Add SyncStatus enum to replace string literals in ICS sync status
- Replace direct SQL queries in worker with rooms_controller.get_ics_enabled()
- Improve type safety and maintainability of ICS sync code
- Enum values: SUCCESS, UNCHANGED, ERROR, SKIPPED maintain backward compatibility

* refactor: remove unnecessary docstring from get_ics_enabled method

The function name is self-explanatory

* fix: import top level

* feat: use Literal type for ICSStatus.status field

- Changed ICSStatus.status from str to Literal['enabled', 'disabled']
- Improves type safety and API documentation

* feat: update TypeScript definitions for ICSStatus Literal type

- OpenAPI generation now properly reflects Literal['enabled', 'disabled'] type
- Improves type safety for frontend consumers of the API
- Applied automatic formatting via pre-commit hooks

* refactor: replace loguru with structlog in ics_sync service

- Replace loguru import with structlog in services/ics_sync.py
- Update logging calls to use structlog's structured format with keyword args
- Maintains consistency with other services using structlog
- Changes: logger.info(f'...') -> logger.info('...', key=value) format

* chore: remove loguru dependency and improve type annotations

- Remove loguru from dependencies in pyproject.toml (replaced with structlog)
- Update meeting controller methods to properly return Optional types
- Update dependency lock file after loguru removal

* fix: resolve pyflakes warnings in ics_sync and meetings modules

Remove unused imports and variables to clean up code quality

* Remove grace period logic and improve meeting deactivation

- Removed grace_period_minutes and last_participant_left_at fields
- Simplified deactivation logic based on actual usage patterns:
  * Active sessions: Keep meeting active regardless of scheduled time
  * Calendar meetings: Wait until scheduled end if unused, deactivate immediately once used and empty
  * On-the-fly meetings: Deactivate immediately when empty
- Created migration to drop unused database columns
- Updated tests to remove grace period test cases
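
As a sketch, the deactivation rules above reduce to a decision function like this; the MeetingState shape and its field names are assumptions, not the project's actual model:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MeetingState:  # hypothetical shape, for illustration only
    num_participants: int
    calendar_event_id: str | None
    scheduled_end: datetime | None
    was_ever_used: bool


def should_deactivate(m: MeetingState, now: datetime) -> bool:
    if m.num_participants > 0:
        return False  # active sessions keep the meeting alive
    if m.calendar_event_id is not None:  # calendar-scheduled meeting
        if m.was_ever_used:
            return True  # used and now empty: deactivate immediately
        return m.scheduled_end is not None and now >= m.scheduled_end
    return True  # on-the-fly meetings deactivate as soon as they are empty
```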

* Update test to match new deactivation logic for calendar meetings

* fix: remove unwanted file

* fix: incomplete changes from EVENT_WINDOW*

* fix: update room ICS API tests to include required webhook fields and correct URL

- Add webhook_url and webhook_secret fields to room creation tests
- Fix room URL matching in ICS sync test to use UI_BASE_URL instead of BASE_URL
- Aligns test with actual API requirements and ICS sync service implementation

* fix: add Redis distributed locking to prevent race conditions in process_meetings

- Implement per-meeting locks using Redis to prevent concurrent processing
- Add lock extension after slow API calls (Whereby) to handle long-running operations
- Use redis-py's built-in lock.extend() with replace_ttl=True for simple TTL refresh
- Track and log skipped meetings when locked by other workers
- Document SSRF analysis showing it's low-risk due to async worker isolation

This prevents multiple workers from processing the same meeting simultaneously,
which could cause state corruption or duplicate deactivations.
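
A minimal sketch of the per-meeting locking described above, using redis-py's built-in lock; the key name and the process_meeting stub are assumptions:

```python
import time

import redis

r = redis.Redis()


def process_meeting(meeting_id: str) -> None:
    time.sleep(5)  # stand-in for slow external calls such as the Whereby API


def process_with_lock(meeting_id: str) -> bool:
    # one lock per meeting so two workers never process it concurrently
    lock = r.lock(f"lock:process_meeting:{meeting_id}", timeout=60)
    if not lock.acquire(blocking=False):
        return False  # held by another worker: skip this meeting and log it
    try:
        process_meeting(meeting_id)
        lock.extend(60, replace_ttl=True)  # refresh the TTL after the slow call
    finally:
        lock.release()
    return True
```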

* refactor: rename MinimalHeader to MeetingMinimalHeader for clarity

* fix: minor code quality improvements - add emoji constants, fix type safety, cleanup comments

* fix: database migration

* self-pr review

* self-pr review

* self-pr review treeshake

* fix: local fixes

* fix: creation of meeting

* fix: meeting selection create button

* compile fix

* fix: meeting selection responsive

* fix: rework process logic for meeting

* fix: meeting useEffect frontend-only dedupe (#647)

* meeting useEffect frontend-only dedupe

* format

* also get room by name backend fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>

* invalidate meeting list on new meeting

* test fix

* room url copy button for ics

* calendar refresh quick action icon

* remove work.md

* meeting page frontend fixes

* hide number of meeting participants

* Revert "hide number of meeting participants"

This reverts commit 38906c5d1a.

* ui bits

* ui bits

* remove log

* stricter room name typing

* feat: protect atomic operation involving external service with redlock

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Igor Monadical <igor@monadical.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-17 16:43:20 -06:00
ab859d65a6 feat: self-hosted gpu api (#636)
* Self-hosted gpu api

* Refactor self-hosted api

* Rename model api tests

* Use lifespan instead of startup event

* Fix self hosted imports

* Add newlines

* Add response models

* Move gpu dir to the root

* Add project description

* Refactor lifespan

* Update env var names for model api tests

* Preload diarization service

* Refactor uploaded file paths
2025-09-17 18:52:03 +02:00
fa049e8d06 fix: ignore player hotkeys for text inputs (#646)
* Ignore player hotkeys for text inputs

* Fix event listener effect
2025-09-16 10:57:35 +02:00
2ce7479967 chore(main): release 0.11.0 (#648) 2025-09-15 22:42:53 -06:00
b42f7cfc60 feat: remove profanity filter that was there for conference (#652) 2025-09-15 18:19:19 -06:00
c546e69739 fix: zulip stream and topic selection in share dialog (#644)
* fix: zulip stream and topic selection in share dialog

Replace useListCollection with createListCollection to match the working
room edit implementation. This ensures collections update when data loads,
fixing the issue where streams and topics wouldn't appear until navigation.

* fix: wrap createListCollection in useMemo to prevent recreation on every render

Both streamCollection and topicCollection are now memoized to improve performance
and prevent unnecessary re-renders of Combobox components
2025-09-15 12:34:51 -06:00
Igor Monadical
3f1fe8c9bf chore: remove timeout-based auth session logic (#649)
* remove timeout-based auth session logic

* remove timeout-based auth session logic

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-15 14:19:10 -04:00
5f143fe364 fix: zulip and consent handler on the file pipeline (#645) 2025-09-15 10:49:20 -06:00
Igor Monadical
79f161436e chore: meeting user id removal and room id requirement (#635)
* chore: remove meeting user id and make meeting room id required

* meeting room_id optional

* orphaned meeting room ids DATA migration

* ci fix

* fix meeting_room_id_fkey downgrade

* fix migration rollback

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* remove noop migrations

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 13:07:58 -04:00
Igor Monadical
5cba5d310d chore: sentry and nextjs major bumps (#633)
* chore: remove nextjs-config

* build fix

* sentry update

* nextjs update

* feature flags doc

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* no sentry backward compat

* better .env.example

* AUTHENTIK_REFRESH_TOKEN_URL not so required

* accommodate auth system to requiredLogin feature

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 12:41:44 -04:00
43ea9349f5 chore(main): release 0.10.0 (#616) 2025-09-11 20:57:19 -06:00
Igor Monadical
b3a8e9739d chore: whereby & s3 settings env error reporting (#637)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 17:52:34 -04:00
Igor Monadical
369ecdff13 feat: replace nextjs-config with environment variables (#632)
* chore: remove nextjs-config

* build fix

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* better .env.example

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 11:20:41 -04:00
fc363bd49b fix: missing follow_redirects=True on modal endpoint (#630) 2025-09-10 08:15:47 -06:00
Igor Monadical
962038ee3f fix: auth post (#627)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:46:57 -04:00
Igor Monadical
3b85ff3bdf fix: auth post (#626)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:27:46 -04:00
Igor Monadical
cde99ca271 fix: auth post (#624)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 15:48:07 -04:00
Igor Monadical
f81fe9948a fix: anonymous users transcript permissions (#621)
* fix: public transcript visibility

* fix: transcript permissions frontend

* dead code removal

* chore: remove unused code

* fix search tests

* fix search tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 10:50:29 -04:00
Igor Monadical
5a5b323382 fix: sync backend and frontend token refresh logic (#614)
* sync backend and frontend token refresh logic

* return react strict mode

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-08 10:40:18 -04:00
02a3938822 chore(main): release 0.9.0 (#603) 2025-09-05 22:50:10 -06:00
Igor Monadical
7f5a4c9ddc fix: token refresh locking (#613)
* fix: kv use tls explicit

* fix: token refresh locking

* remove logs

* compile fix

* compile fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 23:03:24 -04:00
Igor Monadical
08d88ec349 fix: kv use tls explicit (#610)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 18:39:32 -04:00
Igor Monadical
c4d2825c81 feat: frontend openapi react query (#606)
* refactor: migrate from @hey-api/openapi-ts to openapi-react-query

- Replace @hey-api/openapi-ts with openapi-typescript and openapi-react-query
- Generate TypeScript types from OpenAPI spec
- Set up React Query infrastructure with QueryClientProvider
- Migrate all API hooks to use React Query patterns
- Maintain backward compatibility for existing components
- Remove old API infrastructure and dependencies

* fix: resolve import errors and add missing api hooks

- Create constants.ts for RECORD_A_MEETING_URL
- Add api-types.ts for backward compatible type exports
- Update all imports from deleted api folder to new locations
- Add missing React Query hooks for rooms and zulip operations
- Create useApi compatibility layer for unmigrated components

* feat: migrate components to React Query hooks

- Add comprehensive API hooks for all operations
- Migrate rooms page to use React Query mutations
- Update transcript title component to use mutation hook
- Refactor share/privacy component with proper error handling
- Remove direct API client usage in favor of hooks

* feat: complete migration from @hey-api/openapi-ts to openapi-react-query

- Migrated all components from useApi compatibility layer to direct React Query hooks
- Added new hooks for participant operations, room meetings, and speaker operations
- Updated all imports from old api module to api-types
- Fixed TypeScript types and API endpoint signatures
- Removed deprecated useApi.ts compatibility layer
- Fixed SourceKind enum values to match OpenAPI spec
- Added @ts-ignore for Zulip endpoints not in OpenAPI spec yet
- Fixed all compilation errors and type issues

* fix: authentication flow with React Query migration

- Fix middleware management in apiClient to properly handle auth tokens
- Update ApiAuthProvider to correctly configure base URL and auth
- Add missing NextAuth API route handler at app/api/auth/[...nextauth]/route.ts
- Remove middleware ejection attempts (not supported by openapi-fetch)
- Use global variables to store current auth token and API URL
- Setup middleware once on initialization instead of repeatedly adding

This fixes the login/logout flow that was broken after migrating from
the useApi compatibility layer to native React Query hooks.

* fix: prevent unauthorized API calls before authentication

- Add global AuthGuard component to handle authentication at layout level
- Make all API query hooks conditional on authentication status
- Define public routes (like /transcripts/new) that don't require auth
- Fix login flow to use NextAuth signIn instead of non-existent /login route
- Prevent 401 errors by waiting for auth token before making API calls

Previously, all routes under (app) were publicly accessible with each page
handling auth individually. Now authentication is enforced globally while
still allowing specific routes to remain public.

* refactor: remove redundant client-side AuthGuard

The authentication is already properly handled by Next.js middleware
in middleware.ts with LOGIN_REQUIRED_PAGES. The middleware approach is
superior as it:
- Provides server-side protection before page loads
- Prevents flash of unauthorized content
- Centralizes auth logic in one place
- Better performance (no client-side JS needed)

Keep the API hooks conditional to prevent 401 errors before token is ready.

* fix: use direct status check for API query authentication

Changed all query hooks to use direct `status === "authenticated"` check
instead of derived `isAuthenticated && !isLoading` to avoid race conditions
where queries might fire before the authentication token is properly set.

This prevents the brief 401 errors that occur on page refresh when the
session is being restored.

* fix: correct content-type header for FormData uploads

Previously, the API client was setting a default Content-Type of application/json
for all requests, which broke file uploads that need multipart/form-data.

Now the client only sets application/json when the body is not FormData,
allowing FormData to automatically set the correct multipart boundary.

* fix: resolve authentication race condition with React Query

Previously, API calls were being made before the auth token was configured,
causing initial 401 errors that would retry with 200 after token setup.

Changes:
- Add global auth readiness tracking in apiClient
- Create useAuthReady hook that checks both session and token state
- Update all API hooks to use isAuthReady instead of just session status
- Add AuthWrapper component at layout level for consistent loading UX
- Show spinner while authentication initializes across all pages

This ensures API calls only fire after authentication is fully configured,
eliminating the 401/retry pattern and improving user experience.

* refactor: clean up api-hooks.ts comments and improve search invalidation

- Remove redundant function category comments (exports are self-explanatory)
- Remove obvious inline comments for query invalidation
- Fix search endpoint invalidation to clear all queries regardless of parameters

* refactor: remove api-types.ts compatibility layer

- Migrated all 29 files from api-types.ts to use reflector-api.d.ts directly
- Removed $SourceKind manual enum in favor of OpenAPI-generated types
- Fixed unrelated Spinner component TypeScript error in AuthWrapper.tsx
- All imports now use: import type { components } from "path/to/reflector-api"
- Deleted api-types.ts file completely

* refactor: rename api-hooks.ts to apiHooks.ts for consistency

- Renamed api-hooks.ts to apiHooks.ts to follow camelCase convention
- Updated all 21 import statements across the codebase
- Maintains consistency with other non-component files (apiClient.tsx, useAuthReady.ts, etc.)
- Follows established naming pattern: PascalCase for components, camelCase for utilities/hooks

* chore: add .playwright-mcp to .gitignore

* refactor: remove SK helper object and use inline type casting in FilterSidebar

Replace the SK (SourceKind) helper object with direct inline type casting
to simplify the code and reduce unnecessary abstraction.

* chore: clean up migration comments from React Query refactoring

- Remove temporary "// Use new React Query hooks" comments
- Remove "// React Query hooks" comments from browse and rooms pages
- Update package.json script name from codegen to openapi for consistency

* refactor: remove Redis dependencies from frontend authentication

- Replace Redis/Redlock with in-memory cache for token management
- Remove @vercel/kv, ioredis, and redlock dependencies from package.json
- Implement simple lock mechanism for concurrent token refresh prevention
- Use Map-based cache with TTL for token storage
- Maintain same authentication flow without external dependencies

This simplifies the infrastructure requirements and removes the need for
Redis while maintaining the same functionality through in-memory caching.

* fix: add staleTime to prevent cross-tab staled data

* fix: remove infinite re-render loop in useSessionAccessToken

The hook was maintaining redundant local state that caused re-renders
on every update, which triggered NextAuth to continuously refetch the
session, resulting in hundreds of POST requests to /api/auth/session.

Simplified the hook to directly return session values without
unnecessary state duplication.

* fix: handle undefined access tokens in auth.ts

Added fallback to empty string for potentially undefined access_token
and refresh_token from NextAuth account object to satisfy
JWTWithAccessToken type requirements.

* Igor/mathieu/frontend openapi react query (#597)

* small typing

* typing fixes

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>

* self-review-fix

* authReady callback simplify

* fix auth

* fix compose

* room detail page fix

* compile fix

* room edit fix

* normalize auth provider

* room edition state granular management

* cover TODOs + cross-tab cache

* session auto refresh blink

* schema generator error type doc

* protect from zombie auth

* clarify access token refresh logic a bit

* remove react-query tab sharing cache

* remove react-query tab sharing cache

* websocket dupe react devmode protection

* invalidate room on room update

* redis cache

* test ts server

* ci randomness

* less edgy config (ci)

* less edgy config (ci)

* less edgy config (ci)

* ci randomness

* ci randomness

* ci randomness

* ci randomness

* less edgy config (ci)

* added vs edited room state cleanup

* file upload real-time state management fix

* prettier auth state ternary

* prettier auth state ternary

* proper api address from env

* INTERVAL_REFRESH_MS

* node version 20 for tests

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* CI debug

* CI debug

* nextjs magic

* nextjs magic

* doc

* client-side stale auth soft safety net

---------

Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 16:01:31 -06:00
0663700a61 fix: align whisper transcriber api with parakeet (#602)
* Documents transcriber api

* Update whisper transcriber api to match parakeet

* Update api transcription spec

* Return 400 for unsupported file type

* Add params to api spec

* Update whisper transcriber implementation to match parakeet
2025-09-05 10:52:14 +02:00
dc82f8bb3b fix: source kind for file processing (#601) 2025-09-04 08:42:21 -06:00
457823e1c1 chore(main): release 0.8.2 (#595) 2025-09-01 19:09:09 -06:00
Igor Monadical
695d1a957d fix: search-logspam (#593)
* fix: search-logspam

* llm comment

* fix tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-08-29 18:55:51 -04:00
ccffdba75b chore(main): release 0.8.1 (#591) 2025-08-29 11:56:11 -06:00
84a381220b fix: make webhook secret/url allowing null (#590) 2025-08-29 11:55:18 -06:00
5f2f0e9317 chore(main): release 0.8.0 (#579) 2025-08-29 11:34:24 -06:00
88ed7cfa78 feat(rooms): add webhook for transcript completion (#578)
* feat(rooms): add webhook notifications for transcript completion

- Add webhook_url and webhook_secret fields to rooms table
- Create Celery task with 24-hour retry window using exponential backoff
- Send transcript metadata, diarized text, topics, and summaries via webhook
- Add HMAC signature verification for webhook security
- Add test endpoint POST /v1/rooms/{room_id}/webhook/test
- Update frontend with webhook configuration UI and test button
- Auto-generate webhook secret if not provided
- Trigger webhook after successful file pipeline processing for room recordings
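
A minimal receiver sketch for the HMAC verification listed above, assuming a SHA-256 hex digest of the raw request body; the header name and secret are placeholders, not Reflector's documented scheme:

```python
import hashlib
import hmac

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = b"my-room-webhook-secret"  # placeholder for the room's webhook_secret


@app.post("/webhook")
async def receive(request: Request, x_webhook_signature: str = Header(...)):
    # header name is an assumption; FastAPI maps it from "x-webhook-signature"
    body = await request.body()
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, x_webhook_signature):
        raise HTTPException(status_code=401, detail="invalid signature")
    return {"ok": True}
```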

* style: linting

* fix: remove unwanted files

* fix: update openapi gen

* fix: self-review

* docs: add comprehensive webhook documentation

- Document webhook configuration, events, and payloads
- Include transcript.completed and test event examples
- Add security considerations and best practices
- Provide example webhook receiver implementation
- Document retry policy and signature verification

* fix: remove audio_mp3_url from webhook payload

- Remove audio download URL generation from webhook
- Update documentation to reflect the change
- Keep only frontend_url for accessing transcripts

* docs: remove unwanted section

* fix: correct API method name and type imports for rooms

- Fix v1RoomsRetrieve to v1RoomsGet
- Update Room type to RoomDetails throughout frontend
- Fix type imports in useRoomList, RoomList, RoomTable, and RoomCards

* feat: add show/hide toggle for webhook secret field

- Add eye icon button to reveal/hide webhook secret when editing
- Show password dots when webhook secret is hidden
- Reset visibility state when opening/closing dialog
- Only show toggle button when editing existing room with secret

* fix: resolve event loop conflict in webhook test endpoint

- Extract webhook test logic into shared async function
- Call async function directly from FastAPI endpoint
- Keep Celery task wrapper for background processing
- Fixes RuntimeError: event loop already running

* refactor: remove unnecessary Celery task for webhook testing

- Webhook testing is synchronous and provides immediate feedback
- No need for background processing via Celery
- Keep only the async function called directly from API endpoint

* feat: improve webhook test error messages and display

- Show HTTP status code in error messages
- Parse JSON error responses to extract meaningful messages
- Improved UI layout for webhook test results
- Added colored background for success/error states
- Better text wrapping for long error messages

* docs: adjust doc

* fix: review

* fix: update attempts to match close 24h

* fix: add event_id

* fix: changed to uuid, to have new event_id when reprocess.

* style: linting

* fix: alembic revision
2025-08-29 10:07:49 -06:00
6f0c7c1a5e feat(cleanup): add automatic data retention for public instances (#574)
* feat(cleanup): add automatic data retention for public instances

- Add Celery task to clean up anonymous data after configurable retention period
- Delete transcripts, meetings, and orphaned recordings older than retention days
- Only runs when PUBLIC_MODE is enabled to prevent accidental data loss
- Properly removes all associated files (local and S3 storage)
- Add manual cleanup tool for testing and intervention
- Configure retention via PUBLIC_DATA_RETENTION_DAYS setting (default: 7 days)

Fixes #571
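
The retention check might look like this sketch; the setting names mirror the description above, everything else is an assumption:

```python
from datetime import datetime, timedelta, timezone

PUBLIC_MODE = True     # stand-in for the PUBLIC_MODE setting
RETENTION_DAYS = 7     # stand-in for PUBLIC_DATA_RETENTION_DAYS (default: 7)


def is_expired(created_at: datetime, now: datetime | None = None) -> bool:
    """True when an anonymous record falls outside the retention window."""
    if not PUBLIC_MODE:
        return False  # never flag anything on private instances
    now = now or datetime.now(timezone.utc)
    return created_at < now - timedelta(days=RETENTION_DAYS)
```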

* fix: apply pre-commit formatting fixes

* fix: properly delete recording files from storage during cleanup

- Add storage deletion for orphaned recordings in both cleanup task and manual tool
- Delete from storage before removing database records
- Log warnings if storage deletion fails but continue with database cleanup

* Apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* Apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* refactor: cleanup_old_data for better logging

* fix: linting

* test: fix meeting cleanup test to not require room controller

- Simplify test by directly inserting meetings into database
- Remove dependency on non-existent rooms_controller.create method
- Tests now pass successfully

* fix: linting

* refactor: simplify cleanup tool to use worker implementation

- Remove duplicate cleanup logic from manual tool
- Use the same _cleanup_old_public_data function from worker
- Remove dry-run feature as requested
- Prevent code duplication and ensure consistency
- Update documentation to reflect changes

* refactor: split cleanup worker into smaller functions

- Move all imports to the top of the file
- Extract cleanup logic into separate functions:
  - cleanup_old_transcripts()
  - cleanup_old_meetings()
  - cleanup_orphaned_recordings()
  - log_cleanup_results()
- Make code more maintainable and testable
- Add days parameter support to Celery task
- Update manual tool to work with refactored code

* feat: add TypedDict typing for cleanup stats

- Add CleanupStats TypedDict for better type safety
- Update all function signatures to use proper typing
- Add return type annotations to _cleanup_old_public_data
- Improves code maintainability and IDE support

* feat: add CASCADE DELETE to meeting_consent foreign key

- Add ondelete="CASCADE" to meeting_consent.meeting_id foreign key
- Generate and apply migration to update existing constraint
- Remove manual consent deletion from cleanup code
- Add unit test to verify CASCADE DELETE behavior
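
The migration described above plausibly looks like this sketch; the constraint and table names are assumptions, and the usual alembic revision boilerplate is omitted:

```python
from alembic import op


def upgrade() -> None:
    op.drop_constraint(
        "meeting_consent_meeting_id_fkey", "meeting_consent", type_="foreignkey"
    )
    op.create_foreign_key(
        "meeting_consent_meeting_id_fkey",
        "meeting_consent",
        "meeting",
        ["meeting_id"],
        ["id"],
        ondelete="CASCADE",  # consents now go away with their meeting
    )


def downgrade() -> None:
    op.drop_constraint(
        "meeting_consent_meeting_id_fkey", "meeting_consent", type_="foreignkey"
    )
    op.create_foreign_key(
        "meeting_consent_meeting_id_fkey",
        "meeting_consent",
        "meeting",
        ["meeting_id"],
        ["id"],
    )
```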

* style: linting

* fix: alembic migration branchpoint

* fix: correct downgrade constraint name in CASCADE DELETE migration

* fix: regenerate CASCADE DELETE migration with proper constraint names

- Delete problematic migration and regenerate with correct names
- Use explicit constraint name in both upgrade and downgrade
- Ensure migration works bidirectionally
- All tests passing including CASCADE DELETE test

* style: linting

* refactor: simplify cleanup to use transcripts as entry point

- Remove orphaned_recordings cleanup (not part of this PR scope)
- Remove separate old_meetings cleanup
- Transcripts are now the main entry point for cleanup
- Associated meetings and recordings are deleted with their transcript
- Use single database connection for all operations
- Update tests to reflect new approach

* refactor: cleanup and rename functions for clarity

- Rename _cleanup_old_public_data to cleanup_old_public_data (make public)
- Rename celery task to cleanup_old_public_data_task for clarity
- Update docstrings and improve code organization
- Remove unnecessary comments and simplify deletion logic
- Update tests to use new function names
- All tests passing

* style: linting

* style: typing and review

* fix: add transaction on cleanup_single_transcript

* fix: naming

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-08-29 08:47:14 -06:00
9dfd76996f fix: file pipeline status reporting and websocket updates (#589)
* feat: use file pipeline for upload and reprocess action

* fix: make file pipeline correctly report status events

* fix: duplication of transcripts_controller

* fix: tests

* test: fix file upload test

* test: fix reprocess

* fix: also patch from main_file_pipeline

(how the patch is applied depends on the file import, unfortunately)
2025-08-29 00:58:14 -06:00
55cc8637c6 ci: restrict workflow execution to main branch and add concurrency (#586)
* ci: try adding concurrency

* ci: restrict push on main branch

* ci: fix concurrency key

* ci: fix build concurrency

* refactor: apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-08-28 16:43:17 -06:00
f5331a2107 style: more type annotations to parakeet transcriber (#581)
* feat: add comprehensive type annotations to Parakeet transcriber

- Add TypedDict for WordTiming with word, start, end fields
- Add NamedTuple for TimeSegment, AudioSegment, and TranscriptResult
- Add type hints to all generator functions (vad_segment_generator, batch_speech_segments, etc.)
- Add enforce_word_timing_constraints function to prevent word timing overlaps
- Refactor batch_segment_to_audio_segment to reuse pad_audio function
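
A sketch of the annotation shapes listed above; WordTiming's fields come from the commit message, the other fields are assumptions:

```python
from typing import NamedTuple, TypedDict


class WordTiming(TypedDict):
    word: str
    start: float  # seconds
    end: float


class TimeSegment(NamedTuple):
    start: float
    end: float


class AudioSegment(NamedTuple):  # fields are assumptions
    segment: TimeSegment
    samples: bytes  # raw audio; in practice likely a numpy array


class TranscriptResult(NamedTuple):  # fields are assumptions
    text: str
    words: list[WordTiming]
```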

* doc: add note about space
2025-08-28 12:22:07 -06:00
Igor Loskutov
124ce03bf8 fix: Igor/evaluation (#575)
* fix: impossible import error (#563)

* evaluation cli - database events experiment

* hallucinations

* evaluation - unhallucinate

* evaluation - unhallucinate

* roll back reliability link

* self review

* lint

* self review

* add file pipeline to cli

* add file pipeline to cli + sorting

* remove cli tests

* remove ai comments

* comments
2025-08-28 12:07:34 -04:00
7030e0f236 fix: optimize parakeet transcription batching algorithm (#577)
* refactor: optimize transcription batching to accumulate speech segments

- Changed VAD segment generator to return full audio array instead of segments
- Removed segment filtering step
- Modified batch_segments to accumulate maximum speech including silence
- Transcribe larger continuous chunks instead of individual speech segments

* fix: correct transcribe_batch call to use list and fix batch unpacking

* fix: simplify

* fix: remove unused variables

* fix: add typing
2025-08-27 10:32:04 -06:00
37f0110892 doc: update local model readme 2025-08-22 17:50:24 -06:00
cf2896a7f4 doc: update readme about installation instructions
Add a note about installation instructions being inaccurate.
2025-08-22 17:48:35 -06:00
aabf2c2572 chore(main): release 0.7.3 (#565) 2025-08-22 16:35:52 -06:00
6a7b08f016 doc: change readme intro 2025-08-22 16:26:25 -06:00
e2736563d9 doc: update readme with new images 2025-08-22 16:15:54 -06:00
0f54b7782d chore: ignore www/.env.[development,production] 2025-08-22 14:41:09 -06:00
359280dd34 fix: cleaned repo, and get git-leaks clean 2025-08-22 11:51:34 -06:00
9265d201b5 fix: restore previous behavior on live pipeline + audio downscaler (#561)
This commit restores the original behavior with frame cutting. While
silero is used on our gpu for files, it looks like it's not working great on
the live pipeline. To be investigated, but at the moment, what we keep
is:

- refactored to extract the downscale for further processing in the
pipeline
- remove any downscale implementation from audio_chunker and audio_merge
- removed batching from audio_merge too for now
2025-08-22 10:49:26 -06:00
52f9f533d7 chore(main): release 0.7.2 (#559) 2025-08-21 21:00:05 -06:00
335 changed files with 39093 additions and 12157 deletions

View File

@@ -2,6 +2,8 @@ name: Test Database Migrations
on:
push:
branches:
- main
paths:
- "server/migrations/**"
- "server/reflector/db/**"
@@ -17,6 +19,9 @@ on:
jobs:
test-migrations:
runs-on: ubuntu-latest
concurrency:
group: db-ubuntu-latest-${{ github.ref }}
cancel-in-progress: true
services:
postgres:
image: postgres:17

View File

@@ -1,4 +1,4 @@
name: Deploy to Amazon ECS
name: Build container/push to container registry
on: [workflow_dispatch]

57
.github/workflows/docker-frontend.yml vendored Normal file
View File

@@ -0,0 +1,57 @@
name: Build and Push Frontend Docker Image
on:
push:
branches:
- main
paths:
- 'www/**'
- '.github/workflows/docker-frontend.yml'
workflow_dispatch:
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}-frontend
jobs:
build-and-push:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=sha,prefix={{branch}}-
type=raw,value=latest,enable={{is_default_branch}}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ./www
file: ./www/Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64,linux/arm64

45
.github/workflows/test_next_server.yml vendored Normal file
View File

@@ -0,0 +1,45 @@
name: Test Next Server
on:
pull_request:
paths:
- "www/**"
push:
branches:
- main
paths:
- "www/**"
jobs:
test-next-server:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./www
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 8
- name: Setup Node.js cache
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
cache-dependency-path: './www/pnpm-lock.yaml'
- name: Install dependencies
run: pnpm install
- name: Run tests
run: pnpm test

View File

@@ -5,12 +5,17 @@ on:
paths:
- "server/**"
push:
branches:
- main
paths:
- "server/**"
jobs:
pytest:
runs-on: ubuntu-latest
concurrency:
group: pytest-${{ github.ref }}
cancel-in-progress: true
services:
redis:
image: redis:6
@@ -30,6 +35,9 @@ jobs:
docker-amd64:
runs-on: linux-amd64
concurrency:
group: docker-amd64-${{ github.ref }}
cancel-in-progress: true
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
@@ -45,6 +53,9 @@ jobs:
docker-arm64:
runs-on: linux-arm64
concurrency:
group: docker-arm64-${{ github.ref }}
cancel-in-progress: true
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx

5
.gitignore vendored
View File

@@ -14,4 +14,7 @@ data/
www/REFACTOR.md
www/reload-frontend
server/test.sqlite
CLAUDE.local.md
www/.env.development
www/.env.production
.playwright-mcp

1
.gitleaksignore Normal file
View File

@@ -0,0 +1 @@
b9d891d3424f371642cb032ecfd0e2564470a72c:server/tests/test_transcripts_recording_deletion.py:generic-api-key:15

View File

@@ -27,3 +27,8 @@ repos:
files: ^server/
- id: ruff-format
files: ^server/
- repo: https://github.com/gitleaks/gitleaks
rev: v8.28.0
hooks:
- id: gitleaks

View File

@@ -1,5 +1,239 @@
# Changelog
## [0.22.2](https://github.com/Monadical-SAS/reflector/compare/v0.22.1...v0.22.2) (2025-12-02)
### Bug Fixes
* daily auto refresh fix ([#755](https://github.com/Monadical-SAS/reflector/issues/755)) ([fe47c46](https://github.com/Monadical-SAS/reflector/commit/fe47c46489c5aa0cc538109f7559cc9accb35c01))
* Skip mixdown for multitrack ([#760](https://github.com/Monadical-SAS/reflector/issues/760)) ([b51b7aa](https://github.com/Monadical-SAS/reflector/commit/b51b7aa9176c1a53ba57ad99f5e976c804a1e80c))
## [0.22.1](https://github.com/Monadical-SAS/reflector/compare/v0.22.0...v0.22.1) (2025-11-27)
### Bug Fixes
* participants update from daily ([#749](https://github.com/Monadical-SAS/reflector/issues/749)) ([7f0b728](https://github.com/Monadical-SAS/reflector/commit/7f0b728991c1b9f9aae702c96297eae63b561ef5))
## [0.22.0](https://github.com/Monadical-SAS/reflector/compare/v0.21.0...v0.22.0) (2025-11-26)
### Features
* Multitrack segmentation ([#747](https://github.com/Monadical-SAS/reflector/issues/747)) ([d63040e](https://github.com/Monadical-SAS/reflector/commit/d63040e2fdc07e7b272e85a39eb2411cd6a14798))
## [0.21.0](https://github.com/Monadical-SAS/reflector/compare/v0.20.0...v0.21.0) (2025-11-26)
### Features
* add transcript format parameter to GET endpoint ([#709](https://github.com/Monadical-SAS/reflector/issues/709)) ([f6ca075](https://github.com/Monadical-SAS/reflector/commit/f6ca07505f34483b02270a2ef3bd809e9d2e1045))
## [0.20.0](https://github.com/Monadical-SAS/reflector/compare/v0.19.0...v0.20.0) (2025-11-25)
### Features
* link transcript participants ([#737](https://github.com/Monadical-SAS/reflector/issues/737)) ([9bec398](https://github.com/Monadical-SAS/reflector/commit/9bec39808fc6322612d8b87e922a6f7901fc01c1))
* transcript restart script ([#742](https://github.com/Monadical-SAS/reflector/issues/742)) ([86d5e26](https://github.com/Monadical-SAS/reflector/commit/86d5e26224bb55a0f1cc785aeda52065bb92ee6f))
## [0.19.0](https://github.com/Monadical-SAS/reflector/compare/v0.18.0...v0.19.0) (2025-11-25)
### Features
* dailyco api module ([#725](https://github.com/Monadical-SAS/reflector/issues/725)) ([4287f8b](https://github.com/Monadical-SAS/reflector/commit/4287f8b8aeee60e51db7539f4dcbda5f6e696bd8))
* dailyco poll ([#730](https://github.com/Monadical-SAS/reflector/issues/730)) ([8e438ca](https://github.com/Monadical-SAS/reflector/commit/8e438ca285152bd48fdc42767e706fb448d3525c))
* multitrack cli ([#735](https://github.com/Monadical-SAS/reflector/issues/735)) ([11731c9](https://github.com/Monadical-SAS/reflector/commit/11731c9d38439b04e93b1c3afbd7090bad11a11f))
### Bug Fixes
* default platform fix ([#736](https://github.com/Monadical-SAS/reflector/issues/736)) ([c442a62](https://github.com/Monadical-SAS/reflector/commit/c442a627873ca667656eeaefb63e54ab10b8d19e))
* parakeet vad not getting the end timestamp ([#728](https://github.com/Monadical-SAS/reflector/issues/728)) ([18ed713](https://github.com/Monadical-SAS/reflector/commit/18ed7133693653ef4ddac6c659a8c14b320d1657))
* start raw tracks recording ([#729](https://github.com/Monadical-SAS/reflector/issues/729)) ([3e47c2c](https://github.com/Monadical-SAS/reflector/commit/3e47c2c0573504858e0d2e1798b6ed31f16b4a5d))
## [0.18.0](https://github.com/Monadical-SAS/reflector/compare/v0.17.0...v0.18.0) (2025-11-14)
### Features
* daily QOL: participants dictionary ([#721](https://github.com/Monadical-SAS/reflector/issues/721)) ([b20cad7](https://github.com/Monadical-SAS/reflector/commit/b20cad76e69fb6a76405af299a005f1ddcf60eae))
### Bug Fixes
* add processing page to file upload and reprocessing ([#650](https://github.com/Monadical-SAS/reflector/issues/650)) ([28a7258](https://github.com/Monadical-SAS/reflector/commit/28a7258e45317b78e60e6397be2bc503647eaace))
* copy transcript ([#674](https://github.com/Monadical-SAS/reflector/issues/674)) ([a9a4f32](https://github.com/Monadical-SAS/reflector/commit/a9a4f32324f66c838e081eee42bb9502f38c1db1))
## [0.17.0](https://github.com/Monadical-SAS/reflector/compare/v0.16.0...v0.17.0) (2025-11-13)
### Features
* add API key management UI ([#716](https://github.com/Monadical-SAS/reflector/issues/716)) ([372202b](https://github.com/Monadical-SAS/reflector/commit/372202b0e1a86823900b0aa77be1bfbc2893d8a1))
* daily.co support as alternative to whereby ([#691](https://github.com/Monadical-SAS/reflector/issues/691)) ([1473fd8](https://github.com/Monadical-SAS/reflector/commit/1473fd82dc472c394cbaa2987212ad662a74bcac))
## [0.16.0](https://github.com/Monadical-SAS/reflector/compare/v0.15.0...v0.16.0) (2025-10-24)
### Features
* search date filter ([#710](https://github.com/Monadical-SAS/reflector/issues/710)) ([962c40e](https://github.com/Monadical-SAS/reflector/commit/962c40e2b6428ac42fd10aea926782d7a6f3f902))
## [0.15.0](https://github.com/Monadical-SAS/reflector/compare/v0.14.0...v0.15.0) (2025-10-20)
### Features
* api tokens ([#705](https://github.com/Monadical-SAS/reflector/issues/705)) ([9a258ab](https://github.com/Monadical-SAS/reflector/commit/9a258abc0209b0ac3799532a507ea6a9125d703a))
## [0.14.0](https://github.com/Monadical-SAS/reflector/compare/v0.13.1...v0.14.0) (2025-10-08)
### Features
* Add calendar event data to transcript webhook payload ([#689](https://github.com/Monadical-SAS/reflector/issues/689)) ([5f6910e](https://github.com/Monadical-SAS/reflector/commit/5f6910e5131b7f28f86c9ecdcc57fed8412ee3cd))
* container build for www / github ([#672](https://github.com/Monadical-SAS/reflector/issues/672)) ([969bd84](https://github.com/Monadical-SAS/reflector/commit/969bd84fcc14851d1a101412a0ba115f1b7cde82))
* docker-compose for production frontend ([#664](https://github.com/Monadical-SAS/reflector/issues/664)) ([5bf64b5](https://github.com/Monadical-SAS/reflector/commit/5bf64b5a41f64535e22849b4bb11734d4dbb4aae))
### Bug Fixes
* restore feature boolean logic ([#671](https://github.com/Monadical-SAS/reflector/issues/671)) ([3660884](https://github.com/Monadical-SAS/reflector/commit/36608849ec64e953e3be456172502762e3c33df9))
* security review ([#656](https://github.com/Monadical-SAS/reflector/issues/656)) ([5d98754](https://github.com/Monadical-SAS/reflector/commit/5d98754305c6c540dd194dda268544f6d88bfaf8))
* update transcript list on reprocess ([#676](https://github.com/Monadical-SAS/reflector/issues/676)) ([9a71af1](https://github.com/Monadical-SAS/reflector/commit/9a71af145ee9b833078c78d0c684590ab12e9f0e))
* upgrade nemo toolkit ([#678](https://github.com/Monadical-SAS/reflector/issues/678)) ([eef6dc3](https://github.com/Monadical-SAS/reflector/commit/eef6dc39037329b65804297786d852dddb0557f9))
## [0.13.1](https://github.com/Monadical-SAS/reflector/compare/v0.13.0...v0.13.1) (2025-09-22)
### Bug Fixes
* TypeError on not all arguments converted during string formatting in logger ([#667](https://github.com/Monadical-SAS/reflector/issues/667)) ([565a629](https://github.com/Monadical-SAS/reflector/commit/565a62900f5a02fc946b68f9269a42190ed70ab6))
## [0.13.0](https://github.com/Monadical-SAS/reflector/compare/v0.12.1...v0.13.0) (2025-09-19)
### Features
* room form edit with enter ([#662](https://github.com/Monadical-SAS/reflector/issues/662)) ([47716f6](https://github.com/Monadical-SAS/reflector/commit/47716f6e5ddee952609d2fa0ffabdfa865286796))
### Bug Fixes
* invalid cleanup call ([#660](https://github.com/Monadical-SAS/reflector/issues/660)) ([0abcebf](https://github.com/Monadical-SAS/reflector/commit/0abcebfc9491f87f605f21faa3e53996fafedd9a))
## [0.12.1](https://github.com/Monadical-SAS/reflector/compare/v0.12.0...v0.12.1) (2025-09-17)
### Bug Fixes
* production blocked because having existing meeting with room_id null ([#657](https://github.com/Monadical-SAS/reflector/issues/657)) ([870e860](https://github.com/Monadical-SAS/reflector/commit/870e8605171a27155a9cbee215eeccb9a8d6c0a2))
## [0.12.0](https://github.com/Monadical-SAS/reflector/compare/v0.11.0...v0.12.0) (2025-09-17)
### Features
* calendar integration ([#608](https://github.com/Monadical-SAS/reflector/issues/608)) ([6f680b5](https://github.com/Monadical-SAS/reflector/commit/6f680b57954c688882c4ed49f40f161c52a00a24))
* self-hosted gpu api ([#636](https://github.com/Monadical-SAS/reflector/issues/636)) ([ab859d6](https://github.com/Monadical-SAS/reflector/commit/ab859d65a6bded904133a163a081a651b3938d42))
### Bug Fixes
* ignore player hotkeys for text inputs ([#646](https://github.com/Monadical-SAS/reflector/issues/646)) ([fa049e8](https://github.com/Monadical-SAS/reflector/commit/fa049e8d068190ce7ea015fd9fcccb8543f54a3f))
## [0.11.0](https://github.com/Monadical-SAS/reflector/compare/v0.10.0...v0.11.0) (2025-09-16)
### Features
* remove profanity filter that was there for conference ([#652](https://github.com/Monadical-SAS/reflector/issues/652)) ([b42f7cf](https://github.com/Monadical-SAS/reflector/commit/b42f7cfc606783afcee792590efcc78b507468ab))
### Bug Fixes
* zulip and consent handler on the file pipeline ([#645](https://github.com/Monadical-SAS/reflector/issues/645)) ([5f143fe](https://github.com/Monadical-SAS/reflector/commit/5f143fe3640875dcb56c26694254a93189281d17))
* zulip stream and topic selection in share dialog ([#644](https://github.com/Monadical-SAS/reflector/issues/644)) ([c546e69](https://github.com/Monadical-SAS/reflector/commit/c546e69739e68bb74fbc877eb62609928e5b8de6))
## [0.10.0](https://github.com/Monadical-SAS/reflector/compare/v0.9.0...v0.10.0) (2025-09-11)
### Features
* replace nextjs-config with environment variables ([#632](https://github.com/Monadical-SAS/reflector/issues/632)) ([369ecdf](https://github.com/Monadical-SAS/reflector/commit/369ecdff13f3862d926a9c0b87df52c9d94c4dde))
### Bug Fixes
* anonymous users transcript permissions ([#621](https://github.com/Monadical-SAS/reflector/issues/621)) ([f81fe99](https://github.com/Monadical-SAS/reflector/commit/f81fe9948a9237b3e0001b2d8ca84f54d76878f9))
* auth post ([#624](https://github.com/Monadical-SAS/reflector/issues/624)) ([cde99ca](https://github.com/Monadical-SAS/reflector/commit/cde99ca2716f84ba26798f289047732f0448742e))
* auth post ([#626](https://github.com/Monadical-SAS/reflector/issues/626)) ([3b85ff3](https://github.com/Monadical-SAS/reflector/commit/3b85ff3bdf4fb053b103070646811bc990c0e70a))
* auth post ([#627](https://github.com/Monadical-SAS/reflector/issues/627)) ([962038e](https://github.com/Monadical-SAS/reflector/commit/962038ee3f2a555dc3c03856be0e4409456e0996))
* missing follow_redirects=True on modal endpoint ([#630](https://github.com/Monadical-SAS/reflector/issues/630)) ([fc363bd](https://github.com/Monadical-SAS/reflector/commit/fc363bd49b17b075e64f9186e5e0185abc325ea7))
* sync backend and frontend token refresh logic ([#614](https://github.com/Monadical-SAS/reflector/issues/614)) ([5a5b323](https://github.com/Monadical-SAS/reflector/commit/5a5b3233820df9536da75e87ce6184a983d4713a))
## [0.9.0](https://github.com/Monadical-SAS/reflector/compare/v0.8.2...v0.9.0) (2025-09-06)
### Features
* frontend openapi react query ([#606](https://github.com/Monadical-SAS/reflector/issues/606)) ([c4d2825](https://github.com/Monadical-SAS/reflector/commit/c4d2825c81f81ad8835629fbf6ea8c7383f8c31b))
### Bug Fixes
* align whisper transcriber api with parakeet ([#602](https://github.com/Monadical-SAS/reflector/issues/602)) ([0663700](https://github.com/Monadical-SAS/reflector/commit/0663700a615a4af69a03c96c410f049e23ec9443))
* kv use tls explicit ([#610](https://github.com/Monadical-SAS/reflector/issues/610)) ([08d88ec](https://github.com/Monadical-SAS/reflector/commit/08d88ec349f38b0d13e0fa4cb73486c8dfd31836))
* source kind for file processing ([#601](https://github.com/Monadical-SAS/reflector/issues/601)) ([dc82f8b](https://github.com/Monadical-SAS/reflector/commit/dc82f8bb3bdf3ab3d4088e592a30fd63907319e1))
* token refresh locking ([#613](https://github.com/Monadical-SAS/reflector/issues/613)) ([7f5a4c9](https://github.com/Monadical-SAS/reflector/commit/7f5a4c9ddc7fd098860c8bdda2ca3b57f63ded2f))
## [0.8.2](https://github.com/Monadical-SAS/reflector/compare/v0.8.1...v0.8.2) (2025-08-29)
### Bug Fixes
* search-logspam ([#593](https://github.com/Monadical-SAS/reflector/issues/593)) ([695d1a9](https://github.com/Monadical-SAS/reflector/commit/695d1a957d4cd862753049f9beed88836cabd5ab))
## [0.8.1](https://github.com/Monadical-SAS/reflector/compare/v0.8.0...v0.8.1) (2025-08-29)
### Bug Fixes
* make webhook secret/url allowing null ([#590](https://github.com/Monadical-SAS/reflector/issues/590)) ([84a3812](https://github.com/Monadical-SAS/reflector/commit/84a381220bc606231d08d6f71d4babc818fa3c75))
## [0.8.0](https://github.com/Monadical-SAS/reflector/compare/v0.7.3...v0.8.0) (2025-08-29)
### Features
* **cleanup:** add automatic data retention for public instances ([#574](https://github.com/Monadical-SAS/reflector/issues/574)) ([6f0c7c1](https://github.com/Monadical-SAS/reflector/commit/6f0c7c1a5e751713366886c8e764c2009e12ba72))
* **rooms:** add webhook for transcript completion ([#578](https://github.com/Monadical-SAS/reflector/issues/578)) ([88ed7cf](https://github.com/Monadical-SAS/reflector/commit/88ed7cfa7804794b9b54cad4c3facc8a98cf85fd))
### Bug Fixes
* file pipeline status reporting and websocket updates ([#589](https://github.com/Monadical-SAS/reflector/issues/589)) ([9dfd769](https://github.com/Monadical-SAS/reflector/commit/9dfd76996f851cc52be54feea078adbc0816dc57))
* Igor/evaluation ([#575](https://github.com/Monadical-SAS/reflector/issues/575)) ([124ce03](https://github.com/Monadical-SAS/reflector/commit/124ce03bf86044c18313d27228a25da4bc20c9c5))
* optimize parakeet transcription batching algorithm ([#577](https://github.com/Monadical-SAS/reflector/issues/577)) ([7030e0f](https://github.com/Monadical-SAS/reflector/commit/7030e0f23649a8cf6c1eb6d5889684a41ce849ec))
## [0.7.3](https://github.com/Monadical-SAS/reflector/compare/v0.7.2...v0.7.3) (2025-08-22)
### Bug Fixes
* cleaned repo, and get git-leaks clean ([359280d](https://github.com/Monadical-SAS/reflector/commit/359280dd340433ba4402ed69034094884c825e67))
* restore previous behavior on live pipeline + audio downscaler ([#561](https://github.com/Monadical-SAS/reflector/issues/561)) ([9265d20](https://github.com/Monadical-SAS/reflector/commit/9265d201b590d23c628c5f19251b70f473859043))
## [0.7.2](https://github.com/Monadical-SAS/reflector/compare/v0.7.1...v0.7.2) (2025-08-21)
### Bug Fixes
* docker image not loading libgomp.so.1 for torch ([#560](https://github.com/Monadical-SAS/reflector/issues/560)) ([773fccd](https://github.com/Monadical-SAS/reflector/commit/773fccd93e887c3493abc2e4a4864dddce610177))
* include shared rooms to search ([#558](https://github.com/Monadical-SAS/reflector/issues/558)) ([499eced](https://github.com/Monadical-SAS/reflector/commit/499eced3360b84fb3a90e1c8a3b554290d21adc2))
## [0.7.1](https://github.com/Monadical-SAS/reflector/compare/v0.7.0...v0.7.1) (2025-08-21)

View File

@@ -66,7 +66,6 @@ pnpm install
# Copy configuration templates
cp .env_template .env
cp config-template.ts config.ts
```
**Development:**
@@ -152,7 +151,7 @@ All endpoints prefixed `/v1/`:
**Frontend** (`www/.env`):
- `NEXTAUTH_URL`, `NEXTAUTH_SECRET` - Authentication configuration
- `NEXT_PUBLIC_REFLECTOR_API_URL` - Backend API endpoint
- `REFLECTOR_API_URL` - Backend API endpoint
- `REFLECTOR_DOMAIN_CONFIG` - Feature flags and domain settings
## Testing Strategy

View File

@@ -1,43 +1,60 @@
<div align="center">
<img width="100" alt="image" src="https://github.com/user-attachments/assets/66fb367b-2c89-4516-9912-f47ac59c6a7f"/>
# Reflector
Reflector Audio Management and Analysis is a cutting-edge web application under development by Monadical. It utilizes AI to record meetings, providing a permanent record with transcripts, translations, and automated summaries.
Reflector is an AI-powered audio transcription and meeting analysis platform that provides real-time transcription, speaker diarization, translation and summarization for audio content and live meetings. It works 100% with local models (whisper/parakeet, pyannote, seamless-m4t, and your local llm like phi-4).
[![Tests](https://github.com/monadical-sas/reflector/actions/workflows/pytests.yml/badge.svg?branch=main&event=push)](https://github.com/monadical-sas/reflector/actions/workflows/pytests.yml)
[![Tests](https://github.com/monadical-sas/reflector/actions/workflows/test_server.yml/badge.svg?branch=main&event=push)](https://github.com/monadical-sas/reflector/actions/workflows/test_server.yml)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
## Screenshots
</div>
<table>
<tr>
<td>
<a href="https://github.com/user-attachments/assets/3a976930-56c1-47ef-8c76-55d3864309e3">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/3a976930-56c1-47ef-8c76-55d3864309e3" />
<a href="https://github.com/user-attachments/assets/21f5597c-2930-4899-a154-f7bd61a59e97">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/21f5597c-2930-4899-a154-f7bd61a59e97" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/bfe3bde3-08af-4426-a9a1-11ad5cd63b33">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/bfe3bde3-08af-4426-a9a1-11ad5cd63b33" />
<a href="https://github.com/user-attachments/assets/f6b9399a-5e51-4bae-b807-59128d0a940c">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/f6b9399a-5e51-4bae-b807-59128d0a940c" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/7b60c9d0-efe4-474f-a27b-ea13bd0fabdc">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/7b60c9d0-efe4-474f-a27b-ea13bd0fabdc" />
<a href="https://github.com/user-attachments/assets/a42ce460-c1fd-4489-a995-270516193897">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/a42ce460-c1fd-4489-a995-270516193897" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/21929f6d-c309-42fe-9c11-f1299e50fbd4">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/21929f6d-c309-42fe-9c11-f1299e50fbd4" />
</a>
</td>
</tr>
</table>
## What is Reflector?
Reflector is a web application that utilizes local models to process audio content, providing:
- **Real-time Transcription**: Convert speech to text using [Whisper](https://github.com/openai/whisper) (multi-language) or [Parakeet](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2) (English) models
- **Speaker Diarization**: Identify and label different speakers using [Pyannote](https://github.com/pyannote/pyannote-audio) 3.1
- **Live Translation**: Translate audio content in real-time to many languages with [Facebook Seamless-M4T](https://github.com/facebookresearch/seamless_communication)
- **Topic Detection & Summarization**: Extract key topics and generate concise summaries using LLMs
- **Meeting Recording**: Create permanent records of meetings with searchable transcripts
Currently we provide a [modal.com](https://modal.com/) GPU template for deployment.
## Background
The project architecture consists of three primary components:
- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **Back-End**: Python server that offers an API and data persistence, found in `server/`.
- **GPU implementation**: Providing services such as speech-to-text transcription, topic generation, automated summaries, and translations. Most reliable option is Modal deployment
- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **GPU implementation**: Providing services such as speech-to-text transcription, topic generation, automated summaries, and translations.
It also uses authentik for authentication if activated, and Vercel for deployment and configuration of the front-end.
It also uses authentik for authentication if activated.
## Contribution Guidelines
@@ -72,6 +89,8 @@ Note: We currently do not have instructions for Windows users.
## Installation
*Note: we're working toward a better installation process; these instructions are not accurate for now*
### Frontend
Start with `cd www`.
@@ -80,11 +99,10 @@ Start with `cd www`.
```bash
pnpm install
cp .env_template .env
cp config-template.ts config.ts
cp .env.example .env
```
Then, fill in the environment variables in `.env` and the configuration in `config.ts` as needed. If you are unsure on how to proceed, ask in Zulip.
Then, fill in the environment variables in `.env` as needed. If you are unsure on how to proceed, ask in Zulip.
**Run in development mode**
@@ -149,3 +167,47 @@ You can manually process an audio file by calling the process tool:
```bash
uv run python -m reflector.tools.process path/to/audio.wav
```
## Reprocessing any transcription
```bash
uv run -m reflector.tools.process_transcript 81ec38d1-9dd7-43d2-b3f8-51f4d34a07cd --sync
```
## Build-time env variables
Next.js projects typically use NEXT_PUBLIC_-prefixed build-time variables. We don't have those because we need to serve a customizable prebuilt docker container.
Instead, all variables are runtime. Variables needed by the frontend are served to the app at initial render.
This also means there is no static prebuild and no static JS/HTML files to serve.
## Feature Flags
Reflector uses environment variable-based feature flags to control application functionality. These flags allow you to enable or disable features without code changes.
### Available Feature Flags
| Feature Flag | Environment Variable |
|-------------|---------------------|
| `requireLogin` | `FEATURE_REQUIRE_LOGIN` |
| `privacy` | `FEATURE_PRIVACY` |
| `browse` | `FEATURE_BROWSE` |
| `sendToZulip` | `FEATURE_SEND_TO_ZULIP` |
| `rooms` | `FEATURE_ROOMS` |
### Setting Feature Flags
Feature flags are controlled via environment variables using the pattern `FEATURE_{FEATURE_NAME}` where `{FEATURE_NAME}` is the SCREAMING_SNAKE_CASE version of the feature name.
**Examples:**
```bash
# Enable user authentication requirement
FEATURE_REQUIRE_LOGIN=true
# Disable browse functionality
FEATURE_BROWSE=false
# Enable Zulip integration
FEATURE_SEND_TO_ZULIP=true
```
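
For reference, a minimal Python sketch of this naming convention. The helper names and the set of accepted truthy values are illustrative assumptions, not the app's actual implementation:

```python
import os
import re


def feature_env_var(feature_name: str) -> str:
    # camelCase -> SCREAMING_SNAKE_CASE, e.g. "requireLogin" -> "FEATURE_REQUIRE_LOGIN"
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", feature_name)
    return f"FEATURE_{snake.upper()}"


def feature_enabled(feature_name: str, default: bool = False) -> bool:
    # Hypothetical lookup: reads the env var and interprets common truthy strings.
    raw = os.environ.get(feature_env_var(feature_name))
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}


assert feature_env_var("requireLogin") == "FEATURE_REQUIRE_LOGIN"
assert feature_env_var("sendToZulip") == "FEATURE_SEND_TO_ZULIP"
```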

39
docker-compose.prod.yml Normal file
View File

@@ -0,0 +1,39 @@
# Production Docker Compose configuration for Frontend
# Usage: docker compose -f docker-compose.prod.yml up -d
services:
web:
build:
context: ./www
dockerfile: Dockerfile
image: reflector-frontend:latest
environment:
- KV_URL=${KV_URL:-redis://redis:6379}
- SITE_URL=${SITE_URL}
- API_URL=${API_URL}
- WEBSOCKET_URL=${WEBSOCKET_URL}
- NEXTAUTH_URL=${NEXTAUTH_URL:-http://localhost:3000}
- NEXTAUTH_SECRET=${NEXTAUTH_SECRET:-changeme-in-production}
- AUTHENTIK_ISSUER=${AUTHENTIK_ISSUER}
- AUTHENTIK_CLIENT_ID=${AUTHENTIK_CLIENT_ID}
- AUTHENTIK_CLIENT_SECRET=${AUTHENTIK_CLIENT_SECRET}
- AUTHENTIK_REFRESH_TOKEN_URL=${AUTHENTIK_REFRESH_TOKEN_URL}
- SENTRY_DSN=${SENTRY_DSN}
- SENTRY_IGNORE_API_RESOLUTION_ERROR=${SENTRY_IGNORE_API_RESOLUTION_ERROR:-1}
depends_on:
- redis
restart: unless-stopped
redis:
image: redis:7.2-alpine
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 3s
retries: 3
volumes:
- redis_data:/data
volumes:
redis_data:

View File

@@ -6,6 +6,7 @@ services:
- 1250:1250
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -16,6 +17,7 @@ services:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -26,6 +28,7 @@ services:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -36,7 +39,7 @@ services:
ports:
- 6379:6379
web:
image: node:18
image: node:22-alpine
ports:
- "3000:3000"
command: sh -c "corepack enable && pnpm install && pnpm dev"
@@ -47,6 +50,8 @@ services:
- /app/node_modules
env_file:
- ./www/.env.local
environment:
- NODE_ENV=development
postgres:
image: postgres:17

241
docs/transcript.md Normal file
View File

@@ -0,0 +1,241 @@
# Transcript Formats
The Reflector API provides multiple output formats for transcript data through the `transcript_format` query parameter on the GET `/v1/transcripts/{id}` endpoint.
## Overview
When retrieving a transcript, you can specify the desired format using the `transcript_format` query parameter. The API supports four formats optimized for different use cases:
- **text** - Plain text with speaker names (default)
- **text-timestamped** - Timestamped text with speaker names
- **webvtt-named** - WebVTT subtitle format with participant names
- **json** - Structured JSON segments with full metadata
All formats include participant information when available, resolving speaker IDs to actual names.
## Query Parameter Usage
```
GET /v1/transcripts/{id}?transcript_format={format}
```
### Parameters
- `transcript_format` (optional): The desired output format
- Type: `"text" | "text-timestamped" | "webvtt-named" | "json"`
- Default: `"text"`
## Format Descriptions
### Text Format (`text`)
**Use case:** Simple, human-readable transcript for display or export.
**Format:** Speaker names followed by their dialogue, one line per segment.
**Example:**
```
John Smith: Hello everyone
Jane Doe: Hi there
John Smith: How are you today?
```
**Request:**
```bash
GET /v1/transcripts/{id}?transcript_format=text
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "text",
"transcript": "John Smith: Hello everyone\nJane Doe: Hi there\nJohn Smith: How are you today?",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
### Text Timestamped Format (`text-timestamped`)
**Use case:** Transcript with timing information for navigation or reference.
**Format:** `[MM:SS]` timestamp prefix before each speaker and dialogue.
**Example:**
```
[00:00] John Smith: Hello everyone
[00:05] Jane Doe: Hi there
[00:12] John Smith: How are you today?
```
**Request:**
```bash
GET /v1/transcripts/{id}?transcript_format=text-timestamped
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "text-timestamped",
"transcript": "[00:00] John Smith: Hello everyone\n[00:05] Jane Doe: Hi there\n[00:12] John Smith: How are you today?",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
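A minimal sketch of the `[MM:SS]` prefix computation, assuming the timestamp is the segment's start time in seconds (the function name here is illustrative):

```python
def mmss(seconds: float) -> str:
    # Floor to whole seconds, then split into minutes and seconds.
    total = int(seconds)
    return f"[{total // 60:02d}:{total % 60:02d}]"


assert mmss(0.0) == "[00:00]"
assert mmss(72.4) == "[01:12]"
```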
### WebVTT Named Format (`webvtt-named`)
**Use case:** Subtitle files for video players, accessibility tools, or video editing.
**Format:** Standard WebVTT subtitle format with voice tags using participant names.
**Example:**
```
WEBVTT
00:00:00.000 --> 00:00:05.000
<v John Smith>Hello everyone
00:00:05.000 --> 00:00:12.000
<v Jane Doe>Hi there
00:00:12.000 --> 00:00:18.000
<v John Smith>How are you today?
```
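An illustrative sketch of producing this output from JSON-format segments; the cue-building details are assumptions, not Reflector's implementation:

```python
def vtt_timestamp(seconds: float) -> str:
    # WebVTT timestamps use HH:MM:SS.mmm.
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"


def to_webvtt(segments: list[dict]) -> str:
    lines = ["WEBVTT", ""]
    for seg in segments:
        lines.append(f"{vtt_timestamp(seg['start'])} --> {vtt_timestamp(seg['end'])}")
        lines.append(f"<v {seg['speaker_name']}>{seg['text']}")
        lines.append("")  # blank line between cues
    return "\n".join(lines)
```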
**Request:**
```bash
GET /v1/transcripts/{id}?transcript_format=webvtt-named
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "webvtt-named",
"transcript": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v John Smith>Hello everyone\n\n...",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
### JSON Format (`json`)
**Use case:** Programmatic access with full timing and speaker metadata.
**Format:** Array of segment objects with speaker information, text content, and precise timing.
**Example:**
```json
[
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "Hello everyone",
"start": 0.0,
"end": 5.0
},
{
"speaker": 1,
"speaker_name": "Jane Doe",
"text": "Hi there",
"start": 5.0,
"end": 12.0
},
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "How are you today?",
"start": 12.0,
"end": 18.0
}
]
```
**Request:**
```bash
GET /v1/transcripts/{id}?transcript_format=json
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "json",
"transcript": [
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "Hello everyone",
"start": 0.0,
"end": 5.0
},
{
"speaker": 1,
"speaker_name": "Jane Doe",
"text": "Hi there",
"start": 5.0,
"end": 12.0
}
],
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
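As an example of consuming the `json` format, here is a small sketch computing total speaking time per speaker, using only the documented segment fields (`speaker_name`, `start`, `end`):

```python
from collections import defaultdict


def talk_time(segments: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for seg in segments:
        totals[seg["speaker_name"]] += seg["end"] - seg["start"]
    return dict(totals)


segments = [
    {"speaker_name": "John Smith", "text": "Hello everyone", "start": 0.0, "end": 5.0},
    {"speaker_name": "Jane Doe", "text": "Hi there", "start": 5.0, "end": 12.0},
]
print(talk_time(segments))  # {'John Smith': 5.0, 'Jane Doe': 7.0}
```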
## Response Structure
All formats return the same base transcript metadata with an additional `transcript_format` field and format-specific `transcript` field:
### Common Fields
- `id`: Transcript identifier
- `user_id`: Owner user ID (if authenticated)
- `name`: Transcript name
- `status`: Processing status
- `locked`: Whether transcript is locked for editing
- `duration`: Total duration in seconds
- `title`: Auto-generated or custom title
- `short_summary`: Brief summary
- `long_summary`: Detailed summary
- `created_at`: Creation timestamp
- `share_mode`: Access control setting
- `source_language`: Original audio language
- `target_language`: Translation target language
- `reviewed`: Whether transcript has been reviewed
- `meeting_id`: Associated meeting ID (if applicable)
- `source_kind`: Source type (live, file, room)
- `room_id`: Associated room ID (if applicable)
- `audio_deleted`: Whether audio has been deleted
- `participants`: Array of participant objects with speaker mappings
### Format-Specific Fields
- `transcript_format`: The format identifier (discriminator field)
- `transcript`: The formatted transcript content (string for text/webvtt formats, array for json format)
## Speaker Name Resolution
All formats resolve speaker IDs to participant names when available:
- If a participant exists for the speaker ID, their name is used
- If no participant exists, a default name like "Speaker 0" is generated
- Speaker IDs are integers (0, 1, 2, etc.) assigned during diarization
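
Putting it together, a hedged client sketch: only the endpoint path, the `transcript_format` parameter, and the resolution rule above come from this document; the base URL is a placeholder and authentication is omitted.

```python
import requests

BASE_URL = "https://reflector.example.com"  # placeholder


def fetch_transcript(transcript_id: str, fmt: str = "json") -> dict:
    resp = requests.get(
        f"{BASE_URL}/v1/transcripts/{transcript_id}",
        params={"transcript_format": fmt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def speaker_name(speaker: int, participants: list[dict]) -> str:
    # Resolution rule described above: participant name if present,
    # otherwise a generated default like "Speaker 0".
    for p in participants:
        if p["speaker"] == speaker:
            return p["name"]
    return f"Speaker {speaker}"
```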

33
gpu/modal_deployments/.gitignore vendored Normal file
View File

@@ -0,0 +1,33 @@
# OS / Editor
.DS_Store
.vscode/
.idea/
# Python
__pycache__/
*.py[cod]
*$py.class
# Logs
*.log
# Env and secrets
.env
.env.*
*.env
*.secret
# Build / dist
build/
dist/
.eggs/
*.egg-info/
# Coverage / test
.pytest_cache/
.coverage*
htmlcov/
# Modal local state (if any)
modal_mounts/
.modal_cache/

View File

@@ -0,0 +1,608 @@
import os
import sys
import threading
import uuid
from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
from urllib.parse import urlparse
import modal
MODEL_NAME = "large-v2"
MODEL_COMPUTE_TYPE: str = "float16"
MODEL_NUM_WORKERS: int = 1
MINUTES = 60 # seconds
SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/models"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
VAD_CONFIG = {
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
WhisperUniqFilename = NewType("WhisperUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
app = modal.App("reflector-transcriber")
model_cache = modal.Volume.from_name("models", create_if_missing=True)
upload_volume = modal.Volume.from_name("whisper-uploads", create_if_missing=True)
class TimeSegment(NamedTuple):
"""Represents a time segment with start and end times."""
start: float
end: float
class AudioSegment(NamedTuple):
"""Represents an audio segment with timing and audio data."""
start: float
end: float
audio: any
class TranscriptResult(NamedTuple):
"""Represents a transcription result with text and word timings."""
text: str
words: list["WordTiming"]
class WordTiming(TypedDict):
"""Represents a word with its timing information."""
word: str
start: float
end: float
def download_model():
from faster_whisper import download_model
model_cache.reload()
download_model(MODEL_NAME, cache_dir=CACHE_PATH)
model_cache.commit()
image = (
modal.Image.debian_slim(python_version="3.12")
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
"LD_LIBRARY_PATH": (
"/usr/local/lib/python3.12/site-packages/nvidia/cudnn/lib/:"
"/opt/conda/lib/python3.12/site-packages/nvidia/cublas/lib/"
),
}
)
.apt_install("ffmpeg")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
"fastapi==0.115.12",
"requests",
"librosa==0.10.1",
"numpy<2",
"silero-vad==5.1.0",
)
.run_function(download_model, volumes={CACHE_PATH: model_cache})
)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[WhisperUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = WhisperUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
"""Add 0.5s of silence if audio is shorter than the silence_padding window.
Whisper does not require this strictly, but aligning behavior with Parakeet
avoids edge-case crashes on extremely short inputs and makes comparisons easier.
"""
import numpy as np
audio_duration = len(audio_array) / sample_rate
if audio_duration < VAD_CONFIG["silence_padding"]:
silence_samples = int(sample_rate * VAD_CONFIG["silence_padding"])
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
@app.cls(
gpu="A10G",
timeout=5 * MINUTES,
scaledown_window=5 * MINUTES,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
@modal.concurrent(max_inputs=10)
class TranscriberWhisperLive:
"""Live transcriber class for small audio segments (A10G).
Mirrors the Parakeet live class API but uses Faster-Whisper under the hood.
"""
@modal.enter()
def enter(self):
import faster_whisper
import torch
self.lock = threading.Lock()
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=CACHE_PATH,
local_files_only=True,
)
print(f"Model is on device: {self.device}")
@modal.method()
def transcribe_segment(
self,
filename: str,
language: str = "en",
):
"""Transcribe a single uploaded audio file by filename."""
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
file_path,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(segment.text for segment in segments).strip()
words = [
{
"word": word.word,
"start": round(float(word.start), 2),
"end": round(float(word.end), 2),
}
for segment in segments
for word in segment.words
]
return {"text": text, "words": words}
@modal.method()
def transcribe_batch(
self,
filenames: list[str],
language: str = "en",
):
"""Transcribe multiple uploaded audio files and return per-file results."""
upload_volume.reload()
results = []
for filename in filenames:
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"Batch file not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
file_path,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start), 2),
"end": round(float(w.end), 2),
}
for seg in segments
for w in seg.words
]
results.append(
{
"filename": filename,
"text": text,
"words": words,
}
)
return results
@app.cls(
gpu="L40S",
timeout=15 * MINUTES,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
class TranscriberWhisperFile:
"""File transcriber for larger/longer audio, using VAD-driven batching (L40S)."""
@modal.enter()
def enter(self):
import faster_whisper
import torch
from silero_vad import load_silero_vad
self.lock = threading.Lock()
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=CACHE_PATH,
local_files_only=True,
)
self.vad_model = load_silero_vad(onnx=False)
@modal.method()
def transcribe_segment(
self, filename: str, timestamp_offset: float = 0.0, language: str = "en"
):
import librosa
import numpy as np
from silero_vad import VADIterator
def vad_segments(
audio_array,
sample_rate: int = SAMPLERATE,
window_size: int = VAD_CONFIG["window_size"],
) -> Generator[TimeSegment, None, None]:
"""Generate speech segments as TimeSegment using Silero VAD."""
iterator = VADIterator(self.vad_model, sampling_rate=sample_rate)
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech = iterator(chunk)
if not speech:
continue
if "start" in speech:
start = speech["start"]
continue
if "end" in speech and start is not None:
end = speech["end"]
yield TimeSegment(
start / float(SAMPLERATE), end / float(SAMPLERATE)
)
start = None
iterator.reset_states()
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array, _sr = librosa.load(file_path, sr=SAMPLERATE, mono=True)
# Batch segments up to ~30s windows by merging contiguous VAD segments
merged_batches: list[TimeSegment] = []
batch_start = None
batch_end = None
max_duration = VAD_CONFIG["batch_max_duration"]
for segment in vad_segments(audio_array):
seg_start, seg_end = segment.start, segment.end
if batch_start is None:
batch_start, batch_end = seg_start, seg_end
continue
if seg_end - batch_start <= max_duration:
batch_end = seg_end
else:
merged_batches.append(TimeSegment(batch_start, batch_end))
batch_start, batch_end = seg_start, seg_end
if batch_start is not None and batch_end is not None:
merged_batches.append(TimeSegment(batch_start, batch_end))
all_text = []
all_words = []
for segment in merged_batches:
start_time, end_time = segment.start, segment.end
s_idx = int(start_time * SAMPLERATE)
e_idx = int(end_time * SAMPLERATE)
segment = audio_array[s_idx:e_idx]
segment = pad_audio(segment, SAMPLERATE)
with self.lock:
segments, _ = self.model.transcribe(
segment,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start) + start_time + timestamp_offset, 2),
"end": round(float(w.end) + start_time + timestamp_offset, 2),
}
for seg in segments
for w in seg.words
]
if text:
all_text.append(text)
all_words.extend(words)
return {"text": " ".join(all_text), "words": all_words}
def detect_audio_format(url: str, headers: dict) -> str:
from urllib.parse import urlparse
from fastapi import HTTPException
url_path = urlparse(url).path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return ext
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return "mp3"
if "audio/wav" in content_type:
return "wav"
if "audio/mp4" in content_type:
return "mp4"
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format for URL. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
def download_audio_to_volume(audio_file_url: str) -> tuple[str, str]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
@app.function(
scaledown_window=60,
timeout=600,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
image=image,
)
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
from fastapi import (
Body,
Depends,
FastAPI,
Form,
HTTPException,
UploadFile,
status,
)
from fastapi.security import OAuth2PasswordBearer
transcriber_live = TranscriberWhisperLive()
transcriber_file = TranscriberWhisperFile()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(dict):
pass
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
upload_files = [file] if file else files
uploaded_filenames: list[str] = []
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1]
if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_filenames.append(unique_filename)
upload_volume.commit()
try:
if batch and len(upload_files) > 1:
func = transcriber_live.transcribe_batch.spawn(
filenames=uploaded_filenames,
language=language,
)
results = func.get()
return {"results": results}
results = []
for filename in uploaded_filenames:
func = transcriber_live.transcribe_segment.spawn(
filename=filename,
language=language,
)
result = func.get()
result["filename"] = filename
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
finally:
for filename in uploaded_filenames:
try:
file_path = f"{UPLOADS_PATH}/{filename}"
os.remove(file_path)
except Exception:
pass
upload_volume.commit()
@app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
def transcribe_from_url(
audio_file_url: str = Body(
..., description="URL of the audio file to transcribe"
),
model: str = Body(MODEL_NAME),
language: str = Body("en"),
timestamp_offset: float = Body(0.0),
):
unique_filename, _audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = transcriber_file.transcribe_segment.spawn(
filename=unique_filename,
timestamp_offset=timestamp_offset,
language=language,
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
os.remove(file_path)
upload_volume.commit()
except Exception:
pass
return app
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()
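
A hedged usage sketch for the `POST /v1/audio/transcriptions` endpoint defined above. The route, form fields, and bearer auth come from the code; the host URL, API key, and file path are placeholders:

```python
import requests

url = "https://example--reflector-transcriber-web.modal.run/v1/audio/transcriptions"
headers = {"Authorization": "Bearer <REFLECTOR_GPU_APIKEY>"}  # placeholder key

with open("sample.wav", "rb") as f:
    resp = requests.post(
        url,
        headers=headers,
        files={"file": ("sample.wav", f, "audio/wav")},
        data={"language": "en"},
        timeout=600,
    )
resp.raise_for_status()
# For a single file, the endpoint returns one result object.
print(resp.json()["text"])
```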

View File

@@ -3,7 +3,7 @@ import os
import sys
import threading
import uuid
from typing import Mapping, NewType
from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
from urllib.parse import urlparse
import modal
@@ -14,10 +14,7 @@ SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/cache"
VAD_CONFIG = {
"max_segment_duration": 30.0,
"batch_max_files": 10,
"batch_max_duration": 5.0,
"min_segment_duration": 0.02,
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
@@ -25,6 +22,37 @@ VAD_CONFIG = {
ParakeetUniqFilename = NewType("ParakeetUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
class TimeSegment(NamedTuple):
"""Represents a time segment with start and end times."""
start: float
end: float
class AudioSegment(NamedTuple):
"""Represents an audio segment with timing and audio data."""
start: float
end: float
audio: any
class TranscriptResult(NamedTuple):
"""Represents a transcription result with text and word timings."""
text: str
words: list["WordTiming"]
class WordTiming(TypedDict):
"""Represents a word with its timing information."""
word: str
start: float
end: float
app = modal.App("reflector-transcriber-parakeet")
# Volume for caching model weights
@@ -49,13 +77,13 @@ image = (
.pip_install(
"hf_transfer==0.1.9",
"huggingface_hub[hf-xet]==0.31.2",
"nemo_toolkit[asr]==2.3.0",
"nemo_toolkit[asr]==2.5.0",
"cuda-python==12.8.0",
"fastapi==0.115.12",
"numpy<2",
"librosa==0.10.1",
"librosa==0.11.0",
"requests",
"silero-vad==5.1.0",
"silero-vad==6.2.0",
"torch",
)
.entrypoint([]) # silence chatty logs by container on start
@@ -170,12 +198,14 @@ class TranscriberParakeetLive:
(output,) = self.model.transcribe([padded_audio], timestamps=True)
text = output.text.strip()
words = [
{
"word": word_info["word"],
"start": round(word_info["start"], 2),
"end": round(word_info["end"], 2),
}
words: list[WordTiming] = [
WordTiming(
# XXX the space added here is to match the output of whisper:
# whisper adds a space to each word, while parakeet doesn't
word=word_info["word"] + " ",
start=round(word_info["start"], 2),
end=round(word_info["end"], 2),
)
for word_info in output.timestamp["word"]
]
@@ -211,12 +241,12 @@ class TranscriberParakeetLive:
for i, (filename, output) in enumerate(zip(filenames, outputs)):
text = output.text.strip()
words = [
{
"word": word_info["word"],
"start": round(word_info["start"], 2),
"end": round(word_info["end"], 2),
}
words: list[WordTiming] = [
WordTiming(
word=word_info["word"] + " ",
start=round(word_info["start"], 2),
end=round(word_info["end"], 2),
)
for word_info in output.timestamp["word"]
]
@@ -271,9 +301,12 @@ class TranscriberParakeetFile:
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
return audio_array
def vad_segment_generator(audio_array):
def vad_segment_generator(
audio_array,
) -> Generator[TimeSegment, None, None]:
"""Generate speech segments using VAD with start/end sample indices"""
vad_iterator = VADIterator(self.vad_model, sampling_rate=SAMPLERATE)
audio_duration = len(audio_array) / float(SAMPLERATE)
window_size = VAD_CONFIG["window_size"]
start = None
@@ -297,107 +330,125 @@ class TranscriberParakeetFile:
start_time = start / float(SAMPLERATE)
end_time = end / float(SAMPLERATE)
# Extract the actual audio segment
audio_segment = audio_array[start:end]
yield (start_time, end_time, audio_segment)
yield TimeSegment(start_time, end_time)
start = None
if start is not None:
start_time = start / float(SAMPLERATE)
yield TimeSegment(start_time, audio_duration)
vad_iterator.reset_states()
def vad_segment_filter(segments):
    """Filter VAD segments by duration and chunk large segments"""
    min_dur = VAD_CONFIG["min_segment_duration"]
    max_dur = VAD_CONFIG["max_segment_duration"]
    for start_time, end_time, audio_segment in segments:
        segment_duration = end_time - start_time
        # Skip very small segments
        if segment_duration < min_dur:
            continue
        # If segment is within max duration, yield as-is
        if segment_duration <= max_dur:
            yield (start_time, end_time, audio_segment)
            continue
        # Chunk large segments into smaller pieces
        chunk_samples = int(max_dur * SAMPLERATE)
        current_start = start_time
        for chunk_offset in range(0, len(audio_segment), chunk_samples):
            chunk_audio = audio_segment[
                chunk_offset : chunk_offset + chunk_samples
            ]
            if len(chunk_audio) == 0:
                break
            chunk_duration = len(chunk_audio) / float(SAMPLERATE)
            chunk_end = current_start + chunk_duration
            # Only yield chunks that meet minimum duration
            if chunk_duration >= min_dur:
                yield (current_start, chunk_end, chunk_audio)
            current_start = chunk_end
def batch_speech_segments(
    segments: Generator[TimeSegment, None, None], max_duration: int
) -> Generator[TimeSegment, None, None]:
    """
    Input segments:
    [0-2] [3-5] [6-8] [10-11] [12-15] [17-19] [20-22]

    (max_duration=10)

    Output batches:
    [0-8] [10-19] [20-22]

    Note: silences are kept for better transcription; the previous implementation
    passed segments separately, but the output was less accurate.
    """
    batch_start_time = None
    batch_end_time = None
    for segment in segments:
        start_time, end_time = segment.start, segment.end
        if batch_start_time is None or batch_end_time is None:
            batch_start_time = start_time
            batch_end_time = end_time
            continue
        total_duration = end_time - batch_start_time
        if total_duration <= max_duration:
            batch_end_time = end_time
            continue
        yield TimeSegment(batch_start_time, batch_end_time)
        batch_start_time = start_time
        batch_end_time = end_time
    if batch_start_time is None or batch_end_time is None:
        return
    yield TimeSegment(batch_start_time, batch_end_time)
def batch_segments(segments, max_files=10, max_duration=5.0):
    batch = []
    batch_duration = 0.0
    for start_time, end_time, audio_segment in segments:
        segment_duration = end_time - start_time
        if segment_duration < VAD_CONFIG["silence_padding"]:
            silence_samples = int(
                (VAD_CONFIG["silence_padding"] - segment_duration) * SAMPLERATE
            )
            padding = np.zeros(silence_samples, dtype=np.float32)
            audio_segment = np.concatenate([audio_segment, padding])
            segment_duration = VAD_CONFIG["silence_padding"]
        batch.append((start_time, end_time, audio_segment))
        batch_duration += segment_duration
        if len(batch) >= max_files or batch_duration >= max_duration:
            yield batch
            batch = []
            batch_duration = 0.0
    if batch:
        yield batch
def batch_segment_to_audio_segment(
    segments: Generator[TimeSegment, None, None],
    audio_array,
) -> Generator[AudioSegment, None, None]:
    """Extract audio segments and apply padding for Parakeet compatibility.

    Uses pad_audio to ensure segments are at least 0.5s long, preventing
    Parakeet crashes. This padding may cause slight timing overlaps between
    segments, which are corrected by enforce_word_timing_constraints.
    """
    for segment in segments:
        start_time, end_time = segment.start, segment.end
        start_sample = int(start_time * SAMPLERATE)
        end_sample = int(end_time * SAMPLERATE)
        audio_segment = audio_array[start_sample:end_sample]
        padded_segment = pad_audio(audio_segment, SAMPLERATE)
        yield AudioSegment(start_time, end_time, padded_segment)
def transcribe_batch(model, audio_segments: list) -> list:
    with NoStdStreams():
        outputs = model.transcribe(audio_segments, timestamps=True)
    return outputs
def enforce_word_timing_constraints(
    words: list[WordTiming],
) -> list[WordTiming]:
    """Enforce that word end times don't exceed the start time of the next word.

    Due to silence padding added in batch_segment_to_audio_segment for better
    transcription accuracy, word timings from different segments may overlap.
    This function ensures there are no overlaps by adjusting end times.
    """
    if len(words) <= 1:
        return words
    enforced_words = []
    for i, word in enumerate(words):
        enforced_word = word.copy()
        if i < len(words) - 1:
            next_start = words[i + 1]["start"]
            if enforced_word["end"] > next_start:
                enforced_word["end"] = next_start
        enforced_words.append(enforced_word)
    return enforced_words
def emit_results(
    results: list,
    segments_info: list[AudioSegment],
) -> Generator[TranscriptResult, None, None]:
    """Yield transcribed text and word timings from model output, adjusting timestamps to absolute positions."""
    for output, segment in zip(results, segments_info):
        start_time, end_time = segment.start, segment.end
        text = output.text.strip()
        words: list[WordTiming] = [
            WordTiming(
                word=word_info["word"] + " ",
                start=round(
                    word_info["start"] + start_time + timestamp_offset, 2
                ),
                end=round(word_info["end"] + start_time + timestamp_offset, 2),
            )
            for word_info in output.timestamp["word"]
        ]
        yield TranscriptResult(text, words)
upload_volume.reload()
@@ -407,41 +458,31 @@ class TranscriberParakeetFile:
        audio_array = load_and_convert_audio(file_path)
        total_duration = len(audio_array) / float(SAMPLERATE)
        processed_duration = 0.0
        all_text_parts: list[str] = []
        all_words: list[WordTiming] = []
        raw_segments = vad_segment_generator(audio_array)
        speech_segments = batch_speech_segments(
            raw_segments,
            VAD_CONFIG["batch_max_duration"],
        )
        audio_segments = batch_segment_to_audio_segment(speech_segments, audio_array)
        for batch in audio_segments:
            audio_segment = batch.audio
            results = transcribe_batch(self.model, [audio_segment])
            for result in emit_results(
                results,
                [batch],
            ):
                if not result.text:
                    continue
                all_text_parts.append(result.text)
                all_words.extend(result.words)
            # Track duration of (padded) audio actually transcribed
            processed_duration += len(batch.audio) / float(SAMPLERATE)
        all_words = enforce_word_timing_constraints(all_words)
        combined_text = " ".join(all_text_parts)
        return {"text": combined_text, "words": all_words}


@@ -0,0 +1,2 @@
REFLECTOR_GPU_APIKEY=
HF_TOKEN=

gpu/self_hosted/.gitignore

@@ -0,0 +1,38 @@
cache/
# OS / Editor
.DS_Store
.vscode/
.idea/
# Python
__pycache__/
*.py[cod]
*$py.class
# Env and secrets
.env
*.env
*.secret
HF_TOKEN
REFLECTOR_GPU_APIKEY
# Virtual env / uv
.venv/
venv/
ENV/
uv/
# Build / dist
build/
dist/
.eggs/
*.egg-info/
# Coverage / test
.pytest_cache/
.coverage*
htmlcov/
# Logs
*.log


@@ -0,0 +1,46 @@
FROM python:3.12-slim
ENV PYTHONUNBUFFERED=1 \
UV_LINK_MODE=copy \
UV_NO_CACHE=1
WORKDIR /tmp
RUN apt-get update \
&& apt-get install -y \
ffmpeg \
curl \
ca-certificates \
gnupg \
wget \
&& apt-get clean
# Add NVIDIA CUDA repo for Debian 12 (bookworm) and install cuDNN 9 for CUDA 12
ADD https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb /cuda-keyring.deb
RUN dpkg -i /cuda-keyring.deb \
&& rm /cuda-keyring.deb \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
cuda-cudart-12-6 \
libcublas-12-6 \
libcudnn9-cuda-12 \
libcudnn9-dev-cuda-12 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ADD https://astral.sh/uv/install.sh /uv-installer.sh
RUN sh /uv-installer.sh && rm /uv-installer.sh
ENV PATH="/root/.local/bin/:$PATH"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
RUN mkdir -p /app
WORKDIR /app
COPY pyproject.toml uv.lock /app/
COPY ./app /app/app
COPY ./main.py /app/
COPY ./runserver.sh /app/
EXPOSE 8000
CMD ["sh", "/app/runserver.sh"]

gpu/self_hosted/README.md

@@ -0,0 +1,73 @@
# Self-hosted Model API
Run transcription, translation, and diarization services compatible with Reflector's GPU Model API. Works on CPU or GPU.
Environment variables
- REFLECTOR_GPU_APIKEY: Optional Bearer token. If unset, auth is disabled.
- HF_TOKEN: Optional. Required for diarization to download pyannote pipelines
Requirements
- FFmpeg must be installed and on PATH (used for URL-based and segmented transcription)
- Python 3.12+
- NVIDIA GPU optional. If available, it will be used automatically
Local run
Set env vars in the self_hosted/.env file, then:
```bash
uv sync
uv run uvicorn main:app --host 0.0.0.0 --port 8000
```
Authentication
- If REFLECTOR_GPU_APIKEY is set, include header: Authorization: Bearer <key>
Endpoints
- POST /v1/audio/transcriptions
- multipart/form-data
- fields: file (single file) OR files[] (multiple files), language, batch (true/false)
- response: single { text, words, filename } or { results: [ ... ] }
- POST /v1/audio/transcriptions-from-url
- application/json
- body: { audio_file_url, language, timestamp_offset }
- response: { text, words }
- POST /translate
- text: query parameter
- body (application/json): { source_language, target_language }
- response: { text: { <src>: original, <tgt>: translated } }
- POST /diarize
- query parameters: audio_file_url, timestamp (optional)
- requires HF_TOKEN to be set (for pyannote)
- response: { diarization: [ { start, end, speaker } ] }
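Example request (a minimal smoke test; assumes the server is running locally on port 8000 and REFLECTOR_GPU_APIKEY is exported; drop the header if auth is disabled):
```bash
curl -X POST \
  -H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
  -F "file=@/path/to/audio.mp3" \
  -F "language=en" \
  http://localhost:8000/v1/audio/transcriptions
```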
OpenAPI docs
- Visit /docs when the server is running
Docker
- A Dockerfile and docker-compose.yml are included in this directory; `docker compose up --build` builds the image and starts the service on port 8000. Local run above works without Docker
Conformance tests
```bash
# From this directory
TRANSCRIPT_URL=http://localhost:8000 \
TRANSCRIPT_API_KEY=dev-key \
uv run -m pytest -m model_api --no-cov ../../server/tests/test_model_api_transcript.py

TRANSLATION_URL=http://localhost:8000 \
TRANSLATION_API_KEY=dev-key \
uv run -m pytest -m model_api --no-cov ../../server/tests/test_model_api_translation.py

DIARIZATION_URL=http://localhost:8000 \
DIARIZATION_API_KEY=dev-key \
uv run -m pytest -m model_api --no-cov ../../server/tests/test_model_api_diarization.py
```


@@ -0,0 +1,19 @@
import os
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
required_key = os.environ.get("REFLECTOR_GPU_APIKEY")
if not required_key:
return
if apikey == required_key:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)


@@ -0,0 +1,12 @@
from pathlib import Path
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
SAMPLE_RATE = 16000
VAD_CONFIG = {
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
# App-level paths
UPLOADS_PATH = Path("/tmp/whisper-uploads")


@@ -0,0 +1,30 @@
from contextlib import asynccontextmanager
from fastapi import FastAPI
from .routers.diarization import router as diarization_router
from .routers.transcription import router as transcription_router
from .routers.translation import router as translation_router
from .services.transcriber import WhisperService
from .services.diarizer import PyannoteDiarizationService
from .utils import ensure_dirs
@asynccontextmanager
async def lifespan(app: FastAPI):
ensure_dirs()
whisper_service = WhisperService()
whisper_service.load()
app.state.whisper = whisper_service
diarization_service = PyannoteDiarizationService()
diarization_service.load()
app.state.diarizer = diarization_service
yield
def create_app() -> FastAPI:
app = FastAPI(lifespan=lifespan)
app.include_router(transcription_router)
app.include_router(translation_router)
app.include_router(diarization_router)
return app


@@ -0,0 +1,30 @@
from typing import List
from fastapi import APIRouter, Depends, Request
from pydantic import BaseModel
from ..auth import apikey_auth
from ..services.diarizer import PyannoteDiarizationService
from ..utils import download_audio_file
router = APIRouter(tags=["diarization"])
class DiarizationSegment(BaseModel):
start: float
end: float
speaker: int
class DiarizationResponse(BaseModel):
diarization: List[DiarizationSegment]
@router.post(
"/diarize", dependencies=[Depends(apikey_auth)], response_model=DiarizationResponse
)
def diarize(request: Request, audio_file_url: str, timestamp: float = 0.0):
with download_audio_file(audio_file_url) as (file_path, _ext):
file_path = str(file_path)
diarizer: PyannoteDiarizationService = request.app.state.diarizer
return diarizer.diarize_file(file_path, timestamp=timestamp)


@@ -0,0 +1,109 @@
import uuid
from typing import Optional, Union
from fastapi import APIRouter, Body, Depends, Form, HTTPException, Request, UploadFile
from pydantic import BaseModel
from pathlib import Path
from ..auth import apikey_auth
from ..config import SUPPORTED_FILE_EXTENSIONS, UPLOADS_PATH
from ..services.transcriber import MODEL_NAME
from ..utils import cleanup_uploaded_files, download_audio_file
router = APIRouter(prefix="/v1/audio", tags=["transcription"])
class WordTiming(BaseModel):
word: str
start: float
end: float
class TranscriptResult(BaseModel):
text: str
words: list[WordTiming]
filename: Optional[str] = None
class TranscriptBatchResponse(BaseModel):
results: list[TranscriptResult]
@router.post(
"/transcriptions",
dependencies=[Depends(apikey_auth)],
response_model=Union[TranscriptResult, TranscriptBatchResponse],
)
def transcribe(
request: Request,
file: UploadFile = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
service = request.app.state.whisper
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
upload_files = [file] if file else files
uploaded_paths: list[Path] = []
with cleanup_uploaded_files(uploaded_paths):
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1].lower()
if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = UPLOADS_PATH / unique_filename
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_paths.append(file_path)
if batch and len(upload_files) > 1:
results = []
for path in uploaded_paths:
result = service.transcribe_file(str(path), language=language)
result["filename"] = path.name
results.append(result)
return {"results": results}
results = []
for path in uploaded_paths:
result = service.transcribe_file(str(path), language=language)
result["filename"] = path.name
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
@router.post(
"/transcriptions-from-url",
dependencies=[Depends(apikey_auth)],
response_model=TranscriptResult,
)
def transcribe_from_url(
request: Request,
audio_file_url: str = Body(..., description="URL of the audio file to transcribe"),
model: str = Body(MODEL_NAME),
language: str = Body("en"),
timestamp_offset: float = Body(0.0),
):
service = request.app.state.whisper
with download_audio_file(audio_file_url) as (file_path, _ext):
file_path = str(file_path)
result = service.transcribe_vad_url_segment(
file_path=file_path, timestamp_offset=timestamp_offset, language=language
)
return result


@@ -0,0 +1,28 @@
from typing import Dict
from fastapi import APIRouter, Body, Depends
from pydantic import BaseModel
from ..auth import apikey_auth
from ..services.translator import TextTranslatorService
router = APIRouter(tags=["translation"])
translator = TextTranslatorService()
class TranslationResponse(BaseModel):
text: Dict[str, str]
@router.post(
"/translate",
dependencies=[Depends(apikey_auth)],
response_model=TranslationResponse,
)
def translate(
text: str,
source_language: str = Body("en"),
target_language: str = Body("fr"),
):
return translator.translate(text, source_language, target_language)


@@ -0,0 +1,42 @@
import os
import threading
import torch
import torchaudio
from pyannote.audio import Pipeline
class PyannoteDiarizationService:
def __init__(self):
self._pipeline = None
self._device = "cpu"
self._lock = threading.Lock()
def load(self):
self._device = "cuda" if torch.cuda.is_available() else "cpu"
self._pipeline = Pipeline.from_pretrained(
"pyannote/speaker-diarization-3.1",
use_auth_token=os.environ.get("HF_TOKEN"),
)
self._pipeline.to(torch.device(self._device))
def diarize_file(self, file_path: str, timestamp: float = 0.0) -> dict:
if self._pipeline is None:
self.load()
waveform, sample_rate = torchaudio.load(file_path)
with self._lock:
diarization = self._pipeline(
{"waveform": waveform, "sample_rate": sample_rate}
)
words = []
for diarization_segment, _, speaker in diarization.itertracks(yield_label=True):
words.append(
{
"start": round(timestamp + diarization_segment.start, 3),
"end": round(timestamp + diarization_segment.end, 3),
"speaker": int(speaker[-2:])
if speaker and speaker[-2:].isdigit()
else 0,
}
)
return {"diarization": words}


@@ -0,0 +1,208 @@
import os
import shutil
import subprocess
import threading
from typing import Generator
import faster_whisper
import librosa
import numpy as np
import torch
from fastapi import HTTPException
from silero_vad import VADIterator, load_silero_vad
from ..config import SAMPLE_RATE, VAD_CONFIG
# Whisper configuration (service-local defaults)
MODEL_NAME = "large-v2"
# None delegates compute type to runtime: float16 on CUDA, int8 on CPU
MODEL_COMPUTE_TYPE = None
MODEL_NUM_WORKERS = 1
CACHE_PATH = os.path.join(os.path.expanduser("~"), ".cache", "reflector-whisper")
from ..utils import NoStdStreams
class WhisperService:
def __init__(self):
self.model = None
self.device = "cpu"
self.lock = threading.Lock()
def load(self):
self.device = "cuda" if torch.cuda.is_available() else "cpu"
compute_type = MODEL_COMPUTE_TYPE or (
"float16" if self.device == "cuda" else "int8"
)
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=compute_type,
num_workers=MODEL_NUM_WORKERS,
download_root=CACHE_PATH,
)
def pad_audio(self, audio_array, sample_rate: int = SAMPLE_RATE):
audio_duration = len(audio_array) / sample_rate
if audio_duration < VAD_CONFIG["silence_padding"]:
silence_samples = int(sample_rate * VAD_CONFIG["silence_padding"])
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
def enforce_word_timing_constraints(self, words: list[dict]) -> list[dict]:
if len(words) <= 1:
return words
enforced: list[dict] = []
for i, word in enumerate(words):
current = dict(word)
if i < len(words) - 1:
next_start = words[i + 1]["start"]
if current["end"] > next_start:
current["end"] = next_start
enforced.append(current)
return enforced
def transcribe_file(self, file_path: str, language: str = "en") -> dict:
input_for_model: str | np.ndarray = file_path
try:
audio_array, _sample_rate = librosa.load(
file_path, sr=SAMPLE_RATE, mono=True
)
if len(audio_array) / float(SAMPLE_RATE) < VAD_CONFIG["silence_padding"]:
input_for_model = self.pad_audio(audio_array, SAMPLE_RATE)
except Exception:
pass
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
input_for_model,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(segment.text for segment in segments).strip()
words = [
{
"word": word.word,
"start": round(float(word.start), 2),
"end": round(float(word.end), 2),
}
for segment in segments
for word in segment.words
]
words = self.enforce_word_timing_constraints(words)
return {"text": text, "words": words}
def transcribe_vad_url_segment(
self, file_path: str, timestamp_offset: float = 0.0, language: str = "en"
) -> dict:
def load_audio_via_ffmpeg(input_path: str, sample_rate: int) -> np.ndarray:
ffmpeg_bin = shutil.which("ffmpeg") or "ffmpeg"
cmd = [
ffmpeg_bin,
"-nostdin",
"-threads",
"1",
"-i",
input_path,
"-f",
"f32le",
"-acodec",
"pcm_f32le",
"-ac",
"1",
"-ar",
str(sample_rate),
"pipe:1",
]
try:
proc = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True
)
except Exception as e:
raise HTTPException(status_code=400, detail=f"ffmpeg failed: {e}")
audio = np.frombuffer(proc.stdout, dtype=np.float32)
return audio
def vad_segments(
audio_array,
sample_rate: int = SAMPLE_RATE,
window_size: int = VAD_CONFIG["window_size"],
) -> Generator[tuple[float, float], None, None]:
vad_model = load_silero_vad(onnx=False)
iterator = VADIterator(vad_model, sampling_rate=sample_rate)
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech = iterator(chunk)
if not speech:
continue
if "start" in speech:
start = speech["start"]
continue
if "end" in speech and start is not None:
end = speech["end"]
yield (start / float(SAMPLE_RATE), end / float(SAMPLE_RATE))
start = None
iterator.reset_states()
audio_array = load_audio_via_ffmpeg(file_path, SAMPLE_RATE)
merged_batches: list[tuple[float, float]] = []
batch_start = None
batch_end = None
max_duration = VAD_CONFIG["batch_max_duration"]
for seg_start, seg_end in vad_segments(audio_array):
if batch_start is None:
batch_start, batch_end = seg_start, seg_end
continue
if seg_end - batch_start <= max_duration:
batch_end = seg_end
else:
merged_batches.append((batch_start, batch_end))
batch_start, batch_end = seg_start, seg_end
if batch_start is not None and batch_end is not None:
merged_batches.append((batch_start, batch_end))
all_text = []
all_words = []
for start_time, end_time in merged_batches:
s_idx = int(start_time * SAMPLE_RATE)
e_idx = int(end_time * SAMPLE_RATE)
segment = audio_array[s_idx:e_idx]
segment = self.pad_audio(segment, SAMPLE_RATE)
with self.lock:
segments, _ = self.model.transcribe(
segment,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start) + start_time + timestamp_offset, 2),
"end": round(float(w.end) + start_time + timestamp_offset, 2),
}
for seg in segments
for w in seg.words
]
if text:
all_text.append(text)
all_words.extend(words)
all_words = self.enforce_word_timing_constraints(all_words)
return {"text": " ".join(all_text), "words": all_words}


@@ -0,0 +1,44 @@
import threading
from transformers import MarianMTModel, MarianTokenizer, pipeline
class TextTranslatorService:
"""Simple text-to-text translator using HuggingFace MarianMT models.
This mirrors the modal translator API shape but uses text translation only.
"""
def __init__(self):
self._pipeline = None
self._lock = threading.Lock()
def load(self, source_language: str = "en", target_language: str = "fr"):
# Pick a default MarianMT model pair if available; fall back to Helsinki-NLP en->fr
model_name = self._resolve_model_name(source_language, target_language)
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
self._pipeline = pipeline("translation", model=model, tokenizer=tokenizer)
def _resolve_model_name(self, src: str, tgt: str) -> str:
# Minimal mapping; extend as needed
pair = (src.lower(), tgt.lower())
mapping = {
("en", "fr"): "Helsinki-NLP/opus-mt-en-fr",
("fr", "en"): "Helsinki-NLP/opus-mt-fr-en",
("en", "es"): "Helsinki-NLP/opus-mt-en-es",
("es", "en"): "Helsinki-NLP/opus-mt-es-en",
("en", "de"): "Helsinki-NLP/opus-mt-en-de",
("de", "en"): "Helsinki-NLP/opus-mt-de-en",
}
return mapping.get(pair, "Helsinki-NLP/opus-mt-en-fr")
def translate(self, text: str, source_language: str, target_language: str) -> dict:
if self._pipeline is None:
self.load(source_language, target_language)
with self._lock:
results = self._pipeline(
text, src_lang=source_language, tgt_lang=target_language
)
translated = results[0]["translation_text"] if results else ""
return {"text": {source_language: text, target_language: translated}}


@@ -0,0 +1,107 @@
import logging
import os
import sys
import uuid
from contextlib import contextmanager
from typing import Mapping
from urllib.parse import urlparse
from pathlib import Path
import requests
from fastapi import HTTPException
from .config import SUPPORTED_FILE_EXTENSIONS, UPLOADS_PATH
logger = logging.getLogger(__name__)
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()
def ensure_dirs():
UPLOADS_PATH.mkdir(parents=True, exist_ok=True)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> str:
url_path = urlparse(url).path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return ext
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return "mp3"
if "audio/wav" in content_type:
return "wav"
if "audio/mp4" in content_type:
return "mp4"
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format for URL. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
def download_audio_to_uploads(audio_file_url: str) -> tuple[Path, str]:
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path: Path = UPLOADS_PATH / unique_filename
with open(file_path, "wb") as f:
f.write(response.content)
return file_path, audio_suffix
@contextmanager
def download_audio_file(audio_file_url: str):
"""Download an audio file to UPLOADS_PATH and remove it after use.
Yields (file_path: Path, audio_suffix: str).
"""
file_path, audio_suffix = download_audio_to_uploads(audio_file_url)
try:
yield file_path, audio_suffix
finally:
try:
file_path.unlink(missing_ok=True)
except Exception as e:
logger.error("Error deleting temporary file %s: %s", file_path, e)
@contextmanager
def cleanup_uploaded_files(file_paths: list[Path]):
"""Ensure provided file paths are removed after use.
The provided list can be populated inside the context; all present entries
at exit will be deleted.
"""
try:
yield file_paths
finally:
for path in list(file_paths):
try:
path.unlink(missing_ok=True)
except Exception as e:
logger.error("Error deleting temporary file %s: %s", path, e)


@@ -0,0 +1,10 @@
services:
reflector_gpu:
build:
context: .
ports:
- "8000:8000"
env_file:
- .env
volumes:
- ./cache:/root/.cache

gpu/self_hosted/main.py

@@ -0,0 +1,3 @@
from app.factory import create_app
app = create_app()


@@ -0,0 +1,19 @@
[project]
name = "reflector-gpu"
version = "0.1.0"
description = "Self-hosted GPU service for speech transcription, diarization, and translation via FastAPI."
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"fastapi[standard]>=0.116.1",
"uvicorn[standard]>=0.30.0",
"torch>=2.3.0",
"faster-whisper>=1.1.0",
"librosa==0.10.1",
"numpy<2",
"silero-vad==5.1.0",
"transformers>=4.35.0",
"sentencepiece",
"pyannote.audio==3.1.0",
"torchaudio>=2.3.0",
]


@@ -0,0 +1,17 @@
#!/bin/sh
set -e
export PATH="/root/.local/bin:$PATH"
cd /app
# Install Python dependencies at runtime (first run or when FORCE_SYNC=1)
if [ ! -d "/app/.venv" ] || [ "$FORCE_SYNC" = "1" ]; then
echo "[startup] Installing Python dependencies with uv..."
uv sync --compile-bytecode --locked
else
echo "[startup] Using existing virtual environment at /app/.venv"
fi
exec uv run uvicorn main:app --host 0.0.0.0 --port 8000

gpu/self_hosted/uv.lock (generated; diff suppressed because it is too large)

@@ -1,3 +1,29 @@
## API Key Management
### Finding Your User ID
```bash
# Get your OAuth sub (user ID) - requires authentication
curl -H "Authorization: Bearer <your_jwt>" http://localhost:1250/v1/me
# Returns: {"sub": "your-oauth-sub-here", "email": "...", ...}
```
### Creating API Keys
```bash
curl -X POST http://localhost:1250/v1/user/api-keys \
-H "Authorization: Bearer <your_jwt>" \
-H "Content-Type: application/json" \
-d '{"name": "My API Key"}'
```
### Using API Keys
```bash
# Use X-API-Key header instead of Authorization
curl -H "X-API-Key: <your_api_key>" http://localhost:1250/v1/transcripts
```
## AWS S3/SQS usage clarification
Whereby.com uploads recordings directly to our S3 bucket when meetings end.


@@ -0,0 +1,95 @@
# Data Retention and Cleanup
## Overview
For public instances of Reflector, a data retention policy is automatically enforced to delete anonymous user data after a configurable period (default: 7 days). This ensures compliance with privacy expectations and prevents unbounded storage growth.
## Configuration
### Environment Variables
- `PUBLIC_MODE` (bool): Must be set to `true` to enable automatic cleanup
- `PUBLIC_DATA_RETENTION_DAYS` (int): Number of days to retain anonymous data (default: 7)
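For example, a minimal `.env` sketch enabling cleanup on a public instance:
```bash
# Enable automatic cleanup and keep anonymous data for 7 days
PUBLIC_MODE=true
PUBLIC_DATA_RETENTION_DAYS=7
```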
### What Gets Deleted
When data reaches the retention period, the following items are automatically removed:
1. **Transcripts** from anonymous users (where `user_id` is NULL):
- Database records
- Local files (audio.wav, audio.mp3, audio.json waveform)
- Storage files (cloud storage if configured)
## Automatic Cleanup
### Celery Beat Schedule
When `PUBLIC_MODE=true`, a Celery beat task runs daily at 3 AM to clean up old data:
```python
# Automatically scheduled when PUBLIC_MODE=true
"cleanup_old_public_data": {
"task": "reflector.worker.cleanup.cleanup_old_public_data",
"schedule": crontab(hour=3, minute=0), # Daily at 3 AM
}
```
### Running the Worker
Ensure both Celery worker and beat scheduler are running:
```bash
# Start Celery worker
uv run celery -A reflector.worker.app worker --loglevel=info
# Start Celery beat scheduler (in another terminal)
uv run celery -A reflector.worker.app beat
```
## Manual Cleanup
For testing or manual intervention, use the cleanup tool:
```bash
# Delete data older than 7 days (default)
uv run python -m reflector.tools.cleanup_old_data
# Delete data older than 30 days
uv run python -m reflector.tools.cleanup_old_data --days 30
```
Note: The manual tool uses the same implementation as the Celery worker task to ensure consistency.
## Important Notes
1. **User Data Deletion**: Only anonymous data (where `user_id` is NULL) is deleted. Authenticated user data is preserved.
2. **Storage Cleanup**: The system properly cleans up both local files and cloud storage when configured.
3. **Error Handling**: If individual deletions fail, the cleanup continues and logs errors. Failed deletions are reported in the task output.
4. **Public Instance Only**: The automatic cleanup task only runs when `PUBLIC_MODE=true` to prevent accidental data loss in private deployments.
## Testing
Run the cleanup tests:
```bash
uv run pytest tests/test_cleanup.py -v
```
## Monitoring
Check Celery logs for cleanup task execution:
```bash
# Look for cleanup task logs
grep "cleanup_old_public_data" celery.log
grep "Starting cleanup of old public data" celery.log
```
Task statistics are logged after each run:
- Number of transcripts deleted
- Number of meetings deleted
- Number of orphaned recordings deleted
- Any errors encountered


@@ -0,0 +1,194 @@
## Reflector GPU Transcription API (Specification)
This document defines the Reflector GPU transcription API that all implementations must adhere to. Current implementations include NVIDIA Parakeet (NeMo) and Whisper (faster-whisper), both deployed on Modal.com. The API surface and response shapes are OpenAI/Whisper-compatible, so clients can switch implementations by changing only the base URL.
### Base URL and Authentication
- Example base URLs (Modal web endpoints):
- Parakeet: `https://<account>--reflector-transcriber-parakeet-web.modal.run`
- Whisper: `https://<account>--reflector-transcriber-web.modal.run`
- All endpoints are served under `/v1` and require a Bearer token:
```
Authorization: Bearer <REFLECTOR_GPU_APIKEY>
```
Note: To switch implementations, deploy the desired variant and point `TRANSCRIPT_URL` to its base URL. The API is identical.
### Supported file types
`mp3, mp4, mpeg, mpga, m4a, wav, webm`
### Models and languages
- Parakeet (NVIDIA NeMo): default `nvidia/parakeet-tdt-0.6b-v2`
- Language support: only `en`. Other languages return HTTP 400.
- Whisper (faster-whisper): default `large-v2` (or deployment-specific)
- Language support: multilingual (per Whisper model capabilities).
Note: The `model` parameter is accepted by all implementations for interface parity. Some backends may treat it as informational.
### Endpoints
#### POST /v1/audio/transcriptions
Transcribe one or more uploaded audio files.
Request: multipart/form-data
- `file` (File) — optional. Single file to transcribe.
- `files` (File[]) — optional. One or more files to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`.
- Parakeet: only `en` is accepted; other values return HTTP 400
- Whisper: model-dependent; typically multilingual
- `batch` (boolean) — optional, defaults to `false`.
Notes:
- Provide either `file` or `files`, not both. If neither is provided, HTTP 400.
- `batch` requires `files`; using `batch=true` without `files` returns HTTP 400.
- Response shape for multiple files is the same regardless of `batch`.
- Files sent to this endpoint are processed in a single pass (no VAD/chunking). This is intended for short clips (roughly ≤ 30s; depends on GPU memory/model). For longer audio, prefer `/v1/audio/transcriptions-from-url` which supports VAD-based chunking.
Responses
Single file response:
```json
{
"text": "transcribed text",
"words": [
{ "word": "hello", "start": 0.0, "end": 0.5 },
{ "word": "world", "start": 0.5, "end": 1.0 }
],
"filename": "audio.mp3"
}
```
Multiple files response:
```json
{
  "results": [
    { "filename": "a1.mp3", "text": "...", "words": [...] },
    { "filename": "a2.mp3", "text": "...", "words": [...] }
  ]
}
```
Notes:
- Word objects always include keys: `word`, `start`, `end`.
- Some implementations may include a trailing space in `word` to match Whisper tokenization behavior; clients should trim if needed.
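For example, a client could normalize word entries like this (an illustrative Python helper, not part of the API):
```python
def normalize_words(words: list[dict]) -> list[dict]:
    # Strip the trailing space some implementations append to each word
    return [{**word, "word": word["word"].strip()} for word in words]
```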
Example curl (single file):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "file=@/path/to/audio.mp3" \
-F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
Example curl (multiple files, batch):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "files=@/path/a1.mp3" -F "files=@/path/a2.mp3" \
-F "batch=true" -F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
#### POST /v1/audio/transcriptions-from-url
Transcribe a single remote audio file by URL.
Request: application/json
Body parameters:
- `audio_file_url` (string) — required. URL of the audio file to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`. Parakeet only accepts `en`.
- `timestamp_offset` (number) — optional, defaults to `0.0`. Added to each word's `start`/`end` in the response.
```json
{
"audio_file_url": "https://example.com/audio.mp3",
"model": "nvidia/parakeet-tdt-0.6b-v2",
"language": "en",
"timestamp_offset": 0.0
}
```
Response:
```json
{
"text": "transcribed text",
"words": [
{ "word": "hello", "start": 10.0, "end": 10.5 },
{ "word": "world", "start": 10.5, "end": 11.0 }
]
}
```
Notes:
- `timestamp_offset` is added to each word's `start`/`end` in the response.
- Implementations may perform VAD-based chunking and batching for long-form audio; word timings are adjusted accordingly.
Example curl:
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-H "Content-Type: application/json" \
-d '{
"audio_file_url": "https://example.com/audio.mp3",
"language": "en",
"timestamp_offset": 0
}' \
"$BASE_URL/v1/audio/transcriptions-from-url"
```
### Error handling
- 400 Bad Request
- Parakeet: `language` other than `en`
- Missing required parameters (`file`/`files` for upload; `audio_file_url` for URL endpoint)
- Unsupported file extension
- 401 Unauthorized
- Missing or invalid Bearer token
- 404 Not Found
- `audio_file_url` does not exist
### Implementation details
- GPUs: A10G for small-file/live, L40S for large-file URL transcription (subject to deployment)
- VAD chunking and segment batching; word timings adjusted and overlapping ends constrained
- Pads very short segments (< 0.5s) to avoid model crashes on some backends
### Server configuration (Reflector API)
Set the Reflector server to use the Modal backend and point `TRANSCRIPT_URL` to your chosen deployment:
```
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://<account>--reflector-transcriber-parakeet-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<REFLECTOR_GPU_APIKEY>
```
### Conformance tests
Use the pytest-based conformance tests to validate any new implementation (including self-hosted) against this spec:
```
TRANSCRIPT_URL=https://<your-deployment-base> \
TRANSCRIPT_MODAL_API_KEY=your-api-key \
uv run -m pytest -m model_api --no-cov server/tests/test_model_api_transcript.py
```


@@ -0,0 +1,236 @@
# Reflector Architecture: Whereby + Daily.co Recording Storage
## System Overview
```mermaid
graph TB
subgraph "Actors"
APP[Our App<br/>Reflector]
WHEREBY[Whereby Service<br/>External]
DAILY[Daily.co Service<br/>External]
end
subgraph "AWS S3 Buckets"
TRANSCRIPT_BUCKET[Transcript Bucket<br/>reflector-transcripts<br/>Output: Processed MP3s]
WHEREBY_BUCKET[Whereby Bucket<br/>reflector-whereby-recordings<br/>Input: Raw MP4s]
DAILY_BUCKET[Daily.co Bucket<br/>reflector-dailyco-recordings<br/>Input: Raw WebM tracks]
end
subgraph "AWS Infrastructure"
SQS[SQS Queue<br/>Whereby notifications]
end
subgraph "Database"
DB[(PostgreSQL<br/>Recordings, Transcripts, Meetings)]
end
APP -->|Write processed| TRANSCRIPT_BUCKET
APP -->|Read/Delete| WHEREBY_BUCKET
APP -->|Read/Delete| DAILY_BUCKET
APP -->|Poll| SQS
APP -->|Store metadata| DB
WHEREBY -->|Write recordings| WHEREBY_BUCKET
WHEREBY_BUCKET -->|S3 Event| SQS
WHEREBY -->|Participant webhooks<br/>room.client.joined/left| APP
DAILY -->|Write recordings| DAILY_BUCKET
DAILY -->|Recording webhook<br/>recording.ready-to-download| APP
```
**Note on Webhook vs S3 Event for Recording Processing:**
- **Whereby**: Uses S3 Events → SQS for recording availability (S3 as source of truth, no race conditions)
- **Daily.co**: Uses webhooks for recording availability (more immediate, built-in reliability)
- **Both**: Use webhooks for participant tracking (real-time updates)
## Credentials & Permissions
```mermaid
graph LR
subgraph "Master Credentials"
MASTER[TRANSCRIPT_STORAGE_AWS_*<br/>Access Key ID + Secret]
end
subgraph "Whereby Upload Credentials"
WHEREBY_CREDS[AWS_WHEREBY_ACCESS_KEY_*<br/>Access Key ID + Secret]
end
subgraph "Daily.co Upload Role"
DAILY_ROLE[DAILY_STORAGE_AWS_ROLE_ARN<br/>IAM Role ARN]
end
subgraph "Our App Uses"
MASTER -->|Read/Write/Delete| TRANSCRIPT_BUCKET[Transcript Bucket]
MASTER -->|Read/Delete| WHEREBY_BUCKET[Whereby Bucket]
MASTER -->|Read/Delete| DAILY_BUCKET[Daily.co Bucket]
MASTER -->|Poll/Delete| SQS[SQS Queue]
end
subgraph "We Give To Services"
WHEREBY_CREDS -->|Passed in API call| WHEREBY_SERVICE[Whereby Service]
WHEREBY_SERVICE -->|Write Only| WHEREBY_BUCKET
DAILY_ROLE -->|Passed in API call| DAILY_SERVICE[Daily.co Service]
DAILY_SERVICE -->|Assume Role| DAILY_ROLE
DAILY_SERVICE -->|Write Only| DAILY_BUCKET
end
```
# Video Platform Recording Integration
This document explains how Reflector receives and identifies multitrack audio recordings from different video platforms.
## Platform Comparison
| Platform | Delivery Method | Track Identification |
|----------|----------------|---------------------|
| **Daily.co** | Webhook | Explicit track list in payload |
| **Whereby** | SQS (S3 notifications) | Single file per notification |
---
## Daily.co
**Note:** Primary discovery happens via polling (`poll_daily_recordings`); webhooks serve as a backup.
Daily.co uses **webhooks** to notify Reflector when recordings are ready.
### How It Works
1. **Daily.co sends webhook** when recording is ready
- Event type: `recording.ready-to-download`
- Endpoint: `/v1/daily/webhook` (`reflector/views/daily.py:46-102`)
2. **Webhook payload explicitly includes track list**:
```json
{
"recording_id": "7443ee0a-dab1-40eb-b316-33d6c0d5ff88",
"room_name": "daily-20251020193458",
"tracks": [
{
"type": "audio",
"s3Key": "monadical/daily-20251020193458/1760988935484-52f7f48b-fbab-431f-9a50-87b9abfc8255-cam-audio-1760988935922",
"size": 831843
},
{
"type": "audio",
"s3Key": "monadical/daily-20251020193458/1760988935484-a37c35e3-6f8e-4274-a482-e9d0f102a732-cam-audio-1760988943823",
"size": 408438
},
{
"type": "video",
"s3Key": "monadical/daily-20251020193458/...-video.webm",
"size": 30000000
}
]
}
```
3. **System extracts audio tracks** (`daily.py:211`):
```python
track_keys = [t.s3Key for t in tracks if t.type == "audio"]
```
4. **Triggers multitrack processing** (`daily.py:213-218`):
```python
process_multitrack_recording.delay(
bucket_name=bucket_name, # reflector-dailyco-local
room_name=room_name, # daily-20251020193458
recording_id=recording_id, # 7443ee0a-dab1-40eb-b316-33d6c0d5ff88
track_keys=track_keys # Only audio s3Keys
)
```
### Key Advantage: No Ambiguity
Even though multiple meetings may share the same S3 bucket/folder (`monadical/`), **there's no ambiguity** because:
- Each webhook payload contains the exact `s3Key` list for that specific `recording_id`
- No need to scan folders or guess which files belong together
- Each track's s3Key includes the room timestamp subfolder (e.g., `daily-20251020193458/`)
The room name includes a timestamp (`daily-20251020193458`) to keep recordings organized, but **the webhook's explicit track list is what prevents mixing files from different meetings**.
### Track Timeline Extraction
Daily.co provides timing information in two places:
**1. PyAV WebM Metadata (current approach)**:
```python
# Read from WebM container stream metadata
offset = stream.start_time  # e.g. 8.130 s, meeting-relative timing
```
**2. Filename Timestamps (alternative approach, commit 3bae9076)**:
```
Filename format: {recording_start_ts}-{uuid}-cam-audio-{track_start_ts}.webm
Example: 1760988935484-52f7f48b-fbab-431f-9a50-87b9abfc8255-cam-audio-1760988935922.webm
Parse timestamps:
- recording_start_ts: 1760988935484 (Unix ms)
- track_start_ts: 1760988935922 (Unix ms)
- offset: (1760988935922 - 1760988935484) / 1000 = 0.438s
```
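A minimal sketch of that parsing (illustrative helper, assuming the filename layout above):
```python
def track_offset_seconds(filename: str) -> float:
    """Offset of a track relative to the recording start, in seconds."""
    stem = filename.rsplit(".", 1)[0]
    recording_start_ms = int(stem.split("-", 1)[0])  # prefix before first dash
    track_start_ms = int(stem.rsplit("-", 1)[-1])    # suffix after last dash
    return (track_start_ms - recording_start_ms) / 1000.0

# track_offset_seconds("1760988935484-...-cam-audio-1760988935922.webm") -> 0.438
```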
**Time Difference (PyAV vs Filename)**:
```
Track 0:
Filename offset: 438ms
PyAV metadata: 229ms
Difference: 209ms
Track 1:
Filename offset: 8339ms
PyAV metadata: 8130ms
Difference: 209ms
```
**Consistent 209ms delta** suggests network/encoding delay between file upload initiation (filename) and actual audio stream start (metadata).
**Current implementation uses PyAV metadata** because:
- More accurate (represents when audio actually started)
- Padding BEFORE transcription produces correct Whisper timestamps automatically
- No manual offset adjustment needed during transcript merge
### Why Re-encoding During Padding
Padding coincidentally involves re-encoding, which is important for Daily.co + Whisper:
**Problem:** Daily.co skips frames in recordings when microphone is muted or paused
- WebM containers have gaps where audio frames should be
- Whisper doesn't understand these gaps and produces incorrect timestamps
- Example: 5s of audio with 2s muted → file has frames only for 3s, Whisper thinks duration is 3s
**Solution:** Re-encoding via PyAV filter graph (`adelay` + `aresample`)
- Restores missing frames as silence
- Produces continuous audio stream without gaps
- Whisper now sees correct duration and produces accurate timestamps
**Why combined with padding:**
- Already re-encoding for padding (adding initial silence)
- More performant to do both operations in single PyAV pipeline
- Padded values needed for mixdown anyway (creating final MP3)
Implementation: `main_multitrack_pipeline.py:_apply_audio_padding_streaming()`
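A roughly equivalent ffmpeg invocation, for illustration only (the pipeline drives PyAV's filter graph directly; the 229 ms delay is the example track's PyAV offset):
```bash
# Prepend silence for the track offset and let aresample fill timestamp gaps
ffmpeg -i track.webm -af "adelay=229:all=1,aresample=async=1" -ar 16000 padded.wav
```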
---
## Whereby (SQS-based)
Whereby uses **AWS SQS** (via S3 notifications) to notify Reflector when files are uploaded.
### How It Works
1. **Whereby uploads recording** to S3
2. **S3 sends notification** to SQS queue (one notification per file)
3. **Reflector polls SQS queue** (`worker/process.py:process_messages()`)
4. **System processes single file** (`worker/process.py:process_recording()`)
### Key Difference from Daily.co
**Whereby (SQS):** System receives S3 notification "file X was created" - only knows about one file at a time, would need to scan folder to find related files
**Daily.co (Webhook):** Daily explicitly tells system which files belong together in the webhook payload
---

server/docs/webhook.md

@@ -0,0 +1,233 @@
# Reflector Webhook Documentation
## Overview
Reflector supports webhook notifications to notify external systems when transcript processing is completed. Webhooks can be configured per room and are triggered automatically after a transcript is successfully processed.
## Configuration
Webhooks are configured at the room level with two fields:
- `webhook_url`: The HTTPS endpoint to receive webhook notifications
- `webhook_secret`: Optional secret key for HMAC signature verification (auto-generated if not provided)
## Events
### `transcript.completed`
Triggered when a transcript has been fully processed, including transcription, diarization, summarization, topic detection and calendar event integration.
### `test`
A test event that can be triggered manually to verify webhook configuration.
## Webhook Request Format
### Headers
All webhook requests include the following headers:
| Header | Description | Example |
|--------|-------------|---------|
| `Content-Type` | Always `application/json` | `application/json` |
| `User-Agent` | Identifies Reflector as the source | `Reflector-Webhook/1.0` |
| `X-Webhook-Event` | The event type | `transcript.completed` or `test` |
| `X-Webhook-Retry` | Current retry attempt number | `0`, `1`, `2`... |
| `X-Webhook-Signature` | HMAC signature (if secret configured) | `t=1735306800,v1=abc123...` |
### Signature Verification
If a webhook secret is configured, Reflector includes an HMAC-SHA256 signature in the `X-Webhook-Signature` header to verify the webhook authenticity.
The signature format is: `t={timestamp},v1={signature}`
To verify the signature:
1. Extract the timestamp and signature from the header
2. Create the signed payload: `{timestamp}.{request_body}`
3. Compute HMAC-SHA256 of the signed payload using your webhook secret
4. Compare the computed signature with the received signature
Example verification (Python):
```python
import hmac
import hashlib
def verify_webhook_signature(payload: bytes, signature_header: str, secret: str) -> bool:
# Parse header: "t=1735306800,v1=abc123..."
parts = dict(part.split("=") for part in signature_header.split(","))
timestamp = parts["t"]
received_signature = parts["v1"]
# Create signed payload
signed_payload = f"{timestamp}.{payload.decode('utf-8')}"
# Compute expected signature
expected_signature = hmac.new(
secret.encode("utf-8"),
signed_payload.encode("utf-8"),
hashlib.sha256
).hexdigest()
# Compare signatures
return hmac.compare_digest(expected_signature, received_signature)
```
## Event Payloads
### `transcript.completed` Event
This event includes a convenient URL for accessing the transcript:
- `frontend_url`: Direct link to view the transcript in the web interface
```json
{
"event": "transcript.completed",
"event_id": "transcript.completed-abc-123-def-456",
"timestamp": "2025-08-27T12:34:56.789012Z",
"transcript": {
"id": "abc-123-def-456",
"room_id": "room-789",
"created_at": "2025-08-27T12:00:00Z",
"duration": 1800.5,
"title": "Q3 Product Planning Meeting",
"short_summary": "Team discussed Q3 product roadmap, prioritizing mobile app features and API improvements.",
"long_summary": "The product team met to finalize the Q3 roadmap. Key decisions included...",
"webvtt": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v Speaker 1>Welcome everyone to today's meeting...",
"topics": [
{
"title": "Introduction and Agenda",
"summary": "Meeting kickoff with agenda review",
"timestamp": 0.0,
"duration": 120.0,
"webvtt": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v Speaker 1>Welcome everyone..."
},
{
"title": "Mobile App Features Discussion",
"summary": "Team reviewed proposed mobile app features for Q3",
"timestamp": 120.0,
"duration": 600.0,
"webvtt": "WEBVTT\n\n00:02:00.000 --> 00:02:10.000\n<v Speaker 2>Let's talk about the mobile app..."
}
],
"participants": [
{
"id": "participant-1",
"name": "John Doe",
"speaker": "Speaker 1"
},
{
"id": "participant-2",
"name": "Jane Smith",
"speaker": "Speaker 2"
}
],
"source_language": "en",
"target_language": "en",
"status": "completed",
"frontend_url": "https://app.reflector.com/transcripts/abc-123-def-456"
},
"room": {
"id": "room-789",
"name": "Product Team Room"
},
"calendar_event": {
"id": "calendar-event-123",
"ics_uid": "event-123",
"title": "Q3 Product Planning Meeting",
"start_time": "2025-08-27T12:00:00Z",
"end_time": "2025-08-27T12:30:00Z",
"description": "Team discussed Q3 product roadmap, prioritizing mobile app features and API improvements.",
"location": "Conference Room 1",
"attendees": [
{
"id": "participant-1",
"name": "John Doe",
"speaker": "Speaker 1"
},
{
"id": "participant-2",
"name": "Jane Smith",
"speaker": "Speaker 2"
}
]
}
}
```
### `test` Event
```json
{
"event": "test",
"event_id": "test.2025-08-27T12:34:56.789012Z",
"timestamp": "2025-08-27T12:34:56.789012Z",
"message": "This is a test webhook from Reflector",
"room": {
"id": "room-789",
"name": "Product Team Room"
}
}
```
## Retry Policy
Webhooks are delivered with automatic retry logic to handle transient failures. When a webhook delivery fails due to server errors or network issues, Reflector will automatically retry the delivery multiple times over an extended period.
### Retry Mechanism
Reflector implements an exponential backoff strategy for webhook retries:
- **Initial retry delay**: 60 seconds after the first failure
- **Exponential backoff**: Each subsequent retry waits approximately twice as long as the previous one
- **Maximum retry interval**: 1 hour (backoff is capped at this duration)
- **Maximum retry attempts**: 30 attempts total
- **Total retry duration**: Retries continue for approximately 24 hours
### How Retries Work
When a webhook fails, Reflector will:
1. Wait 60 seconds, then retry (attempt #1)
2. If it fails again, wait ~2 minutes, then retry (attempt #2)
3. Continue doubling the wait time up to a maximum of 1 hour between attempts
4. Keep retrying at 1-hour intervals until successful or 30 attempts are exhausted
The `X-Webhook-Retry` header indicates the current retry attempt number (0 for the initial attempt, 1 for first retry, etc.), allowing your endpoint to track retry attempts.
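The schedule itself is simple to reproduce (an illustrative sketch, not the actual worker code):
```python
def retry_delay_seconds(attempt: int) -> int:
    """Delay before retry N: 60s doubled each attempt, capped at 1 hour."""
    return min(60 * 2**attempt, 3600)

# attempt 0 -> 60s, 1 -> 120s, ..., 5 -> 1920s, 6 and later -> 3600s (cap)
```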
### Retry Behavior by HTTP Status Code
| Status Code | Behavior |
|-------------|----------|
| 2xx (Success) | No retry, webhook marked as delivered |
| 4xx (Client Error) | No retry, request is considered permanently failed |
| 5xx (Server Error) | Automatic retry with exponential backoff |
| Network/Timeout Error | Automatic retry with exponential backoff |
**Important Notes:**
- Webhooks timeout after 30 seconds. If your endpoint takes longer to respond, it will be considered a timeout error and retried.
- During the retry period (~24 hours), you may receive the same webhook multiple times if your endpoint experiences intermittent failures.
- There is no mechanism to manually retry failed webhooks after the retry period expires.
## Testing Webhooks
You can test your webhook configuration before processing transcripts:
```http
POST /v1/rooms/{room_id}/webhook/test
```
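For example, assuming the same `Authorization: Bearer` header used elsewhere in the API:
```bash
curl -X POST \
  -H "Authorization: Bearer <your_jwt>" \
  http://localhost:1250/v1/rooms/room-789/webhook/test
```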
Response:
```json
{
"success": true,
"status_code": 200,
"message": "Webhook test successful",
"response_preview": "OK"
}
```
Or in case of failure:
```json
{
"success": false,
"error": "Webhook request timed out (10 seconds)"
}
```


@@ -27,7 +27,7 @@ AUTH_JWT_AUDIENCE=
#TRANSCRIPT_MODAL_API_KEY=xxxxx
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://monadical-sas--reflector-transcriber-web.modal.run
TRANSCRIPT_URL=https://monadical-sas--reflector-transcriber-parakeet-web.modal.run
TRANSCRIPT_MODAL_API_KEY=
## =======================================================
@@ -71,3 +71,30 @@ DIARIZATION_URL=https://monadical-sas--reflector-diarizer-web.modal.run
## Sentry DSN configuration
#SENTRY_DSN=
## =======================================================
## Video Platform Configuration
## =======================================================
## Whereby
#WHEREBY_API_KEY=your-whereby-api-key
#WHEREBY_WEBHOOK_SECRET=your-whereby-webhook-secret
#WHEREBY_STORAGE_AWS_ACCESS_KEY_ID=your-aws-key
#WHEREBY_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret
#AWS_PROCESS_RECORDING_QUEUE_URL=https://sqs.us-west-2.amazonaws.com/...
## Daily.co
#DAILY_API_KEY=your-daily-api-key
#DAILY_WEBHOOK_SECRET=your-daily-webhook-secret
#DAILY_SUBDOMAIN=your-subdomain
#DAILY_WEBHOOK_UUID= # Auto-populated by recreate_daily_webhook.py script
#DAILYCO_STORAGE_AWS_ROLE_ARN=... # IAM role ARN for Daily.co S3 access
#DAILYCO_STORAGE_AWS_BUCKET_NAME=reflector-dailyco
#DAILYCO_STORAGE_AWS_REGION=us-west-2
## Whereby (optional separate bucket)
#WHEREBY_STORAGE_AWS_BUCKET_NAME=reflector-whereby
#WHEREBY_STORAGE_AWS_REGION=us-east-1
## Platform Configuration
#DEFAULT_VIDEO_PLATFORM=whereby # Default platform for new rooms


@@ -1,161 +0,0 @@
import os
import tempfile
import threading
import modal
from pydantic import BaseModel
MODELS_DIR = "/models"
MODEL_NAME = "large-v2"
MODEL_COMPUTE_TYPE: str = "float16"
MODEL_NUM_WORKERS: int = 1
MINUTES = 60 # seconds
volume = modal.Volume.from_name("models", create_if_missing=True)
app = modal.App("reflector-transcriber")
def download_model():
from faster_whisper import download_model
volume.reload()
download_model(MODEL_NAME, cache_dir=MODELS_DIR)
volume.commit()
image = (
modal.Image.debian_slim(python_version="3.12")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
)
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
"LD_LIBRARY_PATH": (
"/usr/local/lib/python3.12/site-packages/nvidia/cudnn/lib/:"
"/opt/conda/lib/python3.12/site-packages/nvidia/cublas/lib/"
),
}
)
.run_function(download_model, volumes={MODELS_DIR: volume})
)
@app.cls(
gpu="A10G",
timeout=5 * MINUTES,
scaledown_window=5 * MINUTES,
allow_concurrent_inputs=6,
image=image,
volumes={MODELS_DIR: volume},
)
class Transcriber:
@modal.enter()
def enter(self):
import faster_whisper
import torch
self.lock = threading.Lock()
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=MODELS_DIR,
local_files_only=True,
)
@modal.method()
def transcribe_segment(
self,
audio_data: str,
audio_suffix: str,
language: str,
):
with tempfile.NamedTemporaryFile("wb+", suffix=f".{audio_suffix}") as fp:
fp.write(audio_data)
with self.lock:
segments, _ = self.model.transcribe(
fp.name,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(segment.text for segment in segments)
words = [
{"word": word.word, "start": word.start, "end": word.end}
for segment in segments
for word in segment.words
]
return {"text": text, "words": words}
@app.function(
scaledown_window=60,
timeout=60,
allow_concurrent_inputs=40,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={MODELS_DIR: volume},
)
@modal.asgi_app()
def web():
from fastapi import Body, Depends, FastAPI, HTTPException, UploadFile, status
from fastapi.security import OAuth2PasswordBearer
from typing_extensions import Annotated
transcriber = Transcriber()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
supported_file_types = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(BaseModel):
result: dict
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile,
model: str = "whisper-1",
language: Annotated[str, Body(...)] = "en",
) -> TranscriptResponse:
audio_data = file.file.read()
audio_suffix = file.filename.split(".")[-1]
assert audio_suffix in supported_file_types
func = transcriber.transcribe_segment.spawn(
audio_data=audio_data,
audio_suffix=audio_suffix,
language=language,
)
result = func.get()
return result
return app


@@ -1,622 +0,0 @@
import logging
import os
import sys
import threading
import uuid
from typing import Mapping, NewType
from urllib.parse import urlparse
import modal
MODEL_NAME = "nvidia/parakeet-tdt-0.6b-v3"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/cache"
VAD_CONFIG = {
"max_segment_duration": 30.0,
"batch_max_files": 10,
"batch_max_duration": 5.0,
"min_segment_duration": 0.02,
"silence_padding": 0.5,
"window_size": 512,
}
ParakeetUniqFilename = NewType("ParakeetUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
app = modal.App("reflector-transcriber-parakeet-v3")
# Volume for caching model weights
model_cache = modal.Volume.from_name("parakeet-model-cache", create_if_missing=True)
# Volume for temporary file uploads
upload_volume = modal.Volume.from_name("parakeet-uploads", create_if_missing=True)
image = (
modal.Image.from_registry(
"nvidia/cuda:12.8.0-cudnn-devel-ubuntu22.04", add_python="3.12"
)
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
"HF_HOME": "/cache",
"DEBIAN_FRONTEND": "noninteractive",
"CXX": "g++",
"CC": "g++",
}
)
.apt_install("ffmpeg")
.pip_install(
"hf_transfer==0.1.9",
"huggingface_hub[hf-xet]==0.31.2",
"nemo_toolkit[asr]==2.3.0",
"cuda-python==12.8.0",
"fastapi==0.115.12",
"numpy<2",
"librosa==0.10.1",
"requests",
"silero-vad==5.1.0",
"torch",
)
.entrypoint([]) # silence chatty logs by container on start
)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[ParakeetUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = ParakeetUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
"""Add 0.5 seconds of silence if audio is less than 500ms.
This is a workaround for a Parakeet bug where very short audio (<500ms) causes:
ValueError: `char_offsets`: [] and `processed_tokens`: [157, 834, 834, 841]
have to be of the same length
See: https://github.com/NVIDIA/NeMo/issues/8451
"""
import numpy as np
audio_duration = len(audio_array) / sample_rate
if audio_duration < 0.5:
silence_samples = int(sample_rate * 0.5)
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
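# Illustrative sketch (not part of the deployment): a 200 ms clip is below the
# 500 ms threshold, so pad_audio appends half a second of silence.
def _pad_audio_demo():
    import numpy as np

    clip = np.zeros(int(0.2 * SAMPLERATE), dtype=np.float32)  # 200 ms at 16 kHz
    padded = pad_audio(clip)
    assert len(padded) == len(clip) + SAMPLERATE // 2  # 0.5 s of silence appended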
@app.cls(
gpu="A10G",
timeout=600,
scaledown_window=300,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
enable_memory_snapshot=True,
experimental_options={"enable_gpu_snapshot": True},
)
@modal.concurrent(max_inputs=10)
class TranscriberParakeetLive:
@modal.enter(snap=True)
def enter(self):
import nemo.collections.asr as nemo_asr
logging.getLogger("nemo_logger").setLevel(logging.CRITICAL)
self.lock = threading.Lock()
self.model = nemo_asr.models.ASRModel.from_pretrained(model_name=MODEL_NAME)
device = next(self.model.parameters()).device
print(f"Model is on device: {device}")
@modal.method()
def transcribe_segment(
self,
filename: str,
):
import librosa
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
padded_audio = pad_audio(audio_array, sample_rate)
with self.lock:
with NoStdStreams():
(output,) = self.model.transcribe([padded_audio], timestamps=True)
text = output.text.strip()
words = [
{
"word": word_info["word"] + " ",
"start": round(word_info["start"], 2),
"end": round(word_info["end"], 2),
}
for word_info in output.timestamp["word"]
]
return {"text": text, "words": words}
@modal.method()
def transcribe_batch(
self,
filenames: list[str],
):
import librosa
upload_volume.reload()
results = []
audio_arrays = []
# Load all audio files with padding
for filename in filenames:
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"Batch file not found: {file_path}")
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
padded_audio = pad_audio(audio_array, sample_rate)
audio_arrays.append(padded_audio)
with self.lock:
with NoStdStreams():
outputs = self.model.transcribe(audio_arrays, timestamps=True)
# Process results for each file
for filename, output in zip(filenames, outputs):
text = output.text.strip()
words = [
{
"word": word_info["word"] + " ",
"start": round(word_info["start"], 2),
"end": round(word_info["end"], 2),
}
for word_info in output.timestamp["word"]
]
results.append(
{
"filename": filename,
"text": text,
"words": words,
}
)
return results
# L40S class for file transcription (bigger files)
@app.cls(
gpu="L40S",
timeout=900,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
enable_memory_snapshot=True,
experimental_options={"enable_gpu_snapshot": True},
)
class TranscriberParakeetFile:
@modal.enter(snap=True)
def enter(self):
import nemo.collections.asr as nemo_asr
import torch
from silero_vad import load_silero_vad
logging.getLogger("nemo_logger").setLevel(logging.CRITICAL)
self.model = nemo_asr.models.ASRModel.from_pretrained(model_name=MODEL_NAME)
device = next(self.model.parameters()).device
print(f"Model is on device: {device}")
torch.set_num_threads(1)
self.vad_model = load_silero_vad(onnx=False)
print("Silero VAD initialized")
@modal.method()
def transcribe_segment(
self,
filename: str,
timestamp_offset: float = 0.0,
):
import librosa
import numpy as np
from silero_vad import VADIterator
def load_and_convert_audio(file_path):
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
return audio_array
def vad_segment_generator(audio_array):
"""Generate speech segments using VAD with start/end sample indices"""
vad_iterator = VADIterator(self.vad_model, sampling_rate=SAMPLERATE)
window_size = VAD_CONFIG["window_size"]
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech_dict = vad_iterator(chunk)
if not speech_dict:
continue
if "start" in speech_dict:
start = speech_dict["start"]
continue
if "end" in speech_dict and start is not None:
end = speech_dict["end"]
start_time = start / float(SAMPLERATE)
end_time = end / float(SAMPLERATE)
# Extract the actual audio segment
audio_segment = audio_array[start:end]
yield (start_time, end_time, audio_segment)
start = None
vad_iterator.reset_states()
def vad_segment_filter(segments):
"""Filter VAD segments by duration and chunk large segments"""
min_dur = VAD_CONFIG["min_segment_duration"]
max_dur = VAD_CONFIG["max_segment_duration"]
for start_time, end_time, audio_segment in segments:
segment_duration = end_time - start_time
# Skip very small segments
if segment_duration < min_dur:
continue
# If segment is within max duration, yield as-is
if segment_duration <= max_dur:
yield (start_time, end_time, audio_segment)
continue
# Chunk large segments into smaller pieces
chunk_samples = int(max_dur * SAMPLERATE)
current_start = start_time
for chunk_offset in range(0, len(audio_segment), chunk_samples):
chunk_audio = audio_segment[
chunk_offset : chunk_offset + chunk_samples
]
if len(chunk_audio) == 0:
break
chunk_duration = len(chunk_audio) / float(SAMPLERATE)
chunk_end = current_start + chunk_duration
# Only yield chunks that meet minimum duration
if chunk_duration >= min_dur:
yield (current_start, chunk_end, chunk_audio)
current_start = chunk_end
def batch_segments(segments, max_files=10, max_duration=5.0):
batch = []
batch_duration = 0.0
for start_time, end_time, audio_segment in segments:
segment_duration = end_time - start_time
if segment_duration < VAD_CONFIG["silence_padding"]:
silence_samples = int(
(VAD_CONFIG["silence_padding"] - segment_duration) * SAMPLERATE
)
padding = np.zeros(silence_samples, dtype=np.float32)
audio_segment = np.concatenate([audio_segment, padding])
segment_duration = VAD_CONFIG["silence_padding"]
batch.append((start_time, end_time, audio_segment))
batch_duration += segment_duration
if len(batch) >= max_files or batch_duration >= max_duration:
yield batch
batch = []
batch_duration = 0.0
if batch:
yield batch
def transcribe_batch(model, audio_segments):
with NoStdStreams():
outputs = model.transcribe(audio_segments, timestamps=True)
return outputs
def emit_results(
results,
segments_info,
batch_index,
total_batches,
):
"""Yield transcribed text and word timings from model output, adjusting timestamps to absolute positions."""
for output, (start_time, end_time, _) in zip(results, segments_info):
text = output.text.strip()
words = [
{
"word": word_info["word"],
"start": round(
word_info["start"] + start_time + timestamp_offset, 2
),
"end": round(
word_info["end"] + start_time + timestamp_offset, 2
),
}
for word_info in output.timestamp["word"]
]
yield text, words
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array = load_and_convert_audio(file_path)
total_duration = len(audio_array) / float(SAMPLERATE)
processed_duration = 0.0
all_text_parts = []
all_words = []
raw_segments = vad_segment_generator(audio_array)
filtered_segments = vad_segment_filter(raw_segments)
batches = batch_segments(
filtered_segments,
VAD_CONFIG["batch_max_files"],
VAD_CONFIG["batch_max_duration"],
)
batch_index = 0
total_batches = max(
1, int(total_duration / VAD_CONFIG["batch_max_duration"]) + 1
)
for batch in batches:
batch_index += 1
audio_segments = [seg[2] for seg in batch]
results = transcribe_batch(self.model, audio_segments)
for text, words in emit_results(
results,
batch,
batch_index,
total_batches,
):
if not text:
continue
all_text_parts.append(text)
all_words.extend(words)
processed_duration += sum(len(seg[2]) / float(SAMPLERATE) for seg in batch)
combined_text = " ".join(all_text_parts)
return {"text": combined_text, "words": all_words}
@app.function(
scaledown_window=60,
timeout=600,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
image=image,
)
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
import os
import uuid
from fastapi import (
Body,
Depends,
FastAPI,
Form,
HTTPException,
UploadFile,
status,
)
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
transcriber_live = TranscriberParakeetLive()
transcriber_file = TranscriberParakeetFile()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(BaseModel):
result: dict
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile | None = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
# Parakeet only supports English
if language != "en":
raise HTTPException(
status_code=400,
detail=f"Parakeet model only supports English. Got language='{language}'",
)
# Handle both single file and multiple files
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
upload_files = [file] if file else files
# Upload files to volume
uploaded_filenames = []
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1]
if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
raise HTTPException(
status_code=400, detail=f"Unsupported file type: {audio_suffix}"
)
# Generate unique filename
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Writing file to: {file_path}")
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_filenames.append(unique_filename)
upload_volume.commit()
try:
# Use A10G live transcriber for per-file transcription
if batch and len(upload_files) > 1:
# Use batch transcription
func = transcriber_live.transcribe_batch.spawn(
filenames=uploaded_filenames,
)
results = func.get()
return {"results": results}
# Per-file transcription
results = []
for filename in uploaded_filenames:
func = transcriber_live.transcribe_segment.spawn(
filename=filename,
)
result = func.get()
result["filename"] = filename
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
finally:
for filename in uploaded_filenames:
try:
file_path = f"{UPLOADS_PATH}/{filename}"
print(f"Deleting file: {file_path}")
os.remove(file_path)
except Exception as e:
print(f"Error deleting {filename}: {e}")
upload_volume.commit()
@app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
def transcribe_from_url(
audio_file_url: str = Body(
..., description="URL of the audio file to transcribe"
),
model: str = Body(MODEL_NAME),
language: str = Body("en", description="Language code (only 'en' supported)"),
timestamp_offset: float = Body(0.0),
):
# Parakeet only supports English
if language != "en":
raise HTTPException(
status_code=400,
detail=f"Parakeet model only supports English. Got language='{language}'",
)
unique_filename, audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = transcriber_file.transcribe_segment.spawn(
filename=unique_filename,
timestamp_offset=timestamp_offset,
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Deleting file: {file_path}")
os.remove(file_path)
upload_volume.commit()
except Exception as e:
print(f"Error cleaning up {unique_filename}: {e}")
return app
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()

View File

@@ -0,0 +1,36 @@
"""Add webhook fields to rooms
Revision ID: 0194f65cd6d3
Revises: 5a8907fd1d78
Create Date: 2025-08-27 09:03:19.610995
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0194f65cd6d3"
down_revision: Union[str, None] = "5a8907fd1d78"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(sa.Column("webhook_url", sa.String(), nullable=True))
batch_op.add_column(sa.Column("webhook_secret", sa.String(), nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_column("webhook_secret")
batch_op.drop_column("webhook_url")
# ### end Alembic commands ###

View File

@@ -0,0 +1,36 @@
"""remove user_id from meeting table
Revision ID: 0ce521cda2ee
Revises: 6dec9fb5b46c
Create Date: 2025-09-10 12:40:55.688899
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0ce521cda2ee"
down_revision: Union[str, None] = "6dec9fb5b46c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True)
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,50 @@
"""add_platform_support
Revision ID: 1e49625677e4
Revises: 9e3f7b2a4c8e
Create Date: 2025-10-08 13:17:29.943612
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "1e49625677e4"
down_revision: Union[str, None] = "9e3f7b2a4c8e"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
"""Add platform field with default 'whereby' for backward compatibility."""
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"platform",
sa.String(),
nullable=True,
server_default=None,
)
)
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"platform",
sa.String(),
nullable=False,
server_default="whereby",
)
)
def downgrade() -> None:
"""Remove platform field."""
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("platform")
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_column("platform")

View File

@@ -0,0 +1,32 @@
"""clean up orphaned room_id references in meeting table
Revision ID: 2ae3db106d4e
Revises: def1b5867d4c
Create Date: 2025-09-11 10:35:15.759967
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "2ae3db106d4e"
down_revision: Union[str, None] = "def1b5867d4c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Set room_id to NULL for meetings that reference non-existent rooms
op.execute("""
UPDATE meeting
SET room_id = NULL
WHERE room_id IS NOT NULL
AND room_id NOT IN (SELECT id FROM room WHERE id IS NOT NULL)
""")
def downgrade() -> None:
# Cannot restore orphaned references - no operation needed
pass

View File

@@ -0,0 +1,79 @@
"""add daily participant session table with immutable left_at
Revision ID: 2b92a1b03caa
Revises: f8294b31f022
Create Date: 2025-11-13 20:29:30.486577
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "2b92a1b03caa"
down_revision: Union[str, None] = "f8294b31f022"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Create table
op.create_table(
"daily_participant_session",
sa.Column("id", sa.String(), nullable=False),
sa.Column("meeting_id", sa.String(), nullable=False),
sa.Column("room_id", sa.String(), nullable=False),
sa.Column("session_id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=True),
sa.Column("user_name", sa.String(), nullable=False),
sa.Column("joined_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("left_at", sa.DateTime(timezone=True), nullable=True),
sa.ForeignKeyConstraint(["meeting_id"], ["meeting.id"], ondelete="CASCADE"),
sa.ForeignKeyConstraint(["room_id"], ["room.id"], ondelete="CASCADE"),
sa.PrimaryKeyConstraint("id"),
)
with op.batch_alter_table("daily_participant_session", schema=None) as batch_op:
batch_op.create_index(
"idx_daily_session_meeting_left", ["meeting_id", "left_at"], unique=False
)
batch_op.create_index("idx_daily_session_room", ["room_id"], unique=False)
# Create trigger function to prevent left_at from being updated once set
op.execute("""
CREATE OR REPLACE FUNCTION prevent_left_at_update()
RETURNS TRIGGER AS $$
BEGIN
IF OLD.left_at IS NOT NULL THEN
RAISE EXCEPTION 'left_at is immutable once set';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
""")
# Create trigger
op.execute("""
CREATE TRIGGER prevent_left_at_update_trigger
BEFORE UPDATE ON daily_participant_session
FOR EACH ROW
EXECUTE FUNCTION prevent_left_at_update();
""")
def downgrade() -> None:
# Drop trigger
op.execute(
"DROP TRIGGER IF EXISTS prevent_left_at_update_trigger ON daily_participant_session;"
)
# Drop trigger function
op.execute("DROP FUNCTION IF EXISTS prevent_left_at_update();")
# Drop indexes and table
with op.batch_alter_table("daily_participant_session", schema=None) as batch_op:
batch_op.drop_index("idx_daily_session_room")
batch_op.drop_index("idx_daily_session_meeting_left")
op.drop_table("daily_participant_session")
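# Sketch (not part of the migration): once left_at is set, the trigger above
# rejects further writes. With any SQLAlchemy connection to this database:
def _immutability_demo(conn):
    from sqlalchemy import text

    conn.execute(
        text("UPDATE daily_participant_session SET left_at = now() WHERE id = :id"),
        {"id": "session-1"},  # placeholder id; first write of left_at succeeds
    )
    # repeating the UPDATE on the same row now raises
    # 'left_at is immutable once set' from prevent_left_at_update()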

View File

@@ -0,0 +1,50 @@
"""add cascade delete to meeting consent foreign key
Revision ID: 5a8907fd1d78
Revises: 0ab2d7ffaa16
Create Date: 2025-08-26 17:26:50.945491
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "5a8907fd1d78"
down_revision: Union[str, None] = "0ab2d7ffaa16"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.drop_constraint(
batch_op.f("meeting_consent_meeting_id_fkey"), type_="foreignkey"
)
batch_op.create_foreign_key(
batch_op.f("meeting_consent_meeting_id_fkey"),
"meeting",
["meeting_id"],
["id"],
ondelete="CASCADE",
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.drop_constraint(
batch_op.f("meeting_consent_meeting_id_fkey"), type_="foreignkey"
)
batch_op.create_foreign_key(
batch_op.f("meeting_consent_meeting_id_fkey"),
"meeting",
["meeting_id"],
["id"],
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,30 @@
"""Make room platform non-nullable with dynamic default
Revision ID: 5d6b9df9b045
Revises: 2b92a1b03caa
Create Date: 2025-11-21 13:22:25.756584
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "5d6b9df9b045"
down_revision: Union[str, None] = "2b92a1b03caa"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.execute("UPDATE room SET platform = 'whereby' WHERE platform IS NULL")
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.alter_column("platform", existing_type=sa.String(), nullable=False)
def downgrade() -> None:
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.alter_column("platform", existing_type=sa.String(), nullable=True)

View File

@@ -0,0 +1,53 @@
"""remove_one_active_meeting_per_room_constraint
Revision ID: 6025e9b2bef2
Revises: 2ae3db106d4e
Create Date: 2025-08-18 18:45:44.418392
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "6025e9b2bef2"
down_revision: Union[str, None] = "2ae3db106d4e"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Remove the unique constraint that prevents multiple active meetings per room
# This is needed to support calendar integration with overlapping meetings
# Check if index exists before trying to drop it
from alembic import context
if context.get_context().dialect.name == "postgresql":
conn = op.get_bind()
result = conn.execute(
sa.text(
"SELECT 1 FROM pg_indexes WHERE indexname = 'idx_one_active_meeting_per_room'"
)
)
if result.fetchone():
op.drop_index("idx_one_active_meeting_per_room", table_name="meeting")
else:
# For SQLite, just try to drop it
try:
op.drop_index("idx_one_active_meeting_per_room", table_name="meeting")
except Exception:
pass
def downgrade() -> None:
# Restore the unique constraint
op.create_index(
"idx_one_active_meeting_per_room",
"meeting",
["room_id"],
unique=True,
postgresql_where=sa.text("is_active = true"),
sqlite_where=sa.text("is_active = 1"),
)

View File

@@ -0,0 +1,28 @@
"""webhook url and secret null by default
Revision ID: 61882a919591
Revises: 0194f65cd6d3
Create Date: 2025-08-29 11:46:36.738091
"""
from typing import Sequence, Union
# revision identifiers, used by Alembic.
revision: str = "61882a919591"
down_revision: Union[str, None] = "0194f65cd6d3"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###

View File

@@ -0,0 +1,35 @@
"""make meeting room_id required and add foreign key
Revision ID: 6dec9fb5b46c
Revises: 61882a919591
Create Date: 2025-09-10 10:47:06.006819
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "6dec9fb5b46c"
down_revision: Union[str, None] = "61882a919591"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.create_foreign_key(
None, "room", ["room_id"], ["id"], ondelete="CASCADE"
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_constraint("meeting_room_id_fkey", type_="foreignkey")
# ### end Alembic commands ###

View File

@@ -0,0 +1,38 @@
"""add user api keys
Revision ID: 9e3f7b2a4c8e
Revises: dc035ff72fd5
Create Date: 2025-10-17 00:00:00.000000
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "9e3f7b2a4c8e"
down_revision: Union[str, None] = "dc035ff72fd5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"user_api_key",
sa.Column("id", sa.String(), nullable=False),
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("key_hash", sa.String(), nullable=False),
sa.Column("name", sa.String(), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
with op.batch_alter_table("user_api_key", schema=None) as batch_op:
batch_op.create_index("idx_user_api_key_hash", ["key_hash"], unique=True)
batch_op.create_index("idx_user_api_key_user_id", ["user_id"], unique=False)
def downgrade() -> None:
op.drop_table("user_api_key")

View File

@@ -0,0 +1,38 @@
"""add user table
Revision ID: bbafedfa510c
Revises: 5d6b9df9b045
Create Date: 2025-11-19 21:06:30.543262
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "bbafedfa510c"
down_revision: Union[str, None] = "5d6b9df9b045"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.create_table(
"user",
sa.Column("id", sa.String(), nullable=False),
sa.Column("email", sa.String(), nullable=False),
sa.Column("authentik_uid", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
with op.batch_alter_table("user", schema=None) as batch_op:
batch_op.create_index("idx_user_authentik_uid", ["authentik_uid"], unique=True)
batch_op.create_index("idx_user_email", ["email"], unique=False)
def downgrade() -> None:
op.drop_table("user")

View File

@@ -0,0 +1,34 @@
"""add_grace_period_fields_to_meeting
Revision ID: d4a1c446458c
Revises: 6025e9b2bef2
Create Date: 2025-08-18 18:50:37.768052
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "d4a1c446458c"
down_revision: Union[str, None] = "6025e9b2bef2"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add fields to track when participants left for grace period logic
op.add_column(
"meeting", sa.Column("last_participant_left_at", sa.DateTime(timezone=True))
)
op.add_column(
"meeting",
sa.Column("grace_period_minutes", sa.Integer, server_default=sa.text("15")),
)
def downgrade() -> None:
op.drop_column("meeting", "grace_period_minutes")
op.drop_column("meeting", "last_participant_left_at")

View File

@@ -0,0 +1,129 @@
"""add calendar
Revision ID: d8e204bbf615
Revises: d4a1c446458c
Create Date: 2025-09-10 19:56:22.295756
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = "d8e204bbf615"
down_revision: Union[str, None] = "d4a1c446458c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.create_table(
"calendar_event",
sa.Column("id", sa.String(), nullable=False),
sa.Column("room_id", sa.String(), nullable=False),
sa.Column("ics_uid", sa.Text(), nullable=False),
sa.Column("title", sa.Text(), nullable=True),
sa.Column("description", sa.Text(), nullable=True),
sa.Column("start_time", sa.DateTime(timezone=True), nullable=False),
sa.Column("end_time", sa.DateTime(timezone=True), nullable=False),
sa.Column("attendees", postgresql.JSONB(astext_type=sa.Text()), nullable=True),
sa.Column("location", sa.Text(), nullable=True),
sa.Column("ics_raw_data", sa.Text(), nullable=True),
sa.Column("last_synced", sa.DateTime(timezone=True), nullable=False),
sa.Column(
"is_deleted", sa.Boolean(), server_default=sa.text("false"), nullable=False
),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
sa.ForeignKeyConstraint(
["room_id"],
["room.id"],
name="fk_calendar_event_room_id",
ondelete="CASCADE",
),
sa.PrimaryKeyConstraint("id"),
sa.UniqueConstraint("room_id", "ics_uid", name="uq_room_calendar_event"),
)
with op.batch_alter_table("calendar_event", schema=None) as batch_op:
batch_op.create_index(
"idx_calendar_event_deleted",
["is_deleted"],
unique=False,
postgresql_where=sa.text("NOT is_deleted"),
)
batch_op.create_index(
"idx_calendar_event_room_start", ["room_id", "start_time"], unique=False
)
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(sa.Column("calendar_event_id", sa.String(), nullable=True))
batch_op.add_column(
sa.Column(
"calendar_metadata",
postgresql.JSONB(astext_type=sa.Text()),
nullable=True,
)
)
batch_op.create_index(
"idx_meeting_calendar_event", ["calendar_event_id"], unique=False
)
batch_op.create_foreign_key(
"fk_meeting_calendar_event_id",
"calendar_event",
["calendar_event_id"],
["id"],
ondelete="SET NULL",
)
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(sa.Column("ics_url", sa.Text(), nullable=True))
batch_op.add_column(
sa.Column(
"ics_fetch_interval", sa.Integer(), server_default="300", nullable=True
)
)
batch_op.add_column(
sa.Column(
"ics_enabled",
sa.Boolean(),
server_default=sa.text("false"),
nullable=False,
)
)
batch_op.add_column(
sa.Column("ics_last_sync", sa.DateTime(timezone=True), nullable=True)
)
batch_op.add_column(sa.Column("ics_last_etag", sa.Text(), nullable=True))
batch_op.create_index("idx_room_ics_enabled", ["ics_enabled"], unique=False)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_index("idx_room_ics_enabled")
batch_op.drop_column("ics_last_etag")
batch_op.drop_column("ics_last_sync")
batch_op.drop_column("ics_enabled")
batch_op.drop_column("ics_fetch_interval")
batch_op.drop_column("ics_url")
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_constraint("fk_meeting_calendar_event_id", type_="foreignkey")
batch_op.drop_index("idx_meeting_calendar_event")
batch_op.drop_column("calendar_metadata")
batch_op.drop_column("calendar_event_id")
with op.batch_alter_table("calendar_event", schema=None) as batch_op:
batch_op.drop_index("idx_calendar_event_room_start")
batch_op.drop_index(
"idx_calendar_event_deleted", postgresql_where=sa.text("NOT is_deleted")
)
op.drop_table("calendar_event")
# ### end Alembic commands ###

View File

@@ -0,0 +1,43 @@
"""remove_grace_period_fields
Revision ID: dc035ff72fd5
Revises: d8e204bbf615
Create Date: 2025-09-11 10:36:45.197588
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "dc035ff72fd5"
down_revision: Union[str, None] = "d8e204bbf615"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Remove grace period columns from meeting table
op.drop_column("meeting", "last_participant_left_at")
op.drop_column("meeting", "grace_period_minutes")
def downgrade() -> None:
# Add back grace period columns to meeting table
op.add_column(
"meeting",
sa.Column(
"last_participant_left_at", sa.DateTime(timezone=True), nullable=True
),
)
op.add_column(
"meeting",
sa.Column(
"grace_period_minutes",
sa.Integer(),
server_default=sa.text("15"),
nullable=True,
),
)

View File

@@ -0,0 +1,34 @@
"""make meeting room_id nullable but keep foreign key
Revision ID: def1b5867d4c
Revises: 0ce521cda2ee
Create Date: 2025-09-11 09:42:18.697264
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "def1b5867d4c"
down_revision: Union[str, None] = "0ce521cda2ee"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=True)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=False)
# ### end Alembic commands ###

View File

@@ -0,0 +1,28 @@
"""add_track_keys
Revision ID: f8294b31f022
Revises: 1e49625677e4
Create Date: 2025-10-27 18:52:17.589167
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "f8294b31f022"
down_revision: Union[str, None] = "1e49625677e4"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
with op.batch_alter_table("recording", schema=None) as batch_op:
batch_op.add_column(sa.Column("track_keys", sa.JSON(), nullable=True))
def downgrade() -> None:
with op.batch_alter_table("recording", schema=None) as batch_op:
batch_op.drop_column("track_keys")

View File

@@ -12,7 +12,6 @@ dependencies = [
"requests>=2.31.0",
"aiortc>=1.5.0",
"sortedcontainers>=2.4.0",
"loguru>=0.7.0",
"pydantic-settings>=2.0.2",
"structlog>=23.1.0",
"uvicorn[standard]>=0.23.1",
@@ -27,7 +26,6 @@ dependencies = [
"prometheus-fastapi-instrumentator>=6.1.0",
"sentencepiece>=0.1.99",
"protobuf>=4.24.3",
"profanityfilter>=2.0.6",
"celery>=5.3.4",
"redis>=5.0.1",
"python-jose[cryptography]>=3.3.0",
@@ -40,6 +38,7 @@ dependencies = [
"llama-index-llms-openai-like>=0.4.0",
"pytest-env>=1.1.5",
"webvtt-py>=0.5.0",
"icalendar>=6.0.0",
]
[dependency-groups]
@@ -113,13 +112,14 @@ source = ["reflector"]
[tool.pytest_env]
ENVIRONMENT = "pytest"
DATABASE_URL = "postgresql://test_user:test_password@localhost:15432/reflector_test"
AUTH_BACKEND = "jwt"
[tool.pytest.ini_options]
addopts = "-ra -q --disable-pytest-warnings --cov --cov-report html -v"
testpaths = ["tests"]
asyncio_mode = "auto"
markers = [
"gpu_modal: mark test to run only with GPU Modal endpoints (deselect with '-m \"not gpu_modal\"')",
"model_api: tests for the unified model-serving HTTP API (backend- and hardware-agnostic)",
]
[tool.ruff.lint]
@@ -131,7 +131,7 @@ select = [
[tool.ruff.lint.per-file-ignores]
"reflector/processors/summary/summary_builder.py" = ["E501"]
"gpu/**.py" = ["PLC0415"]
"gpu/modal_deployments/**.py" = ["PLC0415"]
"reflector/tools/**.py" = ["PLC0415"]
"migrations/versions/**.py" = ["PLC0415"]
"tests/**.py" = ["PLC0415"]

View File

@@ -12,6 +12,7 @@ from reflector.events import subscribers_shutdown, subscribers_startup
from reflector.logger import logger
from reflector.metrics import metrics_init
from reflector.settings import settings
from reflector.views.daily import router as daily_router
from reflector.views.meetings import router as meetings_router
from reflector.views.rooms import router as rooms_router
from reflector.views.rtc_offer import router as rtc_offer_router
@@ -26,6 +27,8 @@ from reflector.views.transcripts_upload import router as transcripts_upload_rout
from reflector.views.transcripts_webrtc import router as transcripts_webrtc_router
from reflector.views.transcripts_websocket import router as transcripts_websocket_router
from reflector.views.user import router as user_router
from reflector.views.user_api_keys import router as user_api_keys_router
from reflector.views.user_websocket import router as user_ws_router
from reflector.views.whereby import router as whereby_router
from reflector.views.zulip import router as zulip_router
@@ -65,6 +68,12 @@ app.add_middleware(
allow_headers=["*"],
)
@app.get("/health")
async def health():
return {"status": "healthy"}
# metrics
instrumentator = Instrumentator(
excluded_handlers=["/docs", "/metrics"],
@@ -84,8 +93,11 @@ app.include_router(transcripts_websocket_router, prefix="/v1")
app.include_router(transcripts_webrtc_router, prefix="/v1")
app.include_router(transcripts_process_router, prefix="/v1")
app.include_router(user_router, prefix="/v1")
app.include_router(user_api_keys_router, prefix="/v1")
app.include_router(user_ws_router, prefix="/v1")
app.include_router(zulip_router, prefix="/v1")
app.include_router(whereby_router, prefix="/v1")
app.include_router(daily_router, prefix="/v1/daily")
add_pagination(app)
# prepare celery

View File

@@ -0,0 +1,27 @@
import asyncio
import functools
from reflector.db import get_database
def asynctask(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
async def run_with_db():
database = get_database()
await database.connect()
try:
return await f(*args, **kwargs)
finally:
await database.disconnect()
coro = run_with_db()
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
if loop and loop.is_running():
# run_until_complete would raise inside an already-running loop;
# schedule the coroutine on it instead
return asyncio.ensure_future(coro)
return asyncio.run(coro)
return wrapper
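# Hypothetical usage sketch (the app name and task body are illustrative,
# not taken from the repo): asynctask sits between the Celery decorator and
# the async function, so the worker calls a plain sync wrapper.
from celery import Celery

_demo_app = Celery("reflector-demo")

@_demo_app.task
@asynctask
async def _cleanup_meeting_demo(meeting_id: str) -> None:
    # body runs with the database connected by the wrapper
    ...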

View File

@@ -1,14 +1,18 @@
from typing import Annotated, List, Optional
from fastapi import Depends, HTTPException
from fastapi.security import APIKeyHeader, OAuth2PasswordBearer
from jose import JWTError, jwt
from pydantic import BaseModel
from reflector.db.user_api_keys import user_api_keys_controller
from reflector.db.users import user_controller
from reflector.logger import logger
from reflector.settings import settings
from reflector.utils import generate_uuid4
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
jwt_public_key = open(f"reflector/auth/jwt/keys/{settings.AUTH_JWT_PUBLIC_KEY}").read()
jwt_algorithm = settings.AUTH_JWT_ALGORITHM
@@ -26,7 +30,7 @@ class JWTException(Exception):
class UserInfo(BaseModel):
sub: str
email: Optional[str] = None
def __getitem__(self, key):
return getattr(self, key)
@@ -58,33 +62,65 @@ def authenticated(token: Annotated[str, Depends(oauth2_scheme)]):
return None
async def _authenticate_user(
jwt_token: Optional[str],
api_key: Optional[str],
jwtauth: JWTAuth,
) -> UserInfo | None:
user_infos: List[UserInfo] = []
if api_key:
user_api_key = await user_api_keys_controller.verify_key(api_key)
if user_api_key:
user_infos.append(UserInfo(sub=user_api_key.user_id, email=None))
if jwt_token:
try:
payload = jwtauth.verify_token(jwt_token)
authentik_uid = payload["sub"]
email = payload["email"]
user = await user_controller.get_by_authentik_uid(authentik_uid)
if not user:
logger.info(
f"Creating new user on first login: {authentik_uid} ({email})"
)
user = await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid=authentik_uid,
email=email,
)
user_infos.append(UserInfo(sub=user.id, email=email))
except JWTError as e:
logger.error(f"JWT error: {e}")
raise HTTPException(status_code=401, detail="Invalid authentication")
if len(user_infos) == 0:
return None
if len(set([x.sub for x in user_infos])) > 1:
raise JWTException(
status_code=401,
detail="Invalid authentication: more than one user provided",
)
return user_infos[0]
async def current_user(
jwt_token: Annotated[Optional[str], Depends(oauth2_scheme)],
api_key: Annotated[Optional[str], Depends(api_key_header)],
jwtauth: JWTAuth = Depends(),
):
user = await _authenticate_user(jwt_token, api_key, jwtauth)
if user is None:
raise HTTPException(status_code=401, detail="Not authenticated")
return user
async def current_user_optional(
jwt_token: Annotated[Optional[str], Depends(oauth2_scheme)],
api_key: Annotated[Optional[str], Depends(api_key_header)],
jwtauth: JWTAuth = Depends(),
):
return await _authenticate_user(jwt_token, api_key, jwtauth)
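# Sketch: either credential now satisfies current_user; the X-API-Key header
# name comes from api_key_header above, while the URL and key values are
# placeholders, not real endpoints or secrets.
import httpx

def _fetch_with_api_key():
    return httpx.get(
        "https://reflector.example.com/v1/transcripts",
        headers={"X-API-Key": "rk-placeholder"},
    )

def _fetch_with_jwt(jwt: str):
    return httpx.get(
        "https://reflector.example.com/v1/transcripts",
        headers={"Authorization": f"Bearer {jwt}"},
    )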

View File

@@ -0,0 +1,6 @@
Anything about Daily.co API interaction:
- webhook event shapes
- REST API client
No existing REST API client was found in the wild; the official library is for joining video calls as a bot. A minimal signature-check sketch follows.
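A sketch of webhook signature verification, assuming Daily signs the raw request body with HMAC-SHA256 using the base64-decoded webhook secret (the hash and header name are assumptions here; the module's verify_webhook_signature is the source of truth):

import base64
import hashlib
import hmac

def verify_signature_sketch(secret_b64: str, signature: str, body: bytes) -> bool:
    # compare the expected HMAC of the raw body against the header value
    key = base64.b64decode(secret_b64)
    expected = base64.b64encode(hmac.new(key, body, hashlib.sha256).digest()).decode()
    return hmac.compare_digest(expected, signature)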

View File

@@ -0,0 +1,108 @@
"""
Daily.co API Module
"""
# Client
from .client import DailyApiClient, DailyApiError
# Request models
from .requests import (
CreateMeetingTokenRequest,
CreateRoomRequest,
CreateWebhookRequest,
MeetingTokenProperties,
RecordingsBucketConfig,
RoomProperties,
UpdateWebhookRequest,
)
# Response models
from .responses import (
MeetingParticipant,
MeetingParticipantsResponse,
MeetingResponse,
MeetingTokenResponse,
RecordingResponse,
RecordingS3Info,
RoomPresenceParticipant,
RoomPresenceResponse,
RoomResponse,
WebhookResponse,
)
# Webhook utilities
from .webhook_utils import (
extract_room_name,
parse_participant_joined,
parse_participant_left,
parse_recording_error,
parse_recording_ready,
parse_recording_started,
parse_webhook_payload,
verify_webhook_signature,
)
# Webhook models
from .webhooks import (
DailyTrack,
DailyWebhookEvent,
DailyWebhookEventUnion,
ParticipantJoinedEvent,
ParticipantJoinedPayload,
ParticipantLeftEvent,
ParticipantLeftPayload,
RecordingErrorEvent,
RecordingErrorPayload,
RecordingReadyEvent,
RecordingReadyToDownloadPayload,
RecordingStartedEvent,
RecordingStartedPayload,
)
__all__ = [
# Client
"DailyApiClient",
"DailyApiError",
# Requests
"CreateRoomRequest",
"RoomProperties",
"RecordingsBucketConfig",
"CreateMeetingTokenRequest",
"MeetingTokenProperties",
"CreateWebhookRequest",
"UpdateWebhookRequest",
# Responses
"RoomResponse",
"RoomPresenceResponse",
"RoomPresenceParticipant",
"MeetingParticipantsResponse",
"MeetingParticipant",
"MeetingResponse",
"RecordingResponse",
"RecordingS3Info",
"MeetingTokenResponse",
"WebhookResponse",
# Webhooks
"DailyWebhookEvent",
"DailyWebhookEventUnion",
"DailyTrack",
"ParticipantJoinedEvent",
"ParticipantJoinedPayload",
"ParticipantLeftEvent",
"ParticipantLeftPayload",
"RecordingStartedEvent",
"RecordingStartedPayload",
"RecordingReadyEvent",
"RecordingReadyToDownloadPayload",
"RecordingErrorEvent",
"RecordingErrorPayload",
# Webhook utilities
"verify_webhook_signature",
"extract_room_name",
"parse_webhook_payload",
"parse_participant_joined",
"parse_participant_left",
"parse_recording_started",
"parse_recording_ready",
"parse_recording_error",
]

View File

@@ -0,0 +1,573 @@
"""
Daily.co API Client
Complete async client for Daily.co REST API with Pydantic models.
Reference: https://docs.daily.co/reference/rest-api
"""
from http import HTTPStatus
from typing import Any
import httpx
import structlog
from reflector.utils.string import NonEmptyString
from .requests import (
CreateMeetingTokenRequest,
CreateRoomRequest,
CreateWebhookRequest,
UpdateWebhookRequest,
)
from .responses import (
MeetingParticipantsResponse,
MeetingResponse,
MeetingTokenResponse,
RecordingResponse,
RoomPresenceResponse,
RoomResponse,
WebhookResponse,
)
logger = structlog.get_logger(__name__)
class DailyApiError(Exception):
"""Daily.co API error with full request/response context."""
def __init__(self, operation: str, response: httpx.Response):
self.operation = operation
self.response = response
self.status_code = response.status_code
self.response_body = response.text
self.url = str(response.url)
self.request_body = (
response.request.content.decode() if response.request.content else None
)
super().__init__(
f"Daily.co API error: {operation} failed with status {self.status_code}"
)
class DailyApiClient:
"""
Complete async client for Daily.co REST API.
Usage:
# Direct usage
client = DailyApiClient(api_key="your_api_key")
room = await client.create_room(CreateRoomRequest(name="my-room"))
await client.close() # Clean up when done
# Context manager (recommended)
async with DailyApiClient(api_key="your_api_key") as client:
room = await client.create_room(CreateRoomRequest(name="my-room"))
"""
BASE_URL = "https://api.daily.co/v1"
DEFAULT_TIMEOUT = 10.0
def __init__(
self,
api_key: NonEmptyString,
webhook_secret: NonEmptyString | None = None,
timeout: float = DEFAULT_TIMEOUT,
base_url: NonEmptyString | None = None,
):
"""
Initialize Daily.co API client.
Args:
api_key: Daily.co API key (Bearer token)
webhook_secret: Base64-encoded HMAC secret for webhook verification.
Must match the 'hmac' value provided when creating webhooks.
Generate with: base64.b64encode(os.urandom(32)).decode()
timeout: Default request timeout in seconds
base_url: Override base URL (for testing)
"""
self.api_key = api_key
self.webhook_secret = webhook_secret
self.timeout = timeout
self.base_url = base_url or self.BASE_URL
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
self._client: httpx.AsyncClient | None = None
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def _get_client(self) -> httpx.AsyncClient:
if self._client is None:
self._client = httpx.AsyncClient(timeout=self.timeout)
return self._client
async def close(self):
if self._client is not None:
await self._client.aclose()
self._client = None
async def _handle_response(
self, response: httpx.Response, operation: str
) -> dict[str, Any]:
"""
Handle API response with error logging.
Args:
response: HTTP response
operation: Operation name for logging (e.g., "create_room")
Returns:
Parsed JSON response
Raises:
DailyApiError: If request failed with full context
"""
if response.status_code >= 400:
logger.error(
f"Daily.co API error: {operation}",
status_code=response.status_code,
response_body=response.text,
request_body=response.request.content.decode()
if response.request.content
else None,
url=str(response.url),
)
raise DailyApiError(operation, response)
return response.json()
# ============================================================================
# ROOMS
# ============================================================================
async def create_room(self, request: CreateRoomRequest) -> RoomResponse:
"""
Create a new Daily.co room.
Reference: https://docs.daily.co/reference/rest-api/rooms/create-room
Args:
request: Room creation request with name, privacy, and properties
Returns:
Created room data including URL and ID
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.post(
f"{self.base_url}/rooms",
headers=self.headers,
json=request.model_dump(exclude_none=True),
)
data = await self._handle_response(response, "create_room")
return RoomResponse(**data)
async def get_room(self, room_name: NonEmptyString) -> RoomResponse:
"""
Get room configuration.
Args:
room_name: Daily.co room name
Returns:
Room configuration data
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.get(
f"{self.base_url}/rooms/{room_name}",
headers=self.headers,
)
data = await self._handle_response(response, "get_room")
return RoomResponse(**data)
async def get_room_presence(
self, room_name: NonEmptyString
) -> RoomPresenceResponse:
"""
Get current participants in a room (real-time presence).
Reference: https://docs.daily.co/reference/rest-api/rooms/get-room-presence
Args:
room_name: Daily.co room name
Returns:
List of currently present participants with join time and duration
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.get(
f"{self.base_url}/rooms/{room_name}/presence",
headers=self.headers,
)
data = await self._handle_response(response, "get_room_presence")
return RoomPresenceResponse(**data)
async def delete_room(self, room_name: NonEmptyString) -> None:
"""
Delete a room (idempotent - succeeds even if room doesn't exist).
Reference: https://docs.daily.co/reference/rest-api/rooms/delete-room
Args:
room_name: Daily.co room name
Raises:
DailyApiError: If the API request fails (except 404)
"""
client = await self._get_client()
response = await client.delete(
f"{self.base_url}/rooms/{room_name}",
headers=self.headers,
)
# Idempotent delete - 404 means already deleted
if response.status_code == HTTPStatus.NOT_FOUND:
logger.debug("Room not found (already deleted)", room_name=room_name)
return
await self._handle_response(response, "delete_room")
# ============================================================================
# MEETINGS
# ============================================================================
async def get_meeting(self, meeting_id: NonEmptyString) -> MeetingResponse:
"""
Get full meeting information including participants.
Reference: https://docs.daily.co/reference/rest-api/meetings/get-meeting-information
Args:
meeting_id: Daily.co meeting/session ID
Returns:
Meeting metadata including room, duration, participants, and status
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.get(
f"{self.base_url}/meetings/{meeting_id}",
headers=self.headers,
)
data = await self._handle_response(response, "get_meeting")
return MeetingResponse(**data)
async def get_meeting_participants(
self,
meeting_id: NonEmptyString,
limit: int | None = None,
joined_after: NonEmptyString | None = None,
joined_before: NonEmptyString | None = None,
) -> MeetingParticipantsResponse:
"""
Get historical participant data from a completed meeting (paginated).
Reference: https://docs.daily.co/reference/rest-api/meetings/get-meeting-participants
Args:
meeting_id: Daily.co meeting/session ID
limit: Maximum number of participant records to return
joined_after: Return participants who joined after this participant_id
joined_before: Return participants who joined before this participant_id
Returns:
List of participants with join times and duration
Raises:
DailyApiError: If the API request fails (404 when no more participants)
Note:
For pagination, use joined_after with the last participant_id from previous response.
Returns 404 when no more participants remain.
"""
params = {}
if limit is not None:
params["limit"] = limit
if joined_after is not None:
params["joined_after"] = joined_after
if joined_before is not None:
params["joined_before"] = joined_before
client = await self._get_client()
response = await client.get(
f"{self.base_url}/meetings/{meeting_id}/participants",
headers=self.headers,
params=params,
)
data = await self._handle_response(response, "get_meeting_participants")
return MeetingParticipantsResponse(**data)
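# Sketch: page through every participant of a meeting using joined_after
# cursors and the documented 404 stop condition. Assumes the response model
# exposes a `data` list whose entries carry a `participant_id` field.
async def _iter_all_participants(client: DailyApiClient, meeting_id: str):
    cursor = None
    while True:
        try:
            page = await client.get_meeting_participants(
                meeting_id, limit=100, joined_after=cursor
            )
        except DailyApiError as e:
            if e.status_code == 404:  # no more participants remain
                return
            raise
        if not page.data:
            return
        for participant in page.data:
            yield participant
        cursor = page.data[-1].participant_id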
# ============================================================================
# RECORDINGS
# ============================================================================
async def get_recording(self, recording_id: NonEmptyString) -> RecordingResponse:
"""
Get recording metadata and status.
Reference: https://docs.daily.co/reference/rest-api/recordings/get-recording-information
"""
client = await self._get_client()
response = await client.get(
f"{self.base_url}/recordings/{recording_id}",
headers=self.headers,
)
data = await self._handle_response(response, "get_recording")
return RecordingResponse(**data)
async def list_recordings(
self,
room_name: NonEmptyString | None = None,
starting_after: str | None = None,
ending_before: str | None = None,
limit: int = 100,
) -> list[RecordingResponse]:
"""
List recordings with optional filters.
Reference: https://docs.daily.co/reference/rest-api/recordings
Args:
room_name: Filter by room name
starting_after: Pagination cursor - recording ID to start after
ending_before: Pagination cursor - recording ID to end before
limit: Max results per page (default 100, max 100)
Note: starting_after/ending_before are pagination cursors (recording IDs),
NOT time filters. API returns recordings in reverse chronological order.
"""
client = await self._get_client()
params = {"limit": limit}
if room_name:
params["room_name"] = room_name
if starting_after:
params["starting_after"] = starting_after
if ending_before:
params["ending_before"] = ending_before
response = await client.get(
f"{self.base_url}/recordings",
headers=self.headers,
params=params,
)
data = await self._handle_response(response, "list_recordings")
if not isinstance(data, dict) or "data" not in data:
logger.error(
"Daily.co API returned unexpected format for list_recordings",
data_type=type(data).__name__,
data_keys=list(data.keys()) if isinstance(data, dict) else None,
data_sample=str(data)[:500],
room_name=room_name,
operation="list_recordings",
)
raise httpx.HTTPStatusError(
message=f"Unexpected response format from list_recordings: {type(data).__name__}",
request=response.request,
response=response,
)
return [RecordingResponse(**r) for r in data["data"]]
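# Sketch: walk a room's recordings page by page using starting_after cursors
# (recording IDs, per the note above); assumes RecordingResponse exposes `id`.
async def _iter_recordings(client: DailyApiClient, room_name: str):
    cursor = None
    while True:
        page = await client.list_recordings(
            room_name=room_name, starting_after=cursor, limit=100
        )
        if not page:
            return
        for rec in page:
            yield rec
        cursor = page[-1].id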
# ============================================================================
# MEETING TOKENS
# ============================================================================
async def create_meeting_token(
self, request: CreateMeetingTokenRequest
) -> MeetingTokenResponse:
"""
Create a meeting token for participant authentication.
Reference: https://docs.daily.co/reference/rest-api/meeting-tokens/create-meeting-token
Args:
request: Token properties including room name, user_id, permissions
Returns:
JWT meeting token
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.post(
f"{self.base_url}/meeting-tokens",
headers=self.headers,
json=request.model_dump(exclude_none=True),
)
data = await self._handle_response(response, "create_meeting_token")
return MeetingTokenResponse(**data)
# ============================================================================
# WEBHOOKS
# ============================================================================
async def list_webhooks(self) -> list[WebhookResponse]:
"""
List all configured webhooks for this account.
Reference: https://docs.daily.co/reference/rest-api/webhooks
Returns:
List of webhook configurations
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.get(
f"{self.base_url}/webhooks",
headers=self.headers,
)
data = await self._handle_response(response, "list_webhooks")
# Daily.co returns array directly (not paginated)
if isinstance(data, list):
return [WebhookResponse(**wh) for wh in data]
# Future-proof: handle potential pagination envelope
if isinstance(data, dict) and "data" in data:
return [WebhookResponse(**wh) for wh in data["data"]]
logger.warning("Unexpected webhook list response format", data=data)
return []
async def create_webhook(self, request: CreateWebhookRequest) -> WebhookResponse:
"""
Create a new webhook subscription.
Reference: https://docs.daily.co/reference/rest-api/webhooks
Args:
request: Webhook configuration with URL, event types, and HMAC secret
Returns:
Created webhook with UUID and state
Raises:
DailyApiError: If the API request fails
"""
client = await self._get_client()
response = await client.post(
f"{self.base_url}/webhooks",
headers=self.headers,
json=request.model_dump(exclude_none=True),
)
data = await self._handle_response(response, "create_webhook")
return WebhookResponse(**data)
async def update_webhook(
self, webhook_uuid: NonEmptyString, request: UpdateWebhookRequest
) -> WebhookResponse:
"""
Update webhook configuration.
Note: Daily.co may not support PATCH for all fields.
Common pattern is delete + recreate.
Reference: https://docs.daily.co/reference/rest-api/webhooks
Args:
webhook_uuid: Webhook UUID to update
request: Updated webhook configuration
Returns:
Updated webhook configuration
Raises:
httpx.HTTPStatusError: If API request fails
"""
client = await self._get_client()
response = await client.patch(
f"{self.base_url}/webhooks/{webhook_uuid}",
headers=self.headers,
json=request.model_dump(exclude_none=True),
)
data = await self._handle_response(response, "update_webhook")
return WebhookResponse(**data)
async def delete_webhook(self, webhook_uuid: NonEmptyString) -> None:
"""
Delete a webhook.
Reference: https://docs.daily.co/reference/rest-api/webhooks
Args:
webhook_uuid: Webhook UUID to delete
Raises:
httpx.HTTPStatusError: If webhook not found or deletion fails
"""
client = await self._get_client()
response = await client.delete(
f"{self.base_url}/webhooks/{webhook_uuid}",
headers=self.headers,
)
await self._handle_response(response, "delete_webhook")
# ============================================================================
# HELPER METHODS
# ============================================================================
async def find_webhook_by_url(self, url: NonEmptyString) -> WebhookResponse | None:
"""
Find a webhook by its URL.
Args:
url: Webhook endpoint URL to search for
Returns:
Webhook if found, None otherwise
"""
webhooks = await self.list_webhooks()
for webhook in webhooks:
if webhook.url == url:
return webhook
return None
async def find_webhooks_by_pattern(
self, pattern: NonEmptyString
) -> list[WebhookResponse]:
"""
Find webhooks matching a URL pattern (e.g., 'ngrok').
Args:
pattern: String to match in webhook URLs
Returns:
List of matching webhooks
"""
webhooks = await self.list_webhooks()
return [wh for wh in webhooks if pattern in wh.url]
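A hedged sketch of combining these helpers into an idempotent setup step; ensure_webhook is hypothetical, and CreateWebhookRequest is the request model defined later in this diff:
async def ensure_webhook(client, url: str, hmac_secret: str):
    # Reuse an existing subscription for this URL rather than duplicating it.
    existing = await client.find_webhook_by_url(url)
    if existing:
        return existing
    return await client.create_webhook(
        CreateWebhookRequest(
            url=url,
            eventTypes=["recording.ready-to-download", "recording.error"],
            hmac=hmac_secret,
        )
    )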

View File

@@ -0,0 +1,158 @@
"""
Daily.co API Request Models
Reference: https://docs.daily.co/reference/rest-api
"""
from typing import List, Literal
from pydantic import BaseModel, Field
from reflector.utils.string import NonEmptyString
class RecordingsBucketConfig(BaseModel):
"""
S3 bucket configuration for raw-tracks recordings.
Reference: https://docs.daily.co/reference/rest-api/rooms/create-room
"""
bucket_name: NonEmptyString = Field(description="S3 bucket name")
bucket_region: NonEmptyString = Field(description="AWS region (e.g., 'us-east-1')")
assume_role_arn: NonEmptyString = Field(
description="AWS IAM role ARN that Daily.co will assume to write recordings"
)
allow_api_access: bool = Field(
default=True,
description="Whether to allow API access to recording metadata",
)
class RoomProperties(BaseModel):
"""
Room configuration properties.
"""
enable_recording: Literal["cloud", "local", "raw-tracks"] | None = Field(
default=None,
description="Recording mode: 'cloud' for mixed, 'local' for local recording, 'raw-tracks' for multitrack, None to disable",
)
enable_chat: bool = Field(default=True, description="Enable in-meeting chat")
enable_screenshare: bool = Field(default=True, description="Enable screen sharing")
start_video_off: bool = Field(
default=False, description="Start with video off for all participants"
)
start_audio_off: bool = Field(
default=False, description="Start with audio muted for all participants"
)
exp: int | None = Field(
None, description="Room expiration timestamp (Unix epoch seconds)"
)
recordings_bucket: RecordingsBucketConfig | None = Field(
None, description="S3 bucket configuration for raw-tracks recordings"
)
class CreateRoomRequest(BaseModel):
"""
Request to create a new Daily.co room.
Reference: https://docs.daily.co/reference/rest-api/rooms/create-room
"""
name: NonEmptyString = Field(description="Room name (must be unique within domain)")
privacy: Literal["public", "private"] = Field(
default="public", description="Room privacy setting"
)
properties: RoomProperties = Field(
default_factory=RoomProperties, description="Room configuration properties"
)
class MeetingTokenProperties(BaseModel):
"""
Properties for meeting token creation.
Reference: https://docs.daily.co/reference/rest-api/meeting-tokens/create-meeting-token
"""
room_name: NonEmptyString = Field(description="Room name this token is valid for")
user_id: NonEmptyString | None = Field(
None, description="User identifier to associate with token"
)
is_owner: bool = Field(
default=False, description="Grant owner privileges to token holder"
)
start_cloud_recording: bool = Field(
default=False, description="Automatically start cloud recording on join"
)
enable_recording_ui: bool = Field(
default=True, description="Show recording controls in UI"
)
eject_at_token_exp: bool = Field(
default=False, description="Eject participant when token expires"
)
nbf: int | None = Field(
None, description="Not-before timestamp (Unix epoch seconds)"
)
exp: int | None = Field(
None, description="Expiration timestamp (Unix epoch seconds)"
)
class CreateMeetingTokenRequest(BaseModel):
"""
Request to create a meeting token for participant authentication.
Reference: https://docs.daily.co/reference/rest-api/meeting-tokens/create-meeting-token
"""
properties: MeetingTokenProperties = Field(description="Token properties")
class CreateWebhookRequest(BaseModel):
"""
Request to create a webhook subscription.
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
url: NonEmptyString = Field(description="Webhook endpoint URL (must be HTTPS)")
eventTypes: List[
Literal[
"participant.joined",
"participant.left",
"recording.started",
"recording.ready-to-download",
"recording.error",
]
] = Field(
description="Array of event types to subscribe to (only events we handle)"
)
hmac: NonEmptyString = Field(
description="Base64-encoded HMAC secret for webhook signature verification"
)
basicAuth: NonEmptyString | None = Field(
None, description="Optional basic auth credentials for webhook endpoint"
)
class UpdateWebhookRequest(BaseModel):
"""
Request to update an existing webhook.
Note: Daily.co API may not support PATCH for webhooks.
Common pattern is to delete and recreate.
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
url: NonEmptyString | None = Field(None, description="New webhook endpoint URL")
eventTypes: List[NonEmptyString] | None = Field(
None, description="New array of event types"
)
hmac: NonEmptyString | None = Field(None, description="New HMAC secret")
basicAuth: NonEmptyString | None = Field(
None, description="New basic auth credentials"
)
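A sketch of assembling a raw-tracks room request from these models; all bucket and room values are placeholders:
bucket = RecordingsBucketConfig(
    bucket_name="my-recordings-bucket",
    bucket_region="us-east-1",
    assume_role_arn="arn:aws:iam::123456789012:role/daily-recordings",
)
request = CreateRoomRequest(
    name="weekly-sync",
    privacy="private",
    properties=RoomProperties(
        enable_recording="raw-tracks",
        recordings_bucket=bucket,
    ),
)
# exclude_none keeps unset optional fields out of the POST body,
# matching how the client serializes requests
payload = request.model_dump(exclude_none=True)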

View File

@@ -0,0 +1,193 @@
"""
Daily.co API Response Models
"""
from typing import Any, Dict, List, Literal
from pydantic import BaseModel, Field
from reflector.dailyco_api.webhooks import DailyTrack
from reflector.utils.string import NonEmptyString
# Not documented by Daily.co; values observed from real API responses
RecordingStatus = Literal["in-progress", "finished"]
class RoomResponse(BaseModel):
"""
Response from room creation or retrieval.
Reference: https://docs.daily.co/reference/rest-api/rooms/create-room
"""
id: NonEmptyString = Field(description="Unique room identifier (UUID)")
name: NonEmptyString = Field(description="Room name used in URLs")
api_created: bool = Field(description="Whether room was created via API")
privacy: Literal["public", "private"] = Field(description="Room privacy setting")
url: NonEmptyString = Field(description="Full room URL")
created_at: NonEmptyString = Field(description="ISO 8601 creation timestamp")
config: Dict[NonEmptyString, Any] = Field(
default_factory=dict, description="Room configuration properties"
)
class RoomPresenceParticipant(BaseModel):
"""
Participant presence information in a room.
Reference: https://docs.daily.co/reference/rest-api/rooms/get-room-presence
"""
room: NonEmptyString = Field(description="Room name")
id: NonEmptyString = Field(description="Participant session ID")
userId: NonEmptyString | None = Field(None, description="User ID if provided")
userName: NonEmptyString | None = Field(None, description="User display name")
joinTime: NonEmptyString = Field(description="ISO 8601 join timestamp")
duration: int = Field(description="Duration in room (seconds)")
class RoomPresenceResponse(BaseModel):
"""
Response from room presence endpoint.
Reference: https://docs.daily.co/reference/rest-api/rooms/get-room-presence
"""
total_count: int = Field(
description="Total number of participants currently in room"
)
data: List[RoomPresenceParticipant] = Field(
default_factory=list, description="Array of participant presence data"
)
class MeetingParticipant(BaseModel):
"""
Historical participant data from a meeting.
Reference: https://docs.daily.co/reference/rest-api/meetings/get-meeting-participants
"""
user_id: NonEmptyString | None = Field(None, description="User identifier")
participant_id: NonEmptyString = Field(description="Participant session identifier")
user_name: NonEmptyString | None = Field(None, description="User display name")
join_time: int = Field(description="Join timestamp (Unix epoch seconds)")
duration: int = Field(description="Duration in meeting (seconds)")
class MeetingParticipantsResponse(BaseModel):
"""
Response from meeting participants endpoint.
Reference: https://docs.daily.co/reference/rest-api/meetings/get-meeting-participants
"""
data: List[MeetingParticipant] = Field(
default_factory=list, description="Array of participant data"
)
class MeetingResponse(BaseModel):
"""
Response from meeting information endpoint.
Reference: https://docs.daily.co/reference/rest-api/meetings/get-meeting-information
"""
id: NonEmptyString = Field(description="Meeting session identifier (UUID)")
room: NonEmptyString = Field(description="Room name where meeting occurred")
start_time: int = Field(
description="Meeting start Unix timestamp (~15s granularity)"
)
duration: int = Field(description="Total meeting duration in seconds")
ongoing: bool = Field(description="Whether meeting is currently active")
max_participants: int = Field(description="Peak concurrent participant count")
participants: List[MeetingParticipant] = Field(
default_factory=list, description="Array of participant session data"
)
class RecordingS3Info(BaseModel):
"""
S3 bucket information for a recording.
Reference: https://docs.daily.co/reference/rest-api/recordings
"""
bucket_name: NonEmptyString
bucket_region: NonEmptyString
endpoint: NonEmptyString | None = None
class RecordingResponse(BaseModel):
"""
Response from recording retrieval endpoint.
Reference: https://docs.daily.co/reference/rest-api/recordings
"""
id: NonEmptyString = Field(description="Recording identifier")
room_name: NonEmptyString = Field(description="Room where recording occurred")
start_ts: int = Field(description="Recording start timestamp (Unix epoch seconds)")
status: RecordingStatus = Field(
description="Recording status ('in-progress' or 'finished')"
)
max_participants: int | None = Field(
None, description="Maximum participants during recording (may be missing)"
)
duration: int = Field(description="Recording duration in seconds")
share_token: NonEmptyString | None = Field(
None, description="Token for sharing recording"
)
s3: RecordingS3Info | None = Field(None, description="S3 bucket information")
tracks: list[DailyTrack] = Field(
default_factory=list,
description="Track list for raw-tracks recordings (always array, never null)",
)
# camelCase here is not a mistake; it mirrors Daily.co's field name
mtgSessionId: NonEmptyString | None = Field(
None, description="Meeting session identifier (may be missing)"
)
class MeetingTokenResponse(BaseModel):
"""
Response from meeting token creation.
Reference: https://docs.daily.co/reference/rest-api/meeting-tokens/create-meeting-token
"""
token: NonEmptyString = Field(
description="JWT meeting token for participant authentication"
)
class WebhookResponse(BaseModel):
"""
Response from webhook creation or retrieval.
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
uuid: NonEmptyString = Field(description="Unique webhook identifier")
url: NonEmptyString = Field(description="Webhook endpoint URL")
hmac: NonEmptyString | None = Field(
None, description="Base64-encoded HMAC secret for signature verification"
)
basicAuth: NonEmptyString | None = Field(
None, description="Basic auth credentials if configured"
)
eventTypes: List[NonEmptyString] = Field(
default_factory=list,
description="Array of event types (e.g., ['recording.started', 'participant.joined'])",
)
state: Literal["ACTIVE", "FAILED"] = Field(
description="Webhook state - FAILED after 3+ consecutive failures"
)
failedCount: int = Field(default=0, description="Number of consecutive failures")
lastMomentPushed: NonEmptyString | None = Field(
None, description="ISO 8601 timestamp of last successful push"
)
domainId: NonEmptyString = Field(description="Daily.co domain/account identifier")
createdAt: NonEmptyString = Field(description="ISO 8601 creation timestamp")
updatedAt: NonEmptyString = Field(description="ISO 8601 last update timestamp")
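A brief sketch of consuming RecordingResponse in multitrack processing, assuming the DailyTrack shape imported above:
def audio_track_keys(recording: RecordingResponse) -> list[str]:
    # Only finished raw-tracks recordings carry a complete track list;
    # video tracks (camera, screenshare) are excluded here.
    if recording.status != "finished":
        return []
    return [t.s3Key for t in recording.tracks if t.type == "audio"]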

View File

@@ -0,0 +1,228 @@
"""
Daily.co Webhook Utilities
Utilities for verifying and parsing Daily.co webhook events.
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
import base64
import binascii
import hmac
from hashlib import sha256
import structlog
from .webhooks import (
DailyWebhookEvent,
ParticipantJoinedPayload,
ParticipantLeftPayload,
RecordingErrorPayload,
RecordingReadyToDownloadPayload,
RecordingStartedPayload,
)
logger = structlog.get_logger(__name__)
def verify_webhook_signature(
body: bytes,
signature: str,
timestamp: str,
webhook_secret: str,
) -> bool:
"""
Verify Daily.co webhook signature using HMAC-SHA256.
Daily.co signature verification:
1. Base64-decode the webhook secret
2. Create signed content: timestamp + '.' + body
3. Compute HMAC-SHA256(secret, signed_content)
4. Base64-encode the result
5. Compare with provided signature using constant-time comparison
Reference: https://docs.daily.co/reference/rest-api/webhooks
Args:
body: Raw request body bytes
signature: X-Webhook-Signature header value
timestamp: X-Webhook-Timestamp header value
webhook_secret: Base64-encoded HMAC secret
Returns:
True if signature is valid, False otherwise
Example:
>>> body = b'{"version":"1.0.0","type":"participant.joined",...}'
>>> signature = "abc123..."
>>> timestamp = "1234567890"
>>> secret = "your-base64-secret"
>>> is_valid = verify_webhook_signature(body, signature, timestamp, secret)
"""
if not signature or not timestamp or not webhook_secret:
logger.warning(
"Missing required data for webhook verification",
has_signature=bool(signature),
has_timestamp=bool(timestamp),
has_secret=bool(webhook_secret),
)
return False
try:
secret_bytes = base64.b64decode(webhook_secret)
signed_content = timestamp.encode() + b"." + body
expected = hmac.new(secret_bytes, signed_content, sha256).digest()
expected_b64 = base64.b64encode(expected).decode()
# Constant-time comparison to prevent timing attacks
return hmac.compare_digest(expected_b64, signature)
except (binascii.Error, ValueError, TypeError, UnicodeDecodeError) as e:
logger.error(
"Webhook signature verification failed",
error=str(e),
error_type=type(e).__name__,
)
return False
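For tests, the inverse operation can be sketched by mirroring steps 1-4 of the docstring (this signer is an assumption for test fixtures, not part of the module):
import base64
import hmac
from hashlib import sha256

def sign_webhook(body: bytes, timestamp: str, webhook_secret: str) -> str:
    # Decode the base64 secret, sign 'timestamp.body', base64-encode the digest
    secret = base64.b64decode(webhook_secret)
    digest = hmac.new(secret, timestamp.encode() + b"." + body, sha256).digest()
    return base64.b64encode(digest).decode()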
def extract_room_name(event: DailyWebhookEvent) -> str | None:
"""
Extract room name from Daily.co webhook event payload.
Args:
event: Parsed webhook event
Returns:
Room name if present and is a string, None otherwise
Example:
>>> event = DailyWebhookEvent(**webhook_payload)
>>> room_name = extract_room_name(event)
"""
room = event.payload.get("room_name")
# Ensure we return a string, not any falsy value that might be in payload
return room if isinstance(room, str) else None
def parse_participant_joined(event: DailyWebhookEvent) -> ParticipantJoinedPayload:
"""
Parse participant.joined webhook event payload.
Args:
event: Webhook event with type "participant.joined"
Returns:
Parsed participant joined payload
Raises:
pydantic.ValidationError: If payload doesn't match expected schema
"""
return ParticipantJoinedPayload(**event.payload)
def parse_participant_left(event: DailyWebhookEvent) -> ParticipantLeftPayload:
"""
Parse participant.left webhook event payload.
Args:
event: Webhook event with type "participant.left"
Returns:
Parsed participant left payload
Raises:
pydantic.ValidationError: If payload doesn't match expected schema
"""
return ParticipantLeftPayload(**event.payload)
def parse_recording_started(event: DailyWebhookEvent) -> RecordingStartedPayload:
"""
Parse recording.started webhook event payload.
Args:
event: Webhook event with type "recording.started"
Returns:
Parsed recording started payload
Raises:
pydantic.ValidationError: If payload doesn't match expected schema
"""
return RecordingStartedPayload(**event.payload)
def parse_recording_ready(
event: DailyWebhookEvent,
) -> RecordingReadyToDownloadPayload:
"""
Parse recording.ready-to-download webhook event payload.
This event is sent when raw-tracks recordings are complete and uploaded to S3.
The payload includes a 'tracks' array with individual audio/video files.
Args:
event: Webhook event with type "recording.ready-to-download"
Returns:
Parsed recording ready payload with tracks array
Raises:
pydantic.ValidationError: If payload doesn't match expected schema
Example:
>>> event = DailyWebhookEvent(**webhook_payload)
>>> if event.type == "recording.ready-to-download":
... payload = parse_recording_ready(event)
... audio_tracks = [t for t in payload.tracks if t.type == "audio"]
"""
return RecordingReadyToDownloadPayload(**event.payload)
def parse_recording_error(event: DailyWebhookEvent) -> RecordingErrorPayload:
"""
Parse recording.error webhook event payload.
Args:
event: Webhook event with type "recording.error"
Returns:
Parsed recording error payload
Raises:
pydantic.ValidationError: If payload doesn't match expected schema
"""
return RecordingErrorPayload(**event.payload)
WEBHOOK_PARSERS = {
"participant.joined": parse_participant_joined,
"participant.left": parse_participant_left,
"recording.started": parse_recording_started,
"recording.ready-to-download": parse_recording_ready,
"recording.error": parse_recording_error,
}
def parse_webhook_payload(event: DailyWebhookEvent):
"""
Parse webhook event payload based on event type.
Args:
event: Webhook event
Returns:
Typed payload model based on event type, or raw dict if unknown
Example:
>>> event = DailyWebhookEvent(**webhook_payload)
>>> payload = parse_webhook_payload(event)
>>> if isinstance(payload, ParticipantJoinedPayload):
... print(f"User {payload.user_name} joined")
"""
parser = WEBHOOK_PARSERS.get(event.type)
if parser:
return parser(event)
else:
logger.warning("Unknown webhook event type", event_type=event.type)
return event.payload
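A hedged sketch of wiring these utilities into an HTTP handler; the route path, header parameter names, and secret constant are assumptions, though the header semantics follow the verify_webhook_signature docstring:
from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = "base64-encoded-secret"  # assumption: loaded from settings in practice

@app.post("/webhooks/daily")
async def daily_webhook(
    request: Request,
    x_webhook_signature: str = Header(default=""),
    x_webhook_timestamp: str = Header(default=""),
):
    body = await request.body()
    if not verify_webhook_signature(
        body, x_webhook_signature, x_webhook_timestamp, WEBHOOK_SECRET
    ):
        raise HTTPException(status_code=401, detail="Invalid signature")
    event = DailyWebhookEvent.model_validate_json(body)
    payload = parse_webhook_payload(event)  # typed model, or raw dict if unknown
    return {"ok": True}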

View File

@@ -0,0 +1,271 @@
"""
Daily.co Webhook Event Models
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
from typing import Annotated, Any, Dict, Literal, Union
from pydantic import BaseModel, Field, field_validator
from reflector.utils.string import NonEmptyString
def normalize_timestamp_to_int(v):
"""
Normalize float timestamps to int by truncating decimal part.
Daily.co sometimes sends timestamps as floats (e.g., 1708972279.96).
Pydantic expects int for fields typed as `int`.
"""
if v is None:
return v
if isinstance(v, float):
return int(v)
return v
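For instance, with the float from the docstring:
assert normalize_timestamp_to_int(1708972279.96) == 1708972279
assert normalize_timestamp_to_int(1708972279) == 1708972279
assert normalize_timestamp_to_int(None) is None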
WebhookEventType = Literal[
"participant.joined",
"participant.left",
"recording.started",
"recording.ready-to-download",
"recording.error",
]
class DailyTrack(BaseModel):
"""
Individual audio or video track from a multitrack recording.
Reference: https://docs.daily.co/reference/rest-api/recordings
"""
type: Literal["audio", "video"]
s3Key: NonEmptyString = Field(description="S3 object key for the track file")
size: int = Field(description="File size in bytes")
class DailyWebhookEvent(BaseModel):
"""
Base structure for all Daily.co webhook events.
All events share five common fields documented below.
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
version: NonEmptyString = Field(
description="Represents the version of the event. This uses semantic versioning to inform a consumer if the payload has introduced any breaking changes"
)
type: WebhookEventType = Field(
description="Represents the type of the event described in the payload"
)
id: NonEmptyString = Field(
description="An identifier representing this specific event"
)
payload: Dict[NonEmptyString, Any] = Field(
description="An object representing the event, whose fields are described in the corresponding payload class"
)
event_ts: int = Field(
description="Documenting when the webhook itself was sent. This timestamp is different than the time of the event the webhook describes. For example, a recording.started event will contain a start_ts timestamp of when the actual recording started, and a slightly later event_ts timestamp indicating when the webhook event was sent"
)
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
class ParticipantJoinedPayload(BaseModel):
"""
Payload for participant.joined webhook event.
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-joined
"""
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
session_id: NonEmptyString = Field(description="Daily.co session identifier")
user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
user_name: NonEmptyString | None = Field(None, description="User display name")
joined_at: int = Field(description="Join timestamp in Unix epoch seconds")
_normalize_joined_at = field_validator("joined_at", mode="before")(
normalize_timestamp_to_int
)
class ParticipantLeftPayload(BaseModel):
"""
Payload for participant.left webhook event.
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-left
"""
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
session_id: NonEmptyString = Field(description="Daily.co session identifier")
user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
user_name: NonEmptyString | None = Field(None, description="User display name")
joined_at: int = Field(description="Join timestamp in Unix epoch seconds")
duration: int | None = Field(
None, description="Duration of participation in seconds"
)
_normalize_joined_at = field_validator("joined_at", mode="before")(
normalize_timestamp_to_int
)
class RecordingStartedPayload(BaseModel):
"""
Payload for recording.started webhook event.
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-started
"""
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
recording_id: NonEmptyString = Field(description="Recording identifier")
start_ts: int | None = Field(None, description="Recording start timestamp")
_normalize_start_ts = field_validator("start_ts", mode="before")(
normalize_timestamp_to_int
)
class RecordingReadyToDownloadPayload(BaseModel):
"""
Payload for recording.ready-to-download webhook event.
This is sent when raw-tracks recordings are complete and uploaded to S3.
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-ready-to-download
"""
type: Literal["cloud", "raw-tracks"] = Field(
description="The type of recording that was generated"
)
recording_id: NonEmptyString = Field(
description="An ID identifying the recording that was generated"
)
room_name: NonEmptyString = Field(
description="The name of the room where the recording was made"
)
start_ts: int = Field(
description="The Unix epoch time in seconds representing when the recording started"
)
status: Literal["finished"] = Field(
description="The status of the given recording (always 'finished' in ready-to-download webhook, see RecordingStatus in responses.py for full API statuses)"
)
max_participants: int = Field(
description="The number of participants on the call that were recorded"
)
duration: int = Field(description="The duration in seconds of the call")
s3_key: NonEmptyString = Field(
description="The location of the recording in the provided S3 bucket"
)
share_token: NonEmptyString | None = Field(
None, description="undocumented documented secret field"
)
tracks: list[DailyTrack] | None = Field(
None,
description="If the recording is a raw-tracks recording, a tracks field will be provided. If role permissions have been removed, the tracks field may be null",
)
_normalize_start_ts = field_validator("start_ts", mode="before")(
normalize_timestamp_to_int
)
class RecordingErrorPayload(BaseModel):
"""
Payload for recording.error webhook event.
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-error
"""
action: Literal["clourd-recording-err", "cloud-recording-error"] = Field(
description="A string describing the event that was emitted (both variants are documented)"
)
error_msg: NonEmptyString = Field(description="The error message returned")
instance_id: NonEmptyString = Field(
description="The recording instance ID that was passed into the start recording command"
)
room_name: NonEmptyString = Field(
description="The name of the room where the recording was made"
)
timestamp: int = Field(
description="The Unix epoch time in seconds representing when the error was emitted"
)
_normalize_timestamp = field_validator("timestamp", mode="before")(
normalize_timestamp_to_int
)
class ParticipantJoinedEvent(BaseModel):
version: NonEmptyString
type: Literal["participant.joined"]
id: NonEmptyString
payload: ParticipantJoinedPayload
event_ts: int
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
class ParticipantLeftEvent(BaseModel):
version: NonEmptyString
type: Literal["participant.left"]
id: NonEmptyString
payload: ParticipantLeftPayload
event_ts: int
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
class RecordingStartedEvent(BaseModel):
version: NonEmptyString
type: Literal["recording.started"]
id: NonEmptyString
payload: RecordingStartedPayload
event_ts: int
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
class RecordingReadyEvent(BaseModel):
version: NonEmptyString
type: Literal["recording.ready-to-download"]
id: NonEmptyString
payload: RecordingReadyToDownloadPayload
event_ts: int
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
class RecordingErrorEvent(BaseModel):
version: NonEmptyString
type: Literal["recording.error"]
id: NonEmptyString
payload: RecordingErrorPayload
event_ts: int
_normalize_event_ts = field_validator("event_ts", mode="before")(
normalize_timestamp_to_int
)
DailyWebhookEventUnion = Annotated[
Union[
ParticipantJoinedEvent,
ParticipantLeftEvent,
RecordingStartedEvent,
RecordingReadyEvent,
RecordingErrorEvent,
],
Field(discriminator="type"),
]
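A minimal sketch of validating an incoming event against this discriminated union; TypeAdapter dispatches on the type field, so isinstance checks narrow cleanly:
from pydantic import TypeAdapter

event_adapter = TypeAdapter(DailyWebhookEventUnion)

def describe(raw: dict) -> str:
    event = event_adapter.validate_python(raw)
    if isinstance(event, RecordingReadyEvent):
        tracks = event.payload.tracks or []
        return f"{len(tracks)} track(s) ready in {event.payload.room_name}"
    return event.type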

View File

@@ -24,10 +24,14 @@ def get_database() -> databases.Database:
# import models
import reflector.db.calendar_events # noqa
import reflector.db.daily_participant_sessions # noqa
import reflector.db.meetings # noqa
import reflector.db.recordings # noqa
import reflector.db.rooms # noqa
import reflector.db.transcripts # noqa
import reflector.db.user_api_keys # noqa
import reflector.db.users # noqa
kwargs = {}
if "postgres" not in settings.DATABASE_URL:

View File

@@ -0,0 +1,187 @@
from datetime import datetime, timedelta, timezone
from typing import Any
import sqlalchemy as sa
from pydantic import BaseModel, Field
from sqlalchemy.dialects.postgresql import JSONB
from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
calendar_events = sa.Table(
"calendar_event",
metadata,
sa.Column("id", sa.String, primary_key=True),
sa.Column(
"room_id",
sa.String,
sa.ForeignKey("room.id", ondelete="CASCADE", name="fk_calendar_event_room_id"),
nullable=False,
),
sa.Column("ics_uid", sa.Text, nullable=False),
sa.Column("title", sa.Text),
sa.Column("description", sa.Text),
sa.Column("start_time", sa.DateTime(timezone=True), nullable=False),
sa.Column("end_time", sa.DateTime(timezone=True), nullable=False),
sa.Column("attendees", JSONB),
sa.Column("location", sa.Text),
sa.Column("ics_raw_data", sa.Text),
sa.Column("last_synced", sa.DateTime(timezone=True), nullable=False),
sa.Column("is_deleted", sa.Boolean, nullable=False, server_default=sa.false()),
sa.Column("created_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("updated_at", sa.DateTime(timezone=True), nullable=False),
sa.UniqueConstraint("room_id", "ics_uid", name="uq_room_calendar_event"),
sa.Index("idx_calendar_event_room_start", "room_id", "start_time"),
sa.Index(
"idx_calendar_event_deleted",
"is_deleted",
postgresql_where=sa.text("NOT is_deleted"),
),
)
class CalendarEvent(BaseModel):
id: str = Field(default_factory=generate_uuid4)
room_id: str
ics_uid: str
title: str | None = None
description: str | None = None
start_time: datetime
end_time: datetime
attendees: list[dict[str, Any]] | None = None
location: str | None = None
ics_raw_data: str | None = None
last_synced: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
is_deleted: bool = False
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
class CalendarEventController:
async def get_by_room(
self,
room_id: str,
include_deleted: bool = False,
start_after: datetime | None = None,
end_before: datetime | None = None,
) -> list[CalendarEvent]:
query = calendar_events.select().where(calendar_events.c.room_id == room_id)
if not include_deleted:
query = query.where(calendar_events.c.is_deleted == False)
if start_after:
query = query.where(calendar_events.c.start_time >= start_after)
if end_before:
query = query.where(calendar_events.c.end_time <= end_before)
query = query.order_by(calendar_events.c.start_time.asc())
results = await get_database().fetch_all(query)
return [CalendarEvent(**result) for result in results]
async def get_upcoming(
self, room_id: str, minutes_ahead: int = 120
) -> list[CalendarEvent]:
"""Get upcoming events for a room within the specified minutes, including currently happening events."""
now = datetime.now(timezone.utc)
future_time = now + timedelta(minutes=minutes_ahead)
query = (
calendar_events.select()
.where(
sa.and_(
calendar_events.c.room_id == room_id,
calendar_events.c.is_deleted == False,
calendar_events.c.start_time <= future_time,
calendar_events.c.end_time >= now,
)
)
.order_by(calendar_events.c.start_time.asc())
)
results = await get_database().fetch_all(query)
return [CalendarEvent(**result) for result in results]
async def get_by_id(self, event_id: str) -> CalendarEvent | None:
query = calendar_events.select().where(calendar_events.c.id == event_id)
result = await get_database().fetch_one(query)
return CalendarEvent(**result) if result else None
async def get_by_ics_uid(self, room_id: str, ics_uid: str) -> CalendarEvent | None:
query = calendar_events.select().where(
sa.and_(
calendar_events.c.room_id == room_id,
calendar_events.c.ics_uid == ics_uid,
)
)
result = await get_database().fetch_one(query)
return CalendarEvent(**result) if result else None
async def upsert(self, event: CalendarEvent) -> CalendarEvent:
existing = await self.get_by_ics_uid(event.room_id, event.ics_uid)
if existing:
event.id = existing.id
event.created_at = existing.created_at
event.updated_at = datetime.now(timezone.utc)
query = (
calendar_events.update()
.where(calendar_events.c.id == existing.id)
.values(**event.model_dump())
)
else:
query = calendar_events.insert().values(**event.model_dump())
await get_database().execute(query)
return event
async def soft_delete_missing(
self, room_id: str, current_ics_uids: list[str]
) -> int:
"""Soft delete future events that are no longer in the calendar."""
now = datetime.now(timezone.utc)
select_query = calendar_events.select().where(
sa.and_(
calendar_events.c.room_id == room_id,
calendar_events.c.start_time > now,
calendar_events.c.is_deleted == False,
calendar_events.c.ics_uid.notin_(current_ics_uids)
if current_ics_uids
else True,
)
)
to_delete = await get_database().fetch_all(select_query)
delete_count = len(to_delete)
if delete_count > 0:
update_query = (
calendar_events.update()
.where(
sa.and_(
calendar_events.c.room_id == room_id,
calendar_events.c.start_time > now,
calendar_events.c.is_deleted == False,
calendar_events.c.ics_uid.notin_(current_ics_uids)
if current_ics_uids
else True,
)
)
.values(is_deleted=True, updated_at=now)
)
await get_database().execute(update_query)
return delete_count
async def delete_by_room(self, room_id: str) -> int:
query = calendar_events.delete().where(calendar_events.c.room_id == room_id)
result = await get_database().execute(query)
return result.rowcount
calendar_events_controller = CalendarEventController()
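A sketch of a sync pass built on this controller; parsed_events are assumed to come from an ICS parser upstream:
async def sync_room_calendar(room_id: str, parsed_events: list[CalendarEvent]) -> int:
    # Upsert everything currently in the feed, then soft-delete
    # future events that have disappeared from it.
    for event in parsed_events:
        await calendar_events_controller.upsert(event)
    current_uids = [e.ics_uid for e in parsed_events]
    return await calendar_events_controller.soft_delete_missing(room_id, current_uids)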

View File

@@ -0,0 +1,229 @@
"""Daily.co participant session tracking.
Stores webhook data for participant.joined and participant.left events to provide
historical session information (Daily.co API only returns current participants).
"""
from datetime import datetime
import sqlalchemy as sa
from pydantic import BaseModel
from sqlalchemy.dialects.postgresql import insert
from reflector.db import get_database, metadata
from reflector.utils.string import NonEmptyString
daily_participant_sessions = sa.Table(
"daily_participant_session",
metadata,
sa.Column("id", sa.String, primary_key=True),
sa.Column(
"meeting_id",
sa.String,
sa.ForeignKey("meeting.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column(
"room_id",
sa.String,
sa.ForeignKey("room.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("session_id", sa.String, nullable=False),
sa.Column("user_id", sa.String, nullable=True),
sa.Column("user_name", sa.String, nullable=False),
sa.Column("joined_at", sa.DateTime(timezone=True), nullable=False),
sa.Column("left_at", sa.DateTime(timezone=True), nullable=True),
sa.Index("idx_daily_session_meeting_left", "meeting_id", "left_at"),
sa.Index("idx_daily_session_room", "room_id"),
)
class DailyParticipantSession(BaseModel):
"""Daily.co participant session record.
Tracks when a participant joined and left a meeting. Populated from webhooks:
- participant.joined: Creates record with left_at=None
- participant.left: Updates record with left_at
ID format: {meeting_id}:{user_id}:{joined_at_ms}
- Ensures idempotency (duplicate webhooks don't create duplicates)
- Allows same user to rejoin (different joined_at = different session)
Duration is calculated as: left_at - joined_at (not stored)
"""
id: NonEmptyString
meeting_id: NonEmptyString
room_id: NonEmptyString
session_id: NonEmptyString # Daily.co's session_id (identifies room session)
user_id: NonEmptyString | None = None
user_name: str
joined_at: datetime
left_at: datetime | None = None
class DailyParticipantSessionController:
"""Controller for Daily.co participant session persistence."""
async def get_by_id(self, id: str) -> DailyParticipantSession | None:
"""Get a session by its ID."""
query = daily_participant_sessions.select().where(
daily_participant_sessions.c.id == id
)
result = await get_database().fetch_one(query)
return DailyParticipantSession(**result) if result else None
async def get_open_session(
self, meeting_id: NonEmptyString, session_id: NonEmptyString
) -> DailyParticipantSession | None:
"""Get the open (not left) session for a user in a meeting."""
query = daily_participant_sessions.select().where(
sa.and_(
daily_participant_sessions.c.meeting_id == meeting_id,
daily_participant_sessions.c.session_id == session_id,
daily_participant_sessions.c.left_at.is_(None),
)
)
results = await get_database().fetch_all(query)
if len(results) > 1:
raise ValueError(
f"Multiple open sessions for daily session {session_id} in meeting {meeting_id}: "
f"found {len(results)} sessions"
)
return DailyParticipantSession(**results[0]) if results else None
async def upsert_joined(self, session: DailyParticipantSession) -> None:
"""Insert or update when participant.joined webhook arrives.
Idempotent: Duplicate webhooks with same ID are safely ignored.
Out-of-order: If left webhook arrived first, preserves left_at.
"""
query = insert(daily_participant_sessions).values(**session.model_dump())
query = query.on_conflict_do_update(
index_elements=["id"],
set_={"user_name": session.user_name},
)
await get_database().execute(query)
async def upsert_left(self, session: DailyParticipantSession) -> None:
"""Update session when participant.left webhook arrives.
Finds the open session for this user in this meeting and updates left_at.
Works around Daily.co webhook timestamp inconsistency (joined_at differs by ~4ms between webhooks).
Handles three cases:
1. Normal flow: open session exists → updates left_at
2. Out-of-order: left arrives first → creates new record with left data
3. Duplicate: left arrives again → idempotent (DB trigger prevents left_at modification)
"""
if session.left_at is None:
raise ValueError("left_at is required for upsert_left")
if session.left_at <= session.joined_at:
raise ValueError(
f"left_at ({session.left_at}) must be after joined_at ({session.joined_at})"
)
# Find existing open session (works around timestamp mismatch in webhooks)
existing = await self.get_open_session(session.meeting_id, session.session_id)
if existing:
# Update existing open session
query = (
daily_participant_sessions.update()
.where(daily_participant_sessions.c.id == existing.id)
.values(left_at=session.left_at)
)
await get_database().execute(query)
else:
# Out-of-order or first webhook: insert new record
query = insert(daily_participant_sessions).values(**session.model_dump())
query = query.on_conflict_do_nothing(index_elements=["id"])
await get_database().execute(query)
async def get_by_meeting(self, meeting_id: str) -> list[DailyParticipantSession]:
"""Get all participant sessions for a meeting (active and ended)."""
query = daily_participant_sessions.select().where(
daily_participant_sessions.c.meeting_id == meeting_id
)
results = await get_database().fetch_all(query)
return [DailyParticipantSession(**result) for result in results]
async def get_active_by_meeting(
self, meeting_id: str
) -> list[DailyParticipantSession]:
"""Get only active (not left) participant sessions for a meeting."""
query = daily_participant_sessions.select().where(
sa.and_(
daily_participant_sessions.c.meeting_id == meeting_id,
daily_participant_sessions.c.left_at.is_(None),
)
)
results = await get_database().fetch_all(query)
return [DailyParticipantSession(**result) for result in results]
async def get_all_sessions_for_meeting(
self, meeting_id: NonEmptyString
) -> dict[NonEmptyString, DailyParticipantSession]:
query = daily_participant_sessions.select().where(
daily_participant_sessions.c.meeting_id == meeting_id
)
results = await get_database().fetch_all(query)
# TODO DailySessionId custom type
return {row["session_id"]: DailyParticipantSession(**row) for row in results}
async def batch_upsert_sessions(
self, sessions: list[DailyParticipantSession]
) -> None:
"""Upsert multiple sessions in single query.
Uses ON CONFLICT for idempotency. Updates user_name on conflict since participants may change it mid-meeting.
"""
if not sessions:
return
values = [session.model_dump() for session in sessions]
query = insert(daily_participant_sessions).values(values)
query = query.on_conflict_do_update(
index_elements=["id"],
set_={
# Preserve existing left_at to prevent race conditions
"left_at": sa.func.coalesce(
daily_participant_sessions.c.left_at,
query.excluded.left_at,
),
"user_name": query.excluded.user_name,
},
)
await get_database().execute(query)
async def batch_close_sessions(
self, session_ids: list[NonEmptyString], left_at: datetime
) -> None:
"""Mark multiple sessions as left in single query.
Only updates sessions where left_at is NULL (protects already-closed sessions).
A left_at mismatch on an already-closed session is ignored; assumed unimportant if it ever happens.
"""
if not session_ids:
return
query = (
daily_participant_sessions.update()
.where(
sa.and_(
daily_participant_sessions.c.id.in_(session_ids),
daily_participant_sessions.c.left_at.is_(None),
)
)
.values(left_at=left_at)
)
await get_database().execute(query)
daily_participant_sessions_controller = DailyParticipantSessionController()
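A hedged sketch of turning a participant.joined webhook into a session record; the fallback display name is an assumption, while the ID format follows the model docstring:
from datetime import datetime, timezone

async def on_participant_joined(meeting_id: str, room_id: str, payload) -> None:
    joined_at = datetime.fromtimestamp(payload.joined_at, tz=timezone.utc)
    joined_ms = int(joined_at.timestamp() * 1000)
    session = DailyParticipantSession(
        # ID format from the docstring: {meeting_id}:{user_id}:{joined_at_ms}
        id=f"{meeting_id}:{payload.user_id}:{joined_ms}",
        meeting_id=meeting_id,
        room_id=room_id,
        session_id=payload.session_id,
        user_id=payload.user_id,
        user_name=payload.user_name or "Guest",  # column is NOT NULL
        joined_at=joined_at,
    )
    await daily_participant_sessions_controller.upsert_joined(session)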

View File

@@ -1,13 +1,15 @@
from datetime import datetime
from typing import Literal
from typing import Any, Literal
import sqlalchemy as sa
from fastapi import HTTPException
from pydantic import BaseModel, Field
from sqlalchemy.dialects.postgresql import JSONB
from reflector.db import get_database, metadata
from reflector.db.rooms import Room
from reflector.schemas.platform import WHEREBY_PLATFORM, Platform
from reflector.utils import generate_uuid4
from reflector.utils.string import assert_equal
meetings = sa.Table(
"meeting",
@@ -18,8 +20,12 @@ meetings = sa.Table(
sa.Column("host_room_url", sa.String),
sa.Column("start_date", sa.DateTime(timezone=True)),
sa.Column("end_date", sa.DateTime(timezone=True)),
sa.Column("user_id", sa.String),
sa.Column("room_id", sa.String),
sa.Column(
"room_id",
sa.String,
sa.ForeignKey("room.id", ondelete="CASCADE"),
nullable=True,
),
sa.Column("is_locked", sa.Boolean, nullable=False, server_default=sa.false()),
sa.Column("room_mode", sa.String, nullable=False, server_default="normal"),
sa.Column("recording_type", sa.String, nullable=False, server_default="cloud"),
@@ -41,20 +47,36 @@ meetings = sa.Table(
nullable=False,
server_default=sa.true(),
),
sa.Index("idx_meeting_room_id", "room_id"),
sa.Index(
"idx_one_active_meeting_per_room",
"room_id",
unique=True,
postgresql_where=sa.text("is_active = true"),
sa.Column(
"calendar_event_id",
sa.String,
sa.ForeignKey(
"calendar_event.id",
ondelete="SET NULL",
name="fk_meeting_calendar_event_id",
),
),
sa.Column("calendar_metadata", JSONB),
sa.Column(
"platform",
sa.String,
nullable=False,
server_default=assert_equal(WHEREBY_PLATFORM, "whereby"),
),
sa.Index("idx_meeting_room_id", "room_id"),
sa.Index("idx_meeting_calendar_event", "calendar_event_id"),
)
meeting_consent = sa.Table(
"meeting_consent",
metadata,
sa.Column("id", sa.String, primary_key=True),
sa.Column("meeting_id", sa.String, sa.ForeignKey("meeting.id"), nullable=False),
sa.Column(
"meeting_id",
sa.String,
sa.ForeignKey("meeting.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("user_id", sa.String),
sa.Column("consent_given", sa.Boolean, nullable=False),
sa.Column("consent_timestamp", sa.DateTime(timezone=True), nullable=False),
@@ -76,15 +98,18 @@ class Meeting(BaseModel):
host_room_url: str
start_date: datetime
end_date: datetime
user_id: str | None = None
room_id: str | None = None
room_id: str | None
is_locked: bool = False
room_mode: Literal["normal", "group"] = "normal"
recording_type: Literal["none", "local", "cloud"] = "cloud"
recording_trigger: Literal[
recording_trigger: Literal[ # whereby-specific
"none", "prompt", "automatic", "automatic-2nd-participant"
] = "automatic-2nd-participant"
num_clients: int = 0
is_active: bool = True
calendar_event_id: str | None = None
calendar_metadata: dict[str, Any] | None = None
platform: Platform = WHEREBY_PLATFORM
class MeetingController:
@@ -96,12 +121,10 @@ class MeetingController:
host_room_url: str,
start_date: datetime,
end_date: datetime,
user_id: str,
room: Room,
calendar_event_id: str | None = None,
calendar_metadata: dict[str, Any] | None = None,
):
"""
Create a new meeting
"""
meeting = Meeting(
id=id,
room_name=room_name,
@@ -109,41 +132,49 @@ class MeetingController:
host_room_url=host_room_url,
start_date=start_date,
end_date=end_date,
user_id=user_id,
room_id=room.id,
is_locked=room.is_locked,
room_mode=room.room_mode,
recording_type=room.recording_type,
recording_trigger=room.recording_trigger,
calendar_event_id=calendar_event_id,
calendar_metadata=calendar_metadata,
platform=room.platform,
)
query = meetings.insert().values(**meeting.model_dump())
await get_database().execute(query)
return meeting
async def get_all_active(self) -> list[Meeting]:
"""
Get active meetings.
"""
query = meetings.select().where(meetings.c.is_active)
return await get_database().fetch_all(query)
async def get_all_active(self, platform: str | None = None) -> list[Meeting]:
conditions = [meetings.c.is_active]
if platform is not None:
conditions.append(meetings.c.platform == platform)
query = meetings.select().where(sa.and_(*conditions))
results = await get_database().fetch_all(query)
return [Meeting(**result) for result in results]
async def get_by_room_name(
self,
room_name: str,
) -> Meeting:
) -> Meeting | None:
"""
Get a meeting by room name.
For backward compatibility, returns the most recent meeting.
"""
query = meetings.select().where(meetings.c.room_name == room_name)
query = (
meetings.select()
.where(meetings.c.room_name == room_name)
.order_by(meetings.c.end_date.desc())
)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_active(self, room: Room, current_time: datetime) -> Meeting:
async def get_active(self, room: Room, current_time: datetime) -> Meeting | None:
"""
Get latest active meeting for a room.
For backward compatibility, returns the most recent active meeting.
"""
end_date = getattr(meetings.c, "end_date")
query = (
@@ -160,40 +191,97 @@ class MeetingController:
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_by_id(self, meeting_id: str, **kwargs) -> Meeting | None:
async def get_all_active_for_room(
self, room: Room, current_time: datetime
) -> list[Meeting]:
end_date = getattr(meetings.c, "end_date")
query = (
meetings.select()
.where(
sa.and_(
meetings.c.room_id == room.id,
meetings.c.end_date > current_time,
meetings.c.is_active,
)
)
.order_by(end_date.desc())
)
results = await get_database().fetch_all(query)
return [Meeting(**result) for result in results]
async def get_active_by_calendar_event(
self, room: Room, calendar_event_id: str, current_time: datetime
) -> Meeting | None:
"""
Get a meeting by id
Get active meeting for a specific calendar event.
"""
query = meetings.select().where(meetings.c.id == meeting_id)
query = meetings.select().where(
sa.and_(
meetings.c.room_id == room.id,
meetings.c.calendar_event_id == calendar_event_id,
meetings.c.end_date > current_time,
meetings.c.is_active,
)
)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_by_id_for_http(self, meeting_id: str, user_id: str | None) -> Meeting:
"""
Get a meeting by ID for HTTP request.
If not found, it will raise a 404 error.
"""
async def get_by_id(
self, meeting_id: str, room: Room | None = None
) -> Meeting | None:
query = meetings.select().where(meetings.c.id == meeting_id)
if room:
query = query.where(meetings.c.room_id == room.id)
result = await get_database().fetch_one(query)
if not result:
raise HTTPException(status_code=404, detail="Meeting not found")
return None
return Meeting(**result)
meeting = Meeting(**result)
if result["user_id"] != user_id:
meeting.host_room_url = ""
return meeting
async def get_by_calendar_event(
self, calendar_event_id: str, room: Room
) -> Meeting | None:
query = meetings.select().where(
meetings.c.calendar_event_id == calendar_event_id
)
if room:
query = query.where(meetings.c.room_id == room.id)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def update_meeting(self, meeting_id: str, **kwargs):
query = meetings.update().where(meetings.c.id == meeting_id).values(**kwargs)
await get_database().execute(query)
async def increment_num_clients(self, meeting_id: str) -> None:
"""Atomically increment participant count."""
query = (
meetings.update()
.where(meetings.c.id == meeting_id)
.values(num_clients=meetings.c.num_clients + 1)
)
await get_database().execute(query)
async def decrement_num_clients(self, meeting_id: str) -> None:
"""Atomically decrement participant count (min 0)."""
query = (
meetings.update()
.where(meetings.c.id == meeting_id)
.values(
num_clients=sa.case(
(meetings.c.num_clients > 0, meetings.c.num_clients - 1), else_=0
)
)
)
await get_database().execute(query)
class MeetingConsentController:
async def get_by_meeting_id(self, meeting_id: str) -> list[MeetingConsent]:
@@ -214,10 +302,9 @@ class MeetingConsentController:
result = await get_database().fetch_one(query)
if result is None:
return None
return MeetingConsent(**result) if result else None
return MeetingConsent(**result)
async def upsert(self, consent: MeetingConsent) -> MeetingConsent:
"""Create new consent or update existing one for authenticated users"""
if consent.user_id:
# For authenticated users, check if consent already exists
# not transactional, but acceptable: consents are never deleted anyway

View File

@@ -21,6 +21,7 @@ recordings = sa.Table(
server_default="pending",
),
sa.Column("meeting_id", sa.String),
sa.Column("track_keys", sa.JSON, nullable=True),
sa.Index("idx_recording_meeting_id", "meeting_id"),
)
@@ -28,10 +29,20 @@ recordings = sa.Table(
class Recording(BaseModel):
id: str = Field(default_factory=generate_uuid4)
bucket_name: str
# for single-track
object_key: str
recorded_at: datetime
status: Literal["pending", "processing", "completed", "failed"] = "pending"
meeting_id: str | None = None
# for multitrack reprocessing
# track_keys can be empty list [] if recording finished but no audio was captured (silence/muted)
# None means not a multitrack recording, [] means multitrack with no tracks
track_keys: list[str] | None = None
@property
def is_multitrack(self) -> bool:
"""True if recording has separate audio tracks (1+ tracks counts as multitrack)."""
return self.track_keys is not None and len(self.track_keys) > 0
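A short illustration of the track_keys semantics described in the comments above; the values are made up:
from datetime import datetime, timezone

now = datetime.now(timezone.utc)
legacy = Recording(bucket_name="b", object_key="k", recorded_at=now)
silent = Recording(bucket_name="b", object_key="k", recorded_at=now, track_keys=[])
multi = Recording(
    bucket_name="b", object_key="k", recorded_at=now,
    track_keys=["tracks/a.webm", "tracks/b.webm"],
)
assert not legacy.is_multitrack  # None: not a multitrack recording
assert not silent.is_multitrack  # []: multitrack run that captured no audio
assert multi.is_multitrack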
class RecordingController:
@@ -40,12 +51,14 @@ class RecordingController:
await get_database().execute(query)
return recording
async def get_by_id(self, id: str) -> Recording:
async def get_by_id(self, id: str) -> Recording | None:
query = recordings.select().where(recordings.c.id == id)
result = await get_database().fetch_one(query)
return Recording(**result) if result else None
async def get_by_object_key(self, bucket_name: str, object_key: str) -> Recording:
async def get_by_object_key(
self, bucket_name: str, object_key: str
) -> Recording | None:
query = recordings.select().where(
recordings.c.bucket_name == bucket_name,
recordings.c.object_key == object_key,
@@ -57,5 +70,14 @@ class RecordingController:
query = recordings.delete().where(recordings.c.id == id)
await get_database().execute(query)
# no check for existence
async def get_by_ids(self, recording_ids: list[str]) -> list[Recording]:
if not recording_ids:
return []
query = recordings.select().where(recordings.c.id.in_(recording_ids))
results = await get_database().fetch_all(query)
return [Recording(**row) for row in results]
recordings_controller = RecordingController()

View File

@@ -1,3 +1,4 @@
import secrets
from datetime import datetime, timezone
from sqlite3 import IntegrityError
from typing import Literal
@@ -8,6 +9,8 @@ from pydantic import BaseModel, Field
from sqlalchemy.sql import false, or_
from reflector.db import get_database, metadata
from reflector.schemas.platform import Platform
from reflector.settings import settings
from reflector.utils import generate_uuid4
rooms = sqlalchemy.Table(
@@ -40,7 +43,22 @@ rooms = sqlalchemy.Table(
sqlalchemy.Column(
"is_shared", sqlalchemy.Boolean, nullable=False, server_default=false()
),
sqlalchemy.Column("webhook_url", sqlalchemy.String, nullable=True),
sqlalchemy.Column("webhook_secret", sqlalchemy.String, nullable=True),
sqlalchemy.Column("ics_url", sqlalchemy.Text),
sqlalchemy.Column("ics_fetch_interval", sqlalchemy.Integer, server_default="300"),
sqlalchemy.Column(
"ics_enabled", sqlalchemy.Boolean, nullable=False, server_default=false()
),
sqlalchemy.Column("ics_last_sync", sqlalchemy.DateTime(timezone=True)),
sqlalchemy.Column("ics_last_etag", sqlalchemy.Text),
sqlalchemy.Column(
"platform",
sqlalchemy.String,
nullable=False,
),
sqlalchemy.Index("idx_room_is_shared", "is_shared"),
sqlalchemy.Index("idx_room_ics_enabled", "ics_enabled"),
)
@@ -55,10 +73,18 @@ class Room(BaseModel):
is_locked: bool = False
room_mode: Literal["normal", "group"] = "normal"
recording_type: Literal["none", "local", "cloud"] = "cloud"
recording_trigger: Literal[
recording_trigger: Literal[ # whereby-specific
"none", "prompt", "automatic", "automatic-2nd-participant"
] = "automatic-2nd-participant"
is_shared: bool = False
webhook_url: str | None = None
webhook_secret: str | None = None
ics_url: str | None = None
ics_fetch_interval: int = 300
ics_enabled: bool = False
ics_last_sync: datetime | None = None
ics_last_etag: str | None = None
platform: Platform = Field(default_factory=lambda: settings.DEFAULT_VIDEO_PLATFORM)
class RoomController:
@@ -107,22 +133,39 @@ class RoomController:
recording_type: str,
recording_trigger: str,
is_shared: bool,
webhook_url: str = "",
webhook_secret: str = "",
ics_url: str | None = None,
ics_fetch_interval: int = 300,
ics_enabled: bool = False,
platform: Platform = settings.DEFAULT_VIDEO_PLATFORM,
):
"""
Add a new room
"""
room = Room(
name=name,
user_id=user_id,
zulip_auto_post=zulip_auto_post,
zulip_stream=zulip_stream,
zulip_topic=zulip_topic,
is_locked=is_locked,
room_mode=room_mode,
recording_type=recording_type,
recording_trigger=recording_trigger,
is_shared=is_shared,
)
if webhook_url and not webhook_secret:
webhook_secret = secrets.token_urlsafe(32)
room_data = {
"name": name,
"user_id": user_id,
"zulip_auto_post": zulip_auto_post,
"zulip_stream": zulip_stream,
"zulip_topic": zulip_topic,
"is_locked": is_locked,
"room_mode": room_mode,
"recording_type": recording_type,
"recording_trigger": recording_trigger,
"is_shared": is_shared,
"webhook_url": webhook_url,
"webhook_secret": webhook_secret,
"ics_url": ics_url,
"ics_fetch_interval": ics_fetch_interval,
"ics_enabled": ics_enabled,
"platform": platform,
}
room = Room(**room_data)
query = rooms.insert().values(**room.model_dump())
try:
await get_database().execute(query)
@@ -134,6 +177,9 @@ class RoomController:
"""
Update a room fields with key/values in values
"""
if values.get("webhook_url") and not values.get("webhook_secret"):
values["webhook_secret"] = secrets.token_urlsafe(32)
query = rooms.update().where(rooms.c.id == room.id).values(**values)
try:
await get_database().execute(query)
@@ -183,6 +229,13 @@ class RoomController:
return room
async def get_ics_enabled(self) -> list[Room]:
query = rooms.select().where(
rooms.c.ics_enabled == True, rooms.c.ics_url != None
)
results = await get_database().fetch_all(query)
return [Room(**result) for result in results]
async def remove_by_id(
self,
room_id: str,

View File

@@ -8,12 +8,14 @@ from typing import Annotated, Any, Dict, Iterator
import sqlalchemy
import webvtt
from databases.interfaces import Record as DbRecord
from fastapi import HTTPException
from pydantic import (
BaseModel,
Field,
NonNegativeFloat,
NonNegativeInt,
TypeAdapter,
ValidationError,
constr,
field_serializer,
@@ -21,9 +23,10 @@ from pydantic import (
from reflector.db import get_database
from reflector.db.rooms import rooms
from reflector.db.transcripts import SourceKind, transcripts
from reflector.db.transcripts import SourceKind, TranscriptStatus, transcripts
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.utils.string import NonEmptyString, try_parse_non_empty_string
DEFAULT_SEARCH_LIMIT = 20
SNIPPET_CONTEXT_LENGTH = 50 # Characters before/after match to include
@@ -31,12 +34,13 @@ DEFAULT_SNIPPET_MAX_LENGTH = NonNegativeInt(150)
DEFAULT_MAX_SNIPPETS = NonNegativeInt(3)
LONG_SUMMARY_MAX_SNIPPETS = 2
SearchQueryBase = constr(min_length=0, strip_whitespace=True)
SearchQueryBase = constr(min_length=1, strip_whitespace=True)
SearchLimitBase = Annotated[int, Field(ge=1, le=100)]
SearchOffsetBase = Annotated[int, Field(ge=0)]
SearchTotalBase = Annotated[int, Field(ge=0)]
SearchQuery = Annotated[SearchQueryBase, Field(description="Search query text")]
search_query_adapter = TypeAdapter(SearchQuery)
SearchLimit = Annotated[SearchLimitBase, Field(description="Results per page")]
SearchOffset = Annotated[
SearchOffsetBase, Field(description="Number of results to skip")
@@ -88,7 +92,7 @@ class WebVTTProcessor:
@staticmethod
def generate_snippets(
webvtt_content: WebVTTContent,
query: str,
query: SearchQuery,
max_snippets: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from WebVTT content."""
@@ -125,12 +129,14 @@ class SnippetCandidate:
class SearchParameters(BaseModel):
"""Validated search parameters for full-text search."""
query_text: SearchQuery
query_text: SearchQuery | None = None
limit: SearchLimit = DEFAULT_SEARCH_LIMIT
offset: SearchOffset = 0
user_id: str | None = None
room_id: str | None = None
source_kind: SourceKind | None = None
from_datetime: datetime | None = None
to_datetime: datetime | None = None
class SearchResultDB(BaseModel):
@@ -157,7 +163,7 @@ class SearchResult(BaseModel):
room_name: str | None = None
source_kind: SourceKind
created_at: datetime
status: str = Field(..., min_length=1)
status: TranscriptStatus = Field(...)
rank: float = Field(..., ge=0, le=1)
duration: NonNegativeFloat | None = Field(..., description="Duration in seconds")
search_snippets: list[str] = Field(
@@ -199,15 +205,13 @@ class SnippetGenerator:
prev_start = start
@staticmethod
def count_matches(text: str, query: str) -> NonNegativeInt:
def count_matches(text: str, query: SearchQuery) -> NonNegativeInt:
"""Count total number of matches for a query in text."""
ZERO = NonNegativeInt(0)
if not text:
logger.warning("Empty text for search query in count_matches")
return ZERO
if not query:
logger.warning("Empty query for search text in count_matches")
return ZERO
assert query is not None
return NonNegativeInt(
sum(1 for _ in SnippetGenerator.find_all_matches(text, query))
)
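# Illustrative (assuming find_all_matches yields each occurrence of `query`
# in `text`): count_matches("alpha beta alpha", "alpha") -> 2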
@@ -243,13 +247,14 @@ class SnippetGenerator:
@staticmethod
def generate(
text: str,
query: str,
query: SearchQuery,
max_length: NonNegativeInt = DEFAULT_SNIPPET_MAX_LENGTH,
max_snippets: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from text."""
if not text or not query:
logger.warning("Empty text or query for generate_snippets")
assert query is not None
if not text:
logger.warning("Empty text for generate_snippets")
return []
candidates = (
@@ -270,7 +275,7 @@ class SnippetGenerator:
@staticmethod
def from_summary(
summary: str,
query: str,
query: SearchQuery,
max_snippets: NonNegativeInt = LONG_SUMMARY_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from summary text."""
@@ -278,9 +283,9 @@ class SnippetGenerator:
@staticmethod
def combine_sources(
summary: str | None,
summary: NonEmptyString | None,
webvtt: WebVTTContent | None,
query: str,
query: SearchQuery,
max_total: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> tuple[list[str], NonNegativeInt]:
"""Combine snippets from multiple sources and return total match count.
@@ -289,6 +294,11 @@ class SnippetGenerator:
snippets can legitimately be empty, e.g. when only the title matches
"""
assert (
summary is not None or webvtt is not None
), "At least one source must be present"
webvtt_matches = 0
summary_matches = 0
@@ -355,8 +365,8 @@ class SearchController:
else_=rooms.c.name,
).label("room_name"),
]
if params.query_text:
search_query = None
if params.query_text is not None:
search_query = sqlalchemy.func.websearch_to_tsquery(
"english", params.query_text
)
@@ -373,7 +383,9 @@ class SearchController:
transcripts.join(rooms, transcripts.c.room_id == rooms.c.id, isouter=True)
)
if params.query_text:
if params.query_text is not None:
# search_query was initialized above based on params.query_text presence
assert search_query is not None
base_query = base_query.where(
transcripts.c.search_vector_en.op("@@")(search_query)
)
@@ -392,8 +404,16 @@ class SearchController:
base_query = base_query.where(
transcripts.c.source_kind == params.source_kind
)
if params.from_datetime:
base_query = base_query.where(
transcripts.c.created_at >= params.from_datetime
)
if params.to_datetime:
base_query = base_query.where(
transcripts.c.created_at <= params.to_datetime
)
if params.query_text:
if params.query_text is not None:
order_by = sqlalchemy.desc(sqlalchemy.text("rank"))
else:
order_by = sqlalchemy.desc(transcripts.c.created_at)
@@ -407,19 +427,29 @@ class SearchController:
)
total = await get_database().fetch_val(count_query)
def _process_result(r) -> SearchResult:
def _process_result(r: DbRecord) -> SearchResult:
r_dict: Dict[str, Any] = dict(r)
webvtt_raw: str | None = r_dict.pop("webvtt", None)
webvtt: WebVTTContent | None
if webvtt_raw:
webvtt = WebVTTProcessor.parse(webvtt_raw)
else:
webvtt = None
long_summary: str | None = r_dict.pop("long_summary", None)
long_summary_r: str | None = r_dict.pop("long_summary", None)
long_summary: NonEmptyString | None = try_parse_non_empty_string(long_summary_r)
room_name: str | None = r_dict.pop("room_name", None)
db_result = SearchResultDB.model_validate(r_dict)
snippets, total_match_count = SnippetGenerator.combine_sources(
long_summary, webvtt, params.query_text, DEFAULT_MAX_SNIPPETS
at_least_one_source = webvtt is not None or long_summary is not None
has_query = params.query_text is not None
snippets, total_match_count = (
SnippetGenerator.combine_sources(
long_summary, webvtt, params.query_text, DEFAULT_MAX_SNIPPETS
)
if has_query and at_least_one_source
else ([], 0)
)
return SearchResult(

View File

@@ -21,7 +21,7 @@ from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.processors.types import Word as ProcessorWord
from reflector.settings import settings
from reflector.storage import get_recordings_storage, get_transcripts_storage
from reflector.storage import get_transcripts_storage
from reflector.utils import generate_uuid4
from reflector.utils.webvtt import topics_to_webvtt
@@ -122,6 +122,15 @@ def generate_transcript_name() -> str:
return f"Transcript {now.strftime('%Y-%m-%d %H:%M:%S')}"
TranscriptStatus = Literal[
"idle", "uploaded", "recording", "processing", "error", "ended"
]
class StrValue(BaseModel):
value: str
class AudioWaveform(BaseModel):
data: list[float]
@@ -177,6 +186,7 @@ class TranscriptParticipant(BaseModel):
id: str = Field(default_factory=generate_uuid4)
speaker: int | None
name: str
user_id: str | None = None
class Transcript(BaseModel):
@@ -185,7 +195,7 @@ class Transcript(BaseModel):
id: str = Field(default_factory=generate_uuid4)
user_id: str | None = None
name: str = Field(default_factory=generate_transcript_name)
status: str = "idle"
status: TranscriptStatus = "idle"
duration: float = 0
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
title: str | None = None
@@ -614,7 +624,9 @@ class TranscriptController:
)
if recording:
try:
await get_recordings_storage().delete_file(recording.object_key)
await get_transcripts_storage().delete_file(
recording.object_key, bucket=recording.bucket_name
)
except Exception as e:
logger.warning(
"Failed to delete recording object from S3",
@@ -638,6 +650,19 @@ class TranscriptController:
query = transcripts.delete().where(transcripts.c.recording_id == recording_id)
await get_database().execute(query)
@staticmethod
def user_can_mutate(transcript: Transcript, user_id: str | None) -> bool:
"""
Returns True if the given user is allowed to modify the transcript.
Policy:
- Anonymous transcripts (user_id is None) cannot be modified via API
- Only the owner (matching user_id) can modify their transcript
"""
if transcript.user_id is None:
return False
return user_id is not None and transcript.user_id == user_id
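# e.g. user_can_mutate(transcript, None) -> False, and a transcript with
# user_id=None is immutable for every caller.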
@asynccontextmanager
async def transaction(self):
"""
@@ -703,11 +728,13 @@ class TranscriptController:
"""
Download audio from storage
"""
transcript.audio_mp3_filename.write_bytes(
await get_transcripts_storage().get_file(
transcript.storage_audio_path,
)
)
storage = get_transcripts_storage()
try:
with open(transcript.audio_mp3_filename, "wb") as f:
await storage.stream_to_fileobj(transcript.storage_audio_path, f)
except Exception:
transcript.audio_mp3_filename.unlink(missing_ok=True)
raise
async def upsert_participant(
self,
@@ -732,5 +759,27 @@ class TranscriptController:
transcript.delete_participant(participant_id)
await self.update(transcript, {"participants": transcript.participants_dump()})
async def set_status(
self, transcript_id: str, status: TranscriptStatus
) -> TranscriptEvent | None:
"""
Update the status of a transcript
Appends a STATUS event and updates the transcript's status field
"""
async with self.transaction():
transcript = await self.get_by_id(transcript_id)
if not transcript:
raise Exception(f"Transcript {transcript_id} not found")
if transcript.status == status:
return
resp = await self.append_event(
transcript=transcript,
event="STATUS",
data=StrValue(value=status),
)
await self.update(transcript, {"status": status})
return resp
transcripts_controller = TranscriptController()

View File

@@ -0,0 +1,91 @@
import hmac
import secrets
from datetime import datetime, timezone
from hashlib import sha256
import sqlalchemy
from pydantic import BaseModel, Field
from reflector.db import get_database, metadata
from reflector.settings import settings
from reflector.utils import generate_uuid4
from reflector.utils.string import NonEmptyString
user_api_keys = sqlalchemy.Table(
"user_api_key",
metadata,
sqlalchemy.Column("id", sqlalchemy.String, primary_key=True),
sqlalchemy.Column("user_id", sqlalchemy.String, nullable=False),
sqlalchemy.Column("key_hash", sqlalchemy.String, nullable=False),
sqlalchemy.Column("name", sqlalchemy.String, nullable=True),
sqlalchemy.Column("created_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Index("idx_user_api_key_hash", "key_hash", unique=True),
sqlalchemy.Index("idx_user_api_key_user_id", "user_id"),
)
class UserApiKey(BaseModel):
id: NonEmptyString = Field(default_factory=generate_uuid4)
user_id: NonEmptyString
key_hash: NonEmptyString
name: NonEmptyString | None = None
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
class UserApiKeyController:
@staticmethod
def generate_key() -> NonEmptyString:
return secrets.token_urlsafe(48)
@staticmethod
def hash_key(key: NonEmptyString) -> str:
return hmac.new(
settings.SECRET_KEY.encode(), key.encode(), digestmod=sha256
).hexdigest()
@classmethod
async def create_key(
cls,
user_id: NonEmptyString,
name: NonEmptyString | None = None,
) -> tuple[UserApiKey, NonEmptyString]:
plaintext = cls.generate_key()
api_key = UserApiKey(
user_id=user_id,
key_hash=cls.hash_key(plaintext),
name=name,
)
query = user_api_keys.insert().values(**api_key.model_dump())
await get_database().execute(query)
return api_key, plaintext
@classmethod
async def verify_key(cls, plaintext_key: NonEmptyString) -> UserApiKey | None:
key_hash = cls.hash_key(plaintext_key)
query = user_api_keys.select().where(
user_api_keys.c.key_hash == key_hash,
)
result = await get_database().fetch_one(query)
return UserApiKey(**result) if result else None
@staticmethod
async def list_by_user_id(user_id: NonEmptyString) -> list[UserApiKey]:
query = (
user_api_keys.select()
.where(user_api_keys.c.user_id == user_id)
.order_by(user_api_keys.c.created_at.desc())
)
results = await get_database().fetch_all(query)
return [UserApiKey(**r) for r in results]
@staticmethod
async def delete_key(key_id: NonEmptyString, user_id: NonEmptyString) -> bool:
query = user_api_keys.delete().where(
(user_api_keys.c.id == key_id) & (user_api_keys.c.user_id == user_id)
)
result = await get_database().execute(query)
# asyncpg returns None for DELETE, consider it success if no exception
return result is None or result > 0
user_api_keys_controller = UserApiKeyController()
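# Usage sketch (illustrative names; assumes a connected database and settings.SECRET_KEY):
#
#     api_key, plaintext = await user_api_keys_controller.create_key(
#         user_id="user-123", name="ci-token"
#     )
#     # `plaintext` is returned exactly once; only its keyed HMAC-SHA256 digest
#     # is stored, so verify_key recomputes the digest and looks it up.
#     assert await user_api_keys_controller.verify_key(plaintext) is not None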

View File

@@ -0,0 +1,92 @@
"""User table for storing Authentik user information."""
from datetime import datetime, timezone
import sqlalchemy
from pydantic import BaseModel, Field
from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
from reflector.utils.string import NonEmptyString
users = sqlalchemy.Table(
"user",
metadata,
sqlalchemy.Column("id", sqlalchemy.String, primary_key=True),
sqlalchemy.Column("email", sqlalchemy.String, nullable=False),
sqlalchemy.Column("authentik_uid", sqlalchemy.String, nullable=False),
sqlalchemy.Column("created_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Index("idx_user_authentik_uid", "authentik_uid", unique=True),
sqlalchemy.Index("idx_user_email", "email", unique=False),
)
class User(BaseModel):
id: NonEmptyString = Field(default_factory=generate_uuid4)
email: NonEmptyString
authentik_uid: NonEmptyString
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
class UserController:
@staticmethod
async def get_by_id(user_id: NonEmptyString) -> User | None:
query = users.select().where(users.c.id == user_id)
result = await get_database().fetch_one(query)
return User(**result) if result else None
@staticmethod
async def get_by_authentik_uid(authentik_uid: NonEmptyString) -> User | None:
query = users.select().where(users.c.authentik_uid == authentik_uid)
result = await get_database().fetch_one(query)
return User(**result) if result else None
@staticmethod
async def get_by_email(email: NonEmptyString) -> User | None:
query = users.select().where(users.c.email == email)
result = await get_database().fetch_one(query)
return User(**result) if result else None
@staticmethod
async def create_or_update(
id: NonEmptyString, authentik_uid: NonEmptyString, email: NonEmptyString
) -> User:
existing = await UserController.get_by_authentik_uid(authentik_uid)
now = datetime.now(timezone.utc)
if existing:
query = (
users.update()
.where(users.c.authentik_uid == authentik_uid)
.values(email=email, updated_at=now)
)
await get_database().execute(query)
return User(
id=existing.id,
authentik_uid=authentik_uid,
email=email,
created_at=existing.created_at,
updated_at=now,
)
else:
user = User(
id=id,
authentik_uid=authentik_uid,
email=email,
created_at=now,
updated_at=now,
)
query = users.insert().values(**user.model_dump())
await get_database().execute(query)
return user
@staticmethod
async def list_all() -> list[User]:
query = users.select().order_by(users.c.created_at.desc())
results = await get_database().fetch_all(query)
return [User(**r) for r in results]
user_controller = UserController()
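# Usage sketch (illustrative values): create_or_update is an upsert keyed on
# authentik_uid; the provided `id` is only used when inserting a new row.
#
#     user = await user_controller.create_or_update(
#         id=generate_uuid4(), authentik_uid="ak-uid-42", email="user@example.com"
#     )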

View File

@@ -1,3 +1,4 @@
import logging
from typing import Type, TypeVar
from llama_index.core import Settings
@@ -5,7 +6,7 @@ from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import LLMTextCompletionProgram
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.llms.openai_like import OpenAILike
from pydantic import BaseModel
from pydantic import BaseModel, ValidationError
T = TypeVar("T", bound=BaseModel)
@@ -61,6 +62,8 @@ class LLM:
tone_name: str | None = None,
) -> T:
"""Get structured output from LLM for non-function-calling models"""
logger = logging.getLogger(__name__)
summarizer = TreeSummarize(verbose=True)
response = await summarizer.aget_response(prompt, texts, tone_name=tone_name)
@@ -76,8 +79,25 @@ class LLM:
"Please structure the above information in the following JSON format:"
)
output = await program.acall(
analysis=str(response), format_instructions=format_instructions
)
try:
output = await program.acall(
analysis=str(response), format_instructions=format_instructions
)
except ValidationError as e:
# Extract the raw JSON from the error details
errors = e.errors()
if errors and "input" in errors[0]:
raw_json = errors[0]["input"]
logger.error(
f"JSON validation failed for {output_cls.__name__}. "
f"Full raw JSON output:\n{raw_json}\n"
f"Validation errors: {errors}"
)
else:
logger.error(
f"JSON validation failed for {output_cls.__name__}. "
f"Validation errors: {errors}"
)
raise
return output

View File

@@ -0,0 +1 @@
"""Pipeline modules for audio processing."""

View File

@@ -7,29 +7,34 @@ Uses parallel processing for transcription, diarization, and waveform generation
"""
import asyncio
import uuid
from pathlib import Path
import av
import structlog
from celery import shared_task
from celery import chain, shared_task
from reflector.asynctask import asynctask
from reflector.db.rooms import rooms_controller
from reflector.db.transcripts import (
SourceKind,
Transcript,
TranscriptStatus,
transcripts_controller,
)
from reflector.logger import logger
from reflector.pipelines.main_live_pipeline import PipelineMainBase, asynctask
from reflector.processors import (
AudioFileWriterProcessor,
TranscriptFinalSummaryProcessor,
TranscriptFinalTitleProcessor,
TranscriptTopicDetectorProcessor,
from reflector.pipelines import topic_processing
from reflector.pipelines.main_live_pipeline import (
PipelineMainBase,
broadcast_to_sockets,
task_cleanup_consent,
task_pipeline_post_to_zulip,
)
from reflector.pipelines.transcription_helpers import transcribe_file_with_processor
from reflector.processors import AudioFileWriterProcessor
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.processors.file_diarization import FileDiarizationInput
from reflector.processors.file_diarization_auto import FileDiarizationAutoProcessor
from reflector.processors.file_transcript import FileTranscriptInput
from reflector.processors.file_transcript_auto import FileTranscriptAutoProcessor
from reflector.processors.transcript_diarization_assembler import (
TranscriptDiarizationAssemblerInput,
TranscriptDiarizationAssemblerProcessor,
@@ -43,19 +48,7 @@ from reflector.processors.types import (
)
from reflector.settings import settings
from reflector.storage import get_transcripts_storage
class EmptyPipeline:
"""Empty pipeline for processors that need a pipeline reference"""
def __init__(self, logger: structlog.BoundLogger):
self.logger = logger
def get_pref(self, k, d=None):
return d
async def emit(self, event):
pass
from reflector.worker.webhook import send_transcript_webhook
class PipelineMainFile(PipelineMainBase):
@@ -70,7 +63,7 @@ class PipelineMainFile(PipelineMainBase):
def __init__(self, transcript_id: str):
super().__init__(transcript_id=transcript_id)
self.logger = logger.bind(transcript_id=self.transcript_id)
self.empty_pipeline = EmptyPipeline(logger=self.logger)
self.empty_pipeline = topic_processing.EmptyPipeline(logger=self.logger)
def _handle_gather_exceptions(self, results: list, operation: str) -> None:
"""Handle exceptions from asyncio.gather with return_exceptions=True"""
@@ -83,12 +76,27 @@ class PipelineMainFile(PipelineMainBase):
exc_info=result,
)
@broadcast_to_sockets
async def set_status(self, transcript_id: str, status: TranscriptStatus):
async with self.lock_transaction():
return await transcripts_controller.set_status(transcript_id, status)
async def process(self, file_path: Path):
"""Main entry point for file processing"""
self.logger.info(f"Starting file pipeline for {file_path}")
transcript = await self.get_transcript()
# Clear transcript as we're going to regenerate everything
async with self.transaction():
await transcripts_controller.update(
transcript,
{
"events": [],
"topics": [],
},
)
# Extract audio and write to transcript location
audio_path = await self.extract_and_write_audio(file_path, transcript)
@@ -105,6 +113,8 @@ class PipelineMainFile(PipelineMainBase):
self.logger.info("File pipeline complete")
await self.set_status(transcript.id, "ended")
async def extract_and_write_audio(
self, file_path: Path, transcript: Transcript
) -> Path:
@@ -234,24 +244,7 @@ class PipelineMainFile(PipelineMainBase):
async def transcribe_file(self, audio_url: str, language: str) -> TranscriptType:
"""Transcribe complete file"""
processor = FileTranscriptAutoProcessor()
input_data = FileTranscriptInput(audio_url=audio_url, language=language)
# Store result for retrieval
result: TranscriptType | None = None
async def capture_result(transcript):
nonlocal result
result = transcript
processor.on(capture_result)
await processor.push(input_data)
await processor.flush()
if not result:
raise ValueError("No transcript captured")
return result
return await transcribe_file_with_processor(audio_url, language)
async def diarize_file(self, audio_url: str) -> list[DiarizationSegment] | None:
"""Get diarization for file"""
@@ -294,63 +287,53 @@ class PipelineMainFile(PipelineMainBase):
async def detect_topics(
self, transcript: TranscriptType, target_language: str
) -> list[TitleSummary]:
"""Detect topics from complete transcript"""
chunk_size = 300
topics: list[TitleSummary] = []
async def on_topic(topic: TitleSummary):
topics.append(topic)
return await self.on_topic(topic)
topic_detector = TranscriptTopicDetectorProcessor(callback=on_topic)
topic_detector.set_pipeline(self.empty_pipeline)
for i in range(0, len(transcript.words), chunk_size):
chunk_words = transcript.words[i : i + chunk_size]
if not chunk_words:
continue
chunk_transcript = TranscriptType(
words=chunk_words, translation=transcript.translation
)
await topic_detector.push(chunk_transcript)
await topic_detector.flush()
return topics
return await topic_processing.detect_topics(
transcript,
target_language,
on_topic_callback=self.on_topic,
empty_pipeline=self.empty_pipeline,
)
async def generate_title(self, topics: list[TitleSummary]):
"""Generate title from topics"""
if not topics:
self.logger.warning("No topics for title generation")
return
processor = TranscriptFinalTitleProcessor(callback=self.on_title)
processor.set_pipeline(self.empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
return await topic_processing.generate_title(
topics,
on_title_callback=self.on_title,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)
async def generate_summaries(self, topics: list[TitleSummary]):
"""Generate long and short summaries from topics"""
if not topics:
self.logger.warning("No topics for summary generation")
return
transcript = await self.get_transcript()
processor = TranscriptFinalSummaryProcessor(
transcript=transcript,
callback=self.on_long_summary,
on_short_summary=self.on_short_summary,
return await topic_processing.generate_summaries(
topics,
transcript,
on_long_summary_callback=self.on_long_summary,
on_short_summary_callback=self.on_short_summary,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)
processor.set_pipeline(self.empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
@shared_task
@asynctask
async def task_send_webhook_if_needed(*, transcript_id: str):
"""Send webhook if this is a room recording with webhook configured"""
transcript = await transcripts_controller.get_by_id(transcript_id)
if not transcript:
return
if transcript.source_kind == SourceKind.ROOM and transcript.room_id:
room = await rooms_controller.get_by_id(transcript.room_id)
if room and room.webhook_url:
logger.info(
"Dispatching webhook",
transcript_id=transcript_id,
room_id=room.id,
webhook_url=room.webhook_url,
)
send_transcript_webhook.delay(
transcript_id, room.id, event_id=uuid.uuid4().hex
)
@shared_task
@@ -362,14 +345,33 @@ async def task_pipeline_file_process(*, transcript_id: str):
if not transcript:
raise Exception(f"Transcript {transcript_id} not found")
# Find the file to process
audio_file = next(transcript.data_path.glob("upload.*"), None)
if not audio_file:
audio_file = next(transcript.data_path.glob("audio.*"), None)
if not audio_file:
raise Exception("No audio file found to process")
# Run file pipeline
pipeline = PipelineMainFile(transcript_id=transcript_id)
await pipeline.process(audio_file)
try:
await pipeline.set_status(transcript_id, "processing")
# Find the file to process
audio_file = next(transcript.data_path.glob("upload.*"), None)
if not audio_file:
audio_file = next(transcript.data_path.glob("audio.*"), None)
if not audio_file:
raise Exception("No audio file found to process")
await pipeline.process(audio_file)
except Exception as e:
logger.error(
f"File pipeline failed for transcript {transcript_id}: {type(e).__name__}: {str(e)}",
exc_info=True,
transcript_id=transcript_id,
)
await pipeline.set_status(transcript_id, "error")
raise
# Run post-processing chain: consent cleanup -> zulip -> webhook
post_chain = chain(
task_cleanup_consent.si(transcript_id=transcript_id),
task_pipeline_post_to_zulip.si(transcript_id=transcript_id),
task_send_webhook_if_needed.si(transcript_id=transcript_id),
)
post_chain.delay()

View File

@@ -17,12 +17,11 @@ from contextlib import asynccontextmanager
from typing import Generic
import av
import boto3
from celery import chord, current_task, group, shared_task
from pydantic import BaseModel
from structlog import BoundLogger as Logger
from reflector.db import get_database
from reflector.asynctask import asynctask
from reflector.db.meetings import meeting_consent_controller, meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms_controller
@@ -32,6 +31,7 @@ from reflector.db.transcripts import (
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
TranscriptFinalTitle,
TranscriptStatus,
TranscriptText,
TranscriptTopic,
TranscriptWaveform,
@@ -40,8 +40,9 @@ from reflector.db.transcripts import (
from reflector.logger import logger
from reflector.pipelines.runner import PipelineMessage, PipelineRunner
from reflector.processors import (
AudioChunkerProcessor,
AudioChunkerAutoProcessor,
AudioDiarizationAutoProcessor,
AudioDownscaleProcessor,
AudioFileWriterProcessor,
AudioMergeProcessor,
AudioTranscriptAutoProcessor,
@@ -68,29 +69,6 @@ from reflector.zulip import (
)
def asynctask(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
async def run_with_db():
database = get_database()
await database.connect()
try:
return await f(*args, **kwargs)
finally:
await database.disconnect()
coro = run_with_db()
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
if loop and loop.is_running():
return loop.run_until_complete(coro)
return asyncio.run(coro)
return wrapper
def broadcast_to_sockets(func):
"""
Decorator to broadcast transcript event to websockets
@@ -106,6 +84,20 @@ def broadcast_to_sockets(func):
message=resp.model_dump(mode="json"),
)
transcript = await transcripts_controller.get_by_id(self.transcript_id)
if transcript and transcript.user_id:
# Emit only relevant events to the user room to avoid noisy updates.
# Allowed: STATUS, FINAL_TITLE, DURATION. All are prefixed with TRANSCRIPT_
allowed_user_events = {"STATUS", "FINAL_TITLE", "DURATION"}
if resp.event in allowed_user_events:
await self.ws_manager.send_json(
room_id=f"user:{transcript.user_id}",
message={
"event": f"TRANSCRIPT_{resp.event}",
"data": {"id": self.transcript_id, **resp.data},
},
)
return wrapper
@@ -187,8 +179,15 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
]
@asynccontextmanager
async def transaction(self):
async def lock_transaction(self):
# This lock prevents multiple processors from appending to the
# event array at the same time
async with self._lock:
yield
@asynccontextmanager
async def transaction(self):
async with self.lock_transaction():
async with transcripts_controller.transaction():
yield
@@ -197,14 +196,14 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
# if it's the first part, update the status of the transcript
# but do not set the ended status yet.
if isinstance(self, PipelineMainLive):
status_mapping = {
status_mapping: dict[str, TranscriptStatus] = {
"started": "recording",
"push": "recording",
"flush": "processing",
"error": "error",
}
elif isinstance(self, PipelineMainFinalSummaries):
status_mapping = {
status_mapping: dict[str, TranscriptStatus] = {
"push": "processing",
"flush": "processing",
"error": "error",
@@ -220,22 +219,8 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
return
# when the status of the pipeline changes, update the transcript
async with self.transaction():
transcript = await self.get_transcript()
if status == transcript.status:
return
resp = await transcripts_controller.append_event(
transcript=transcript,
event="STATUS",
data=StrValue(value=status),
)
await transcripts_controller.update(
transcript,
{
"status": status,
},
)
return resp
async with self._lock:
return await transcripts_controller.set_status(self.transcript_id, status)
@broadcast_to_sockets
async def on_transcript(self, data):
@@ -365,7 +350,8 @@ class PipelineMainLive(PipelineMainBase):
path=transcript.audio_wav_filename,
on_duration=self.on_duration,
),
AudioChunkerProcessor(),
AudioDownscaleProcessor(),
AudioChunkerAutoProcessor(),
AudioMergeProcessor(),
AudioTranscriptAutoProcessor.as_threaded(),
TranscriptLinerProcessor(),
@@ -597,6 +583,7 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
consent_denied = False
recording = None
meeting = None
try:
if transcript.recording_id:
recording = await recordings_controller.get_by_id(transcript.recording_id)
@@ -607,8 +594,8 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
meeting.id
)
except Exception as e:
logger.error(f"Failed to get fetch consent: {e}", exc_info=e)
consent_denied = True
logger.error(f"Failed to fetch consent: {e}", exc_info=e)
raise
if not consent_denied:
logger.info("Consent approved, keeping all files")
@@ -616,25 +603,24 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
logger.info("Consent denied, cleaning up all related audio files")
if recording and recording.bucket_name and recording.object_key:
s3_whereby = boto3.client(
"s3",
aws_access_key_id=settings.AWS_WHEREBY_ACCESS_KEY_ID,
aws_secret_access_key=settings.AWS_WHEREBY_ACCESS_KEY_SECRET,
)
try:
s3_whereby.delete_object(
Bucket=recording.bucket_name, Key=recording.object_key
)
logger.info(
f"Deleted original Whereby recording: {recording.bucket_name}/{recording.object_key}"
)
except Exception as e:
logger.error(f"Failed to delete Whereby recording: {e}", exc_info=e)
deletion_errors = []
if recording and recording.bucket_name:
keys_to_delete = []
if recording.track_keys:
keys_to_delete = recording.track_keys
elif recording.object_key:
keys_to_delete = [recording.object_key]
master_storage = get_transcripts_storage()
for key in keys_to_delete:
try:
await master_storage.delete_file(key, bucket=recording.bucket_name)
logger.info(f"Deleted recording file: {recording.bucket_name}/{key}")
except Exception as e:
error_msg = f"Failed to delete {key}: {e}"
logger.error(error_msg, exc_info=e)
deletion_errors.append(error_msg)
# non-transactional: the flag can be set even if some files were not actually deleted
await transcripts_controller.update(transcript, {"audio_deleted": True})
# 2. Delete processed audio from transcript storage S3 bucket
if transcript.audio_location == "storage":
storage = get_transcripts_storage()
try:
@@ -643,18 +629,28 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
f"Deleted processed audio from storage: {transcript.storage_audio_path}"
)
except Exception as e:
logger.error(f"Failed to delete processed audio: {e}", exc_info=e)
error_msg = f"Failed to delete processed audio: {e}"
logger.error(error_msg, exc_info=e)
deletion_errors.append(error_msg)
# 3. Delete local audio files
try:
if hasattr(transcript, "audio_mp3_filename") and transcript.audio_mp3_filename:
transcript.audio_mp3_filename.unlink(missing_ok=True)
if hasattr(transcript, "audio_wav_filename") and transcript.audio_wav_filename:
transcript.audio_wav_filename.unlink(missing_ok=True)
except Exception as e:
logger.error(f"Failed to delete local audio files: {e}", exc_info=e)
error_msg = f"Failed to delete local audio files: {e}"
logger.error(error_msg, exc_info=e)
deletion_errors.append(error_msg)
logger.info("Consent cleanup done")
if deletion_errors:
logger.warning(
f"Consent cleanup completed with {len(deletion_errors)} errors",
errors=deletion_errors,
)
else:
await transcripts_controller.update(transcript, {"audio_deleted": True})
logger.info("Consent cleanup done - all audio deleted")
@get_transcript
@@ -792,7 +788,7 @@ def pipeline_post(*, transcript_id: str):
chain_final_summaries,
) | task_pipeline_post_to_zulip.si(transcript_id=transcript_id)
chain.delay()
return chain.delay()
@get_transcript

View File

@@ -0,0 +1,803 @@
import asyncio
import math
import tempfile
from fractions import Fraction
from pathlib import Path
import av
from av.audio.resampler import AudioResampler
from celery import chain, shared_task
from reflector.asynctask import asynctask
from reflector.dailyco_api import MeetingParticipantsResponse
from reflector.db.transcripts import (
Transcript,
TranscriptParticipant,
TranscriptStatus,
TranscriptWaveform,
transcripts_controller,
)
from reflector.logger import logger
from reflector.pipelines import topic_processing
from reflector.pipelines.main_file_pipeline import task_send_webhook_if_needed
from reflector.pipelines.main_live_pipeline import (
PipelineMainBase,
broadcast_to_sockets,
task_cleanup_consent,
task_pipeline_post_to_zulip,
)
from reflector.pipelines.transcription_helpers import transcribe_file_with_processor
from reflector.processors import AudioFileWriterProcessor
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.processors.types import TitleSummary
from reflector.processors.types import Transcript as TranscriptType
from reflector.settings import settings
from reflector.storage import Storage, get_transcripts_storage
from reflector.utils.daily import (
filter_cam_audio_tracks,
parse_daily_recording_filename,
)
from reflector.utils.string import NonEmptyString
from reflector.video_platforms.factory import create_platform_client
# Audio encoding constants
OPUS_STANDARD_SAMPLE_RATE = 48000
OPUS_DEFAULT_BIT_RATE = 128000
# Storage operation constants
PRESIGNED_URL_EXPIRATION_SECONDS = 7200 # 2 hours
class PipelineMainMultitrack(PipelineMainBase):
def __init__(self, transcript_id: str):
super().__init__(transcript_id=transcript_id)
self.logger = logger.bind(transcript_id=self.transcript_id)
self.empty_pipeline = topic_processing.EmptyPipeline(logger=self.logger)
async def pad_track_for_transcription(
self,
track_url: NonEmptyString,
track_idx: int,
storage: Storage,
) -> NonEmptyString:
"""
Pad a single track with silence based on stream metadata start_time.
Downloads from S3 presigned URL, processes via PyAV using tempfile, uploads to S3.
Returns presigned URL of padded track (or original URL if no padding needed).
Memory usage:
- Fixed overhead (2-5MB) for PyAV codec/filters
- PyAV streams input efficiently (no full download, verified)
- Output written to tempfile (disk-based, not memory)
- Upload streams from file handle (boto3 chunks, typically 5-10MB)
Daily.co raw-tracks timing - Two approaches:
CURRENT APPROACH (PyAV metadata):
The WebM stream.start_time field encodes MEETING-RELATIVE timing:
- t=0: When Daily.co recording started (first participant joined)
- start_time=8.13s: This participant's track began 8.13s after recording started
- Purpose: Enables track alignment without external manifest files
This is NOT:
- Stream-internal offset (first packet timestamp relative to stream start)
- Absolute/wall-clock time
- Recording duration
ALTERNATIVE APPROACH (filename parsing):
Daily.co filenames contain Unix timestamps (milliseconds):
Format: {recording_start_ts}-{participant_id}-cam-audio-{track_start_ts}.webm
Example: 1760988935484-52f7f48b-fbab-431f-9a50-87b9abfc8255-cam-audio-1760988935922.webm
Can calculate offset: (track_start_ts - recording_start_ts) / 1000
- Track 0: (1760988935922 - 1760988935484) / 1000 = 0.438s
- Track 1: (1760988943823 - 1760988935484) / 1000 = 8.339s
TIME DIFFERENCE: PyAV metadata vs filename timestamps differ by ~209ms:
- Track 0: filename=438ms, metadata=229ms (diff: 209ms)
- Track 1: filename=8339ms, metadata=8130ms (diff: 209ms)
Consistent delta suggests network/encoding delay. PyAV metadata is ground truth
(represents when audio stream actually started vs when file upload initiated).
Example with 2 participants:
Track A: start_time=0.2s → Joined 200ms after recording began
Track B: start_time=8.1s → Joined 8.1 seconds later
After padding:
Track A: [0.2s silence] + [speech...]
Track B: [8.1s silence] + [speech...]
Whisper transcription timestamps are now synchronized:
Track A word at 5.0s → happened at meeting t=5.0s
Track B word at 10.0s → happened at meeting t=10.0s
Merging just sorts by timestamp - no offset calculation needed.
Padding happens to involve re-encoding, which matters for Daily.co + Whisper:
Daily.co returns recordings with skipped frames (e.g. while the microphone is muted),
and Whisper doesn't understand those gaps, which causes timestamp issues in transcription.
Re-encoding restores those frames. We do padding and re-encoding together simply because
it's convenient and more performant: we need the padded tracks for the mixdown MP3 anyway.
"""
transcript = await self.get_transcript()
try:
# PyAV streams input from S3 URL efficiently (2-5MB fixed overhead for codec/filters)
with av.open(track_url) as in_container:
start_time_seconds = self._extract_stream_start_time_from_container(
in_container, track_idx
)
if start_time_seconds <= 0:
self.logger.info(
f"Track {track_idx} requires no padding (start_time={start_time_seconds}s)",
track_idx=track_idx,
)
return track_url
# Use tempfile instead of BytesIO for better memory efficiency
# Reduces peak memory usage during encoding/upload
with tempfile.NamedTemporaryFile(
suffix=".webm", delete=False
) as temp_file:
temp_path = temp_file.name
try:
self._apply_audio_padding_to_file(
in_container, temp_path, start_time_seconds, track_idx
)
storage_path = (
f"file_pipeline/{transcript.id}/tracks/padded_{track_idx}.webm"
)
# Upload using file handle for streaming
with open(temp_path, "rb") as padded_file:
await storage.put_file(storage_path, padded_file)
finally:
# Clean up temp file
Path(temp_path).unlink(missing_ok=True)
padded_url = await storage.get_file_url(
storage_path,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
self.logger.info(
f"Successfully padded track {track_idx}",
track_idx=track_idx,
start_time_seconds=start_time_seconds,
padded_url=padded_url,
)
return padded_url
except Exception as e:
self.logger.error(
f"Failed to process track {track_idx}",
track_idx=track_idx,
url=track_url,
error=str(e),
exc_info=True,
)
raise Exception(
f"Track {track_idx} padding failed - transcript would have incorrect timestamps"
) from e
def _extract_stream_start_time_from_container(
self, container, track_idx: int
) -> float:
"""
Extract meeting-relative start time from WebM stream metadata.
Uses PyAV to read stream.start_time from WebM container.
More accurate than filename timestamps by ~209ms due to network/encoding delays.
"""
start_time_seconds = 0.0
try:
audio_streams = [s for s in container.streams if s.type == "audio"]
stream = audio_streams[0] if audio_streams else container.streams[0]
# 1) Try stream-level start_time (most reliable for Daily.co tracks)
if stream.start_time is not None and stream.time_base is not None:
start_time_seconds = float(stream.start_time * stream.time_base)
# 2) Fallback to container-level start_time (in av.time_base units)
if (start_time_seconds <= 0) and (container.start_time is not None):
start_time_seconds = float(container.start_time * av.time_base)
# 3) Fallback to first packet DTS in stream.time_base
if start_time_seconds <= 0:
for packet in container.demux(stream):
if packet.dts is not None:
start_time_seconds = float(packet.dts * stream.time_base)
break
except Exception as e:
self.logger.warning(
"PyAV metadata read failed; assuming 0 start_time",
track_idx=track_idx,
error=str(e),
)
start_time_seconds = 0.0
self.logger.info(
f"Track {track_idx} stream metadata: start_time={start_time_seconds:.3f}s",
track_idx=track_idx,
)
return start_time_seconds
def _apply_audio_padding_to_file(
self,
in_container,
output_path: str,
start_time_seconds: float,
track_idx: int,
) -> None:
"""Apply silence padding to audio track using PyAV filter graph, writing to file"""
delay_ms = math.floor(start_time_seconds * 1000)
self.logger.info(
f"Padding track {track_idx} with {delay_ms}ms delay using PyAV",
track_idx=track_idx,
delay_ms=delay_ms,
)
try:
with av.open(output_path, "w", format="webm") as out_container:
in_stream = next(
(s for s in in_container.streams if s.type == "audio"), None
)
if in_stream is None:
raise Exception("No audio stream in input")
out_stream = out_container.add_stream(
"libopus", rate=OPUS_STANDARD_SAMPLE_RATE
)
out_stream.bit_rate = OPUS_DEFAULT_BIT_RATE
graph = av.filter.Graph()
abuf_args = (
f"time_base=1/{OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_rate={OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_fmt=s16:"
f"channel_layout=stereo"
)
src = graph.add("abuffer", args=abuf_args, name="src")
aresample_f = graph.add("aresample", args="async=1", name="ares")
# adelay requires one delay value per channel separated by '|'
delays_arg = f"{delay_ms}|{delay_ms}"
adelay_f = graph.add(
"adelay", args=f"delays={delays_arg}:all=1", name="delay"
)
sink = graph.add("abuffersink", name="sink")
src.link_to(aresample_f)
aresample_f.link_to(adelay_f)
adelay_f.link_to(sink)
graph.configure()
resampler = AudioResampler(
format="s16", layout="stereo", rate=OPUS_STANDARD_SAMPLE_RATE
)
# Decode -> resample -> push through graph -> encode Opus
for frame in in_container.decode(in_stream):
out_frames = resampler.resample(frame) or []
for rframe in out_frames:
rframe.sample_rate = OPUS_STANDARD_SAMPLE_RATE
rframe.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
src.push(rframe)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
src.push(None)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
for packet in out_stream.encode(None):
out_container.mux(packet)
except Exception as e:
self.logger.error(
"PyAV padding failed for track",
track_idx=track_idx,
delay_ms=delay_ms,
error=str(e),
exc_info=True,
)
raise
async def mixdown_tracks(
self,
track_urls: list[str],
writer: AudioFileWriterProcessor,
offsets_seconds: list[float] | None = None,
) -> None:
"""Multi-track mixdown using PyAV filter graph (amix), reading from S3 presigned URLs"""
target_sample_rate: int | None = None
for url in track_urls:
if not url:
continue
container = None
try:
container = av.open(url)
for frame in container.decode(audio=0):
target_sample_rate = frame.sample_rate
break
except Exception:
continue
finally:
if container is not None:
container.close()
if target_sample_rate:
break
if not target_sample_rate:
self.logger.error("Mixdown failed - no decodable audio frames found")
raise Exception("Mixdown failed: No decodable audio frames in any track")
# Build PyAV filter graph:
# N abuffer (s32/stereo)
# -> optional adelay per input (for alignment)
# -> amix (s32)
# -> aformat(s16)
# -> sink
graph = av.filter.Graph()
inputs = []
valid_track_urls = [url for url in track_urls if url]
input_offsets_seconds = None
if offsets_seconds is not None:
input_offsets_seconds = [
offsets_seconds[i] for i, url in enumerate(track_urls) if url
]
for idx, url in enumerate(valid_track_urls):
args = (
f"time_base=1/{target_sample_rate}:"
f"sample_rate={target_sample_rate}:"
f"sample_fmt=s32:"
f"channel_layout=stereo"
)
in_ctx = graph.add("abuffer", args=args, name=f"in{idx}")
inputs.append(in_ctx)
if not inputs:
self.logger.error("Mixdown failed - no valid inputs for graph")
raise Exception("Mixdown failed: No valid inputs for filter graph")
mixer = graph.add("amix", args=f"inputs={len(inputs)}:normalize=0", name="mix")
fmt = graph.add(
"aformat",
args=(
f"sample_fmts=s32:channel_layouts=stereo:sample_rates={target_sample_rate}"
),
name="fmt",
)
sink = graph.add("abuffersink", name="out")
# Optional per-input delay before mixing
delays_ms: list[int] = []
if input_offsets_seconds is not None:
base = min(input_offsets_seconds) if input_offsets_seconds else 0.0
delays_ms = [
max(0, int(round((o - base) * 1000))) for o in input_offsets_seconds
]
else:
delays_ms = [0 for _ in inputs]
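# Illustrative (offsets from the padding docstring): offsets_seconds=[0.438, 8.339]
# -> base=0.438, delays_ms=[0, 7901]; the later joiner is delayed before amix.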
for idx, in_ctx in enumerate(inputs):
delay_ms = delays_ms[idx] if idx < len(delays_ms) else 0
if delay_ms > 0:
# adelay requires one value per channel; use same for stereo
adelay = graph.add(
"adelay",
args=f"delays={delay_ms}|{delay_ms}:all=1",
name=f"delay{idx}",
)
in_ctx.link_to(adelay)
adelay.link_to(mixer, 0, idx)
else:
in_ctx.link_to(mixer, 0, idx)
mixer.link_to(fmt)
fmt.link_to(sink)
graph.configure()
containers = []
try:
# Open all containers with cleanup guaranteed
for i, url in enumerate(valid_track_urls):
try:
c = av.open(url)
containers.append(c)
except Exception as e:
self.logger.warning(
"Mixdown: failed to open container from URL",
input=i,
url=url,
error=str(e),
)
if not containers:
self.logger.error("Mixdown failed - no valid containers opened")
raise Exception("Mixdown failed: Could not open any track containers")
decoders = [c.decode(audio=0) for c in containers]
active = [True] * len(decoders)
resamplers = [
AudioResampler(format="s32", layout="stereo", rate=target_sample_rate)
for _ in decoders
]
while any(active):
for i, (dec, is_active) in enumerate(zip(decoders, active)):
if not is_active:
continue
try:
frame = next(dec)
except StopIteration:
active[i] = False
continue
if frame.sample_rate != target_sample_rate:
continue
out_frames = resamplers[i].resample(frame) or []
for rf in out_frames:
rf.sample_rate = target_sample_rate
rf.time_base = Fraction(1, target_sample_rate)
inputs[i].push(rf)
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
for in_ctx in inputs:
in_ctx.push(None)
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
finally:
# Cleanup all containers, even if processing failed
for c in containers:
if c is not None:
try:
c.close()
except Exception:
pass # Best effort cleanup
@broadcast_to_sockets
async def set_status(self, transcript_id: str, status: TranscriptStatus):
async with self.lock_transaction():
return await transcripts_controller.set_status(transcript_id, status)
async def on_waveform(self, data):
async with self.transaction():
waveform = TranscriptWaveform(waveform=data)
transcript = await self.get_transcript()
return await transcripts_controller.append_event(
transcript=transcript, event="WAVEFORM", data=waveform
)
async def update_participants_from_daily(
self, transcript: Transcript, track_keys: list[str]
) -> None:
"""Update transcript participants with user_id and names from Daily.co API."""
if not transcript.recording_id:
return
try:
async with create_platform_client("daily") as daily_client:
id_to_name = {}
id_to_user_id = {}
try:
rec_details = await daily_client.get_recording(
transcript.recording_id
)
mtg_session_id = rec_details.mtgSessionId
if mtg_session_id:
try:
payload: MeetingParticipantsResponse = (
await daily_client.get_meeting_participants(
mtg_session_id
)
)
for p in payload.data:
pid = p.participant_id
name = p.user_name
user_id = p.user_id
if name:
id_to_name[pid] = name
if user_id:
id_to_user_id[pid] = user_id
except Exception as e:
self.logger.warning(
"Failed to fetch Daily meeting participants",
error=str(e),
mtg_session_id=mtg_session_id,
exc_info=True,
)
else:
self.logger.warning(
"No mtgSessionId found for recording; participant names may be generic",
recording_id=transcript.recording_id,
)
except Exception as e:
self.logger.warning(
"Failed to fetch Daily recording details",
error=str(e),
recording_id=transcript.recording_id,
exc_info=True,
)
return
cam_audio_keys = filter_cam_audio_tracks(track_keys)
for idx, key in enumerate(cam_audio_keys):
try:
parsed = parse_daily_recording_filename(key)
participant_id = parsed.participant_id
except ValueError as e:
self.logger.error(
"Failed to parse Daily recording filename",
error=str(e),
key=key,
exc_info=True,
)
continue
default_name = f"Speaker {idx}"
name = id_to_name.get(participant_id, default_name)
user_id = id_to_user_id.get(participant_id)
participant = TranscriptParticipant(
id=participant_id, speaker=idx, name=name, user_id=user_id
)
await transcripts_controller.upsert_participant(
transcript, participant
)
except Exception as e:
self.logger.warning(
"Failed to map participant names", error=str(e), exc_info=True
)
async def process(self, bucket_name: str, track_keys: list[str]):
transcript = await self.get_transcript()
async with self.transaction():
await transcripts_controller.update(
transcript,
{
"events": [],
"topics": [],
"participants": [],
},
)
await self.update_participants_from_daily(transcript, track_keys)
source_storage = get_transcripts_storage()
transcript_storage = source_storage
track_urls: list[str] = []
for key in track_keys:
url = await source_storage.get_file_url(
key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=bucket_name,
)
track_urls.append(url)
self.logger.info(
f"Generated presigned URL for track from {bucket_name}",
key=key,
)
created_padded_files = set()
padded_track_urls: list[str] = []
for idx, url in enumerate(track_urls):
padded_url = await self.pad_track_for_transcription(
url, idx, transcript_storage
)
padded_track_urls.append(padded_url)
if padded_url != url:
storage_path = f"file_pipeline/{transcript.id}/tracks/padded_{idx}.webm"
created_padded_files.add(storage_path)
self.logger.info(f"Track {idx} processed, padded URL: {padded_url}")
transcript.data_path.mkdir(parents=True, exist_ok=True)
if settings.SKIP_MIXDOWN:
self.logger.warning(
"SKIP_MIXDOWN enabled: Skipping mixdown and waveform generation. "
"UI will have no audio playback or waveform.",
num_tracks=len(padded_track_urls),
transcript_id=transcript.id,
)
else:
mp3_writer = AudioFileWriterProcessor(
path=str(transcript.audio_mp3_filename),
on_duration=self.on_duration,
)
await self.mixdown_tracks(
padded_track_urls, mp3_writer, offsets_seconds=None
)
await mp3_writer.flush()
if not transcript.audio_mp3_filename.exists():
raise Exception(
"Mixdown failed - no MP3 file generated. Cannot proceed without playable audio."
)
storage_path = f"{transcript.id}/audio.mp3"
# Use file handle streaming to avoid loading entire MP3 into memory
mp3_size = transcript.audio_mp3_filename.stat().st_size
with open(transcript.audio_mp3_filename, "rb") as mp3_file:
await transcript_storage.put_file(storage_path, mp3_file)
mp3_url = await transcript_storage.get_file_url(storage_path)
await transcripts_controller.update(
transcript, {"audio_location": "storage"}
)
self.logger.info(
f"Uploaded mixed audio to storage",
storage_path=storage_path,
size=mp3_size,
url=mp3_url,
)
self.logger.info("Generating waveform from mixed audio")
waveform_processor = AudioWaveformProcessor(
audio_path=transcript.audio_mp3_filename,
waveform_path=transcript.audio_waveform_filename,
on_waveform=self.on_waveform,
)
waveform_processor.set_pipeline(self.empty_pipeline)
await waveform_processor.flush()
self.logger.info("Waveform generated successfully")
speaker_transcripts: list[TranscriptType] = []
for idx, padded_url in enumerate(padded_track_urls):
if not padded_url:
continue
t = await self.transcribe_file(padded_url, transcript.source_language)
if not t.words:
self.logger.debug(f"no words in track {idx}")
# not skipping, it may be silence or indistinguishable mumbling
for w in t.words:
w.speaker = idx
speaker_transcripts.append(t)
self.logger.info(
f"Track {idx} transcribed successfully with {len(t.words)} words",
track_idx=idx,
)
valid_track_count = len([url for url in padded_track_urls if url])
if valid_track_count > 0 and len(speaker_transcripts) != valid_track_count:
raise Exception(
f"Only {len(speaker_transcripts)}/{valid_track_count} tracks transcribed successfully. "
f"All tracks must succeed to avoid incomplete transcripts."
)
if not speaker_transcripts:
raise Exception("No valid track transcriptions")
self.logger.info(f"Cleaning up {len(created_padded_files)} temporary S3 files")
cleanup_tasks = []
for storage_path in created_padded_files:
cleanup_tasks.append(transcript_storage.delete_file(storage_path))
if cleanup_tasks:
cleanup_results = await asyncio.gather(
*cleanup_tasks, return_exceptions=True
)
for storage_path, result in zip(created_padded_files, cleanup_results):
if isinstance(result, Exception):
self.logger.warning(
"Failed to cleanup temporary padded track",
storage_path=storage_path,
error=str(result),
)
merged_words = []
for t in speaker_transcripts:
merged_words.extend(t.words)
merged_words.sort(
key=lambda w: w.start if hasattr(w, "start") and w.start is not None else 0
)
merged_transcript = TranscriptType(words=merged_words, translation=None)
await self.on_transcript(merged_transcript)
topics = await self.detect_topics(merged_transcript, transcript.target_language)
await asyncio.gather(
self.generate_title(topics),
self.generate_summaries(topics),
return_exceptions=False,
)
await self.set_status(transcript.id, "ended")
async def transcribe_file(self, audio_url: str, language: str) -> TranscriptType:
return await transcribe_file_with_processor(audio_url, language)
async def detect_topics(
self, transcript: TranscriptType, target_language: str
) -> list[TitleSummary]:
return await topic_processing.detect_topics(
transcript,
target_language,
on_topic_callback=self.on_topic,
empty_pipeline=self.empty_pipeline,
)
async def generate_title(self, topics: list[TitleSummary]):
return await topic_processing.generate_title(
topics,
on_title_callback=self.on_title,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)
async def generate_summaries(self, topics: list[TitleSummary]):
transcript = await self.get_transcript()
return await topic_processing.generate_summaries(
topics,
transcript,
on_long_summary_callback=self.on_long_summary,
on_short_summary_callback=self.on_short_summary,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)
@shared_task
@asynctask
async def task_pipeline_multitrack_process(
*, transcript_id: str, bucket_name: str, track_keys: list[str]
):
pipeline = PipelineMainMultitrack(transcript_id=transcript_id)
try:
await pipeline.set_status(transcript_id, "processing")
await pipeline.process(bucket_name, track_keys)
except Exception:
await pipeline.set_status(transcript_id, "error")
raise
post_chain = chain(
task_cleanup_consent.si(transcript_id=transcript_id),
task_pipeline_post_to_zulip.si(transcript_id=transcript_id),
task_send_webhook_if_needed.si(transcript_id=transcript_id),
)
post_chain.delay()
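# Enqueue sketch (illustrative ids; track key taken from the filename example above):
#
#     task_pipeline_multitrack_process.delay(
#         transcript_id="tr-123",
#         bucket_name="daily-raw-tracks",
#         track_keys=[
#             "1760988935484-52f7f48b-fbab-431f-9a50-87b9abfc8255-cam-audio-1760988935922.webm"
#         ],
#     )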

View File

@@ -0,0 +1,109 @@
"""
Topic processing utilities
==========================
Shared topic detection, title generation, and summarization logic
used across file and multitrack pipelines.
"""
from typing import Callable
import structlog
from reflector.db.transcripts import Transcript
from reflector.processors import (
TranscriptFinalSummaryProcessor,
TranscriptFinalTitleProcessor,
TranscriptTopicDetectorProcessor,
)
from reflector.processors.types import TitleSummary
from reflector.processors.types import Transcript as TranscriptType
class EmptyPipeline:
def __init__(self, logger: structlog.BoundLogger):
self.logger = logger
def get_pref(self, k, d=None):
return d
async def emit(self, event):
pass
async def detect_topics(
transcript: TranscriptType,
target_language: str,
*,
on_topic_callback: Callable,
empty_pipeline: EmptyPipeline,
) -> list[TitleSummary]:
chunk_size = 300
topics: list[TitleSummary] = []
async def on_topic(topic: TitleSummary):
topics.append(topic)
return await on_topic_callback(topic)
topic_detector = TranscriptTopicDetectorProcessor(callback=on_topic)
topic_detector.set_pipeline(empty_pipeline)
for i in range(0, len(transcript.words), chunk_size):
chunk_words = transcript.words[i : i + chunk_size]
if not chunk_words:
continue
chunk_transcript = TranscriptType(
words=chunk_words, translation=transcript.translation
)
await topic_detector.push(chunk_transcript)
await topic_detector.flush()
return topics
async def generate_title(
topics: list[TitleSummary],
*,
on_title_callback: Callable,
empty_pipeline: EmptyPipeline,
logger: structlog.BoundLogger,
):
if not topics:
logger.warning("No topics for title generation")
return
processor = TranscriptFinalTitleProcessor(callback=on_title_callback)
processor.set_pipeline(empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
async def generate_summaries(
topics: list[TitleSummary],
transcript: Transcript,
*,
on_long_summary_callback: Callable,
on_short_summary_callback: Callable,
empty_pipeline: EmptyPipeline,
logger: structlog.BoundLogger,
):
if not topics:
logger.warning("No topics for summary generation")
return
processor = TranscriptFinalSummaryProcessor(
transcript=transcript,
callback=on_long_summary_callback,
on_short_summary=on_short_summary_callback,
)
processor.set_pipeline(empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
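# Usage sketch (illustrative; callbacks, logger, and transcript come from the
# calling pipeline, as in PipelineMainFile / PipelineMainMultitrack):
#
#     topics = await detect_topics(
#         merged_transcript,
#         target_language,
#         on_topic_callback=pipeline.on_topic,
#         empty_pipeline=pipeline.empty_pipeline,
#     )
#     await generate_title(
#         topics,
#         on_title_callback=pipeline.on_title,
#         empty_pipeline=pipeline.empty_pipeline,
#         logger=pipeline.logger,
#     )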

View File

@@ -0,0 +1,34 @@
from reflector.processors.file_transcript import FileTranscriptInput
from reflector.processors.file_transcript_auto import FileTranscriptAutoProcessor
from reflector.processors.types import Transcript as TranscriptType
async def transcribe_file_with_processor(
audio_url: str,
language: str,
processor_name: str | None = None,
) -> TranscriptType:
processor = (
FileTranscriptAutoProcessor(name=processor_name)
if processor_name
else FileTranscriptAutoProcessor()
)
input_data = FileTranscriptInput(audio_url=audio_url, language=language)
result: TranscriptType | None = None
async def capture_result(transcript):
nonlocal result
result = transcript
processor.on(capture_result)
await processor.push(input_data)
await processor.flush()
if not result:
processor_label = processor_name or "default"
raise ValueError(
f"No transcript captured from {processor_label} processor for audio: {audio_url}"
)
return result
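
A usage sketch for the helper; the audio URL is a placeholder, and omitting processor_name falls back to the configured default backend (the helper's module path is not shown in this diff):

import asyncio

async def main():
    transcript = await transcribe_file_with_processor(
        "https://example.com/recording.wav",  # placeholder URL
        language="en",
    )
    print(f"{len(transcript.words)} words transcribed")

asyncio.run(main())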


@@ -1,5 +1,7 @@
 from .audio_chunker import AudioChunkerProcessor  # noqa: F401
+from .audio_chunker_auto import AudioChunkerAutoProcessor  # noqa: F401
 from .audio_diarization_auto import AudioDiarizationAutoProcessor  # noqa: F401
+from .audio_downscale import AudioDownscaleProcessor  # noqa: F401
 from .audio_file_writer import AudioFileWriterProcessor  # noqa: F401
 from .audio_merge import AudioMergeProcessor  # noqa: F401
 from .audio_transcript import AudioTranscriptProcessor  # noqa: F401


@@ -1,340 +1,78 @@
 from typing import Optional
 
 import av
-import numpy as np
-import torch
-from silero_vad import VADIterator, load_silero_vad
+from prometheus_client import Counter, Histogram
 
 from reflector.processors.base import Processor
 
 
 class AudioChunkerProcessor(Processor):
     """
-    Assemble audio frames into chunks with VAD-based speech detection
+    Base class for assembling audio frames into chunks
     """
 
     INPUT_TYPE = av.AudioFrame
     OUTPUT_TYPE = list[av.AudioFrame]
 
-    def __init__(
-        self,
-        block_frames=256,
-        max_frames=1024,
-        vad_threshold=0.5,
-        use_onnx=False,
-        min_frames=2,
-    ):
-        super().__init__()
-        self.frames: list[av.AudioFrame] = []
-        self.block_frames = block_frames
-        self.max_frames = max_frames
-        self.vad_threshold = vad_threshold
-        self.min_frames = min_frames
-        # Initialize Silero VAD
-        self._init_vad(use_onnx)
-
-    def _init_vad(self, use_onnx=False):
-        """Initialize Silero VAD model"""
-        try:
-            torch.set_num_threads(1)
-            self.vad_model = load_silero_vad(onnx=use_onnx)
-            self.vad_iterator = VADIterator(self.vad_model, sampling_rate=16000)
-            self.logger.info("Silero VAD initialized successfully")
-        except Exception as e:
-            self.logger.error(f"Failed to initialize Silero VAD: {e}")
-            self.vad_model = None
-            self.vad_iterator = None
+    m_chunk = Histogram(
+        "audio_chunker",
+        "Time spent in AudioChunker.chunk",
+        ["backend"],
+    )
+    m_chunk_call = Counter(
+        "audio_chunker_call",
+        "Number of calls to AudioChunker.chunk",
+        ["backend"],
+    )
+    m_chunk_success = Counter(
+        "audio_chunker_success",
+        "Number of successful calls to AudioChunker.chunk",
+        ["backend"],
+    )
+    m_chunk_failure = Counter(
+        "audio_chunker_failure",
+        "Number of failed calls to AudioChunker.chunk",
+        ["backend"],
+    )
+
+    def __init__(self, *args, **kwargs):
+        name = self.__class__.__name__
+        self.m_chunk = self.m_chunk.labels(name)
+        self.m_chunk_call = self.m_chunk_call.labels(name)
+        self.m_chunk_success = self.m_chunk_success.labels(name)
+        self.m_chunk_failure = self.m_chunk_failure.labels(name)
+        super().__init__(*args, **kwargs)
+        self.frames: list[av.AudioFrame] = []
 
     async def _push(self, data: av.AudioFrame):
-        self.frames.append(data)
-        # print("timestamp", data.pts * data.time_base * 1000)
-        # Check for speech segments every 32 frames (~1 second)
-        if len(self.frames) >= 32 and len(self.frames) % 32 == 0:
-            await self._process_block()
-        # Safety fallback - emit if we hit max frames
-        elif len(self.frames) >= self.max_frames:
-            self.logger.warning(
-                f"AudioChunkerProcessor: Reached max frames ({self.max_frames}), "
-                f"emitting first {self.max_frames // 2} frames"
-            )
-            frames_to_emit = self.frames[: self.max_frames // 2]
-            self.frames = self.frames[self.max_frames // 2 :]
-            if len(frames_to_emit) >= self.min_frames:
-                await self.emit(frames_to_emit)
-            else:
-                self.logger.debug(
-                    f"Ignoring fallback segment with {len(frames_to_emit)} frames "
-                    f"(< {self.min_frames} minimum)"
-                )
+        """Process incoming audio frame"""
+        # Validate audio format on first frame
+        if len(self.frames) == 0:
+            if data.sample_rate != 16000 or len(data.layout.channels) != 1:
+                raise ValueError(
+                    f"AudioChunkerProcessor expects 16kHz mono audio, got {data.sample_rate}Hz "
+                    f"with {len(data.layout.channels)} channel(s). "
+                    f"Use AudioDownscaleProcessor before this processor."
+                )
+        try:
+            self.m_chunk_call.inc()
+            with self.m_chunk.time():
+                result = await self._chunk(data)
+            self.m_chunk_success.inc()
+            if result:
+                await self.emit(result)
+        except Exception:
+            self.m_chunk_failure.inc()
+            raise
 
-    async def _process_block(self):
-        # Need at least 32 frames for VAD detection (~1 second)
-        if len(self.frames) < 32 or self.vad_iterator is None:
-            return
-        # Processing block with current buffer size
-        # print(f"Processing block: {len(self.frames)} frames in buffer")
-        try:
-            # Convert frames to numpy array for VAD
-            audio_array = self._frames_to_numpy(self.frames)
-            if audio_array is None:
-                # Fallback: emit all frames if conversion failed
-                frames_to_emit = self.frames[:]
-                self.frames = []
-                if len(frames_to_emit) >= self.min_frames:
-                    await self.emit(frames_to_emit)
-                else:
-                    self.logger.debug(
-                        f"Ignoring conversion-failed segment with {len(frames_to_emit)} frames "
-                        f"(< {self.min_frames} minimum)"
-                    )
-                return
-            # Find complete speech segments in the buffer
-            speech_end_frame = self._find_speech_segment_end(audio_array)
-            if speech_end_frame is None or speech_end_frame <= 0:
-                # No speech found but buffer is getting large
-                if len(self.frames) > 512:
-                    # Could emit silence or discard old frames here; for now keep
-                    # only the most recent 256 frames and drop older, likely-silent ones
-                    if len(self.frames) > 768:
-                        self.logger.debug(
-                            f"Discarding {len(self.frames) - 256} old frames (likely silence)"
-                        )
-                        self.frames = self.frames[-256:]
-                return
-            # Calculate segment timing information
-            frames_to_emit = self.frames[:speech_end_frame]
-            # Get timing from av.AudioFrame
-            if frames_to_emit:
-                first_frame = frames_to_emit[0]
-                last_frame = frames_to_emit[-1]
-                sample_rate = first_frame.sample_rate
-                # Calculate duration
-                total_samples = sum(f.samples for f in frames_to_emit)
-                duration_seconds = total_samples / sample_rate if sample_rate > 0 else 0
-                # Get timestamps if available
-                start_time = (
-                    first_frame.pts * first_frame.time_base if first_frame.pts else 0
-                )
-                end_time = (
-                    last_frame.pts * last_frame.time_base if last_frame.pts else 0
-                )
-
-                # Convert to HH:MM:SS format for logging
-                def format_time(seconds):
-                    if not seconds:
-                        return "00:00:00"
-                    total_seconds = int(float(seconds))
-                    hours = total_seconds // 3600
-                    minutes = (total_seconds % 3600) // 60
-                    secs = total_seconds % 60
-                    return f"{hours:02d}:{minutes:02d}:{secs:02d}"
-
-                start_formatted = format_time(start_time)
-                end_formatted = format_time(end_time)
-                remaining_after = len(self.frames) - speech_end_frame
-                # Single structured log line
-                self.logger.info(
-                    "Speech segment found",
-                    start=start_formatted,
-                    end=end_formatted,
-                    frames=speech_end_frame,
-                    duration=round(duration_seconds, 2),
-                    buffer_before=len(self.frames),
-                    remaining=remaining_after,
-                )
-            # Keep remaining frames for next processing
-            self.frames = self.frames[speech_end_frame:]
-            # Filter out segments with too few frames
-            if len(frames_to_emit) >= self.min_frames:
-                await self.emit(frames_to_emit)
-            else:
-                self.logger.debug(
-                    f"Ignoring segment with {len(frames_to_emit)} frames "
-                    f"(< {self.min_frames} minimum)"
-                )
-        except Exception as e:
-            self.logger.error(f"Error in VAD processing: {e}")
-            # Fallback to simple chunking
-            if len(self.frames) >= self.block_frames:
-                frames_to_emit = self.frames[: self.block_frames]
-                self.frames = self.frames[self.block_frames :]
-                if len(frames_to_emit) >= self.min_frames:
-                    await self.emit(frames_to_emit)
-                else:
-                    self.logger.debug(
-                        f"Ignoring exception-fallback segment with {len(frames_to_emit)} frames "
-                        f"(< {self.min_frames} minimum)"
-                    )
-
-    def _frames_to_numpy(self, frames: list[av.AudioFrame]) -> Optional[np.ndarray]:
-        """Convert av.AudioFrame list to numpy array for VAD processing"""
-        if not frames:
-            return None
-        try:
-            first_frame = frames[0]
-            original_sample_rate = first_frame.sample_rate
-            audio_data = []
-            for frame in frames:
-                frame_array = frame.to_ndarray()
-                # Handle stereo -> mono conversion
-                if len(frame_array.shape) == 2 and frame_array.shape[0] > 1:
-                    frame_array = np.mean(frame_array, axis=0)
-                elif len(frame_array.shape) == 2:
-                    frame_array = frame_array.flatten()
-                audio_data.append(frame_array)
-            if not audio_data:
-                return None
-            combined_audio = np.concatenate(audio_data)
-            # Resample from 48kHz to 16kHz if needed
-            if original_sample_rate != 16000:
-                combined_audio = self._resample_audio(
-                    combined_audio, original_sample_rate, 16000
-                )
-            # Ensure float32 format
-            if combined_audio.dtype == np.int16:
-                # Normalize int16 audio to float32 in range [-1.0, 1.0]
-                combined_audio = combined_audio.astype(np.float32) / 32768.0
-            elif combined_audio.dtype != np.float32:
-                combined_audio = combined_audio.astype(np.float32)
-            return combined_audio
-        except Exception as e:
-            self.logger.error(f"Error converting frames to numpy: {e}")
-            return None
-
-    def _resample_audio(
-        self, audio: np.ndarray, from_sr: int, to_sr: int
-    ) -> np.ndarray:
-        """Simple linear resampling from from_sr to to_sr"""
-        if from_sr == to_sr:
-            return audio
-        try:
-            # Simple linear interpolation resampling
-            ratio = to_sr / from_sr
-            new_length = int(len(audio) * ratio)
-            # Create indices for interpolation
-            old_indices = np.linspace(0, len(audio) - 1, new_length)
-            resampled = np.interp(old_indices, np.arange(len(audio)), audio)
-            return resampled.astype(np.float32)
-        except Exception as e:
-            self.logger.error("Resampling error", exc_info=e)
-            # Fallback: simple decimation/repetition
-            if from_sr > to_sr:
-                # Downsample by taking every nth sample
-                step = from_sr // to_sr
-                return audio[::step]
-            else:
-                # Upsample by repeating samples
-                repeat = to_sr // from_sr
-                return np.repeat(audio, repeat)
-
-    def _find_speech_segment_end(self, audio_array: np.ndarray) -> Optional[int]:
-        """Find complete speech segments and return frame index at segment end"""
-        if self.vad_iterator is None or len(audio_array) == 0:
-            return None
-        try:
-            # Process audio in 512-sample windows for VAD
-            window_size = 512
-            min_silence_windows = 3  # Require 3 windows of silence after speech
-            # Track speech state
-            in_speech = False
-            speech_start = None
-            speech_end = None
-            silence_count = 0
-            for i in range(0, len(audio_array), window_size):
-                chunk = audio_array[i : i + window_size]
-                if len(chunk) < window_size:
-                    chunk = np.pad(chunk, (0, window_size - len(chunk)))
-                # Detect if this window has speech
-                speech_dict = self.vad_iterator(chunk, return_seconds=True)
-                # VADIterator returns dict with 'start' and 'end' when speech segments are detected
-                if speech_dict:
-                    if not in_speech:
-                        # Speech started
-                        speech_start = i
-                        in_speech = True
-                        # Debug: print(f"Speech START at sample {i}, VAD: {speech_dict}")
-                    silence_count = 0  # Reset silence counter
-                    continue
-                if not in_speech:
-                    continue
-                # We're in speech but found silence
-                silence_count += 1
-                if silence_count < min_silence_windows:
-                    continue
-                # Found end of speech segment
-                speech_end = i - (min_silence_windows - 1) * window_size
-                # Debug: print(f"Speech END at sample {speech_end}")
-                # Convert sample position to frame index
-                samples_per_frame = self.frames[0].samples if self.frames else 1024
-                # Account for resampling: we process at 16kHz but frames might be 48kHz
-                resample_ratio = 48000 / 16000  # 3x
-                actual_sample_pos = int(speech_end * resample_ratio)
-                frame_index = actual_sample_pos // samples_per_frame
-                # Ensure we don't exceed buffer
-                frame_index = min(frame_index, len(self.frames))
-                return frame_index
-            return None
-        except Exception as e:
-            self.logger.error(f"Error finding speech segment: {e}")
-            return None
+    async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
+        """
+        Process audio frame and return chunk when ready.
+        Subclasses should implement their chunking logic here.
+        """
+        raise NotImplementedError
 
     async def _flush(self):
-        frames = self.frames[:]
-        self.frames = []
-        if frames:
-            if len(frames) >= self.min_frames:
-                await self.emit(frames)
-            else:
-                self.logger.debug(
-                    f"Ignoring flush segment with {len(frames)} frames "
-                    f"(< {self.min_frames} minimum)"
-                )
+        """Flush any remaining frames when processing ends"""
+        raise NotImplementedError

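The metric setup above follows a common prometheus_client pattern: declare labeled metrics once at class scope, then re-bind them per instance via labels() so each subclass reports under its own backend label. A standalone sketch of that pattern, independent of this codebase:

from prometheus_client import Counter, Histogram

class Worker:
    # Registered once at import time; shared by every subclass.
    m_calls = Counter("worker_calls", "Number of calls to Worker.run", ["backend"])
    m_time = Histogram("worker_time", "Time spent in Worker.run", ["backend"])

    def __init__(self):
        # Shadow the class attributes with label-bound children.
        name = self.__class__.__name__
        self.m_calls = self.m_calls.labels(name)
        self.m_time = self.m_time.labels(name)

    def run(self):
        self.m_calls.inc()
        with self.m_time.time():
            pass  # real work goes here

class FastWorker(Worker):
    pass

FastWorker().run()  # counted under backend="FastWorker"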

@@ -0,0 +1,32 @@
import importlib
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.settings import settings
class AudioChunkerAutoProcessor(AudioChunkerProcessor):
_registry = {}
@classmethod
def register(cls, name, kclass):
cls._registry[name] = kclass
def __new__(cls, name: str | None = None, **kwargs):
if name is None:
name = settings.AUDIO_CHUNKER_BACKEND
if name not in cls._registry:
module_name = f"reflector.processors.audio_chunker_{name}"
importlib.import_module(module_name)
# gather backend-specific configuration for the processor:
# settings named `AUDIO_CHUNKER_<BACKEND>_XXX` are stripped of the
# `AUDIO_CHUNKER_` prefix, lower-cased, and passed to the constructor as `<backend>_xxx`
config = {}
name_upper = name.upper()
settings_prefix = "AUDIO_CHUNKER_"
config_prefix = f"{settings_prefix}{name_upper}_"
for key, value in settings:
if key.startswith(config_prefix):
config_name = key[len(settings_prefix) :].lower()
config[config_name] = value
return cls._registry[name](**config | kwargs)
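
A sketch of how the registry resolves a backend; "frames" is registered by the module below, and explicit kwargs override any settings-derived configuration:

from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor

# Explicit backend choice; extra kwargs are forwarded to the backend class.
chunker = AudioChunkerAutoProcessor(name="frames", max_frames=128)

# With no name, settings.AUDIO_CHUNKER_BACKEND selects the backend.
default_chunker = AudioChunkerAutoProcessor()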


@@ -0,0 +1,34 @@
from typing import Optional
import av
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor
class AudioChunkerFramesProcessor(AudioChunkerProcessor):
"""
Simple frame-based audio chunker that emits chunks after a fixed number of frames
"""
def __init__(self, max_frames=256, **kwargs):
super().__init__(**kwargs)
self.max_frames = max_frames
async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
self.frames.append(data)
if len(self.frames) >= self.max_frames:
frames_to_emit = self.frames[:]
self.frames = []
return frames_to_emit
return None
async def _flush(self):
frames = self.frames[:]
self.frames = []
if frames:
await self.emit(frames)
AudioChunkerAutoProcessor.register("frames", AudioChunkerFramesProcessor)


@@ -0,0 +1,298 @@
from typing import Optional
import av
import numpy as np
import torch
from silero_vad import VADIterator, load_silero_vad
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor
class AudioChunkerSileroProcessor(AudioChunkerProcessor):
"""
Assemble audio frames into chunks with VAD-based speech detection using Silero VAD
"""
def __init__(
self,
block_frames=256,
max_frames=1024,
use_onnx=True,
min_frames=2,
**kwargs,
):
super().__init__(**kwargs)
self.block_frames = block_frames
self.max_frames = max_frames
self.min_frames = min_frames
# Initialize Silero VAD
self._init_vad(use_onnx)
def _init_vad(self, use_onnx=False):
"""Initialize Silero VAD model"""
try:
torch.set_num_threads(1)
self.vad_model = load_silero_vad(onnx=use_onnx)
self.vad_iterator = VADIterator(self.vad_model, sampling_rate=16000)
self.logger.info("Silero VAD initialized successfully")
except Exception as e:
self.logger.error(f"Failed to initialize Silero VAD: {e}")
self.vad_model = None
self.vad_iterator = None
async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
"""Process audio frame and return chunk when ready"""
self.frames.append(data)
# Check for speech segments every 32 frames (~1 second)
if len(self.frames) >= 32 and len(self.frames) % 32 == 0:
return await self._process_block()
# Safety fallback - emit if we hit max frames
elif len(self.frames) >= self.max_frames:
self.logger.warning(
f"AudioChunkerSileroProcessor: Reached max frames ({self.max_frames}), "
f"emitting first {self.max_frames // 2} frames"
)
frames_to_emit = self.frames[: self.max_frames // 2]
self.frames = self.frames[self.max_frames // 2 :]
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring fallback segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
async def _process_block(self) -> Optional[list[av.AudioFrame]]:
# Need at least 32 frames for VAD detection (~1 second)
if len(self.frames) < 32 or self.vad_iterator is None:
return None
# Processing block with current buffer size
print(f"Processing block: {len(self.frames)} frames in buffer")
try:
# Convert frames to numpy array for VAD
audio_array = self._frames_to_numpy(self.frames)
if audio_array is None:
# Fallback: emit all frames if conversion failed
frames_to_emit = self.frames[:]
self.frames = []
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring conversion-failed segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
# Find complete speech segments in the buffer
speech_end_frame = self._find_speech_segment_end(audio_array)
if speech_end_frame is None or speech_end_frame <= 0:
# No speech found but buffer is getting large
if len(self.frames) > 512:
# No speech segment found and the buffer is getting large.
# Could emit silence or discard old frames here; for now keep only
# the most recent 256 frames and drop the older, likely-silent ones.
if len(self.frames) > 768:
self.logger.debug(
f"Discarding {len(self.frames) - 256} old frames (likely silence)"
)
self.frames = self.frames[-256:]
return None
# Calculate segment timing information
frames_to_emit = self.frames[:speech_end_frame]
# Get timing from av.AudioFrame
if frames_to_emit:
first_frame = frames_to_emit[0]
last_frame = frames_to_emit[-1]
sample_rate = first_frame.sample_rate
# Calculate duration
total_samples = sum(f.samples for f in frames_to_emit)
duration_seconds = total_samples / sample_rate if sample_rate > 0 else 0
# Get timestamps if available
start_time = (
first_frame.pts * first_frame.time_base if first_frame.pts else 0
)
end_time = (
last_frame.pts * last_frame.time_base if last_frame.pts else 0
)
# Convert to HH:MM:SS format for logging
def format_time(seconds):
if not seconds:
return "00:00:00"
total_seconds = int(float(seconds))
hours = total_seconds // 3600
minutes = (total_seconds % 3600) // 60
secs = total_seconds % 60
return f"{hours:02d}:{minutes:02d}:{secs:02d}"
start_formatted = format_time(start_time)
end_formatted = format_time(end_time)
# Compute how many frames will remain after this segment is emitted
remaining_after = len(self.frames) - speech_end_frame
# Single structured log line
self.logger.info(
"Speech segment found",
start=start_formatted,
end=end_formatted,
frames=speech_end_frame,
duration=round(duration_seconds, 2),
buffer_before=len(self.frames),
remaining=remaining_after,
)
# Keep remaining frames for next processing
self.frames = self.frames[speech_end_frame:]
# Filter out segments with too few frames
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
except Exception as e:
self.logger.error(f"Error in VAD processing: {e}")
# Fallback to simple chunking
if len(self.frames) >= self.block_frames:
frames_to_emit = self.frames[: self.block_frames]
self.frames = self.frames[self.block_frames :]
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring exception-fallback segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
def _frames_to_numpy(self, frames: list[av.AudioFrame]) -> Optional[np.ndarray]:
"""Convert av.AudioFrame list to numpy array for VAD processing"""
if not frames:
return None
try:
audio_data = []
for frame in frames:
frame_array = frame.to_ndarray()
if len(frame_array.shape) == 2:
frame_array = frame_array.flatten()
audio_data.append(frame_array)
if not audio_data:
return None
combined_audio = np.concatenate(audio_data)
# Ensure float32 format
if combined_audio.dtype == np.int16:
# Normalize int16 audio to float32 in range [-1.0, 1.0]
combined_audio = combined_audio.astype(np.float32) / 32768.0
elif combined_audio.dtype != np.float32:
combined_audio = combined_audio.astype(np.float32)
return combined_audio
except Exception as e:
self.logger.error(f"Error converting frames to numpy: {e}")
return None
def _find_speech_segment_end(self, audio_array: np.ndarray) -> Optional[int]:
"""Find complete speech segments and return frame index at segment end"""
if self.vad_iterator is None or len(audio_array) == 0:
return None
try:
# Process audio in 512-sample windows for VAD
window_size = 512
min_silence_windows = 3 # Require 3 windows of silence after speech
# Track speech state
in_speech = False
speech_start = None
speech_end = None
silence_count = 0
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(chunk, (0, window_size - len(chunk)))
# Detect if this window has speech
speech_dict = self.vad_iterator(chunk, return_seconds=True)
# VADIterator returns dict with 'start' and 'end' when speech segments are detected
if speech_dict:
if not in_speech:
# Speech started
speech_start = i
in_speech = True
# Debug: print(f"Speech START at sample {i}, VAD: {speech_dict}")
silence_count = 0 # Reset silence counter
continue
if not in_speech:
continue
# We're in speech but found silence
silence_count += 1
if silence_count < min_silence_windows:
continue
# Found end of speech segment
speech_end = i - (min_silence_windows - 1) * window_size
# Debug: print(f"Speech END at sample {speech_end}")
# Convert sample position to frame index
samples_per_frame = self.frames[0].samples if self.frames else 1024
frame_index = speech_end // samples_per_frame
# Ensure we don't exceed buffer
frame_index = min(frame_index, len(self.frames))
return frame_index
return None
except Exception as e:
self.logger.error(f"Error finding speech segment: {e}")
return None
async def _flush(self):
frames = self.frames[:]
self.frames = []
if frames:
if len(frames) >= self.min_frames:
await self.emit(frames)
else:
self.logger.debug(
f"Ignoring flush segment with {len(frames)} frames "
f"(< {self.min_frames} minimum)"
)
AudioChunkerAutoProcessor.register("silero", AudioChunkerSileroProcessor)


@@ -0,0 +1,60 @@
from typing import Optional
import av
from av.audio.resampler import AudioResampler
from reflector.processors.base import Processor
def copy_frame(frame: av.AudioFrame) -> av.AudioFrame:
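# Work on a copy: resampling a frame that a previous processor (e.g.
# AudioFileWriter) has already consumed can fail with "Invalid argument" in PyAV.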
frame_copy = frame.from_ndarray(
frame.to_ndarray(),
format=frame.format.name,
layout=frame.layout.name,
)
frame_copy.sample_rate = frame.sample_rate
frame_copy.pts = frame.pts
frame_copy.time_base = frame.time_base
return frame_copy
class AudioDownscaleProcessor(Processor):
"""
Downscale audio frames to 16kHz mono format
"""
INPUT_TYPE = av.AudioFrame
OUTPUT_TYPE = av.AudioFrame
def __init__(self, target_rate: int = 16000, target_layout: str = "mono", **kwargs):
super().__init__(**kwargs)
self.target_rate = target_rate
self.target_layout = target_layout
self.resampler: Optional[AudioResampler] = None
self.needs_resampling: Optional[bool] = None
async def _push(self, data: av.AudioFrame):
if self.needs_resampling is None:
self.needs_resampling = (
data.sample_rate != self.target_rate
or data.layout.name != self.target_layout
)
if self.needs_resampling:
self.resampler = AudioResampler(
format="s16", layout=self.target_layout, rate=self.target_rate
)
if not self.needs_resampling or not self.resampler:
await self.emit(data)
return
resampled_frames = self.resampler.resample(copy_frame(data))
for resampled_frame in resampled_frames:
await self.emit(resampled_frame)
async def _flush(self):
if self.needs_resampling and self.resampler:
final_frames = self.resampler.resample(None)
for frame in final_frames:
await self.emit(frame)
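
A minimal sketch of the PyAV resampling pattern used here, assuming PyAV >= 9 where resample() returns a list of frames: 48 kHz stereo s16 in, 16 kHz mono s16 out, with a final resample(None) to flush internally buffered samples, as _flush() does above:

import av
import numpy as np
from av.audio.resampler import AudioResampler

resampler = AudioResampler(format="s16", layout="mono", rate=16000)

# 20 ms of silent 48 kHz stereo: packed s16 is shape (1, samples * channels).
frame = av.AudioFrame.from_ndarray(
    np.zeros((1, 960 * 2), dtype=np.int16), format="s16", layout="stereo"
)
frame.sample_rate = 48000

out_frames = resampler.resample(frame)
out_frames += resampler.resample(None)  # flush buffered samples
print([f.samples for f in out_frames])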


@@ -3,24 +3,11 @@ from time import monotonic_ns
 from uuid import uuid4
 
 import av
-from av.audio.resampler import AudioResampler
 
 from reflector.processors.base import Processor
 from reflector.processors.types import AudioFile
 
 
-def copy_frame(frame: av.AudioFrame) -> av.AudioFrame:
-    frame_copy = frame.from_ndarray(
-        frame.to_ndarray(),
-        format=frame.format.name,
-        layout=frame.layout.name,
-    )
-    frame_copy.sample_rate = frame.sample_rate
-    frame_copy.pts = frame.pts
-    frame_copy.time_base = frame.time_base
-    return frame_copy
-
-
 class AudioMergeProcessor(Processor):
     """
     Merge audio frames into a single file
@@ -29,9 +16,8 @@ class AudioMergeProcessor(Processor):
     INPUT_TYPE = list[av.AudioFrame]
     OUTPUT_TYPE = AudioFile
 
-    def __init__(self, downsample_to_16k_mono: bool = True, **kwargs):
+    def __init__(self, **kwargs):
         super().__init__(**kwargs)
-        self.downsample_to_16k_mono = downsample_to_16k_mono
 
     async def _push(self, data: list[av.AudioFrame]):
         if not data:
@@ -39,72 +25,27 @@ class AudioMergeProcessor(Processor):
         # get audio information from first frame
         frame = data[0]
-        original_channels = len(frame.layout.channels)
-        original_sample_rate = frame.sample_rate
-        original_sample_width = frame.format.bytes
-
-        # determine if we need processing
-        needs_processing = self.downsample_to_16k_mono and (
-            original_sample_rate != 16000 or original_channels != 1
-        )
-
-        # determine output parameters
-        if self.downsample_to_16k_mono:
-            output_sample_rate = 16000
-            output_channels = 1
-            output_sample_width = 2  # 16-bit = 2 bytes
-        else:
-            output_sample_rate = original_sample_rate
-            output_channels = original_channels
-            output_sample_width = original_sample_width
+        output_channels = len(frame.layout.channels)
+        output_sample_rate = frame.sample_rate
+        output_sample_width = frame.format.bytes
 
         # create audio file
         uu = uuid4().hex
        fd = io.BytesIO()
-        if needs_processing:
-            # Process with PyAV resampler
-            out_container = av.open(fd, "w", format="wav")
-            out_stream = out_container.add_stream("pcm_s16le", rate=16000)
-            out_stream.layout = "mono"
-
-            # Create resampler if needed
-            resampler = None
-            if original_sample_rate != 16000 or original_channels != 1:
-                resampler = AudioResampler(format="s16", layout="mono", rate=16000)
-
-            for frame in data:
-                if resampler:
-                    # Resample and convert to mono
-                    # XXX for an unknown reason, if we don't use a copy of the frame, we get
-                    # Invalid Argument from resample. Debugging indicates that when a previous
-                    # processor already used the frame (like AudioFileWriter), it becomes an
-                    # invalid argument here.
-                    resampled_frames = resampler.resample(copy_frame(frame))
-                    for resampled_frame in resampled_frames:
-                        for packet in out_stream.encode(resampled_frame):
-                            out_container.mux(packet)
-                else:
-                    # Direct encoding without resampling
-                    for packet in out_stream.encode(frame):
-                        out_container.mux(packet)
-
-            # Flush the encoder
-            for packet in out_stream.encode(None):
-                out_container.mux(packet)
-            out_container.close()
-        else:
-            # Use PyAV for original frames (no processing needed)
-            out_container = av.open(fd, "w", format="wav")
-            out_stream = out_container.add_stream("pcm_s16le", rate=output_sample_rate)
-            out_stream.layout = "mono" if output_channels == 1 else frame.layout
-            for frame in data:
-                for packet in out_stream.encode(frame):
-                    out_container.mux(packet)
-            for packet in out_stream.encode(None):
-                out_container.mux(packet)
-            out_container.close()
+
+        # Use PyAV to write frames
+        out_container = av.open(fd, "w", format="wav")
+        out_stream = out_container.add_stream("pcm_s16le", rate=output_sample_rate)
+        out_stream.layout = frame.layout.name
+
+        for frame in data:
+            for packet in out_stream.encode(frame):
+                out_container.mux(packet)
+
+        # Flush the encoder
+        for packet in out_stream.encode(None):
+            out_container.mux(packet)
+        out_container.close()
 
         fd.seek(0)

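Since the merged stream is always encoded as pcm_s16le at the first frame's rate and layout, the resulting in-memory WAV can be sanity-checked with the stdlib wave module; a hypothetical helper:

import io
import wave

def describe_wav(fd: io.BytesIO) -> tuple[int, int, int]:
    # Returns (channels, sample_width_bytes, sample_rate) of the produced WAV.
    fd.seek(0)
    with wave.open(fd, "rb") as wav:
        return wav.getnchannels(), wav.getsampwidth(), wav.getframerate()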

@@ -12,9 +12,6 @@ API will be a POST request to TRANSCRIPT_URL:
 """
-from typing import List
-
-import aiohttp
 from openai import AsyncOpenAI
 
 from reflector.processors.audio_transcript import AudioTranscriptProcessor
@@ -25,7 +22,9 @@ from reflector.settings import settings
 class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
     def __init__(
-        self, modal_api_key: str | None = None, batch_enabled: bool = True, **kwargs
+        self,
+        modal_api_key: str | None = None,
+        **kwargs,
     ):
         super().__init__()
         if not settings.TRANSCRIPT_URL:
@@ -35,126 +34,6 @@ class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
         self.transcript_url = settings.TRANSCRIPT_URL + "/v1"
         self.timeout = settings.TRANSCRIPT_TIMEOUT
         self.modal_api_key = modal_api_key
-        self.max_batch_duration = 10.0
-        self.max_batch_files = 15
-        self.batch_enabled = batch_enabled
-        self.pending_files: List[AudioFile] = []  # Files waiting to be processed
-
-    @classmethod
-    def _calculate_duration(cls, audio_file: AudioFile) -> float:
-        """Calculate audio duration in seconds from AudioFile metadata"""
-        # Duration = total_samples / sample_rate
-        # We need to estimate total samples from the file data
-        import wave
-
-        try:
-            # Try to read as WAV file to get duration
-            audio_file.fd.seek(0)
-            with wave.open(audio_file.fd, "rb") as wav_file:
-                frames = wav_file.getnframes()
-                sample_rate = wav_file.getframerate()
-                duration = frames / sample_rate
-                return duration
-        except Exception:
-            # Fallback: estimate from file size and audio parameters
-            audio_file.fd.seek(0, 2)  # Seek to end
-            file_size = audio_file.fd.tell()
-            audio_file.fd.seek(0)  # Reset to beginning
-            # Estimate: file_size / (sample_rate * channels * sample_width)
-            bytes_per_second = (
-                audio_file.sample_rate
-                * audio_file.channels
-                * (audio_file.sample_width // 8)
-            )
-            estimated_duration = (
-                file_size / bytes_per_second if bytes_per_second > 0 else 0
-            )
-            return max(0, estimated_duration)
-
-    def _create_batches(self, audio_files: List[AudioFile]) -> List[List[AudioFile]]:
-        """Group audio files into batches capped at max_batch_duration seconds total"""
-        batches = []
-        current_batch = []
-        current_duration = 0.0
-        for audio_file in audio_files:
-            duration = self._calculate_duration(audio_file)
-            # If adding this file exceeds max duration, start a new batch
-            if current_duration + duration > self.max_batch_duration and current_batch:
-                batches.append(current_batch)
-                current_batch = [audio_file]
-                current_duration = duration
-            else:
-                current_batch.append(audio_file)
-                current_duration += duration
-        # Add the last batch if not empty
-        if current_batch:
-            batches.append(current_batch)
-        return batches
-
-    async def _transcript_batch(self, audio_files: List[AudioFile]) -> List[Transcript]:
-        """Transcribe a batch of audio files using the parakeet backend"""
-        if not audio_files:
-            return []
-        self.logger.debug(f"Batch transcribing {len(audio_files)} files")
-        # Prepare form data for batch request
-        data = aiohttp.FormData()
-        data.add_field("language", self.get_pref("audio:source_language", "en"))
-        data.add_field("batch", "true")
-        for audio_file in audio_files:
-            audio_file.fd.seek(0)
-            data.add_field(
-                "files",
-                audio_file.fd,
-                filename=f"{audio_file.name}",
-                content_type="audio/wav",
-            )
-        # Make batch request
-        headers = {"Authorization": f"Bearer {self.modal_api_key}"}
-        async with aiohttp.ClientSession(
-            timeout=aiohttp.ClientTimeout(total=self.timeout)
-        ) as session:
-            async with session.post(
-                f"{self.transcript_url}/audio/transcriptions",
-                data=data,
-                headers=headers,
-            ) as response:
-                if response.status != 200:
-                    error_text = await response.text()
-                    raise Exception(
-                        f"Batch transcription failed: {response.status} {error_text}"
-                    )
-                result = await response.json()
-        # Process batch results
-        transcripts = []
-        results = result.get("results", [])
-        for audio_file, file_result in zip(audio_files, results):
-            transcript = Transcript(
-                words=[
-                    Word(
-                        text=word_info["word"],
-                        start=word_info["start"],
-                        end=word_info["end"],
-                    )
-                    for word_info in file_result.get("words", [])
-                ]
-            )
-            transcript.add_offset(audio_file.timestamp)
-            transcripts.append(transcript)
-        return transcripts
 
     async def _transcript(self, data: AudioFile):
         async with AsyncOpenAI(
@@ -187,96 +66,5 @@ class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
         return transcript
 
-    async def transcript_multiple(
-        self, audio_files: List[AudioFile]
-    ) -> List[Transcript]:
-        """Transcribe multiple audio files using batching"""
-        if len(audio_files) == 1:
-            # Single file, use existing method
-            return [await self._transcript(audio_files[0])]
-        # Create batches respecting the duration limit
-        batches = self._create_batches(audio_files)
-        self.logger.debug(
-            f"Processing {len(audio_files)} files in {len(batches)} batches"
-        )
-        # Process batches sequentially
-        all_transcripts = []
-        for batch in batches:
-            batch_transcripts = await self._transcript_batch(batch)
-            all_transcripts.extend(batch_transcripts)
-        return all_transcripts
-
-    async def _push(self, data: AudioFile):
-        """Override _push to support batching"""
-        if not self.batch_enabled:
-            # Use parent implementation for single file processing
-            return await super()._push(data)
-        # Add file to pending batch
-        self.pending_files.append(data)
-        self.logger.debug(
-            f"Added file to batch: {data.name}, batch size: {len(self.pending_files)}"
-        )
-        # Calculate total duration of pending files
-        total_duration = sum(self._calculate_duration(f) for f in self.pending_files)
-        # Process the batch once it reaches the duration cap or the file-count cap
-        should_process_batch = (
-            total_duration >= self.max_batch_duration
-            or len(self.pending_files) >= self.max_batch_files
-        )
-        if should_process_batch:
-            await self._process_pending_batch()
-
-    async def _process_pending_batch(self):
-        """Process all pending files as batches"""
-        if not self.pending_files:
-            return
-        self.logger.debug(f"Processing batch of {len(self.pending_files)} files")
-        try:
-            # Create batches respecting the duration limit
-            batches = self._create_batches(self.pending_files)
-            # Process each batch
-            for batch in batches:
-                self.m_transcript_call.inc()
-                try:
-                    with self.m_transcript.time():
-                        # Use batch transcription
-                        transcripts = await self._transcript_batch(batch)
-                    self.m_transcript_success.inc()
-                    # Emit each transcript
-                    for transcript in transcripts:
-                        if transcript:
-                            await self.emit(transcript)
-                except Exception:
-                    self.m_transcript_failure.inc()
-                    raise
-                finally:
-                    # Release audio files
-                    for audio_file in batch:
-                        audio_file.release()
-        finally:
-            # Clear pending files
-            self.pending_files.clear()
-
-    async def _flush(self):
-        """Process any remaining files when flushing"""
-        await self._process_pending_batch()
-        await super()._flush()
 
 AudioTranscriptAutoProcessor.register("modal", AudioTranscriptModalProcessor)

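The removed batching grouped files greedily by cumulative duration, closing a batch as soon as the next file would push it past the cap. A standalone sketch of that strategy with placeholder durations and the 10-second cap used above:

def create_batches(durations: list[float], max_batch: float = 10.0) -> list[list[float]]:
    batches: list[list[float]] = []
    current: list[float] = []
    total = 0.0
    for duration in durations:
        if current and total + duration > max_batch:
            # Adding this file would exceed the cap: close the current batch.
            batches.append(current)
            current, total = [duration], duration
        else:
            current.append(duration)
            total += duration
    if current:
        batches.append(current)
    return batches

print(create_batches([4.0, 5.0, 3.0, 9.0, 1.0]))  # [[4.0, 5.0], [3.0], [9.0, 1.0]]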