Compare commits

...

85 Commits

Author SHA1 Message Date
e91979abbc feat: use jitsi file system 2025-09-17 15:16:03 -06:00
95e8011975 Merge main into jisti-integration branch
- Resolved conflicts in server/reflector/views/rooms.py to keep platform-agnostic approach
- Resolved conflicts in www/app/[roomName]/page.tsx to keep VideoPlatformEmbed approach
- Accepted main's version of generated API files (schemas.gen.ts, services.gen.ts, types.gen.ts)
- Removed config-template.ts as per main branch changes
2025-09-15 12:53:49 -06:00
c546e69739 fix: zulip stream and topic selection in share dialog (#644)
* fix: zulip stream and topic selection in share dialog

Replace useListCollection with createListCollection to match the working
room edit implementation. This ensures collections update when data loads,
fixing the issue where streams and topics wouldn't appear until navigation.

* fix: wrap createListCollection in useMemo to prevent recreation on every render

Both streamCollection and topicCollection are now memoized to improve performance
and prevent unnecessary re-renders of Combobox components
2025-09-15 12:34:51 -06:00
Igor Monadical
3f1fe8c9bf chore: remove timeout-based auth session logic (#649)
* remove timeout-based auth session logic

* remove timeout-based auth session logic

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-15 14:19:10 -04:00
5f143fe364 fix: zulip and consent handler on the file pipeline (#645) 2025-09-15 10:49:20 -06:00
Igor Monadical
79f161436e chore: meeting user id removal and room id requirement (#635)
* chore: remove meeting user id and make meeting room id required

* meeting room_id optional

* orphaned meeting room ids DATA migration

* ci fix

* fix meeting_room_id_fkey downgrade

* fix migration rollback

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* remove noop migrations

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 13:07:58 -04:00
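
The interesting step in the commit above is the orphaned-ids data migration. A minimal Alembic sketch of that idea, assuming singular table names (meeting.room_id referencing room.id) and placeholder revision ids; not the project's actual migration:

```python
from alembic import op

revision = "xxxxxxxxxxxx"   # placeholder revision id
down_revision = None        # placeholder parent revision

def upgrade() -> None:
    # Null out room_id values that no longer point at an existing room,
    # so a stricter foreign key cannot fail on legacy rows.
    op.execute(
        """
        UPDATE meeting
        SET room_id = NULL
        WHERE room_id IS NOT NULL
          AND room_id NOT IN (SELECT id FROM room)
        """
    )

def downgrade() -> None:
    pass  # pure data migration: the orphaned ids cannot be restored
```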
Igor Monadical
5cba5d310d chore: sentry and nextjs major bumps (#633)
* chore: remove nextjs-config

* build fix

* sentry update

* nextjs update

* feature flags doc

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* no sentry backward compat

* better .env.example

* AUTHENTIK_REFRESH_TOKEN_URL not so required

* accommodate auth system to requiredLogin feature

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 12:41:44 -04:00
43ea9349f5 chore(main): release 0.10.0 (#616) 2025-09-11 20:57:19 -06:00
Igor Monadical
b3a8e9739d chore: whereby & s3 settings env error reporting (#637)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 17:52:34 -04:00
Igor Monadical
369ecdff13 feat: replace nextjs-config with environment variables (#632)
* chore: remove nextjs-config

* build fix

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* better .env.example

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 11:20:41 -04:00
fc363bd49b fix: missing follow_redirects=True on modal endpoint (#630) 2025-09-10 08:15:47 -06:00
Igor Monadical
962038ee3f fix: auth post (#627)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:46:57 -04:00
Igor Monadical
3b85ff3bdf fix: auth post (#626)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:27:46 -04:00
Igor Monadical
cde99ca271 fix: auth post (#624)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 15:48:07 -04:00
Igor Monadical
f81fe9948a fix: anonymous users transcript permissions (#621)
* fix: public transcript visibility

* fix: transcript permissions frontend

* dead code removal

* chore: remove unused code

* fix search tests

* fix search tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 10:50:29 -04:00
Igor Monadical
5a5b323382 fix: sync backend and frontend token refresh logic (#614)
* sync backend and frontend token refresh logic

* return react strict mode

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-08 10:40:18 -04:00
02a3938822 chore(main): release 0.9.0 (#603) 2025-09-05 22:50:10 -06:00
Igor Monadical
7f5a4c9ddc fix: token refresh locking (#613)
* fix: kv use tls explicit

* fix: token refresh locking

* remove logs

* compile fix

* compile fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 23:03:24 -04:00
Igor Monadical
08d88ec349 fix: kv use tls explicit (#610)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 18:39:32 -04:00
Igor Monadical
c4d2825c81 feat: frontend openapi react query (#606)
* refactor: migrate from @hey-api/openapi-ts to openapi-react-query

- Replace @hey-api/openapi-ts with openapi-typescript and openapi-react-query
- Generate TypeScript types from OpenAPI spec
- Set up React Query infrastructure with QueryClientProvider
- Migrate all API hooks to use React Query patterns
- Maintain backward compatibility for existing components
- Remove old API infrastructure and dependencies

* fix: resolve import errors and add missing api hooks

- Create constants.ts for RECORD_A_MEETING_URL
- Add api-types.ts for backward compatible type exports
- Update all imports from deleted api folder to new locations
- Add missing React Query hooks for rooms and zulip operations
- Create useApi compatibility layer for unmigrated components

* feat: migrate components to React Query hooks

- Add comprehensive API hooks for all operations
- Migrate rooms page to use React Query mutations
- Update transcript title component to use mutation hook
- Refactor share/privacy component with proper error handling
- Remove direct API client usage in favor of hooks

* feat: complete migration from @hey-api/openapi-ts to openapi-react-query

- Migrated all components from useApi compatibility layer to direct React Query hooks
- Added new hooks for participant operations, room meetings, and speaker operations
- Updated all imports from old api module to api-types
- Fixed TypeScript types and API endpoint signatures
- Removed deprecated useApi.ts compatibility layer
- Fixed SourceKind enum values to match OpenAPI spec
- Added @ts-ignore for Zulip endpoints not in OpenAPI spec yet
- Fixed all compilation errors and type issues

* fix: authentication flow with React Query migration

- Fix middleware management in apiClient to properly handle auth tokens
- Update ApiAuthProvider to correctly configure base URL and auth
- Add missing NextAuth API route handler at app/api/auth/[...nextauth]/route.ts
- Remove middleware ejection attempts (not supported by openapi-fetch)
- Use global variables to store current auth token and API URL
- Setup middleware once on initialization instead of repeatedly adding

This fixes the login/logout flow that was broken after migrating from
the useApi compatibility layer to native React Query hooks.

* fix: prevent unauthorized API calls before authentication

- Add global AuthGuard component to handle authentication at layout level
- Make all API query hooks conditional on authentication status
- Define public routes (like /transcripts/new) that don't require auth
- Fix login flow to use NextAuth signIn instead of non-existent /login route
- Prevent 401 errors by waiting for auth token before making API calls

Previously, all routes under (app) were publicly accessible with each page
handling auth individually. Now authentication is enforced globally while
still allowing specific routes to remain public.

* refactor: remove redundant client-side AuthGuard

The authentication is already properly handled by Next.js middleware
in middleware.ts with LOGIN_REQUIRED_PAGES. The middleware approach is
superior as it:
- Provides server-side protection before page loads
- Prevents flash of unauthorized content
- Centralizes auth logic in one place
- Better performance (no client-side JS needed)

Keep the API hooks conditional to prevent 401 errors before token is ready.

* fix: use direct status check for API query authentication

Changed all query hooks to use direct `status === "authenticated"` check
instead of derived `isAuthenticated && !isLoading` to avoid race conditions
where queries might fire before the authentication token is properly set.

This prevents the brief 401 errors that occur on page refresh when the
session is being restored.

* fix: correct content-type header for FormData uploads

Previously, the API client was setting a default Content-Type of application/json
for all requests, which broke file uploads that need multipart/form-data.

Now the client only sets application/json when the body is not FormData,
allowing FormData to automatically set the correct multipart boundary.

* fix: resolve authentication race condition with React Query

Previously, API calls were being made before the auth token was configured,
causing initial 401 errors that would retry with 200 after token setup.

Changes:
- Add global auth readiness tracking in apiClient
- Create useAuthReady hook that checks both session and token state
- Update all API hooks to use isAuthReady instead of just session status
- Add AuthWrapper component at layout level for consistent loading UX
- Show spinner while authentication initializes across all pages

This ensures API calls only fire after authentication is fully configured,
eliminating the 401/retry pattern and improving user experience.

* refactor: clean up api-hooks.ts comments and improve search invalidation

- Remove redundant function category comments (exports are self-explanatory)
- Remove obvious inline comments for query invalidation
- Fix search endpoint invalidation to clear all queries regardless of parameters

* refactor: remove api-types.ts compatibility layer

- Migrated all 29 files from api-types.ts to use reflector-api.d.ts directly
- Removed $SourceKind manual enum in favor of OpenAPI-generated types
- Fixed unrelated Spinner component TypeScript error in AuthWrapper.tsx
- All imports now use: import type { components } from "path/to/reflector-api"
- Deleted api-types.ts file completely

* refactor: rename api-hooks.ts to apiHooks.ts for consistency

- Renamed api-hooks.ts to apiHooks.ts to follow camelCase convention
- Updated all 21 import statements across the codebase
- Maintains consistency with other non-component files (apiClient.tsx, useAuthReady.ts, etc.)
- Follows established naming pattern: PascalCase for components, camelCase for utilities/hooks

* chore: add .playwright-mcp to .gitignore

* refactor: remove SK helper object and use inline type casting in FilterSidebar

Replace the SK (SourceKind) helper object with direct inline type casting
to simplify the code and reduce unnecessary abstraction.

* chore: clean up migration comments from React Query refactoring

- Remove temporary "// Use new React Query hooks" comments
- Remove "// React Query hooks" comments from browse and rooms pages
- Update package.json script name from codegen to openapi for consistency

* refactor: remove Redis dependencies from frontend authentication

- Replace Redis/Redlock with in-memory cache for token management
- Remove @vercel/kv, ioredis, and redlock dependencies from package.json
- Implement simple lock mechanism for concurrent token refresh prevention
- Use Map-based cache with TTL for token storage
- Maintain same authentication flow without external dependencies

This simplifies the infrastructure requirements and removes the need for
Redis while maintaining the same functionality through in-memory caching.

* fix: add staleTime to prevent cross-tab staled data

* fix: remove infinite re-render loop in useSessionAccessToken

The hook was maintaining redundant local state that caused re-renders
on every update, which triggered NextAuth to continuously refetch the
session, resulting in hundreds of POST requests to /api/auth/session.

Simplified the hook to directly return session values without
unnecessary state duplication.

* fix: handle undefined access tokens in auth.ts

Added fallback to empty string for potentially undefined access_token
and refresh_token from NextAuth account object to satisfy
JWTWithAccessToken type requirements.

* Igor/mathieu/frontend openapi react query (#597)

* small typing

* typing fixes

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>

* self-review-fix

* authReady callback simplify

* fix auth

* fix compose

* room detail page fix

* compile fix

* room edit fix

* normalize auth provider

* room edition state granular management

* cover TODOs + cross-tab cache

* session auto refresh blink

* schema generator error type doc

* protect from zombie auth

* clarify access token refresh logic a bit

* remove react-query tab sharing cache

* remove react-query tab sharing cache

* websocket dupe react devmode protection

* invalidate room on room update

* redis cache

* test ts server

* ci randomness

* less edgy config (ci)

* less edgy config (ci)

* less edgy config (ci)

* ci randomness

* ci randomness

* ci randomness

* ci randomness

* less edgy config (ci)

* added vs edited room state cleanup

* file upload real-time state management fix

* prettier auth state ternary

* prettier auth state ternary

* proper api address from env

* INTERVAL_REFRESH_MS

* node version 20 for tests

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* CI debug

* CI debug

* nextjs magic

* nextjs magic

* doc

* client-side stale auth soft safety net

---------

Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 16:01:31 -06:00
0663700a61 fix: align whisper transcriber api with parakeet (#602)
* Documents transcriber api

* Update whisper transcriber api to match parakeet

* Update api transcription spec

* Return 400 for unsupported file type

* Add params to api spec

* Update whisper transcriber implementation to match parakeet
2025-09-05 10:52:14 +02:00
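
The "return 400 for unsupported file type" behavior is a small contract worth pinning down: reject the upload before any transcription work starts. A hedged FastAPI sketch; the endpoint path and extension set are assumptions, not the project's actual spec:

```python
from pathlib import Path

from fastapi import FastAPI, HTTPException, UploadFile

app = FastAPI()

SUPPORTED_EXTENSIONS = {".wav", ".mp3", ".flac", ".ogg"}  # assumed set

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    suffix = Path(file.filename or "").suffix.lower()
    if suffix not in SUPPORTED_EXTENSIONS:
        # reject before any transcription work starts
        raise HTTPException(
            status_code=400,
            detail=f"Unsupported file type: {suffix or 'unknown'}",
        )
    ...  # hand off to the transcriber
```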
293f7d4f1f feat: implement frontend video platform configuration and abstraction
- Add NEXT_PUBLIC_VIDEO_PLATFORM environment variable support
- Create video platform abstraction layer with factory pattern
- Implement Whereby and Jitsi platform providers
- Update room meeting page to use platform-agnostic component
- Add platform display in room management (cards and table views)
- Support single platform per deployment configuration
- Maintain backward compatibility with existing Whereby integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-04 12:21:51 -06:00
dc82f8bb3b fix: source kind for file processing (#601) 2025-09-04 08:42:21 -06:00
41224a424c docs: move platform-jitsi.md to docs/ directory 2025-09-02 18:28:50 -06:00
dd0089906f fix: replace datetime.utcnow() with datetime.now(tz=timezone.utc) in Jitsi health check 2025-09-02 18:25:55 -06:00
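
For context on that one-liner: datetime.utcnow() returns a naive datetime (and is deprecated as of Python 3.12), while datetime.now(tz=timezone.utc) attaches the UTC offset explicitly:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()              # tzinfo is None; deprecated in Python 3.12
aware = datetime.now(tz=timezone.utc)  # carries the UTC offset explicitly

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
```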
fa559b1970 feat: update and expand video platform tests
- Update existing tests for StrEnum instead of string literals
- Add comprehensive WherebyClient tests with HTTP mocking
- Add webhook event storage tests for participant and recording events
- Add typing overload tests for create_platform_client factory
- Update webhook test paths to new video_platforms router locations
- Fix mock ordering and parameter issues in async tests
- Test all platform client functionality including signature verification
- Verify webhook event storage with proper timestamp handling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 18:16:41 -06:00
c26ce65083 feat: update Jitsi documentation with webhook events storage system
- Add comprehensive webhook event storage documentation
- Document event structure and JSON storage in meetings table
- Add practical webhook testing examples with proper signature generation
- Include detailed troubleshooting for webhook signature verification issues
- Add webhook event payload examples for all supported event types
- Document event storage verification and database querying methods
- Enhance existing webhook configuration with real-world examples

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 18:09:44 -06:00
52eff2acc0 feat: clean up legacy code and remove excessive documentation
- Remove excessive inline comments from meeting creation flow
- Remove verbose docstrings from simple property methods and basic functions
- Clean up obvious comments like 'Generate JWT tokens', 'Build room URLs'
- Remove unnecessary explanatory comments in platform clients
- Keep only essential documentation for complex logic
- Simplify race condition handling comments
- Remove excessive method documentation for simple operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 18:08:12 -06:00
7875ec3432 feat: move platform routers to video_platforms folders
- Move Jitsi router from views/jitsi.py to video_platforms/jitsi/router.py
- Move Whereby router from views/whereby.py to video_platforms/whereby/router.py
- Update __init__.py files to export routers from platform packages
- Update app.py imports to use video_platforms instead of views
- Remove old view files after successful migration
- Maintain exact same API endpoint paths (/v1/jitsi, /v1/whereby)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 18:05:04 -06:00
398be06fad feat: add typing overloads and clean up platform client factory
- Add typing overloads to get_platform_client for JitsiClient and WherebyClient return types
- Add overloads to create_platform_client in factory for better IDE support
- Remove PyJWT fallback imports from views/rooms.py
- Remove platform defaults from CreateRoom and UpdateRoom models
- Clean up legacy whereby fallback code in meeting creation
- Use direct platform client access instead of conditional fallbacks

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 18:02:43 -06:00
da700069d9 Add webhook events storage to meetings model
- Add events column as JSON type to meetings table with default empty array
- Add events: List[Dict[str, Any]] field to Meeting model
- Create migration 2890b5104577 for events column and apply successfully
- Add MeetingController helper methods for event storage:
  - add_event() for generic event storage with timestamps
  - participant_joined(), participant_left() for participant tracking
  - recording_started(), recording_stopped() for recording events
  - get_events() for event retrieval
- Update Jitsi webhook endpoints to store events:
  - Store participant join/leave events with data and timestamps
  - Store recording start/stop events from Prosody webhooks
  - Store recording completion events from Jibri finalize script
- Events stored with type, timestamp, and data for webhook history tracking
- Fix linting and formatting issues

Addresses PR feedback point 12: save webhook events in meetings events field
2025-09-02 17:53:35 -06:00
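
The stored event shape named at the end of that message ({type, timestamp, data} appended to a JSON column defaulting to an empty array) could look roughly like this; the table definition and helper name are assumptions:

```python
from datetime import datetime, timezone
from typing import Any

import sqlalchemy as sa

meetings = sa.Table(
    "meeting",
    sa.MetaData(),
    sa.Column("id", sa.String, primary_key=True),
    # default to an empty array so older rows read back as [] rather than NULL
    sa.Column("events", sa.JSON, nullable=False, server_default=sa.text("'[]'")),
)

def make_event(event_type: str, data: dict[str, Any]) -> dict[str, Any]:
    # each stored event carries type, timestamp, and data, per the commit
    return {
        "type": event_type,
        "timestamp": datetime.now(tz=timezone.utc).isoformat(),
        "data": data,
    }
```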
51229a1790 Fix Jitsi client issues and create typed meeting data
- Remove 'transcription': True from JWT features in _generate_jwt
- Replace int(time.time()) with generate_uuid4() for room naming to avoid conflicts
- Replace datetime.utcnow() with datetime.now(tz=timezone.utc) for proper timezone handling
- Create JitsiMeetingData(MeetingData) class with typed extra_data properties
- Update PLATFORM_NAME = VideoPlatform.JITSI to use enum
- Update create_meeting to return JitsiMeetingData instance with proper typing
- Fix get_room_sessions mock to use timezone-aware datetime
- Export JitsiMeetingData from jitsi module

Addresses PR feedback points 4, 5, 6, 10: remove transcription features, use UUID,
proper datetime handling, and typed meeting data
2025-09-02 17:44:04 -06:00
2d2c23f7cc Create video_platforms/whereby structure and WherebyClient
- Create video_platforms/whereby/ directory with __init__.py, client.py, tasks.py
- Implement WherebyClient inheriting from VideoPlatformClient interface
- Move all functions from whereby.py into WherebyClient methods
- Use VideoPlatform.WHEREBY enum for PLATFORM_NAME
- Register WherebyClient in platform registry
- Update factory.py to include S3 bucket config for whereby
- Update worker process to use platform abstraction for get_room_sessions
- Preserve exact API behavior for meeting activity detection
- Maintain AWS S3 configuration handling in WherebyClient
- Fix linting and formatting issues

Addresses PR feedback point 7: implement video_platforms/whereby structure
Note: whereby.py kept for legacy fallback until task 7 cleanup
2025-09-02 17:40:32 -06:00
0acb9cac79 Replace Literal with VideoPlatform StrEnum for platform field
- Create VideoPlatform StrEnum with WHEREBY and JITSI values
- Update rooms.py and meetings.py to use VideoPlatform enum
- Update views/rooms.py and video_platforms/factory.py to use enum values
- Generate new migration with proper server_default='whereby'
- Apply migration successfully with backward compatibility
- Fix linting and formatting issues

Addresses PR feedback point 1: use StrEnum instead of Literal[]
2025-09-02 17:36:14 -06:00
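
For reference, the enum described above is tiny; the values come from the commit message, and StrEnum (Python 3.11+) keeps plain string comparisons working where Literal[] strings were used before:

```python
from enum import StrEnum

class VideoPlatform(StrEnum):
    WHEREBY = "whereby"
    JITSI = "jitsi"

assert VideoPlatform.WHEREBY == "whereby"  # StrEnum members compare as str
```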
d861d92cc2 docs: add comprehensive Jitsi Meet integration user guide
- Complete end-user configuration guide for self-hosted Jitsi Meet
- Covers installation, JWT authentication, and Prosody configuration
- Webhook event handling with mod_event_sync setup
- Jibri recording service configuration and finalize script
- Room creation, JWT token management, and security best practices
- Comprehensive troubleshooting with debug commands and solutions
- Performance optimization and scaling considerations
- Migration guidance from Whereby platform

🤖 Generated with Claude Code
2025-09-02 17:07:09 -06:00
24ff83a2ec docs: add comprehensive Whereby integration user guide
- Complete end-user configuration guide for Whereby video platform
- Covers account setup, API key generation, and webhook configuration
- AWS S3 storage setup with IAM permissions and security best practices
- Room creation, recording options, and meeting feature configuration
- Troubleshooting guide with common issues and debug commands
- Security considerations and performance optimization tips
- Migration guidance from other platforms

🤖 Generated with Claude Code
2025-09-02 17:05:40 -06:00
249234238c feat: add comprehensive video platform test suite
- Created complete test coverage for video platform abstraction
- Tests for base classes, JitsiClient implementation, and platform registry
- JWT generation tests with proper mocking and error scenarios
- Webhook signature verification tests (valid/invalid/missing secret)
- Platform factory tests for Jitsi and Whereby configuration
- Registry tests for platform registration and client creation
- Webhook endpoint tests with signature verification and error cases
- Integration tests for rooms endpoint with platform abstraction
- 24 comprehensive test cases covering all video platform functionality
- All tests passing with proper mocking and isolation

🤖 Generated with Claude Code
2025-09-02 16:54:58 -06:00
42a603d5c3 feat: add PyJWT dependency and finalize Jitsi integration
- Added PyJWT>=2.8.0 to pyproject.toml dependencies
- Installed dependency via uv sync successfully
- Verified JWT generation functionality works correctly
- Confirmed platform factory creates JitsiClient instances
- Validated database migrations applied (platform fields available)
- Tested webhook endpoints are registered and functional
- Verified FastAPI app starts without errors with full integration
- All integration tests pass - Jitsi platform fully functional

🤖 Generated with Claude Code
2025-09-02 16:28:44 -06:00
6d2092f950 feat: create comprehensive Jitsi integration documentation
- Added complete end-user configuration guide at server/platform-jitsi.md
- Covers prerequisites, environment setup, and Jitsi Meet configuration
- Includes JWT authentication, Jibri recording, and Prosody event-sync setup
- Provides troubleshooting guide with common issues and solutions
- Documents security best practices and performance optimization
- Includes testing procedures and migration guidance from Whereby
- Ready for production deployment with step-by-step instructions
- Uses environment variable placeholders for security

🤖 Generated with Claude Code
2025-09-02 16:24:47 -06:00
f2bb6aaecb feat: update rooms.py to use video platform abstraction
- Added platform field to Room, CreateRoom, and UpdateRoom models
- Updated rooms_create function to pass platform parameter
- Rewrote rooms_create_meeting to use platform factory pattern
- Added graceful fallback to legacy whereby implementation
- Maintained API compatibility and error handling patterns
- Prepared for multi-platform support (Whereby/Jitsi)

🤖 Generated with Claude Code
2025-09-02 16:21:58 -06:00
2b136ac7b0 feat: create Jitsi webhook endpoints for event handling
- Added comprehensive Jitsi webhook endpoint in views/jitsi.py
- Handles Prosody event-sync events (muc-occupant-joined/left)
- Implements participant counting following whereby.py pattern
- Added Jibri recording completion webhook endpoint
- Includes signature verification with fallback when platform client unavailable
- Registered router in app.py for /v1/jitsi endpoints
- Added health check endpoint for webhook configuration

🤖 Generated with Claude Code
2025-09-02 16:19:54 -06:00
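
The signature-verification piece follows the usual HMAC-SHA256 pattern. A minimal sketch assuming a hex digest and constant-time comparison; the header name and encoding are not specified in the commit:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 digest of the raw body and compare it in
    constant time to the signature sent by the webhook caller."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```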
3f4fc26483 feat: register Jitsi platform in video platforms factory and registry
- Added JitsiClient registration to platform registry
- Enables dynamic platform selection through factory pattern
- Factory configuration already supports Jitsi settings
- Platform abstraction layer now supports both Whereby and Jitsi

🤖 Generated with Claude Code
2025-09-02 16:17:32 -06:00
8e5ef5bca6 feat: implement JitsiClient with JWT authentication
Complete implementation of JitsiClient following VideoPlatformClient interface
with JWT-based room access control and webhook signature verification.

- Add JWT token generation with proper payload structure
- Implement unique room name generation with timestamp
- Create separate user/host JWT tokens with moderator permissions
- Build secure room URLs with embedded JWT parameters
- Add HMAC-SHA256 webhook signature verification for Prosody events
- Implement all abstract methods with Jitsi-specific behavior
- Include comprehensive typing and error handling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 16:15:49 -06:00
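
A sketch of the JWT minting this describes, using PyJWT and the JITSI_* setting names introduced in the settings commit further down this log. The exact claim layout depends on the Prosody plugins in use, so treat the payload as illustrative:

```python
import time

import jwt  # PyJWT

def build_room_token(
    room: str,
    *,
    moderator: bool,
    secret: str,    # JITSI_JWT_SECRET
    app_id: str,    # JITSI_APP_ID
    issuer: str,    # JITSI_JWT_ISSUER
    audience: str,  # JITSI_JWT_AUDIENCE
) -> str:
    payload = {
        "iss": issuer,
        "aud": audience,
        "sub": app_id,
        "room": room,
        "exp": int(time.time()) + 3600,  # one-hour lifetime (assumed)
        # moderator flag for the host token; guest tokens pass False
        "context": {"user": {"moderator": moderator}},
    }
    return jwt.encode(payload, secret, algorithm="HS256")

# The room URL then embeds the token as a query parameter, e.g.:
# f"https://{JITSI_DOMAIN}/{room}?jwt={token}"
```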
d49fdcb38d feat: create video platforms architecture with Jitsi directory structure
Create complete video platforms abstraction layer following daily.co branch
pattern with Jitsi-specific directory structure.

- Add video_platforms base module with abstract classes
- Create VideoPlatformClient, MeetingData, VideoPlatformConfig interfaces
- Add platform registry system for client management
- Create factory pattern for platform client creation
- Add Jitsi directory structure with __init__.py, tasks.py, client.py
- Configure Jitsi platform in factory with JWT-based authentication

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 16:14:42 -06:00
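
The abstraction boils down to an abstract client plus a registry keyed by platform name. A rough sketch under those assumptions; only the names VideoPlatformClient, MeetingData, and create_platform_client come from the commits, while fields and method signatures are guesses:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class MeetingData:
    # field names are assumptions; the commit only names the class
    room_url: str
    host_room_url: str

class VideoPlatformClient(ABC):
    PLATFORM_NAME: str

    @abstractmethod
    async def create_meeting(self, room_name: str) -> MeetingData: ...

_REGISTRY: dict[str, type[VideoPlatformClient]] = {}

def register_platform(cls: type[VideoPlatformClient]) -> type[VideoPlatformClient]:
    """Class decorator that makes a client discoverable by platform name."""
    _REGISTRY[cls.PLATFORM_NAME] = cls
    return cls

def create_platform_client(platform: str, **config) -> VideoPlatformClient:
    return _REGISTRY[platform](**config)
```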
d42380abf1 feat: add Jitsi configuration settings
Add comprehensive Jitsi Meet configuration settings to settings.py
following the same pattern as WHEREBY settings.

- Add JITSI_DOMAIN with meet.jit.si default
- Add JITSI_JWT_SECRET for JWT token signing
- Add JITSI_WEBHOOK_SECRET for webhook validation
- Add JITSI_APP_ID, JITSI_JWT_ISSUER, JITSI_JWT_AUDIENCE for JWT configuration
- Follow consistent naming and typing patterns

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 16:11:00 -06:00
cf64e1a3d9 feat: add database migration for platform field
Generate Alembic migration to add platform column to rooms and meetings
tables enabling multi-platform video conferencing support.

- Add platform column to meeting table with whereby default
- Add platform column to room table with whereby default
- Migration tested successfully with alembic upgrade head

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 16:10:11 -06:00
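
The migration itself is essentially one add_column per table; a sketch matching the server_default='whereby' described in the related StrEnum commit, with revision boilerplate omitted:

```python
import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    for table in ("room", "meeting"):
        op.add_column(
            table,
            # server default keeps existing rows valid under NOT NULL
            sa.Column("platform", sa.String(), nullable=False,
                      server_default="whereby"),
        )

def downgrade() -> None:
    for table in ("room", "meeting"):
        op.drop_column(table, "platform")
```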
ea53ca7000 feat: add platform field to Room and Meeting models
Add platform column to rooms and meetings database tables with Literal typing
to support multiple video conferencing platforms (whereby, jitsi).

- Add platform column to rooms/meetings SQLAlchemy tables with whereby default
- Update Room/Meeting Pydantic models with platform field and Literal typing
- Modify RoomController.add() to accept platform parameter
- Update MeetingController.create() to copy platform from room

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-02 16:08:38 -06:00
457823e1c1 chore(main): release 0.8.2 (#595) 2025-09-01 19:09:09 -06:00
Igor Monadical
695d1a957d fix: search-logspam (#593)
* fix: search-logspam

* llm comment

* fix tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-08-29 18:55:51 -04:00
ccffdba75b chore(main): release 0.8.1 (#591) 2025-08-29 11:56:11 -06:00
84a381220b fix: make webhook secret/url allowing null (#590) 2025-08-29 11:55:18 -06:00
5f2f0e9317 chore(main): release 0.8.0 (#579) 2025-08-29 11:34:24 -06:00
88ed7cfa78 feat(rooms): add webhook for transcript completion (#578)
* feat(rooms): add webhook notifications for transcript completion

- Add webhook_url and webhook_secret fields to rooms table
- Create Celery task with 24-hour retry window using exponential backoff
- Send transcript metadata, diarized text, topics, and summaries via webhook
- Add HMAC signature verification for webhook security
- Add test endpoint POST /v1/rooms/{room_id}/webhook/test
- Update frontend with webhook configuration UI and test button
- Auto-generate webhook secret if not provided
- Trigger webhook after successful file pipeline processing for room recordings

* style: linting

* fix: remove unwanted files

* fix: update openapi gen

* fix: self-review

* docs: add comprehensive webhook documentation

- Document webhook configuration, events, and payloads
- Include transcript.completed and test event examples
- Add security considerations and best practices
- Provide example webhook receiver implementation
- Document retry policy and signature verification

* fix: remove audio_mp3_url from webhook payload

- Remove audio download URL generation from webhook
- Update documentation to reflect the change
- Keep only frontend_url for accessing transcripts

* docs: remove unwanted section

* fix: correct API method name and type imports for rooms

- Fix v1RoomsRetrieve to v1RoomsGet
- Update Room type to RoomDetails throughout frontend
- Fix type imports in useRoomList, RoomList, RoomTable, and RoomCards

* feat: add show/hide toggle for webhook secret field

- Add eye icon button to reveal/hide webhook secret when editing
- Show password dots when webhook secret is hidden
- Reset visibility state when opening/closing dialog
- Only show toggle button when editing existing room with secret

* fix: resolve event loop conflict in webhook test endpoint

- Extract webhook test logic into shared async function
- Call async function directly from FastAPI endpoint
- Keep Celery task wrapper for background processing
- Fixes RuntimeError: event loop already running

* refactor: remove unnecessary Celery task for webhook testing

- Webhook testing is synchronous and provides immediate feedback
- No need for background processing via Celery
- Keep only the async function called directly from API endpoint

* feat: improve webhook test error messages and display

- Show HTTP status code in error messages
- Parse JSON error responses to extract meaningful messages
- Improved UI layout for webhook test results
- Added colored background for success/error states
- Better text wrapping for long error messages

* docs: adjust doc

* fix: review

* fix: update attempts to match close 24h

* fix: add event_id

* fix: changed to uuid, to have new event_id when reprocess.

* style: linting

* fix: alembic revision
2025-08-29 10:07:49 -06:00
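
The "24-hour retry window using exponential backoff" maps naturally onto Celery's retry options. A hedged sketch; the backoff numbers and header name are illustrative, not the project's exact values:

```python
import httpx
from celery import shared_task

@shared_task(
    bind=True,
    autoretry_for=(httpx.HTTPError,),
    retry_backoff=60,                  # 60s, 120s, 240s, ... between attempts
    retry_backoff_max=4 * 60 * 60,     # cap individual waits at 4 hours
    retry_kwargs={"max_retries": 16},  # attempt count sized to stay inside ~24h
)
def send_transcript_webhook(self, url: str, body: bytes, signature: str) -> None:
    response = httpx.post(
        url,
        content=body,
        headers={"X-Webhook-Signature": signature},  # header name assumed
        timeout=10,
    )
    response.raise_for_status()  # non-2xx triggers the autoretry above
```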
6f0c7c1a5e feat(cleanup): add automatic data retention for public instances (#574)
* feat(cleanup): add automatic data retention for public instances

- Add Celery task to clean up anonymous data after configurable retention period
- Delete transcripts, meetings, and orphaned recordings older than retention days
- Only runs when PUBLIC_MODE is enabled to prevent accidental data loss
- Properly removes all associated files (local and S3 storage)
- Add manual cleanup tool for testing and intervention
- Configure retention via PUBLIC_DATA_RETENTION_DAYS setting (default: 7 days)

Fixes #571

* fix: apply pre-commit formatting fixes

* fix: properly delete recording files from storage during cleanup

- Add storage deletion for orphaned recordings in both cleanup task and manual tool
- Delete from storage before removing database records
- Log warnings if storage deletion fails but continue with database cleanup

* Apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* Apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* refactor: cleanup_old_data for better logging

* fix: linting

* test: fix meeting cleanup test to not require room controller

- Simplify test by directly inserting meetings into database
- Remove dependency on non-existent rooms_controller.create method
- Tests now pass successfully

* fix: linting

* refactor: simplify cleanup tool to use worker implementation

- Remove duplicate cleanup logic from manual tool
- Use the same _cleanup_old_public_data function from worker
- Remove dry-run feature as requested
- Prevent code duplication and ensure consistency
- Update documentation to reflect changes

* refactor: split cleanup worker into smaller functions

- Move all imports to the top of the file
- Extract cleanup logic into separate functions:
  - cleanup_old_transcripts()
  - cleanup_old_meetings()
  - cleanup_orphaned_recordings()
  - log_cleanup_results()
- Make code more maintainable and testable
- Add days parameter support to Celery task
- Update manual tool to work with refactored code

* feat: add TypedDict typing for cleanup stats

- Add CleanupStats TypedDict for better type safety
- Update all function signatures to use proper typing
- Add return type annotations to _cleanup_old_public_data
- Improves code maintainability and IDE support

* feat: add CASCADE DELETE to meeting_consent foreign key

- Add ondelete="CASCADE" to meeting_consent.meeting_id foreign key
- Generate and apply migration to update existing constraint
- Remove manual consent deletion from cleanup code
- Add unit test to verify CASCADE DELETE behavior

* style: linting

* fix: alembic migration branchpoint

* fix: correct downgrade constraint name in CASCADE DELETE migration

* fix: regenerate CASCADE DELETE migration with proper constraint names

- Delete problematic migration and regenerate with correct names
- Use explicit constraint name in both upgrade and downgrade
- Ensure migration works bidirectionally
- All tests passing including CASCADE DELETE test

* style: linting

* refactor: simplify cleanup to use transcripts as entry point

- Remove orphaned_recordings cleanup (not part of this PR scope)
- Remove separate old_meetings cleanup
- Transcripts are now the main entry point for cleanup
- Associated meetings and recordings are deleted with their transcript
- Use single database connection for all operations
- Update tests to reflect new approach

* refactor: cleanup and rename functions for clarity

- Rename _cleanup_old_public_data to cleanup_old_public_data (make public)
- Rename celery task to cleanup_old_public_data_task for clarity
- Update docstrings and improve code organization
- Remove unnecessary comments and simplify deletion logic
- Update tests to use new function names
- All tests passing

* style: linting

* style: typing and review

* fix: add transaction on cleanup_single_transcript

* fix: naming

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-08-29 08:47:14 -06:00
9dfd76996f fix: file pipeline status reporting and websocket updates (#589)
* feat: use file pipeline for upload and reprocess action

* fix: make file pipeline correctly report status events

* fix: duplication of transcripts_controller

* fix: tests

* test: fix file upload test

* test: fix reprocess

* fix: also patch from main_file_pipeline

(how the patch is applied depends on the file import, unfortunately)
2025-08-29 00:58:14 -06:00
55cc8637c6 ci: restrict workflow execution to main branch and add concurrency (#586)
* ci: try adding concurrency

* ci: restrict push on main branch

* ci: fix concurrency key

* ci: fix build concurrency

* refactor: apply suggestion from @pr-agent-monadical[bot]

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2025-08-28 16:43:17 -06:00
f5331a2107 style: more type annotations to parakeet transcriber (#581)
* feat: add comprehensive type annotations to Parakeet transcriber

- Add TypedDict for WordTiming with word, start, end fields
- Add NamedTuple for TimeSegment, AudioSegment, and TranscriptResult
- Add type hints to all generator functions (vad_segment_generator, batch_speech_segments, etc.)
- Add enforce_word_timing_constraints function to prevent word timing overlaps
- Refactor batch_segment_to_audio_segment to reuse pad_audio function

* doc: add note about space
2025-08-28 12:22:07 -06:00
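
The annotations named in the first bullet are easy to picture; a reconstruction from the commit message, with the overlap-enforcement logic being an assumed implementation:

```python
from typing import NamedTuple, TypedDict

class WordTiming(TypedDict):
    word: str
    start: float
    end: float

class TimeSegment(NamedTuple):
    start: float
    end: float

def enforce_word_timing_constraints(words: list[WordTiming]) -> list[WordTiming]:
    """Clamp each word's start so timings never overlap the previous word
    (an assumed implementation of the function the commit names)."""
    for prev, cur in zip(words, words[1:]):
        if cur["start"] < prev["end"]:
            cur["start"] = prev["end"]
    return words
```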
Igor Loskutov
124ce03bf8 fix: Igor/evaluation (#575)
* fix: impossible import error (#563)

* evaluation cli - database events experiment

* hallucinations

* evaluation - unhallucinate

* evaluation - unhallucinate

* roll back reliability link

* self reviewio

* lint

* self review

* add file pipeline to cli

* add file pipeline to cli + sorting

* remove cli tests

* remove ai comments

* comments
2025-08-28 12:07:34 -04:00
7030e0f236 fix: optimize parakeet transcription batching algorithm (#577)
* refactor: optimize transcription batching to accumulate speech segments

- Changed VAD segment generator to return full audio array instead of segments
- Removed segment filtering step
- Modified batch_segments to accumulate maximum speech including silence
- Transcribe larger continuous chunks instead of individual speech segments

* fix: correct transcribe_batch call to use list and fix batch unpacking

* fix: simplify

* fix: remove unused variables

* fix: add typing
2025-08-27 10:32:04 -06:00
37f0110892 doc: update local model readme 2025-08-22 17:50:24 -06:00
cf2896a7f4 doc: update readme about installation instructions
Add a note about installation instructions being inaccurate.
2025-08-22 17:48:35 -06:00
aabf2c2572 chore(main): release 0.7.3 (#565) 2025-08-22 16:35:52 -06:00
6a7b08f016 doc: change readme intro 2025-08-22 16:26:25 -06:00
e2736563d9 doc: update readme with new images 2025-08-22 16:15:54 -06:00
0f54b7782d chore: ignore www/.env.[development,production] 2025-08-22 14:41:09 -06:00
359280dd34 fix: cleaned repo, and get git-leaks clean 2025-08-22 11:51:34 -06:00
9265d201b5 fix: restore previous behavior on live pipeline + audio downscaler (#561)
This commit restores the original behavior with frame cutting. While
Silero is used on our GPU for files, it looks like it's not working well on
the live pipeline. To be investigated, but at the moment, what we keep
is:

- refactored to extract the downscale step for further processing in the
pipeline
- removed any downscale implementation from audio_chunker and audio_merge
- removed batching from audio_merge too, for now
2025-08-22 10:49:26 -06:00
52f9f533d7 chore(main): release 0.7.2 (#559) 2025-08-21 21:00:05 -06:00
0c3878ac3c fix: docker image not loading libgomp.so.1 for torch (#560)
On ARM64, the Docker image crashes because torch cannot load libgomp.so.1
-- it looks like PyTorch does not install the same packages depending on the
platform.

AMD64:

/app/.venv/lib/python3.12/site-packages/torch/lib/libgomp.so.1
/app/.venv/lib/python3.12/site-packages/ctranslate2.libs/libgomp-a34b3233.so.1.0.0
/app/.venv/lib/python3.12/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0

ARM64:

/app/.venv/lib/python3.12/site-packages/ctranslate2.libs/libgomp-d22c30c5.so.1.0.0
/app/.venv/lib/python3.12/site-packages/scikit_learn.libs/libgomp-947d5fa1.so.1.0.0
/app/.venv/lib/python3.12/site-packages/torch.libs/libgomp-947d5fa1.so.1.0.0
2025-08-21 16:41:35 -06:00
Igor Loskutov
d70beee51b fix: include shared rooms to search (#558)
* include shared rooms to search

* tests vibe

* tests vibe

* tests vibe

* tests vibe

* tests vibe

* tests vibe

* tests vibe

* remove tests, thats too much
2025-08-21 14:52:29 -04:00
bc5b351d2b chore(main): release 0.7.1 (#557) 2025-08-20 23:23:27 -06:00
Igor Loskutov
07981e8090 fix: webvtt db null expectation mismatch (#556) 2025-08-20 23:22:41 -06:00
7e366f6338 chore(main): release 0.7.0 (#541) 2025-08-20 22:24:36 -06:00
7592679a35 build: separate silero-vad and force torch to be resolved without nvidia (#555)
* build: separate silero-vad and force torch to be resolved without nvidia

* build: also add torchaudio as cpu version
2025-08-20 22:23:48 -06:00
af16178f86 ci: use github-token to get around potential api throttling + rework dockerfile (#554)
* ci: use github-token to get around potential api throttling

* build: put pyannote-audio separate to the project

* fix: now that we have a readme, use it

* build: add UV_NO_CACHE
2025-08-20 21:59:29 -06:00
3ea7f6b7b6 feat: pipeline improvement with file processing, parakeet, silero-vad (#540)
* feat: improve pipeline threading, and transcriber (parakeet and silero vad)

* refactor: remove whisperx, implement parakeet

* refactor: make audio_chunker more smart and wait for speech, instead of fixed frame

* refactor: make audio merge to always downscale the audio to 16k for transcription

* refactor: make the audio transcript modal accepting batches

* refactor: improve type safety and remove prometheus metrics

- Add DiarizationSegment TypedDict for proper diarization typing
- Replace List/Optional with modern Python list/| None syntax
- Remove all Prometheus metrics from TranscriptDiarizationAssemblerProcessor
- Add comprehensive file processing pipeline with parallel execution
- Update processor imports and type annotations throughout
- Implement optimized file pipeline as default in process.py tool

* refactor: convert FileDiarizationProcessor I/O types to BaseModel

Update FileDiarizationInput and FileDiarizationOutput to inherit from
BaseModel instead of plain classes, following the standard pattern
used by other processors in the codebase.

* test: add tests for file transcript and diarization with pytest-recording

* build: add pytest-recording

* feat: add local pyannote for testing

* fix: replace PyAV AudioResampler with torchaudio for reliable audio processing

- Replace problematic PyAV AudioResampler that was causing ValueError: [Errno 22] Invalid argument
- Use torchaudio.functional.resample for robust sample rate conversion
- Optimize processing: skip conversion for already 16kHz mono audio
- Add direct WAV writing with Python wave module for better performance
- Consolidate duplicate downsample checks for cleaner code
- Maintain list[av.AudioFrame] input interface
- Required for Silero VAD which needs 16kHz mono audio

* fix: replace PyAV AudioResampler with torchaudio solution

- Resolves ValueError: [Errno 22] Invalid argument in AudioMergeProcessor
- Replaces problematic PyAV AudioResampler with torchaudio.functional.resample
- Optimizes processing to skip unnecessary conversions when audio is already 16kHz mono
- Uses direct WAV writing with Python's wave module for better performance
- Fixes test_basic_process to disable diarization (pyannote dependency not installed)
- Updates test expectations to match actual processor behavior
- Removes unused pydub dependency from pyproject.toml
- Adds comprehensive TEST_ANALYSIS.md documenting test suite status

* feat: add parameterized test for both diarization modes

- Adds @pytest.mark.parametrize to test_basic_process with enable_diarization=[False, True]
- Test with diarization=False always passes (tests core AudioMergeProcessor functionality)
- Test with diarization=True gracefully skips when pyannote.audio is not installed
- Provides comprehensive test coverage for both pipeline configurations

* fix: resolve pipeline property naming conflict in AudioDiarizationPyannoteProcessor

- Renames 'pipeline' property to 'diarization_pipeline' to avoid conflict with base Processor.pipeline attribute
- Fixes AttributeError: 'property 'pipeline' object has no setter' when set_pipeline() is called
- Updates property usage in _diarize method to use new name
- Now correctly supports pipeline initialization for diarization processing

* fix: add local for pyannote

* test: add diarization test

* fix: resample on audio merge now working

* fix: correctly restore timestamp

* fix: display exception in a threaded processor if that happen

* Update pyproject.toml

* ci: remove option

* ci: update astral-sh/setup-uv

* test: add monadical url for pytest-recording

* refactor: remove previous version

* build: move faster whisper to local dep

* test: fix missing import

* refactor: improve main_file_pipeline organization and error handling

- Move all imports to the top of the file
- Create unified EmptyPipeline class to replace duplicate mock pipeline code
- Remove timeout and fallback logic - let processors handle their own retries
- Fix error handling to raise any exception from parallel tasks
- Add proper type hints and validation for captured results

* fix: wrong function

* fix: remove task_done

* feat: add configurable file processing timeouts for modal processors

- Add TRANSCRIPT_FILE_TIMEOUT setting (default: 600s) for file transcription
- Add DIARIZATION_FILE_TIMEOUT setting (default: 600s) for file diarization
- Replace hardcoded timeout=600 with configurable settings in modal processors
- Allows customization of timeout values via environment variables

* fix: use logger

* fix: worker process meetings now use file pipeline

* fix: topic not gathered

* refactor: remove prepare(), pipeline now work

* refactor: implement many review from Igor

* test: add test for test_pipeline_main_file

* refactor: remove doc

* doc: add doc

* ci: update build to use native arm64 builder

* fix: merge fixes

* refactor: changes from Igor review + add test (not by default) to test gpu modal part

* ci: update to our own runner linux-amd64

* ci: try using suggested mode=min

* fix: update diarizer for latest modal, and use volume

* fix: modal file extension detection

* fix: put the diarizer as A100
2025-08-20 20:07:19 -06:00
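
A recurring theme in the commits above is downscaling to 16 kHz mono for Silero VAD via torchaudio.functional.resample. A minimal sketch of that step, skipping work when the audio already matches the target:

```python
import torch
import torchaudio.functional as F

TARGET_RATE = 16_000  # Silero VAD expects 16 kHz mono

def to_16k_mono(waveform: torch.Tensor, sample_rate: int) -> torch.Tensor:
    # waveform shape: (channels, samples)
    if waveform.size(0) > 1:
        waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono
    if sample_rate != TARGET_RATE:
        waveform = F.resample(waveform, orig_freq=sample_rate,
                              new_freq=TARGET_RATE)
    return waveform
```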
Igor Loskutov
009590c080 feat: search frontend (#551)
* feat: better highlight

* feat(search): add long_summary to search vector for improved search results

- Update search vector to include long_summary with weight B (between title A and webvtt C)
- Modify SearchController to fetch long_summary and prioritize its snippets
- Generate snippets from long_summary first (max 2), then from webvtt for remaining slots
- Add comprehensive tests for long_summary search functionality
- Create migration to update search_vector_en column in PostgreSQL

This improves search quality by including summarized content which often contains
key topics and themes that may not be explicitly mentioned in the transcript.

* fix: address code review feedback for search enhancements

- Fix test file inconsistencies by removing references to non-existent model fields
  - Comment out tests for unimplemented features (room_ids, status filters, date ranges)
  - Update tests to only use currently available fields (room_id singular, no room_name/processing_status)
  - Mark future functionality tests with @pytest.mark.skip

- Make snippet counts configurable
  - Add LONG_SUMMARY_MAX_SNIPPETS constant (default: 2)
  - Replace hardcoded value with configurable constant

- Improve error handling consistency in WebVTT parsing
  - Use different log levels for different error types (debug for malformed, warning for decode, error for unexpected)
  - Add catch-all exception handler for unexpected errors
  - Include stack trace for critical errors

All existing tests pass with these changes.

* fix: correct datetime test to include required duration field

* feat: better highlight

* feat: search room names

* feat: acknowledge deleted room

* feat: search filters fix and rank removal

* chore: minor refactoring

* feat: better matches frontend

* chore: self-review (vibe)

* chore: self-review WIP

* chore: self-review WIP

* chore: self-review WIP

* chore: self-review WIP

* chore: self-review WIP

* chore: self-review WIP

* chore: self-review WIP

* remove swc (vibe)

* search url query sync (vibe)

* search url query sync (vibe)

* better casts and cap while

* PR review + simplify frontend hook

* pr: remove search db timeouts

* cleanup tests

* tests cleanup

* frontend cleanup

* index declarations

* refactor frontend (self-review)

* fix search pagination

* clear "x" for search input

* pagination max pages fix

* chore: cleanup

* cleanup

* cleanup

* cleanup

* cleanup

* cleanup

* cleanup

* cleanup

* lockfile

* pr review
2025-08-20 20:56:45 -04:00
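
The weighting scheme described above (title at weight A, long_summary at B, webvtt at C) corresponds to a Postgres tsvector built with setweight. An illustrative Alembic sketch; the transcript table name is an assumption:

```python
from alembic import op

def upgrade() -> None:
    op.execute("ALTER TABLE transcript DROP COLUMN IF EXISTS search_vector_en")
    op.execute(
        """
        ALTER TABLE transcript ADD COLUMN search_vector_en tsvector
        GENERATED ALWAYS AS (
            setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
            setweight(to_tsvector('english', coalesce(long_summary, '')), 'B') ||
            setweight(to_tsvector('english', coalesce(webvtt, '')), 'C')
        ) STORED
        """
    )
```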
Igor Loskutov
fe5d344cff diarization cli: throw on modal errors (#553) 2025-08-20 10:21:52 -04:00
Igor Loskutov
86455ce573 chore: type fixes (#544)
* chore: type fixes

* chore: type fixes
2025-08-18 16:31:23 -04:00
2fccd81bcd fix: use structlog not logging (#550) 2025-08-15 15:41:23 -06:00
1311714451 ci: add pre-commit hook and fix linting issues (#545)
* style: deactivate PLC0415 only on part that it's ok

+ re-run pre-commit run --all

* ci: add pre-commit hook

* build: move from yarn to pnpm

* build: move from yarn to pnpm

* build: fix node-version

* ci: install pnpm prior node (?)

* build: update deps and pnpm trying to fix vercel build

* feat: docker www corepack

* style: pre-commit

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-08-14 20:59:54 -06:00
b9d891d342 feat: delete recording with transcript (#547)
* Delete recording with transcript

* Delete confirmation dialog

* Use aws storage abstraction for recording deletion

* Test recording deleted with transcript

* Use get transcript storage

* Fix the test

* Add env vars for recording storage
2025-08-14 20:45:30 +02:00
9eab952c63 feat: postgresql migration and removal of sqlite in pytest (#546)
* feat: remove support for sqlite, 100% postgres

* fix: more migration and make datetime timezone aware in postgres

* fix: change how the database is obtained, and use a contextvar to get a different instance on different event loops

* test: properly use client fixture that handle lifetime/database connection

* fix: add missing client fixture parameters to test functions

This commit fixes NameError issues where test functions were trying to use
the 'client' fixture but didn't have it as a parameter. The changes include:

1. Added 'client' parameter to test functions in:
   - test_transcripts_audio_download.py (6 functions including fixture)
   - test_transcripts_speaker.py (3 functions)
   - test_transcripts_upload.py (1 function)
   - test_transcripts_rtc_ws.py (2 functions + appserver fixture)

2. Resolved naming conflicts in test_transcripts_rtc_ws.py where both HTTP
   client and StreamClient were using variable name 'client'. StreamClient
   instances are now named 'stream_client' to avoid conflicts.

3. Added missing 'from reflector.app import app' import in rtc_ws tests.

Background: the previously implemented contextvars solution with the get_database()
function resolves asyncio event loop conflicts in Celery tasks. The global
client fixture was also created to replace manual AsyncClient instances,
ensuring proper FastAPI application lifecycle management and database
connections during tests.

All tests now pass except for 2 pre-existing RTC WebSocket test failures
related to asyncpg connection issues unrelated to these fixes.

* fix: ensure task are correctly closed

* fix: make separate event loop for the live server

* fix: make default settings pointing at postgres

* build: remove pytest-docker deps out of dev, just tests group
2025-08-14 11:40:52 -06:00
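
The contextvar approach mentioned in the third bullet could look roughly like this, assuming the encode/databases client; a sketch, not the project's code:

```python
from contextvars import ContextVar

from databases import Database  # assumed: the encode/databases client

DATABASE_URL = "postgresql://reflector:reflector@localhost:5432/reflector"

_database: ContextVar[Database | None] = ContextVar("database", default=None)

def get_database() -> Database:
    """Return the Database bound to the current context, creating one if
    needed, so code running on different event loops never shares a handle."""
    db = _database.get()
    if db is None:
        db = Database(DATABASE_URL)
        _database.set(db)
    return db
```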
Igor Loskutov
6fb5cb21c2 feat: search backend (#537)
* docs: transient docs

* chore: cleanup

* webvtt WIP

* webvtt field

* chore: webvtt tests comments

* chore: remove useless tests

* feat: search TASK.md

* feat: full text search by title/webvtt

* chore: search api task

* feat: search api

* feat: search API

* chore: rm task md

* chore: roll back unnecessary validators

* chore: pr review WIP

* chore: pr review WIP

* chore: pr review

* chore: top imports

* feat: better lint + ci

* feat: better lint + ci

* feat: better lint + ci

* feat: better lint + ci

* chore: lint

* chore: lint

* fix: db datetime definitions

* fix: flush() params

* fix: update transcript mutability expectation / test

* fix: update transcript mutability expectation / test

* chore: auto review

* chore: new controller extraction

* chore: new controller extraction

* chore: cleanup

* chore: review WIP

* chore: pr WIP

* chore: remove ci lint

* chore: openapi regeneration

* chore: openapi regeneration

* chore: postgres test doc

* fix: .dockerignore for arm binaries

* fix: .dockerignore for arm binaries

* fix: cap test loops

* fix: cap test loops

* fix: cap test loops

* fix: get_transcript_topics

* chore: remove flow.md docs and claude guidance

* chore: remove claude.md db doc

* chore: remove claude.md db doc

* chore: remove claude.md db doc

* chore: remove claude.md db doc
2025-08-13 10:03:38 -04:00
Igor Loskutov
a42ed12982 fix: evaluation cli event wrap (#536)
* fix: evaluation cli event wrap

* fix: evaluation cli event wrap

* chore: remove unrelated change

* chore: rollback claude.md changes
2025-08-11 19:28:52 -04:00
281 changed files with 39881 additions and 15766 deletions


@@ -2,6 +2,8 @@ name: Test Database Migrations
on:
  push:
    branches:
      - main
    paths:
      - "server/migrations/**"
      - "server/reflector/db/**"
@@ -17,10 +19,43 @@ on:
jobs:
  test-migrations:
    runs-on: ubuntu-latest
    concurrency:
      group: db-ubuntu-latest-${{ github.ref }}
      cancel-in-progress: true
    services:
      postgres:
        image: postgres:17
        env:
          POSTGRES_USER: reflector
          POSTGRES_PASSWORD: reflector
          POSTGRES_DB: reflector
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready -h 127.0.0.1 -p 5432
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    env:
      DATABASE_URL: postgresql://reflector:reflector@localhost:5432/reflector
    steps:
      - uses: actions/checkout@v4
      - name: Install PostgreSQL client
        run: sudo apt-get update && sudo apt-get install -y postgresql-client | cat
      - name: Wait for Postgres
        run: |
          for i in {1..30}; do
            if pg_isready -h localhost -p 5432; then
              echo "Postgres is ready"
              break
            fi
            echo "Waiting for Postgres... ($i)" && sleep 1
          done
      - name: Install uv
        uses: astral-sh/setup-uv@v3
        with:


@@ -8,18 +8,30 @@ env:
  ECR_REPOSITORY: reflector
jobs:
  deploy:
    runs-on: ubuntu-latest
  build:
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: linux-amd64
            arch: amd64
          - platform: linux/arm64
            runner: linux-arm64
            arch: arm64
    runs-on: ${{ matrix.runner }}
    permissions:
      deployments: write
      contents: read
    outputs:
      registry: ${{ steps.login-ecr.outputs.registry }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@0e613a0980cbf65ed5b322eb7a1e075d28913a83
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -27,21 +39,52 @@ jobs:
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@62f4f872db3836360b72999f4b87f1ff13310f3a
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
        uses: aws-actions/amazon-ecr-login@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v4
      - name: Build and push ${{ matrix.arch }}
        uses: docker/build-push-action@v5
        with:
          context: server
          platforms: linux/amd64,linux/arm64
          platforms: ${{ matrix.platform }}
          push: true
          tags: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
          tags: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:latest-${{ matrix.arch }}
          cache-from: type=gha,scope=${{ matrix.arch }}
          cache-to: type=gha,mode=max,scope=${{ matrix.arch }}
          github-token: ${{ secrets.GHA_CACHE_TOKEN }}
          provenance: false
  create-manifest:
    runs-on: ubuntu-latest
    needs: [build]
    permissions:
      deployments: write
      contents: read
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2
      - name: Create and push multi-arch manifest
        run: |
          # Get the registry URL (since we can't easily access job outputs in matrix)
          ECR_REGISTRY=$(aws ecr describe-registry --query 'registryId' --output text).dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
          docker manifest create \
            $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:latest \
            $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:latest-amd64 \
            $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:latest-arm64
          docker manifest push $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:latest
          echo "✅ Multi-arch manifest pushed: $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:latest"

.github/workflows/pre-commit.yml (new file)

@@ -0,0 +1,24 @@
name: pre-commit
on:
pull_request:
push:
branches: [main]
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v5
- uses: pnpm/action-setup@v4
with:
version: 10
- uses: actions/setup-node@v4
with:
node-version: 22
cache: "pnpm"
cache-dependency-path: "www/pnpm-lock.yaml"
- name: Install dependencies
run: cd www && pnpm install --frozen-lockfile
- uses: pre-commit/action@v3.0.1

.github/workflows/test_next_server.yml (new file)

@@ -0,0 +1,45 @@
name: Test Next Server
on:
pull_request:
paths:
- "www/**"
push:
branches:
- main
paths:
- "www/**"
jobs:
test-next-server:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./www
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 8
- name: Setup Node.js cache
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
cache-dependency-path: './www/pnpm-lock.yaml'
- name: Install dependencies
run: pnpm install
- name: Run tests
run: pnpm test


@@ -5,12 +5,17 @@ on:
paths:
- "server/**"
push:
branches:
- main
paths:
- "server/**"
jobs:
pytest:
runs-on: ubuntu-latest
concurrency:
group: pytest-${{ github.ref }}
cancel-in-progress: true
services:
redis:
image: redis:6
@@ -19,29 +24,47 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v3
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
working-directory: server
- name: Tests
run: |
cd server
uv run -m pytest -v tests
docker:
runs-on: ubuntu-latest
docker-amd64:
runs-on: linux-amd64
concurrency:
group: docker-amd64-${{ github.ref }}
cancel-in-progress: true
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build and push
id: docker_build
uses: docker/build-push-action@v4
uses: docker/setup-buildx-action@v3
- name: Build AMD64
uses: docker/build-push-action@v6
with:
context: server
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64
cache-from: type=gha,scope=amd64
cache-to: type=gha,mode=max,scope=amd64
github-token: ${{ secrets.GHA_CACHE_TOKEN }}
docker-arm64:
runs-on: linux-arm64
concurrency:
group: docker-arm64-${{ github.ref }}
cancel-in-progress: true
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build ARM64
uses: docker/build-push-action@v6
with:
context: server
platforms: linux/arm64
cache-from: type=gha,scope=arm64
cache-to: type=gha,mode=max,scope=arm64
github-token: ${{ secrets.GHA_CACHE_TOKEN }}

.gitignore

@@ -14,3 +14,7 @@ data/
www/REFACTOR.md
www/reload-frontend
server/test.sqlite
CLAUDE.local.md
www/.env.development
www/.env.production
.playwright-mcp

.gitleaksignore (new file)

@@ -0,0 +1 @@
b9d891d3424f371642cb032ecfd0e2564470a72c:server/tests/test_transcripts_recording_deletion.py:generic-api-key:15


@@ -3,10 +3,10 @@
repos:
- repo: local
hooks:
- id: yarn-format
name: run yarn format
- id: format
name: run format
language: system
entry: bash -c 'cd www && yarn format'
entry: bash -c 'cd www && pnpm format'
pass_filenames: false
files: ^www/
@@ -23,8 +23,12 @@ repos:
- id: ruff
args:
- --fix
- --select
- I,F401
# Uses select rules from server/pyproject.toml
files: ^server/
- id: ruff-format
files: ^server/
- repo: https://github.com/gitleaks/gitleaks
rev: v8.28.0
hooks:
- id: gitleaks


@@ -1,5 +1,106 @@
# Changelog
## [0.10.0](https://github.com/Monadical-SAS/reflector/compare/v0.9.0...v0.10.0) (2025-09-11)
### Features
* replace nextjs-config with environment variables ([#632](https://github.com/Monadical-SAS/reflector/issues/632)) ([369ecdf](https://github.com/Monadical-SAS/reflector/commit/369ecdff13f3862d926a9c0b87df52c9d94c4dde))
### Bug Fixes
* anonymous users transcript permissions ([#621](https://github.com/Monadical-SAS/reflector/issues/621)) ([f81fe99](https://github.com/Monadical-SAS/reflector/commit/f81fe9948a9237b3e0001b2d8ca84f54d76878f9))
* auth post ([#624](https://github.com/Monadical-SAS/reflector/issues/624)) ([cde99ca](https://github.com/Monadical-SAS/reflector/commit/cde99ca2716f84ba26798f289047732f0448742e))
* auth post ([#626](https://github.com/Monadical-SAS/reflector/issues/626)) ([3b85ff3](https://github.com/Monadical-SAS/reflector/commit/3b85ff3bdf4fb053b103070646811bc990c0e70a))
* auth post ([#627](https://github.com/Monadical-SAS/reflector/issues/627)) ([962038e](https://github.com/Monadical-SAS/reflector/commit/962038ee3f2a555dc3c03856be0e4409456e0996))
* missing follow_redirects=True on modal endpoint ([#630](https://github.com/Monadical-SAS/reflector/issues/630)) ([fc363bd](https://github.com/Monadical-SAS/reflector/commit/fc363bd49b17b075e64f9186e5e0185abc325ea7))
* sync backend and frontend token refresh logic ([#614](https://github.com/Monadical-SAS/reflector/issues/614)) ([5a5b323](https://github.com/Monadical-SAS/reflector/commit/5a5b3233820df9536da75e87ce6184a983d4713a))
## [0.9.0](https://github.com/Monadical-SAS/reflector/compare/v0.8.2...v0.9.0) (2025-09-06)
### Features
* frontend openapi react query ([#606](https://github.com/Monadical-SAS/reflector/issues/606)) ([c4d2825](https://github.com/Monadical-SAS/reflector/commit/c4d2825c81f81ad8835629fbf6ea8c7383f8c31b))
### Bug Fixes
* align whisper transcriber api with parakeet ([#602](https://github.com/Monadical-SAS/reflector/issues/602)) ([0663700](https://github.com/Monadical-SAS/reflector/commit/0663700a615a4af69a03c96c410f049e23ec9443))
* kv use tls explicit ([#610](https://github.com/Monadical-SAS/reflector/issues/610)) ([08d88ec](https://github.com/Monadical-SAS/reflector/commit/08d88ec349f38b0d13e0fa4cb73486c8dfd31836))
* source kind for file processing ([#601](https://github.com/Monadical-SAS/reflector/issues/601)) ([dc82f8b](https://github.com/Monadical-SAS/reflector/commit/dc82f8bb3bdf3ab3d4088e592a30fd63907319e1))
* token refresh locking ([#613](https://github.com/Monadical-SAS/reflector/issues/613)) ([7f5a4c9](https://github.com/Monadical-SAS/reflector/commit/7f5a4c9ddc7fd098860c8bdda2ca3b57f63ded2f))
## [0.8.2](https://github.com/Monadical-SAS/reflector/compare/v0.8.1...v0.8.2) (2025-08-29)
### Bug Fixes
* search-logspam ([#593](https://github.com/Monadical-SAS/reflector/issues/593)) ([695d1a9](https://github.com/Monadical-SAS/reflector/commit/695d1a957d4cd862753049f9beed88836cabd5ab))
## [0.8.1](https://github.com/Monadical-SAS/reflector/compare/v0.8.0...v0.8.1) (2025-08-29)
### Bug Fixes
* make webhook secret/url allowing null ([#590](https://github.com/Monadical-SAS/reflector/issues/590)) ([84a3812](https://github.com/Monadical-SAS/reflector/commit/84a381220bc606231d08d6f71d4babc818fa3c75))
## [0.8.0](https://github.com/Monadical-SAS/reflector/compare/v0.7.3...v0.8.0) (2025-08-29)
### Features
* **cleanup:** add automatic data retention for public instances ([#574](https://github.com/Monadical-SAS/reflector/issues/574)) ([6f0c7c1](https://github.com/Monadical-SAS/reflector/commit/6f0c7c1a5e751713366886c8e764c2009e12ba72))
* **rooms:** add webhook for transcript completion ([#578](https://github.com/Monadical-SAS/reflector/issues/578)) ([88ed7cf](https://github.com/Monadical-SAS/reflector/commit/88ed7cfa7804794b9b54cad4c3facc8a98cf85fd))
### Bug Fixes
* file pipeline status reporting and websocket updates ([#589](https://github.com/Monadical-SAS/reflector/issues/589)) ([9dfd769](https://github.com/Monadical-SAS/reflector/commit/9dfd76996f851cc52be54feea078adbc0816dc57))
* Igor/evaluation ([#575](https://github.com/Monadical-SAS/reflector/issues/575)) ([124ce03](https://github.com/Monadical-SAS/reflector/commit/124ce03bf86044c18313d27228a25da4bc20c9c5))
* optimize parakeet transcription batching algorithm ([#577](https://github.com/Monadical-SAS/reflector/issues/577)) ([7030e0f](https://github.com/Monadical-SAS/reflector/commit/7030e0f23649a8cf6c1eb6d5889684a41ce849ec))
## [0.7.3](https://github.com/Monadical-SAS/reflector/compare/v0.7.2...v0.7.3) (2025-08-22)
### Bug Fixes
* cleaned repo, and get git-leaks clean ([359280d](https://github.com/Monadical-SAS/reflector/commit/359280dd340433ba4402ed69034094884c825e67))
* restore previous behavior on live pipeline + audio downscaler ([#561](https://github.com/Monadical-SAS/reflector/issues/561)) ([9265d20](https://github.com/Monadical-SAS/reflector/commit/9265d201b590d23c628c5f19251b70f473859043))
## [0.7.2](https://github.com/Monadical-SAS/reflector/compare/v0.7.1...v0.7.2) (2025-08-21)
### Bug Fixes
* docker image not loading libgomp.so.1 for torch ([#560](https://github.com/Monadical-SAS/reflector/issues/560)) ([773fccd](https://github.com/Monadical-SAS/reflector/commit/773fccd93e887c3493abc2e4a4864dddce610177))
* include shared rooms to search ([#558](https://github.com/Monadical-SAS/reflector/issues/558)) ([499eced](https://github.com/Monadical-SAS/reflector/commit/499eced3360b84fb3a90e1c8a3b554290d21adc2))
## [0.7.1](https://github.com/Monadical-SAS/reflector/compare/v0.7.0...v0.7.1) (2025-08-21)
### Bug Fixes
* webvtt db null expectation mismatch ([#556](https://github.com/Monadical-SAS/reflector/issues/556)) ([e67ad1a](https://github.com/Monadical-SAS/reflector/commit/e67ad1a4a2054467bfeb1e0258fbac5868aaaf21))
## [0.7.0](https://github.com/Monadical-SAS/reflector/compare/v0.6.1...v0.7.0) (2025-08-21)
### Features
* delete recording with transcript ([#547](https://github.com/Monadical-SAS/reflector/issues/547)) ([99cc984](https://github.com/Monadical-SAS/reflector/commit/99cc9840b3f5de01e0adfbfae93234042d706d13))
* pipeline improvement with file processing, parakeet, silero-vad ([#540](https://github.com/Monadical-SAS/reflector/issues/540)) ([bcc29c9](https://github.com/Monadical-SAS/reflector/commit/bcc29c9e0050ae215f89d460e9d645aaf6a5e486))
* postgresql migration and removal of sqlite in pytest ([#546](https://github.com/Monadical-SAS/reflector/issues/546)) ([cd1990f](https://github.com/Monadical-SAS/reflector/commit/cd1990f8f0fe1503ef5069512f33777a73a93d7f))
* search backend ([#537](https://github.com/Monadical-SAS/reflector/issues/537)) ([5f9b892](https://github.com/Monadical-SAS/reflector/commit/5f9b89260c9ef7f3c921319719467df22830453f))
* search frontend ([#551](https://github.com/Monadical-SAS/reflector/issues/551)) ([3657242](https://github.com/Monadical-SAS/reflector/commit/365724271ca6e615e3425125a69ae2b46ce39285))
### Bug Fixes
* evaluation cli event wrap ([#536](https://github.com/Monadical-SAS/reflector/issues/536)) ([941c3db](https://github.com/Monadical-SAS/reflector/commit/941c3db0bdacc7b61fea412f3746cc5a7cb67836))
* use structlog not logging ([#550](https://github.com/Monadical-SAS/reflector/issues/550)) ([27e2f81](https://github.com/Monadical-SAS/reflector/commit/27e2f81fda5232e53edc729d3e99c5ef03adbfe9))
## [0.6.1](https://github.com/Monadical-SAS/reflector/compare/v0.6.0...v0.6.1) (2025-08-06)


@@ -62,29 +62,28 @@ uv run python -m reflector.tools.process path/to/audio.wav
**Setup:**
```bash
# Install dependencies
yarn install
pnpm install
# Copy configuration templates
cp .env_template .env
cp config-template.ts config.ts
```
**Development:**
```bash
# Start development server
yarn dev
pnpm dev
# Generate TypeScript API client from OpenAPI spec
yarn openapi
pnpm openapi
# Lint code
yarn lint
pnpm lint
# Format code
yarn format
pnpm format
# Build for production
yarn build
pnpm build
```
### Docker Compose (Full Stack)


@@ -1,43 +1,60 @@
<div align="center">
<img width="100" alt="image" src="https://github.com/user-attachments/assets/66fb367b-2c89-4516-9912-f47ac59c6a7f"/>
# Reflector
Reflector Audio Management and Analysis is a cutting-edge web application under development by Monadical. It utilizes AI to record meetings, providing a permanent record with transcripts, translations, and automated summaries.
Reflector is an AI-powered audio transcription and meeting analysis platform that provides real-time transcription, speaker diarization, translation and summarization for audio content and live meetings. It works 100% with local models (whisper/parakeet, pyannote, seamless-m4t, and your local llm like phi-4).
[![Tests](https://github.com/monadical-sas/reflector/actions/workflows/pytests.yml/badge.svg?branch=main&event=push)](https://github.com/monadical-sas/reflector/actions/workflows/pytests.yml)
[![Tests](https://github.com/monadical-sas/reflector/actions/workflows/test_server.yml/badge.svg?branch=main&event=push)](https://github.com/monadical-sas/reflector/actions/workflows/test_server.yml)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
## Screenshots
</div>
<table>
<tr>
<td>
<a href="https://github.com/user-attachments/assets/3a976930-56c1-47ef-8c76-55d3864309e3">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/3a976930-56c1-47ef-8c76-55d3864309e3" />
<a href="https://github.com/user-attachments/assets/21f5597c-2930-4899-a154-f7bd61a59e97">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/21f5597c-2930-4899-a154-f7bd61a59e97" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/bfe3bde3-08af-4426-a9a1-11ad5cd63b33">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/bfe3bde3-08af-4426-a9a1-11ad5cd63b33" />
<a href="https://github.com/user-attachments/assets/f6b9399a-5e51-4bae-b807-59128d0a940c">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/f6b9399a-5e51-4bae-b807-59128d0a940c" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/7b60c9d0-efe4-474f-a27b-ea13bd0fabdc">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/7b60c9d0-efe4-474f-a27b-ea13bd0fabdc" />
<a href="https://github.com/user-attachments/assets/a42ce460-c1fd-4489-a995-270516193897">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/a42ce460-c1fd-4489-a995-270516193897" />
</a>
</td>
<td>
<a href="https://github.com/user-attachments/assets/21929f6d-c309-42fe-9c11-f1299e50fbd4">
<img width="700" alt="image" src="https://github.com/user-attachments/assets/21929f6d-c309-42fe-9c11-f1299e50fbd4" />
</a>
</td>
</tr>
</table>
## What is Reflector?
Reflector is a web application that utilizes local models to process audio content, providing:
- **Real-time Transcription**: Convert speech to text using [Whisper](https://github.com/openai/whisper) (multi-language) or [Parakeet](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2) (English) models
- **Speaker Diarization**: Identify and label different speakers using [Pyannote](https://github.com/pyannote/pyannote-audio) 3.1
- **Live Translation**: Translate audio content in real-time to many languages with [Facebook Seamless-M4T](https://github.com/facebookresearch/seamless_communication)
- **Topic Detection & Summarization**: Extract key topics and generate concise summaries using LLMs
- **Meeting Recording**: Create permanent records of meetings with searchable transcripts
Currently we provide a [modal.com](https://modal.com/) GPU template for deployment.
## Background
The project architecture consists of three primary components:
- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **Back-End**: Python server that offers an API and data persistence, found in `server/`.
- **GPU implementation**: Providing services such as speech-to-text transcription, topic generation, automated summaries, and translations. The most reliable option is the Modal deployment.
- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **GPU implementation**: Providing services such as speech-to-text transcription, topic generation, automated summaries, and translations.
It also uses authentik for authentication if activated, and Vercel for deployment and configuration of the front-end.
It also uses authentik for authentication if activated.
## Contribution Guidelines
@@ -72,6 +89,8 @@ Note: We currently do not have instructions for Windows users.
## Installation
*Note: we're working toward a better installation process; these instructions are not fully accurate for now*
### Frontend
Start with `cd www`.
@@ -79,17 +98,16 @@ Start with `cd www`.
**Installation**
```bash
yarn install
cp .env_template .env
cp config-template.ts config.ts
pnpm install
cp .env.example .env
```
Then, fill in the environment variables in `.env` and the configuration in `config.ts` as needed. If you are unsure on how to proceed, ask in Zulip.
Then, fill in the environment variables in `.env` as needed. If you are unsure on how to proceed, ask in Zulip.
**Run in development mode**
```bash
yarn dev
pnpm dev
```
Then (after completing server setup and starting it) open [http://localhost:3000](http://localhost:3000) to view it in the browser.
@@ -99,7 +117,7 @@ Then (after completing server setup and starting it) open [http://localhost:3000
To generate the TypeScript files from the openapi.json file, make sure the python server is running, then run:
```bash
yarn openapi
pnpm openapi
```
### Backend
@@ -149,3 +167,34 @@ You can manually process an audio file by calling the process tool:
```bash
uv run python -m reflector.tools.process path/to/audio.wav
```
## Feature Flags
Reflector uses environment variable-based feature flags to control application functionality. These flags allow you to enable or disable features without code changes.
### Available Feature Flags
| Feature Flag | Environment Variable |
|-------------|---------------------|
| `requireLogin` | `NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN` |
| `privacy` | `NEXT_PUBLIC_FEATURE_PRIVACY` |
| `browse` | `NEXT_PUBLIC_FEATURE_BROWSE` |
| `sendToZulip` | `NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP` |
| `rooms` | `NEXT_PUBLIC_FEATURE_ROOMS` |
### Setting Feature Flags
Feature flags are controlled via environment variables using the pattern `NEXT_PUBLIC_FEATURE_{FEATURE_NAME}` where `{FEATURE_NAME}` is the SCREAMING_SNAKE_CASE version of the feature name.
**Examples:**
```bash
# Enable user authentication requirement
NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN=true
# Disable browse functionality
NEXT_PUBLIC_FEATURE_BROWSE=false
# Enable Zulip integration
NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP=true
```


@@ -6,6 +6,7 @@ services:
- 1250:1250
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -16,6 +17,7 @@ services:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -26,6 +28,7 @@ services:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
@@ -39,11 +42,12 @@ services:
image: node:18
ports:
- "3000:3000"
command: sh -c "yarn install && yarn dev"
command: sh -c "corepack enable && pnpm install && pnpm dev"
restart: unless-stopped
working_dir: /app
volumes:
- ./www:/app/
- /app/node_modules
env_file:
- ./www/.env.local

docs/jitsi.md (new file)

@@ -0,0 +1,369 @@
# Jitsi Integration for Reflector
This document contains research and planning notes for integrating Jitsi Meet as a replacement for Whereby in Reflector.
## Overview
Jitsi Meet is an open-source video conferencing solution that can replace Whereby in Reflector, providing:
- Cost reduction (no per-minute charges)
- Direct recording access via Jibri
- Real-time event webhooks
- Full customization and control
## Current Whereby Integration Analysis
### Architecture
1. **Room Creation**: User creates a "room" template in Reflector DB with settings
2. **Meeting Creation**: `/rooms/{room_name}/meeting` endpoint calls Whereby API to create meeting
3. **Recording**: Whereby handles recording automatically to S3 bucket
4. **Webhooks**: Whereby sends events for participant tracking
### Database Structure
```python
# Room = Template/Configuration
class Room:
    id: str
    name: str
    user_id: str
    recording_type: str     # cloud
    recording_trigger: str  # automatic-2nd-participant
    webhook_url: str
    webhook_secret: str

# Meeting = Actual Whereby Meeting Instance
class Meeting:
    id: str            # Whereby meetingId
    room_name: str     # Generated by Whereby
    room_url: str      # Whereby URLs
    host_room_url: str
    num_clients: int   # Updated via webhooks
```
## Jitsi Components
### Core Architecture
- **Jitsi Meet**: Web frontend (Next.js + React)
- **Prosody**: XMPP server for messaging/rooms
- **Jicofo**: Conference focus (orchestration)
- **JVB**: Videobridge (media routing)
- **Jibri**: Recording service
- **Jigasi**: SIP gateway (optional, for phone dial-in)
### Exposure Requirements
- **Web service**: 443/80 (frontend)
- **JVB**: 10000/UDP (media streams) - **MUST EXPOSE**
- **Prosody**: 5280 (BOSH/WebSocket) - can proxy via web
- **Jicofo, Jibri, Jigasi**: Internal only
## Recording with Jibri
### How Jibri Works
- Each Jibri instance handles **one recording at a time**
- Records mixed audio/video to MP4 format
- Uses Chrome headless + ffmpeg for capture
- Supports finalize scripts for post-processing
### Jibri Pool for Scaling
- Multiple Jibri instances join "jibribrewery" MUC
- Jicofo distributes recording requests to available instances
- Automatic load balancing and failover
```yaml
# Multiple Jibri instances
jibri1:
  environment:
    - JIBRI_INSTANCE_ID=jibri1
    - JIBRI_BREWERY_MUC=jibribrewery
jibri2:
  environment:
    - JIBRI_INSTANCE_ID=jibri2
    - JIBRI_BREWERY_MUC=jibribrewery
```
### Recording Automation Options
1. **Environment Variables**: `ENABLE_RECORDING=1`, `AUTO_RECORDING=1`
2. **URL Parameters**: `?config.autoRecord=true`
3. **JWT Token**: Include recording permissions in JWT
4. **API Control**: `api.executeCommand('startRecording')`
### Post-Processing Integration
```bash
#!/bin/bash
# finalize.sh - runs after recording completion
RECORDING_FILE=$1
MEETING_METADATA=$2
ROOM_NAME=$3
# Copy to Reflector-accessible location
cp "$RECORDING_FILE" /shared/reflector-uploads/
# Trigger Reflector processing
curl -X POST "http://reflector-api:8000/v1/transcripts/process" \
-H "Content-Type: application/json" \
-d "{
\"file_path\": \"/shared/reflector-uploads/$(basename $RECORDING_FILE)\",
\"room_name\": \"$ROOM_NAME\",
\"source\": \"jitsi\"
}"
```
## React Integration
### Official React SDK
```bash
npm i @jitsi/react-sdk
```
```jsx
import { JitsiMeeting } from '@jitsi/react-sdk'
<JitsiMeeting
room="meeting-room"
serverURL="https://your-jitsi.domain"
jwt="your-jwt-token"
config={{
startWithAudioMuted: true,
fileRecordingsEnabled: true,
autoRecord: true
}}
onParticipantJoined={(participant) => {
// Track participant events
}}
onRecordingStatusChanged={(status) => {
// Handle recording events
}}
/>
```
## Authentication & Room Control
### JWT-Based Access Control
```python
def generate_jitsi_jwt(payload):
    return jwt.encode({
        "aud": "jitsi",
        "iss": "reflector",
        "sub": "reflector-user",
        "room": payload["room"],
        "exp": int(payload["exp"].timestamp()),
        "context": {
            "user": {
                "name": payload["user_name"],
                "moderator": payload.get("moderator", False)
            },
            "features": {
                "recording": payload.get("recording", True)
            }
        }
    }, JITSI_JWT_SECRET)
```
### Prevent Anonymous Room Creation
```bash
# Environment configuration
ENABLE_AUTH=1
ENABLE_GUESTS=0
AUTH_TYPE=jwt
JWT_APP_ID=reflector
JWT_APP_SECRET=your-secret-key
```
## Webhook Integration
### Real-time Events via Prosody
Custom event-sync module can send webhooks for:
- Participant join/leave
- Recording start/stop
- Room creation/destruction
- Mute/unmute events
```lua
-- mod_event_sync.lua
module:hook("muc-occupant-joined", function(event)
    send_event({
        type = "participant_joined",
        room = event.room.jid,
        participant = {
            nick = event.occupant.nick,
            jid = event.occupant.jid,
        },
        timestamp = os.time(),
    });
end);
```
### Jibri Recording Webhooks
```bash
# Environment variable
JIBRI_WEBHOOK_SUBSCRIBERS=https://your-reflector.com/webhooks/jibri
```
## Proposed Reflector Integration
### Modified Database Schema
```python
class Meeting(BaseModel):
    id: str             # Our generated meeting ID
    room_name: str      # Generated: reflector-{room.name}-{timestamp}
    room_url: str       # https://jitsi.domain/room_name?jwt=token
    host_room_url: str  # Same but with moderator JWT

    # Add Jitsi-specific fields
    jitsi_jwt: str            # JWT token
    jitsi_room_id: str        # Internal room identifier
    recording_status: str     # pending, recording, completed
    recording_file_path: Optional[str]
```
### API Replacement
```python
# Replace whereby.py with jitsi.py
async def create_meeting(room_name_prefix: str, end_date: datetime, room: Room):
    # Generate unique room name
    jitsi_room = f"reflector-{room.name}-{int(time.time())}"

    # Generate JWT tokens
    user_jwt = generate_jwt(room=jitsi_room, moderator=False, exp=end_date)
    host_jwt = generate_jwt(room=jitsi_room, moderator=True, exp=end_date)

    return {
        "meetingId": generate_uuid4(),  # Our ID
        "roomName": jitsi_room,
        "roomUrl": f"https://jitsi.domain/{jitsi_room}?jwt={user_jwt}",
        "hostRoomUrl": f"https://jitsi.domain/{jitsi_room}?jwt={host_jwt}",
        "startDate": datetime.now().isoformat(),
        "endDate": end_date.isoformat(),
    }
```
### Webhook Endpoints
```python
# Replace whereby webhook with jitsi webhooks
@router.post("/jitsi/events")
async def jitsi_events_webhook(event_data: dict):
    event_type = event_data.get("event")
    room_name = event_data.get("room", "").split("@")[0]
    meeting = await Meeting.get_by_room(room_name)

    if event_type == "muc-occupant-joined":
        # Update participant count
        meeting.num_clients += 1
    elif event_type == "jibri-recording-on":
        meeting.recording_status = "recording"
    elif event_type == "jibri-recording-off":
        meeting.recording_status = "processing"
        await process_meeting_recording.delay(meeting.id)

@router.post("/jibri/recording-complete")
async def recording_complete(data: dict):
    # Handle finalize script webhook
    room_name = data.get("room_name")
    file_path = data.get("file_path")

    meeting = await Meeting.get_by_room(room_name)
    meeting.recording_file_path = file_path
    meeting.recording_status = "completed"

    # Start Reflector processing
    await process_recording_for_transcription(meeting.id, file_path)
```
## Deployment with Docker
### Official docker-jitsi-meet
```bash
# Download official release
wget $(wget -q -O - https://api.github.com/repos/jitsi/docker-jitsi-meet/releases/latest | grep zip | cut -d\" -f4)
# Setup
mkdir -p ~/.jitsi-meet-cfg/{web,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
./gen-passwords.sh # Generate secure passwords
docker compose up -d
```
### Coolify Integration
```yaml
services:
  web:
    ports: ["80:80", "443:443"]
  jvb:
    ports: ["10000:10000/udp"]  # Must expose for media
  jibri1:
    environment:
      - JIBRI_INSTANCE_ID=jibri1
      - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh
  jibri2:
    environment:
      - JIBRI_INSTANCE_ID=jibri2
```
## Benefits vs Whereby
### Cost & Control
- **No per-minute charges** - significant cost savings
- **Full recording control** - direct file access
- **Custom branding** - complete UI control
- **Self-hosted** - no vendor lock-in
### Technical Advantages
- **Real-time events** - immediate webhook notifications
- **Rich participant metadata** - detailed tracking
- **JWT security** - token-based access with expiration
- **Multiple recording formats** - audio-only options
- **Scalable architecture** - horizontal Jibri scaling
### Integration Benefits
- **Same API surface** - minimal changes to existing code
- **React SDK** - better frontend integration
- **Direct processing** - no S3 download delays
- **Event-driven architecture** - better real-time capabilities
## Implementation Plan
1. **Deploy Jitsi Stack** - Set up docker-jitsi-meet with multiple Jibri instances
2. **Create jitsi.py** - Replace whereby.py with Jitsi API functions
3. **Update Database** - Add Jitsi-specific fields to Meeting model
4. **Webhook Integration** - Replace Whereby webhooks with Jitsi events
5. **Frontend Updates** - Replace Whereby embed with Jitsi React SDK
6. **Testing & Migration** - Gradual rollout with fallback to Whereby
## Recording Limitations & Considerations
### Current Limitations
- **Mixed audio only** - Jibri doesn't separate participant tracks natively
- **One recording per Jibri** - requires multiple instances for concurrent recordings
- **Chrome dependency** - Jibri uses headless Chrome for recording
### Metadata Capabilities
- **Participant join/leave timestamps** - via webhooks
- **Speaking time tracking** - via audio level events
- **Meeting duration** - precise timing
- **Room-specific data** - custom metadata in JWT
### Alternative Recording Methods
- **Local recording** - browser-based, per-participant
- **Custom recording** - lib-jitsi-meet for individual streams
- **Third-party solutions** - Recall.ai, Otter.ai integrations
## Security Considerations
### JWT Configuration
- **Room-specific tokens** - limit access to specific rooms
- **Time-based expiration** - automatic cleanup
- **Feature permissions** - control recording, moderation rights
- **User identification** - embed user metadata in tokens
### Access Control
- **No anonymous rooms** - all rooms require valid JWT
- **API-only creation** - prevent direct room access
- **Webhook verification** - HMAC signature validation (see the sketch below)
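As a minimal sketch of that verification step, assuming (as in the finalize-script examples above) that the signature header carries a hex-encoded HMAC-SHA256 of the raw request body:

```python
import hashlib
import hmac

def verify_jitsi_signature(body: bytes, signature_header: str, secret: str) -> bool:
    """Verify an X-Jitsi-Signature header against the raw request body.

    Assumes the sender computes hex(HMAC-SHA256(secret, body)), matching the
    `openssl dgst -sha256 -hmac` examples used elsewhere in these notes.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest gives a constant-time comparison
    return hmac.compare_digest(expected, signature_header)
```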
## Next Steps
1. **Deploy test Jitsi instance** - validate recording pipeline
2. **Prototype jitsi.py** - create equivalent API functions
3. **Test webhook integration** - ensure event delivery works
4. **Performance testing** - validate multiple concurrent recordings
5. **Migration strategy** - plan gradual transition from Whereby
---
*This document serves as the comprehensive planning and research notes for Jitsi integration in Reflector. It should be updated as implementation progresses and new insights are discovered.*

docs/video-jitsi.md (new file)

@@ -0,0 +1,720 @@
# Jitsi Meet Integration Configuration Guide
This guide explains how to configure Reflector to use your self-hosted Jitsi Meet installation for video meetings, recording, and participant tracking.
## Overview
Jitsi Meet is an open-source video conferencing platform that can be self-hosted. Reflector integrates with Jitsi Meet to:
- Create secure meeting rooms with JWT authentication
- Track participant join/leave events via Prosody webhooks
- Record meetings using Jibri recording service
- Process recordings for transcription and analysis
## Requirements
### Self-Hosted Jitsi Meet
You need a complete Jitsi Meet installation including:
1. **Jitsi Meet Web Interface** - The main meeting interface
2. **Prosody XMPP Server** - Handles room management and authentication
3. **Jicofo (JItsi COnference FOcus)** - Manages media sessions
4. **Jitsi Videobridge (JVB)** - Handles WebRTC media routing
5. **Jibri Recording Service** - Records meetings (optional but recommended)
### System Requirements
- **Domain with SSL Certificate** - Required for WebRTC functionality
- **Prosody mod_event_sync** - For webhook event handling
- **JWT Authentication** - For secure room access control
- **Storage Solution** - For recording files (local or cloud)
## Configuration Variables
Add the following environment variables to your Reflector `.env` file:
### Required Variables
```bash
# Jitsi Meet Domain (without https://)
JITSI_DOMAIN=meet.example.com
# JWT Secret for room authentication (generate with: openssl rand -hex 32)
JITSI_JWT_SECRET=your-64-character-hex-secret-here
# Webhook secret for event handling (generate with: openssl rand -hex 16)
JITSI_WEBHOOK_SECRET=your-32-character-hex-secret-here
```
### Optional Variables
```bash
# Application identifier (should match Jitsi configuration)
JITSI_APP_ID=reflector
# JWT issuer and audience (should match Jitsi configuration)
JITSI_JWT_ISSUER=reflector
JITSI_JWT_AUDIENCE=jitsi
```
## Installation Steps
### 1. Jitsi Meet Server Installation
#### Quick Installation (Ubuntu/Debian)
```bash
# Add Jitsi repository
curl -fsSL https://download.jitsi.org/jitsi-key.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/jitsi-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/jitsi-keyring.gpg] https://download.jitsi.org stable/" | sudo tee /etc/apt/sources.list.d/jitsi-stable.list
# Install Jitsi Meet
sudo apt update
sudo apt install jitsi-meet
# Configure SSL certificate
sudo /usr/share/jitsi-meet/scripts/install-letsencrypt-cert.sh
```
#### Docker Installation
```bash
# Clone Jitsi Docker repository
git clone https://github.com/jitsi/docker-jitsi-meet
cd docker-jitsi-meet
# Copy environment template
cp env.example .env
# Edit configuration
nano .env
# Start services
docker-compose up -d
```
### 2. JWT Authentication Setup
#### Update Prosody Configuration
Edit `/etc/prosody/conf.d/your-domain.cfg.lua`:
```lua
VirtualHost "meet.example.com"
    authentication = "token"
    app_id = "reflector"
    app_secret = "your-jwt-secret-here"

    -- Allow anonymous access for guests
    c2s_require_encryption = false
    admins = { "focusUser@auth.meet.example.com" }

    modules_enabled = {
        "bosh";
        "pubsub";
        "ping";
        "roster";
        "saslauth";
        "tls";
        "dialback";
        "disco";
        "carbons";
        "pep";
        "private";
        "blocklist";
        "vcard";
        "version";
        "uptime";
        "time";
        "ping";
        "register";
        "admin_adhoc";
        "token_verification";
        "event_sync"; -- Required for webhooks
    }
```
#### Configure Jitsi Meet Interface
Edit `/etc/jitsi/meet/your-domain-config.js`:
```javascript
var config = {
    hosts: {
        domain: 'meet.example.com',
        muc: 'conference.meet.example.com'
    },

    // Enable JWT authentication
    enableUserRolesBasedOnToken: true,

    // Recording configuration
    fileRecordingsEnabled: true,
    liveStreamingEnabled: false,

    // Reflector integration settings
    prejoinPageEnabled: true,
    requireDisplayName: true
};
```
### 3. Webhook Event Configuration
#### Install Event Sync Module
```bash
# Download the module
cd /usr/share/jitsi-meet/prosody-plugins/
wget https://raw.githubusercontent.com/jitsi-contrib/prosody-plugins/main/mod_event_sync.lua
```
#### Configure Event Sync
Add to your Prosody configuration:
```lua
Component "conference.meet.example.com" "muc"
    storage = "memory"
    modules_enabled = {
        "muc_meeting_id";
        "muc_domain_mapper";
        "polls";
        "event_sync"; -- Enable event sync
    }

    -- Event sync webhook configuration
    event_sync_url = "https://your-reflector-domain.com/v1/jitsi/events"
    event_sync_secret = "your-webhook-secret-here"

    -- Events to track
    event_sync_events = {
        "muc-occupant-joined",
        "muc-occupant-left",
        "jibri-recording-on",
        "jibri-recording-off"
    }
```
#### Webhook Event Payload Examples
**Participant Joined Event:**
```json
{
  "event": "muc-occupant-joined",
  "room": "reflector-my-room-uuid123",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "data": {
    "occupant_id": "participant-456",
    "nick": "John Doe",
    "role": "participant",
    "affiliation": "none"
  }
}
```
**Recording Started Event:**
```json
{
  "event": "jibri-recording-on",
  "room": "reflector-my-room-uuid123",
  "timestamp": "2025-01-15T10:32:00.000Z",
  "data": {
    "recording_id": "rec-789",
    "initiator": "moderator-123"
  }
}
```
**Recording Completed Event:**
```json
{
  "room_name": "reflector-my-room-uuid123",
  "recording_file": "/var/recordings/rec-789.mp4",
  "recording_status": "completed",
  "timestamp": "2025-01-15T11:15:00.000Z"
}
```
### 4. Jibri Recording Setup (Optional)
#### Install Jibri
```bash
# Install Jibri package
sudo apt install jibri
# Create recording directory
sudo mkdir -p /var/recordings
sudo chown jibri:jibri /var/recordings
```
#### Configure Jibri
Edit `/etc/jitsi/jibri/jibri.conf`:
```hocon
jibri {
    recording {
        recordings-directory = "/var/recordings"
        finalize-script = "/opt/jitsi/jibri/finalize.sh"
    }
    api {
        xmpp {
            environments = [{
                name = "prod environment"
                xmpp-server-hosts = ["meet.example.com"]
                xmpp-domain = "meet.example.com"
                control-muc {
                    domain = "internal.auth.meet.example.com"
                    room-name = "JibriBrewery"
                    nickname = "jibri-nickname"
                }
                control-login {
                    domain = "auth.meet.example.com"
                    username = "jibri"
                    password = "jibri-password"
                }
            }]
        }
    }
}
```
#### Create Finalize Script
Create `/opt/jitsi/jibri/finalize.sh`:
```bash
#!/bin/bash
# Jibri finalize script for Reflector integration
RECORDING_FILE="$1"
ROOM_NAME="$2"
REFLECTOR_API_URL="${REFLECTOR_API_URL:-http://localhost:1250}"
# Prepare webhook payload
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%S.%3NZ)
PAYLOAD=$(cat <<EOF
{
"room_name": "$ROOM_NAME",
"recording_file": "$RECORDING_FILE",
"recording_status": "completed",
"timestamp": "$TIMESTAMP"
}
EOF
)
# Generate signature
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$JITSI_WEBHOOK_SECRET" | cut -d' ' -f2)
# Send webhook to Reflector
curl -X POST "$REFLECTOR_API_URL/v1/jibri/recording-complete" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: $SIGNATURE" \
-d "$PAYLOAD"
echo "Recording finalization webhook sent for room: $ROOM_NAME"
```
Make executable:
```bash
sudo chmod +x /opt/jitsi/jibri/finalize.sh
```
### 5. Restart Services
After configuration changes:
```bash
sudo systemctl restart prosody
sudo systemctl restart jicofo
sudo systemctl restart jitsi-videobridge2
sudo systemctl restart jibri
sudo systemctl restart nginx
```
## Room Configuration
### Creating Jitsi Rooms
Create rooms with Jitsi platform in Reflector:
```bash
curl -X POST "https://your-reflector-domain.com/v1/rooms" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
-d '{
"name": "my-jitsi-room",
"platform": "jitsi",
"recording_type": "cloud",
"recording_trigger": "automatic-2nd-participant",
"is_locked": false,
"room_mode": "normal"
}'
```
### Meeting Creation
Meetings automatically use JWT authentication:
```bash
curl -X POST "https://your-reflector-domain.com/v1/rooms/my-jitsi-room/meeting" \
-H "Authorization: Bearer $AUTH_TOKEN"
```
Response includes JWT-authenticated URLs:
```json
{
  "id": "meeting-uuid",
  "room_name": "reflector-my-jitsi-room-123456",
  "room_url": "https://meet.example.com/room?jwt=user-token",
  "host_room_url": "https://meet.example.com/room?jwt=moderator-token"
}
```
## Features and Capabilities
### JWT Authentication
Reflector automatically generates JWT tokens with (a minimal generation sketch follows this list):
- **Room Access Control** - Secure room entry
- **User Roles** - Moderator vs participant permissions
- **Expiration** - Configurable token lifetime (default 8 hours)
- **Custom Claims** - Room-specific metadata
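A minimal sketch of generating such a token with PyJWT, using the `JITSI_*` settings described above; the helper name and claim layout here are illustrative, not Reflector's actual code:

```python
import time

import jwt  # PyJWT

def make_room_token(
    room: str,
    user_name: str,
    secret: str,                 # JITSI_JWT_SECRET
    moderator: bool = False,
    lifetime_s: int = 8 * 3600,  # matches the 8-hour default mentioned above
) -> str:
    now = int(time.time())
    claims = {
        "aud": "jitsi",            # JITSI_JWT_AUDIENCE
        "iss": "reflector",        # JITSI_JWT_ISSUER
        "sub": "meet.example.com", # typically the JITSI_DOMAIN
        "room": room,
        "exp": now + lifetime_s,
        "context": {"user": {"name": user_name, "moderator": moderator}},
    }
    return jwt.encode(claims, secret, algorithm="HS256")
```

The resulting token is appended to the room URL as `?jwt=...`, as in the meeting creation response above.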
### Recording Options
**Recording Types:**
- `"none"` - No recording
- `"local"` - Local Jibri recording
- `"cloud"` - Cloud recording (requires external storage)
**Recording Triggers:**
- `"none"` - Manual recording only
- `"prompt"` - Prompt users to start
- `"automatic"` - Start immediately
- `"automatic-2nd-participant"` - Start when 2nd person joins
### Event Tracking and Storage
Reflector automatically stores all webhook events in the `meetings` table for comprehensive meeting analytics:
**Supported Event Types:**
- `muc-occupant-joined` - Participant joined the meeting
- `muc-occupant-left` - Participant left the meeting
- `jibri-recording-on` - Recording started
- `jibri-recording-off` - Recording stopped
- `recording_completed` - Recording file ready for processing
**Event Storage Structure:**
Each webhook event is stored as a JSON object in the `meetings.events` column:
```json
{
  "type": "muc-occupant-joined",
  "timestamp": "2025-01-15T10:30:00.123456Z",
  "data": {
    "timestamp": "2025-01-15T10:30:00Z",
    "user_id": "participant-123",
    "display_name": "John Doe"
  }
}
```
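Illustratively, the storage step might append each incoming event to that array before writing it back. A minimal sketch under the assumption of a serialized JSON column; `append_meeting_event` is a hypothetical helper, not Reflector's actual persistence code:

```python
import json
from datetime import datetime, timezone
from typing import Optional

def append_meeting_event(events_json: Optional[str], event_type: str, data: dict) -> str:
    """Append one webhook event to a meeting's serialized events column."""
    events = json.loads(events_json) if events_json else []
    events.append({
        "type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    })
    return json.dumps(events)
```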
**Querying Stored Events:**
```sql
-- Get all events for a meeting
SELECT events FROM meeting WHERE id = 'meeting-uuid';
-- Count participant joins (assuming a PostgreSQL jsonb events column)
SELECT count(*) AS total_joins
FROM meeting, jsonb_array_elements(events) AS e
WHERE id = 'meeting-uuid'
  AND e ->> 'type' = 'muc-occupant-joined';
```
## Testing and Verification
### Health Check
Test Jitsi webhook integration:
```bash
curl "https://your-reflector-domain.com/v1/jitsi/health"
```
Expected response:
```json
{
  "status": "ok",
  "service": "jitsi-webhooks",
  "timestamp": "2025-01-15T10:30:00.000Z",
  "webhook_secret_configured": true
}
```
### JWT Token Testing
Verify JWT generation works:
```bash
# Create a test meeting
MEETING=$(curl -X POST "https://your-reflector-domain.com/v1/rooms/test-room/meeting" \
-H "Authorization: Bearer $AUTH_TOKEN" | jq -r '.room_url')
echo "Test meeting URL: $MEETING"
```
### Webhook Testing
#### Manual Webhook Event Testing
Test participant join event:
```bash
# Generate proper signature
PAYLOAD='{"event":"muc-occupant-joined","room":"reflector-test-room-uuid","timestamp":"2025-01-15T10:30:00.000Z","data":{"user_id":"test-user","display_name":"Test User"}}'
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$JITSI_WEBHOOK_SECRET" | cut -d' ' -f2)
curl -X POST "https://your-reflector-domain.com/v1/jitsi/events" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: $SIGNATURE" \
-d "$PAYLOAD"
```
Expected response:
```json
{
  "status": "ok",
  "event": "muc-occupant-joined",
  "room": "reflector-test-room-uuid"
}
```
#### Recording Webhook Testing
Test recording completion event:
```bash
PAYLOAD='{"room_name":"reflector-test-room-uuid","recording_file":"/recordings/test.mp4","recording_status":"completed","timestamp":"2025-01-15T10:30:00.000Z"}'
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$JITSI_WEBHOOK_SECRET" | cut -d' ' -f2)
curl -X POST "https://your-reflector-domain.com/v1/jibri/recording-complete" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: $SIGNATURE" \
-d "$PAYLOAD"
```
#### Event Storage Verification
Verify events were stored:
```bash
# Check meeting events via API (requires authentication)
curl -H "Authorization: Bearer $AUTH_TOKEN" \
"https://your-reflector-domain.com/v1/meetings/{meeting-id}"
```
## Troubleshooting
### Common Issues
#### JWT Authentication Failures
**Symptoms**: Users cannot join rooms, "Authentication failed" errors
**Solutions**:
1. Verify `JITSI_JWT_SECRET` matches Prosody configuration
2. Check JWT token hasn't expired (default 8 hours)
3. Ensure system clocks are synchronized between servers
4. Validate JWT issuer/audience configuration matches
**Debug JWT tokens**:
```bash
# Decode JWT payload
echo "JWT_TOKEN_HERE" | cut -d'.' -f2 | base64 -d | jq
```
#### Webhook Events Not Received
**Symptoms**: Participant counts not updating, no recording events
**Solutions**:
1. Verify `mod_event_sync` is loaded in Prosody
2. Check webhook URL is accessible from Jitsi server
3. Validate webhook signature generation
4. Review Prosody and Reflector logs
**Debug webhook connectivity**:
```bash
# Test from Jitsi server
curl -v "https://your-reflector-domain.com/v1/jitsi/health"
# Check Prosody logs
sudo tail -f /var/log/prosody/prosody.log
```
#### Webhook Signature Verification Issues
**Symptoms**: HTTP 401 "Invalid webhook signature" errors
**Solutions**:
1. Verify webhook secret matches between Jitsi and Reflector
2. Check payload encoding (no extra whitespace)
3. Ensure proper HMAC-SHA256 signature generation
**Debug signature generation**:
```bash
# Test signature manually
PAYLOAD='{"event":"test","room":"test","timestamp":"2025-01-15T10:30:00.000Z","data":{}}'
SECRET="your-webhook-secret-here"
# Generate signature (should match X-Jitsi-Signature header)
echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | cut -d' ' -f2
# Test with curl
curl -X POST "https://your-reflector-domain.com/v1/jitsi/events" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: $(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | cut -d' ' -f2)" \
-d "$PAYLOAD" -v
```
#### Event Storage Problems
**Symptoms**: Events received but not stored in database
**Solutions**:
1. Check database connectivity and permissions
2. Verify meeting exists before event processing
3. Review Reflector application logs
4. Ensure JSON column support in database
**Debug event storage**:
```bash
# Check meeting exists
curl -H "Authorization: Bearer $TOKEN" \
"https://your-reflector-domain.com/v1/meetings/{meeting-id}"
# Monitor database queries (if using PostgreSQL)
sudo -u postgres psql -c "SELECT * FROM pg_stat_activity WHERE query LIKE '%meeting%';"
# Check Reflector logs for event processing
sudo journalctl -u reflector -f | grep -E "(event|webhook|jitsi)"
```
#### Recording Issues
**Symptoms**: Recordings not starting, finalize script errors
**Solutions**:
1. Verify Jibri service status: `sudo systemctl status jibri`
2. Check recording directory permissions: `/var/recordings`
3. Validate finalize script execution permissions
4. Monitor Jibri logs: `sudo journalctl -u jibri -f`
**Test finalize script**:
```bash
sudo -u jibri /opt/jitsi/jibri/finalize.sh "/test/recording.mp4" "test-room"
```
#### Meeting Creation Failures
**Symptoms**: HTTP 500 errors when creating meetings
**Solutions**:
1. Check Reflector logs for JWT generation errors
2. Verify all required environment variables are set
3. Ensure Jitsi domain is accessible from Reflector
4. Test JWT secret configuration
### Debug Commands
```bash
# Verify Prosody configuration
sudo prosodyctl check config
# Check Jitsi services status
sudo systemctl status prosody jicofo jitsi-videobridge2
# Test JWT generation
curl -X POST "https://your-reflector-domain.com/v1/rooms/test/meeting" \
-H "Authorization: Bearer $TOKEN" -v
# Monitor webhook events
sudo tail -f /var/log/reflector/app.log | grep jitsi
# Check SSL certificates
sudo certbot certificates
```
### Performance Optimization
#### Scaling Considerations
**Single Server Limits:**
- ~50 concurrent participants per JVB instance
- ~10 concurrent Jibri recordings
- CPU and bandwidth become bottlenecks
**Multi-Server Setup:**
- Multiple JVB instances for scaling
- Dedicated Jibri recording servers
- Load balancing for high availability
#### Resource Monitoring
```bash
# Monitor JVB performance
sudo systemctl status jitsi-videobridge2
sudo journalctl -u jitsi-videobridge2 -f
# Check Prosody connections
sudo prosodyctl mod_admin_telnet
> c2s:show()
> muc:rooms()
```
## Security Best Practices
### JWT Security
- Use strong, unique secrets (32+ characters)
- Rotate JWT secrets regularly
- Implement proper token expiration
- Never log or expose JWT tokens
### Network Security
- Use HTTPS/WSS for all communications
- Implement proper firewall rules
- Consider VPN for server-to-server communication
- Monitor for unauthorized access attempts
### Recording Security
- Encrypt recordings at rest
- Implement access controls for recording files
- Regular security audits of file permissions
- Comply with data protection regulations
## Migration from Whereby
If migrating from Whereby to Jitsi:
1. **Parallel Setup** - Configure Jitsi alongside existing Whereby
2. **Room Migration** - Update room platform field to "jitsi"
3. **Test Integration** - Verify meeting creation and webhooks
4. **User Training** - Different UI and feature set
5. **Monitor Performance** - Watch for issues during transition
6. **Cleanup** - Remove Whereby configuration when stable
## Support and Resources
### Jitsi Community Resources
- **Documentation**: [jitsi.github.io/handbook](https://jitsi.github.io/handbook/)
- **Community Forum**: [community.jitsi.org](https://community.jitsi.org/)
- **GitHub Issues**: [github.com/jitsi/jitsi-meet](https://github.com/jitsi/jitsi-meet)
### Professional Support
- **8x8 Commercial Support** - Professional Jitsi hosting and support
- **Community Consulting** - Third-party Jitsi implementation services
### Monitoring and Maintenance
- Monitor system resources (CPU, memory, bandwidth)
- Regular security updates for all components
- Backup configuration files and certificates
- Test disaster recovery procedures

docs/video-whereby.md (new file)

@@ -0,0 +1,276 @@
# Whereby Integration Configuration Guide
This guide explains how to configure Reflector to use Whereby as your video meeting platform for room creation, recording, and participant tracking.
## Overview
Whereby is a browser-based video meeting platform that provides hosted meeting rooms with recording capabilities. Reflector integrates with Whereby's API to:
- Create secure meeting rooms with custom branding
- Handle participant join/leave events via webhooks
- Automatically record meetings to AWS S3 storage
- Track meeting sessions and participant counts
## Requirements
### Whereby Account Setup
1. **Whereby Account**: Sign up for a Whereby business account at [whereby.com](https://whereby.com/business)
2. **API Access**: Request API access from Whereby support (required for programmatic room creation)
3. **Webhook Configuration**: Configure webhooks in your Whereby dashboard to point to your Reflector instance
### AWS S3 Storage
Whereby requires AWS S3 for recording storage. You need:
- AWS account with S3 access
- Dedicated S3 bucket for Whereby recordings
- AWS IAM credentials with S3 write permissions
## Configuration Variables
Add the following environment variables to your Reflector `.env` file:
### Required Variables
```bash
# Whereby API Configuration
WHEREBY_API_KEY=your-whereby-jwt-api-key
WHEREBY_WEBHOOK_SECRET=your-webhook-secret-from-whereby
# AWS S3 Storage for Recordings
AWS_WHEREBY_ACCESS_KEY_ID=your-aws-access-key
AWS_WHEREBY_ACCESS_KEY_SECRET=your-aws-secret-key
RECORDING_STORAGE_AWS_BUCKET_NAME=your-s3-bucket-name
```
### Optional Variables
```bash
# Whereby API URL (defaults to production)
WHEREBY_API_URL=https://api.whereby.dev/v1
# SQS Configuration (for recording processing)
AWS_PROCESS_RECORDING_QUEUE_URL=https://sqs.region.amazonaws.com/account/queue
SQS_POLLING_TIMEOUT_SECONDS=60
```
## Configuration Steps
### 1. Whereby API Key Setup
1. **Contact Whereby Support** to request API access for your account
2. **Generate JWT Token** in your Whereby dashboard under API settings
3. **Copy the JWT token** and set it as `WHEREBY_API_KEY` in your environment
The API key is a JWT token that looks like:
```
eyJ[...truncated JWT token...]
```
### 2. Webhook Configuration
1. **Access Whereby Dashboard** and navigate to webhook settings
2. **Set Webhook URL** to your Reflector instance:
```
https://your-reflector-domain.com/v1/whereby
```
3. **Configure Events** to send the following event types:
- `room.client.joined` - When participants join
- `room.client.left` - When participants leave
4. **Generate Webhook Secret** and set it as `WHEREBY_WEBHOOK_SECRET`
5. **Save Configuration** in your Whereby dashboard
### 3. AWS S3 Storage Setup
1. **Create S3 Bucket** dedicated for Whereby recordings
2. **Create IAM User** with programmatic access
3. **Attach S3 Policy** with the following permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```
4. **Configure Environment Variables** with the IAM credentials
### 4. Room Configuration
When creating rooms in Reflector, set the platform to use Whereby:
```bash
curl -X POST "https://your-reflector-domain.com/v1/rooms" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
-d '{
"name": "my-whereby-room",
"platform": "whereby",
"recording_type": "cloud",
"recording_trigger": "automatic-2nd-participant",
"is_locked": false,
"room_mode": "normal"
}'
```
## Meeting Features
### Recording Options
Whereby supports three recording types:
- **`none`**: No recording
- **`local`**: Local recording (not recommended for production)
- **`cloud`**: Cloud recording to S3 (recommended)
### Recording Triggers
Control when recordings start:
- **`none`**: No automatic recording
- **`prompt`**: Prompt users to start recording
- **`automatic`**: Start immediately when meeting begins
- **`automatic-2nd-participant`**: Start when second participant joins
### Room Modes
- **`normal`**: Standard meeting room
- **`group`**: Group meeting with advanced features
## Webhook Event Handling
Reflector automatically handles these Whereby webhook events:
### Participant Tracking
```json
{
  "type": "room.client.joined",
  "data": {
    "meetingId": "room-uuid",
    "numClients": 2
  }
}
```
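For illustration, a handler for that event shape might look like the following. This is a sketch only, not Reflector's actual endpoint; the in-memory `MEETINGS` dict stands in for the real meetings table, and signature verification is elided:

```python
from fastapi import APIRouter, Request

router = APIRouter()

# Stand-in for the meetings table: meeting id -> participant count
MEETINGS: dict[str, int] = {}

@router.post("/v1/whereby")
async def whereby_events(request: Request):
    # Signature verification against WHEREBY_WEBHOOK_SECRET omitted for brevity
    event = await request.json()
    if event.get("type") in ("room.client.joined", "room.client.left"):
        data = event["data"]
        # Whereby reports the authoritative participant count itself
        MEETINGS[data["meetingId"]] = data["numClients"]
    return {"status": "ok"}
```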
### Recording Events
Whereby sends recording completion events that trigger Reflector's processing pipeline:
- Audio transcription
- Speaker diarization
- Summary generation
## Troubleshooting
### Common Issues
#### API Authentication Errors
**Symptoms**: 401 Unauthorized errors when creating meetings
**Solutions**:
1. Verify your `WHEREBY_API_KEY` is correct and not expired
2. Ensure you have API access enabled on your Whereby account
3. Contact Whereby support if API access is not available
#### Webhook Signature Validation Failed
**Symptoms**: Webhook events rejected with 401 errors
**Solutions**:
1. Verify `WHEREBY_WEBHOOK_SECRET` matches your Whereby dashboard configuration
2. Check webhook URL is correctly configured in Whereby dashboard
3. Ensure webhook endpoint is accessible from Whereby servers
#### Recording Upload Failures
**Symptoms**: Recordings not appearing in S3 bucket
**Solutions**:
1. Verify AWS credentials have S3 write permissions
2. Check S3 bucket name is correct and accessible
3. Ensure AWS region settings match your bucket location
4. Review AWS CloudTrail logs for permission issues
#### Participant Count Not Updating
**Symptoms**: Meeting participant counts remain at 0
**Solutions**:
1. Verify webhook events are being received at `/v1/whereby`
2. Check webhook signature validation is passing
3. Ensure meeting IDs match between Whereby and Reflector database
### Debug Commands
```bash
# Test Whereby API connectivity
curl -H "Authorization: Bearer $WHEREBY_API_KEY" \
https://api.whereby.dev/v1/meetings
# Check webhook endpoint health
curl https://your-reflector-domain.com/v1/whereby/health
# Verify S3 bucket access
aws s3 ls s3://your-bucket-name --profile whereby-user
```
## Security Considerations
### API Key Security
- Store API keys securely using environment variables
- Rotate API keys regularly
- Never commit API keys to version control
- Use separate keys for development and production
### Webhook Security
- Always validate webhook signatures using HMAC-SHA256
- Use HTTPS for all webhook endpoints
- Implement rate limiting on webhook endpoints
- Monitor webhook events for suspicious activity
### Recording Privacy
- Ensure S3 bucket access is restricted to authorized users
- Consider encryption at rest for sensitive recordings
- Implement retention policies for recorded content
- Comply with data protection regulations (GDPR, etc.)
## Performance Optimization
### Meeting Scaling
- Monitor concurrent meeting limits on your Whereby plan
- Implement meeting cleanup for expired sessions
- Use appropriate room modes for different use cases
### Recording Processing
- Configure SQS for asynchronous recording processing
- Monitor S3 storage usage and costs
- Implement automatic cleanup of processed recordings
### Webhook Reliability
- Implement webhook retry mechanisms
- Monitor webhook delivery success rates
- Log webhook events for debugging and auditing
## Migration from Other Platforms
If migrating from another video platform:
1. **Update Room Configuration**: Change existing rooms to use `"platform": "whereby"`
2. **Configure Webhooks**: Set up Whereby webhook endpoints
3. **Test Integration**: Verify meeting creation and event handling
4. **Monitor Performance**: Watch for any issues during transition
5. **Update Documentation**: Inform users of any workflow changes
## Support
For Whereby-specific issues:
- **Whereby Support**: [whereby.com/support](https://whereby.com/support)
- **API Documentation**: [whereby.dev](https://whereby.dev)
- **Status Page**: [status.whereby.com](https://status.whereby.com)
For Reflector integration issues:
- Check application logs for error details
- Verify environment variable configuration
- Test webhook connectivity and authentication
- Review AWS permissions and S3 access

docs/video_platforms.md

@@ -0,0 +1,474 @@
# Video Platforms Architecture (PR #529 Analysis)
This document analyzes the video platforms refactoring implemented in PR #529 for daily.co integration, providing a blueprint for extending support to Jitsi and other video conferencing platforms.
## Overview
The video platforms refactoring introduces a clean abstraction layer that allows Reflector to support multiple video conferencing providers (Whereby, Daily.co, etc.) without changing core application logic. This architecture enables:
- Seamless switching between video platforms
- Platform-specific feature support
- Isolated platform code organization
- Consistent API surface across platforms
- Feature flags for gradual migration
## Architecture Components
### 1. **Directory Structure**
```
server/reflector/video_platforms/
├── __init__.py # Public API exports
├── base.py # Abstract base classes
├── factory.py # Platform client factory
├── registry.py # Platform registration system
├── whereby.py # Whereby implementation
├── daily.py # Daily.co implementation
└── mock.py # Testing implementation
```
### 2. **Core Abstract Classes**
#### `VideoPlatformClient` (base.py)
Abstract base class defining the interface all platforms must implement:
```python
class VideoPlatformClient(ABC):
    PLATFORM_NAME: str = ""

    @abstractmethod
    async def create_meeting(self, room_name_prefix: str, end_date: datetime, room: Room) -> MeetingData: ...

    @abstractmethod
    async def get_room_sessions(self, room_name: str) -> Dict[str, Any]: ...

    @abstractmethod
    async def delete_room(self, room_name: str) -> bool: ...

    @abstractmethod
    async def upload_logo(self, room_name: str, logo_path: str) -> bool: ...

    @abstractmethod
    def verify_webhook_signature(self, body: bytes, signature: str, timestamp: Optional[str] = None) -> bool: ...
```
#### `MeetingData` (base.py)
Standardized meeting data structure returned by all platforms:
```python
class MeetingData(BaseModel):
meeting_id: str
room_name: str
room_url: str
host_room_url: str
platform: str
extra_data: Dict[str, Any] = {} # Platform-specific data
```
#### `VideoPlatformConfig` (base.py)
Unified configuration structure for all platforms:
```python
class VideoPlatformConfig(BaseModel):
api_key: str
webhook_secret: str
api_url: Optional[str] = None
subdomain: Optional[str] = None
s3_bucket: Optional[str] = None
s3_region: Optional[str] = None
aws_role_arn: Optional[str] = None
aws_access_key_id: Optional[str] = None
aws_access_key_secret: Optional[str] = None
```
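To make the contract concrete, here is roughly what a mock client (in the spirit of `mock.py`) could look like; a sketch for illustration, not the file's actual contents:

```python
from datetime import datetime
from typing import Any, Dict, Optional

class MockClient(VideoPlatformClient):
    PLATFORM_NAME = "mock"

    async def create_meeting(self, room_name_prefix: str, end_date: datetime, room: "Room") -> MeetingData:
        room_name = f"{room_name_prefix}-mock"
        return MeetingData(
            meeting_id="mock-meeting-id",
            room_name=room_name,
            room_url=f"https://mock.invalid/{room_name}",
            host_room_url=f"https://mock.invalid/{room_name}?host=1",
            platform=self.PLATFORM_NAME,
        )

    async def get_room_sessions(self, room_name: str) -> Dict[str, Any]:
        return {"sessions": []}

    async def delete_room(self, room_name: str) -> bool:
        return True

    async def upload_logo(self, room_name: str, logo_path: str) -> bool:
        return True

    def verify_webhook_signature(self, body: bytes, signature: str, timestamp: Optional[str] = None) -> bool:
        return True  # the mock accepts everything
```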
### 3. **Platform Registration System**
#### Registry Pattern (registry.py)
- Automatic registration of built-in platforms
- Runtime platform discovery
- Type-safe client instantiation
```python
# Auto-registration of platforms
_PLATFORMS: Dict[str, Type[VideoPlatformClient]] = {}
def register_platform(name: str, client_class: Type[VideoPlatformClient])
def get_platform_client(platform: str, config: VideoPlatformConfig) -> VideoPlatformClient
```
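The bodies behind these declarations can be as small as a dict lookup; a possible shape (a sketch, not the exact source):

```python
def register_platform(name: str, client_class: Type[VideoPlatformClient]) -> None:
    _PLATFORMS[name] = client_class

def get_platform_client(platform: str, config: VideoPlatformConfig) -> VideoPlatformClient:
    if platform not in _PLATFORMS:
        raise ValueError(f"Unknown video platform: {platform}")
    return _PLATFORMS[platform](config)
```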
#### Factory System (factory.py)
- Configuration management per platform
- Platform selection logic
- Feature flag integration
```python
def get_platform_for_room(room_id: Optional[str] = None) -> str:
"""Determine which platform to use based on feature flags."""
if not settings.DAILY_MIGRATION_ENABLED:
return "whereby"
if room_id and room_id in settings.DAILY_MIGRATION_ROOM_IDS:
return "daily"
return settings.DEFAULT_VIDEO_PLATFORM
```
### 4. **Database Schema Changes**
#### Room Model Updates
Added `platform` field to track which video platform each room uses:
```python
# Database Schema
platform_column = sqlalchemy.Column(
"platform",
sqlalchemy.String,
nullable=False,
server_default="whereby"
)
# Pydantic Model
class Room(BaseModel):
platform: Literal["whereby", "daily"] = "whereby"
```
#### Meeting Model Updates
Added `platform` field to meetings for tracking and debugging:
```python
# Database Schema
platform_column = sqlalchemy.Column(
"platform",
sqlalchemy.String,
nullable=False,
server_default="whereby"
)
# Pydantic Model
class Meeting(BaseModel):
platform: Literal["whereby", "daily"] = "whereby"
```
**Key Decision**: No platform-specific fields were added to models. Instead, the `extra_data` field in `MeetingData` handles platform-specific information, following the user's rule of using generic `provider_data` as JSON if needed.
### 5. **Settings Configuration**
#### Feature Flags
```python
# Migration control
DAILY_MIGRATION_ENABLED: bool = True
DAILY_MIGRATION_ROOM_IDS: list[str] = []
DEFAULT_VIDEO_PLATFORM: str = "daily"
# Daily.co specific settings
DAILY_API_KEY: str | None = None
DAILY_WEBHOOK_SECRET: str | None = None
DAILY_SUBDOMAIN: str | None = None
AWS_DAILY_S3_BUCKET: str | None = None
AWS_DAILY_S3_REGION: str = "us-west-2"
AWS_DAILY_ROLE_ARN: str | None = None
```
#### Configuration Pattern
Each platform gets its own configuration namespace while sharing common patterns:
```python
def get_platform_config(platform: str) -> VideoPlatformConfig:
if platform == "whereby":
return VideoPlatformConfig(
api_key=settings.WHEREBY_API_KEY or "",
webhook_secret=settings.WHEREBY_WEBHOOK_SECRET or "",
# ... whereby-specific config
)
elif platform == "daily":
return VideoPlatformConfig(
api_key=settings.DAILY_API_KEY or "",
webhook_secret=settings.DAILY_WEBHOOK_SECRET or "",
# ... daily-specific config
)
```
### 6. **API Integration Updates**
#### Room Creation (views/rooms.py)
Updated to use platform factory instead of direct Whereby calls:
```python
@router.post("/rooms/{room_name}/meeting")
async def rooms_create_meeting(room_name: str, user: UserInfo):
# OLD: Direct Whereby integration
# whereby_meeting = await create_meeting("", end_date=end_date, room=room)
# NEW: Platform abstraction
platform = get_platform_for_room(room.id)
client = create_platform_client(platform)
meeting_data = await client.create_meeting(
room_name_prefix=room.name, end_date=end_date, room=room
)
await client.upload_logo(meeting_data.room_name, "./images/logo.png")
```
### 7. **Webhook Handling**
#### Separate Webhook Endpoints
Each platform gets its own webhook endpoint with platform-specific signature verification:
```python
# views/daily.py
@router.post("/daily_webhook")
async def daily_webhook(event: DailyWebhookEvent, request: Request):
# Verify Daily.co signature
body = await request.body()
signature = request.headers.get("X-Daily-Signature", "")
if not verify_daily_webhook_signature(body, signature):
raise HTTPException(status_code=401)
# Handle platform-specific events
if event.type == "participant.joined":
await _handle_participant_joined(event)
```
#### Consistent Event Handling
Despite different event formats, the core business logic remains the same:
```python
async def _handle_participant_joined(event):
room_name = event.data.get("room", {}).get("name") # Daily.co format
meeting = await meetings_controller.get_by_room_name(room_name)
if meeting:
current_count = getattr(meeting, "num_clients", 0)
await meetings_controller.update_meeting(
meeting.id, num_clients=current_count + 1
)
```
### 8. **Worker Task Integration**
#### New Task for Daily.co Recording Processing
Added platform-specific recording processing while maintaining the same pipeline:
```python
@shared_task
@asynctask
async def process_recording_from_url(recording_url: str, meeting_id: str, recording_id: str):
"""Process recording from Direct URL (Daily.co webhook)."""
logger.info("Processing recording from URL for meeting: %s", meeting_id)
# Uses same processing pipeline as Whereby S3 recordings
```
**Key Decision**: Worker tasks remain in main worker module but could be moved to platform-specific folders as suggested by the user.
### 9. **Testing Infrastructure**
#### Comprehensive Test Suite
- Unit tests for each platform client
- Integration tests for platform switching
- Mock platform for testing without external dependencies
- Webhook signature verification tests
```python
class TestPlatformIntegration:
"""Integration tests for platform switching."""
async def test_platform_switching_preserves_interface(self):
"""Test that different platforms provide consistent interface."""
# Test both Mock and Daily platforms return MeetingData objects
# with consistent fields
```
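Fleshed out, such a test might read as follows (a sketch assuming pytest-asyncio and a `room` fixture; not the actual test file):

```python
from datetime import datetime, timedelta

import pytest

@pytest.mark.asyncio
async def test_platform_switching_preserves_interface(room):
    """create_meeting must return MeetingData with the same core fields on every platform."""
    end_date = datetime.utcnow() + timedelta(hours=1)
    for name in ("mock", "daily"):
        client = create_platform_client(name)
        meeting = await client.create_meeting("team", end_date=end_date, room=room)
        assert isinstance(meeting, MeetingData)
        assert meeting.platform == client.PLATFORM_NAME
        assert meeting.room_url and meeting.host_room_url
```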
## Implementation Patterns for Jitsi Integration
Based on the daily.co implementation, here's how Jitsi should be integrated:
### 1. **Jitsi Client Implementation**
```python
# video_platforms/jitsi.py
class JitsiClient(VideoPlatformClient):
PLATFORM_NAME = "jitsi"
async def create_meeting(self, room_name_prefix: str, end_date: datetime, room: Room) -> MeetingData:
# Generate unique room name
jitsi_room = f"reflector-{room.name}-{int(time.time())}"
# Generate JWT tokens
user_jwt = self._generate_jwt(room=jitsi_room, moderator=False, exp=end_date)
host_jwt = self._generate_jwt(room=jitsi_room, moderator=True, exp=end_date)
return MeetingData(
meeting_id=generate_uuid4(),
room_name=jitsi_room,
room_url=f"https://jitsi.domain/{jitsi_room}?jwt={user_jwt}",
host_room_url=f"https://jitsi.domain/{jitsi_room}?jwt={host_jwt}",
platform=self.PLATFORM_NAME,
extra_data={"user_jwt": user_jwt, "host_jwt": host_jwt}
)
```
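The `_generate_jwt` helper is not spelled out above; one plausible shape for this `JitsiClient` method uses PyJWT with the settings introduced below (the claim layout varies between Jitsi deployments, so treat this as an assumption):

```python
from datetime import datetime

import jwt  # PyJWT

def _generate_jwt(self, room: str, moderator: bool, exp: datetime) -> str:
    payload = {
        "aud": "jitsi",       # audience accepted by the Jitsi deployment
        "iss": "reflector",   # app_id configured in Prosody; could come from settings
        "sub": settings.JITSI_DOMAIN,
        "room": room,
        "exp": int(exp.timestamp()),
        "context": {"user": {"moderator": moderator}},
    }
    return jwt.encode(payload, settings.JITSI_JWT_SECRET, algorithm="HS256")
```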
### 2. **Settings Integration**
```python
# settings.py
JITSI_DOMAIN: str = "meet.jit.si"
JITSI_JWT_SECRET: str | None = None
JITSI_WEBHOOK_SECRET: str | None = None
JITSI_API_URL: str | None = None # If using Jitsi API
```
### 3. **Factory Registration**
```python
# registry.py
def _register_builtin_platforms():
from .jitsi import JitsiClient
register_platform("jitsi", JitsiClient)
# factory.py — add a branch to get_platform_config
def get_platform_config(platform: str) -> VideoPlatformConfig:
    # ... existing "whereby" and "daily" branches ...
    elif platform == "jitsi":
        return VideoPlatformConfig(
            api_key="",  # Jitsi may not need an API key
            webhook_secret=settings.JITSI_WEBHOOK_SECRET or "",
            api_url=settings.JITSI_API_URL,
        )
```
### 4. **Webhook Integration**
```python
# views/jitsi.py
@router.post("/jitsi/events")
async def jitsi_events_webhook(event_data: dict):
# Handle Prosody event-sync webhook format
event_type = event_data.get("event")
room_name = event_data.get("room", "").split("@")[0]
    if event_type == "muc-occupant-joined":
        # Same participant handling logic as the other platforms
        ...
```
## Key Benefits of This Architecture
### 1. **Isolation and Organization**
- Platform-specific code contained in separate modules
- No platform logic leaking into core application
- Easy to add/remove platforms without affecting others
### 2. **Consistent Interface**
- All platforms implement the same abstract methods
- Standardized `MeetingData` structure
- Uniform error handling and logging
### 3. **Gradual Migration Support**
- Feature flags for controlled rollouts
- Room-specific platform selection
- Fallback mechanisms for platform failures
### 4. **Configuration Management**
- Centralized settings per platform
- Consistent naming patterns
- Environment-based configuration
### 5. **Testing and Quality**
- Mock platform for testing
- Comprehensive test coverage
- Platform-specific test utilities
## Migration Strategy Applied
The daily.co implementation demonstrates a careful migration approach:
### 1. **Backward Compatibility**
- Default platform remains "whereby"
- Existing rooms continue using Whereby unless explicitly migrated
- Same API endpoints and response formats
### 2. **Feature Flag Control**
```python
# Gradual rollout control
DAILY_MIGRATION_ENABLED: bool = True
DAILY_MIGRATION_ROOM_IDS: list[str] = [] # Specific rooms to migrate
DEFAULT_VIDEO_PLATFORM: str = "daily" # New rooms default
```
### 3. **Data Integrity**
- Platform field tracks which service each room/meeting uses
- No data loss during migration
- Platform-specific data preserved in `extra_data`
### 4. **Monitoring and Rollback**
- Comprehensive logging of platform selection
- Easy rollback by changing feature flags
- Platform-specific error tracking
## Recommendations for Jitsi Integration
Based on this analysis and the user's requirements:
### 1. **Follow the Pattern**
- Create `video_platforms/jitsi/` directory with:
- `client.py` - Main JitsiClient implementation
- `tasks.py` - Jitsi-specific worker tasks
- `__init__.py` - Module exports
### 2. **Settings Organization**
- Use `JITSI_*` prefix for all Jitsi settings
- Follow the same configuration pattern as Daily.co
- Support both environment variables and config files
### 3. **Generic Database Fields**
- Avoid platform-specific columns in database
- Use `provider_data` JSON field if platform-specific data needed
- Keep `platform` field as simple string identifier
### 4. **Worker Task Migration**
According to user requirements, migrate platform-specific tasks:
```
video_platforms/
├── whereby/
│ ├── client.py (moved from whereby.py)
│ └── tasks.py (moved from worker/whereby_tasks.py)
├── daily/
│ ├── client.py (moved from daily.py)
│ └── tasks.py (moved from worker/daily_tasks.py)
└── jitsi/
├── client.py (new JitsiClient)
└── tasks.py (new Jitsi recording tasks)
```
### 5. **Webhook Architecture**
- Create `views/jitsi.py` for Jitsi-specific webhooks
- Follow the same signature verification pattern
- Reuse existing participant tracking logic
## Implementation Checklist for Jitsi
- [ ] Create `video_platforms/jitsi/` directory structure
- [ ] Implement `JitsiClient` following the abstract interface
- [ ] Add Jitsi settings to configuration
- [ ] Register Jitsi platform in factory/registry
- [ ] Create Jitsi webhook endpoint
- [ ] Implement JWT token generation for room access
- [ ] Add Jitsi recording processing tasks
- [ ] Create comprehensive test suite
- [ ] Update database migrations for platform field
- [ ] Document Jitsi-specific configuration
## Conclusion
The video platforms refactoring in PR #529 provides an excellent foundation for adding Jitsi support. The architecture is well-designed with clear separation of concerns, consistent interfaces, and excellent extensibility. The daily.co implementation demonstrates how to add a new platform while maintaining backward compatibility and providing gradual migration capabilities.
The pattern should be directly applicable to Jitsi integration, with the main differences being:
- JWT-based authentication instead of API keys
- Different webhook event formats
- Jibri recording pipeline integration
- Self-hosted deployment considerations
This architecture successfully achieves the user's goals of:
1. Settings-based configuration
2. Generic database fields (no provider-specific columns)
3. Platform isolation in separate directories
4. Worker task organization within platform folders

server/.gitignore

@@ -176,7 +176,8 @@ artefacts/
 audio_*.wav
 # ignore local database
-reflector.sqlite3
+*.sqlite3
+*.db
 data/
 dump.rdb


@@ -1,7 +1,8 @@
 FROM python:3.12-slim
 ENV PYTHONUNBUFFERED=1 \
-UV_LINK_MODE=copy
+UV_LINK_MODE=copy \
+UV_NO_CACHE=1
 # builder install base dependencies
 WORKDIR /tmp
@@ -13,8 +14,8 @@ ENV PATH="/root/.local/bin/:$PATH"
 # install application dependencies
 RUN mkdir -p /app
 WORKDIR /app
-COPY pyproject.toml uv.lock /app/
-RUN touch README.md && env uv sync --compile-bytecode --locked
+COPY pyproject.toml uv.lock README.md /app/
+RUN uv sync --compile-bytecode --locked
 # pre-download nltk packages
 RUN uv run python -c "import nltk; nltk.download('punkt_tab'); nltk.download('averaged_perceptron_tagger_eng')"
@@ -26,4 +27,15 @@ COPY migrations /app/migrations
 COPY reflector /app/reflector
 WORKDIR /app
+# Create symlink for libgomp if it doesn't exist (for ARM64 compatibility)
+RUN if [ "$(uname -m)" = "aarch64" ] && [ ! -f /usr/lib/libgomp.so.1 ]; then \
+    LIBGOMP_PATH=$(find /app/.venv/lib -path "*/torch.libs/libgomp*.so.*" 2>/dev/null | head -n1); \
+    if [ -n "$LIBGOMP_PATH" ]; then \
+    ln -sf "$LIBGOMP_PATH" /usr/lib/libgomp.so.1; \
+    fi \
+    fi
+# Pre-check just to make sure the image will not fail
+RUN uv run python -c "import silero_vad.model"
 CMD ["./runserver.sh"]


@@ -40,3 +40,5 @@ uv run python -c "from reflector.pipelines.main_live_pipeline import task_pipeli
 ```bash
 uv run python -c "from reflector.pipelines.main_live_pipeline import pipeline_post; pipeline_post(transcript_id='TRANSCRIPT_ID')"
 ```
+.


@@ -0,0 +1,212 @@
# Event Logger for Docker-Jitsi-Meet
A Prosody module that logs Jitsi meeting events to JSONL files alongside recordings, enabling complete participant tracking and speaker statistics.
## Prerequisites
- Running docker-jitsi-meet installation
- Jibri configured for recording
## Installation
### Step 1: Copy the Module
Copy the Prosody module to your custom plugins directory:
```bash
# Create the directory if it doesn't exist
mkdir -p ~/.jitsi-meet-cfg/prosody/prosody-plugins-custom
# Copy the module
cp mod_event_logger.lua ~/.jitsi-meet-cfg/prosody/prosody-plugins-custom/
```
### Step 2: Update Your .env File
Add or modify these variables in your `.env` file:
```bash
# If XMPP_MUC_MODULES already exists, append event_logger
# Example: XMPP_MUC_MODULES=existing_module,event_logger
XMPP_MUC_MODULES=event_logger
# Optional: Configure the module (these are defaults)
JIBRI_RECORDINGS_PATH=/config/recordings
JIBRI_LOG_SPEAKER_STATS=true
JIBRI_SPEAKER_STATS_INTERVAL=10
```
**Important**: If you already have `XMPP_MUC_MODULES` defined, add `event_logger` to the comma-separated list:
```bash
# Existing modules + our module
XMPP_MUC_MODULES=mod_info,mod_alert,event_logger
```
### Step 3: Modify docker-compose.yml
Add a shared recordings volume so Prosody can write events alongside Jibri recordings:
```yaml
services:
prosody:
# ... existing configuration ...
volumes:
- ${CONFIG}/prosody/config:/config:Z
- ${CONFIG}/prosody/prosody-plugins-custom:/prosody-plugins-custom:Z
- ${CONFIG}/recordings:/config/recordings:Z # Add this line
environment:
# Add if not using .env file
- XMPP_MUC_MODULES=${XMPP_MUC_MODULES:-event_logger}
- JIBRI_RECORDINGS_PATH=/config/recordings
jibri:
# ... existing configuration ...
volumes:
- ${CONFIG}/jibri:/config:Z
- ${CONFIG}/recordings:/config/recordings:Z # Add this line
environment:
# For Reflector webhook integration (optional)
- REFLECTOR_WEBHOOK_URL=${REFLECTOR_WEBHOOK_URL:-}
- JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh
```
### Step 4: Add Finalize Script (Optional - For Reflector Integration)
If you want to notify Reflector when recordings complete:
```bash
# Copy the finalize script
cp finalize.sh ~/.jitsi-meet-cfg/jibri/finalize.sh
chmod +x ~/.jitsi-meet-cfg/jibri/finalize.sh
# Add to .env
REFLECTOR_WEBHOOK_URL=http://your-reflector-api:8000
```
### Step 5: Restart Services
```bash
docker-compose down
docker-compose up -d
```
## What Gets Created
After a recording, you'll find in `~/.jitsi-meet-cfg/recordings/{session-id}/`:
- `recording.mp4` - The video recording (created by Jibri)
- `metadata.json` - Basic metadata (created by Jibri)
- `events.jsonl` - Complete participant timeline (created by this module)
## Event Format
Each line in `events.jsonl` is a JSON object:
```json
{"type":"room_created","timestamp":1234567890,"room_name":"TestRoom","room_jid":"testroom@conference.meet.jitsi","meeting_url":"https://meet.jitsi/TestRoom"}
{"type":"recording_started","timestamp":1234567891,"room_name":"TestRoom","session_id":"20240115120000_TestRoom","jibri_jid":"jibri@recorder.meet.jitsi"}
{"type":"participant_joined","timestamp":1234567892,"room_name":"TestRoom","participant":{"jid":"user1@meet.jitsi/web","nick":"John Doe","id":"user1@meet.jitsi","is_moderator":false}}
{"type":"speaker_active","timestamp":1234567895,"room_name":"TestRoom","speaker_jid":"user1@meet.jitsi","speaker_nick":"John Doe","duration":10}
{"type":"participant_left","timestamp":1234567920,"room_name":"TestRoom","participant":{"jid":"user1@meet.jitsi/web","nick":"John Doe","duration_seconds":28}}
{"type":"recording_stopped","timestamp":1234567950,"room_name":"TestRoom","session_id":"20240115120000_TestRoom","meeting_url":"https://meet.jitsi/TestRoom"}
```
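Because each line is self-contained JSON, downstream consumers can fold the file into statistics with a few lines. A sketch that tallies per-participant time in the room (illustrative; not shipped with the module):

```python
import json
from collections import defaultdict

def participant_durations(path: str) -> dict:
    """Sum seconds between each participant's join and leave events."""
    joined, totals = {}, defaultdict(float)
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            jid = event.get("participant", {}).get("jid")
            if event["type"] == "participant_joined":
                joined[jid] = event["timestamp"]
            elif event["type"] == "participant_left" and jid in joined:
                totals[jid] += event["timestamp"] - joined.pop(jid)
    return dict(totals)
```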
## Configuration Options
All configuration can be done via environment variables:
| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `JIBRI_RECORDINGS_PATH` | `/config/recordings` | Path where recordings are stored |
| `JIBRI_LOG_SPEAKER_STATS` | `true` | Enable speaker statistics logging |
| `JIBRI_SPEAKER_STATS_INTERVAL` | `10` | Seconds between speaker stats updates |
## Verifying Installation
Check that the module is loaded:
```bash
docker-compose logs prosody | grep "Event Logger"
# Should see: "Event Logger loaded - writing to /config/recordings"
```
Check for events after a recording:
```bash
ls -la ~/.jitsi-meet-cfg/recordings/*/events.jsonl
cat ~/.jitsi-meet-cfg/recordings/*/events.jsonl | jq .
```
## Troubleshooting
### No events.jsonl file created
1. **Check module is enabled**:
```bash
docker-compose exec prosody grep -r "event_logger" /config
```
2. **Verify volume permissions**:
```bash
docker-compose exec prosody ls -la /config/recordings
```
3. **Check Prosody logs for errors**:
```bash
docker-compose logs prosody | grep -i error
```
### Module not loading
1. **Verify file exists in container**:
```bash
docker-compose exec prosody ls -la /prosody-plugins-custom/
```
2. **Check XMPP_MUC_MODULES format** (must be comma-separated, no spaces):
- ✅ Correct: `XMPP_MUC_MODULES=mod1,mod2,event_logger`
- ❌ Wrong: `XMPP_MUC_MODULES=mod1, mod2, event_logger`
## Common docker-compose.yml Patterns
### Minimal Addition (if you trust defaults)
```yaml
services:
prosody:
volumes:
- ${CONFIG}/recordings:/config/recordings:Z # Just add this
```
### Full Configuration
```yaml
services:
prosody:
volumes:
- ${CONFIG}/prosody/config:/config:Z
- ${CONFIG}/prosody/prosody-plugins-custom:/prosody-plugins-custom:Z
- ${CONFIG}/recordings:/config/recordings:Z
environment:
- XMPP_MUC_MODULES=event_logger
- JIBRI_RECORDINGS_PATH=/config/recordings
- JIBRI_LOG_SPEAKER_STATS=true
- JIBRI_SPEAKER_STATS_INTERVAL=10
jibri:
volumes:
- ${CONFIG}/jibri:/config:Z
- ${CONFIG}/recordings:/config/recordings:Z
environment:
- JIBRI_RECORDING_DIR=/config/recordings
- JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh
```
## Integration with Reflector
The finalize.sh script will automatically notify Reflector when a recording completes if `REFLECTOR_WEBHOOK_URL` is set. Reflector will receive:
```json
{
"session_id": "20240115120000_TestRoom",
"path": "20240115120000_TestRoom",
"meeting_url": "https://meet.jitsi/TestRoom"
}
```
Reflector then processes the recording along with the complete participant timeline from `events.jsonl`.


@@ -0,0 +1,49 @@
#!/bin/bash
# Jibri finalize script to notify Reflector when recording is complete
# This script is called by Jibri with the recording directory as argument
RECORDING_PATH="$1"
SESSION_ID=$(basename "$RECORDING_PATH")
METADATA_FILE="$RECORDING_PATH/metadata.json"
# Extract meeting URL from Jibri's metadata
MEETING_URL=""
if [ -f "$METADATA_FILE" ]; then
MEETING_URL=$(jq -r '.meeting_url' "$METADATA_FILE" 2>/dev/null || echo "")
fi
echo "[$(date)] Recording finalized: $RECORDING_PATH"
echo "[$(date)] Session ID: $SESSION_ID"
echo "[$(date)] Meeting URL: $MEETING_URL"
# Check if events.jsonl was created by our Prosody module
if [ -f "$RECORDING_PATH/events.jsonl" ]; then
EVENT_COUNT=$(wc -l < "$RECORDING_PATH/events.jsonl")
echo "[$(date)] Found events.jsonl with $EVENT_COUNT events"
else
echo "[$(date)] Warning: No events.jsonl found"
fi
# Notify Reflector if webhook URL is configured
if [ -n "$REFLECTOR_WEBHOOK_URL" ]; then
echo "[$(date)] Notifying Reflector at: $REFLECTOR_WEBHOOK_URL"
RESPONSE=$(curl -s -w "\n%{http_code}" -X POST "$REFLECTOR_WEBHOOK_URL/api/v1/jibri/recording-ready" \
-H "Content-Type: application/json" \
-d "{\"session_id\":\"$SESSION_ID\",\"path\":\"$SESSION_ID\",\"meeting_url\":\"$MEETING_URL\"}")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | sed '$d')
if [ "$HTTP_CODE" = "200" ]; then
echo "[$(date)] Reflector notified successfully"
echo "[$(date)] Response: $BODY"
else
echo "[$(date)] Failed to notify Reflector. HTTP code: $HTTP_CODE"
echo "[$(date)] Response: $BODY"
fi
else
echo "[$(date)] No REFLECTOR_WEBHOOK_URL configured, skipping notification"
fi
echo "[$(date)] Finalize script completed"


@@ -0,0 +1,372 @@
local json = require "util.json"
local st = require "util.stanza"
local jid_bare = require "util.jid".bare
local recordings_path = os.getenv("JIBRI_RECORDINGS_PATH") or
module:get_option_string("jibri_recordings_path", "/recordings")
-- room_jid -> { session_id, participants = {jid -> info} }
local active_recordings = {}
-- room_jid -> { participants = {jid -> info}, created_at }
local room_states = {}
local function get_timestamp()
return os.time()
end
local function write_event(session_id, event)
if not session_id then
module:log("warn", "No session_id for event: %s", event.type)
return
end
local session_dir = string.format("%s/%s", recordings_path, session_id)
local event_file = string.format("%s/events.jsonl", session_dir)
module:log("info", "Writing event %s to %s", event.type, event_file)
-- Create directory
local mkdir_cmd = string.format("mkdir -p '%s' 2>&1", session_dir)
local mkdir_result = os.execute(mkdir_cmd)
module:log("debug", "mkdir result: %s", tostring(mkdir_result))
local file, err = io.open(event_file, "a")
if file then
local json_str = json.encode(event)
file:write(json_str .. "\n")
file:close()
module:log("info", "Successfully wrote event %s", event.type)
else
module:log("error", "Failed to write event to %s: %s", event_file, err)
end
end
local function extract_participant_info(occupant)
local info = {
jid = occupant.jid,
bare_jid = occupant.bare_jid,
nick = occupant.nick,
display_name = nil,
role = occupant.role
}
local presence = occupant:get_presence()
if presence then
local nick_element = presence:get_child("nick", "http://jabber.org/protocol/nick")
if nick_element then
info.display_name = nick_element:get_text()
end
local identity = presence:get_child("identity")
if identity then
local user = identity:get_child("user")
if user then
local name = user:get_child("name")
if name then
info.display_name = name:get_text()
end
local id_element = user:get_child("id")
if id_element then
info.id = id_element:get_text()
end
end
end
if not info.display_name and occupant.nick then
local _, _, resource = occupant.nick:match("([^@]+)@([^/]+)/(.+)")
if resource then
info.display_name = resource
end
end
end
return info
end
local function get_room_participant_count(room)
local count = 0
for _ in room:each_occupant() do
count = count + 1
end
return count
end
local function snapshot_room_participants(room)
local participants = {}
local total = 0
local skipped = 0
module:log("info", "Snapshotting room participants")
for _, occupant in room:each_occupant() do
total = total + 1
-- Skip recorders (Jibri)
if occupant.bare_jid and (occupant.bare_jid:match("^recorder@") or
occupant.bare_jid:match("^jibri@")) then
skipped = skipped + 1
else
local info = extract_participant_info(occupant)
participants[occupant.jid] = info
module:log("debug", "Added participant: %s", info.display_name or info.bare_jid)
end
end
module:log("info", "Snapshot: %d total, %d participants", total, total - skipped)
return participants
end
-- Import utility functions if available
local util = module:require "util";
local get_room_from_jid = util.get_room_from_jid;
local room_jid_match_rewrite = util.room_jid_match_rewrite;
-- Main IQ handler for Jibri stanzas
module:hook("pre-iq/full", function(event)
local stanza = event.stanza
if stanza.name ~= "iq" then
return
end
local jibri = stanza:get_child('jibri', 'http://jitsi.org/protocol/jibri')
if not jibri then
return
end
module:log("info", "=== Jibri IQ intercepted ===")
local action = jibri.attr.action
local session_id = jibri.attr.session_id
local room_jid = jibri.attr.room
local recording_mode = jibri.attr.recording_mode
local app_data = jibri.attr.app_data
module:log("info", "Jibri %s - session: %s, room: %s, mode: %s",
action or "?", session_id or "?", room_jid or "?", recording_mode or "?")
if not room_jid or not session_id then
module:log("warn", "Missing room_jid or session_id")
return
end
-- Get the room using util function
local room = get_room_from_jid(room_jid_match_rewrite(jid_bare(stanza.attr.to)))
if not room then
-- Try with the room_jid directly
room = get_room_from_jid(room_jid)
end
if not room then
module:log("error", "Room not found for jid: %s", room_jid)
return
end
module:log("info", "Room found: %s", room:get_name() or room_jid)
if action == "start" then
module:log("info", "Recording START for session %s", session_id)
-- Count and snapshot participants
local participant_count = 0
for _ in room:each_occupant() do
participant_count = participant_count + 1
end
local participants = snapshot_room_participants(room)
local participant_list = {}
for jid, info in pairs(participants) do
table.insert(participant_list, info)
end
active_recordings[room_jid] = {
session_id = session_id,
participants = participants,
started_at = get_timestamp()
}
write_event(session_id, {
type = "recording_started",
timestamp = get_timestamp(),
room_jid = room_jid,
room_name = room:get_name(),
session_id = session_id,
recording_mode = recording_mode,
app_data = app_data,
participant_count = participant_count,
participants_at_start = participant_list
})
elseif action == "stop" then
module:log("info", "Recording STOP for session %s", session_id)
local recording = active_recordings[room_jid]
if recording and recording.session_id == session_id then
write_event(session_id, {
type = "recording_stopped",
timestamp = get_timestamp(),
room_jid = room_jid,
room_name = room:get_name(),
session_id = session_id,
duration = get_timestamp() - recording.started_at,
participant_count = get_room_participant_count(room)
})
active_recordings[room_jid] = nil
else
module:log("warn", "No active recording found for room %s", room_jid)
end
end
end);
-- Room and participant event hooks
local function setup_room_hooks(host_module)
module:log("info", "Setting up room hooks on %s", host_module.host or "unknown")
-- Room created
host_module:hook("muc-room-created", function(event)
local room = event.room
local room_jid = room.jid
room_states[room_jid] = {
participants = {},
created_at = get_timestamp()
}
module:log("info", "Room created: %s", room_jid)
end)
-- Room destroyed
host_module:hook("muc-room-destroyed", function(event)
local room = event.room
local room_jid = room.jid
room_states[room_jid] = nil
active_recordings[room_jid] = nil
module:log("info", "Room destroyed: %s", room_jid)
end)
-- Occupant joined
host_module:hook("muc-occupant-joined", function(event)
local room = event.room
local occupant = event.occupant
local room_jid = room.jid
-- Skip recorders
if occupant.bare_jid and (occupant.bare_jid:match("^recorder@") or
occupant.bare_jid:match("^jibri@")) then
return
end
local participant_info = extract_participant_info(occupant)
-- Update room state
if room_states[room_jid] then
room_states[room_jid].participants[occupant.jid] = participant_info
end
-- Log to active recording if exists
local recording = active_recordings[room_jid]
if recording then
recording.participants[occupant.jid] = participant_info
write_event(recording.session_id, {
type = "participant_joined",
timestamp = get_timestamp(),
room_jid = room_jid,
room_name = room:get_name(),
participant = participant_info,
participant_count = get_room_participant_count(room)
})
end
module:log("info", "Participant joined %s: %s (%d total)",
room:get_name() or room_jid,
participant_info.display_name or participant_info.bare_jid,
get_room_participant_count(room))
end)
-- Occupant left
host_module:hook("muc-occupant-left", function(event)
local room = event.room
local occupant = event.occupant
local room_jid = room.jid
-- Skip recorders
if occupant.bare_jid and (occupant.bare_jid:match("^recorder@") or
occupant.bare_jid:match("^jibri@")) then
return
end
local participant_info = extract_participant_info(occupant)
-- Update room state
if room_states[room_jid] then
room_states[room_jid].participants[occupant.jid] = nil
end
-- Log to active recording if exists
local recording = active_recordings[room_jid]
if recording then
if recording.participants[occupant.jid] then
recording.participants[occupant.jid] = nil
end
write_event(recording.session_id, {
type = "participant_left",
timestamp = get_timestamp(),
room_jid = room_jid,
room_name = room:get_name(),
participant = participant_info,
participant_count = get_room_participant_count(room)
})
end
module:log("info", "Participant left %s: %s (%d remaining)",
room:get_name() or room_jid,
participant_info.display_name or participant_info.bare_jid,
get_room_participant_count(room))
end)
end
-- Module initialization
local current_host = module:get_host()
local host_type = module:get_host_type()
module:log("info", "Event Logger loading on %s (type: %s)", current_host, host_type or "unknown")
module:log("info", "Recording path: %s", recordings_path)
-- Setup room hooks based on host type
if host_type == "component" and current_host:match("^[^.]+%.") then
setup_room_hooks(module)
else
-- Try to find and hook to MUC component
local process_host_module = util.process_host_module
local muc_component_host = module:get_option_string("muc_component") or
module:get_option_string("main_muc")
if not muc_component_host then
local possible_hosts = {
"muc." .. current_host,
"conference." .. current_host,
"rooms." .. current_host
}
for _, host in ipairs(possible_hosts) do
if prosody.hosts[host] then
muc_component_host = host
module:log("info", "Auto-detected MUC component: %s", muc_component_host)
break
end
end
end
if muc_component_host then
process_host_module(muc_component_host, function(host_module, host)
module:log("info", "Hooking to MUC events on %s", host)
setup_room_hooks(host_module)
end)
else
module:log("error", "Could not find MUC component")
end
end


@@ -0,0 +1,95 @@
# Data Retention and Cleanup
## Overview
For public instances of Reflector, a data retention policy is automatically enforced to delete anonymous user data after a configurable period (default: 7 days). This ensures compliance with privacy expectations and prevents unbounded storage growth.
## Configuration
### Environment Variables
- `PUBLIC_MODE` (bool): Must be set to `true` to enable automatic cleanup
- `PUBLIC_DATA_RETENTION_DAYS` (int): Number of days to retain anonymous data (default: 7)
### What Gets Deleted
When data reaches the retention period, the following items are automatically removed:
1. **Transcripts** from anonymous users (where `user_id` is NULL):
- Database records
- Local files (audio.wav, audio.mp3, audio.json waveform)
- Storage files (cloud storage if configured)
## Automatic Cleanup
### Celery Beat Schedule
When `PUBLIC_MODE=true`, a Celery beat task runs daily at 3 AM to clean up old data:
```python
# Automatically scheduled when PUBLIC_MODE=true
"cleanup_old_public_data": {
"task": "reflector.worker.cleanup.cleanup_old_public_data",
"schedule": crontab(hour=3, minute=0), # Daily at 3 AM
}
```
### Running the Worker
Ensure both Celery worker and beat scheduler are running:
```bash
# Start Celery worker
uv run celery -A reflector.worker.app worker --loglevel=info
# Start Celery beat scheduler (in another terminal)
uv run celery -A reflector.worker.app beat
```
## Manual Cleanup
For testing or manual intervention, use the cleanup tool:
```bash
# Delete data older than 7 days (default)
uv run python -m reflector.tools.cleanup_old_data
# Delete data older than 30 days
uv run python -m reflector.tools.cleanup_old_data --days 30
```
Note: The manual tool uses the same implementation as the Celery worker task to ensure consistency.
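Conceptually, the shared implementation reduces to a cutoff query plus per-record deletion; a sketch with hypothetical controller names:

```python
from datetime import datetime, timedelta, timezone

async def cleanup_old_public_data(days: int = 7) -> int:
    """Delete anonymous transcripts (user_id IS NULL) older than the cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    deleted = 0
    for transcript in await transcripts_controller.list_anonymous_older_than(cutoff):
        await transcripts_controller.delete(transcript.id)  # removes DB row and files
        deleted += 1
    return deleted
```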
## Important Notes
1. **User Data Deletion**: Only anonymous data (where `user_id` is NULL) is deleted. Authenticated user data is preserved.
2. **Storage Cleanup**: The system properly cleans up both local files and cloud storage when configured.
3. **Error Handling**: If individual deletions fail, the cleanup continues and logs errors. Failed deletions are reported in the task output.
4. **Public Instance Only**: The automatic cleanup task only runs when `PUBLIC_MODE=true` to prevent accidental data loss in private deployments.
## Testing
Run the cleanup tests:
```bash
uv run pytest tests/test_cleanup.py -v
```
## Monitoring
Check Celery logs for cleanup task execution:
```bash
# Look for cleanup task logs
grep "cleanup_old_public_data" celery.log
grep "Starting cleanup of old public data" celery.log
```
Task statistics are logged after each run:
- Number of transcripts deleted
- Number of meetings deleted
- Number of orphaned recordings deleted
- Any errors encountered


@@ -0,0 +1,194 @@
## Reflector GPU Transcription API (Specification)
This document defines the Reflector GPU transcription API that all implementations must adhere to. Current implementations include NVIDIA Parakeet (NeMo) and Whisper (faster-whisper), both deployed on Modal.com. The API surface and response shapes are OpenAI/Whisper-compatible, so clients can switch implementations by changing only the base URL.
### Base URL and Authentication
- Example base URLs (Modal web endpoints):
- Parakeet: `https://<account>--reflector-transcriber-parakeet-web.modal.run`
- Whisper: `https://<account>--reflector-transcriber-web.modal.run`
- All endpoints are served under `/v1` and require a Bearer token:
```
Authorization: Bearer <REFLECTOR_GPU_APIKEY>
```
Note: To switch implementations, deploy the desired variant and point `TRANSCRIPT_URL` to its base URL. The API is identical.
### Supported file types
`mp3, mp4, mpeg, mpga, m4a, wav, webm`
### Models and languages
- Parakeet (NVIDIA NeMo): default `nvidia/parakeet-tdt-0.6b-v2`
- Language support: only `en`. Other languages return HTTP 400.
- Whisper (faster-whisper): default `large-v2` (or deployment-specific)
- Language support: multilingual (per Whisper model capabilities).
Note: The `model` parameter is accepted by all implementations for interface parity. Some backends may treat it as informational.
### Endpoints
#### POST /v1/audio/transcriptions
Transcribe one or more uploaded audio files.
Request: multipart/form-data
- `file` (File) — optional. Single file to transcribe.
- `files` (File[]) — optional. One or more files to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`.
- Parakeet: only `en` is accepted; other values return HTTP 400
- Whisper: model-dependent; typically multilingual
- `batch` (boolean) — optional, defaults to `false`.
Notes:
- Provide either `file` or `files`, not both. If neither is provided, HTTP 400.
- `batch` requires `files`; using `batch=true` without `files` returns HTTP 400.
- Response shape for multiple files is the same regardless of `batch`.
- Files sent to this endpoint are processed in a single pass (no VAD/chunking). This is intended for short clips (roughly ≤ 30s; depends on GPU memory/model). For longer audio, prefer `/v1/audio/transcriptions-from-url` which supports VAD-based chunking.
Responses
Single file response:
```json
{
"text": "transcribed text",
"words": [
{ "word": "hello", "start": 0.0, "end": 0.5 },
{ "word": "world", "start": 0.5, "end": 1.0 }
],
"filename": "audio.mp3"
}
```
Multiple files response:
```json
{
"results": [
{"filename": "a1.mp3", "text": "...", "words": [...]},
{"filename": "a2.mp3", "text": "...", "words": [...]}]
}
```
Notes:
- Word objects always include keys: `word`, `start`, `end`.
- Some implementations may include a trailing space in `word` to match Whisper tokenization behavior; clients should trim if needed.
Example curl (single file):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "file=@/path/to/audio.mp3" \
-F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
Example curl (multiple files, batch):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "files=@/path/a1.mp3" -F "files=@/path/a2.mp3" \
-F "batch=true" -F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
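The same request can be issued from Python; a sketch using `requests` (the library choice is ours, not part of the spec):

```python
import os

import requests

BASE_URL = os.environ["TRANSCRIPT_URL"]
HEADERS = {"Authorization": f"Bearer {os.environ['REFLECTOR_GPU_APIKEY']}"}

with open("audio.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/v1/audio/transcriptions",
        headers=HEADERS,
        files={"file": ("audio.mp3", f, "audio/mpeg")},
        data={"language": "en"},
    )
resp.raise_for_status()
result = resp.json()
# word tokens may carry a trailing space (Whisper tokenization); trim if needed
words = [{**w, "word": w["word"].strip()} for w in result["words"]]
```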
#### POST /v1/audio/transcriptions-from-url
Transcribe a single remote audio file by URL.
Request: application/json
Body parameters:
- `audio_file_url` (string) — required. URL of the audio file to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`. Parakeet only accepts `en`.
- `timestamp_offset` (number) — optional, defaults to `0.0`. Added to each word's `start`/`end` in the response.
```json
{
"audio_file_url": "https://example.com/audio.mp3",
"model": "nvidia/parakeet-tdt-0.6b-v2",
"language": "en",
"timestamp_offset": 0.0
}
```
Response:
```json
{
"text": "transcribed text",
"words": [
{ "word": "hello", "start": 10.0, "end": 10.5 },
{ "word": "world", "start": 10.5, "end": 11.0 }
]
}
```
Notes:
- `timestamp_offset` is added to each word's `start`/`end` in the response.
- Implementations may perform VAD-based chunking and batching for long-form audio; word timings are adjusted accordingly.
Example curl:
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-H "Content-Type: application/json" \
-d '{
"audio_file_url": "https://example.com/audio.mp3",
"language": "en",
"timestamp_offset": 0
}' \
"$BASE_URL/v1/audio/transcriptions-from-url"
```
### Error handling
- 400 Bad Request
- Parakeet: `language` other than `en`
- Missing required parameters (`file`/`files` for upload; `audio_file_url` for URL endpoint)
- Unsupported file extension
- 401 Unauthorized
- Missing or invalid Bearer token
- 404 Not Found
- The file at `audio_file_url` does not exist
### Implementation details
- GPUs: A10G for small-file/live, L40S for large-file URL transcription (subject to deployment)
- VAD chunking and segment batching; word timings adjusted and overlapping ends constrained
- Pads very short segments (< 0.5s) to avoid model crashes on some backends
### Server configuration (Reflector API)
Set the Reflector server to use the Modal backend and point `TRANSCRIPT_URL` to your chosen deployment:
```
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://<account>--reflector-transcriber-parakeet-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<REFLECTOR_GPU_APIKEY>
```
### Conformance tests
Use the pytest-based conformance tests to validate any new implementation (including self-hosted) against this spec:
```
TRANSCRIPT_URL=https://<your-deployment-base> \
TRANSCRIPT_MODAL_API_KEY=your-api-key \
uv run -m pytest -m gpu_modal --no-cov server/tests/test_gpu_modal_transcript.py
```


@@ -0,0 +1,493 @@
# Jitsi Integration Configuration Guide
This guide provides step-by-step instructions for configuring Reflector to work with a self-hosted Jitsi Meet installation for video meetings and recording.
## Prerequisites
Before configuring Jitsi integration, ensure you have:
- **Self-hosted Jitsi Meet installation** (version 2.0.8922 or later recommended)
- **Jibri recording service** configured and running
- **Prosody XMPP server** with mod_event_sync module installed
- **Docker or system deployment** of Reflector with access to environment variables
- **SSL certificates** for secure communication between services
## Environment Configuration
Add the following environment variables to your Reflector deployment:
### Required Settings
```bash
# Jitsi Meet domain (without https://)
JITSI_DOMAIN=meet.example.com
# JWT secret for room authentication (generate with: openssl rand -hex 32)
JITSI_JWT_SECRET=your-64-character-hex-secret-here
# Webhook secret for secure event handling (generate with: openssl rand -hex 16)
JITSI_WEBHOOK_SECRET=your-32-character-hex-secret-here
# Application identifier (should match Jitsi configuration)
JITSI_APP_ID=reflector
# JWT issuer and audience (should match Jitsi configuration)
JITSI_JWT_ISSUER=reflector
JITSI_JWT_AUDIENCE=jitsi
```
### Example .env Configuration
```bash
# Add to your server/.env file
JITSI_DOMAIN=meet.mycompany.com
JITSI_JWT_SECRET=$(openssl rand -hex 32)
JITSI_WEBHOOK_SECRET=$(openssl rand -hex 16)
JITSI_APP_ID=reflector
JITSI_JWT_ISSUER=reflector
JITSI_JWT_AUDIENCE=jitsi
```
## Jitsi Meet Server Configuration
### 1. JWT Authentication Setup
Edit `/etc/prosody/conf.d/[YOUR_DOMAIN].cfg.lua`:
```lua
VirtualHost "meet.example.com"
authentication = "token"
app_id = "reflector"
app_secret = "your-jwt-secret-here"
-- Allow anonymous access for non-authenticated users
c2s_require_encryption = false
admins = { "focusUser@auth.meet.example.com" }
modules_enabled = {
"bosh";
"pubsub";
"ping";
"roster";
"saslauth";
"tls";
"dialback";
"disco";
"carbons";
"pep";
"private";
"blocklist";
"vcard";
"version";
"uptime";
"time";
"ping";
"register";
"admin_adhoc";
"token_verification";
"event_sync"; -- Required for webhook events
}
```
### 2. Room Access Control
Edit `/etc/jitsi/meet/meet.example.com-config.js`:
```javascript
var config = {
hosts: {
domain: 'meet.example.com',
muc: 'conference.meet.example.com'
},
// Enable JWT authentication
enableUserRolesBasedOnToken: true,
// Recording configuration
fileRecordingsEnabled: true,
liveStreamingEnabled: false,
// Reflector-specific settings
prejoinPageEnabled: true,
requireDisplayName: true,
};
```
### 3. Interface Configuration
Edit `/usr/share/jitsi-meet/interface_config.js`:
```javascript
var interfaceConfig = {
// Customize for Reflector branding
APP_NAME: 'Reflector Meeting',
DEFAULT_WELCOME_PAGE_LOGO_URL: 'https://your-domain.com/logo.png',
// Hide unnecessary buttons
TOOLBAR_BUTTONS: [
'microphone', 'camera', 'closedcaptions', 'desktop',
'fullscreen', 'fodeviceselection', 'hangup',
'chat', 'recording', 'livestreaming', 'etherpad',
'sharedvideo', 'settings', 'raisehand', 'videoquality',
'filmstrip', 'invite', 'feedback', 'stats', 'shortcuts',
'tileview', 'videobackgroundblur', 'download', 'help',
'mute-everyone'
]
};
```
## Jibri Configuration
### 1. Recording Service Setup
Edit `/etc/jitsi/jibri/jibri.conf`:
```hocon
jibri {
recording {
recordings-directory = "/var/recordings"
finalize-script = "/opt/jitsi/jibri/finalize.sh"
}
api {
xmpp {
environments = [{
name = "prod environment"
xmpp-server-hosts = ["meet.example.com"]
xmpp-domain = "meet.example.com"
control-muc {
domain = "internal.auth.meet.example.com"
room-name = "JibriBrewery"
nickname = "jibri-nickname"
}
control-login {
domain = "auth.meet.example.com"
username = "jibri"
password = "jibri-password"
}
}]
}
}
}
```
### 2. Finalize Script Setup
Create `/opt/jitsi/jibri/finalize.sh`:
```bash
#!/bin/bash
# Jibri finalize script for Reflector integration
RECORDING_FILE="$1"
ROOM_NAME="$2"
REFLECTOR_API_URL="${REFLECTOR_API_URL:-http://localhost:1250}"
WEBHOOK_SECRET="${JITSI_WEBHOOK_SECRET}"
# Generate webhook signature
generate_signature() {
local payload="$1"
echo -n "$payload" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | cut -d' ' -f2
}
# Prepare webhook payload
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%S.%3NZ)
PAYLOAD=$(cat <<EOF
{
"room_name": "$ROOM_NAME",
"recording_file": "$RECORDING_FILE",
"recording_status": "completed",
"timestamp": "$TIMESTAMP"
}
EOF
)
# Generate signature
SIGNATURE=$(generate_signature "$PAYLOAD")
# Send webhook to Reflector
curl -X POST "$REFLECTOR_API_URL/v1/jibri/recording-complete" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: $SIGNATURE" \
-d "$PAYLOAD" \
--max-time 30
echo "Recording finalization webhook sent for room: $ROOM_NAME"
```
Make the script executable:
```bash
chmod +x /opt/jitsi/jibri/finalize.sh
```
## Prosody Event Configuration
### 1. Event-Sync Module Installation
Install the mod_event_sync module:
```bash
# Download the module
cd /usr/share/jitsi-meet/prosody-plugins/
wget https://raw.githubusercontent.com/jitsi-contrib/prosody-plugins/main/mod_event_sync.lua
# Or if using git
git clone https://github.com/jitsi-contrib/prosody-plugins.git
cp prosody-plugins/mod_event_sync.lua /usr/share/jitsi-meet/prosody-plugins/
```
### 2. Webhook Configuration
Add to `/etc/prosody/conf.d/[YOUR_DOMAIN].cfg.lua`:
```lua
Component "conference.meet.example.com" "muc"
storage = "memory"
modules_enabled = {
"muc_meeting_id";
"muc_domain_mapper";
"polls";
"event_sync"; -- Enable event sync
}
-- Event sync webhook configuration
event_sync_url = "https://your-reflector-domain.com/v1/jitsi/events"
event_sync_secret = "your-webhook-secret-here"
-- Events to track
event_sync_events = {
"muc-occupant-joined",
"muc-occupant-left",
"jibri-recording-on",
"jibri-recording-off"
}
```
### 3. Restart Services
After configuration changes, restart all services:
```bash
systemctl restart prosody
systemctl restart jicofo
systemctl restart jitsi-videobridge2
systemctl restart jibri
systemctl restart nginx
```
## Reflector Room Configuration
### 1. Create Jitsi Room
When creating rooms in Reflector, set the platform field:
```bash
curl -X POST "https://your-reflector-domain.com/v1/rooms" \
-H "Authorization: Bearer $AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-jitsi-room",
"platform": "jitsi",
"recording_type": "cloud",
"recording_trigger": "automatic-2nd-participant",
"is_locked": false,
"room_mode": "normal"
}'
```
### 2. Meeting Creation
Meetings will automatically use Jitsi when the room platform is set to "jitsi":
```bash
curl -X POST "https://your-reflector-domain.com/v1/rooms/my-jitsi-room/meeting" \
-H "Authorization: Bearer $AUTH_TOKEN"
```
## Testing the Integration
### 1. Health Check
Verify Jitsi webhook configuration:
```bash
curl "https://your-reflector-domain.com/v1/jitsi/health"
```
Expected response:
```json
{
"status": "ok",
"service": "jitsi-webhooks",
"timestamp": "2025-01-15T10:30:00.000Z",
"webhook_secret_configured": true
}
```
### 2. Room Creation Test
1. Create a Jitsi room via Reflector API
2. Start a meeting - should generate Jitsi Meet URL with JWT token
3. Join with multiple participants - should trigger participant events
4. Start recording - should trigger Jibri recording workflow
### 3. Webhook Event Test
Monitor Reflector logs for incoming webhook events:
```bash
# Check for participant events
curl -X POST "https://your-reflector-domain.com/v1/jitsi/events" \
-H "Content-Type: application/json" \
-H "X-Jitsi-Signature: test-signature" \
-d '{
"event": "muc-occupant-joined",
"room": "test-room-name",
"timestamp": "2025-01-15T10:30:00.000Z",
"data": {}
}'
```
## Troubleshooting
### Common Issues
#### JWT Authentication Failures
**Symptoms:** Users can't join rooms, "Authentication failed" errors
**Solutions:**
1. Verify JWT secret matches between Jitsi and Reflector
2. Check JWT token expiration (default 8 hours)
3. Ensure system clocks are synchronized
4. Validate JWT issuer/audience configuration
```bash
# Debug JWT tokens
echo "JWT_TOKEN_HERE" | cut -d'.' -f2 | base64 -d | jq
```
#### Webhook Events Not Received
**Symptoms:** Participant counts not updating, recording events missing
**Solutions:**
1. Verify event_sync module is loaded in Prosody
2. Check webhook URL accessibility from Jitsi server
3. Validate webhook signature generation
4. Review Prosody and Reflector logs
```bash
# Test webhook connectivity
curl -v "https://your-reflector-domain.com/v1/jitsi/health"
# Check Prosody logs
tail -f /var/log/prosody/prosody.log
# Check Reflector logs
docker logs your-reflector-container
```
#### Recording Issues
**Symptoms:** Recordings not starting, finalize script errors
**Solutions:**
1. Verify Jibri service status and configuration
2. Check recording directory permissions
3. Validate finalize script execution permissions
4. Monitor Jibri logs for errors
```bash
# Check Jibri status
systemctl status jibri
# Test finalize script
sudo -u jibri /opt/jitsi/jibri/finalize.sh "/test/recording.mp4" "test-room"
# Check Jibri logs
journalctl -u jibri -f
```
### Debug Commands
```bash
# Verify Jitsi configuration
prosodyctl check config
# Test JWT generation
curl -X POST "https://your-reflector-domain.com/v1/rooms/test/meeting" \
-H "Authorization: Bearer $TOKEN" -v
# Monitor webhook events
tail -f /var/log/reflector/app.log | grep jitsi
# Check room participant counts
curl "https://your-reflector-domain.com/v1/rooms" \
-H "Authorization: Bearer $TOKEN" | jq '.data[].num_clients'
```
### Performance Optimization
#### For High-Concurrency Usage
1. **Jitsi Videobridge Tuning:**
```bash
# /etc/jitsi/videobridge/sip-communicator.properties
org.jitsi.videobridge.STATISTICS_INTERVAL=5000
org.jitsi.videobridge.load.INITIAL_STREAM_LIMIT=50
```
2. **Database Connection Pooling:**
```python
# In your Reflector settings
DATABASE_POOL_SIZE=20
DATABASE_MAX_OVERFLOW=30
```
3. **Redis Configuration:**
```bash
# For webhook event caching
REDIS_URL=redis://localhost:6379/1
WEBHOOK_EVENT_TTL=3600
```
## Security Considerations
### Network Security
- Use HTTPS/WSS for all communications
- Implement proper firewall rules
- Consider VPN for server-to-server communication
### Authentication Security
- Rotate JWT secrets regularly
- Use strong webhook secrets (32+ characters)
- Implement rate limiting on webhook endpoints
### Recording Security
- Encrypt recordings at rest
- Implement access controls for recording files
- Regular security audits of file permissions
## Support
For additional support:
1. **Reflector Issues:** Check GitHub issues or create new ones
2. **Jitsi Community:** [Community Forum](https://community.jitsi.org/)
3. **Documentation:** [Jitsi Developer Guide](https://jitsi.github.io/handbook/)
## Migration from Whereby
If migrating from Whereby integration:
1. Update existing rooms to use "jitsi" platform
2. Verify webhook configurations are updated
3. Test recording workflows thoroughly
4. Monitor participant event accuracy
5. Update any custom integrations using meeting APIs
The platform abstraction layer ensures smooth migration with minimal API changes.

server/docs/webhook.md

@@ -0,0 +1,212 @@
# Reflector Webhook Documentation
## Overview
Reflector supports webhook notifications to notify external systems when transcript processing is completed. Webhooks can be configured per room and are triggered automatically after a transcript is successfully processed.
## Configuration
Webhooks are configured at the room level with two fields:
- `webhook_url`: The HTTPS endpoint to receive webhook notifications
- `webhook_secret`: Optional secret key for HMAC signature verification (auto-generated if not provided)
## Events
### `transcript.completed`
Triggered when a transcript has been fully processed, including transcription, diarization, summarization, and topic detection.
### `test`
A test event that can be triggered manually to verify webhook configuration.
## Webhook Request Format
### Headers
All webhook requests include the following headers:
| Header | Description | Example |
|--------|-------------|---------|
| `Content-Type` | Always `application/json` | `application/json` |
| `User-Agent` | Identifies Reflector as the source | `Reflector-Webhook/1.0` |
| `X-Webhook-Event` | The event type | `transcript.completed` or `test` |
| `X-Webhook-Retry` | Current retry attempt number | `0`, `1`, `2`... |
| `X-Webhook-Signature` | HMAC signature (if secret configured) | `t=1735306800,v1=abc123...` |
### Signature Verification
If a webhook secret is configured, Reflector includes an HMAC-SHA256 signature in the `X-Webhook-Signature` header to verify the webhook authenticity.
The signature format is: `t={timestamp},v1={signature}`
To verify the signature:
1. Extract the timestamp and signature from the header
2. Create the signed payload: `{timestamp}.{request_body}`
3. Compute HMAC-SHA256 of the signed payload using your webhook secret
4. Compare the computed signature with the received signature
Example verification (Python):
```python
import hmac
import hashlib
def verify_webhook_signature(payload: bytes, signature_header: str, secret: str) -> bool:
# Parse header: "t=1735306800,v1=abc123..."
parts = dict(part.split("=") for part in signature_header.split(","))
timestamp = parts["t"]
received_signature = parts["v1"]
# Create signed payload
signed_payload = f"{timestamp}.{payload.decode('utf-8')}"
# Compute expected signature
expected_signature = hmac.new(
secret.encode("utf-8"),
signed_payload.encode("utf-8"),
hashlib.sha256
).hexdigest()
# Compare signatures
return hmac.compare_digest(expected_signature, received_signature)
```
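As a usage illustration, a minimal receiver reusing `verify_webhook_signature` from the example above might look like the following. This is a sketch assuming FastAPI; the route path, secret handling, and five-minute staleness window are arbitrary choices, not part of Reflector's contract:
```python
import time

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = "replace-with-your-room-webhook-secret"  # assumption: load from config

@app.post("/webhooks/reflector")  # hypothetical route
async def reflector_webhook(
    request: Request,
    x_webhook_event: str = Header(...),
    x_webhook_signature: str | None = Header(None),
):
    body = await request.body()
    if x_webhook_signature is not None:
        if not verify_webhook_signature(body, x_webhook_signature, WEBHOOK_SECRET):
            raise HTTPException(status_code=401, detail="Invalid signature")
        # Optionally reject stale deliveries (the replay window is your choice)
        timestamp = int(x_webhook_signature.split(",")[0].split("=", 1)[1])
        if abs(time.time() - timestamp) > 300:
            raise HTTPException(status_code=401, detail="Timestamp too old")
    # Respond quickly with a 2xx; slow responses count as timeouts and are retried
    return {"ok": True}
```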
## Event Payloads
### `transcript.completed` Event
This event includes a convenient URL for accessing the transcript:
- `frontend_url`: Direct link to view the transcript in the web interface
```json
{
"event": "transcript.completed",
"event_id": "transcript.completed-abc-123-def-456",
"timestamp": "2025-08-27T12:34:56.789012Z",
"transcript": {
"id": "abc-123-def-456",
"room_id": "room-789",
"created_at": "2025-08-27T12:00:00Z",
"duration": 1800.5,
"title": "Q3 Product Planning Meeting",
"short_summary": "Team discussed Q3 product roadmap, prioritizing mobile app features and API improvements.",
"long_summary": "The product team met to finalize the Q3 roadmap. Key decisions included...",
"webvtt": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v Speaker 1>Welcome everyone to today's meeting...",
"topics": [
{
"title": "Introduction and Agenda",
"summary": "Meeting kickoff with agenda review",
"timestamp": 0.0,
"duration": 120.0,
"webvtt": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v Speaker 1>Welcome everyone..."
},
{
"title": "Mobile App Features Discussion",
"summary": "Team reviewed proposed mobile app features for Q3",
"timestamp": 120.0,
"duration": 600.0,
"webvtt": "WEBVTT\n\n00:02:00.000 --> 00:02:10.000\n<v Speaker 2>Let's talk about the mobile app..."
}
],
"participants": [
{
"id": "participant-1",
"name": "John Doe",
"speaker": "Speaker 1"
},
{
"id": "participant-2",
"name": "Jane Smith",
"speaker": "Speaker 2"
}
],
"source_language": "en",
"target_language": "en",
"status": "completed",
"frontend_url": "https://app.reflector.com/transcripts/abc-123-def-456"
},
"room": {
"id": "room-789",
"name": "Product Team Room"
}
}
```
### `test` Event
```json
{
"event": "test",
"event_id": "test.2025-08-27T12:34:56.789012Z",
"timestamp": "2025-08-27T12:34:56.789012Z",
"message": "This is a test webhook from Reflector",
"room": {
"id": "room-789",
"name": "Product Team Room"
}
}
```
## Retry Policy
Webhook deliveries are retried automatically to handle transient failures. When a delivery fails due to a server error or network issue, Reflector retries it multiple times over an extended period.
### Retry Mechanism
Reflector implements an exponential backoff strategy for webhook retries:
- **Initial retry delay**: 60 seconds after the first failure
- **Exponential backoff**: Each subsequent retry waits approximately twice as long as the previous one
- **Maximum retry interval**: 1 hour (backoff is capped at this duration)
- **Maximum retry attempts**: 30 attempts total
- **Total retry duration**: Retries continue for approximately 24 hours
### How Retries Work
When a webhook fails, Reflector will:
1. Wait 60 seconds, then retry (attempt #1)
2. If it fails again, wait ~2 minutes, then retry (attempt #2)
3. Continue doubling the wait time up to a maximum of 1 hour between attempts
4. Keep retrying at 1-hour intervals until successful or 30 attempts are exhausted
The `X-Webhook-Retry` header indicates the current retry attempt number (0 for the initial attempt, 1 for first retry, etc.), allowing your endpoint to track retry attempts.
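In other words, the wait before each retry doubles from a 60-second base and is capped at one hour. A small sketch of that schedule (illustrative only, not Reflector's actual implementation):
```python
def retry_delay(attempt: int) -> int:
    """Seconds to wait before retry `attempt` (1-based): 60s base, doubling, capped at 1h."""
    return min(60 * 2 ** (attempt - 1), 3600)

# First retries: 60s, 120s, 240s, ..., then capped at 3600s
print([retry_delay(n) for n in range(1, 8)])  # [60, 120, 240, 480, 960, 1920, 3600]
```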
### Retry Behavior by HTTP Status Code
| Status Code | Behavior |
|-------------|----------|
| 2xx (Success) | No retry, webhook marked as delivered |
| 4xx (Client Error) | No retry, request is considered permanently failed |
| 5xx (Server Error) | Automatic retry with exponential backoff |
| Network/Timeout Error | Automatic retry with exponential backoff |
**Important Notes:**
- Webhook requests time out after 30 seconds. If your endpoint takes longer to respond, the delivery is treated as a timeout error and retried.
- During the retry period (~24 hours), you may receive the same webhook multiple times if your endpoint experiences intermittent failures.
- There is no mechanism to manually retry failed webhooks after the retry period expires.
## Testing Webhooks
You can test your webhook configuration before processing transcripts:
```http
POST /v1/rooms/{room_id}/webhook/test
```
Response:
```json
{
"success": true,
"status_code": 200,
"message": "Webhook test successful",
"response_preview": "OK"
}
```
Or in case of failure:
```json
{
"success": false,
"error": "Webhook request timed out (10 seconds)"
}
```
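For example, triggering a test from Python (the base URL and bearer-token auth here are assumptions; adjust to your deployment):
```python
import requests

resp = requests.post(
    "https://reflector.example.com/v1/rooms/room-789/webhook/test",  # hypothetical host
    headers={"Authorization": "Bearer <your-api-token>"},  # assumed auth scheme
    timeout=30,
)
print(resp.json())
```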

View File

@@ -4,7 +4,8 @@ This repository hold an API for the GPU implementation of the Reflector API serv
and use [Modal.com](https://modal.com)
- `reflector_diarizer.py` - Diarization API
- `reflector_transcriber.py` - Transcription API
- `reflector_transcriber.py` - Transcription API (Whisper)
- `reflector_transcriber_parakeet.py` - Transcription API (NVIDIA Parakeet)
- `reflector_translator.py` - Translation API
## Modal.com deployment
@@ -19,6 +20,10 @@ $ modal deploy reflector_transcriber.py
...
└── 🔨 Created web => https://xxxx--reflector-transcriber-web.modal.run
$ modal deploy reflector_transcriber_parakeet.py
...
└── 🔨 Created web => https://xxxx--reflector-transcriber-parakeet-web.modal.run
$ modal deploy reflector_llm.py
...
└── 🔨 Created web => https://xxxx--reflector-llm-web.modal.run
@@ -68,6 +73,86 @@ Authorization: bearer <REFLECTOR_APIKEY>
### Transcription
#### Parakeet Transcriber (`reflector_transcriber_parakeet.py`)
NVIDIA Parakeet is a state-of-the-art ASR model optimized for real-time transcription with accurate word-level timestamps.
**GPU Configuration:**
- **A10G GPU** - Used for `/v1/audio/transcriptions` endpoint (small files, live transcription)
- Higher concurrency (max_inputs=10)
- Optimized for multiple small audio files
- Supports batch processing for efficiency
- **L40S GPU** - Used for `/v1/audio/transcriptions-from-url` endpoint (large files)
- Lower concurrency but more powerful processing
- Optimized for single large audio files
- VAD-based chunking for long-form audio
##### `/v1/audio/transcriptions` - Small file transcription
**request** (multipart/form-data)
- `file` or `files[]` - audio file(s) to transcribe
- `model` - model name (default: `nvidia/parakeet-tdt-0.6b-v2`)
- `language` - language code (default: `en`)
- `batch` - whether to use batch processing for multiple files (default: `false`)
**response**
```json
{
"text": "transcribed text",
"words": [
{"word": "hello", "start": 0.0, "end": 0.5},
{"word": "world", "start": 0.5, "end": 1.0}
],
"filename": "audio.mp3"
}
```
For multiple files with batch=true:
```json
{
"results": [
{
"filename": "audio1.mp3",
"text": "transcribed text",
"words": [...]
},
{
"filename": "audio2.mp3",
"text": "transcribed text",
"words": [...]
}
]
}
```
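As an illustration, a client call to this endpoint might look like the following sketch (the deployment URL is a placeholder; auth follows the `Authorization: bearer <REFLECTOR_APIKEY>` scheme described above):
```python
import requests

BASE_URL = "https://xxxx--reflector-transcriber-parakeet-web.modal.run"  # placeholder
HEADERS = {"Authorization": "bearer <REFLECTOR_APIKEY>"}

# Small-file endpoint: multipart upload of a single audio file
with open("audio.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/v1/audio/transcriptions",
        headers=HEADERS,
        files={"file": ("audio.mp3", f, "audio/mpeg")},
        data={"language": "en"},
    )
resp.raise_for_status()
print(resp.json()["text"])
```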
##### `/v1/audio/transcriptions-from-url` - Large file transcription
**request** (application/json)
```json
{
"audio_file_url": "https://example.com/audio.mp3",
"model": "nvidia/parakeet-tdt-0.6b-v2",
"language": "en",
"timestamp_offset": 0.0
}
```
**response**
```json
{
"text": "transcribed text from large file",
"words": [
{"word": "hello", "start": 0.0, "end": 0.5},
{"word": "world", "start": 0.5, "end": 1.0}
]
}
```
**Supported file types:** mp3, mp4, mpeg, mpga, m4a, wav, webm
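And a sketch of the large-file endpoint, posting JSON (same placeholder URL and auth assumptions as above):
```python
import requests

resp = requests.post(
    "https://xxxx--reflector-transcriber-parakeet-web.modal.run"
    "/v1/audio/transcriptions-from-url",  # placeholder deployment URL
    headers={"Authorization": "bearer <REFLECTOR_APIKEY>"},
    json={
        "audio_file_url": "https://example.com/audio.mp3",
        "language": "en",
        "timestamp_offset": 0.0,
    },
    timeout=900,  # large files can take a while to transcribe
)
resp.raise_for_status()
print(resp.json()["text"])
```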
#### Whisper Transcriber (`reflector_transcriber.py`)
`POST /transcribe`
**request** (multipart/form-data)

View File

@@ -4,14 +4,80 @@ Reflector GPU backend - diarizer
"""
import os
import uuid
from typing import Mapping, NewType
from urllib.parse import urlparse
import modal.gpu
from modal import App, Image, Secret, asgi_app, enter, method
from pydantic import BaseModel
import modal
PYANNOTE_MODEL_NAME: str = "pyannote/speaker-diarization-3.1"
MODEL_DIR = "/root/diarization_models"
app = App(name="reflector-diarizer")
UPLOADS_PATH = "/uploads"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
DiarizerUniqFilename = NewType("DiarizerUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
app = modal.App(name="reflector-diarizer")
# Volume for temporary file uploads
upload_volume = modal.Volume.from_name("diarizer-uploads", create_if_missing=True)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[DiarizerUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
print(f"Checking audio file at: {audio_file_url}")
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
print(f"Downloading audio file from: {audio_file_url}")
response = requests.get(audio_file_url, allow_redirects=True)
if response.status_code != 200:
print(f"Download failed with status {response.status_code}: {response.text}")
raise HTTPException(
status_code=response.status_code,
detail=f"Failed to download audio file: {response.status_code}",
)
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = DiarizerUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Writing file to: {file_path} (size: {len(response.content)} bytes)")
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
print(f"File saved as: {unique_filename}")
return unique_filename, audio_suffix
def migrate_cache_llm():
@@ -39,7 +105,7 @@ def download_pyannote_audio():
diarizer_image = (
Image.debian_slim(python_version="3.10.8")
modal.Image.debian_slim(python_version="3.10.8")
.pip_install(
"pyannote.audio==3.1.0",
"requests",
@@ -55,7 +121,8 @@ diarizer_image = (
"hf-transfer",
)
.run_function(
download_pyannote_audio, secrets=[Secret.from_name("my-huggingface-secret")]
download_pyannote_audio,
secrets=[modal.Secret.from_name("hf_token")],
)
.run_function(migrate_cache_llm)
.env(
@@ -70,44 +137,51 @@ diarizer_image = (
@app.cls(
gpu=modal.gpu.A100(size="40GB"),
gpu="A100",
timeout=60 * 30,
scaledown_window=60,
allow_concurrent_inputs=1,
image=diarizer_image,
volumes={UPLOADS_PATH: upload_volume},
enable_memory_snapshot=True,
experimental_options={"enable_gpu_snapshot": True},
secrets=[
modal.Secret.from_name("hf_token"),
],
)
@modal.concurrent(max_inputs=1)
class Diarizer:
@enter()
@modal.enter(snap=True)
def enter(self):
import torch
from pyannote.audio import Pipeline
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
print(f"Using device: {self.device}")
self.diarization_pipeline = Pipeline.from_pretrained(
PYANNOTE_MODEL_NAME, cache_dir=MODEL_DIR
PYANNOTE_MODEL_NAME,
cache_dir=MODEL_DIR,
use_auth_token=os.environ["HF_TOKEN"],
)
self.diarization_pipeline.to(torch.device(self.device))
@method()
def diarize(self, audio_data: str, audio_suffix: str, timestamp: float):
import tempfile
@modal.method()
def diarize(self, filename: str, timestamp: float = 0.0):
import torchaudio
with tempfile.NamedTemporaryFile("wb+", suffix=f".{audio_suffix}") as fp:
fp.write(audio_data)
upload_volume.reload()
print("Diarizing audio")
waveform, sample_rate = torchaudio.load(fp.name)
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
print(f"Diarizing audio from: {file_path}")
waveform, sample_rate = torchaudio.load(file_path)
diarization = self.diarization_pipeline(
{"waveform": waveform, "sample_rate": sample_rate}
)
words = []
for diarization_segment, _, speaker in diarization.itertracks(
yield_label=True
):
for diarization_segment, _, speaker in diarization.itertracks(yield_label=True):
words.append(
{
"start": round(timestamp + diarization_segment.start, 3),
@@ -127,17 +201,18 @@ class Diarizer:
@app.function(
timeout=60 * 10,
scaledown_window=60 * 3,
allow_concurrent_inputs=40,
secrets=[
Secret.from_name("reflector-gpu"),
modal.Secret.from_name("reflector-gpu"),
],
volumes={UPLOADS_PATH: upload_volume},
image=diarizer_image,
)
@asgi_app()
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
import requests
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
diarizerstub = Diarizer()
@@ -153,35 +228,26 @@ def web():
headers={"WWW-Authenticate": "Bearer"},
)
def validate_audio_file(audio_file_url: str):
# Check if the audio file exists
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(
status_code=response.status_code,
detail="The audio file does not exist.",
)
class DiarizationResponse(BaseModel):
result: dict
@app.post(
"/diarize", dependencies=[Depends(apikey_auth), Depends(validate_audio_file)]
)
def diarize(
audio_file_url: str, timestamp: float = 0.0
) -> HTTPException | DiarizationResponse:
# Currently the uploaded files are in mp3 format
audio_suffix = "mp3"
print("Downloading audio file")
response = requests.get(audio_file_url, allow_redirects=True)
print("Audio file downloaded successfully")
@app.post("/diarize", dependencies=[Depends(apikey_auth)])
def diarize(audio_file_url: str, timestamp: float = 0.0) -> DiarizationResponse:
unique_filename, audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = diarizerstub.diarize.spawn(
audio_data=response.content, audio_suffix=audio_suffix, timestamp=timestamp
filename=unique_filename, timestamp=timestamp
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Deleting file: {file_path}")
os.remove(file_path)
upload_volume.commit()
except Exception as e:
print(f"Error cleaning up {unique_filename}: {e}")
return app

View File

@@ -1,41 +1,78 @@
import os
import tempfile
import sys
import threading
import uuid
from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
from urllib.parse import urlparse
import modal
from pydantic import BaseModel
MODELS_DIR = "/models"
MODEL_NAME = "large-v2"
MODEL_COMPUTE_TYPE: str = "float16"
MODEL_NUM_WORKERS: int = 1
MINUTES = 60 # seconds
SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/models"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
VAD_CONFIG = {
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
volume = modal.Volume.from_name("models", create_if_missing=True)
WhisperUniqFilename = NewType("WhisperUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
app = modal.App("reflector-transcriber")
model_cache = modal.Volume.from_name("models", create_if_missing=True)
upload_volume = modal.Volume.from_name("whisper-uploads", create_if_missing=True)
class TimeSegment(NamedTuple):
"""Represents a time segment with start and end times."""
start: float
end: float
class AudioSegment(NamedTuple):
"""Represents an audio segment with timing and audio data."""
start: float
end: float
audio: any
class TranscriptResult(NamedTuple):
"""Represents a transcription result with text and word timings."""
text: str
words: list["WordTiming"]
class WordTiming(TypedDict):
"""Represents a word with its timing information."""
word: str
start: float
end: float
def download_model():
from faster_whisper import download_model
volume.reload()
model_cache.reload()
download_model(MODEL_NAME, cache_dir=MODELS_DIR)
download_model(MODEL_NAME, cache_dir=CACHE_PATH)
volume.commit()
model_cache.commit()
image = (
modal.Image.debian_slim(python_version="3.12")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
)
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
@@ -45,19 +82,98 @@ image = (
),
}
)
.run_function(download_model, volumes={MODELS_DIR: volume})
.apt_install("ffmpeg")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
"fastapi==0.115.12",
"requests",
"librosa==0.10.1",
"numpy<2",
"silero-vad==5.1.0",
)
.run_function(download_model, volumes={CACHE_PATH: model_cache})
)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[WhisperUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = WhisperUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
"""Add 0.5s of silence if audio is shorter than the silence_padding window.
Whisper does not require this strictly, but aligning behavior with Parakeet
avoids edge-case crashes on extremely short inputs and makes comparisons easier.
"""
import numpy as np
audio_duration = len(audio_array) / sample_rate
if audio_duration < VAD_CONFIG["silence_padding"]:
silence_samples = int(sample_rate * VAD_CONFIG["silence_padding"])
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
@app.cls(
gpu="A10G",
timeout=5 * MINUTES,
scaledown_window=5 * MINUTES,
allow_concurrent_inputs=6,
image=image,
volumes={MODELS_DIR: volume},
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
class Transcriber:
@modal.concurrent(max_inputs=10)
class TranscriberWhisperLive:
"""Live transcriber class for small audio segments (A10G).
Mirrors the Parakeet live class API but uses Faster-Whisper under the hood.
"""
@modal.enter()
def enter(self):
import faster_whisper
@@ -71,23 +187,28 @@ class Transcriber:
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=MODELS_DIR,
download_root=CACHE_PATH,
local_files_only=True,
)
print(f"Model is on device: {self.device}")
@modal.method()
def transcribe_segment(
self,
audio_data: str,
audio_suffix: str,
language: str,
filename: str,
language: str = "en",
):
with tempfile.NamedTemporaryFile("wb+", suffix=f".{audio_suffix}") as fp:
fp.write(audio_data)
"""Transcribe a single uploaded audio file by filename."""
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
fp.name,
file_path,
language=language,
beam_size=5,
word_timestamps=True,
@@ -96,66 +217,392 @@ class Transcriber:
)
segments = list(segments)
text = "".join(segment.text for segment in segments)
text = "".join(segment.text for segment in segments).strip()
words = [
{"word": word.word, "start": word.start, "end": word.end}
{
"word": word.word,
"start": round(float(word.start), 2),
"end": round(float(word.end), 2),
}
for segment in segments
for word in segment.words
]
return {"text": text, "words": words}
@modal.method()
def transcribe_batch(
self,
filenames: list[str],
language: str = "en",
):
"""Transcribe multiple uploaded audio files and return per-file results."""
upload_volume.reload()
results = []
for filename in filenames:
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"Batch file not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
file_path,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start), 2),
"end": round(float(w.end), 2),
}
for seg in segments
for w in seg.words
]
results.append(
{
"filename": filename,
"text": text,
"words": words,
}
)
return results
@app.cls(
gpu="L40S",
timeout=15 * MINUTES,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
class TranscriberWhisperFile:
"""File transcriber for larger/longer audio, using VAD-driven batching (L40S)."""
@modal.enter()
def enter(self):
import faster_whisper
import torch
from silero_vad import load_silero_vad
self.lock = threading.Lock()
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=CACHE_PATH,
local_files_only=True,
)
self.vad_model = load_silero_vad(onnx=False)
@modal.method()
def transcribe_segment(
self, filename: str, timestamp_offset: float = 0.0, language: str = "en"
):
import librosa
import numpy as np
from silero_vad import VADIterator
def vad_segments(
audio_array,
sample_rate: int = SAMPLERATE,
window_size: int = VAD_CONFIG["window_size"],
) -> Generator[TimeSegment, None, None]:
"""Generate speech segments as TimeSegment using Silero VAD."""
iterator = VADIterator(self.vad_model, sampling_rate=sample_rate)
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech = iterator(chunk)
if not speech:
continue
if "start" in speech:
start = speech["start"]
continue
if "end" in speech and start is not None:
end = speech["end"]
yield TimeSegment(
start / float(SAMPLERATE), end / float(SAMPLERATE)
)
start = None
iterator.reset_states()
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array, _sr = librosa.load(file_path, sr=SAMPLERATE, mono=True)
# Batch segments into ~30s windows by merging contiguous VAD segments
merged_batches: list[TimeSegment] = []
batch_start = None
batch_end = None
max_duration = VAD_CONFIG["batch_max_duration"]
for segment in vad_segments(audio_array):
seg_start, seg_end = segment.start, segment.end
if batch_start is None:
batch_start, batch_end = seg_start, seg_end
continue
if seg_end - batch_start <= max_duration:
batch_end = seg_end
else:
merged_batches.append(TimeSegment(batch_start, batch_end))
batch_start, batch_end = seg_start, seg_end
if batch_start is not None and batch_end is not None:
merged_batches.append(TimeSegment(batch_start, batch_end))
all_text = []
all_words = []
for segment in merged_batches:
start_time, end_time = segment.start, segment.end
s_idx = int(start_time * SAMPLERATE)
e_idx = int(end_time * SAMPLERATE)
segment = audio_array[s_idx:e_idx]
segment = pad_audio(segment, SAMPLERATE)
with self.lock:
segments, _ = self.model.transcribe(
segment,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start) + start_time + timestamp_offset, 2),
"end": round(float(w.end) + start_time + timestamp_offset, 2),
}
for seg in segments
for w in seg.words
]
if text:
all_text.append(text)
all_words.extend(words)
return {"text": " ".join(all_text), "words": all_words}
def detect_audio_format(url: str, headers: dict) -> str:
from urllib.parse import urlparse
from fastapi import HTTPException
url_path = urlparse(url).path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return ext
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return "mp3"
if "audio/wav" in content_type:
return "wav"
if "audio/mp4" in content_type:
return "mp4"
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format for URL. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
def download_audio_to_volume(audio_file_url: str) -> tuple[str, str]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
@app.function(
scaledown_window=60,
timeout=60,
allow_concurrent_inputs=40,
timeout=600,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={MODELS_DIR: volume},
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
image=image,
)
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
from fastapi import Body, Depends, FastAPI, HTTPException, UploadFile, status
from fastapi import (
Body,
Depends,
FastAPI,
Form,
HTTPException,
UploadFile,
status,
)
from fastapi.security import OAuth2PasswordBearer
from typing_extensions import Annotated
transcriber = Transcriber()
transcriber_live = TranscriberWhisperLive()
transcriber_file = TranscriberWhisperFile()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
supported_file_types = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(BaseModel):
result: dict
class TranscriptResponse(dict):
pass
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile,
model: str = "whisper-1",
language: Annotated[str, Body(...)] = "en",
) -> TranscriptResponse:
audio_data = file.file.read()
audio_suffix = file.filename.split(".")[-1]
assert audio_suffix in supported_file_types
file: UploadFile = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
func = transcriber.transcribe_segment.spawn(
audio_data=audio_data,
audio_suffix=audio_suffix,
upload_files = [file] if file else files
uploaded_filenames: list[str] = []
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1]
if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_filenames.append(unique_filename)
upload_volume.commit()
try:
if batch and len(upload_files) > 1:
func = transcriber_live.transcribe_batch.spawn(
filenames=uploaded_filenames,
language=language,
)
results = func.get()
return {"results": results}
results = []
for filename in uploaded_filenames:
func = transcriber_live.transcribe_segment.spawn(
filename=filename,
language=language,
)
result = func.get()
result["filename"] = filename
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
finally:
for filename in uploaded_filenames:
try:
file_path = f"{UPLOADS_PATH}/{filename}"
os.remove(file_path)
except Exception:
pass
upload_volume.commit()
@app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
def transcribe_from_url(
audio_file_url: str = Body(
..., description="URL of the audio file to transcribe"
),
model: str = Body(MODEL_NAME),
language: str = Body("en"),
timestamp_offset: float = Body(0.0),
):
unique_filename, _audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = transcriber_file.transcribe_segment.spawn(
filename=unique_filename,
timestamp_offset=timestamp_offset,
language=language,
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
os.remove(file_path)
upload_volume.commit()
except Exception:
pass
return app
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()

View File

@@ -0,0 +1,658 @@
import logging
import os
import sys
import threading
import uuid
from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
from urllib.parse import urlparse
import modal
MODEL_NAME = "nvidia/parakeet-tdt-0.6b-v2"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/cache"
VAD_CONFIG = {
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
ParakeetUniqFilename = NewType("ParakeetUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
class TimeSegment(NamedTuple):
"""Represents a time segment with start and end times."""
start: float
end: float
class AudioSegment(NamedTuple):
"""Represents an audio segment with timing and audio data."""
start: float
end: float
audio: any
class TranscriptResult(NamedTuple):
"""Represents a transcription result with text and word timings."""
text: str
words: list["WordTiming"]
class WordTiming(TypedDict):
"""Represents a word with its timing information."""
word: str
start: float
end: float
app = modal.App("reflector-transcriber-parakeet")
# Volume for caching model weights
model_cache = modal.Volume.from_name("parakeet-model-cache", create_if_missing=True)
# Volume for temporary file uploads
upload_volume = modal.Volume.from_name("parakeet-uploads", create_if_missing=True)
image = (
modal.Image.from_registry(
"nvidia/cuda:12.8.0-cudnn-devel-ubuntu22.04", add_python="3.12"
)
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
"HF_HOME": "/cache",
"DEBIAN_FRONTEND": "noninteractive",
"CXX": "g++",
"CC": "g++",
}
)
.apt_install("ffmpeg")
.pip_install(
"hf_transfer==0.1.9",
"huggingface_hub[hf-xet]==0.31.2",
"nemo_toolkit[asr]==2.3.0",
"cuda-python==12.8.0",
"fastapi==0.115.12",
"numpy<2",
"librosa==0.10.1",
"requests",
"silero-vad==5.1.0",
"torch",
)
.entrypoint([])  # silence chatty container logs on start
)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[ParakeetUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = ParakeetUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
"""Add 0.5 seconds of silence if audio is less than 500ms.
This is a workaround for a Parakeet bug where very short audio (<500ms) causes:
ValueError: `char_offsets`: [] and `processed_tokens`: [157, 834, 834, 841]
have to be of the same length
See: https://github.com/NVIDIA/NeMo/issues/8451
"""
import numpy as np
audio_duration = len(audio_array) / sample_rate
if audio_duration < 0.5:
silence_samples = int(sample_rate * 0.5)
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
@app.cls(
gpu="A10G",
timeout=600,
scaledown_window=300,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
enable_memory_snapshot=True,
experimental_options={"enable_gpu_snapshot": True},
)
@modal.concurrent(max_inputs=10)
class TranscriberParakeetLive:
@modal.enter(snap=True)
def enter(self):
import nemo.collections.asr as nemo_asr
logging.getLogger("nemo_logger").setLevel(logging.CRITICAL)
self.lock = threading.Lock()
self.model = nemo_asr.models.ASRModel.from_pretrained(model_name=MODEL_NAME)
device = next(self.model.parameters()).device
print(f"Model is on device: {device}")
@modal.method()
def transcribe_segment(
self,
filename: str,
):
import librosa
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
padded_audio = pad_audio(audio_array, sample_rate)
with self.lock:
with NoStdStreams():
(output,) = self.model.transcribe([padded_audio], timestamps=True)
text = output.text.strip()
words: list[WordTiming] = [
WordTiming(
# XXX the space added here is to match the output of whisper
# whisper adds a space to each word, while parakeet doesn't
word=word_info["word"] + " ",
start=round(word_info["start"], 2),
end=round(word_info["end"], 2),
)
for word_info in output.timestamp["word"]
]
return {"text": text, "words": words}
@modal.method()
def transcribe_batch(
self,
filenames: list[str],
):
import librosa
upload_volume.reload()
results = []
audio_arrays = []
# Load all audio files with padding
for filename in filenames:
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"Batch file not found: {file_path}")
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
padded_audio = pad_audio(audio_array, sample_rate)
audio_arrays.append(padded_audio)
with self.lock:
with NoStdStreams():
outputs = self.model.transcribe(audio_arrays, timestamps=True)
# Process results for each file
for i, (filename, output) in enumerate(zip(filenames, outputs)):
text = output.text.strip()
words: list[WordTiming] = [
WordTiming(
word=word_info["word"] + " ",
start=round(word_info["start"], 2),
end=round(word_info["end"], 2),
)
for word_info in output.timestamp["word"]
]
results.append(
{
"filename": filename,
"text": text,
"words": words,
}
)
return results
# L40S class for file transcription (bigger files)
@app.cls(
gpu="L40S",
timeout=900,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
enable_memory_snapshot=True,
experimental_options={"enable_gpu_snapshot": True},
)
class TranscriberParakeetFile:
@modal.enter(snap=True)
def enter(self):
import nemo.collections.asr as nemo_asr
import torch
from silero_vad import load_silero_vad
logging.getLogger("nemo_logger").setLevel(logging.CRITICAL)
self.model = nemo_asr.models.ASRModel.from_pretrained(model_name=MODEL_NAME)
device = next(self.model.parameters()).device
print(f"Model is on device: {device}")
torch.set_num_threads(1)
self.vad_model = load_silero_vad(onnx=False)
print("Silero VAD initialized")
@modal.method()
def transcribe_segment(
self,
filename: str,
timestamp_offset: float = 0.0,
):
import librosa
import numpy as np
from silero_vad import VADIterator
def load_and_convert_audio(file_path):
audio_array, sample_rate = librosa.load(file_path, sr=SAMPLERATE, mono=True)
return audio_array
def vad_segment_generator(
audio_array,
) -> Generator[TimeSegment, None, None]:
"""Generate speech segments using VAD with start/end sample indices"""
vad_iterator = VADIterator(self.vad_model, sampling_rate=SAMPLERATE)
window_size = VAD_CONFIG["window_size"]
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech_dict = vad_iterator(chunk)
if not speech_dict:
continue
if "start" in speech_dict:
start = speech_dict["start"]
continue
if "end" in speech_dict and start is not None:
end = speech_dict["end"]
start_time = start / float(SAMPLERATE)
end_time = end / float(SAMPLERATE)
yield TimeSegment(start_time, end_time)
start = None
vad_iterator.reset_states()
def batch_speech_segments(
segments: Generator[TimeSegment, None, None], max_duration: int
) -> Generator[TimeSegment, None, None]:
"""
Input segments:
[0-2] [3-5] [6-8] [10-11] [12-15] [17-19] [20-22]
↓ (max_duration=10)
Output batches:
[0-8] [10-19] [20-22]
Note: silences are kept for better transcription; the previous implementation
passed segments separately, but the output was less accurate.
"""
batch_start_time = None
batch_end_time = None
for segment in segments:
start_time, end_time = segment.start, segment.end
if batch_start_time is None or batch_end_time is None:
batch_start_time = start_time
batch_end_time = end_time
continue
total_duration = end_time - batch_start_time
if total_duration <= max_duration:
batch_end_time = end_time
continue
yield TimeSegment(batch_start_time, batch_end_time)
batch_start_time = start_time
batch_end_time = end_time
if batch_start_time is None or batch_end_time is None:
return
yield TimeSegment(batch_start_time, batch_end_time)
def batch_segment_to_audio_segment(
segments: Generator[TimeSegment, None, None],
audio_array,
) -> Generator[AudioSegment, None, None]:
"""Extract audio segments and apply padding for Parakeet compatibility.
Uses pad_audio to ensure segments are at least 0.5s long, preventing
Parakeet crashes. This padding may cause slight timing overlaps between
segments, which are corrected by enforce_word_timing_constraints.
"""
for segment in segments:
start_time, end_time = segment.start, segment.end
start_sample = int(start_time * SAMPLERATE)
end_sample = int(end_time * SAMPLERATE)
audio_segment = audio_array[start_sample:end_sample]
padded_segment = pad_audio(audio_segment, SAMPLERATE)
yield AudioSegment(start_time, end_time, padded_segment)
def transcribe_batch(model, audio_segments: list) -> list:
with NoStdStreams():
outputs = model.transcribe(audio_segments, timestamps=True)
return outputs
def enforce_word_timing_constraints(
words: list[WordTiming],
) -> list[WordTiming]:
"""Enforce that word end times don't exceed the start time of the next word.
Due to silence padding added in batch_segment_to_audio_segment for better
transcription accuracy, word timings from different segments may overlap.
This function ensures there are no overlaps by adjusting end times.
"""
if len(words) <= 1:
return words
enforced_words = []
for i, word in enumerate(words):
enforced_word = word.copy()
if i < len(words) - 1:
next_start = words[i + 1]["start"]
if enforced_word["end"] > next_start:
enforced_word["end"] = next_start
enforced_words.append(enforced_word)
return enforced_words
def emit_results(
results: list,
segments_info: list[AudioSegment],
) -> Generator[TranscriptResult, None, None]:
"""Yield transcribed text and word timings from model output, adjusting timestamps to absolute positions."""
for i, (output, segment) in enumerate(zip(results, segments_info)):
start_time, end_time = segment.start, segment.end
text = output.text.strip()
words: list[WordTiming] = [
WordTiming(
word=word_info["word"] + " ",
start=round(
word_info["start"] + start_time + timestamp_offset, 2
),
end=round(word_info["end"] + start_time + timestamp_offset, 2),
)
for word_info in output.timestamp["word"]
]
yield TranscriptResult(text, words)
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array = load_and_convert_audio(file_path)
total_duration = len(audio_array) / float(SAMPLERATE)
all_text_parts: list[str] = []
all_words: list[WordTiming] = []
raw_segments = vad_segment_generator(audio_array)
speech_segments = batch_speech_segments(
raw_segments,
VAD_CONFIG["batch_max_duration"],
)
audio_segments = batch_segment_to_audio_segment(speech_segments, audio_array)
for batch in audio_segments:
audio_segment = batch.audio
results = transcribe_batch(self.model, [audio_segment])
for result in emit_results(
results,
[batch],
):
if not result.text:
continue
all_text_parts.append(result.text)
all_words.extend(result.words)
all_words = enforce_word_timing_constraints(all_words)
combined_text = " ".join(all_text_parts)
return {"text": combined_text, "words": all_words}
@app.function(
scaledown_window=60,
timeout=600,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
image=image,
)
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
import os
import uuid
from fastapi import (
Body,
Depends,
FastAPI,
Form,
HTTPException,
UploadFile,
status,
)
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
transcriber_live = TranscriberParakeetLive()
transcriber_file = TranscriberParakeetFile()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(BaseModel):
result: dict
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
# Parakeet only supports English
if language != "en":
raise HTTPException(
status_code=400,
detail=f"Parakeet model only supports English. Got language='{language}'",
)
# Handle both single file and multiple files
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
upload_files = [file] if file else files
# Upload files to volume
uploaded_filenames = []
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1]
assert audio_suffix in SUPPORTED_FILE_EXTENSIONS
# Generate unique filename
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Writing file to: {file_path}")
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_filenames.append(unique_filename)
upload_volume.commit()
try:
# Use A10G live transcriber for per-file transcription
if batch and len(upload_files) > 1:
# Use batch transcription
func = transcriber_live.transcribe_batch.spawn(
filenames=uploaded_filenames,
)
results = func.get()
return {"results": results}
# Per-file transcription
results = []
for filename in uploaded_filenames:
func = transcriber_live.transcribe_segment.spawn(
filename=filename,
)
result = func.get()
result["filename"] = filename
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
finally:
for filename in uploaded_filenames:
try:
file_path = f"{UPLOADS_PATH}/{filename}"
print(f"Deleting file: {file_path}")
os.remove(file_path)
except Exception as e:
print(f"Error deleting {filename}: {e}")
upload_volume.commit()
@app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
def transcribe_from_url(
audio_file_url: str = Body(
..., description="URL of the audio file to transcribe"
),
model: str = Body(MODEL_NAME),
language: str = Body("en", description="Language code (only 'en' supported)"),
timestamp_offset: float = Body(0.0),
):
# Parakeet only supports English
if language != "en":
raise HTTPException(
status_code=400,
detail=f"Parakeet model only supports English. Got language='{language}'",
)
unique_filename, audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = transcriber_file.transcribe_segment.spawn(
filename=unique_filename,
timestamp_offset=timestamp_offset,
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
print(f"Deleting file: {file_path}")
os.remove(file_path)
upload_volume.commit()
except Exception as e:
print(f"Error cleaning up {unique_filename}: {e}")
return app
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()

View File

@@ -1 +1,3 @@
Generic single-database configuration.
Both data migrations and schema migrations must live in this migrations directory.

View File

@@ -0,0 +1,36 @@
"""Add webhook fields to rooms
Revision ID: 0194f65cd6d3
Revises: 5a8907fd1d78
Create Date: 2025-08-27 09:03:19.610995
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0194f65cd6d3"
down_revision: Union[str, None] = "5a8907fd1d78"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(sa.Column("webhook_url", sa.String(), nullable=True))
batch_op.add_column(sa.Column("webhook_secret", sa.String(), nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_column("webhook_secret")
batch_op.drop_column("webhook_url")
# ### end Alembic commands ###

View File

@@ -0,0 +1,64 @@
"""add_long_summary_to_search_vector
Revision ID: 0ab2d7ffaa16
Revises: b1c33bd09963
Create Date: 2025-08-15 13:27:52.680211
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0ab2d7ffaa16"
down_revision: Union[str, None] = "b1c33bd09963"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Drop the existing search vector column and index
op.drop_index("idx_transcript_search_vector_en", table_name="transcript")
op.drop_column("transcript", "search_vector_en")
# Recreate the search vector column with long_summary included
op.execute("""
ALTER TABLE transcript ADD COLUMN search_vector_en tsvector
GENERATED ALWAYS AS (
setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
setweight(to_tsvector('english', coalesce(long_summary, '')), 'B') ||
setweight(to_tsvector('english', coalesce(webvtt, '')), 'C')
) STORED
""")
# Recreate the GIN index for the search vector
op.create_index(
"idx_transcript_search_vector_en",
"transcript",
["search_vector_en"],
postgresql_using="gin",
)
def downgrade() -> None:
# Drop the updated search vector column and index
op.drop_index("idx_transcript_search_vector_en", table_name="transcript")
op.drop_column("transcript", "search_vector_en")
# Recreate the original search vector column without long_summary
op.execute("""
ALTER TABLE transcript ADD COLUMN search_vector_en tsvector
GENERATED ALWAYS AS (
setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
setweight(to_tsvector('english', coalesce(webvtt, '')), 'B')
) STORED
""")
# Recreate the GIN index for the search vector
op.create_index(
"idx_transcript_search_vector_en",
"transcript",
["search_vector_en"],
postgresql_using="gin",
)

View File

@@ -0,0 +1,25 @@
"""add_webvtt_field_to_transcript
Revision ID: 0bc0f3ff0111
Revises: b7df9609542c
Create Date: 2025-08-05 19:36:41.740957
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
revision: str = "0bc0f3ff0111"
down_revision: Union[str, None] = "b7df9609542c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column("transcript", sa.Column("webvtt", sa.Text(), nullable=True))
def downgrade() -> None:
op.drop_column("transcript", "webvtt")

View File

@@ -0,0 +1,36 @@
"""remove user_id from meeting table
Revision ID: 0ce521cda2ee
Revises: 6dec9fb5b46c
Create Date: 2025-09-10 12:40:55.688899
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0ce521cda2ee"
down_revision: Union[str, None] = "6dec9fb5b46c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True)
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,46 @@
"""add_full_text_search
Revision ID: 116b2f287eab
Revises: 0bc0f3ff0111
Create Date: 2025-08-07 11:27:38.473517
"""
from typing import Sequence, Union
from alembic import op
revision: str = "116b2f287eab"
down_revision: Union[str, None] = "0bc0f3ff0111"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
conn = op.get_bind()
if conn.dialect.name != "postgresql":
return
op.execute("""
ALTER TABLE transcript ADD COLUMN search_vector_en tsvector
GENERATED ALWAYS AS (
setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
setweight(to_tsvector('english', coalesce(webvtt, '')), 'B')
) STORED
""")
op.create_index(
"idx_transcript_search_vector_en",
"transcript",
["search_vector_en"],
postgresql_using="gin",
)
def downgrade() -> None:
conn = op.get_bind()
if conn.dialect.name != "postgresql":
return
op.drop_index("idx_transcript_search_vector_en", table_name="transcript")
op.drop_column("transcript", "search_vector_en")

View File

@@ -0,0 +1,38 @@
"""Add events column to meetings table
Revision ID: 2890b5104577
Revises: 6e6ea8e607c5
Create Date: 2025-09-02 17:51:41.620777
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "2890b5104577"
down_revision: Union[str, None] = "6e6ea8e607c5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"events", sa.JSON(), server_default=sa.text("'[]'"), nullable=False
)
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("events")
# ### end Alembic commands ###

View File

@@ -0,0 +1,32 @@
"""clean up orphaned room_id references in meeting table
Revision ID: 2ae3db106d4e
Revises: def1b5867d4c
Create Date: 2025-09-11 10:35:15.759967
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "2ae3db106d4e"
down_revision: Union[str, None] = "def1b5867d4c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Set room_id to NULL for meetings that reference non-existent rooms
op.execute("""
UPDATE meeting
SET room_id = NULL
WHERE room_id IS NOT NULL
AND room_id NOT IN (SELECT id FROM room WHERE id IS NOT NULL)
""")
def downgrade() -> None:
# Cannot restore orphaned references - no operation needed
pass

View File

@@ -0,0 +1,50 @@
"""add cascade delete to meeting consent foreign key
Revision ID: 5a8907fd1d78
Revises: 0ab2d7ffaa16
Create Date: 2025-08-26 17:26:50.945491
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "5a8907fd1d78"
down_revision: Union[str, None] = "0ab2d7ffaa16"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.drop_constraint(
batch_op.f("meeting_consent_meeting_id_fkey"), type_="foreignkey"
)
batch_op.create_foreign_key(
batch_op.f("meeting_consent_meeting_id_fkey"),
"meeting",
["meeting_id"],
["id"],
ondelete="CASCADE",
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.drop_constraint(
batch_op.f("meeting_consent_meeting_id_fkey"), type_="foreignkey"
)
batch_op.create_foreign_key(
batch_op.f("meeting_consent_meeting_id_fkey"),
"meeting",
["meeting_id"],
["id"],
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,28 @@
"""webhook url and secret null by default
Revision ID: 61882a919591
Revises: 0194f65cd6d3
Create Date: 2025-08-29 11:46:36.738091
"""
from typing import Sequence, Union
# revision identifiers, used by Alembic.
revision: str = "61882a919591"
down_revision: Union[str, None] = "0194f65cd6d3"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###

View File

@@ -32,7 +32,7 @@ def upgrade() -> None:
sa.Column("user_id", sa.String(), nullable=True),
sa.Column("room_id", sa.String(), nullable=True),
sa.Column(
"is_locked", sa.Boolean(), server_default=sa.text("0"), nullable=False
"is_locked", sa.Boolean(), server_default=sa.text("false"), nullable=False
),
sa.Column("room_mode", sa.String(), server_default="normal", nullable=False),
sa.Column(
@@ -53,12 +53,15 @@ def upgrade() -> None:
sa.Column("user_id", sa.String(), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.Column(
"zulip_auto_post", sa.Boolean(), server_default=sa.text("0"), nullable=False
"zulip_auto_post",
sa.Boolean(),
server_default=sa.text("false"),
nullable=False,
),
sa.Column("zulip_stream", sa.String(), nullable=True),
sa.Column("zulip_topic", sa.String(), nullable=True),
sa.Column(
"is_locked", sa.Boolean(), server_default=sa.text("0"), nullable=False
"is_locked", sa.Boolean(), server_default=sa.text("false"), nullable=False
),
sa.Column("room_mode", sa.String(), server_default="normal", nullable=False),
sa.Column(

View File

@@ -0,0 +1,38 @@
"""make meeting room_id required and add foreign key
Revision ID: 6dec9fb5b46c
Revises: 61882a919591
Create Date: 2025-09-10 10:47:06.006819
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "6dec9fb5b46c"
down_revision: Union[str, None] = "61882a919591"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=False)
batch_op.create_foreign_key(
None, "room", ["room_id"], ["id"], ondelete="CASCADE"
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_constraint("meeting_room_id_fkey", type_="foreignkey")
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=True)
# ### end Alembic commands ###

View File

@@ -0,0 +1,44 @@
"""Add VideoPlatform enum for rooms and meetings
Revision ID: 6e6ea8e607c5
Revises: 61882a919591
Create Date: 2025-09-02 17:33:21.022214
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "6e6ea8e607c5"
down_revision: Union[str, None] = "61882a919591"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column("platform", sa.String(), server_default="whereby", nullable=False)
)
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(
sa.Column("platform", sa.String(), server_default="whereby", nullable=False)
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_column("platform")
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("platform")
# ### end Alembic commands ###

View File

@@ -20,11 +20,14 @@ depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
sourcekind_enum = sa.Enum("room", "live", "file", name="sourcekind")
sourcekind_enum.create(op.get_bind())
op.add_column(
"transcript",
sa.Column(
"source_kind",
sa.Enum("ROOM", "LIVE", "FILE", name="sourcekind"),
sourcekind_enum,
nullable=True,
),
)
@@ -43,6 +46,8 @@ def upgrade() -> None:
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column("transcript", "source_kind")
sourcekind_enum = sa.Enum(name="sourcekind")
sourcekind_enum.drop(op.get_bind())
# ### end Alembic commands ###

View File

@@ -0,0 +1,106 @@
"""populate_webvtt_from_topics
Revision ID: 8120ebc75366
Revises: 116b2f287eab
Create Date: 2025-08-11 19:11:01.316947
"""
import json
from typing import Sequence, Union
from alembic import op
from sqlalchemy import text
# revision identifiers, used by Alembic.
revision: str = "8120ebc75366"
down_revision: Union[str, None] = "116b2f287eab"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def topics_to_webvtt(topics):
"""Convert topics list to WebVTT format string."""
if not topics:
return None
lines = ["WEBVTT", ""]
for topic in topics:
start_time = format_timestamp(topic.get("start"))
end_time = format_timestamp(topic.get("end"))
text = topic.get("text", "").strip()
if start_time and end_time and text:
lines.append(f"{start_time} --> {end_time}")
lines.append(text)
lines.append("")
return "\n".join(lines).strip()
def format_timestamp(seconds):
"""Format seconds to WebVTT timestamp format (HH:MM:SS.mmm)."""
if seconds is None:
return None
hours = int(seconds // 3600)
minutes = int((seconds % 3600) // 60)
secs = seconds % 60
return f"{hours:02d}:{minutes:02d}:{secs:06.3f}"
def upgrade() -> None:
"""Populate WebVTT field for all transcripts with topics."""
# Get connection
connection = op.get_bind()
# Query all transcripts with topics
result = connection.execute(
text("SELECT id, topics FROM transcript WHERE topics IS NOT NULL")
)
rows = result.fetchall()
print(f"Found {len(rows)} transcripts with topics")
updated_count = 0
error_count = 0
for row in rows:
transcript_id = row[0]
topics_data = row[1]
if not topics_data:
continue
try:
# Parse JSON if it's a string
if isinstance(topics_data, str):
topics_data = json.loads(topics_data)
# Convert topics to WebVTT format
webvtt_content = topics_to_webvtt(topics_data)
if webvtt_content:
# Update the webvtt field
connection.execute(
text("UPDATE transcript SET webvtt = :webvtt WHERE id = :id"),
{"webvtt": webvtt_content, "id": transcript_id},
)
updated_count += 1
print(f"✓ Updated transcript {transcript_id}")
except Exception as e:
error_count += 1
print(f"✗ Error updating transcript {transcript_id}: {e}")
print("\nMigration complete!")
print(f" Updated: {updated_count}")
print(f" Errors: {error_count}")
def downgrade() -> None:
"""Clear WebVTT field for all transcripts."""
op.execute(text("UPDATE transcript SET webvtt = NULL"))
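For reference, a worked example of what topics_to_webvtt above emits for a single topic (sample values are illustrative):
topics = [{"start": 0.0, "end": 63.5, "text": "Intro"}]
print(topics_to_webvtt(topics))
# WEBVTT
#
# 00:00:00.000 --> 00:01:03.500
# Intro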

View File

@@ -22,7 +22,7 @@ def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.execute(
"UPDATE transcript SET events = "
'REPLACE(events, \'"event": "SUMMARY"\', \'"event": "LONG_SUMMARY"\');'
'REPLACE(events::text, \'"event": "SUMMARY"\', \'"event": "LONG_SUMMARY"\')::json;'
)
op.alter_column("transcript", "summary", new_column_name="long_summary")
op.add_column("transcript", sa.Column("title", sa.String(), nullable=True))
@@ -34,7 +34,7 @@ def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.execute(
"UPDATE transcript SET events = "
'REPLACE(events, \'"event": "LONG_SUMMARY"\', \'"event": "SUMMARY"\');'
'REPLACE(events::text, \'"event": "LONG_SUMMARY"\', \'"event": "SUMMARY"\')::json;'
)
with op.batch_alter_table("transcript", schema=None) as batch_op:
batch_op.alter_column("long_summary", nullable=True, new_column_name="summary")

View File

@@ -0,0 +1,121 @@
"""datetime timezone
Revision ID: 9f5c78d352d6
Revises: 8120ebc75366
Create Date: 2025-08-13 19:18:27.113593
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision: str = "9f5c78d352d6"
down_revision: Union[str, None] = "8120ebc75366"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column(
"start_date",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=True,
)
batch_op.alter_column(
"end_date",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=True,
)
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.alter_column(
"consent_timestamp",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=False,
)
with op.batch_alter_table("recording", schema=None) as batch_op:
batch_op.alter_column(
"recorded_at",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=False,
)
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.alter_column(
"created_at",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=False,
)
with op.batch_alter_table("transcript", schema=None) as batch_op:
batch_op.alter_column(
"created_at",
existing_type=postgresql.TIMESTAMP(),
type_=sa.DateTime(timezone=True),
existing_nullable=True,
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("transcript", schema=None) as batch_op:
batch_op.alter_column(
"created_at",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=True,
)
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.alter_column(
"created_at",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=False,
)
with op.batch_alter_table("recording", schema=None) as batch_op:
batch_op.alter_column(
"recorded_at",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=False,
)
with op.batch_alter_table("meeting_consent", schema=None) as batch_op:
batch_op.alter_column(
"consent_timestamp",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=False,
)
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column(
"end_date",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=True,
)
batch_op.alter_column(
"start_date",
existing_type=sa.DateTime(timezone=True),
type_=postgresql.TIMESTAMP(),
existing_nullable=True,
)
# ### end Alembic commands ###

View File

@@ -25,7 +25,7 @@ def upgrade() -> None:
sa.Column(
"is_shared",
sa.Boolean(),
server_default=sa.text("0"),
server_default=sa.text("false"),
nullable=False,
),
)

View File

@@ -23,7 +23,10 @@ def upgrade() -> None:
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"is_active", sa.Boolean(), server_default=sa.text("1"), nullable=False
"is_active",
sa.Boolean(),
server_default=sa.text("true"),
nullable=False,
)
)

View File

@@ -0,0 +1,41 @@
"""add_search_optimization_indexes
Revision ID: b1c33bd09963
Revises: 9f5c78d352d6
Create Date: 2025-08-14 17:26:02.117408
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "b1c33bd09963"
down_revision: Union[str, None] = "9f5c78d352d6"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add indexes for actual search filtering patterns used in frontend
# Based on /browse page filters: room_id and source_kind
# Index for room_id + created_at (for room-specific searches with date ordering)
op.create_index(
"idx_transcript_room_id_created_at",
"transcript",
["room_id", "created_at"],
if_not_exists=True,
)
# Index for source_kind alone (actively used filter in frontend)
op.create_index(
"idx_transcript_source_kind", "transcript", ["source_kind"], if_not_exists=True
)
def downgrade() -> None:
# Remove the indexes in reverse order
op.drop_index("idx_transcript_source_kind", "transcript", if_exists=True)
op.drop_index("idx_transcript_room_id_created_at", "transcript", if_exists=True)
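A hedged sketch of the query shape these indexes are meant to serve; the exact frontend query is an assumption based on the /browse filters noted above:
from sqlalchemy import text

browse_query = text(
    "SELECT id, title FROM transcript"
    " WHERE room_id = :room_id AND source_kind = :source_kind"
    " ORDER BY created_at DESC LIMIT 20"
)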

View File

@@ -23,7 +23,7 @@ def upgrade() -> None:
op.add_column(
"transcript",
sa.Column(
"reviewed", sa.Boolean(), server_default=sa.text("0"), nullable=False
"reviewed", sa.Boolean(), server_default=sa.text("false"), nullable=False
),
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,34 @@
"""make meeting room_id nullable but keep foreign key
Revision ID: def1b5867d4c
Revises: 0ce521cda2ee
Create Date: 2025-09-11 09:42:18.697264
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "def1b5867d4c"
down_revision: Union[str, None] = "0ce521cda2ee"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=True)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=False)
# ### end Alembic commands ###

View File

@@ -32,7 +32,6 @@ dependencies = [
"redis>=5.0.1",
"python-jose[cryptography]>=3.3.0",
"python-multipart>=0.0.6",
"faster-whisper>=0.10.0",
"transformers>=4.36.2",
"jsonschema>=4.23.0",
"openai>=1.59.7",
@@ -40,6 +39,8 @@ dependencies = [
"llama-index>=0.12.52",
"llama-index-llms-openai-like>=0.4.0",
"pytest-env>=1.1.5",
"webvtt-py>=0.5.0",
"PyJWT>=2.8.0",
]
[dependency-groups]
@@ -56,6 +57,9 @@ tests = [
"httpx-ws>=0.4.1",
"pytest-httpx>=0.23.1",
"pytest-celery>=0.0.0",
"pytest-recording>=0.13.4",
"pytest-docker>=3.2.3",
"asgi-lifespan>=2.1.0",
]
aws = ["aioboto3>=11.2.0"]
evaluation = [
@@ -64,6 +68,15 @@ evaluation = [
"tqdm>=4.66.0",
"pydantic>=2.1.1",
]
local = [
"pyannote-audio>=3.3.2",
"faster-whisper>=0.10.0",
]
silero-vad = [
"silero-vad>=5.1.2",
"torch>=2.8.0",
"torchaudio>=2.8.0",
]
[tool.uv]
default-groups = [
@@ -71,6 +84,21 @@ default-groups = [
"tests",
"aws",
"evaluation",
"local",
"silero-vad"
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu" },
]
torchaudio = [
{ index = "pytorch-cpu" },
]
[build-system]
@@ -85,12 +113,26 @@ source = ["reflector"]
[tool.pytest_env]
ENVIRONMENT = "pytest"
DATABASE_URL = "sqlite:///test.sqlite"
DATABASE_URL = "postgresql://test_user:test_password@localhost:15432/reflector_test"
[tool.pytest.ini_options]
addopts = "-ra -q --disable-pytest-warnings --cov --cov-report html -v"
testpaths = ["tests"]
asyncio_mode = "auto"
markers = [
"gpu_modal: mark test to run only with GPU Modal endpoints (deselect with '-m \"not gpu_modal\"')",
]
[tool.ruff.lint]
select = [
"I", # isort - import sorting
"F401", # unused imports
"PLC0415", # import-outside-top-level - detect inline imports
]
[tool.ruff.lint.per-file-ignores]
"reflector/processors/summary/summary_builder.py" = ["E501"]
"gpu/**.py" = ["PLC0415"]
"reflector/tools/**.py" = ["PLC0415"]
"migrations/versions/**.py" = ["PLC0415"]
"tests/**.py" = ["PLC0415"]

View File

@@ -12,6 +12,9 @@ from reflector.events import subscribers_shutdown, subscribers_startup
from reflector.logger import logger
from reflector.metrics import metrics_init
from reflector.settings import settings
from reflector.video_platforms.jitsi import router as jitsi_router
from reflector.video_platforms.whereby import router as whereby_router
from reflector.views.jibri_webhook import router as jibri_webhook_router
from reflector.views.meetings import router as meetings_router
from reflector.views.rooms import router as rooms_router
from reflector.views.rtc_offer import router as rtc_offer_router
@@ -26,7 +29,6 @@ from reflector.views.transcripts_upload import router as transcripts_upload_rout
from reflector.views.transcripts_webrtc import router as transcripts_webrtc_router
from reflector.views.transcripts_websocket import router as transcripts_websocket_router
from reflector.views.user import router as user_router
from reflector.views.whereby import router as whereby_router
from reflector.views.zulip import router as zulip_router
try:
@@ -86,6 +88,8 @@ app.include_router(transcripts_process_router, prefix="/v1")
app.include_router(user_router, prefix="/v1")
app.include_router(zulip_router, prefix="/v1")
app.include_router(whereby_router, prefix="/v1")
app.include_router(jitsi_router, prefix="/v1")
app.include_router(jibri_webhook_router) # No /v1 prefix, uses /api/v1/jibri
add_pagination(app)
# prepare celery

View File

@@ -0,0 +1,27 @@
import asyncio
import functools
from reflector.db import get_database
def asynctask(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
async def run_with_db():
database = get_database()
await database.connect()
try:
return await f(*args, **kwargs)
finally:
await database.disconnect()
coro = run_with_db()
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
if loop and loop.is_running():
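    # Note: run_until_complete raises RuntimeError on an already-running
    # loop, so this branch only succeeds where loop re-entry is allowed.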
return loop.run_until_complete(coro)
return asyncio.run(coro)
return wrapper
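A usage sketch for the decorator above; the task name and body are hypothetical:
@asynctask
async def cleanup_expired_meetings():
    # Runs with a database connected for the call's lifetime; the
    # wrapper disconnects it even if the body raises.
    ...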

View File

@@ -1,12 +1,28 @@
import contextvars
from typing import Optional
import databases
import sqlalchemy
from reflector.events import subscribers_shutdown, subscribers_startup
from reflector.settings import settings
database = databases.Database(settings.DATABASE_URL)
metadata = sqlalchemy.MetaData()
_database_context: contextvars.ContextVar[Optional[databases.Database]] = (
contextvars.ContextVar("database", default=None)
)
def get_database() -> databases.Database:
"""Get database instance for current asyncio context"""
db = _database_context.get()
if db is None:
db = databases.Database(settings.DATABASE_URL)
_database_context.set(db)
return db
# import models
import reflector.db.meetings # noqa
import reflector.db.recordings # noqa
@@ -14,16 +30,18 @@ import reflector.db.rooms # noqa
import reflector.db.transcripts # noqa
kwargs = {}
if "sqlite" in settings.DATABASE_URL:
kwargs["connect_args"] = {"check_same_thread": False}
if "postgres" not in settings.DATABASE_URL:
raise Exception("Only postgres database is supported in reflector")
engine = sqlalchemy.create_engine(settings.DATABASE_URL, **kwargs)
@subscribers_startup.append
async def database_connect(_):
database = get_database()
await database.connect()
@subscribers_shutdown.append
async def database_disconnect(_):
database = get_database()
await database.disconnect()

View File

@@ -1,12 +1,11 @@
from datetime import datetime
from typing import Literal
from datetime import datetime, timezone
from typing import Any, Dict, List, Literal
import sqlalchemy as sa
from fastapi import HTTPException
from pydantic import BaseModel, Field
from reflector.db import database, metadata
from reflector.db.rooms import Room
from reflector.db import get_database, metadata
from reflector.db.rooms import Room, VideoPlatform
from reflector.utils import generate_uuid4
meetings = sa.Table(
@@ -16,10 +15,14 @@ meetings = sa.Table(
sa.Column("room_name", sa.String),
sa.Column("room_url", sa.String),
sa.Column("host_room_url", sa.String),
sa.Column("start_date", sa.DateTime),
sa.Column("end_date", sa.DateTime),
sa.Column("user_id", sa.String),
sa.Column("room_id", sa.String),
sa.Column("start_date", sa.DateTime(timezone=True)),
sa.Column("end_date", sa.DateTime(timezone=True)),
sa.Column(
"room_id",
sa.String,
sa.ForeignKey("room.id", ondelete="CASCADE"),
nullable=True,
),
sa.Column("is_locked", sa.Boolean, nullable=False, server_default=sa.false()),
sa.Column("room_mode", sa.String, nullable=False, server_default="normal"),
sa.Column("recording_type", sa.String, nullable=False, server_default="cloud"),
@@ -41,17 +44,30 @@ meetings = sa.Table(
nullable=False,
server_default=sa.true(),
),
sa.Column("platform", sa.String, nullable=False, server_default="whereby"),
sa.Column("events", sa.JSON, nullable=False, server_default=sa.text("'[]'")),
sa.Index("idx_meeting_room_id", "room_id"),
sa.Index(
"idx_one_active_meeting_per_room",
"room_id",
unique=True,
postgresql_where=sa.text("is_active = true"),
),
)
meeting_consent = sa.Table(
"meeting_consent",
metadata,
sa.Column("id", sa.String, primary_key=True),
sa.Column("meeting_id", sa.String, sa.ForeignKey("meeting.id"), nullable=False),
sa.Column(
"meeting_id",
sa.String,
sa.ForeignKey("meeting.id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("user_id", sa.String),
sa.Column("consent_given", sa.Boolean, nullable=False),
sa.Column("consent_timestamp", sa.DateTime, nullable=False),
sa.Column("consent_timestamp", sa.DateTime(timezone=True), nullable=False),
)
@@ -70,8 +86,7 @@ class Meeting(BaseModel):
host_room_url: str
start_date: datetime
end_date: datetime
user_id: str | None = None
room_id: str | None = None
room_id: str | None
is_locked: bool = False
room_mode: Literal["normal", "group"] = "normal"
recording_type: Literal["none", "local", "cloud"] = "cloud"
@@ -79,6 +94,8 @@ class Meeting(BaseModel):
"none", "prompt", "automatic", "automatic-2nd-participant"
] = "automatic-2nd-participant"
num_clients: int = 0
platform: VideoPlatform = VideoPlatform.WHEREBY
events: List[Dict[str, Any]] = Field(default_factory=list)
class MeetingController:
@@ -90,12 +107,8 @@ class MeetingController:
host_room_url: str,
start_date: datetime,
end_date: datetime,
user_id: str,
room: Room,
):
"""
Create a new meeting
"""
meeting = Meeting(
id=id,
room_name=room_name,
@@ -103,42 +116,33 @@ class MeetingController:
host_room_url=host_room_url,
start_date=start_date,
end_date=end_date,
user_id=user_id,
room_id=room.id,
is_locked=room.is_locked,
room_mode=room.room_mode,
recording_type=room.recording_type,
recording_trigger=room.recording_trigger,
platform=room.platform,
)
query = meetings.insert().values(**meeting.model_dump())
await database.execute(query)
await get_database().execute(query)
return meeting
async def get_all_active(self) -> list[Meeting]:
"""
Get active meetings.
"""
query = meetings.select().where(meetings.c.is_active)
return await database.fetch_all(query)
return await get_database().fetch_all(query)
async def get_by_room_name(
self,
room_name: str,
) -> Meeting:
"""
Get a meeting by room name.
"""
) -> Meeting | None:
query = meetings.select().where(meetings.c.room_name == room_name)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_active(self, room: Room, current_time: datetime) -> Meeting:
"""
Get latest active meeting for a room.
"""
async def get_active(self, room: Room, current_time: datetime) -> Meeting | None:
end_date = getattr(meetings.c, "end_date")
query = (
meetings.select()
@@ -151,42 +155,84 @@ class MeetingController:
)
.order_by(end_date.desc())
)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_by_id(self, meeting_id: str, **kwargs) -> Meeting | None:
"""
Get a meeting by id
"""
query = meetings.select().where(meetings.c.id == meeting_id)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_by_id_for_http(self, meeting_id: str, user_id: str | None) -> Meeting:
"""
Get a meeting by ID for HTTP request.
If not found, it will raise a 404 error.
"""
query = meetings.select().where(meetings.c.id == meeting_id)
result = await database.fetch_one(query)
if not result:
raise HTTPException(status_code=404, detail="Meeting not found")
meeting = Meeting(**result)
if result["user_id"] != user_id:
meeting.host_room_url = ""
return meeting
async def update_meeting(self, meeting_id: str, **kwargs):
query = meetings.update().where(meetings.c.id == meeting_id).values(**kwargs)
await database.execute(query)
await get_database().execute(query)
async def add_event(
self, meeting_id: str, event_type: str, event_data: Dict[str, Any] = None
):
"""Add an event to a meeting's events list."""
if event_data is None:
event_data = {}
event = {
"type": event_type,
"timestamp": datetime.now(tz=timezone.utc).isoformat(),
"data": event_data,
}
# Get current events
query = meetings.select().where(meetings.c.id == meeting_id)
result = await get_database().fetch_one(query)
if not result:
return
current_events = result["events"] or []
current_events.append(event)
# Update with new events list
update_query = (
meetings.update()
.where(meetings.c.id == meeting_id)
.values(events=current_events)
)
await get_database().execute(update_query)
async def participant_joined(
self, meeting_id: str, participant_data: Dict[str, Any] = None
):
"""Record a participant joined event."""
await self.add_event(meeting_id, "participant_joined", participant_data)
async def participant_left(
self, meeting_id: str, participant_data: Dict[str, Any] = None
):
"""Record a participant left event."""
await self.add_event(meeting_id, "participant_left", participant_data)
async def recording_started(
self, meeting_id: str, recording_data: Dict[str, Any] = None
):
"""Record a recording started event."""
await self.add_event(meeting_id, "recording_started", recording_data)
async def recording_stopped(
self, meeting_id: str, recording_data: Dict[str, Any] = None
):
"""Record a recording stopped event."""
await self.add_event(meeting_id, "recording_stopped", recording_data)
async def get_events(self, meeting_id: str) -> List[Dict[str, Any]]:
"""Get all events for a meeting."""
query = meetings.select().where(meetings.c.id == meeting_id)
result = await get_database().fetch_one(query)
if not result:
return []
return result["events"] or []
class MeetingConsentController:
@@ -194,7 +240,7 @@ class MeetingConsentController:
query = meeting_consent.select().where(
meeting_consent.c.meeting_id == meeting_id
)
results = await database.fetch_all(query)
results = await get_database().fetch_all(query)
return [MeetingConsent(**result) for result in results]
async def get_by_meeting_and_user(
@@ -205,10 +251,10 @@ class MeetingConsentController:
meeting_consent.c.meeting_id == meeting_id,
meeting_consent.c.user_id == user_id,
)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if result is None:
return None
return MeetingConsent(**result) if result else None
return MeetingConsent(**result)
async def upsert(self, consent: MeetingConsent) -> MeetingConsent:
"""Create new consent or update existing one for authenticated users"""
@@ -227,14 +273,14 @@ class MeetingConsentController:
consent_timestamp=consent.consent_timestamp,
)
)
await database.execute(query)
await get_database().execute(query)
existing.consent_given = consent.consent_given
existing.consent_timestamp = consent.consent_timestamp
return existing
query = meeting_consent.insert().values(**consent.model_dump())
await database.execute(query)
await get_database().execute(query)
return consent
async def has_any_denial(self, meeting_id: str) -> bool:
@@ -243,7 +289,7 @@ class MeetingConsentController:
meeting_consent.c.meeting_id == meeting_id,
meeting_consent.c.consent_given.is_(False),
)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
return result is not None

View File

@@ -4,7 +4,7 @@ from typing import Literal
import sqlalchemy as sa
from pydantic import BaseModel, Field
from reflector.db import database, metadata
from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
recordings = sa.Table(
@@ -13,7 +13,7 @@ recordings = sa.Table(
sa.Column("id", sa.String, primary_key=True),
sa.Column("bucket_name", sa.String, nullable=False),
sa.Column("object_key", sa.String, nullable=False),
sa.Column("recorded_at", sa.DateTime, nullable=False),
sa.Column("recorded_at", sa.DateTime(timezone=True), nullable=False),
sa.Column(
"status",
sa.String,
@@ -37,12 +37,12 @@ class Recording(BaseModel):
class RecordingController:
async def create(self, recording: Recording):
query = recordings.insert().values(**recording.model_dump())
await database.execute(query)
await get_database().execute(query)
return recording
async def get_by_id(self, id: str) -> Recording:
query = recordings.select().where(recordings.c.id == id)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
return Recording(**result) if result else None
async def get_by_object_key(self, bucket_name: str, object_key: str) -> Recording:
@@ -50,8 +50,12 @@ class RecordingController:
recordings.c.bucket_name == bucket_name,
recordings.c.object_key == object_key,
)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
return Recording(**result) if result else None
async def remove_by_id(self, id: str) -> None:
query = recordings.delete().where(recordings.c.id == id)
await get_database().execute(query)
recordings_controller = RecordingController()

View File

@@ -1,4 +1,6 @@
from datetime import datetime
import secrets
from datetime import datetime, timezone
from enum import StrEnum
from sqlite3 import IntegrityError
from typing import Literal
@@ -7,16 +9,22 @@ from fastapi import HTTPException
from pydantic import BaseModel, Field
from sqlalchemy.sql import false, or_
from reflector.db import database, metadata
from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
class VideoPlatform(StrEnum):
WHEREBY = "whereby"
JITSI = "jitsi"
rooms = sqlalchemy.Table(
"room",
metadata,
sqlalchemy.Column("id", sqlalchemy.String, primary_key=True),
sqlalchemy.Column("name", sqlalchemy.String, nullable=False, unique=True),
sqlalchemy.Column("user_id", sqlalchemy.String, nullable=False),
sqlalchemy.Column("created_at", sqlalchemy.DateTime, nullable=False),
sqlalchemy.Column("created_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Column(
"zulip_auto_post", sqlalchemy.Boolean, nullable=False, server_default=false()
),
@@ -40,6 +48,11 @@ rooms = sqlalchemy.Table(
sqlalchemy.Column(
"is_shared", sqlalchemy.Boolean, nullable=False, server_default=false()
),
sqlalchemy.Column("webhook_url", sqlalchemy.String, nullable=True),
sqlalchemy.Column("webhook_secret", sqlalchemy.String, nullable=True),
sqlalchemy.Column(
"platform", sqlalchemy.String, nullable=False, server_default="whereby"
),
sqlalchemy.Index("idx_room_is_shared", "is_shared"),
)
@@ -48,7 +61,7 @@ class Room(BaseModel):
id: str = Field(default_factory=generate_uuid4)
name: str
user_id: str
created_at: datetime = Field(default_factory=datetime.utcnow)
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
zulip_auto_post: bool = False
zulip_stream: str = ""
zulip_topic: str = ""
@@ -59,6 +72,9 @@ class Room(BaseModel):
"none", "prompt", "automatic", "automatic-2nd-participant"
] = "automatic-2nd-participant"
is_shared: bool = False
webhook_url: str | None = None
webhook_secret: str | None = None
platform: VideoPlatform = VideoPlatform.WHEREBY
class RoomController:
@@ -92,7 +108,7 @@ class RoomController:
if return_query:
return query
results = await database.fetch_all(query)
results = await get_database().fetch_all(query)
return results
async def add(
@@ -107,10 +123,16 @@ class RoomController:
recording_type: str,
recording_trigger: str,
is_shared: bool,
webhook_url: str = "",
webhook_secret: str = "",
platform: str = "whereby",
):
"""
Add a new room
"""
if webhook_url and not webhook_secret:
webhook_secret = secrets.token_urlsafe(32)
room = Room(
name=name,
user_id=user_id,
@@ -122,10 +144,13 @@ class RoomController:
recording_type=recording_type,
recording_trigger=recording_trigger,
is_shared=is_shared,
webhook_url=webhook_url,
webhook_secret=webhook_secret,
platform=platform,
)
query = rooms.insert().values(**room.model_dump())
try:
await database.execute(query)
await get_database().execute(query)
except IntegrityError:
raise HTTPException(status_code=400, detail="Room name is not unique")
return room
@@ -134,9 +159,12 @@ class RoomController:
"""
Update a room fields with key/values in values
"""
if values.get("webhook_url") and not values.get("webhook_secret"):
values["webhook_secret"] = secrets.token_urlsafe(32)
query = rooms.update().where(rooms.c.id == room.id).values(**values)
try:
await database.execute(query)
await get_database().execute(query)
except IntegrityError:
raise HTTPException(status_code=400, detail="Room name is not unique")
@@ -151,7 +179,7 @@ class RoomController:
query = rooms.select().where(rooms.c.id == room_id)
if "user_id" in kwargs:
query = query.where(rooms.c.user_id == kwargs["user_id"])
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Room(**result)
@@ -163,7 +191,7 @@ class RoomController:
query = rooms.select().where(rooms.c.name == room_name)
if "user_id" in kwargs:
query = query.where(rooms.c.user_id == kwargs["user_id"])
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Room(**result)
@@ -175,7 +203,7 @@ class RoomController:
If not found, it will raise a 404 error.
"""
query = rooms.select().where(rooms.c.id == meeting_id)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
raise HTTPException(status_code=404, detail="Room not found")
@@ -197,7 +225,7 @@ class RoomController:
if user_id is not None and room.user_id != user_id:
return
query = rooms.delete().where(rooms.c.id == room_id)
await database.execute(query)
await get_database().execute(query)
rooms_controller = RoomController()

View File

@@ -0,0 +1,468 @@
"""Search functionality for transcripts and other entities."""
import itertools
from dataclasses import dataclass
from datetime import datetime
from io import StringIO
from typing import Annotated, Any, Dict, Iterator
import sqlalchemy
import webvtt
from databases.interfaces import Record as DbRecord
from fastapi import HTTPException
from pydantic import (
BaseModel,
Field,
NonNegativeFloat,
NonNegativeInt,
TypeAdapter,
ValidationError,
constr,
field_serializer,
)
from reflector.db import get_database
from reflector.db.rooms import rooms
from reflector.db.transcripts import SourceKind, TranscriptStatus, transcripts
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.utils.string import NonEmptyString, try_parse_non_empty_string
DEFAULT_SEARCH_LIMIT = 20
SNIPPET_CONTEXT_LENGTH = 50 # Characters before/after match to include
DEFAULT_SNIPPET_MAX_LENGTH = NonNegativeInt(150)
DEFAULT_MAX_SNIPPETS = NonNegativeInt(3)
LONG_SUMMARY_MAX_SNIPPETS = 2
SearchQueryBase = constr(min_length=1, strip_whitespace=True)
SearchLimitBase = Annotated[int, Field(ge=1, le=100)]
SearchOffsetBase = Annotated[int, Field(ge=0)]
SearchTotalBase = Annotated[int, Field(ge=0)]
SearchQuery = Annotated[SearchQueryBase, Field(description="Search query text")]
search_query_adapter = TypeAdapter(SearchQuery)
SearchLimit = Annotated[SearchLimitBase, Field(description="Results per page")]
SearchOffset = Annotated[
SearchOffsetBase, Field(description="Number of results to skip")
]
SearchTotal = Annotated[
SearchTotalBase, Field(description="Total number of search results")
]
WEBVTT_SPEC_HEADER = "WEBVTT"
WebVTTContent = Annotated[
str,
Field(min_length=len(WEBVTT_SPEC_HEADER), description="WebVTT content"),
]
class WebVTTProcessor:
"""Stateless processor for WebVTT content operations."""
@staticmethod
def parse(raw_content: str) -> WebVTTContent:
"""Parse WebVTT content and return it as a string."""
if not raw_content.startswith(WEBVTT_SPEC_HEADER):
raise ValueError(f"Invalid WebVTT content, no header {WEBVTT_SPEC_HEADER}")
return raw_content
@staticmethod
def extract_text(webvtt_content: WebVTTContent) -> str:
"""Extract plain text from WebVTT content using webvtt library."""
try:
buffer = StringIO(webvtt_content)
vtt = webvtt.read_buffer(buffer)
return " ".join(caption.text for caption in vtt if caption.text)
except webvtt.errors.MalformedFileError as e:
logger.warning(f"Malformed WebVTT content: {e}")
return ""
except (UnicodeDecodeError, ValueError) as e:
logger.warning(f"Failed to decode WebVTT content: {e}")
return ""
except AttributeError as e:
logger.error(
f"WebVTT parsing error - unexpected format: {e}", exc_info=True
)
return ""
except Exception as e:
logger.error(f"Unexpected error parsing WebVTT: {e}", exc_info=True)
return ""
@staticmethod
def generate_snippets(
webvtt_content: WebVTTContent,
query: SearchQuery,
max_snippets: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from WebVTT content."""
return SnippetGenerator.generate(
WebVTTProcessor.extract_text(webvtt_content),
query,
max_snippets=max_snippets,
)
@dataclass(frozen=True)
class SnippetCandidate:
"""Represents a candidate snippet with its position."""
_text: str
start: NonNegativeInt
_original_text_length: int
@property
def end(self) -> NonNegativeInt:
"""Calculate end position from start and raw text length."""
return self.start + len(self._text)
def text(self) -> str:
"""Get display text with ellipses added if needed."""
result = self._text.strip()
if self.start > 0:
result = "..." + result
if self.end < self._original_text_length:
result = result + "..."
return result
class SearchParameters(BaseModel):
"""Validated search parameters for full-text search."""
query_text: SearchQuery | None = None
limit: SearchLimit = DEFAULT_SEARCH_LIMIT
offset: SearchOffset = 0
user_id: str | None = None
room_id: str | None = None
source_kind: SourceKind | None = None
class SearchResultDB(BaseModel):
"""Intermediate model for validating raw database results."""
id: str = Field(..., min_length=1)
created_at: datetime
status: str = Field(..., min_length=1)
duration: float | None = Field(None, ge=0)
user_id: str | None = None
title: str | None = None
source_kind: SourceKind
room_id: str | None = None
rank: float = Field(..., ge=0, le=1)
class SearchResult(BaseModel):
"""Public search result model with computed fields."""
id: str = Field(..., min_length=1)
title: str | None = None
user_id: str | None = None
room_id: str | None = None
room_name: str | None = None
source_kind: SourceKind
created_at: datetime
status: TranscriptStatus = Field(..., min_length=1)
rank: float = Field(..., ge=0, le=1)
duration: NonNegativeFloat | None = Field(..., description="Duration in seconds")
search_snippets: list[str] = Field(
description="Text snippets around search matches"
)
total_match_count: NonNegativeInt = Field(
default=0, description="Total number of matches found in the transcript"
)
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
if dt.tzinfo is None:
return dt.isoformat() + "Z"
return dt.isoformat()
class SnippetGenerator:
"""Stateless generator for text snippets and match operations."""
@staticmethod
def find_all_matches(text: str, query: str) -> Iterator[int]:
"""Generate all match positions for a query in text."""
if not text:
logger.warning("Empty text for search query in find_all_matches")
return
if not query:
logger.warning("Empty query for search text in find_all_matches")
return
text_lower = text.lower()
query_lower = query.lower()
start = 0
prev_start = start
while (pos := text_lower.find(query_lower, start)) != -1:
yield pos
start = pos + len(query_lower)
if start <= prev_start:
raise ValueError("panic! find_all_matches is not incremental")
prev_start = start
@staticmethod
def count_matches(text: str, query: SearchQuery) -> NonNegativeInt:
"""Count total number of matches for a query in text."""
ZERO = NonNegativeInt(0)
if not text:
logger.warning("Empty text for search query in count_matches")
return ZERO
assert query is not None
return NonNegativeInt(
sum(1 for _ in SnippetGenerator.find_all_matches(text, query))
)
@staticmethod
def create_snippet(
text: str, match_pos: int, max_length: int = DEFAULT_SNIPPET_MAX_LENGTH
) -> SnippetCandidate:
"""Create a snippet from a match position."""
snippet_start = NonNegativeInt(max(0, match_pos - SNIPPET_CONTEXT_LENGTH))
snippet_end = min(len(text), match_pos + max_length - SNIPPET_CONTEXT_LENGTH)
snippet_text = text[snippet_start:snippet_end]
return SnippetCandidate(
_text=snippet_text, start=snippet_start, _original_text_length=len(text)
)
@staticmethod
def filter_non_overlapping(
candidates: Iterator[SnippetCandidate],
) -> Iterator[str]:
"""Filter out overlapping snippets and return only display text."""
last_end = 0
for candidate in candidates:
display_text = candidate.text()
# it means that overlapping snippets simply don't get included
# that's fine as simplistic logic; users probably won't care much because they already have their search results just fine
if candidate.start >= last_end and display_text:
yield display_text
last_end = candidate.end
@staticmethod
def generate(
text: str,
query: SearchQuery,
max_length: NonNegativeInt = DEFAULT_SNIPPET_MAX_LENGTH,
max_snippets: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from text."""
assert query is not None
if not text:
logger.warning("Empty text for generate_snippets")
return []
candidates = (
SnippetGenerator.create_snippet(text, pos, max_length)
for pos in SnippetGenerator.find_all_matches(text, query)
)
filtered = SnippetGenerator.filter_non_overlapping(candidates)
snippets = list(itertools.islice(filtered, max_snippets))
# Fall back to a first-word search if the full query has no matches.
# Another simplification: proper snippet generation is complicated and tied to DB logic, so a first-word retry is used instead.
if not snippets and " " in query:
first_word = query.split()[0]
return SnippetGenerator.generate(text, first_word, max_length, max_snippets)
return snippets
@staticmethod
def from_summary(
summary: str,
query: SearchQuery,
max_snippets: NonNegativeInt = LONG_SUMMARY_MAX_SNIPPETS,
) -> list[str]:
"""Generate snippets from summary text."""
return SnippetGenerator.generate(summary, query, max_snippets=max_snippets)
@staticmethod
def combine_sources(
summary: NonEmptyString | None,
webvtt: WebVTTContent | None,
query: SearchQuery,
max_total: NonNegativeInt = DEFAULT_MAX_SNIPPETS,
) -> tuple[list[str], NonNegativeInt]:
"""Combine snippets from multiple sources and return total match count.
Returns (snippets, total_match_count) tuple.
snippets can be empty for real in case of e.g. title match
"""
assert (
summary is not None or webvtt is not None
), "At least one source must be present"
webvtt_matches = 0
summary_matches = 0
if webvtt:
webvtt_text = WebVTTProcessor.extract_text(webvtt)
webvtt_matches = SnippetGenerator.count_matches(webvtt_text, query)
if summary:
summary_matches = SnippetGenerator.count_matches(summary, query)
total_matches = NonNegativeInt(webvtt_matches + summary_matches)
summary_snippets = (
SnippetGenerator.from_summary(summary, query) if summary else []
)
if len(summary_snippets) >= max_total:
return summary_snippets[:max_total], total_matches
remaining = max_total - len(summary_snippets)
webvtt_snippets = (
WebVTTProcessor.generate_snippets(webvtt, query, remaining)
if webvtt
else []
)
return summary_snippets + webvtt_snippets, total_matches
class SearchController:
"""Controller for search operations across different entities."""
@classmethod
async def search_transcripts(
cls, params: SearchParameters
) -> tuple[list[SearchResult], int]:
"""
Full-text search for transcripts using PostgreSQL tsvector.
Returns (results, total_count).
"""
if not is_postgresql():
logger.warning(
"Full-text search requires PostgreSQL. Returning empty results."
)
return [], 0
base_columns = [
transcripts.c.id,
transcripts.c.title,
transcripts.c.created_at,
transcripts.c.duration,
transcripts.c.status,
transcripts.c.user_id,
transcripts.c.room_id,
transcripts.c.source_kind,
transcripts.c.webvtt,
transcripts.c.long_summary,
sqlalchemy.case(
(
transcripts.c.room_id.isnot(None) & rooms.c.id.is_(None),
"Deleted Room",
),
else_=rooms.c.name,
).label("room_name"),
]
search_query = None
if params.query_text is not None:
search_query = sqlalchemy.func.websearch_to_tsquery(
"english", params.query_text
)
rank_column = sqlalchemy.func.ts_rank(
transcripts.c.search_vector_en,
search_query,
32, # normalization flag: rank/(rank+1) for 0-1 range
).label("rank")
else:
rank_column = sqlalchemy.cast(1.0, sqlalchemy.Float).label("rank")
columns = base_columns + [rank_column]
base_query = sqlalchemy.select(columns).select_from(
transcripts.join(rooms, transcripts.c.room_id == rooms.c.id, isouter=True)
)
if params.query_text is not None:
# because already initialized based on params.query_text presence above
assert search_query is not None
base_query = base_query.where(
transcripts.c.search_vector_en.op("@@")(search_query)
)
if params.user_id:
base_query = base_query.where(
sqlalchemy.or_(
transcripts.c.user_id == params.user_id, rooms.c.is_shared
)
)
else:
base_query = base_query.where(rooms.c.is_shared)
if params.room_id:
base_query = base_query.where(transcripts.c.room_id == params.room_id)
if params.source_kind:
base_query = base_query.where(
transcripts.c.source_kind == params.source_kind
)
if params.query_text is not None:
order_by = sqlalchemy.desc(sqlalchemy.text("rank"))
else:
order_by = sqlalchemy.desc(transcripts.c.created_at)
query = base_query.order_by(order_by).limit(params.limit).offset(params.offset)
rs = await get_database().fetch_all(query)
count_query = sqlalchemy.select([sqlalchemy.func.count()]).select_from(
base_query.alias("search_results")
)
total = await get_database().fetch_val(count_query)
def _process_result(r: DbRecord) -> SearchResult:
r_dict: Dict[str, Any] = dict(r)
webvtt_raw: str | None = r_dict.pop("webvtt", None)
webvtt: WebVTTContent | None
if webvtt_raw:
webvtt = WebVTTProcessor.parse(webvtt_raw)
else:
webvtt = None
long_summary_r: str | None = r_dict.pop("long_summary", None)
long_summary: NonEmptyString | None = try_parse_non_empty_string(long_summary_r)
room_name: str | None = r_dict.pop("room_name", None)
db_result = SearchResultDB.model_validate(r_dict)
at_least_one_source = webvtt is not None or long_summary is not None
has_query = params.query_text is not None
snippets, total_match_count = (
SnippetGenerator.combine_sources(
long_summary, webvtt, params.query_text, DEFAULT_MAX_SNIPPETS
)
if has_query and at_least_one_source
else ([], 0)
)
return SearchResult(
**db_result.model_dump(),
room_name=room_name,
search_snippets=snippets,
total_match_count=total_match_count,
)
try:
results = [_process_result(r) for r in rs]
except ValidationError as e:
logger.error(f"Invalid search result data: {e}", exc_info=True)
raise HTTPException(
status_code=500, detail="Internal search result data consistency error"
)
except Exception as e:
logger.error(f"Error processing search results: {e}", exc_info=True)
raise
return results, total
search_controller = SearchController()
webvtt_processor = WebVTTProcessor()
snippet_generator = SnippetGenerator()
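A minimal call sketch for the controller above; the query text and user id are illustrative:
async def demo_search(user_id: str):
    params = SearchParameters(query_text="quarterly budget", user_id=user_id)
    results, total = await SearchController.search_transcripts(params)
    return [(r.id, round(r.rank, 3), r.search_snippets) for r in results]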

View File

@@ -3,7 +3,7 @@ import json
import os
import shutil
from contextlib import asynccontextmanager
from datetime import datetime, timezone
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Literal
@@ -11,13 +11,19 @@ import sqlalchemy
from fastapi import HTTPException
from pydantic import BaseModel, ConfigDict, Field, field_serializer
from sqlalchemy import Enum
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy.sql import false, or_
from reflector.db import database, metadata
from reflector.db import get_database, metadata
from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.processors.types import Word as ProcessorWord
from reflector.settings import settings
from reflector.storage import get_transcripts_storage
from reflector.storage import get_recordings_storage, get_transcripts_storage
from reflector.utils import generate_uuid4
from reflector.utils.webvtt import topics_to_webvtt
class SourceKind(enum.StrEnum):
@@ -34,7 +40,7 @@ transcripts = sqlalchemy.Table(
sqlalchemy.Column("status", sqlalchemy.String),
sqlalchemy.Column("locked", sqlalchemy.Boolean),
sqlalchemy.Column("duration", sqlalchemy.Float),
sqlalchemy.Column("created_at", sqlalchemy.DateTime),
sqlalchemy.Column("created_at", sqlalchemy.DateTime(timezone=True)),
sqlalchemy.Column("title", sqlalchemy.String),
sqlalchemy.Column("short_summary", sqlalchemy.String),
sqlalchemy.Column("long_summary", sqlalchemy.String),
@@ -76,11 +82,38 @@ transcripts = sqlalchemy.Table(
# same field could've been in recording/meeting, and it's maybe even ok to dupe it at need
sqlalchemy.Column("audio_deleted", sqlalchemy.Boolean),
sqlalchemy.Column("room_id", sqlalchemy.String),
sqlalchemy.Column("webvtt", sqlalchemy.Text),
sqlalchemy.Index("idx_transcript_recording_id", "recording_id"),
sqlalchemy.Index("idx_transcript_user_id", "user_id"),
sqlalchemy.Index("idx_transcript_created_at", "created_at"),
sqlalchemy.Index("idx_transcript_user_id_recording_id", "user_id", "recording_id"),
sqlalchemy.Index("idx_transcript_room_id", "room_id"),
sqlalchemy.Index("idx_transcript_source_kind", "source_kind"),
sqlalchemy.Index("idx_transcript_room_id_created_at", "room_id", "created_at"),
)
# Add PostgreSQL-specific full-text search column
# This matches the migration in migrations/versions/116b2f287eab_add_full_text_search.py
if is_postgresql():
transcripts.append_column(
sqlalchemy.Column(
"search_vector_en",
TSVECTOR,
sqlalchemy.Computed(
"setweight(to_tsvector('english', coalesce(title, '')), 'A') || "
"setweight(to_tsvector('english', coalesce(long_summary, '')), 'B') || "
"setweight(to_tsvector('english', coalesce(webvtt, '')), 'C')",
persisted=True,
),
)
)
# Add GIN index for the search vector
transcripts.append_constraint(
sqlalchemy.Index(
"idx_transcript_search_vector_en",
"search_vector_en",
postgresql_using="gin",
)
)
@@ -89,6 +122,15 @@ def generate_transcript_name() -> str:
return f"Transcript {now.strftime('%Y-%m-%d %H:%M:%S')}"
TranscriptStatus = Literal[
"idle", "uploaded", "recording", "processing", "error", "ended"
]
class StrValue(BaseModel):
value: str
class AudioWaveform(BaseModel):
data: list[float]
@@ -147,14 +189,18 @@ class TranscriptParticipant(BaseModel):
class Transcript(BaseModel):
"""Full transcript model with all fields."""
id: str = Field(default_factory=generate_uuid4)
user_id: str | None = None
name: str = Field(default_factory=generate_transcript_name)
status: str = "idle"
locked: bool = False
status: TranscriptStatus = "idle"
duration: float = 0
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
title: str | None = None
source_kind: SourceKind
room_id: str | None = None
locked: bool = False
short_summary: str | None = None
long_summary: str | None = None
topics: list[TranscriptTopic] = []
@@ -168,9 +214,8 @@ class Transcript(BaseModel):
meeting_id: str | None = None
recording_id: str | None = None
zulip_message_id: int | None = None
source_kind: SourceKind
audio_deleted: bool | None = None
room_id: str | None = None
webvtt: str | None = None
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
@@ -271,10 +316,12 @@ class Transcript(BaseModel):
# we need to create an url to be used for diarization
# we can't use the audio_mp3_filename because it's not accessible
# from the diarization processor
from datetime import timedelta
from reflector.app import app
from reflector.views.transcripts import create_access_token
# TODO don't import app in db
from reflector.app import app # noqa: PLC0415
# TODO: extract a util; don't import views in db
from reflector.views.transcripts import create_access_token # noqa: PLC0415
path = app.url_path_for(
"transcript_get_audio_mp3",
@@ -335,7 +382,6 @@ class TranscriptController:
- `room_id`: filter transcripts by room ID
- `search_term`: filter transcripts by search term
"""
from reflector.db.rooms import rooms
query = transcripts.select().join(
rooms, transcripts.c.room_id == rooms.c.id, isouter=True
@@ -386,7 +432,7 @@ class TranscriptController:
if return_query:
return query
results = await database.fetch_all(query)
results = await get_database().fetch_all(query)
return results
async def get_by_id(self, transcript_id: str, **kwargs) -> Transcript | None:
@@ -396,7 +442,7 @@ class TranscriptController:
query = transcripts.select().where(transcripts.c.id == transcript_id)
if "user_id" in kwargs:
query = query.where(transcripts.c.user_id == kwargs["user_id"])
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Transcript(**result)
@@ -410,7 +456,7 @@ class TranscriptController:
query = transcripts.select().where(transcripts.c.recording_id == recording_id)
if "user_id" in kwargs:
query = query.where(transcripts.c.user_id == kwargs["user_id"])
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
return None
return Transcript(**result)
@@ -428,7 +474,7 @@ class TranscriptController:
if order_by.startswith("-"):
field = field.desc()
query = query.order_by(field)
results = await database.fetch_all(query)
results = await get_database().fetch_all(query)
return [Transcript(**result) for result in results]
async def get_by_id_for_http(
@@ -446,7 +492,7 @@ class TranscriptController:
to determine if the user can access the transcript.
"""
query = transcripts.select().where(transcripts.c.id == transcript_id)
result = await database.fetch_one(query)
result = await get_database().fetch_one(query)
if not result:
raise HTTPException(status_code=404, detail="Transcript not found")
@@ -499,23 +545,52 @@ class TranscriptController:
room_id=room_id,
)
query = transcripts.insert().values(**transcript.model_dump())
await database.execute(query)
await get_database().execute(query)
return transcript
async def update(self, transcript: Transcript, values: dict, mutate=True):
# TODO: investigate why mutate= is used; it's used in one place currently, maybe because of ORM field updates.
# Using mutate=True is discouraged.
async def update(
self, transcript: Transcript, values: dict, mutate=False
) -> Transcript:
"""
Update a transcript fields with key/values in values
Update a transcript fields with key/values in values.
Returns a copy of the transcript with updated values.
"""
values = TranscriptController._handle_topics_update(values)
query = (
transcripts.update()
.where(transcripts.c.id == transcript.id)
.values(**values)
)
await database.execute(query)
await get_database().execute(query)
if mutate:
for key, value in values.items():
setattr(transcript, key, value)
updated_transcript = transcript.model_copy(update=values)
return updated_transcript
@staticmethod
def _handle_topics_update(values: dict) -> dict:
"""Auto-update WebVTT when topics are updated."""
if values.get("webvtt") is not None:
    logger.warning("trying to update read-only webvtt column")
topics_data = values.get("topics")
if topics_data is None:
return values
return {
**values,
"webvtt": topics_to_webvtt(
[TranscriptTopic(**topic_dict) for topic_dict in topics_data]
),
}
async def remove_by_id(
self,
transcript_id: str,
@@ -529,23 +604,55 @@ class TranscriptController:
return
if user_id is not None and transcript.user_id != user_id:
return
if transcript.audio_location == "storage" and not transcript.audio_deleted:
try:
await get_transcripts_storage().delete_file(
transcript.storage_audio_path
)
except Exception as e:
logger.warning(
"Failed to delete transcript audio from storage",
exc_info=e,
transcript_id=transcript.id,
)
transcript.unlink()
if transcript.recording_id:
try:
recording = await recordings_controller.get_by_id(
transcript.recording_id
)
if recording:
try:
await get_recordings_storage().delete_file(recording.object_key)
except Exception as e:
logger.warning(
"Failed to delete recording object from S3",
exc_info=e,
recording_id=transcript.recording_id,
)
await recordings_controller.remove_by_id(transcript.recording_id)
except Exception as e:
logger.warning(
"Failed to delete recording row",
exc_info=e,
recording_id=transcript.recording_id,
)
query = transcripts.delete().where(transcripts.c.id == transcript_id)
await database.execute(query)
await get_database().execute(query)
async def remove_by_recording_id(self, recording_id: str):
"""
Remove a transcript by recording_id
"""
query = transcripts.delete().where(transcripts.c.recording_id == recording_id)
await database.execute(query)
await get_database().execute(query)
@asynccontextmanager
async def transaction(self):
"""
A context manager for database transaction
"""
async with database.transaction(isolation="serializable"):
async with get_database().transaction(isolation="serializable"):
yield
async def append_event(
@@ -558,11 +665,7 @@ class TranscriptController:
Append an event to a transcript
"""
resp = transcript.add_event(event=event, data=data)
await self.update(
transcript,
{"events": transcript.events_dump()},
mutate=False,
)
await self.update(transcript, {"events": transcript.events_dump()})
return resp
async def upsert_topic(
@@ -574,11 +677,7 @@ class TranscriptController:
Upsert topics to a transcript
"""
transcript.upsert_topic(topic)
await self.update(
transcript,
{"topics": transcript.topics_dump()},
mutate=False,
)
await self.update(transcript, {"topics": transcript.topics_dump()})
async def move_mp3_to_storage(self, transcript: Transcript):
"""
@@ -603,7 +702,8 @@ class TranscriptController:
)
# indicate on the transcript that the audio is now on storage
await self.update(transcript, {"audio_location": "storage"})
# mutates transcript argument
await self.update(transcript, {"audio_location": "storage"}, mutate=True)
# unlink the local file
transcript.audio_mp3_filename.unlink(missing_ok=True)
@@ -627,11 +727,7 @@ class TranscriptController:
Add/update a participant to a transcript
"""
result = transcript.upsert_participant(participant)
await self.update(
transcript,
{"participants": transcript.participants_dump()},
mutate=False,
)
await self.update(transcript, {"participants": transcript.participants_dump()})
return result
async def delete_participant(
@@ -643,11 +739,29 @@ class TranscriptController:
Delete a participant from a transcript
"""
transcript.delete_participant(participant_id)
await self.update(
transcript,
{"participants": transcript.participants_dump()},
mutate=False,
)
await self.update(transcript, {"participants": transcript.participants_dump()})
async def set_status(
self, transcript_id: str, status: TranscriptStatus
) -> TranscriptEvent | None:
"""
Update the status of a transcript
Will add an event STATUS + update the status field of transcript
"""
async with self.transaction():
transcript = await self.get_by_id(transcript_id)
if not transcript:
raise Exception(f"Transcript {transcript_id} not found")
if transcript.status == status:
return
resp = await self.append_event(
transcript=transcript,
event="STATUS",
data=StrValue(value=status),
)
await self.update(transcript, {"status": status})
return resp
transcripts_controller = TranscriptController()
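For orientation, a minimal sketch of how a caller might drive the controller above; the `finalize` helper is hypothetical, and the database connection is assumed to be established already.

```python
from reflector.db.transcripts import transcripts_controller

async def finalize(transcript_id: str) -> None:
    # Hypothetical helper illustrating the controller API above.
    transcript = await transcripts_controller.get_by_id(transcript_id)
    if transcript is None:
        return
    # Updating "topics" also regenerates the read-only "webvtt" column
    # through _handle_topics_update.
    await transcripts_controller.update(
        transcript, {"topics": transcript.topics_dump()}
    )
    # set_status appends a STATUS event and persists the new status
    # inside a serializable transaction.
    await transcripts_controller.set_status(transcript_id, "ended")
```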

View File

@@ -0,0 +1,9 @@
"""Database utility functions."""
from reflector.db import get_database
def is_postgresql() -> bool:
scheme = get_database().url.scheme
return bool(scheme and scheme.startswith("postgresql"))
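A hedged example of how this helper might be used to branch on the database dialect; the import path and the `upsert_suffix` helper are assumptions for illustration.

```python
from reflector.db.utils import is_postgresql  # module path assumed

def upsert_suffix() -> str:
    # PostgreSQL supports ON CONFLICT; other dialects fall back to nothing.
    return " ON CONFLICT DO NOTHING" if is_postgresql() else ""
```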

View File

@@ -0,0 +1,227 @@
import json
from pathlib import Path
from typing import Any, Dict, List, Literal, Optional, Union
from pydantic import BaseModel
from typing_extensions import TypedDict
class ParticipantInfo(BaseModel):
jid: str
nick: str
id: str
is_moderator: bool = False
class ParticipantLeftInfo(BaseModel):
jid: str
nick: Optional[str] = None
duration_seconds: Optional[int] = None
class RoomCreatedEvent(BaseModel):
type: Literal["room_created"]
timestamp: int
room_name: str
room_jid: str
meeting_url: str
class RecordingStartedEvent(BaseModel):
type: Literal["recording_started"]
timestamp: int
room_name: str
session_id: str
jibri_jid: str
class RecordingStoppedEvent(BaseModel):
type: Literal["recording_stopped"]
timestamp: int
room_name: str
session_id: str
meeting_url: str
class ParticipantJoinedEvent(BaseModel):
type: Literal["participant_joined"]
timestamp: int
room_name: str
participant: ParticipantInfo
class ParticipantLeftEvent(BaseModel):
type: Literal["participant_left"]
timestamp: int
room_name: str
participant: ParticipantLeftInfo
class SpeakerActiveEvent(BaseModel):
type: Literal["speaker_active"]
timestamp: int
room_name: str
speaker_jid: str
speaker_nick: str
duration: int
class DominantSpeakerChangedEvent(BaseModel):
type: Literal["dominant_speaker_changed"]
timestamp: int
room_name: str
previous: str
current: str
JitsiEvent = Union[
RoomCreatedEvent,
RecordingStartedEvent,
RecordingStoppedEvent,
ParticipantJoinedEvent,
ParticipantLeftEvent,
SpeakerActiveEvent,
DominantSpeakerChangedEvent,
]
class RoomInfo(TypedDict):
name: str
jid: str
created_at: int
meeting_url: str
recording_stopped_at: Optional[int]
class ParticipantData(TypedDict):
jid: str
nick: str
id: str
is_moderator: bool
joined_at: int
left_at: Optional[int]
duration: Optional[int]
events: List[str]
class SpeakerStats(TypedDict):
total_time: int
nick: str
class ParsedMetadata(TypedDict):
room: RoomInfo
participants: List[ParticipantData]
speaker_stats: Dict[str, SpeakerStats]
event_count: int
class JitsiEventParser:
def parse_event(self, event_data: Dict[str, Any]) -> Optional[JitsiEvent]:
event_type = event_data.get("type")
try:
if event_type == "room_created":
return RoomCreatedEvent(**event_data)
elif event_type == "recording_started":
return RecordingStartedEvent(**event_data)
elif event_type == "recording_stopped":
return RecordingStoppedEvent(**event_data)
elif event_type == "participant_joined":
return ParticipantJoinedEvent(**event_data)
elif event_type == "participant_left":
return ParticipantLeftEvent(**event_data)
elif event_type == "speaker_active":
return SpeakerActiveEvent(**event_data)
elif event_type == "dominant_speaker_changed":
return DominantSpeakerChangedEvent(**event_data)
else:
return None
except Exception:
return None
def parse_events_file(self, recording_path: str) -> ParsedMetadata:
events_file = Path(recording_path) / "events.jsonl"
room_info: RoomInfo = {
"name": "",
"jid": "",
"created_at": 0,
"meeting_url": "",
"recording_stopped_at": None,
}
if not events_file.exists():
return ParsedMetadata(
room=room_info, participants=[], speaker_stats={}, event_count=0
)
events: List[JitsiEvent] = []
participants: Dict[str, ParticipantData] = {}
speaker_stats: Dict[str, SpeakerStats] = {}
with open(events_file, "r") as f:
for line in f:
if not line.strip():
continue
try:
event_data = json.loads(line)
event = self.parse_event(event_data)
if event is None:
continue
events.append(event)
if isinstance(event, RoomCreatedEvent):
room_info = {
"name": event.room_name,
"jid": event.room_jid,
"created_at": event.timestamp,
"meeting_url": event.meeting_url,
"recording_stopped_at": None,
}
elif isinstance(event, ParticipantJoinedEvent):
participants[event.participant.id] = {
"jid": event.participant.jid,
"nick": event.participant.nick,
"id": event.participant.id,
"is_moderator": event.participant.is_moderator,
"joined_at": event.timestamp,
"left_at": None,
"duration": None,
"events": ["joined"],
}
elif isinstance(event, ParticipantLeftEvent):
participant_id = event.participant.jid.split("/")[0]
if participant_id in participants:
participants[participant_id]["left_at"] = event.timestamp
participants[participant_id]["duration"] = (
event.participant.duration_seconds
)
participants[participant_id]["events"].append("left")
elif isinstance(event, SpeakerActiveEvent):
if event.speaker_jid not in speaker_stats:
speaker_stats[event.speaker_jid] = {
"total_time": 0,
"nick": event.speaker_nick,
}
speaker_stats[event.speaker_jid]["total_time"] += event.duration
elif isinstance(event, RecordingStoppedEvent):
room_info["recording_stopped_at"] = event.timestamp
room_info["meeting_url"] = event.meeting_url
except Exception:  # json.JSONDecodeError is already an Exception; skip bad lines
continue
return ParsedMetadata(
room=room_info,
participants=list(participants.values()),
speaker_stats=speaker_stats,
event_count=len(events),
)
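A minimal usage sketch for the parser above; the recording path is hypothetical, and the directory is expected to contain the `events.jsonl` file that `parse_events_file` reads.

```python
parser = JitsiEventParser()
metadata = parser.parse_events_file("/recordings/room-abc")  # path assumed

print(metadata["room"]["name"], metadata["event_count"])
for participant in metadata["participants"]:
    print(participant["nick"], participant["joined_at"], participant["left_at"])
for jid, stats in metadata["speaker_stats"].items():
    print(jid, stats["nick"], stats["total_time"])
```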

View File

@@ -0,0 +1,439 @@
"""
File-based processing pipeline
==============================
Optimized pipeline for processing complete audio/video files.
Uses parallel processing for transcription, diarization, and waveform generation.
"""
import asyncio
import uuid
from pathlib import Path
import av
import structlog
from celery import chain, shared_task
from reflector.asynctask import asynctask
from reflector.db.rooms import rooms_controller
from reflector.db.transcripts import (
SourceKind,
Transcript,
TranscriptStatus,
transcripts_controller,
)
from reflector.logger import logger
from reflector.pipelines.main_live_pipeline import (
PipelineMainBase,
broadcast_to_sockets,
task_cleanup_consent,
task_pipeline_post_to_zulip,
)
from reflector.processors import (
AudioFileWriterProcessor,
TranscriptFinalSummaryProcessor,
TranscriptFinalTitleProcessor,
TranscriptTopicDetectorProcessor,
)
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.processors.file_diarization import FileDiarizationInput
from reflector.processors.file_diarization_auto import FileDiarizationAutoProcessor
from reflector.processors.file_transcript import FileTranscriptInput
from reflector.processors.file_transcript_auto import FileTranscriptAutoProcessor
from reflector.processors.transcript_diarization_assembler import (
TranscriptDiarizationAssemblerInput,
TranscriptDiarizationAssemblerProcessor,
)
from reflector.processors.types import (
DiarizationSegment,
TitleSummary,
)
from reflector.processors.types import (
Transcript as TranscriptType,
)
from reflector.settings import settings
from reflector.storage import get_transcripts_storage
from reflector.worker.webhook import send_transcript_webhook
class EmptyPipeline:
"""Empty pipeline for processors that need a pipeline reference"""
def __init__(self, logger: structlog.BoundLogger):
self.logger = logger
def get_pref(self, k, d=None):
return d
async def emit(self, event):
pass
class PipelineMainFile(PipelineMainBase):
"""
Optimized file processing pipeline.
Processes complete audio/video files with parallel execution.
"""
logger: structlog.BoundLogger = None
empty_pipeline = None
def __init__(self, transcript_id: str):
super().__init__(transcript_id=transcript_id)
self.logger = logger.bind(transcript_id=self.transcript_id)
self.empty_pipeline = EmptyPipeline(logger=self.logger)
def _handle_gather_exceptions(self, results: list, operation: str) -> None:
"""Handle exceptions from asyncio.gather with return_exceptions=True"""
for i, result in enumerate(results):
if not isinstance(result, Exception):
continue
self.logger.error(
f"Error in {operation} (task {i}): {result}",
transcript_id=self.transcript_id,
exc_info=result,
)
@broadcast_to_sockets
async def set_status(self, transcript_id: str, status: TranscriptStatus):
async with self.lock_transaction():
return await transcripts_controller.set_status(transcript_id, status)
async def process(self, file_path: Path):
"""Main entry point for file processing"""
self.logger.info(f"Starting file pipeline for {file_path}")
transcript = await self.get_transcript()
# Clear transcript as we're going to regenerate everything
async with self.transaction():
await transcripts_controller.update(
transcript,
{
"events": [],
"topics": [],
},
)
# Extract audio and write to transcript location
audio_path = await self.extract_and_write_audio(file_path, transcript)
# Upload for processing
audio_url = await self.upload_audio(audio_path, transcript)
# Run parallel processing
await self.run_parallel_processing(
audio_path,
audio_url,
transcript.source_language,
transcript.target_language,
)
self.logger.info("File pipeline complete")
await transcripts_controller.set_status(transcript.id, "ended")
async def extract_and_write_audio(
self, file_path: Path, transcript: Transcript
) -> Path:
"""Extract audio from video if needed and write to transcript location as MP3"""
self.logger.info(f"Processing audio file: {file_path}")
# Check if it's already audio-only
container = av.open(str(file_path))
has_video = len(container.streams.video) > 0
container.close()
# Use AudioFileWriterProcessor to write MP3 to transcript location
mp3_writer = AudioFileWriterProcessor(
path=transcript.audio_mp3_filename,
on_duration=self.on_duration,
)
# Process audio frames and write to transcript location
input_container = av.open(str(file_path))
for frame in input_container.decode(audio=0):
await mp3_writer.push(frame)
await mp3_writer.flush()
input_container.close()
if has_video:
self.logger.info(
f"Extracted audio from video and saved to {transcript.audio_mp3_filename}"
)
else:
self.logger.info(
f"Converted audio file and saved to {transcript.audio_mp3_filename}"
)
return transcript.audio_mp3_filename
async def upload_audio(self, audio_path: Path, transcript: Transcript) -> str:
"""Upload audio to storage for processing"""
storage = get_transcripts_storage()
if not storage:
raise Exception(
"Storage backend required for file processing. Configure TRANSCRIPT_STORAGE_* settings."
)
self.logger.info("Uploading audio to storage")
with open(audio_path, "rb") as f:
audio_data = f.read()
storage_path = f"file_pipeline/{transcript.id}/audio.mp3"
await storage.put_file(storage_path, audio_data)
audio_url = await storage.get_file_url(storage_path)
self.logger.info(f"Audio uploaded to {audio_url}")
return audio_url
async def run_parallel_processing(
self,
audio_path: Path,
audio_url: str,
source_language: str,
target_language: str,
):
"""Coordinate parallel processing of transcription, diarization, and waveform"""
self.logger.info(
"Starting parallel processing", transcript_id=self.transcript_id
)
# Phase 1: Parallel processing of independent tasks
transcription_task = self.transcribe_file(audio_url, source_language)
diarization_task = self.diarize_file(audio_url)
waveform_task = self.generate_waveform(audio_path)
results = await asyncio.gather(
transcription_task, diarization_task, waveform_task, return_exceptions=True
)
transcript_result = results[0]
diarization_result = results[1]
# Handle errors - raise any exception that occurred
self._handle_gather_exceptions(results, "parallel processing")
for result in results:
if isinstance(result, Exception):
raise result
# Phase 2: Assemble transcript with diarization
self.logger.info(
"Assembling transcript with diarization", transcript_id=self.transcript_id
)
processor = TranscriptDiarizationAssemblerProcessor()
input_data = TranscriptDiarizationAssemblerInput(
transcript=transcript_result, diarization=diarization_result or []
)
# Store result for retrieval
diarized_transcript: TranscriptType | None = None
async def capture_result(transcript):
nonlocal diarized_transcript
diarized_transcript = transcript
processor.on(capture_result)
await processor.push(input_data)
await processor.flush()
if not diarized_transcript:
raise ValueError("No diarized transcript captured")
# Phase 3: Generate topics from diarized transcript
self.logger.info("Generating topics", transcript_id=self.transcript_id)
topics = await self.detect_topics(diarized_transcript, target_language)
# Phase 4: Generate title and summaries in parallel
self.logger.info(
"Generating title and summaries", transcript_id=self.transcript_id
)
results = await asyncio.gather(
self.generate_title(topics),
self.generate_summaries(topics),
return_exceptions=True,
)
self._handle_gather_exceptions(results, "title and summary generation")
async def transcribe_file(self, audio_url: str, language: str) -> TranscriptType:
"""Transcribe complete file"""
processor = FileTranscriptAutoProcessor()
input_data = FileTranscriptInput(audio_url=audio_url, language=language)
# Store result for retrieval
result: TranscriptType | None = None
async def capture_result(transcript):
nonlocal result
result = transcript
processor.on(capture_result)
await processor.push(input_data)
await processor.flush()
if not result:
raise ValueError("No transcript captured")
return result
async def diarize_file(self, audio_url: str) -> list[DiarizationSegment] | None:
"""Get diarization for file"""
if not settings.DIARIZATION_BACKEND:
self.logger.info("Diarization disabled")
return None
processor = FileDiarizationAutoProcessor()
input_data = FileDiarizationInput(audio_url=audio_url)
# Store result for retrieval
result = None
async def capture_result(diarization_output):
nonlocal result
result = diarization_output.diarization
try:
processor.on(capture_result)
await processor.push(input_data)
await processor.flush()
return result
except Exception as e:
self.logger.error(f"Diarization failed: {e}")
return None
async def generate_waveform(self, audio_path: Path):
"""Generate and save waveform"""
transcript = await self.get_transcript()
processor = AudioWaveformProcessor(
audio_path=audio_path,
waveform_path=transcript.audio_waveform_filename,
on_waveform=self.on_waveform,
)
processor.set_pipeline(self.empty_pipeline)
await processor.flush()
async def detect_topics(
self, transcript: TranscriptType, target_language: str
) -> list[TitleSummary]:
"""Detect topics from complete transcript"""
chunk_size = 300
topics: list[TitleSummary] = []
async def on_topic(topic: TitleSummary):
topics.append(topic)
return await self.on_topic(topic)
topic_detector = TranscriptTopicDetectorProcessor(callback=on_topic)
topic_detector.set_pipeline(self.empty_pipeline)
for i in range(0, len(transcript.words), chunk_size):
chunk_words = transcript.words[i : i + chunk_size]
if not chunk_words:
continue
chunk_transcript = TranscriptType(
words=chunk_words, translation=transcript.translation
)
await topic_detector.push(chunk_transcript)
await topic_detector.flush()
return topics
async def generate_title(self, topics: list[TitleSummary]):
"""Generate title from topics"""
if not topics:
self.logger.warning("No topics for title generation")
return
processor = TranscriptFinalTitleProcessor(callback=self.on_title)
processor.set_pipeline(self.empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
async def generate_summaries(self, topics: list[TitleSummary]):
"""Generate long and short summaries from topics"""
if not topics:
self.logger.warning("No topics for summary generation")
return
transcript = await self.get_transcript()
processor = TranscriptFinalSummaryProcessor(
transcript=transcript,
callback=self.on_long_summary,
on_short_summary=self.on_short_summary,
)
processor.set_pipeline(self.empty_pipeline)
for topic in topics:
await processor.push(topic)
await processor.flush()
@shared_task
@asynctask
async def task_send_webhook_if_needed(*, transcript_id: str):
"""Send webhook if this is a room recording with webhook configured"""
transcript = await transcripts_controller.get_by_id(transcript_id)
if not transcript:
return
if transcript.source_kind == SourceKind.ROOM and transcript.room_id:
room = await rooms_controller.get_by_id(transcript.room_id)
if room and room.webhook_url:
logger.info(
"Dispatching webhook",
transcript_id=transcript_id,
room_id=room.id,
webhook_url=room.webhook_url,
)
send_transcript_webhook.delay(
transcript_id, room.id, event_id=uuid.uuid4().hex
)
@shared_task
@asynctask
async def task_pipeline_file_process(*, transcript_id: str):
"""Celery task for file pipeline processing"""
transcript = await transcripts_controller.get_by_id(transcript_id)
if not transcript:
raise Exception(f"Transcript {transcript_id} not found")
pipeline = PipelineMainFile(transcript_id=transcript_id)
try:
await pipeline.set_status(transcript_id, "processing")
# Find the file to process
audio_file = next(transcript.data_path.glob("upload.*"), None)
if not audio_file:
audio_file = next(transcript.data_path.glob("audio.*"), None)
if not audio_file:
raise Exception("No audio file found to process")
await pipeline.process(audio_file)
except Exception:
await pipeline.set_status(transcript_id, "error")
raise
# Run post-processing chain: consent cleanup -> zulip -> webhook
post_chain = chain(
task_cleanup_consent.si(transcript_id=transcript_id),
task_pipeline_post_to_zulip.si(transcript_id=transcript_id),
task_send_webhook_if_needed.si(transcript_id=transcript_id),
)
post_chain.delay()
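A sketch of how application code might enqueue this pipeline; the module path is an assumption, and it presumes a running Celery worker plus an `upload.*` or `audio.*` file under the transcript's `data_path`.

```python
# Module path assumed for illustration.
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process

# Kicks off processing; the post-processing chain (consent cleanup,
# Zulip post, webhook) is dispatched by the task itself on success.
task_pipeline_file_process.delay(transcript_id="abc123")
```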

View File

@@ -14,12 +14,15 @@ It is directly linked to our data model.
import asyncio
import functools
from contextlib import asynccontextmanager
from typing import Generic
import av
import boto3
from celery import chord, current_task, group, shared_task
from pydantic import BaseModel
from structlog import BoundLogger as Logger
from reflector.asynctask import asynctask
from reflector.db.meetings import meeting_consent_controller, meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms_controller
@@ -29,16 +32,18 @@ from reflector.db.transcripts import (
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
TranscriptFinalTitle,
TranscriptStatus,
TranscriptText,
TranscriptTopic,
TranscriptWaveform,
transcripts_controller,
)
from reflector.logger import logger
from reflector.pipelines.runner import PipelineRunner
from reflector.pipelines.runner import PipelineMessage, PipelineRunner
from reflector.processors import (
AudioChunkerProcessor,
AudioChunkerAutoProcessor,
AudioDiarizationAutoProcessor,
AudioDownscaleProcessor,
AudioFileWriterProcessor,
AudioMergeProcessor,
AudioTranscriptAutoProcessor,
@@ -65,30 +70,6 @@ from reflector.zulip import (
)
def asynctask(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
async def run_with_db():
from reflector.db import database
await database.connect()
try:
return await f(*args, **kwargs)
finally:
await database.disconnect()
coro = run_with_db()
try:
loop = asyncio.get_running_loop()
except RuntimeError:
loop = None
if loop and loop.is_running():
return loop.run_until_complete(coro)
return asyncio.run(coro)
return wrapper
def broadcast_to_sockets(func):
"""
Decorator to broadcast transcript event to websockets
@@ -144,16 +125,19 @@ class StrValue(BaseModel):
value: str
class PipelineMainBase(PipelineRunner):
transcript_id: str
ws_room_id: str | None = None
ws_manager: WebsocketManager | None = None
def prepare(self):
# prepare websocket
class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]):
def __init__(self, transcript_id: str):
super().__init__()
self._lock = asyncio.Lock()
self.transcript_id = transcript_id
self.ws_room_id = f"ts:{self.transcript_id}"
self.ws_manager = get_ws_manager()
self._ws_manager = None
@property
def ws_manager(self) -> WebsocketManager:
if self._ws_manager is None:
self._ws_manager = get_ws_manager()
return self._ws_manager
async def get_transcript(self) -> Transcript:
# fetch the transcript
@@ -164,7 +148,11 @@ class PipelineMainBase(PipelineRunner):
raise Exception("Transcript not found")
return result
def get_transcript_topics(self, transcript: Transcript) -> list[TranscriptTopic]:
@staticmethod
def wrap_transcript_topics(
topics: list[TranscriptTopic],
) -> list[TitleSummaryWithIdProcessorType]:
# transform topics into the pipeline-supported format
return [
TitleSummaryWithIdProcessorType(
id=topic.id,
@@ -174,12 +162,19 @@ class PipelineMainBase(PipelineRunner):
duration=topic.duration,
transcript=TranscriptProcessorType(words=topic.words),
)
for topic in transcript.topics
for topic in topics
]
@asynccontextmanager
async def transaction(self):
async def lock_transaction(self):
# This lock prevents multiple processors from appending to the
# event array at the same time
async with self._lock:
yield
@asynccontextmanager
async def transaction(self):
async with self.lock_transaction():
async with transcripts_controller.transaction():
yield
@@ -188,14 +183,14 @@ class PipelineMainBase(PipelineRunner):
# if it's the first part, update the status of the transcript
# but do not set the ended status yet.
if isinstance(self, PipelineMainLive):
status_mapping = {
status_mapping: dict[str, TranscriptStatus] = {
"started": "recording",
"push": "recording",
"flush": "processing",
"error": "error",
}
elif isinstance(self, PipelineMainFinalSummaries):
status_mapping = {
status_mapping: dict[str, TranscriptStatus] = {
"push": "processing",
"flush": "processing",
"error": "error",
@@ -211,22 +206,8 @@ class PipelineMainBase(PipelineRunner):
return
# when the status of the pipeline changes, update the transcript
async with self.transaction():
transcript = await self.get_transcript()
if status == transcript.status:
return
resp = await transcripts_controller.append_event(
transcript=transcript,
event="STATUS",
data=StrValue(value=status),
)
await transcripts_controller.update(
transcript,
{
"status": status,
},
)
return resp
async with self._lock:
return await transcripts_controller.set_status(self.transcript_id, status)
@broadcast_to_sockets
async def on_transcript(self, data):
@@ -349,7 +330,6 @@ class PipelineMainLive(PipelineMainBase):
async def create(self) -> Pipeline:
# create a context for the whole rtc transaction
# add a customised logger to the context
self.prepare()
transcript = await self.get_transcript()
processors = [
@@ -357,7 +337,8 @@ class PipelineMainLive(PipelineMainBase):
path=transcript.audio_wav_filename,
on_duration=self.on_duration,
),
AudioChunkerProcessor(),
AudioDownscaleProcessor(),
AudioChunkerAutoProcessor(),
AudioMergeProcessor(),
AudioTranscriptAutoProcessor.as_threaded(),
TranscriptLinerProcessor(),
@@ -370,6 +351,7 @@ class PipelineMainLive(PipelineMainBase):
pipeline.set_pref("audio:target_language", transcript.target_language)
pipeline.logger.bind(transcript_id=transcript.id)
pipeline.logger.info("Pipeline main live created")
pipeline.describe()
return pipeline
@@ -380,7 +362,7 @@ class PipelineMainLive(PipelineMainBase):
pipeline_post(transcript_id=self.transcript_id)
class PipelineMainDiarization(PipelineMainBase):
class PipelineMainDiarization(PipelineMainBase[AudioDiarizationInput]):
"""
Diarize the audio and update topics
"""
@@ -388,7 +370,6 @@ class PipelineMainDiarization(PipelineMainBase):
async def create(self) -> Pipeline:
# create a context for the whole rtc transaction
# add a customised logger to the context
self.prepare()
pipeline = Pipeline(
AudioDiarizationAutoProcessor(callback=self.on_topic),
)
@@ -404,11 +385,10 @@ class PipelineMainDiarization(PipelineMainBase):
pipeline.logger.info("Audio is local, skipping diarization")
return
topics = self.get_transcript_topics(transcript)
audio_url = await transcript.get_audio_url()
audio_diarization_input = AudioDiarizationInput(
audio_url=audio_url,
topics=topics,
topics=self.wrap_transcript_topics(transcript.topics),
)
# as tempting as pipeline.push may be, prefer to go through the runner
@@ -421,7 +401,7 @@ class PipelineMainDiarization(PipelineMainBase):
return pipeline
class PipelineMainFromTopics(PipelineMainBase):
class PipelineMainFromTopics(PipelineMainBase[TitleSummaryWithIdProcessorType]):
"""
Pseudo class for generating a pipeline from topics
"""
@@ -430,8 +410,6 @@ class PipelineMainFromTopics(PipelineMainBase):
raise NotImplementedError
async def create(self) -> Pipeline:
self.prepare()
# get transcript
self._transcript = transcript = await self.get_transcript()
@@ -443,7 +421,7 @@ class PipelineMainFromTopics(PipelineMainBase):
pipeline.logger.info(f"{self.__class__.__name__} pipeline created")
# push topics
topics = self.get_transcript_topics(transcript)
topics = PipelineMainBase.wrap_transcript_topics(transcript.topics)
for topic in topics:
await self.push(topic)
@@ -524,8 +502,6 @@ async def pipeline_convert_to_mp3(transcript: Transcript, logger: Logger):
# Convert to mp3
mp3_filename = transcript.audio_mp3_filename
import av
with av.open(wav_filename.as_posix()) as in_container:
in_stream = in_container.streams.audio[0]
with av.open(mp3_filename.as_posix(), "w") as out_container:
@@ -604,7 +580,7 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
meeting.id
)
except Exception as e:
logger.error(f"Failed to get fetch consent: {e}")
logger.error(f"Failed to get fetch consent: {e}", exc_info=e)
consent_denied = True
if not consent_denied:
@@ -627,7 +603,7 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
f"Deleted original Whereby recording: {recording.bucket_name}/{recording.object_key}"
)
except Exception as e:
logger.error(f"Failed to delete Whereby recording: {e}")
logger.error(f"Failed to delete Whereby recording: {e}", exc_info=e)
# non-transactional, files marked for deletion not actually deleted is possible
await transcripts_controller.update(transcript, {"audio_deleted": True})
@@ -640,7 +616,7 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
f"Deleted processed audio from storage: {transcript.storage_audio_path}"
)
except Exception as e:
logger.error(f"Failed to delete processed audio: {e}")
logger.error(f"Failed to delete processed audio: {e}", exc_info=e)
# 3. Delete local audio files
try:
@@ -649,7 +625,7 @@ async def cleanup_consent(transcript: Transcript, logger: Logger):
if hasattr(transcript, "audio_wav_filename") and transcript.audio_wav_filename:
transcript.audio_wav_filename.unlink(missing_ok=True)
except Exception as e:
logger.error(f"Failed to delete local audio files: {e}")
logger.error(f"Failed to delete local audio files: {e}", exc_info=e)
logger.info("Consent cleanup done")
@@ -789,13 +765,11 @@ def pipeline_post(*, transcript_id: str):
chain_final_summaries,
) | task_pipeline_post_to_zulip.si(transcript_id=transcript_id)
chain.delay()
return chain.delay()
@get_transcript
async def pipeline_process(transcript: Transcript, logger: Logger):
import av
try:
if transcript.audio_location == "storage":
await transcripts_controller.download_mp3_from_storage(transcript)

View File

@@ -16,21 +16,16 @@ During its lifecycle, it will emit the following status:
"""
import asyncio
from pydantic import BaseModel, ConfigDict
from typing import Generic, TypeVar
from reflector.logger import logger
from reflector.processors import Pipeline
PipelineMessage = TypeVar("PipelineMessage")
class PipelineRunner(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
status: str = "idle"
pipeline: Pipeline | None = None
def __init__(self, **kwargs):
super().__init__(**kwargs)
class PipelineRunner(Generic[PipelineMessage]):
def __init__(self):
self._task = None
self._q_cmd = asyncio.Queue(maxsize=4096)
self._ev_done = asyncio.Event()
@@ -39,6 +34,8 @@ class PipelineRunner(BaseModel):
runner=id(self),
runner_cls=self.__class__.__name__,
)
self.status = "idle"
self.pipeline: Pipeline | None = None
async def create(self) -> Pipeline:
"""
@@ -67,7 +64,7 @@ class PipelineRunner(BaseModel):
coro = self.run()
asyncio.run(coro)
async def push(self, data):
async def push(self, data: PipelineMessage):
"""
Push data to the pipeline
"""
@@ -92,7 +89,11 @@ class PipelineRunner(BaseModel):
pass
async def _add_cmd(
self, cmd: str, data, max_retries: int = 3, retry_time_limit: int = 3
self,
cmd: str,
data: PipelineMessage,
max_retries: int = 3,
retry_time_limit: int = 3,
):
"""
Enqueue a command to be executed in the runner.
@@ -143,6 +144,9 @@ class PipelineRunner(BaseModel):
cmd, data = await self._q_cmd.get()
func = getattr(self, f"cmd_{cmd.lower()}")
if func:
if cmd.upper() == "FLUSH":
await func()
else:
await func(data)
else:
raise Exception(f"Unknown command {cmd}")
@@ -152,13 +156,13 @@ class PipelineRunner(BaseModel):
self._ev_done.set()
raise
async def cmd_push(self, data):
async def cmd_push(self, data: PipelineMessage):
if self._is_first_push:
await self._set_status("push")
self._is_first_push = False
await self.pipeline.push(data)
async def cmd_flush(self, data):
async def cmd_flush(self):
await self._set_status("flush")
await self.pipeline.flush()
await self._set_status("ended")
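A sketch of what the `Generic[PipelineMessage]` refactor buys: subclasses pin the payload type that `push` and `cmd_push` accept. `UpperProcessor` and `UpperRunner` are hypothetical, built only from the `Processor` and `Pipeline` APIs visible in this diff.

```python
from reflector.pipelines.runner import PipelineRunner
from reflector.processors import Pipeline
from reflector.processors.base import Processor

class UpperProcessor(Processor):
    INPUT_TYPE = str
    OUTPUT_TYPE = str

    async def _push(self, data: str):
        await self.emit(data.upper())

class UpperRunner(PipelineRunner[str]):
    async def create(self) -> Pipeline:
        # The runner builds its pipeline lazily; push()/flush() then
        # enqueue typed commands that the run loop executes.
        return Pipeline(UpperProcessor())
```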

View File

@@ -1,5 +1,7 @@
from .audio_chunker import AudioChunkerProcessor # noqa: F401
from .audio_chunker_auto import AudioChunkerAutoProcessor # noqa: F401
from .audio_diarization_auto import AudioDiarizationAutoProcessor # noqa: F401
from .audio_downscale import AudioDownscaleProcessor # noqa: F401
from .audio_file_writer import AudioFileWriterProcessor # noqa: F401
from .audio_merge import AudioMergeProcessor # noqa: F401
from .audio_transcript import AudioTranscriptProcessor # noqa: F401
@@ -11,6 +13,13 @@ from .base import ( # noqa: F401
Processor,
ThreadedProcessor,
)
from .file_diarization import FileDiarizationProcessor # noqa: F401
from .file_diarization_auto import FileDiarizationAutoProcessor # noqa: F401
from .file_transcript import FileTranscriptProcessor # noqa: F401
from .file_transcript_auto import FileTranscriptAutoProcessor # noqa: F401
from .transcript_diarization_assembler import (
TranscriptDiarizationAssemblerProcessor, # noqa: F401
)
from .transcript_final_summary import TranscriptFinalSummaryProcessor # noqa: F401
from .transcript_final_title import TranscriptFinalTitleProcessor # noqa: F401
from .transcript_liner import TranscriptLinerProcessor # noqa: F401

View File

@@ -1,28 +1,78 @@
from typing import Optional
import av
from prometheus_client import Counter, Histogram
from reflector.processors.base import Processor
class AudioChunkerProcessor(Processor):
"""
Assemble audio frames into chunks
Base class for assembling audio frames into chunks
"""
INPUT_TYPE = av.AudioFrame
OUTPUT_TYPE = list[av.AudioFrame]
def __init__(self, max_frames=256):
super().__init__()
m_chunk = Histogram(
"audio_chunker",
"Time spent in AudioChunker.chunk",
["backend"],
)
m_chunk_call = Counter(
"audio_chunker_call",
"Number of calls to AudioChunker.chunk",
["backend"],
)
m_chunk_success = Counter(
"audio_chunker_success",
"Number of successful calls to AudioChunker.chunk",
["backend"],
)
m_chunk_failure = Counter(
"audio_chunker_failure",
"Number of failed calls to AudioChunker.chunk",
["backend"],
)
def __init__(self, *args, **kwargs):
name = self.__class__.__name__
self.m_chunk = self.m_chunk.labels(name)
self.m_chunk_call = self.m_chunk_call.labels(name)
self.m_chunk_success = self.m_chunk_success.labels(name)
self.m_chunk_failure = self.m_chunk_failure.labels(name)
super().__init__(*args, **kwargs)
self.frames: list[av.AudioFrame] = []
self.max_frames = max_frames
async def _push(self, data: av.AudioFrame):
self.frames.append(data)
if len(self.frames) >= self.max_frames:
await self.flush()
"""Process incoming audio frame"""
# Validate audio format on first frame
if len(self.frames) == 0:
if data.sample_rate != 16000 or len(data.layout.channels) != 1:
raise ValueError(
f"AudioChunkerProcessor expects 16kHz mono audio, got {data.sample_rate}Hz "
f"with {len(data.layout.channels)} channel(s). "
f"Use AudioDownscaleProcessor before this processor."
)
try:
self.m_chunk_call.inc()
with self.m_chunk.time():
result = await self._chunk(data)
self.m_chunk_success.inc()
if result:
await self.emit(result)
except Exception:
self.m_chunk_failure.inc()
raise
async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
"""
Process audio frame and return chunk when ready.
Subclasses should implement their chunking logic here.
"""
raise NotImplementedError
async def _flush(self):
frames = self.frames[:]
self.frames = []
if frames:
await self.emit(frames)
"""Flush any remaining frames when processing ends"""
raise NotImplementedError

View File

@@ -0,0 +1,32 @@
import importlib
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.settings import settings
class AudioChunkerAutoProcessor(AudioChunkerProcessor):
_registry = {}
@classmethod
def register(cls, name, kclass):
cls._registry[name] = kclass
def __new__(cls, name: str | None = None, **kwargs):
if name is None:
name = settings.AUDIO_CHUNKER_BACKEND
if name not in cls._registry:
module_name = f"reflector.processors.audio_chunker_{name}"
importlib.import_module(module_name)
# gather specific configuration for the processor
# search `AUDIO_CHUNKER_BACKEND_XXX_YYY`, push to constructor as `backend_xxx_yyy`
config = {}
name_upper = name.upper()
settings_prefix = "AUDIO_CHUNKER_"
config_prefix = f"{settings_prefix}{name_upper}_"
for key, value in settings:
if key.startswith(config_prefix):
config_name = key[len(settings_prefix) :].lower()
config[config_name] = value
return cls._registry[name](**config | kwargs)
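A sketch of the selection convention implemented above; the `silero` settings names follow the documented `AUDIO_CHUNKER_<BACKEND>_<OPTION>` pattern but are assumptions here.

```python
from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor

# Explicit backend, overriding settings.AUDIO_CHUNKER_BACKEND:
chunker = AudioChunkerAutoProcessor(name="frames", max_frames=128)

# With AUDIO_CHUNKER_BACKEND=silero and AUDIO_CHUNKER_SILERO_MAX_FRAMES=1024
# in settings, the loop above passes silero_max_frames=1024 to the
# registered backend's constructor.
chunker = AudioChunkerAutoProcessor()
```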

View File

@@ -0,0 +1,34 @@
from typing import Optional
import av
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor
class AudioChunkerFramesProcessor(AudioChunkerProcessor):
"""
Simple frame-based audio chunker that emits chunks after a fixed number of frames
"""
def __init__(self, max_frames=256, **kwargs):
super().__init__(**kwargs)
self.max_frames = max_frames
async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
self.frames.append(data)
if len(self.frames) >= self.max_frames:
frames_to_emit = self.frames[:]
self.frames = []
return frames_to_emit
return None
async def _flush(self):
frames = self.frames[:]
self.frames = []
if frames:
await self.emit(frames)
AudioChunkerAutoProcessor.register("frames", AudioChunkerFramesProcessor)

View File

@@ -0,0 +1,298 @@
from typing import Optional
import av
import numpy as np
import torch
from silero_vad import VADIterator, load_silero_vad
from reflector.processors.audio_chunker import AudioChunkerProcessor
from reflector.processors.audio_chunker_auto import AudioChunkerAutoProcessor
class AudioChunkerSileroProcessor(AudioChunkerProcessor):
"""
Assemble audio frames into chunks with VAD-based speech detection using Silero VAD
"""
def __init__(
self,
block_frames=256,
max_frames=1024,
use_onnx=True,
min_frames=2,
**kwargs,
):
super().__init__(**kwargs)
self.block_frames = block_frames
self.max_frames = max_frames
self.min_frames = min_frames
# Initialize Silero VAD
self._init_vad(use_onnx)
def _init_vad(self, use_onnx=False):
"""Initialize Silero VAD model"""
try:
torch.set_num_threads(1)
self.vad_model = load_silero_vad(onnx=use_onnx)
self.vad_iterator = VADIterator(self.vad_model, sampling_rate=16000)
self.logger.info("Silero VAD initialized successfully")
except Exception as e:
self.logger.error(f"Failed to initialize Silero VAD: {e}")
self.vad_model = None
self.vad_iterator = None
async def _chunk(self, data: av.AudioFrame) -> Optional[list[av.AudioFrame]]:
"""Process audio frame and return chunk when ready"""
self.frames.append(data)
# Check for speech segments every 32 frames (~1 second)
if len(self.frames) >= 32 and len(self.frames) % 32 == 0:
return await self._process_block()
# Safety fallback - emit if we hit max frames
elif len(self.frames) >= self.max_frames:
self.logger.warning(
f"AudioChunkerSileroProcessor: Reached max frames ({self.max_frames}), "
f"emitting first {self.max_frames // 2} frames"
)
frames_to_emit = self.frames[: self.max_frames // 2]
self.frames = self.frames[self.max_frames // 2 :]
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring fallback segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
async def _process_block(self) -> Optional[list[av.AudioFrame]]:
# Need at least 32 frames for VAD detection (~1 second)
if len(self.frames) < 32 or self.vad_iterator is None:
return None
self.logger.debug(f"Processing block: {len(self.frames)} frames in buffer")
try:
# Convert frames to numpy array for VAD
audio_array = self._frames_to_numpy(self.frames)
if audio_array is None:
# Fallback: emit all frames if conversion failed
frames_to_emit = self.frames[:]
self.frames = []
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring conversion-failed segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
# Find complete speech segments in the buffer
speech_end_frame = self._find_speech_segment_end(audio_array)
if speech_end_frame is None or speech_end_frame <= 0:
# No speech found but buffer is getting large
if len(self.frames) > 512:
# No speech segment found; check whether old frames can be discarded
# Could emit silence or discard old frames here
# For now, keep the most recent 256 frames and discard older silence
if len(self.frames) > 768:
self.logger.debug(
f"Discarding {len(self.frames) - 256} old frames (likely silence)"
)
self.frames = self.frames[-256:]
return None
# Calculate segment timing information
frames_to_emit = self.frames[:speech_end_frame]
# Get timing from av.AudioFrame
if frames_to_emit:
first_frame = frames_to_emit[0]
last_frame = frames_to_emit[-1]
sample_rate = first_frame.sample_rate
# Calculate duration
total_samples = sum(f.samples for f in frames_to_emit)
duration_seconds = total_samples / sample_rate if sample_rate > 0 else 0
# Get timestamps if available
start_time = (
first_frame.pts * first_frame.time_base if first_frame.pts else 0
)
end_time = (
last_frame.pts * last_frame.time_base if last_frame.pts else 0
)
# Convert to HH:MM:SS format for logging
def format_time(seconds):
if not seconds:
return "00:00:00"
total_seconds = int(float(seconds))
hours = total_seconds // 3600
minutes = (total_seconds % 3600) // 60
secs = total_seconds % 60
return f"{hours:02d}:{minutes:02d}:{secs:02d}"
start_formatted = format_time(start_time)
end_formatted = format_time(end_time)
# Keep remaining frames for next processing
remaining_after = len(self.frames) - speech_end_frame
# Single structured log line
self.logger.info(
"Speech segment found",
start=start_formatted,
end=end_formatted,
frames=speech_end_frame,
duration=round(duration_seconds, 2),
buffer_before=len(self.frames),
remaining=remaining_after,
)
# Keep remaining frames for next processing
self.frames = self.frames[speech_end_frame:]
# Filter out segments with too few frames
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
except Exception as e:
self.logger.error(f"Error in VAD processing: {e}")
# Fallback to simple chunking
if len(self.frames) >= self.block_frames:
frames_to_emit = self.frames[: self.block_frames]
self.frames = self.frames[self.block_frames :]
if len(frames_to_emit) >= self.min_frames:
return frames_to_emit
else:
self.logger.debug(
f"Ignoring exception-fallback segment with {len(frames_to_emit)} frames "
f"(< {self.min_frames} minimum)"
)
return None
def _frames_to_numpy(self, frames: list[av.AudioFrame]) -> Optional[np.ndarray]:
"""Convert av.AudioFrame list to numpy array for VAD processing"""
if not frames:
return None
try:
audio_data = []
for frame in frames:
frame_array = frame.to_ndarray()
if len(frame_array.shape) == 2:
frame_array = frame_array.flatten()
audio_data.append(frame_array)
if not audio_data:
return None
combined_audio = np.concatenate(audio_data)
# Ensure float32 format
if combined_audio.dtype == np.int16:
# Normalize int16 audio to float32 in range [-1.0, 1.0]
combined_audio = combined_audio.astype(np.float32) / 32768.0
elif combined_audio.dtype != np.float32:
combined_audio = combined_audio.astype(np.float32)
return combined_audio
except Exception as e:
self.logger.error(f"Error converting frames to numpy: {e}")
return None
def _find_speech_segment_end(self, audio_array: np.ndarray) -> Optional[int]:
"""Find complete speech segments and return frame index at segment end"""
if self.vad_iterator is None or len(audio_array) == 0:
return None
try:
# Process audio in 512-sample windows for VAD
window_size = 512
min_silence_windows = 3 # Require 3 windows of silence after speech
# Track speech state
in_speech = False
speech_start = None
speech_end = None
silence_count = 0
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(chunk, (0, window_size - len(chunk)))
# Detect if this window has speech
speech_dict = self.vad_iterator(chunk, return_seconds=True)
# VADIterator returns dict with 'start' and 'end' when speech segments are detected
if speech_dict:
if not in_speech:
# Speech started
speech_start = i
in_speech = True
# Debug: print(f"Speech START at sample {i}, VAD: {speech_dict}")
silence_count = 0 # Reset silence counter
continue
if not in_speech:
continue
# We're in speech but found silence
silence_count += 1
if silence_count < min_silence_windows:
continue
# Found end of speech segment
speech_end = i - (min_silence_windows - 1) * window_size
# Debug: print(f"Speech END at sample {speech_end}")
# Convert sample position to frame index
samples_per_frame = self.frames[0].samples if self.frames else 1024
frame_index = speech_end // samples_per_frame
# Ensure we don't exceed buffer
frame_index = min(frame_index, len(self.frames))
return frame_index
return None
except Exception as e:
self.logger.error(f"Error finding speech segment: {e}")
return None
async def _flush(self):
frames = self.frames[:]
self.frames = []
if frames:
if len(frames) >= self.min_frames:
await self.emit(frames)
else:
self.logger.debug(
f"Ignoring flush segment with {len(frames)} frames "
f"(< {self.min_frames} minimum)"
)
AudioChunkerAutoProcessor.register("silero", AudioChunkerSileroProcessor)

View File

@@ -1,5 +1,10 @@
from reflector.processors.base import Processor
from reflector.processors.types import AudioDiarizationInput, TitleSummary, Word
from reflector.processors.types import (
AudioDiarizationInput,
DiarizationSegment,
TitleSummary,
Word,
)
class AudioDiarizationProcessor(Processor):
@@ -33,18 +38,21 @@ class AudioDiarizationProcessor(Processor):
async def _diarize(self, data: AudioDiarizationInput):
raise NotImplementedError
def assign_speaker(self, words: list[Word], diarization: list[dict]):
self._diarization_remove_overlap(diarization)
self._diarization_remove_segment_without_words(words, diarization)
self._diarization_merge_same_speaker(words, diarization)
self._diarization_assign_speaker(words, diarization)
@classmethod
def assign_speaker(cls, words: list[Word], diarization: list[DiarizationSegment]):
cls._diarization_remove_overlap(diarization)
cls._diarization_remove_segment_without_words(words, diarization)
cls._diarization_merge_same_speaker(diarization)
cls._diarization_assign_speaker(words, diarization)
def iter_words_from_topics(self, topics: TitleSummary):
@staticmethod
def iter_words_from_topics(topics: list[TitleSummary]):
for topic in topics:
for word in topic.transcript.words:
yield word
def is_word_continuation(self, word_prev, word):
@staticmethod
def is_word_continuation(word_prev, word):
"""
Return True if the word is a continuation of the previous word
by checking if the previous word is ending with a punctuation
@@ -57,7 +65,8 @@ class AudioDiarizationProcessor(Processor):
return False
return True
def _diarization_remove_overlap(self, diarization: list[dict]):
@staticmethod
def _diarization_remove_overlap(diarization: list[DiarizationSegment]):
"""
Remove overlap in diarization results
@@ -82,8 +91,9 @@ class AudioDiarizationProcessor(Processor):
else:
diarization_idx += 1
@staticmethod
def _diarization_remove_segment_without_words(
self, words: list[Word], diarization: list[dict]
words: list[Word], diarization: list[DiarizationSegment]
):
"""
Remove diarization segments without words
@@ -112,9 +122,8 @@ class AudioDiarizationProcessor(Processor):
else:
diarization_idx += 1
def _diarization_merge_same_speaker(
self, words: list[Word], diarization: list[dict]
):
@staticmethod
def _diarization_merge_same_speaker(diarization: list[DiarizationSegment]):
"""
Merge contiguous diarization segments with the same speaker
@@ -131,7 +140,10 @@ class AudioDiarizationProcessor(Processor):
else:
diarization_idx += 1
def _diarization_assign_speaker(self, words: list[Word], diarization: list[dict]):
@classmethod
def _diarization_assign_speaker(
cls, words: list[Word], diarization: list[DiarizationSegment]
):
"""
Assign speaker to words based on diarization
@@ -139,7 +151,7 @@ class AudioDiarizationProcessor(Processor):
"""
word_idx = 0
last_speaker = None
last_speaker = 0
for d in diarization:
start = d["start"]
end = d["end"]
@@ -154,7 +166,7 @@ class AudioDiarizationProcessor(Processor):
# If it's a continuation, assign with the last speaker
is_continuation = False
if word_idx > 0 and word_idx < len(words) - 1:
is_continuation = self.is_word_continuation(
is_continuation = cls.is_word_continuation(
*words[word_idx - 1 : word_idx + 1]
)
if is_continuation:

View File

@@ -0,0 +1,74 @@
import os
import torch
import torchaudio
from pyannote.audio import Pipeline
from reflector.processors.audio_diarization import AudioDiarizationProcessor
from reflector.processors.audio_diarization_auto import AudioDiarizationAutoProcessor
from reflector.processors.types import AudioDiarizationInput, DiarizationSegment
class AudioDiarizationPyannoteProcessor(AudioDiarizationProcessor):
"""Local diarization processor using pyannote.audio library"""
def __init__(
self,
model_name: str = "pyannote/speaker-diarization-3.1",
pyannote_auth_token: str | None = None,
device: str | None = None,
**kwargs,
):
super().__init__(**kwargs)
self.model_name = model_name
self.auth_token = pyannote_auth_token or os.environ.get("HF_TOKEN")
self.device = device
if device is None:
self.device = "cuda" if torch.cuda.is_available() else "cpu"
self.logger.info(f"Loading pyannote diarization model: {self.model_name}")
self.diarization_pipeline = Pipeline.from_pretrained(
self.model_name, use_auth_token=self.auth_token
)
self.diarization_pipeline.to(torch.device(self.device))
self.logger.info(f"Diarization model loaded on device: {self.device}")
async def _diarize(self, data: AudioDiarizationInput) -> list[DiarizationSegment]:
try:
# Load audio file (audio_url is assumed to be a local file path)
self.logger.info(f"Loading local audio file: {data.audio_url}")
waveform, sample_rate = torchaudio.load(data.audio_url)
audio_input = {"waveform": waveform, "sample_rate": sample_rate}
self.logger.info("Running speaker diarization")
diarization = self.diarization_pipeline(audio_input)
# Convert pyannote diarization output to our format
segments = []
for segment, _, speaker in diarization.itertracks(yield_label=True):
# Extract speaker number from label (e.g., "SPEAKER_00" -> 0)
speaker_id = 0
if speaker.startswith("SPEAKER_"):
try:
speaker_id = int(speaker.split("_")[-1])
except (ValueError, IndexError):
# Fallback to hash-based ID if parsing fails
speaker_id = hash(speaker) % 1000
segments.append(
{
"start": round(segment.start, 3),
"end": round(segment.end, 3),
"speaker": speaker_id,
}
)
self.logger.info(f"Diarization completed with {len(segments)} segments")
return segments
except Exception as e:
self.logger.exception(f"Diarization failed: {e}")
raise
AudioDiarizationAutoProcessor.register("pyannote", AudioDiarizationPyannoteProcessor)

View File

@@ -0,0 +1,60 @@
from typing import Optional
import av
from av.audio.resampler import AudioResampler
from reflector.processors.base import Processor
def copy_frame(frame: av.AudioFrame) -> av.AudioFrame:
frame_copy = frame.from_ndarray(
frame.to_ndarray(),
format=frame.format.name,
layout=frame.layout.name,
)
frame_copy.sample_rate = frame.sample_rate
frame_copy.pts = frame.pts
frame_copy.time_base = frame.time_base
return frame_copy
class AudioDownscaleProcessor(Processor):
"""
Downscale audio frames to 16kHz mono format
"""
INPUT_TYPE = av.AudioFrame
OUTPUT_TYPE = av.AudioFrame
def __init__(self, target_rate: int = 16000, target_layout: str = "mono", **kwargs):
super().__init__(**kwargs)
self.target_rate = target_rate
self.target_layout = target_layout
self.resampler: Optional[AudioResampler] = None
self.needs_resampling: Optional[bool] = None
async def _push(self, data: av.AudioFrame):
if self.needs_resampling is None:
self.needs_resampling = (
data.sample_rate != self.target_rate
or data.layout.name != self.target_layout
)
if self.needs_resampling:
self.resampler = AudioResampler(
format="s16", layout=self.target_layout, rate=self.target_rate
)
if not self.needs_resampling or not self.resampler:
await self.emit(data)
return
resampled_frames = self.resampler.resample(copy_frame(data))
for resampled_frame in resampled_frames:
await self.emit(resampled_frame)
async def _flush(self):
if self.needs_resampling and self.resampler:
final_frames = self.resampler.resample(None)
for frame in final_frames:
await self.emit(frame)
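A hedged sketch of where the downscale step sits, mirroring the `PipelineMainLive` change earlier in this diff: frames are normalized to 16kHz mono before chunking, since `AudioChunkerProcessor` now rejects anything else.

```python
from reflector.processors import (
    AudioChunkerAutoProcessor,
    AudioDownscaleProcessor,
    Pipeline,
)

# e.g. 48kHz stereo input -> 16kHz mono frames -> chunked frame lists
pipeline = Pipeline(
    AudioDownscaleProcessor(),
    AudioChunkerAutoProcessor(),
)
```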

View File

@@ -16,37 +16,46 @@ class AudioMergeProcessor(Processor):
INPUT_TYPE = list[av.AudioFrame]
OUTPUT_TYPE = AudioFile
def __init__(self, **kwargs):
super().__init__(**kwargs)
async def _push(self, data: list[av.AudioFrame]):
if not data:
return
# get audio information from first frame
frame = data[0]
channels = len(frame.layout.channels)
sample_rate = frame.sample_rate
sample_width = frame.format.bytes
output_channels = len(frame.layout.channels)
output_sample_rate = frame.sample_rate
output_sample_width = frame.format.bytes
# create audio file
uu = uuid4().hex
fd = io.BytesIO()
# Use PyAV to write frames
out_container = av.open(fd, "w", format="wav")
out_stream = out_container.add_stream("pcm_s16le", rate=sample_rate)
out_stream = out_container.add_stream("pcm_s16le", rate=output_sample_rate)
out_stream.layout = frame.layout.name
for frame in data:
for packet in out_stream.encode(frame):
out_container.mux(packet)
# Flush the encoder
for packet in out_stream.encode(None):
out_container.mux(packet)
out_container.close()
fd.seek(0)
# emit audio file
audiofile = AudioFile(
name=f"{monotonic_ns()}-{uu}.wav",
fd=fd,
sample_rate=sample_rate,
channels=channels,
sample_width=sample_width,
sample_rate=output_sample_rate,
channels=output_channels,
sample_width=output_sample_width,
timestamp=data[0].pts * data[0].time_base,
)

View File

@@ -21,7 +21,11 @@ from reflector.settings import settings
class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
def __init__(self, modal_api_key: str | None = None, **kwargs):
def __init__(
self,
modal_api_key: str | None = None,
**kwargs,
):
super().__init__()
if not settings.TRANSCRIPT_URL:
raise Exception(

View File

@@ -173,6 +173,7 @@ class Processor(Emitter):
except Exception:
self.m_processor_failure.inc()
self.logger.exception("Error in push")
raise
async def flush(self):
"""
@@ -240,14 +241,15 @@ class ThreadedProcessor(Processor):
self.INPUT_TYPE = processor.INPUT_TYPE
self.OUTPUT_TYPE = processor.OUTPUT_TYPE
self.executor = ThreadPoolExecutor(max_workers=max_workers)
self.queue = asyncio.Queue()
self.task = asyncio.get_running_loop().create_task(self.loop())
self.queue = asyncio.Queue(maxsize=50)
self.task: asyncio.Task | None = None
def set_pipeline(self, pipeline: "Pipeline"):
super().set_pipeline(pipeline)
self.processor.set_pipeline(pipeline)
async def loop(self):
try:
while True:
data = await self.queue.get()
self.m_processor_queue.set(self.queue.qsize())
@@ -265,8 +267,19 @@ class ThreadedProcessor(Processor):
)
finally:
self.queue.task_done()
except Exception as e:
logger.error(f"Crash in {self.__class__.__name__}: {e}", exc_info=e)
async def _ensure_task(self):
if self.task is None:
self.task = asyncio.get_running_loop().create_task(self.loop())
# XXX Without yielding here, the pipeline code that ran before this
# thread keeps executing and the loop task never gets a chance to start.
await asyncio.sleep(0)
async def _push(self, data):
await self._ensure_task()
await self.queue.put(data)
async def _flush(self):

View File

@@ -0,0 +1,33 @@
from pydantic import BaseModel
from reflector.processors.base import Processor
from reflector.processors.types import DiarizationSegment
class FileDiarizationInput(BaseModel):
"""Input for file diarization containing audio URL"""
audio_url: str
class FileDiarizationOutput(BaseModel):
"""Output for file diarization containing speaker segments"""
diarization: list[DiarizationSegment]
class FileDiarizationProcessor(Processor):
"""
Diarize complete audio files from URL
"""
INPUT_TYPE = FileDiarizationInput
OUTPUT_TYPE = FileDiarizationOutput
async def _push(self, data: FileDiarizationInput):
result = await self._diarize(data)
if result:
await self.emit(result)
async def _diarize(self, data: FileDiarizationInput):
raise NotImplementedError

View File

@@ -0,0 +1,33 @@
import importlib
from reflector.processors.file_diarization import FileDiarizationProcessor
from reflector.settings import settings
class FileDiarizationAutoProcessor(FileDiarizationProcessor):
_registry = {}
@classmethod
def register(cls, name, kclass):
cls._registry[name] = kclass
def __new__(cls, name: str | None = None, **kwargs):
if name is None:
name = settings.DIARIZATION_BACKEND
if name not in cls._registry:
module_name = f"reflector.processors.file_diarization_{name}"
importlib.import_module(module_name)
# gather specific configuration for the processor
# search `DIARIZATION_BACKEND_XXX_YYY`, push to constructor as `backend_xxx_yyy`
config = {}
name_upper = name.upper()
settings_prefix = "DIARIZATION_"
config_prefix = f"{settings_prefix}{name_upper}_"
for key, value in settings:
if key.startswith(config_prefix):
config_name = key[len(settings_prefix) :].lower()
config[config_name] = value
return cls._registry[name](**config | kwargs)

View File

@@ -0,0 +1,58 @@
"""
File diarization implementation using the GPU service from modal.com
API will be a POST request to DIARIZATION_URL:
```
POST /diarize?audio_file_url=...&timestamp=0
Authorization: Bearer <modal_api_key>
```
"""
import httpx
from reflector.processors.file_diarization import (
FileDiarizationInput,
FileDiarizationOutput,
FileDiarizationProcessor,
)
from reflector.processors.file_diarization_auto import FileDiarizationAutoProcessor
from reflector.settings import settings
class FileDiarizationModalProcessor(FileDiarizationProcessor):
def __init__(self, modal_api_key: str | None = None, **kwargs):
super().__init__(**kwargs)
if not settings.DIARIZATION_URL:
raise Exception(
"DIARIZATION_URL required to use FileDiarizationModalProcessor"
)
self.diarization_url = settings.DIARIZATION_URL + "/diarize"
self.file_timeout = settings.DIARIZATION_FILE_TIMEOUT
self.modal_api_key = modal_api_key
async def _diarize(self, data: FileDiarizationInput):
"""Get speaker diarization for file"""
self.logger.info(f"Starting diarization from {data.audio_url}")
headers = {}
if self.modal_api_key:
headers["Authorization"] = f"Bearer {self.modal_api_key}"
async with httpx.AsyncClient(timeout=self.file_timeout) as client:
response = await client.post(
self.diarization_url,
headers=headers,
params={
"audio_file_url": data.audio_url,
"timestamp": 0,
},
follow_redirects=True,
)
response.raise_for_status()
diarization_data = response.json()["diarization"]
return FileDiarizationOutput(diarization=diarization_data)
FileDiarizationAutoProcessor.register("modal", FileDiarizationModalProcessor)

View File

@@ -0,0 +1,65 @@
from prometheus_client import Counter, Histogram
from reflector.processors.base import Processor
from reflector.processors.types import Transcript
class FileTranscriptInput:
"""Input for file transcription containing audio URL and language settings"""
def __init__(self, audio_url: str, language: str = "en"):
self.audio_url = audio_url
self.language = language
class FileTranscriptProcessor(Processor):
"""
Transcript complete audio files from URL
"""
INPUT_TYPE = FileTranscriptInput
OUTPUT_TYPE = Transcript
m_transcript = Histogram(
"file_transcript",
"Time spent in FileTranscript.transcript",
["backend"],
)
m_transcript_call = Counter(
"file_transcript_call",
"Number of calls to FileTranscript.transcript",
["backend"],
)
m_transcript_success = Counter(
"file_transcript_success",
"Number of successful calls to FileTranscript.transcript",
["backend"],
)
m_transcript_failure = Counter(
"file_transcript_failure",
"Number of failed calls to FileTranscript.transcript",
["backend"],
)
def __init__(self, *args, **kwargs):
name = self.__class__.__name__
self.m_transcript = self.m_transcript.labels(name)
self.m_transcript_call = self.m_transcript_call.labels(name)
self.m_transcript_success = self.m_transcript_success.labels(name)
self.m_transcript_failure = self.m_transcript_failure.labels(name)
super().__init__(*args, **kwargs)
async def _push(self, data: FileTranscriptInput):
try:
self.m_transcript_call.inc()
with self.m_transcript.time():
result = await self._transcript(data)
self.m_transcript_success.inc()
if result:
await self.emit(result)
except Exception:
self.m_transcript_failure.inc()
raise
async def _transcript(self, data: FileTranscriptInput):
raise NotImplementedError
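Because the metric objects are re-bound with `.labels(name)` in `__init__`, every subclass reports under its own class name as the `backend` label. A hedged sketch with a hypothetical backend:

```python
from reflector.processors.file_transcript import (
    FileTranscriptInput,
    FileTranscriptProcessor,
)
from reflector.processors.types import Transcript


class FileTranscriptDummyProcessor(FileTranscriptProcessor):
    """Hypothetical backend; a real one would transcribe data.audio_url."""

    async def _transcript(self, data: FileTranscriptInput) -> Transcript:
        return Transcript(words=[])


# Calls are then counted as e.g.
# file_transcript_call{backend="FileTranscriptDummyProcessor"}.
```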

View File

@@ -0,0 +1,32 @@
import importlib
from reflector.processors.file_transcript import FileTranscriptProcessor
from reflector.settings import settings
class FileTranscriptAutoProcessor(FileTranscriptProcessor):
_registry = {}
@classmethod
def register(cls, name, kclass):
cls._registry[name] = kclass
def __new__(cls, name: str | None = None, **kwargs):
if name is None:
name = settings.TRANSCRIPT_BACKEND
if name not in cls._registry:
module_name = f"reflector.processors.file_transcript_{name}"
importlib.import_module(module_name)
# gather backend-specific configuration for the processor
# search `TRANSCRIPT_<NAME>_XXX`, push to constructor as `<name>_xxx`
# (e.g. TRANSCRIPT_MODAL_API_KEY -> modal_api_key)
config = {}
name_upper = name.upper()
settings_prefix = "TRANSCRIPT_"
config_prefix = f"{settings_prefix}{name_upper}_"
for key, value in settings:
if key.startswith(config_prefix):
config_name = key[len(settings_prefix) :].lower()
config[config_name] = value
return cls._registry[name](**config | kwargs)
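The final `**config | kwargs` uses the dict union operator, so explicitly passed kwargs always take precedence over settings-derived values:

```python
# Illustration of the merge in `cls._registry[name](**config | kwargs)`.
config = {"modal_api_key": "from-settings"}
kwargs = {"modal_api_key": "explicit"}
assert (config | kwargs) == {"modal_api_key": "explicit"}  # right side wins
```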

View File

@@ -0,0 +1,78 @@
"""
File transcription implementation using the GPU service from modal.com
API will be a POST request to TRANSCRIPT_URL:
```json
{
"audio_file_url": "https://...",
"language": "en",
"model": "parakeet-tdt-0.6b-v2",
"batch": true
}
```
"""
import httpx
from reflector.processors.file_transcript import (
FileTranscriptInput,
FileTranscriptProcessor,
)
from reflector.processors.file_transcript_auto import FileTranscriptAutoProcessor
from reflector.processors.types import Transcript, Word
from reflector.settings import settings
class FileTranscriptModalProcessor(FileTranscriptProcessor):
def __init__(self, modal_api_key: str | None = None, **kwargs):
super().__init__(**kwargs)
if not settings.TRANSCRIPT_URL:
raise Exception(
"TRANSCRIPT_URL required to use FileTranscriptModalProcessor"
)
self.transcript_url = settings.TRANSCRIPT_URL
self.file_timeout = settings.TRANSCRIPT_FILE_TIMEOUT
self.modal_api_key = modal_api_key
async def _transcript(self, data: FileTranscriptInput):
"""Send full file to Modal for transcription"""
url = f"{self.transcript_url}/v1/audio/transcriptions-from-url"
self.logger.info(f"Starting file transcription from {data.audio_url}")
headers = {}
if self.modal_api_key:
headers["Authorization"] = f"Bearer {self.modal_api_key}"
async with httpx.AsyncClient(timeout=self.file_timeout) as client:
response = await client.post(
url,
headers=headers,
json={
"audio_file_url": data.audio_url,
"language": data.language,
"batch": True,
},
follow_redirects=True,
)
response.raise_for_status()
result = response.json()
words = [
Word(
text=word_info["word"],
start=word_info["start"],
end=word_info["end"],
)
for word_info in result.get("words", [])
]
# words may arrive out of order; sort by start time
words.sort(key=lambda w: w.start)
return Transcript(words=words)
# Register with the auto processor
FileTranscriptAutoProcessor.register("modal", FileTranscriptModalProcessor)
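`_transcript` reads the `words` array from the response and re-sorts it by start time, so the service is expected to return a body of roughly this shape (values are illustrative):

```python
# Illustrative response body; keys mirror what _transcript reads.
expected_response = {
    "words": [
        {"word": " world", "start": 0.6, "end": 1.1},
        {"word": "Hello", "start": 0.0, "end": 0.5},  # may arrive out of order
    ]
}
```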

View File

@@ -6,7 +6,7 @@ This script is used to generate a summary of a meeting notes transcript.
import asyncio
import sys
from datetime import datetime
from datetime import datetime, timezone
from enum import Enum
from textwrap import dedent
from typing import Type, TypeVar
@@ -474,7 +474,7 @@ if __name__ == "__main__":
if args.save:
# write the summary to a file, on the format summary-<iso date>.md
filename = f"summary-{datetime.now().isoformat()}.md"
filename = f"summary-{datetime.now(timezone.utc).isoformat()}.md"
with open(filename, "w", encoding="utf-8") as f:
f.write(sm.as_markdown())

View File

@@ -0,0 +1,45 @@
"""
Processor to assemble transcript with diarization results
"""
from reflector.processors.audio_diarization import AudioDiarizationProcessor
from reflector.processors.base import Processor
from reflector.processors.types import DiarizationSegment, Transcript
class TranscriptDiarizationAssemblerInput:
"""Input containing transcript and diarization data"""
def __init__(self, transcript: Transcript, diarization: list[DiarizationSegment]):
self.transcript = transcript
self.diarization = diarization
class TranscriptDiarizationAssemblerProcessor(Processor):
"""
Assemble transcript with diarization results by applying speaker assignments
"""
INPUT_TYPE = TranscriptDiarizationAssemblerInput
OUTPUT_TYPE = Transcript
async def _push(self, data: TranscriptDiarizationAssemblerInput):
result = await self._assemble(data)
if result:
await self.emit(result)
async def _assemble(self, data: TranscriptDiarizationAssemblerInput):
"""Apply diarization to transcript words"""
if not data.diarization:
self.logger.info(
"No diarization data provided, returning original transcript"
)
return data.transcript
# Reuse logic from AudioDiarizationProcessor
processor = AudioDiarizationProcessor()
words = data.transcript.words
processor.assign_speaker(words, data.diarization)
self.logger.info(f"Applied diarization to {len(words)} words")
return data.transcript
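A hedged usage sketch of the assembler (the module path and values are illustrative; `push` is assumed from the `Processor` base class):

```python
import asyncio

from reflector.processors.transcript_diarization_assembler import (  # path assumed
    TranscriptDiarizationAssemblerInput,
    TranscriptDiarizationAssemblerProcessor,
)
from reflector.processors.types import Transcript, Word


async def demo() -> None:
    transcript = Transcript(words=[Word(text="hello", start=0.0, end=0.5)])
    diarization = [{"start": 0.0, "end": 1.0, "speaker": 1}]
    processor = TranscriptDiarizationAssemblerProcessor()
    # after this, the transcript words carry speakers from the segments
    await processor.push(TranscriptDiarizationAssemblerInput(transcript, diarization))


asyncio.run(demo())
```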

View File

@@ -2,12 +2,22 @@ import io
import re
import tempfile
from pathlib import Path
from typing import Annotated, TypedDict
from profanityfilter import ProfanityFilter
from pydantic import BaseModel, PrivateAttr
from pydantic import BaseModel, Field, PrivateAttr
from reflector.redis_cache import redis_cache
class DiarizationSegment(TypedDict):
"""Type definition for diarization segment containing speaker information"""
start: float
end: float
speaker: int
PUNC_RE = re.compile(r"[.;:?!…]")
profanity_filter = ProfanityFilter()
@@ -48,20 +58,70 @@ class AudioFile(BaseModel):
self._path.unlink()
# non-negative seconds with float part
Seconds = Annotated[float, Field(ge=0.0, description="Time in seconds with float part")]
class Word(BaseModel):
text: str
start: float
end: float
start: Seconds
end: Seconds
speaker: int = 0
class TranscriptSegment(BaseModel):
text: str
start: float
end: float
start: Seconds
end: Seconds
speaker: int = 0
def words_to_segments(words: list[Word]) -> list[TranscriptSegment]:
# build segments from a list of words: split when the speaker changes,
# or after sentence-ending punctuation (. ; : ? ! …) once the segment
# exceeds MAX_SEGMENT_LENGTH characters
segments = []
current_segment = None
MAX_SEGMENT_LENGTH = 120
for word in words:
if current_segment is None:
current_segment = TranscriptSegment(
text=word.text,
start=word.start,
end=word.end,
speaker=word.speaker,
)
continue
# If the word belongs to another speaker, push the current segment
# and start a new one
if word.speaker != current_segment.speaker:
segments.append(current_segment)
current_segment = TranscriptSegment(
text=word.text,
start=word.start,
end=word.end,
speaker=word.speaker,
)
continue
# if the word is the end of a sentence, and we have enough content,
# add the word to the current segment and push it
current_segment.text += word.text
current_segment.end = word.end
have_punc = PUNC_RE.search(word.text)
if have_punc and (len(current_segment.text) > MAX_SEGMENT_LENGTH):
segments.append(current_segment)
current_segment = None
if current_segment:
segments.append(current_segment)
return segments
class Transcript(BaseModel):
translation: str | None = None
words: list[Word] | None = None
@@ -117,49 +177,7 @@ class Transcript(BaseModel):
return Transcript(text=self.text, translation=self.translation, words=words)
def as_segments(self) -> list[TranscriptSegment]:
# from a list of word, create a list of segments
# join the word that are less than 2 seconds apart
# but separate if the speaker changes, or if the punctuation is a . , ; : ? !
segments = []
current_segment = None
MAX_SEGMENT_LENGTH = 120
for word in self.words:
if current_segment is None:
current_segment = TranscriptSegment(
text=word.text,
start=word.start,
end=word.end,
speaker=word.speaker,
)
continue
# If the word is attach to another speaker, push the current segment
# and start a new one
if word.speaker != current_segment.speaker:
segments.append(current_segment)
current_segment = TranscriptSegment(
text=word.text,
start=word.start,
end=word.end,
speaker=word.speaker,
)
continue
# if the word is the end of a sentence, and we have enough content,
# add the word to the current segment and push it
current_segment.text += word.text
current_segment.end = word.end
have_punc = PUNC_RE.search(word.text)
if have_punc and (len(current_segment.text) > MAX_SEGMENT_LENGTH):
segments.append(current_segment)
current_segment = None
if current_segment:
segments.append(current_segment)
return segments
return words_to_segments(self.words)
class TitleSummary(BaseModel):
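As a worked example of the extracted `words_to_segments` helper: a segment is pushed on every speaker change, and otherwise grows until sentence-ending punctuation once it exceeds `MAX_SEGMENT_LENGTH` characters (word values below are made up):

```python
from reflector.processors.types import Word, words_to_segments

words = [
    Word(text="Hello", start=0.0, end=0.4, speaker=0),
    Word(text=" there.", start=0.4, end=0.8, speaker=0),
    Word(text=" Hi.", start=1.0, end=1.3, speaker=1),
]
segments = words_to_segments(words)
# The speaker change at " Hi." starts a second segment.
assert [s.speaker for s in segments] == [0, 1]
assert segments[0].text == "Hello there."
```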

View File

@@ -1,5 +1,8 @@
from pydantic.types import PositiveInt
from pydantic_settings import BaseSettings, SettingsConfigDict
from reflector.utils.string import NonEmptyString
class Settings(BaseSettings):
model_config = SettingsConfigDict(
@@ -14,16 +17,23 @@ class Settings(BaseSettings):
CORS_ALLOW_CREDENTIALS: bool = False
# Database
DATABASE_URL: str = "sqlite:///./reflector.sqlite3"
DATABASE_URL: str = (
"postgresql+asyncpg://reflector:reflector@localhost:5432/reflector"
)
# local data directory
DATA_DIR: str = "./data"
# Audio Chunking
# backends: silero, frames
AUDIO_CHUNKER_BACKEND: str = "frames"
# Audio Transcription
# backends: whisper, modal
TRANSCRIPT_BACKEND: str = "whisper"
TRANSCRIPT_URL: str | None = None
TRANSCRIPT_TIMEOUT: int = 90
TRANSCRIPT_FILE_TIMEOUT: int = 600
# Audio Transcription: modal backend
TRANSCRIPT_MODAL_API_KEY: str | None = None
@@ -37,6 +47,15 @@ class Settings(BaseSettings):
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None
# Recording storage
RECORDING_STORAGE_BACKEND: str | None = None
# Recording storage configuration for AWS
RECORDING_STORAGE_AWS_BUCKET_NAME: str = "recording-bucket"
RECORDING_STORAGE_AWS_REGION: str = "us-east-1"
RECORDING_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
RECORDING_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None
# Translate into the target language
TRANSLATION_BACKEND: str = "passthrough"
TRANSLATE_URL: str | None = None
@@ -55,10 +74,14 @@ class Settings(BaseSettings):
DIARIZATION_ENABLED: bool = True
DIARIZATION_BACKEND: str = "modal"
DIARIZATION_URL: str | None = None
DIARIZATION_FILE_TIMEOUT: int = 600
# Diarization: modal backend
DIARIZATION_MODAL_API_KEY: str | None = None
# Diarization: local pyannote.audio
DIARIZATION_PYANNOTE_AUTH_TOKEN: str | None = None
# Sentry
SENTRY_DSN: str | None = None
@@ -70,9 +93,8 @@ class Settings(BaseSettings):
AUTH_JWT_PUBLIC_KEY: str | None = "authentik.monadical.com_public.pem"
AUTH_JWT_AUDIENCE: str | None = None
# API public mode
# if set, all anonymous records will be public
PUBLIC_MODE: bool = False
PUBLIC_DATA_RETENTION_DAYS: PositiveInt = 7
# Min transcript length to generate topic + summary
MIN_TRANSCRIPT_LENGTH: int = 750
@@ -100,14 +122,24 @@ class Settings(BaseSettings):
# Whereby integration
WHEREBY_API_URL: str = "https://api.whereby.dev/v1"
WHEREBY_API_KEY: str | None = None
WHEREBY_API_KEY: NonEmptyString | None = None
# Jibri integration
JIBRI_RECORDINGS_PATH: str = "/recordings"
WHEREBY_WEBHOOK_SECRET: str | None = None
AWS_WHEREBY_S3_BUCKET: str | None = None
AWS_WHEREBY_ACCESS_KEY_ID: str | None = None
AWS_WHEREBY_ACCESS_KEY_SECRET: str | None = None
AWS_PROCESS_RECORDING_QUEUE_URL: str | None = None
SQS_POLLING_TIMEOUT_SECONDS: int = 60
# Jitsi Meet
JITSI_DOMAIN: str = "meet.jit.si"
JITSI_JWT_SECRET: str | None = None
JITSI_WEBHOOK_SECRET: str | None = None
JITSI_APP_ID: str = "reflector"
JITSI_JWT_ISSUER: str = "reflector"
JITSI_JWT_AUDIENCE: str = "jitsi"
# Zulip integration
ZULIP_REALM: str | None = None
ZULIP_API_KEY: str | None = None

View File

@@ -1,10 +1,17 @@
from .base import Storage # noqa
from reflector.settings import settings
def get_transcripts_storage() -> Storage:
from reflector.settings import settings
assert settings.TRANSCRIPT_STORAGE_BACKEND
return Storage.get_instance(
name=settings.TRANSCRIPT_STORAGE_BACKEND,
settings_prefix="TRANSCRIPT_STORAGE_",
)
def get_recordings_storage() -> Storage:
return Storage.get_instance(
name=settings.RECORDING_STORAGE_BACKEND,
settings_prefix="RECORDING_STORAGE_",
)

View File

@@ -0,0 +1,72 @@
#!/usr/bin/env python
"""
Manual cleanup tool for old public data.
Uses the same implementation as the Celery worker task.
"""
import argparse
import asyncio
import sys
import structlog
from reflector.settings import settings
from reflector.worker.cleanup import _cleanup_old_public_data
logger = structlog.get_logger(__name__)
async def cleanup_old_data(days: int = 7):
logger.info(
"Starting manual cleanup",
retention_days=days,
public_mode=settings.PUBLIC_MODE,
)
if not settings.PUBLIC_MODE:
logger.critical(
"WARNING: PUBLIC_MODE is False. "
"This tool is intended for public instances only."
)
raise Exception("Tool intended for public instances only")
result = await _cleanup_old_public_data(days=days)
if result:
logger.info(
"Cleanup completed",
transcripts_deleted=result.get("transcripts_deleted", 0),
meetings_deleted=result.get("meetings_deleted", 0),
recordings_deleted=result.get("recordings_deleted", 0),
errors_count=len(result.get("errors", [])),
)
if result.get("errors"):
logger.warning(
"Errors encountered during cleanup:", errors=result["errors"][:10]
)
else:
logger.info("Cleanup skipped or completed without results")
def main():
parser = argparse.ArgumentParser(
description="Clean up old transcripts and meetings"
)
parser.add_argument(
"--days",
type=int,
default=7,
help="Number of days to keep data (default: 7)",
)
args = parser.parse_args()
if args.days < 1:
logger.error("Days must be at least 1")
sys.exit(1)
asyncio.run(cleanup_old_data(days=args.days))
if __name__ == "__main__":
main()

View File

@@ -9,8 +9,9 @@ async def export_db(filename: str) -> None:
filename = pathlib.Path(filename).resolve()
settings.DATABASE_URL = f"sqlite:///{filename}"
from reflector.db import database, transcripts
from reflector.db import get_database, transcripts
database = get_database()
await database.connect()
transcripts = await database.fetch_all(transcripts.select())
await database.disconnect()

View File

@@ -8,8 +8,9 @@ async def export_db(filename: str) -> None:
filename = pathlib.Path(filename).resolve()
settings.DATABASE_URL = f"sqlite:///{filename}"
from reflector.db import database, transcripts
from reflector.db import get_database, transcripts
database = get_database()
await database.connect()
transcripts = await database.fetch_all(transcripts.select())
await database.disconnect()

View File

@@ -1,105 +1,220 @@
"""
Process audio file with diarization support
"""
import argparse
import asyncio
import json
import shutil
import sys
import time
from pathlib import Path
from typing import Any, Dict, List, Literal
import av
from reflector.db.transcripts import SourceKind, TranscriptTopic, transcripts_controller
from reflector.logger import logger
from reflector.processors import (
AudioChunkerProcessor,
AudioMergeProcessor,
AudioTranscriptAutoProcessor,
Pipeline,
PipelineEvent,
TranscriptFinalSummaryProcessor,
TranscriptFinalTitleProcessor,
TranscriptLinerProcessor,
TranscriptTopicDetectorProcessor,
TranscriptTranslatorAutoProcessor,
from reflector.pipelines.main_file_pipeline import (
task_pipeline_file_process as task_pipeline_file_process,
)
from reflector.pipelines.main_live_pipeline import pipeline_post as live_pipeline_post
from reflector.pipelines.main_live_pipeline import (
pipeline_process as live_pipeline_process,
)
from reflector.processors.base import BroadcastProcessor
async def process_audio_file(
filename,
event_callback,
only_transcript=False,
source_language="en",
target_language="en",
def serialize_topics(topics: List[TranscriptTopic]) -> List[Dict[str, Any]]:
"""Convert TranscriptTopic objects to JSON-serializable dicts"""
serialized = []
for topic in topics:
topic_dict = topic.model_dump()
serialized.append(topic_dict)
return serialized
def debug_print_speakers(serialized_topics: List[Dict[str, Any]]) -> None:
"""Print debug info about speakers found in topics"""
all_speakers = set()
for topic_dict in serialized_topics:
for word in topic_dict.get("words", []):
all_speakers.add(word.get("speaker", 0))
print(
f"Found {len(serialized_topics)} topics with speakers: {all_speakers}",
file=sys.stderr,
)
TranscriptId = str
# common interface for every flow: each needs an entry in the db with a specific ceremony (file path + status + an actual file on the file system)
# ideally we want to get rid of it at some point
async def prepare_entry(
source_path: str,
source_language: str,
target_language: str,
) -> TranscriptId:
file_path = Path(source_path)
transcript = await transcripts_controller.add(
file_path.name,
# note that the real file-upload flow uses SourceKind.LIVE, which is a known error
source_kind=SourceKind.FILE,
source_language=source_language,
target_language=target_language,
user_id=None,
)
logger.info(
f"Created empty transcript {transcript.id} for file {file_path.name} because technically we need an empty transcript before we start transcript"
)
# pipelines expect files as upload.*
extension = file_path.suffix
upload_path = transcript.data_path / f"upload{extension}"
upload_path.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(source_path, upload_path)
logger.info(f"Copied {source_path} to {upload_path}")
# pipelines expect entity status "uploaded"
await transcripts_controller.update(transcript, {"status": "uploaded"})
return transcript.id
# same reason as prepare_entry
async def extract_result_from_entry(
transcript_id: TranscriptId, output_path: str
) -> None:
post_final_transcript = await transcripts_controller.get_by_id(transcript_id)
# assert post_final_transcript.status == "ended"
# File pipeline doesn't set status to "ended", only live pipeline does https://github.com/Monadical-SAS/reflector/issues/582
topics = post_final_transcript.topics
if not topics:
raise RuntimeError(
f"No topics found for transcript {transcript_id} after processing"
)
serialized_topics = serialize_topics(topics)
if output_path:
# Write to JSON file
with open(output_path, "w") as f:
for topic_dict in serialized_topics:
json.dump(topic_dict, f)
f.write("\n")
print(f"Results written to {output_path}", file=sys.stderr)
else:
# Write to stdout as JSONL
for topic_dict in serialized_topics:
print(json.dumps(topic_dict))
debug_print_speakers(serialized_topics)
async def process_live_pipeline(
transcript_id: TranscriptId,
):
# build pipeline for audio processing
processors = [
AudioChunkerProcessor(),
AudioMergeProcessor(),
AudioTranscriptAutoProcessor.as_threaded(),
TranscriptLinerProcessor(),
TranscriptTranslatorAutoProcessor.as_threaded(),
]
if not only_transcript:
processors += [
TranscriptTopicDetectorProcessor.as_threaded(),
BroadcastProcessor(
processors=[
TranscriptFinalTitleProcessor.as_threaded(),
TranscriptFinalSummaryProcessor.as_threaded(),
],
),
]
"""Process transcript_id with transcription and diarization"""
# transcription output
pipeline = Pipeline(*processors)
pipeline.set_pref("audio:source_language", source_language)
pipeline.set_pref("audio:target_language", target_language)
pipeline.describe()
pipeline.on(event_callback)
print(f"Processing transcript_id {transcript_id}...", file=sys.stderr)
await live_pipeline_process(transcript_id=transcript_id)
print(f"Processing complete for transcript {transcript_id}", file=sys.stderr)
pre_final_transcript = await transcripts_controller.get_by_id(transcript_id)
# assert documented behaviour: after process, the pipeline isn't ended. this is the reason of calling pipeline_post
assert pre_final_transcript.status != "ended"
# at this point diarization is already running but we have no handle on it; run pipeline_post in parallel and poll until it finishes
result = live_pipeline_post(transcript_id=transcript_id)
# result.ready() blocks even without await; it also mutates result
while not result.ready():
print(f"Status: {result.state}")
time.sleep(2)
async def process_file_pipeline(
transcript_id: TranscriptId,
):
"""Process audio/video file using the optimized file pipeline"""
# task_pipeline_file_process is a Celery task, need to use .delay() for async execution
result = task_pipeline_file_process.delay(transcript_id=transcript_id)
# Wait for the Celery task to complete
while not result.ready():
print(f"File pipeline status: {result.state}", file=sys.stderr)
time.sleep(2)
logger.info("File pipeline processing complete")
async def process(
source_path: str,
source_language: str,
target_language: str,
pipeline: Literal["live", "file"],
output_path: str | None = None,
):
from reflector.db import get_database
database = get_database()
# db connect is part of the ceremony
await database.connect()
# start processing audio
logger.info(f"Opening {filename}")
container = av.open(filename)
try:
logger.info("Start pushing audio into the pipeline")
for frame in container.decode(audio=0):
await pipeline.push(frame)
finally:
logger.info("Flushing the pipeline")
await pipeline.flush()
transcript_id = await prepare_entry(
source_path,
source_language,
target_language,
)
logger.info("All done !")
pipeline_handlers = {
"live": process_live_pipeline,
"file": process_file_pipeline,
}
handler = pipeline_handlers.get(pipeline)
if not handler:
raise ValueError(f"Unknown pipeline type: {pipeline}")
await handler(transcript_id)
await extract_result_from_entry(transcript_id, output_path)
finally:
await database.disconnect()
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser = argparse.ArgumentParser(
description="Process audio files with speaker diarization"
)
parser.add_argument("source", help="Source file (mp3, wav, mp4...)")
parser.add_argument("--only-transcript", "-t", action="store_true")
parser.add_argument("--source-language", default="en")
parser.add_argument("--target-language", default="en")
parser.add_argument(
"--pipeline",
required=True,
choices=["live", "file"],
help="Pipeline type to use for processing (live: streaming/incremental, file: batch/parallel)",
)
parser.add_argument(
"--source-language", default="en", help="Source language code (default: en)"
)
parser.add_argument(
"--target-language", default="en", help="Target language code (default: en)"
)
parser.add_argument("--output", "-o", help="Output file (output.jsonl)")
args = parser.parse_args()
output_fd = None
if args.output:
output_fd = open(args.output, "w")
async def event_callback(event: PipelineEvent):
processor = event.processor
# ignore some processor
if processor in ("AudioChunkerProcessor", "AudioMergeProcessor"):
return
logger.info(f"Event: {event}")
if output_fd:
output_fd.write(event.model_dump_json())
output_fd.write("\n")
asyncio.run(
process_audio_file(
process(
args.source,
event_callback,
only_transcript=args.only_transcript,
source_language=args.source_language,
target_language=args.target_language,
args.source_language,
args.target_language,
args.pipeline,
args.output,
)
)
if output_fd:
output_fd.close()
logger.info(f"Output written to {args.output}")

View File

@@ -1,316 +0,0 @@
"""
@vibe-generated
Process audio file with diarization support
===========================================
Extended version of process.py that includes speaker diarization.
This tool processes audio files locally without requiring the full server infrastructure.
"""
import asyncio
import tempfile
import uuid
from pathlib import Path
from typing import List
import av
from reflector.logger import logger
from reflector.processors import (
AudioChunkerProcessor,
AudioFileWriterProcessor,
AudioMergeProcessor,
AudioTranscriptAutoProcessor,
Pipeline,
PipelineEvent,
TranscriptFinalSummaryProcessor,
TranscriptFinalTitleProcessor,
TranscriptLinerProcessor,
TranscriptTopicDetectorProcessor,
TranscriptTranslatorAutoProcessor,
)
from reflector.processors.base import BroadcastProcessor, Processor
from reflector.processors.types import (
AudioDiarizationInput,
TitleSummary,
TitleSummaryWithId,
)
class TopicCollectorProcessor(Processor):
"""Collect topics for diarization"""
INPUT_TYPE = TitleSummary
OUTPUT_TYPE = TitleSummary
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.topics: List[TitleSummaryWithId] = []
self._topic_id = 0
async def _push(self, data: TitleSummary):
# Convert to TitleSummaryWithId and collect
self._topic_id += 1
topic_with_id = TitleSummaryWithId(
id=str(self._topic_id),
title=data.title,
summary=data.summary,
timestamp=data.timestamp,
duration=data.duration,
transcript=data.transcript,
)
self.topics.append(topic_with_id)
# Pass through the original topic
await self.emit(data)
def get_topics(self) -> List[TitleSummaryWithId]:
return self.topics
async def process_audio_file_with_diarization(
filename,
event_callback,
only_transcript=False,
source_language="en",
target_language="en",
enable_diarization=True,
diarization_backend="modal",
):
# Create temp file for audio if diarization is enabled
audio_temp_path = None
if enable_diarization:
audio_temp_file = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
audio_temp_path = audio_temp_file.name
audio_temp_file.close()
# Create processor for collecting topics
topic_collector = TopicCollectorProcessor()
# Build pipeline for audio processing
processors = []
# Add audio file writer at the beginning if diarization is enabled
if enable_diarization:
processors.append(AudioFileWriterProcessor(audio_temp_path))
# Add the rest of the processors
processors += [
AudioChunkerProcessor(),
AudioMergeProcessor(),
AudioTranscriptAutoProcessor.as_threaded(),
]
processors += [
TranscriptLinerProcessor(),
TranscriptTranslatorAutoProcessor.as_threaded(),
]
if not only_transcript:
processors += [
TranscriptTopicDetectorProcessor.as_threaded(),
# Collect topics for diarization
topic_collector,
BroadcastProcessor(
processors=[
TranscriptFinalTitleProcessor.as_threaded(),
TranscriptFinalSummaryProcessor.as_threaded(),
],
),
]
# Create main pipeline
pipeline = Pipeline(*processors)
pipeline.set_pref("audio:source_language", source_language)
pipeline.set_pref("audio:target_language", target_language)
pipeline.describe()
pipeline.on(event_callback)
# Start processing audio
logger.info(f"Opening {filename}")
container = av.open(filename)
try:
logger.info("Start pushing audio into the pipeline")
for frame in container.decode(audio=0):
await pipeline.push(frame)
finally:
logger.info("Flushing the pipeline")
await pipeline.flush()
# Run diarization if enabled and we have topics
if enable_diarization and not only_transcript and audio_temp_path:
topics = topic_collector.get_topics()
if topics:
logger.info(f"Starting diarization with {len(topics)} topics")
try:
# Import diarization processor
from reflector.processors import AudioDiarizationAutoProcessor
# Create diarization processor
diarization_processor = AudioDiarizationAutoProcessor(
name=diarization_backend
)
diarization_processor.on(event_callback)
# For Modal backend, we need to upload the file to S3 first
if diarization_backend == "modal":
from datetime import datetime
from reflector.storage import get_transcripts_storage
from reflector.utils.s3_temp_file import S3TemporaryFile
storage = get_transcripts_storage()
# Generate a unique filename in evaluation folder
timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
audio_filename = f"evaluation/diarization_temp/{timestamp}_{uuid.uuid4().hex}.wav"
# Use context manager for automatic cleanup
async with S3TemporaryFile(storage, audio_filename) as s3_file:
# Read and upload the audio file
with open(audio_temp_path, "rb") as f:
audio_data = f.read()
audio_url = await s3_file.upload(audio_data)
logger.info(f"Uploaded audio to S3: {audio_filename}")
# Create diarization input with S3 URL
diarization_input = AudioDiarizationInput(
audio_url=audio_url, topics=topics
)
# Run diarization
await diarization_processor.push(diarization_input)
await diarization_processor.flush()
logger.info("Diarization complete")
# File will be automatically cleaned up when exiting the context
else:
# For local backend, use local file path
audio_url = audio_temp_path
# Create diarization input
diarization_input = AudioDiarizationInput(
audio_url=audio_url, topics=topics
)
# Run diarization
await diarization_processor.push(diarization_input)
await diarization_processor.flush()
logger.info("Diarization complete")
except ImportError as e:
logger.error(f"Failed to import diarization dependencies: {e}")
logger.error(
"Install with: uv pip install pyannote.audio torch torchaudio"
)
logger.error(
"And set HF_TOKEN environment variable for pyannote models"
)
raise SystemExit(1)
except Exception as e:
logger.error(f"Diarization failed: {e}")
raise SystemExit(1)
else:
logger.warning("Skipping diarization: no topics available")
# Clean up temp file
if audio_temp_path:
try:
Path(audio_temp_path).unlink()
except Exception as e:
logger.warning(f"Failed to clean up temp file {audio_temp_path}: {e}")
logger.info("All done!")
if __name__ == "__main__":
import argparse
import os
parser = argparse.ArgumentParser(
description="Process audio files with optional speaker diarization"
)
parser.add_argument("source", help="Source file (mp3, wav, mp4...)")
parser.add_argument(
"--only-transcript",
"-t",
action="store_true",
help="Only generate transcript without topics/summaries",
)
parser.add_argument(
"--source-language", default="en", help="Source language code (default: en)"
)
parser.add_argument(
"--target-language", default="en", help="Target language code (default: en)"
)
parser.add_argument("--output", "-o", help="Output file (output.jsonl)")
parser.add_argument(
"--enable-diarization",
"-d",
action="store_true",
help="Enable speaker diarization",
)
parser.add_argument(
"--diarization-backend",
default="modal",
choices=["modal"],
help="Diarization backend to use (default: modal)",
)
args = parser.parse_args()
# Set REDIS_HOST to localhost if not provided
if "REDIS_HOST" not in os.environ:
os.environ["REDIS_HOST"] = "localhost"
logger.info("REDIS_HOST not set, defaulting to localhost")
output_fd = None
if args.output:
output_fd = open(args.output, "w")
async def event_callback(event: PipelineEvent):
processor = event.processor
data = event.data
# Ignore internal processors
if processor in (
"AudioChunkerProcessor",
"AudioMergeProcessor",
"AudioFileWriterProcessor",
"TopicCollectorProcessor",
"BroadcastProcessor",
):
return
# If diarization is enabled, skip the original topic events from the pipeline
# The diarization processor will emit the same topics but with speaker info
if processor == "TranscriptTopicDetectorProcessor" and args.enable_diarization:
return
# Log all events
logger.info(f"Event: {processor} - {type(data).__name__}")
# Write to output
if output_fd:
output_fd.write(event.model_dump_json())
output_fd.write("\n")
output_fd.flush()
asyncio.run(
process_audio_file_with_diarization(
args.source,
event_callback,
only_transcript=args.only_transcript,
source_language=args.source_language,
target_language=args.target_language,
enable_diarization=args.enable_diarization,
diarization_backend=args.diarization_backend,
)
)
if output_fd:
output_fd.close()
logger.info(f"Output written to {args.output}")

View File

@@ -53,7 +53,7 @@ async def run_single_processor(args):
async def event_callback(event: PipelineEvent):
processor = event.processor
# ignore some processor
if processor in ("AudioChunkerProcessor", "AudioMergeProcessor"):
if processor in ("AudioChunkerAutoProcessor", "AudioMergeProcessor"):
return
print(f"Event: {event}")
if output_fd:

View File

@@ -1,96 +0,0 @@
#!/usr/bin/env python3
"""
@vibe-generated
Test script for the diarization CLI tool
=========================================
This script helps test the diarization functionality with sample audio files.
"""
import asyncio
import sys
from pathlib import Path
from reflector.logger import logger
async def test_diarization(audio_file: str):
"""Test the diarization functionality"""
# Import the processing function
from process_with_diarization import process_audio_file_with_diarization
# Collect events
events = []
async def event_callback(event):
events.append({"processor": event.processor, "data": event.data})
logger.info(f"Event from {event.processor}")
# Process the audio file
logger.info(f"Processing audio file: {audio_file}")
try:
await process_audio_file_with_diarization(
audio_file,
event_callback,
only_transcript=False,
source_language="en",
target_language="en",
enable_diarization=True,
diarization_backend="modal",
)
# Analyze results
logger.info(f"Processing complete. Received {len(events)} events")
# Look for diarization results
diarized_topics = []
for event in events:
if "TitleSummary" in event["processor"]:
# Check if words have speaker information
if hasattr(event["data"], "transcript") and event["data"].transcript:
words = event["data"].transcript.words
if words and hasattr(words[0], "speaker"):
speakers = set(
w.speaker for w in words if hasattr(w, "speaker")
)
logger.info(
f"Found {len(speakers)} speakers in topic: {event['data'].title}"
)
diarized_topics.append(event["data"])
if diarized_topics:
logger.info(f"Successfully diarized {len(diarized_topics)} topics")
# Print sample output
sample_topic = diarized_topics[0]
logger.info("Sample diarized output:")
for i, word in enumerate(sample_topic.transcript.words[:10]):
logger.info(f" Word {i}: '{word.text}' - Speaker {word.speaker}")
else:
logger.warning("No diarization results found in output")
return events
except Exception as e:
logger.error(f"Error during processing: {e}")
raise
def main():
if len(sys.argv) < 2:
print("Usage: python test_diarization.py <audio_file>")
sys.exit(1)
audio_file = sys.argv[1]
if not Path(audio_file).exists():
print(f"Error: Audio file '{audio_file}' not found")
sys.exit(1)
# Run the test
asyncio.run(test_diarization(audio_file))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,23 @@
from typing import Annotated
from pydantic import Field, TypeAdapter, constr
NonEmptyStringBase = constr(min_length=1, strip_whitespace=False)
NonEmptyString = Annotated[
NonEmptyStringBase,
Field(description="A non-empty string", min_length=1),
]
non_empty_string_adapter = TypeAdapter(NonEmptyString)
def parse_non_empty_string(s: str, error: str | None = None) -> NonEmptyString:
try:
return non_empty_string_adapter.validate_python(s)
except Exception as e:
raise ValueError(f"{e}: {error}" if error else e) from e
def try_parse_non_empty_string(s: str) -> NonEmptyString | None:
if not s:
return None
return parse_non_empty_string(s)
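The two helpers differ only in failure mode: the strict variant raises, while `try_parse_non_empty_string` short-circuits falsy input to `None`. For example:

```python
from reflector.utils.string import (
    parse_non_empty_string,
    try_parse_non_empty_string,
)

assert parse_non_empty_string("whereby-key") == "whereby-key"
assert try_parse_non_empty_string("") is None
try:
    parse_non_empty_string("", error="value must not be empty")
except ValueError:
    pass  # strict variant raises, with the extra context appended
```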

View File

@@ -0,0 +1,63 @@
"""WebVTT utilities for generating subtitle files from transcript data."""
from typing import TYPE_CHECKING, Annotated
import webvtt
from reflector.processors.types import Seconds, Word, words_to_segments
if TYPE_CHECKING:
from reflector.db.transcripts import TranscriptTopic
VttTimestamp = Annotated[str, "vtt_timestamp"]
WebVTTStr = Annotated[str, "webvtt_str"]
def _seconds_to_timestamp(seconds: Seconds) -> VttTimestamp:
# the webvtt library doesn't format timestamps, so do it here
hours = int(seconds // 3600)
minutes = int((seconds % 3600) // 60)
secs = int(seconds % 60)
milliseconds = int((seconds % 1) * 1000)
return f"{hours:02d}:{minutes:02d}:{secs:02d}.{milliseconds:03d}"
def words_to_webvtt(words: list[Word]) -> WebVTTStr:
"""Convert words to WebVTT using existing segmentation logic."""
vtt = webvtt.WebVTT()
if not words:
return vtt.content
segments = words_to_segments(words)
for segment in segments:
text = segment.text.strip()
# the webvtt library doesn't add voice tags, so prepend one manually
text = f"<v Speaker{segment.speaker}>{text}"
caption = webvtt.Caption(
start=_seconds_to_timestamp(segment.start),
end=_seconds_to_timestamp(segment.end),
text=text,
)
vtt.captions.append(caption)
return vtt.content
def topics_to_webvtt(topics: list["TranscriptTopic"]) -> WebVTTStr:
if not topics:
return webvtt.WebVTT().content
all_words: list[Word] = []
for topic in topics:
all_words.extend(topic.words)
# assert the words are in chronological order
for i in range(len(all_words) - 1):
assert (
all_words[i].start <= all_words[i + 1].start
), f"Words are not in sequence: {all_words[i].text} and {all_words[i + 1].text} are not consecutive: {all_words[i].start} > {all_words[i + 1].start}"
return words_to_webvtt(all_words)
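A hedged usage sketch (module path assumed; word values made up), showing the voice-tagged cue output:

```python
from reflector.processors.types import Word
from reflector.utils.webvtt import words_to_webvtt  # module path assumed

words = [
    Word(text="Hello", start=0.0, end=0.4, speaker=0),
    Word(text=" there.", start=0.4, end=0.8, speaker=0),
]
print(words_to_webvtt(words))
# Expected shape: a WEBVTT header, then one cue:
# 00:00:00.000 --> 00:00:00.800
# <v Speaker0>Hello there.
```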

View File

@@ -0,0 +1,17 @@
# Video Platform Abstraction Layer
"""
This module provides an abstraction layer for different video conferencing platforms.
It allows seamless switching between providers (Whereby, Daily.co, etc.) without
changing the core application logic.
"""
from .base import MeetingData, VideoPlatformClient, VideoPlatformConfig
from .registry import get_platform_client, register_platform
__all__ = [
"VideoPlatformClient",
"VideoPlatformConfig",
"MeetingData",
"get_platform_client",
"register_platform",
]

View File

@@ -0,0 +1,82 @@
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Any, Dict, Optional
from pydantic import BaseModel
from reflector.db.rooms import Room
class MeetingData(BaseModel):
"""Standardized meeting data returned by all platforms."""
meeting_id: str
room_name: str
room_url: str
host_room_url: str
platform: str
extra_data: Dict[str, Any] = {} # Platform-specific data
class VideoPlatformConfig(BaseModel):
"""Configuration for a video platform."""
api_key: str
webhook_secret: str
api_url: Optional[str] = None
subdomain: Optional[str] = None
s3_bucket: Optional[str] = None
s3_region: Optional[str] = None
aws_role_arn: Optional[str] = None
aws_access_key_id: Optional[str] = None
aws_access_key_secret: Optional[str] = None
class VideoPlatformClient(ABC):
"""Abstract base class for video platform integrations."""
PLATFORM_NAME: str = ""
def __init__(self, config: VideoPlatformConfig):
self.config = config
@abstractmethod
async def create_meeting(
self, room_name_prefix: str, end_date: datetime, room: Room
) -> MeetingData:
"""Create a new meeting room."""
pass
@abstractmethod
async def get_room_sessions(self, room_name: str) -> Dict[str, Any]:
"""Get session information for a room."""
pass
@abstractmethod
async def delete_room(self, room_name: str) -> bool:
"""Delete a room. Returns True if successful."""
pass
@abstractmethod
async def upload_logo(self, room_name: str, logo_path: str) -> bool:
"""Upload a logo to the room. Returns True if successful."""
pass
@abstractmethod
def verify_webhook_signature(
self, body: bytes, signature: str, timestamp: Optional[str] = None
) -> bool:
"""Verify webhook signature for security."""
pass
def format_recording_config(self, room: Room) -> Dict[str, Any]:
"""Format recording configuration for the platform.
Can be overridden by specific implementations."""
if room.recording_type == "cloud" and self.config.s3_bucket:
return {
"type": room.recording_type,
"bucket": self.config.s3_bucket,
"region": self.config.s3_region,
"trigger": room.recording_trigger,
}
return {"type": room.recording_type}

View File

@@ -0,0 +1,54 @@
"""Factory for creating video platform clients based on configuration."""
from typing import TYPE_CHECKING, Literal, Optional, overload
from reflector.db.rooms import VideoPlatform
from reflector.settings import settings
from .base import VideoPlatformClient, VideoPlatformConfig
from .registry import get_platform_client
if TYPE_CHECKING:
from .jitsi import JitsiClient
from .whereby import WherebyClient
def get_platform_config(platform: str) -> VideoPlatformConfig:
"""Get configuration for a specific platform."""
if platform == VideoPlatform.WHEREBY:
return VideoPlatformConfig(
api_key=settings.WHEREBY_API_KEY or "",
webhook_secret=settings.WHEREBY_WEBHOOK_SECRET or "",
api_url=settings.WHEREBY_API_URL,
s3_bucket=settings.RECORDING_STORAGE_AWS_BUCKET_NAME,
aws_access_key_id=settings.AWS_WHEREBY_ACCESS_KEY_ID,
aws_access_key_secret=settings.AWS_WHEREBY_ACCESS_KEY_SECRET,
)
elif platform == VideoPlatform.JITSI:
return VideoPlatformConfig(
api_key="", # Jitsi uses JWT, no API key
webhook_secret=settings.JITSI_WEBHOOK_SECRET or "",
api_url=f"https://{settings.JITSI_DOMAIN}",
)
else:
raise ValueError(f"Unknown platform: {platform}")
@overload
def create_platform_client(platform: Literal["jitsi"]) -> "JitsiClient": ...
@overload
def create_platform_client(platform: Literal["whereby"]) -> "WherebyClient": ...
def create_platform_client(platform: str) -> VideoPlatformClient:
"""Create a video platform client instance."""
config = get_platform_config(platform)
return get_platform_client(platform, config)
def get_platform_for_room(room_id: Optional[str] = None) -> str:
"""Determine which platform to use for a room based on feature flags."""
# For now, default to whereby since we don't have feature flags yet
return VideoPlatform.WHEREBY
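Thanks to the overloads, callers passing a literal platform name get the concrete client type back. A brief hedged sketch (import path assumed):

```python
from reflector.db.rooms import VideoPlatform
from reflector.video_platforms.factory import (  # module path assumed
    create_platform_client,
    get_platform_for_room,
)

client = create_platform_client("whereby")  # typed as WherebyClient via overload
assert get_platform_for_room() == VideoPlatform.WHEREBY  # no feature flags yet
```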

Some files were not shown because too many files have changed in this diff.