* Revert "fix: Skip mixdown for multitrack (#760)"
This reverts commit b51b7aa917.
* multitrack mixdown optimisation
* return the "good" ui part of "skip mixdown"
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* Update transcript list on reprocess
* Fix transcript create
* Fix multiple sockets issue
* Pass token in Sec-WebSocket-Protocol header
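A note on why the token rides in `Sec-WebSocket-Protocol`: browsers cannot set an `Authorization` header on WebSocket connections. A minimal FastAPI sketch of the idea; the route path, close code value, and `verify_token` helper are hypothetical:

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()
WS_CLOSE_UNAUTHORIZED = 4401  # app-specific close code constant (value is a guess)

async def verify_token(token: str) -> bool:
    # placeholder: decode and validate the user's token here
    return bool(token)

@app.websocket("/v1/ws/user")  # hypothetical path
async def user_events(websocket: WebSocket):
    # the browser client passes the token as a subprotocol: new WebSocket(url, [token])
    token = websocket.headers.get("sec-websocket-protocol", "")
    if not await verify_token(token):
        await websocket.close(code=WS_CLOSE_UNAUTHORIZED)
        return
    # echo the subprotocol back, as the WebSocket handshake requires
    await websocket.accept(subprotocol=token)
```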
* userEvent parse example
* transcript list invalidation non-abstraction
* Emit only relevant events to the user room
* Add ws close code const
* Refactor user websocket endpoint
* Refactor user events provider
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* feat(rooms): add webhook notifications for transcript completion
- Add webhook_url and webhook_secret fields to rooms table
- Create Celery task with 24-hour retry window using exponential backoff
- Send transcript metadata, diarized text, topics, and summaries via webhook
- Add HMAC signature verification for webhook security
- Add test endpoint POST /v1/rooms/{room_id}/webhook/test
- Update frontend with webhook configuration UI and test button
- Auto-generate webhook secret if not provided
- Trigger webhook after successful file pipeline processing for room recordings
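A sketch of what the delivery task might look like. The task name, header name, and retry numbers are assumptions; the Celery retry options and HMAC-SHA256 signing are standard APIs:

```python
import hashlib
import hmac
import json

import httpx
from celery import shared_task

@shared_task(
    bind=True,
    autoretry_for=(httpx.HTTPError,),
    retry_backoff=True,      # exponential backoff between attempts
    retry_backoff_max=3600,  # cap individual waits at one hour
    max_retries=30,          # roughly a 24-hour retry window
)
def deliver_webhook(self, webhook_url: str, webhook_secret: str, payload: dict):
    """Sign the payload with HMAC-SHA256 and POST it to the room's webhook."""
    body = json.dumps(payload).encode()
    signature = hmac.new(webhook_secret.encode(), body, hashlib.sha256).hexdigest()
    response = httpx.post(
        webhook_url,
        content=body,
        headers={
            "Content-Type": "application/json",
            "X-Webhook-Signature": f"sha256={signature}",  # header name is a guess
        },
        timeout=30,
    )
    response.raise_for_status()  # non-2xx triggers autoretry
```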
* style: linting
* fix: remove unwanted files
* fix: update openapi gen
* fix: self-review
* docs: add comprehensive webhook documentation
- Document webhook configuration, events, and payloads
- Include transcript.completed and test event examples
- Add security considerations and best practices
- Provide example webhook receiver implementation
- Document retry policy and signature verification
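And a sketch of a receiver that verifies the signature, assuming the hypothetical `X-Webhook-Signature: sha256=<hex>` scheme from the sketch above:

```python
import hashlib
import hmac

from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
WEBHOOK_SECRET = "copy-from-room-settings"  # the shared secret configured on the room

@app.post("/webhook")
async def receive_webhook(request: Request, x_webhook_signature: str = Header(...)):
    body = await request.body()
    expected = "sha256=" + hmac.new(
        WEBHOOK_SECRET.encode(), body, hashlib.sha256
    ).hexdigest()
    # constant-time comparison avoids leaking the signature via timing
    if not hmac.compare_digest(expected, x_webhook_signature):
        raise HTTPException(status_code=401, detail="invalid signature")
    event = await request.json()
    # dispatch on event type here: transcript.completed, test, ...
    return {"ok": True, "event": event.get("event")}
```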
* fix: remove audio_mp3_url from webhook payload
- Remove audio download URL generation from webhook
- Update documentation to reflect the change
- Keep only frontend_url for accessing transcripts
* docs: remove unwanted section
* fix: correct API method name and type imports for rooms
- Fix v1RoomsRetrieve to v1RoomsGet
- Update Room type to RoomDetails throughout frontend
- Fix type imports in useRoomList, RoomList, RoomTable, and RoomCards
* feat: add show/hide toggle for webhook secret field
- Add eye icon button to reveal/hide webhook secret when editing
- Show password dots when webhook secret is hidden
- Reset visibility state when opening/closing dialog
- Only show toggle button when editing existing room with secret
* fix: resolve event loop conflict in webhook test endpoint
- Extract webhook test logic into shared async function
- Call async function directly from FastAPI endpoint
- Keep Celery task wrapper for background processing
- Fixes RuntimeError: event loop already running
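The fix amounts to awaiting the coroutine from the endpoint instead of spinning up a second event loop; the names below are illustrative:

```python
import httpx
from fastapi import FastAPI

app = FastAPI()

async def get_room_webhook_url(room_id: str) -> str:
    # stub: look up the room's configured webhook_url in the database
    return "https://example.com/webhook"

async def run_webhook_test(webhook_url: str) -> dict:
    """Shared async logic, callable from the endpoint or a task wrapper."""
    async with httpx.AsyncClient() as client:
        response = await client.post(webhook_url, json={"event": "test"})
        return {"status_code": response.status_code}

@app.post("/v1/rooms/{room_id}/webhook/test")
async def webhook_test(room_id: str):
    # FastAPI endpoints already run inside an event loop, so await directly;
    # asyncio.run() here is what raised "event loop already running"
    return await run_webhook_test(await get_room_webhook_url(room_id))
```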
* refactor: remove unnecessary Celery task for webhook testing
- Webhook testing is synchronous and provides immediate feedback
- No need for background processing via Celery
- Keep only the async function called directly from API endpoint
* feat: improve webhook test error messages and display
- Show HTTP status code in error messages
- Parse JSON error responses to extract meaningful messages
- Improved UI layout for webhook test results
- Added colored background for success/error states
- Better text wrapping for long error messages
* docs: adjust doc
* fix: review
* fix: update retry attempts to match the 24h retry window
* fix: add event_id
* fix: changed to uuid, to get a new event_id on reprocess
* style: linting
* fix: alembic revision
* feat(cleanup): add automatic data retention for public instances
- Add Celery task to clean up anonymous data after configurable retention period
- Delete transcripts, meetings, and orphaned recordings older than retention days
- Only runs when PUBLIC_MODE is enabled to prevent accidental data loss
- Properly removes all associated files (local and S3 storage)
- Add manual cleanup tool for testing and intervention
- Configure retention via PUBLIC_DATA_RETENTION_DAYS setting (default: 7 days)
Fixes #571
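A rough sketch of the retention flow described above; the helper callables stand in for the real controllers:

```python
from collections.abc import Awaitable, Callable
from datetime import datetime, timedelta, timezone

PUBLIC_MODE = True              # assumption: mirrors settings.PUBLIC_MODE
PUBLIC_DATA_RETENTION_DAYS = 7  # default retention from this commit

def retention_cutoff(days: int = PUBLIC_DATA_RETENTION_DAYS) -> datetime:
    """Anonymous data created before this instant is eligible for deletion."""
    return datetime.now(timezone.utc) - timedelta(days=days)

async def cleanup_old_public_data(
    fetch_old_transcripts: Callable[[datetime], Awaitable[list]],
    delete_transcript: Callable[..., Awaitable[None]],
) -> int:
    """Delete old transcripts along with their meetings and recordings."""
    if not PUBLIC_MODE:  # guard against accidental data loss on private instances
        return 0
    deleted = 0
    for transcript in await fetch_old_transcripts(retention_cutoff()):
        await delete_transcript(transcript)  # also removes local/S3 files
        deleted += 1
    return deleted
```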
* fix: apply pre-commit formatting fixes
* fix: properly delete recording files from storage during cleanup
- Add storage deletion for orphaned recordings in both cleanup task and manual tool
- Delete from storage before removing database records
- Log warnings if storage deletion fails but continue with database cleanup
* Apply suggestion from @pr-agent-monadical[bot]
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
* Apply suggestion from @pr-agent-monadical[bot]
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
* refactor: cleanup_old_data for better logging
* fix: linting
* test: fix meeting cleanup test to not require room controller
- Simplify test by directly inserting meetings into database
- Remove dependency on non-existent rooms_controller.create method
- Tests now pass successfully
* fix: linting
* refactor: simplify cleanup tool to use worker implementation
- Remove duplicate cleanup logic from manual tool
- Use the same _cleanup_old_public_data function from worker
- Remove dry-run feature as requested
- Prevent code duplication and ensure consistency
- Update documentation to reflect changes
* refactor: split cleanup worker into smaller functions
- Move all imports to the top of the file
- Extract cleanup logic into separate functions:
  - cleanup_old_transcripts()
  - cleanup_old_meetings()
  - cleanup_orphaned_recordings()
  - log_cleanup_results()
- Make code more maintainable and testable
- Add days parameter support to Celery task
- Update manual tool to work with refactored code
* feat: add TypedDict typing for cleanup stats
- Add CleanupStats TypedDict for better type safety
- Update all function signatures to use proper typing
- Add return type annotations to _cleanup_old_public_data
- Improves code maintainability and IDE support
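The stats type presumably looks something like this (field names are a guess):

```python
from typing import TypedDict

class CleanupStats(TypedDict):
    """Counters returned by the cleanup run."""
    transcripts_deleted: int
    meetings_deleted: int
    recordings_deleted: int
    errors: int
```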
* feat: add CASCADE DELETE to meeting_consent foreign key
- Add ondelete="CASCADE" to meeting_consent.meeting_id foreign key
- Generate and apply migration to update existing constraint
- Remove manual consent deletion from cleanup code
- Add unit test to verify CASCADE DELETE behavior
* style: linting
* fix: alembic migration branchpoint
* fix: correct downgrade constraint name in CASCADE DELETE migration
* fix: regenerate CASCADE DELETE migration with proper constraint names
- Delete problematic migration and regenerate with correct names
- Use explicit constraint name in both upgrade and downgrade
- Ensure migration works bidirectionally
- All tests passing including CASCADE DELETE test
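The regenerated migration plausibly looks like the following; constraint, table, and revision names are placeholders, but `op.drop_constraint`/`op.create_foreign_key` with `ondelete="CASCADE"` is the standard Alembic pattern:

```python
from alembic import op

# revision identifiers (placeholder values)
revision = "abcd1234"
down_revision = "efgh5678"

def upgrade() -> None:
    op.drop_constraint(
        "meeting_consent_meeting_id_fkey", "meeting_consent", type_="foreignkey"
    )
    op.create_foreign_key(
        "meeting_consent_meeting_id_fkey",
        "meeting_consent",
        "meeting",  # referent table name is a guess
        ["meeting_id"],
        ["id"],
        ondelete="CASCADE",
    )

def downgrade() -> None:
    op.drop_constraint(
        "meeting_consent_meeting_id_fkey", "meeting_consent", type_="foreignkey"
    )
    op.create_foreign_key(
        "meeting_consent_meeting_id_fkey",
        "meeting_consent",
        "meeting",
        ["meeting_id"],
        ["id"],  # no ondelete: restores the original constraint
    )
```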
* style: linting
* refactor: simplify cleanup to use transcripts as entry point
- Remove orphaned_recordings cleanup (not part of this PR scope)
- Remove separate old_meetings cleanup
- Transcripts are now the main entry point for cleanup
- Associated meetings and recordings are deleted with their transcript
- Use single database connection for all operations
- Update tests to reflect new approach
* refactor: cleanup and rename functions for clarity
- Rename _cleanup_old_public_data to cleanup_old_public_data (make public)
- Rename celery task to cleanup_old_public_data_task for clarity
- Update docstrings and improve code organization
- Remove unnecessary comments and simplify deletion logic
- Update tests to use new function names
- All tests passing
* style: linting
* style: typing and review
* fix: add transaction on cleanup_single_transcript
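A sketch of the transactional delete, assuming the `databases` library; table and column names are made up:

```python
from databases import Database

async def cleanup_single_transcript(db: Database, transcript_id: str) -> None:
    # all-or-nothing: if any delete fails, roll back the whole cleanup
    async with db.transaction():
        await db.execute(
            "DELETE FROM recording WHERE transcript_id = :id", {"id": transcript_id}
        )
        await db.execute(
            "DELETE FROM meeting WHERE transcript_id = :id", {"id": transcript_id}
        )
        await db.execute("DELETE FROM transcript WHERE id = :id", {"id": transcript_id})
```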
* fix: naming
---------
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
* feat: use file pipeline for upload and reprocess action
* fix: make file pipeline correctly report status events
* fix: duplication of transcripts_controller
* fix: tests
* test: fix file upload test
* test: fix reprocess
* fix: also patch from main_file_pipeline
(how patching is done depends on the file import, unfortunately)
This commit restores the original behavior with frame cutting. While
silero is used on our GPU for files, it looks like it's not working great
on the live pipeline. To be investigated, but at the moment, what we keep
is:
- refactored to extract the downscale for further processing in the
pipeline
- removed any downscale implementation from audio_chunker and audio_merge
- removed batching from audio_merge too for now
* feat: improve pipeline threading, and transcriber (parakeet and silero vad)
* refactor: remove whisperx, implement parakeet
* refactor: make audio_chunker smarter and wait for speech, instead of a fixed frame
* refactor: make audio merge always downscale the audio to 16k for transcription
* refactor: make the audio transcript modal accept batches
* refactor: improve type safety and remove prometheus metrics
- Add DiarizationSegment TypedDict for proper diarization typing
- Replace List/Optional with modern Python list/| None syntax
- Remove all Prometheus metrics from TranscriptDiarizationAssemblerProcessor
- Add comprehensive file processing pipeline with parallel execution
- Update processor imports and type annotations throughout
- Implement optimized file pipeline as default in process.py tool
* refactor: convert FileDiarizationProcessor I/O types to BaseModel
Update FileDiarizationInput and FileDiarizationOutput to inherit from
BaseModel instead of plain classes, following the standard pattern
used by other processors in the codebase.
* test: add tests for file transcript and diarization with pytest-recording
* build: add pytest-recording
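pytest-recording drives VCR.py through a marker; a minimal usage sketch (the URL is a placeholder):

```python
import httpx
import pytest

@pytest.mark.vcr  # first run records a cassette, later runs replay it offline
def test_remote_call():
    response = httpx.get("https://api.example.com/health")
    assert response.status_code == 200
```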
* feat: add local pyannote for testing
* fix: replace PyAV AudioResampler with torchaudio for reliable audio processing
- Replace problematic PyAV AudioResampler that was causing ValueError: [Errno 22] Invalid argument
- Use torchaudio.functional.resample for robust sample rate conversion
- Optimize processing: skip conversion for already 16kHz mono audio
- Add direct WAV writing with Python wave module for better performance
- Consolidate duplicate downsample checks for cleaner code
- Maintain list[av.AudioFrame] input interface
- Required for Silero VAD which needs 16kHz mono audio
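A sketch of the torchaudio-based merge; `merge_to_16k_wav` and the exact frame handling are illustrative (planar frames assumed), while `torchaudio.functional.resample` and the `wave` calls are real APIs:

```python
import wave

import av
import numpy as np
import torch
import torchaudio

TARGET_RATE = 16_000  # Silero VAD expects 16 kHz mono

def merge_to_16k_wav(frames: list[av.AudioFrame], path: str) -> None:
    """Concatenate decoded frames, downmix to mono, resample, write PCM16 WAV."""
    audio = np.concatenate([f.to_ndarray() for f in frames], axis=-1)
    if audio.dtype == np.int16:
        audio = audio.astype(np.float32) / 32768.0  # normalize PCM16 to [-1, 1]
    waveform = torch.from_numpy(audio.astype(np.float32))
    if waveform.ndim > 1 and waveform.shape[0] > 1:
        waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono
    src_rate = frames[0].sample_rate
    if src_rate != TARGET_RATE:  # skip conversion when already 16 kHz
        waveform = torchaudio.functional.resample(waveform, src_rate, TARGET_RATE)
    pcm16 = (waveform.clamp(-1, 1) * 32767).to(torch.int16).numpy()
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(TARGET_RATE)
        w.writeframes(pcm16.tobytes())
```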
* fix: replace PyAV AudioResampler with torchaudio solution
- Resolves ValueError: [Errno 22] Invalid argument in AudioMergeProcessor
- Replaces problematic PyAV AudioResampler with torchaudio.functional.resample
- Optimizes processing to skip unnecessary conversions when audio is already 16kHz mono
- Uses direct WAV writing with Python's wave module for better performance
- Fixes test_basic_process to disable diarization (pyannote dependency not installed)
- Updates test expectations to match actual processor behavior
- Removes unused pydub dependency from pyproject.toml
- Adds comprehensive TEST_ANALYSIS.md documenting test suite status
* feat: add parameterized test for both diarization modes
- Adds @pytest.mark.parametrize to test_basic_process with enable_diarization=[False, True]
- Test with diarization=False always passes (tests core AudioMergeProcessor functionality)
- Test with diarization=True gracefully skips when pyannote.audio is not installed
- Provides comprehensive test coverage for both pipeline configurations
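The pattern is roughly (test body elided):

```python
import pytest

@pytest.mark.parametrize("enable_diarization", [False, True])
def test_basic_process(enable_diarization: bool) -> None:
    if enable_diarization:
        # skip gracefully instead of failing when the optional dep is absent
        pytest.importorskip("pyannote.audio")
    # ... run the file pipeline with diarization toggled accordingly ...
```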
* fix: resolve pipeline property naming conflict in AudioDiarizationPyannoteProcessor
- Renames 'pipeline' property to 'diarization_pipeline' to avoid conflict with base Processor.pipeline attribute
- Fixes AttributeError: 'property 'pipeline' object has no setter' when set_pipeline() is called
- Updates property usage in _diarize method to use new name
- Now correctly supports pipeline initialization for diarization processing
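Roughly as follows; the base-class stub, lazy construction, and model id are illustrative:

```python
class Processor:
    """Stub of the project's base class; the real one assigns self.pipeline
    via set_pipeline(), which is why a 'pipeline' property clashed."""
    pipeline = None

class AudioDiarizationPyannoteProcessor(Processor):
    _diarization_pipeline = None

    @property
    def diarization_pipeline(self):
        # lazily build the pyannote pipeline under its own, non-conflicting name
        if self._diarization_pipeline is None:
            from pyannote.audio import Pipeline
            self._diarization_pipeline = Pipeline.from_pretrained(
                "pyannote/speaker-diarization-3.1"  # model id is a guess
            )
        return self._diarization_pipeline
```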
* fix: add local for pyannote
* test: add diarization test
* fix: resample on audio merge now working
* fix: correctly restore timestamp
* fix: display exception in a threaded processor if that happens
* Update pyproject.toml
* ci: remove option
* ci: update astral-sh/setup-uv
* test: add monadical url for pytest-recording
* refactor: remove previous version
* build: move faster whisper to local dep
* test: fix missing import
* refactor: improve main_file_pipeline organization and error handling
- Move all imports to the top of the file
- Create unified EmptyPipeline class to replace duplicate mock pipeline code
- Remove timeout and fallback logic - let processors handle their own retries
- Fix error handling to raise any exception from parallel tasks
- Add proper type hints and validation for captured results
* fix: wrong function
* fix: remove task_done
* feat: add configurable file processing timeouts for modal processors
- Add TRANSCRIPT_FILE_TIMEOUT setting (default: 600s) for file transcription
- Add DIARIZATION_FILE_TIMEOUT setting (default: 600s) for file diarization
- Replace hardcoded timeout=600 with configurable settings in modal processors
- Allows customization of timeout values via environment variables
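Presumably plain settings fields, e.g. with pydantic-settings (an assumption about the project's config style):

```python
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # file-pipeline timeouts for the Modal processors, overridable via env vars
    TRANSCRIPT_FILE_TIMEOUT: int = 600   # seconds
    DIARIZATION_FILE_TIMEOUT: int = 600  # seconds

settings = Settings()
# usage in a processor call, e.g.:
# httpx.post(url, ..., timeout=settings.TRANSCRIPT_FILE_TIMEOUT)
```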
* fix: use logger
* fix: meetings processed by the worker now use the file pipeline
* fix: topic not gathered
* refactor: remove prepare(), pipeline now works
* refactor: implement many review suggestions from Igor
* test: add test for test_pipeline_main_file
* refactor: remove doc
* doc: add doc
* ci: update build to use native arm64 builder
* fix: merge fixes
* refactor: changes from Igor's review + add test (not run by default) for the gpu modal part
* ci: update to our own runner linux-amd64
* ci: try using suggested mode=min
* fix: update diarizer for latest modal, and use volume
* fix: modal file extension detection
* fix: put the diarizer as A100
* feat: remove support for sqlite, 100% postgres
* fix: more migrations and make datetimes timezone-aware in postgres
* fix: change how the database is obtained, and use contextvar to get a different instance per event loop
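The contextvar approach, sketched with the `databases` library (library choice and URL are assumptions; `get_database()` is named in a later commit message):

```python
from contextvars import ContextVar

from databases import Database

DATABASE_URL = "postgresql+asyncpg://localhost/reflector"  # placeholder
_database: ContextVar[Database | None] = ContextVar("database", default=None)

def get_database() -> Database:
    """Return a Database bound to the current context.

    The web app and each Celery task run in their own context/event loop,
    so each gets its own instance instead of sharing one across loops.
    """
    db = _database.get()
    if db is None:
        db = Database(DATABASE_URL)
        _database.set(db)
    return db
```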
* test: properly use client fixture that handles lifespan/database connections
* fix: add missing client fixture parameters to test functions
This commit fixes NameError issues where test functions were trying to use
the 'client' fixture but didn't have it as a parameter. The changes include:
1. Added 'client' parameter to test functions in:
   - test_transcripts_audio_download.py (6 functions including fixture)
   - test_transcripts_speaker.py (3 functions)
   - test_transcripts_upload.py (1 function)
   - test_transcripts_rtc_ws.py (2 functions + appserver fixture)
2. Resolved naming conflicts in test_transcripts_rtc_ws.py where both HTTP
client and StreamClient were using variable name 'client'. StreamClient
instances are now named 'stream_client' to avoid conflicts.
3. Added missing 'from reflector.app import app' import in rtc_ws tests.
Background: the previously implemented contextvars solution with the
get_database() function resolves asyncio event loop conflicts in Celery
tasks. The global client fixture was also created to replace manual
AsyncClient instances, ensuring proper FastAPI application lifecycle
management and database connections during tests.
All tests now pass except for 2 pre-existing RTC WebSocket test failures
related to asyncpg connection issues unrelated to these fixes.
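The global fixture probably resembles this sketch; using `asgi-lifespan` to run startup/shutdown hooks under httpx's ASGITransport is an assumption, not confirmed by the log:

```python
import httpx
import pytest_asyncio
from asgi_lifespan import LifespanManager

from reflector.app import app

@pytest_asyncio.fixture
async def client():
    # run the app's lifespan so database connections open and close properly
    async with LifespanManager(app):
        transport = httpx.ASGITransport(app=app)
        async with httpx.AsyncClient(transport=transport, base_url="http://test") as c:
            yield c
```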
* fix: ensure tasks are correctly closed
* fix: make a separate event loop for the live server
* fix: make default settings point at postgres
* build: move pytest-docker deps out of dev, into the tests group
* fix: refactor modal API key configuration for better separation of concerns
- Split generic MODAL_API_KEY into service-specific keys:
  - TRANSCRIPT_API_KEY for transcription service
  - DIARIZATION_API_KEY for diarization service
  - TRANSLATE_API_KEY for translation service
- Remove deprecated *_MODAL_API_KEY settings
- Add proper validation to ensure URLs are set when using modal processors
- Update README with new configuration format
BREAKING CHANGE: Configuration keys have changed. Update your .env file:
- TRANSCRIPT_MODAL_API_KEY → TRANSCRIPT_API_KEY
- LLM_MODAL_API_KEY → (removed, use TRANSCRIPT_API_KEY)
- Add DIARIZATION_API_KEY and TRANSLATE_API_KEY if using those services
* fix: update Modal backend configuration to use service-specific API keys
- Changed from generic MODAL_API_KEY to service-specific keys:
  - TRANSCRIPT_MODAL_API_KEY for transcription
  - DIARIZATION_MODAL_API_KEY for diarization
  - TRANSLATION_MODAL_API_KEY for translation
- Updated audio_transcript_modal.py and audio_diarization_modal.py to use modal_api_key parameter
- Updated documentation in README.md, CLAUDE.md, and env.example
* feat: implement auto/modal pattern for translation processor
- Created TranscriptTranslatorAutoProcessor following the same pattern as transcript/diarization
- Created TranscriptTranslatorModalProcessor with TRANSLATION_MODAL_API_KEY support
- Added TRANSLATION_BACKEND setting (defaults to "modal")
- Updated all imports to use TranscriptTranslatorAutoProcessor instead of TranscriptTranslatorProcessor
- Updated env.example with TRANSLATION_BACKEND and TRANSLATION_MODAL_API_KEY
- Updated test to expect TranscriptTranslatorModalProcessor name
- All tests passing
* refactor: simplify transcript_translator base class to match other processors
- Moved all implementation from base class to modal processor
- Base class now only defines abstract _translate method
- Follows the same minimal pattern as audio_diarization and audio_transcript base classes
- Updated test mock to use _translate instead of get_translation
- All tests passing
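The minimal base pattern, with a guessed `_translate` signature:

```python
from abc import ABC, abstractmethod

class TranscriptTranslatorProcessor(ABC):
    """Base class: backends only implement _translate."""

    @abstractmethod
    async def _translate(self, text: str, target_language: str) -> str: ...

class TranscriptTranslatorModalProcessor(TranscriptTranslatorProcessor):
    async def _translate(self, text: str, target_language: str) -> str:
        # call the Modal translation endpoint with TRANSLATION_MODAL_API_KEY here
        raise NotImplementedError
```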
* chore: clean up settings and improve type annotations
- Remove deprecated generic API key variables from settings
- Add comments to group Modal-specific settings
- Improve type annotations for modal_api_key parameters
* fix: typing
* fix: passing key to openai
* test: fix rtc test failing due to change on transcript
It also correctly sets up the database from sqlite, in case our
configuration is set up for postgres.
* ci: deactivate translation backend by default
* test: fix modal->mock
* refactor: implement Igor's review, mock renamed to passthrough
This features a new modal endpoint, and a completely new way to build the
summary.
## SummaryBuilder
The summary builder is based on a conversational model, where an exchange
between the model and the user is made. This allows more context
inclusion and better respect of the rules.
It requires an endpoint offering an OpenAI-like completions API
(/v1/chat/completions).
## vLLM Hermes3
Unlike the previous deployment, this one uses vLLM, which gives an
OpenAI-like completions endpoint out of the box. It can also handle
guided JSON generation, so jsonformer is not needed. But the model is
quite good at following a JSON schema if asked in the prompt.
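Against a vLLM server the conversational exchange could look like this; the base URL and model id are assumptions:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible /v1/chat/completions endpoint
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
MODEL = "NousResearch/Hermes-3-Llama-3.1-8B"  # placeholder model id

messages = [
    {"role": "system", "content": "You are a meeting summarizer."},
    {"role": "user", "content": "Identify the participants in this transcript: ..."},
]
first = client.chat.completions.create(model=MODEL, messages=messages)

# multi-turn: feed the answer back so later questions keep the full context
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Now list the key subjects discussed."})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```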
## Conversion of long/short into summary builder
The builder identifies participants, finds key subjects, gets a
summary for each, then gets a quick recap.
The quick recap is used as the short_summary, while the markdown including
the quick recap + key subjects + summaries is used for the
long_summary.
This is why the nextjs component has to be updated, to correctly style
h1 and keep the new lines of the markdown.
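The assembly of the two summaries, as described, with invented names:

```python
def build_summaries(quick_recap: str, subjects: list[tuple[str, str]]) -> tuple[str, str]:
    """Return (short_summary, long_summary) from the builder's outputs."""
    short_summary = quick_recap
    parts = ["# Quick recap", "", quick_recap, ""]
    for title, summary in subjects:  # (key subject, its summary)
        parts += [f"# {title}", "", summary, ""]
    long_summary = "\n".join(parts)  # markdown with h1 headings and newlines preserved
    return short_summary, long_summary
```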