* fix: refactor modal API key configuration for better separation of concerns
- Split generic MODAL_API_KEY into service-specific keys:
- TRANSCRIPT_API_KEY for transcription service
- DIARIZATION_API_KEY for diarization service
- TRANSLATE_API_KEY for translation service
- Remove deprecated *_MODAL_API_KEY settings
- Add proper validation to ensure URLs are set when using modal processors
- Update README with new configuration format
BREAKING CHANGE: Configuration keys have changed. Update your .env file:
- TRANSCRIPT_MODAL_API_KEY → TRANSCRIPT_API_KEY
- LLM_MODAL_API_KEY → (removed, use TRANSCRIPT_API_KEY)
- Add DIARIZATION_API_KEY and TRANSLATE_API_KEY if using those services
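For the migration above, a .env could look roughly like this (placeholder values; only set the keys for the services you actually use — note the next commit renames these keys to the *_MODAL_API_KEY form):

```
# Placeholder values, not real keys
TRANSCRIPT_API_KEY=sk-transcript-placeholder
DIARIZATION_API_KEY=sk-diarization-placeholder
TRANSLATE_API_KEY=sk-translate-placeholder
```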
* fix: update Modal backend configuration to use service-specific API keys
- Changed from generic MODAL_API_KEY to service-specific keys:
- TRANSCRIPT_MODAL_API_KEY for transcription
- DIARIZATION_MODAL_API_KEY for diarization
- TRANSLATION_MODAL_API_KEY for translation
- Updated audio_transcript_modal.py and audio_diarization_modal.py to use the modal_api_key parameter
- Updated documentation in README.md, CLAUDE.md, and env.example
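As a rough sketch, a Modal-backed processor could accept its service-specific key like this (class and attribute names are assumptions, not the actual implementation):

```python
from typing import Optional


class AudioTranscriptModalProcessor:
    """Sketch only: a Modal-backed processor taking its own API key."""

    def __init__(self, modal_api_key: Optional[str] = None) -> None:
        # Each service receives its own key instead of a shared MODAL_API_KEY.
        self.modal_api_key = modal_api_key

    def _auth_headers(self) -> dict:
        # Hypothetical header construction for calls to the Modal endpoint.
        return {"Authorization": f"Bearer {self.modal_api_key}"}
```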
* feat: implement auto/modal pattern for translation processor
- Created TranscriptTranslatorAutoProcessor following the same pattern as transcript/diarization
- Created TranscriptTranslatorModalProcessor with TRANSLATION_MODAL_API_KEY support
- Added TRANSLATION_BACKEND setting (defaults to "modal")
- Updated all imports to use TranscriptTranslatorAutoProcessor instead of TranscriptTranslatorProcessor
- Updated env.example with TRANSLATION_BACKEND and TRANSLATION_MODAL_API_KEY
- Updated test to expect TranscriptTranslatorModalProcessor name
- All tests passing
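Schematically, the auto/modal dispatch looks roughly like the sketch below (only TranscriptTranslatorModalProcessor, TRANSLATION_BACKEND, and TRANSLATION_MODAL_API_KEY come from this change; the rest is assumed for illustration):

```python
import os


class TranscriptTranslatorModalProcessor:
    """Stub standing in for the real Modal-backed translator."""

    def __init__(self) -> None:
        self.api_key = os.environ.get("TRANSLATION_MODAL_API_KEY")


class TranscriptTranslatorAutoProcessor:
    """Picks the concrete translator based on the TRANSLATION_BACKEND setting."""

    def __new__(cls):
        backend = os.environ.get("TRANSLATION_BACKEND", "modal")
        if backend == "modal":
            return TranscriptTranslatorModalProcessor()
        raise ValueError(f"Unsupported TRANSLATION_BACKEND: {backend}")
```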
* refactor: simplify transcript_translator base class to match other processors
- Moved all implementation from the base class to the modal processor
- The base class now only defines the abstract _translate method
- Follows the same minimal pattern as audio_diarization and audio_transcript base classes
- Updated test mock to use _translate instead of get_translation
- All tests passing
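A minimal sketch of the resulting base class (the _translate signature here is an assumption, not the actual one):

```python
from abc import ABC, abstractmethod


class TranscriptTranslatorProcessor(ABC):
    """Minimal base class: concrete backends implement _translate themselves."""

    @abstractmethod
    def _translate(self, text: str, target_language: str) -> str:
        """Return the translated text (signature assumed for illustration)."""
```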
* chore: clean up settings and improve type annotations
- Remove deprecated generic API key variables from settings
- Add comments to group Modal-specific settings
- Improve type annotations for modal_api_key parameters
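For illustration, the grouped and typed settings could look roughly like this (assuming pydantic-settings is used; the actual settings module may differ):

```python
from typing import Optional

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Modal-specific settings (one key per service, all optional)
    TRANSCRIPT_MODAL_API_KEY: Optional[str] = None
    DIARIZATION_MODAL_API_KEY: Optional[str] = None
    TRANSLATION_MODAL_API_KEY: Optional[str] = None
```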
* fix: typing
* fix: pass API key to OpenAI
* test: fix RTC test failing due to transcript changes
It also correctly sets up the database to use SQLite in case our configuration
is set to PostgreSQL.
* ci: deactivate translation backend by default
* test: fix modal->mock
* refactor: implement Igor's review feedback, rename mock to passthrough
This introduces a new Modal endpoint and a completely new way to build the
summary.
## SummaryBuilder
The summary builder is based on a conversational model, where an exchange
takes place between the model and the user. This allows more context to be
included and better adherence to the rules.
It requires an endpoint exposing an OpenAI-compatible chat completions API
(/v1/chat/completions).
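As an illustration, a multi-turn exchange against such an endpoint could look like this (base URL, model name, and prompts are placeholders):

```python
from openai import OpenAI

# Placeholder endpoint and key; point these at the deployed Modal service.
client = OpenAI(base_url="https://example.modal.run/v1", api_key="sk-placeholder")

messages = [
    {"role": "system", "content": "You summarize meeting transcripts."},
    {"role": "user", "content": "Here is the transcript: ..."},
]
first = client.chat.completions.create(model="hermes-3", messages=messages)

# Feed the model's answer back to keep context across steps.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Now list the key subjects."})
second = client.chat.completions.create(model="hermes-3", messages=messages)
print(second.choices[0].message.content)
```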
## vLLM Hermes3
Unlike the previous deployment, this one uses vLLM, which provides an
OpenAI-compatible completions endpoint out of the box. It can also handle
guided JSON generation, so jsonformer is no longer needed. That said, the
model is quite good at following a JSON schema when asked to in the prompt.
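For example, guided JSON generation can be requested from vLLM's OpenAI-compatible server by passing guided_json through extra_body (the schema and model name below are illustrative, not the ones actually deployed):

```python
from openai import OpenAI

client = OpenAI(base_url="https://example.modal.run/v1", api_key="sk-placeholder")

# Illustrative schema: constrain the model to a JSON object with a participant list.
schema = {
    "type": "object",
    "properties": {"participants": {"type": "array", "items": {"type": "string"}}},
    "required": ["participants"],
}

resp = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-8B",
    messages=[{"role": "user", "content": "List the participants as JSON."}],
    extra_body={"guided_json": schema},  # vLLM-specific guided decoding parameter
)
print(resp.choices[0].message.content)
```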
## Conversion of long/short into summary builder
The builder identifies participants, finds key subjects, gets a summary for
each, then gets a quick recap.
The quick recap is used as the short_summary, while the markdown including
the quick recap, key subjects, and summaries is used for the long_summary.
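Schematically, the conversion could be assembled like this (function and field names are assumptions for illustration):

```python
def build_summaries(quick_recap: str, subjects: dict[str, str]) -> tuple[str, str]:
    """Assemble short_summary and long_summary from the builder's outputs."""
    short_summary = quick_recap
    # Long summary: quick recap followed by one h1 section per key subject.
    sections = [f"# Quick recap\n\n{quick_recap}"]
    for subject, summary in subjects.items():
        sections.append(f"# {subject}\n\n{summary}")
    long_summary = "\n\n".join(sections)
    return short_summary, long_summary
```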
This is why the Next.js component has to be updated, to correctly style h1
headings and preserve the newlines in the markdown.