* feat: add change_seq to transcripts for ingestion support
Add a monotonically increasing change_seq column to the transcript table,
backed by a PostgreSQL sequence and BEFORE INSERT OR UPDATE trigger. Every
mutation gets a new sequence value, letting external ingesters checkpoint
and never miss an update.
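A minimal sketch of what that migration could look like in Alembic (object names here are illustrative; only the mechanism, a sequence plus a BEFORE INSERT OR UPDATE trigger, comes from this commit):

```python
"""Illustrative Alembic migration; names are hypothetical."""
from alembic import op


def upgrade() -> None:
    op.execute("CREATE SEQUENCE transcript_change_seq")
    # DEFAULT backfills existing rows; the trigger takes over afterwards.
    op.execute(
        "ALTER TABLE transcript ADD COLUMN change_seq BIGINT "
        "NOT NULL DEFAULT nextval('transcript_change_seq')"
    )
    op.execute(
        """
        CREATE FUNCTION bump_transcript_change_seq() RETURNS trigger AS $$
        BEGIN
            -- Stamp every INSERT/UPDATE with a fresh, monotonically
            -- increasing value so ingesters can checkpoint on it.
            NEW.change_seq := nextval('transcript_change_seq');
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER transcript_change_seq_bump
        BEFORE INSERT OR UPDATE ON transcript
        FOR EACH ROW EXECUTE FUNCTION bump_transcript_change_seq();
        """
    )
```

An ingester can then checkpoint by remembering the highest change_seq it has seen and querying WHERE change_seq > :last_seen ORDER BY change_seq.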
* chore: regenerate frontend API types
* fix: live flow real-time updates during processing
Three gaps caused transcript pages to require manual refresh after
live recording/processing:
1. UserEventsProvider only invalidated list queries on TRANSCRIPT_STATUS,
not individual transcript queries. Now parses data.id from the event
and calls invalidateTranscript for the specific transcript.
2. useWebSockets had no reconnection logic, so a dropped WS silently
killed all real-time updates. Added exponential backoff reconnection
(1s-30s, max 10 retries) with intentional close detection (policy
sketched after this list).
3. No polling fallback, so the WS was a single point of failure. Added
a conditional refetchInterval to useTranscriptGet that polls every 5s
while transcript status is processing/uploaded/recording.
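The actual hook is TypeScript; the following is only a language-neutral sketch (written in Python, with illustrative names) of the backoff policy from item 2: 1s initial delay doubling to a 30s cap, at most 10 attempts, no retry after an intentional close.

```python
import asyncio

BASE_DELAY = 1.0   # seconds before the first retry
MAX_DELAY = 30.0   # backoff cap
MAX_RETRIES = 10   # give up after this many consecutive failures


async def run_with_reconnect(connect, intentionally_closed) -> None:
    """Reconnect with exponential backoff (names are illustrative)."""
    attempt = 0
    while attempt < MAX_RETRIES:
        try:
            await connect()   # runs until the socket drops
            attempt = 0       # a successful session resets the counter
        except ConnectionError:
            pass              # treat a failed attempt like a drop
        if intentionally_closed():
            return            # e.g. the page navigated away: do not retry
        delay = min(BASE_DELAY * 2 ** attempt, MAX_DELAY)
        await asyncio.sleep(delay)
        attempt += 1
```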
* feat: type-safe WebSocket events via OpenAPI stub
Define Pydantic models with Literal discriminators for all WS events
(9 transcript-level, 5 user-level). Expose via stub GET endpoints so
pnpm openapi generates TS discriminated unions with exhaustive switch
narrowing on the frontend (model shape sketched after the list below).
- New server/reflector/ws_events.py with TranscriptWsEvent and UserWsEvent
- Tighten backend emit signatures with TranscriptEventName literal
- Frontend uses generated types, removes Zod schema and manual casts
- Fix pre-existing bugs: waveform mapping, FINAL_LONG_SUMMARY field name
- STATUS value now typed as TranscriptStatus literal end-to-end
- TOPIC handler simplified to query invalidation only (avoids shape mismatch)
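A rough sketch of the pattern, assuming Pydantic v2; only the STATUS and TOPIC event names appear in this log, so the payload types and the remaining members of the 9-event union are elided:

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field


class StatusEvent(BaseModel):
    event: Literal["STATUS"]
    data: dict  # the real model types this as a TranscriptStatus payload


class TopicEvent(BaseModel):
    event: Literal["TOPIC"]
    data: dict  # GetTranscriptTopic-shaped, per a later commit


# The discriminator is what makes pnpm openapi emit a TS discriminated
# union, so a frontend switch on `event` narrows each payload type.
TranscriptWsEvent = Annotated[
    Union[StatusEvent, TopicEvent],  # ...plus the other 7 events
    Field(discriminator="event"),
]
```

The stub GET endpoints simply declare these unions as their response_model so they land in the OpenAPI schema; they are never called at runtime.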
* fix: restore TOPIC WS handler with immediate state update
The setTopics call provides instant topic rendering during live
transcription. Query invalidation still follows for full data sync.
* fix: align TOPIC WS event data with GetTranscriptTopic shape
Convert TranscriptTopic → GetTranscriptTopic in pipeline before
emitting, so WS sends segments instead of words. Removes the
`as unknown as Topic` cast on the frontend.
* fix: use NonEmptyString and TranscriptStatus in user WS event models
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* feat: local LLM via Ollama + structured output response_format
- Add setup script (scripts/setup-local-llm.sh) for one-command Ollama setup
Mac: native Metal GPU, Linux: containerized via docker-compose profiles
- Add ollama-gpu and ollama-cpu docker-compose profiles for Linux
- Add extra_hosts to server/hatchet-worker-llm for host.docker.internal
- Pass response_format JSON schema in StructuredOutputWorkflow.extract(),
enabling grammar-based constrained decoding on Ollama/llama.cpp/vLLM/OpenAI
(call shape sketched after this list)
- Update .env.example with Ollama as default LLM option
- Add Ollama PRD and local dev setup docs
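The internals of StructuredOutputWorkflow.extract() are not shown in this log; the sketch below only illustrates the response_format call shape against an OpenAI-compatible endpoint (the Ollama URL, model name, and schema are assumptions):

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint on its default port.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}},
    "required": ["title"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="llama3.1",  # whichever model the setup script pulled
    messages=[{"role": "user", "content": "Summarize the meeting as JSON."}],
    # The JSON schema drives grammar-constrained decoding on backends
    # that support it (Ollama/llama.cpp/vLLM) and strict mode on OpenAI.
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "summary", "schema": schema, "strict": True},
    },
)
print(resp.choices[0].message.content)
```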
* refactor: move Ollama services to docker-compose.standalone.yml
Ollama profiles (ollama-gpu, ollama-cpu) are only for Linux standalone
deployment. Mac devs never use them. Separate file keeps the main
compose clean and provides a natural home for future standalone services
(MinIO, etc.).
Linux: docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ollama-gpu up -d
Mac: docker compose up -d (native Ollama, no standalone file needed)
* fix: correct PRD goal (demo/eval, not dev replacement) and processor naming
* chore: remove completed PRD, rename setup doc, drop response_format tests
- Remove docs/01_ollama.prd.md (implementation complete)
- Rename local-dev-setup.md -> standalone-local-setup.md
- Remove TestResponseFormat class from test_llm_retry.py
* docs: resolve standalone storage step — skip S3 for live-only mode
* docs: add TASKS.md for standalone env defaults + setup script work
* feat: add unified setup-local-dev.sh for standalone deployment
A single script takes a fresh clone to a working Reflector: Ollama/LLM
setup, env file generation (server/.env + www/.env.local), docker compose
up, health checks. No Hatchet in standalone; the live pipeline is pure
Celery.
* chore: rename to setup-standalone, remove redundant setup-local-llm.sh
* feat: add custom S3 endpoint support + Garage standalone storage
Add TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL setting to enable S3-compatible
backends (Garage, MinIO). When set, uses path-style addressing and
routes all requests to the custom endpoint. When unset, AWS behavior
is unchanged (client construction sketched after this list).
- AwsStorage: accept aws_endpoint_url, pass to all 6 session.client()
calls, configure path-style addressing and base_url
- Fix 4 direct AwsStorage constructions in Hatchet workflows to pass
endpoint_url (they would have silently targeted the wrong endpoint)
- Standalone: add Garage service to docker-compose.standalone.yml,
setup script initializes layout/bucket/key and writes credentials
- Fix compose_cmd() bug: the Mac path was missing the standalone yml
- garage.toml template with runtime secret generation via openssl
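A minimal sketch of the client construction described above, using boto3 (the exact wiring inside AwsStorage may differ):

```python
import boto3
from botocore.config import Config


def make_s3_client(endpoint_url: str | None):
    kwargs = {}
    if endpoint_url:
        # Garage/MinIO: route everything to the custom endpoint and use
        # path-style URLs (bucket in the path, not the hostname).
        kwargs["endpoint_url"] = endpoint_url
        kwargs["config"] = Config(s3={"addressing_style": "path"})
    # With endpoint_url unset, behavior is stock AWS (virtual-hosted style).
    return boto3.session.Session().client("s3", **kwargs)
```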
* fix: standalone setup — garage config, symlink handling, healthcheck
- garage.toml: fix rpc_secret field name (was secret_transmitter),
move to top-level per Garage v1.1.0 spec, remove unused [s3_web]
- setup-standalone.sh: resolve symlinked .env files before writing,
always ensure all standalone-critical vars via env_set,
fix garage key create/info syntax (positional arg, not --name),
avoid overwriting key secret with "(redacted)" on re-run,
use compose_cmd in health check
- docker-compose.standalone.yml: fix garage healthcheck (no curl in
image, use /garage stats instead)
* docs: update standalone md — symlink handling, garage config template
* docs: add troubleshooting section + port conflict check in setup script
Port conflicts from stale next dev / other worktree processes silently
shadow Docker container port mappings, causing env vars to appear ignored.
* fix: invalidate transcript query on STATUS websocket event
Without this, the processing page never redirects after completion
because the redirect logic watches the REST query data, not the
WebSocket status state.
Cherry-picked from feat-dag-progress (faec509a).
* fix: local env setup (#855)
* Ensure rate limit
* Increase nextjs compilation speed
* Fix daily no content handling
* Simplify daily webhook creation
* Fix webhook request validation
* feat: add local pyannote file diarization processor (#858)
* feat: add local pyannote file diarization processor
Enables file diarization without Modal by using pyannote.audio locally.
Downloads model bundle from S3 on first use, caches locally, patches
config to use local paths. Set DIARIZATION_BACKEND=pyannote to enable.
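A rough sketch of the first-use download-and-cache behavior; the cache location, bucket layout, and function names are assumptions:

```python
from pathlib import Path

import boto3

CACHE_DIR = Path.home() / ".cache" / "reflector" / "pyannote"  # hypothetical


def ensure_model_bundle(bucket: str, key: str) -> Path:
    """Download the pyannote model bundle once and reuse the local copy."""
    local = CACHE_DIR / Path(key).name
    if not local.exists():
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        boto3.client("s3").download_file(bucket, key, str(local))
    return local  # the diarizer config is then patched to this path
```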
* fix: standalone setup enables pyannote diarization and public mode
Replace DIARIZATION_ENABLED=false with DIARIZATION_BACKEND=pyannote so
file uploads get speaker diarization out of the box. Add PUBLIC_MODE=true
so unauthenticated users can list/browse transcripts.
* fix: touch env files before first compose_cmd in standalone setup
docker-compose.yml references www/.env.local as env_file, but the
setup script only creates it in step 4. compose_cmd calls in step 3
(Garage) fail on a fresh clone when the file doesn't exist yet.
* feat: standalone uses self-hosted GPU service for transcription+diarization
Replace in-process pyannote approach with self-hosted gpu/self_hosted/ service.
Same HTTP API as Modal; TRANSCRIPT_URL/DIARIZATION_URL simply point to the local container.
- Add gpu/self_hosted/Dockerfile.cpu (GPU Dockerfile minus NVIDIA CUDA)
- Add S3 model bundle fallback in diarizer.py when HF_TOKEN not set
- Add gpu service to docker-compose.standalone.yml with compose env overrides
- Fix /browse empty in PUBLIC_MODE (search+list queries filtered out roomless transcripts)
- Remove audio_diarization_pyannote.py, file_diarization_pyannote.py and tests
- Remove pyannote-audio from server local deps
* fix: allow unauthenticated GPU requests when no API key configured
OAuth2PasswordBearer with auto_error=True rejects requests without an
Authorization header before apikey_auth can check whether auth is
required.
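A sketch of the fix under FastAPI, with hypothetical dependency names (auto_error=False makes a missing header yield None instead of an immediate 401):

```python
from fastapi import Depends, HTTPException
from fastapi.security import OAuth2PasswordBearer

# auto_error=False: a missing Authorization header yields token=None
# instead of raising 401, letting apikey_auth decide what to do.
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)


async def apikey_auth(token: str | None = Depends(oauth2_scheme)):
    configured_key = get_configured_api_key()  # hypothetical settings lookup
    if configured_key is None:
        return None  # no key configured: allow unauthenticated GPU requests
    if token != configured_key:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return token
```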
* fix: rename standalone gpu service to cpu to match Dockerfile.cpu usage
* docs: add programmatic testing section and fix gpu->cpu naming in setup script/docs
- Add "Testing programmatically" section to standalone docs with curl commands
for creating transcript, uploading audio, polling status, checking result
- Fix setup-standalone.sh to reference `cpu` service (was still `gpu` after rename)
- Update all docs references from gpu to cpu service naming
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* Fix websocket disconnect errors
* Fix event loop is closed in Celery workers
* Allow reprocessing idle multitrack transcripts
* fix: set source_kind to FILE on audio file upload
The upload endpoint left source_kind as the default LIVE even when
a file was uploaded. Now sets it to FILE when the upload completes.
* Add hatchet env vars
* fix: improve port conflict detection and ollama model check in standalone setup
- Filter OrbStack/Docker Desktop PIDs from port conflict check (false positives on Mac)
- Check all infra ports (5432, 6379, 3900, 3903) not just app ports
- Fix ollama model detection to match on name column only
- Document OrbStack and cross-project port conflicts in troubleshooting
* fix: processing page auto-redirect after file upload completes
Three fixes for the processing page not redirecting when status becomes "ended":
- Add useWebSockets to processing page so it receives STATUS events
- Remove OAuth2PasswordBearer from auth_none (it broke WebSocket endpoints with 500s)
- Reconnect stale Redis in ws_manager when Celery worker reuses dead event loop
* fix: mock Celery broker in idle transcript validation test
test_validation_idle_transcript_with_recording_allowed called
validate_transcript_for_processing without mocking
task_is_scheduled_or_active, which attempts a real Celery
broker connection (AMQP port 5672). Other tests in the same
file already mock this — apply the same pattern here.
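The pattern referenced above, as a sketch (import paths and the transcript fixture are hypothetical; only the function names come from this log):

```python
from unittest.mock import patch

from reflector.worker.process import (  # hypothetical import path
    validate_transcript_for_processing,
)


def test_validation_idle_transcript_with_recording_allowed(transcript):
    # transcript: project fixture; the patch target mirrors wherever
    # task_is_scheduled_or_active is looked up, per the other tests.
    with patch(
        "reflector.worker.process.task_is_scheduled_or_active",
        return_value=False,  # pretend no Celery task is scheduled/active
    ):
        validate_transcript_for_processing(transcript)  # no AMQP connection
```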
* Enable server host mode
* Fix webrtc connection
* Remove turbopack
* fix: standalone GPU service connectivity with host network mode
Server runs with network_mode: host and can't resolve Docker service
names. Publish cpu port as 8100 on host, point server at localhost:8100.
Worker stays on bridge network using cpu:8000. Add dummy
TRANSCRIPT_MODAL_API_KEY since OpenAI SDK requires it even for local
endpoints.
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: Sergey Mankovsky <sergey@mankovsky.dev>
* feat: use file pipeline for upload and reprocess action
* fix: make file pipeline correctly report status events
* fix: duplication of transcripts_controller
* fix: tests
* test: fix file upload test
* test: fix reprocess
* fix: also patch from main_file_pipeline
(how patching is done depends on the file import, unfortunately)
* feat: better highlight
* feat(search): add long_summary to search vector for improved search results
- Update search vector to include long_summary with weight B (between title A and webvtt C)
- Modify SearchController to fetch long_summary and prioritize its snippets
- Generate snippets from long_summary first (max 2), then from webvtt for remaining slots
- Add comprehensive tests for long_summary search functionality
- Create migration to update search_vector_en column in PostgreSQL
This improves search quality by including summarized content, which often
contains key topics and themes that may not be explicitly mentioned in the
transcript (weighting sketched below).
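A sketch of the weighting as a backfill statement (the real migration may instead define a trigger or generated column; only the title/long_summary/webvtt weights come from this commit):

```python
from alembic import op


def upgrade() -> None:
    # Weight ordering: title (A) > long_summary (B) > webvtt (C).
    op.execute(
        """
        UPDATE transcript SET search_vector_en =
            setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
            setweight(to_tsvector('english', coalesce(long_summary, '')), 'B') ||
            setweight(to_tsvector('english', coalesce(webvtt, '')), 'C')
        """
    )
```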
* fix: address code review feedback for search enhancements
- Fix test file inconsistencies by removing references to non-existent model fields
- Comment out tests for unimplemented features (room_ids, status filters, date ranges)
- Update tests to only use currently available fields (room_id singular, no room_name/processing_status)
- Mark future functionality tests with @pytest.mark.skip
- Make snippet counts configurable
  - Add LONG_SUMMARY_MAX_SNIPPETS constant (default: 2)
  - Replace hardcoded value with configurable constant
- Improve error handling consistency in WebVTT parsing
  - Use different log levels per error type (debug for malformed, warning for decode, error for unexpected)
  - Add catch-all exception handler for unexpected errors
  - Include stack trace for critical errors
All existing tests pass with these changes.
* fix: correct datetime test to include required duration field
* feat: better highlight
* feat: search room names
* feat: acknowledge deleted room
* feat: search filters fix and rank removal
* chore: minor refactoring
* feat: better matches frontend
* chore: self-review (vibe)
* chore: self-review WIP
* remove swc (vibe)
* search url query sync (vibe)
* better casts and capped while loop
* PR review + simplify frontend hook
* pr: remove search db timeouts
* cleanup tests
* tests cleanup
* frontend cleanup
* index declarations
* refactor frontend (self-review)
* fix search pagination
* clear "x" for search input
* pagination max pages fix
* chore: cleanup
* lockfile
* pr review
* Delete recording with transcript
* Delete confirmation dialog
* Use aws storage abstraction for recording deletion
* Test recording deleted with transcript
* Use get transcript storage
* Fix the test
* Add env vars for recording storage
* feat: remove sqlite support, 100% postgres
* fix: more migrations and make datetimes timezone-aware in postgres
* fix: change how the database is obtained, using a contextvar so each event loop gets its own instance
* test: properly use the client fixture that handles lifetime/database connections
* fix: add missing client fixture parameters to test functions
This commit fixes NameError issues where test functions were trying to use
the 'client' fixture but didn't have it as a parameter. The changes include:
1. Added 'client' parameter to test functions in:
- test_transcripts_audio_download.py (6 functions including fixture)
- test_transcripts_speaker.py (3 functions)
- test_transcripts_upload.py (1 function)
- test_transcripts_rtc_ws.py (2 functions + appserver fixture)
2. Resolved naming conflicts in test_transcripts_rtc_ws.py where both HTTP
client and StreamClient were using variable name 'client'. StreamClient
instances are now named 'stream_client' to avoid conflicts.
3. Added missing 'from reflector.app import app' import in rtc_ws tests.
Background: the previously implemented contextvars solution with the
get_database() function resolves asyncio event loop conflicts in Celery
tasks (a minimal sketch of that pattern follows this commit). The global
client fixture was also created to replace manual AsyncClient instances,
ensuring proper FastAPI application lifecycle management and database
connections during tests.
All tests now pass except for 2 pre-existing RTC WebSocket test failures
related to asyncpg connection issues unrelated to these fixes.
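A minimal sketch of the contextvar pattern, assuming the encode/databases package and a hypothetical settings object; the real get_database() differs in detail:

```python
from contextvars import ContextVar
from typing import Optional

from databases import Database  # assumption: encode/databases

_db: ContextVar[Optional[Database]] = ContextVar("reflector_db", default=None)


def get_database() -> Database:
    """Return a Database bound to the current context.

    Tasks on a fresh event loop (e.g. inside a Celery worker) start from
    a fresh context, so they never reuse a connection object created on
    another, possibly closed, loop.
    """
    db = _db.get()
    if db is None:
        db = Database(settings.DATABASE_URL)  # hypothetical settings object
        _db.set(db)
    return db
```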
* fix: ensure tasks are correctly closed
* fix: make separate event loop for the live server
* fix: make default settings point at postgres
* build: move pytest-docker deps out of dev, into the tests group
Added a new room_id field to transcript, and now set room_id/meeting_id
on transcripts. Use this field to list the transcripts; the listing URL
is now very fast.
* refactor: fix transcript duration type, NaN in waveform, and prepare for postgres migration
* fix: ensure we don't have NaN in waveform
* fix: missing AssertionError
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
* fix: potential empty array
---------
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>