A single script takes a fresh clone to a working Reflector: Ollama/LLM setup,
env file generation (server/.env + www/.env.local), docker compose up, and
health checks. No Hatchet in standalone; the live pipeline is pure Celery.
Ollama profiles (ollama-gpu, ollama-cpu) are only for Linux standalone
deployment; Mac devs never use them. A separate compose file keeps the main
compose clean and provides a natural home for future standalone services
(MinIO, etc.).
Linux: docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ollama-gpu up -d
Mac: docker compose up -d (native Ollama, no standalone file needed)
- Add setup script (scripts/setup-local-llm.sh) for one-command Ollama setup
Mac: native Metal GPU, Linux: containerized via docker-compose profiles
- Add ollama-gpu and ollama-cpu docker-compose profiles for Linux
- Add extra_hosts to server/hatchet-worker-llm for host.docker.internal
- Pass a response_format JSON schema in StructuredOutputWorkflow.extract(),
enabling grammar-based constrained decoding on Ollama/llama.cpp/vLLM/OpenAI
(see the sketch after this list)
- Update .env.example with Ollama as default LLM option
- Add Ollama PRD and local dev setup docs
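A minimal sketch of the constrained-decoding call, assuming an OpenAI-compatible
endpoint; the schema, model name, and prompt are illustrative, not the real
StructuredOutputWorkflow.extract() wiring:

    # Illustrative only: schema, model, and prompt are assumptions.
    from openai import OpenAI
    from pydantic import BaseModel

    class TopicResult(BaseModel):
        title: str
        summary: str

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    resp = client.chat.completions.create(
        model="llama3.1",
        messages=[{"role": "user", "content": "Extract the topic of this chunk: ..."}],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "topic_result",
                "schema": TopicResult.model_json_schema(),
            },
        },
    )
    # The backend (Ollama/llama.cpp/vLLM/OpenAI) constrains decoding to the
    # schema, so the content parses directly into the model.
    data = TopicResult.model_validate_json(resp.choices[0].message.content)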
* feat: set Daily as default video platform
Daily.co has been battle-tested and is ready to be the default.
Whereby remains available for rooms that explicitly set it.
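Hedged illustration of the default flip; field and helper names are hypothetical:

    # Hypothetical names; the point is that Daily is the fallback and
    # Whereby is still honored when a room sets it explicitly.
    DEFAULT_VIDEO_PLATFORM = "daily"

    def resolve_platform(room) -> str:
        return room.video_platform or DEFAULT_VIDEO_PLATFORM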
* feat: enforce Hatchet for all multitrack processing
Remove the use_celery option from rooms: multitrack (Daily) recordings
now always use Hatchet workflows. Celery remains for single-track
(Whereby) file processing only; the simplified routing is sketched after
the bullet list below.
- Remove use_celery column from room table
- Simplify dispatch logic to always use Hatchet for multitracks
- Update tests to mock Hatchet instead of Celery
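Roughly how the simplified routing reads; a sketch with hypothetical names,
not the actual reflector code:

    # Multitrack (Daily) recordings always go to Hatchet; single-track
    # (Whereby) files stay on Celery. All names are illustrative.
    def dispatch_processing(transcript, recording):
        if recording.is_multitrack:
            hatchet_client.run_workflow(
                "ProcessMultitrackRecording", {"transcript_id": transcript.id}
            )
        else:
            process_recording_file.delay(transcript_id=transcript.id)  # Celery task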
* fix: update whereby test to patch Hatchet instead of removed Celery import
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* fix websocket tests
* fix: restore timeout and fix celery test infrastructure
- Re-add timeout=1.0 to the ws_manager pubsub loop (without it the poll
returns immediately and spins the CPU)
- Use Redis for Celery tests (the memory:// broker doesn't support chords;
see the sketch after this list)
- Add a timeout param to the in-memory subscriber mock
- Remove the duplicate celery_includes fixture from rtc_ws tests
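A sketch of the test configuration, assuming the celery.contrib.pytest plugin's
celery_config fixture; the Redis URL is an assumption:

    import pytest

    @pytest.fixture(scope="session")
    def celery_config():
        # Chords need a real broker/result backend; memory:// can't track
        # group results, so the tests point at a local Redis instead.
        return {
            "broker_url": "redis://localhost:6379/1",
            "result_backend": "redis://localhost:6379/1",
        }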
* fix: remove redundant inline imports in test files
* fix: update gitleaks ignore for moved s3_key line
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Set the duration early in get_participants from the Daily API (converting
seconds to ms), ensuring post_zulip has the value before mixdown_tracks
completes. Removes the now-redundant duration update from mixdown_tracks.
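The unit conversion in question, sketched with assumed field names:

    # Daily's API reports session duration in seconds; the transcript stores
    # milliseconds. Field names are assumptions.
    transcript.duration = int(session["duration"] * 1000)
    # Persisted here (in get_participants) so post_zulip sees the value even
    # if mixdown_tracks has not finished yet.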
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Topic detection was timing out on longer transcripts when LLM
responses were slow. This affected detect_chunk_topic and the other
LLM-calling tasks that use TIMEOUT_MEDIUM.
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
* set hatchet as default for multitracks
* fix: pipeline routing tests for hatchet-default branch
- Create room with use_celery=True to force Celery backend in tests
- Link transcript to room to enable multitrack pipeline routing
- Fixes test failures caused by missing HATCHET_CLIENT_TOKEN in test env
* Update server/reflector/services/transcript_process.py
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
* progress tracking for some Hatchet tasks
* remove inline imports / type fixes
* progress callback for mixdown - moved into a standalone function (sketch below)
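A rough sketch of the extracted progress helper; names and signature are
hypothetical:

    from typing import Callable

    def mixdown_tracks(tracks: list, on_progress: Callable[[float], None]) -> None:
        # Mix each track and report fractional progress through one callback
        # instead of scattering progress updates through the loop body.
        for i, track in enumerate(tracks, start=1):
            mix(track)  # placeholder for the actual mixdown step
            on_progress(i / len(tracks))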
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>