Compare commits

..

18 Commits

Author SHA1 Message Date
06ac235482 chore(main): release 0.35.0 (#872) 2026-02-23 12:40:15 -05:00
Juan Diego García
0a194c4464 update README (#873) 2026-02-23 11:49:36 -05:00
Juan Diego García
c8db37362b feat: Add Single User authentication to Selfhosted (#870)
* Single user/password for selfhosted

* fix revision id latest migration
2026-02-23 11:10:27 -05:00
2ba0d965e8 chore(main): release 0.34.0 (#859) 2026-02-20 12:09:39 -06:00
527a069ba9 fix: remove max_tokens cap to support thinking models (Kimi-K2.5) (#869)
* fix: remove max_tokens cap to support thinking models (Kimi-K2.5)

Thinking/reasoning models like Kimi-K2.5 use output tokens for internal
chain-of-thought before generating the visible response. When max_tokens
was set (500 or 2048), the thinking budget consumed all available tokens,
leaving an empty response — causing TreeSummarize to return '' and
crashing the topic detection retry workflow.

Set max_tokens default to None so the model controls its own output
budget, allowing thinking models to complete both reasoning and response.

Also fix process.py CLI tool to import the Celery worker app before
dispatching tasks, ensuring the Redis broker config is used instead of
Celery's default AMQP transport.

* fix: remove max_tokens=200 cap from final title processor

Same thinking model issue — 200 tokens is especially tight and would be
entirely consumed by chain-of-thought reasoning, producing an empty title.

* Update server/reflector/tools/process.py

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

* fix: remove max_tokens=500 cap from topic detector processor

Same thinking model fix — this is the original callsite that was failing
with Kimi-K2.5, producing empty TreeSummarize responses.

---------

Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
2026-02-20 12:07:34 -06:00
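The shape of the resulting request is easy to see with a direct call: omitting `max_tokens` lets an OpenAI-compatible endpoint decide its own output budget. A minimal sketch — the endpoint, model name, and prompt below are illustrative, not taken from the repo's processors:

```bash
# Omit max_tokens entirely so a thinking model (e.g. Kimi-K2.5) can spend tokens
# on chain-of-thought and still emit a visible answer. LLM_URL mirrors the server
# setting of the same name; the default here is only a placeholder.
curl -s "${LLM_URL:-http://localhost:11434/v1}/chat/completions" \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "kimi-k2.5",
        "messages": [{"role": "user", "content": "Summarize this transcript section ..."}]
      }'
```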
d4cc6be1fe feat: add change_seq to transcripts for ingestion support (#868)
* feat: add change_seq to transcripts for ingestion support

Add a monotonically increasing change_seq column to the transcript table,
backed by a PostgreSQL sequence and BEFORE INSERT OR UPDATE trigger. Every
mutation gets a new sequence value, letting external ingesters checkpoint
and never miss an update.

* chore: regenerate frontend API types
2026-02-20 10:12:05 -06:00
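The mechanism described above can be pictured with a small SQL sketch. The actual change lives in an Alembic migration; only the `transcript` table and `change_seq` column come from the commit message, the other object names are illustrative:

```bash
# Hypothetical equivalent of the migration: a sequence plus a BEFORE INSERT OR UPDATE
# trigger so every mutation stamps a fresh, monotonically increasing change_seq.
docker compose exec postgres psql -U reflector -d reflector <<'SQL'
CREATE SEQUENCE IF NOT EXISTS transcript_change_seq;

CREATE OR REPLACE FUNCTION set_transcript_change_seq() RETURNS trigger AS $$
BEGIN
  NEW.change_seq := nextval('transcript_change_seq');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER transcript_change_seq_trg
  BEFORE INSERT OR UPDATE ON transcript
  FOR EACH ROW EXECUTE FUNCTION set_transcript_change_seq();
SQL
```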
Juan Diego García
cdd974b935 chore: create script for selfhosted reflector (#866)
* self hosted with self gpu

* add optional ollama model

* garage ports

* exposes ports and changes curl

* custom domain

* try to fix worker

* build locally

* documentation

* docs format

* precommit
2026-02-19 15:11:45 -05:00
Sergey Mankovsky
a8ad237d85 fix: standalone on ubuntu (#865)
* Standalone on ubuntu

* fix: use port 3043 for Caddy, disable rooms, remove dead Caddyfile

- Caddy mapped to host port 3043 instead of 80/443 to avoid conflicts
- FEATURE_ROOMS=false in standalone web service
- Removed scripts/standalone/Caddyfile (dead code on this branch)
- Updated all URLs, port checks, docs to reference :3043

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2026-02-13 18:27:21 -05:00
9dbf155be4 feat: remove network_mode host for standalone WebRTC (#864)
* feat: remove network_mode host for standalone by fixing WebRTC port range and ICE candidates

aioice hardcodes bind(addr, 0) for ICE UDP sockets, making port mapping
impossible in Docker bridge networking. This adds two env-var-gated
mechanisms to replace network_mode: host:

1. WEBRTC_PORT_RANGE (e.g. "50000-50100"): monkey-patches aioice to bind
   UDP sockets within a known range, so they can be mapped in Docker.

2. WEBRTC_HOST (e.g. "host.docker.internal"): rewrites container-internal
   IPs in SDP answers with the Docker host's real IP, so LAN clients can
   reach the ICE candidates.

Both default to None — no effect on existing deployments.

* fix: do not attempt sidecar to detect host ip, use the standalone script to figure out the external ip and use it

* style: reformat

---------

Co-authored-by: tito <tito@titos-Mac-Studio.local>
2026-02-13 15:59:12 -05:00
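Both settings are opt-in environment variables; the example values below are the ones quoted in the commit message and would go in `server/.env` (or a compose `environment:` block) when the server runs on a Docker bridge network:

```bash
# Opt-in WebRTC settings for Docker bridge networking; both are unset (None) by default.
WEBRTC_PORT_RANGE="50000-50100"      # bind ICE UDP sockets to a range that can be published in compose
WEBRTC_HOST="host.docker.internal"   # rewrite container-internal IPs in SDP answers for LAN clients
```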
7f2a4013cb feat: add Caddy reverse proxy with auto HTTPS for LAN access and auto-derive WebSocket URL (#863)
* feat: add Caddy reverse proxy with auto HTTPS for LAN access and auto-derive WebSocket URL

Add a Caddy service to docker-compose.standalone.yml that provides automatic
HTTPS with local certificates, enabling secure access to both the frontend
and API from the local network through a single entrypoint.

Backend changes:
- Add ROOT_PATH setting to FastAPI so the API can be served under /api prefix
- Route frontend and API (/server-api) through Caddy reverse proxy

Frontend changes:
- Support WEBSOCKET_URL=auto to derive the WebSocket URL from API_URL
  automatically, using the page protocol (http→ws, https→wss) and host
- Make WEBSOCKET_URL env var optional instead of required

* style: pre-commit

* fix: make standalone compose self-contained (drop !reset dependency)

docker-compose.standalone.yml used !reset YAML tags to clear
network_mode and volumes from the base compose. !reset requires
Compose v2.24+ and breaks on Colima + brew-installed compose.

Rewrite as a fully self-contained file with all services defined
directly (server, worker, beat, redis, postgres, web, garage, cpu,
gpu-nvidia, ollama, ollama-cpu). No longer overlays docker-compose.yml.

Update setup-standalone.sh compose_cmd() to use only the standalone
file instead of both files.

* fix: update standalone docs to match self-contained compose usage

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2026-02-13 15:21:43 -05:00
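The `WEBSOCKET_URL=auto` derivation can be illustrated in a few lines of shell; the real logic runs in the frontend, and the domain below is just the example domain used elsewhere in this changeset:

```bash
# Sketch of deriving the WebSocket URL from API_URL (http→ws, https→wss), per the commit message.
API_URL="https://reflector.example.com"
WEBSOCKET_URL="${API_URL/https:/wss:}"
WEBSOCKET_URL="${WEBSOCKET_URL/http:/ws:}"
echo "$WEBSOCKET_URL"   # -> wss://reflector.example.com
```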
Igor Loskutov
14a8b5808e fix: check for Docker BuildKit (buildx) before building images
Dockerfiles use RUN --mount for caching which requires BuildKit.
Colima and bare Docker Engine installs don't bundle docker-buildx.
2026-02-12 18:57:32 -05:00
Igor Loskutov
e57c6186f9 fix: check compose version output, not just exit code
Without the plugin, `docker compose version` can still exit 0
by falling through to `docker version`. Grep for "Compose" in
the output to reliably detect the plugin.
2026-02-12 18:32:16 -05:00
Igor Loskutov
36a8daee61 fix: check for Docker Compose plugin before running standalone setup
Without the compose plugin, `docker compose -f ...` produces a
misleading "unknown shorthand flag: 'f'" error instead of telling
the user compose is missing.
2026-02-12 18:24:24 -05:00
Igor Loskutov
3d13e5d42f fix: auto-rebuild standalone images and blank Hatchet vars
- Add rebuild_images() to setup-standalone.sh that runs `compose build`
  before `up -d`, with image hash comparison to log whether each service
  was rebuilt or unchanged
- Blank HATCHET_CLIENT_SERVER_URL/HOST_PORT in standalone compose since
  Hatchet is not started (localhost URLs break after network_mode:host removal)
- Fix grep -qx -> -qxF for ollama model matching (dots in model names)
2026-02-12 18:21:09 -05:00
Igor Loskutov
695f3c4928 fix: standalone server networking and setup diagnostics
Replace network_mode:host with standard compose networking for macOS
Docker Desktop compatibility. Add dump_diagnostics() for automatic
failure debugging and docker-exec-based server health checks.
2026-02-12 17:46:00 -05:00
5bca92510a feat: standalone frontend uses production build instead of dev server (#862)
* feat: standalone frontend uses production build instead of dev server

Override web service in docker-compose.standalone.yml to build from
www/Dockerfile (multi-stage: deps → build → standalone runner) instead
of running pnpm dev with bind-mounted source.

* chore: move standalone compose TODO to Huly issue RFFR-46

* fix: add required env vars for standalone production frontend

The standalone web service (node server.js) has no bind-mounted .env
files and the base env_file (.env.local) has API_URL commented out.
Next.js standalone server can't auto-load .env files without them on
disk, so all required vars must be explicit in the compose override.

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2026-02-12 15:36:52 -05:00
972a52d22f fix: live flow real-time updates during processing (#861)
* fix: live flow real-time updates during processing

Three gaps caused transcript pages to require manual refresh after
live recording/processing:

1. UserEventsProvider only invalidated list queries on TRANSCRIPT_STATUS,
   not individual transcript queries. Now parses data.id from the event
   and calls invalidateTranscript for the specific transcript.

2. useWebSockets had no reconnection logic — a dropped WS silently
   killed all real-time updates. Added exponential backoff reconnection
   (1s-30s, max 10 retries) with intentional close detection.

3. No polling fallback — WS was single point of failure. Added
   conditional refetchInterval to useTranscriptGet that polls every 5s
   when transcript status is processing/uploaded/recording.

* feat: type-safe WebSocket events via OpenAPI stub

Define Pydantic models with Literal discriminators for all WS events
(9 transcript-level, 5 user-level). Expose via stub GET endpoints so
pnpm openapi generates TS discriminated unions with exhaustive switch
narrowing on the frontend.

- New server/reflector/ws_events.py with TranscriptWsEvent and UserWsEvent
- Tighten backend emit signatures with TranscriptEventName literal
- Frontend uses generated types, removes Zod schema and manual casts
- Fix pre-existing bugs: waveform mapping, FINAL_LONG_SUMMARY field name
- STATUS value now typed as TranscriptStatus literal end-to-end
- TOPIC handler simplified to query invalidation only (avoids shape mismatch)

* fix: restore TOPIC WS handler with immediate state update

The setTopics call provides instant topic rendering during live
transcription. Query invalidation still follows for full data sync.

* fix: align TOPIC WS event data with GetTranscriptTopic shape

Convert TranscriptTopic → GetTranscriptTopic in pipeline before
emitting, so WS sends segments instead of words. Removes the
`as unknown as Topic` cast on the frontend.

* fix: use NonEmptyString and TranscriptStatus in user WS event models

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2026-02-12 14:49:57 -05:00
b468427f1b feat: local llm support + standalone-script doc/draft (#856)
* feat: local LLM via Ollama + structured output response_format

- Add setup script (scripts/setup-local-llm.sh) for one-command Ollama setup
  Mac: native Metal GPU, Linux: containerized via docker-compose profiles
- Add ollama-gpu and ollama-cpu docker-compose profiles for Linux
- Add extra_hosts to server/hatchet-worker-llm for host.docker.internal
- Pass response_format JSON schema in StructuredOutputWorkflow.extract()
  enabling grammar-based constrained decoding on Ollama/llama.cpp/vLLM/OpenAI
- Update .env.example with Ollama as default LLM option
- Add Ollama PRD and local dev setup docs

* refactor: move Ollama services to docker-compose.standalone.yml

Ollama profiles (ollama-gpu, ollama-cpu) are only for Linux standalone
deployment. Mac devs never use them. Separate file keeps the main
compose clean and provides a natural home for future standalone services
(MinIO, etc.).

Linux: docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ollama-gpu up -d
Mac: docker compose up -d (native Ollama, no standalone file needed)

* fix: correct PRD goal (demo/eval, not dev replacement) and processor naming

* chore: remove completed PRD, rename setup doc, drop response_format tests

- Remove docs/01_ollama.prd.md (implementation complete)
- Rename local-dev-setup.md -> standalone-local-setup.md
- Remove TestResponseFormat class from test_llm_retry.py

* docs: resolve standalone storage step — skip S3 for live-only mode

* docs: add TASKS.md for standalone env defaults + setup script work

* feat: add unified setup-local-dev.sh for standalone deployment

Single script takes fresh clone to working Reflector: Ollama/LLM setup,
env file generation (server/.env + www/.env.local), docker compose up,
health checks. No Hatchet in standalone — live pipeline is pure Celery.

* chore: rename to setup-standalone, remove redundant setup-local-llm.sh

* feat: add custom S3 endpoint support + Garage standalone storage

Add TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL setting to enable S3-compatible
backends (Garage, MinIO). When set, uses path-style addressing and
routes all requests to the custom endpoint. When unset, AWS behavior
is unchanged.

- AwsStorage: accept aws_endpoint_url, pass to all 6 session.client()
  calls, configure path-style addressing and base_url
- Fix 4 direct AwsStorage constructions in Hatchet workflows to pass
  endpoint_url (would have silently targeted wrong endpoint)
- Standalone: add Garage service to docker-compose.standalone.yml,
  setup script initializes layout/bucket/key and writes credentials
- Fix compose_cmd() bug: Mac path was missing standalone yml
- garage.toml template with runtime secret generation via openssl

* fix: standalone setup — garage config, symlink handling, healthcheck

- garage.toml: fix rpc_secret field name (was secret_transmitter),
  move to top-level per Garage v1.1.0 spec, remove unused [s3_web]
- setup-standalone.sh: resolve symlinked .env files before writing,
  always ensure all standalone-critical vars via env_set,
  fix garage key create/info syntax (positional arg, not --name),
  avoid overwriting key secret with "(redacted)" on re-run,
  use compose_cmd in health check
- docker-compose.standalone.yml: fix garage healthcheck (no curl in
  image, use /garage stats instead)

* docs: update standalone md — symlink handling, garage config template

* docs: add troubleshooting section + port conflict check in setup script

Port conflicts from stale next dev / other worktree processes silently
shadow Docker container port mappings, causing env vars to appear ignored.

* fix: invalidate transcript query on STATUS websocket event

Without this, the processing page never redirects after completion
because the redirect logic watches the REST query data, not the
WebSocket status state.

Cherry-picked from feat-dag-progress (faec509a).

* fix: local env setup (#855)

* Ensure rate limit

* Increase nextjs compilation speed

* Fix daily no content handling

* Simplify daily webhook creation

* Fix webhook request validation

* feat: add local pyannote file diarization processor (#858)

* feat: add local pyannote file diarization processor

Enables file diarization without Modal by using pyannote.audio locally.
Downloads model bundle from S3 on first use, caches locally, patches
config to use local paths. Set DIARIZATION_BACKEND=pyannote to enable.

* fix: standalone setup enables pyannote diarization and public mode

Replace DIARIZATION_ENABLED=false with DIARIZATION_BACKEND=pyannote so
file uploads get speaker diarization out of the box. Add PUBLIC_MODE=true
so unauthenticated users can list/browse transcripts.

* fix: touch env files before first compose_cmd in standalone setup

docker-compose.yml references www/.env.local as env_file, but the
setup script only creates it in step 4. compose_cmd calls in step 3
(Garage) fail on a fresh clone when the file doesn't exist yet.

* feat: standalone uses self-hosted GPU service for transcription+diarization

Replace in-process pyannote approach with self-hosted gpu/self_hosted/ service.
Same HTTP API as Modal — just TRANSCRIPT_URL/DIARIZATION_URL point to local container.

- Add gpu/self_hosted/Dockerfile.cpu (GPU Dockerfile minus NVIDIA CUDA)
- Add S3 model bundle fallback in diarizer.py when HF_TOKEN not set
- Add gpu service to docker-compose.standalone.yml with compose env overrides
- Fix /browse empty in PUBLIC_MODE (search+list queries filtered out roomless transcripts)
- Remove audio_diarization_pyannote.py, file_diarization_pyannote.py and tests
- Remove pyannote-audio from server local deps

* fix: allow unauthenticated GPU requests when no API key configured

OAuth2PasswordBearer with auto_error=True rejects requests without
Authorization header before apikey_auth can check if auth is needed.

* fix: rename standalone gpu service to cpu to match Dockerfile.cpu usage

* docs: add programmatic testing section and fix gpu->cpu naming in setup script/docs

- Add "Testing programmatically" section to standalone docs with curl commands
  for creating transcript, uploading audio, polling status, checking result
- Fix setup-standalone.sh to reference `cpu` service (was still `gpu` after rename)
- Update all docs references from gpu to cpu service naming

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>

* Fix websocket disconnect errors

* Fix event loop is closed in Celery workers

* Allow reprocessing idle multitrack transcripts

* feat: add local pyannote file diarization processor

Enables file diarization without Modal by using pyannote.audio locally.
Downloads model bundle from S3 on first use, caches locally, patches
config to use local paths. Set DIARIZATION_BACKEND=pyannote to enable.

* feat: standalone uses self-hosted GPU service for transcription+diarization

Replace in-process pyannote approach with self-hosted gpu/self_hosted/ service.
Same HTTP API as Modal — just TRANSCRIPT_URL/DIARIZATION_URL point to local container.

- Add gpu/self_hosted/Dockerfile.cpu (GPU Dockerfile minus NVIDIA CUDA)
- Add S3 model bundle fallback in diarizer.py when HF_TOKEN not set
- Add gpu service to docker-compose.standalone.yml with compose env overrides
- Fix /browse empty in PUBLIC_MODE (search+list queries filtered out roomless transcripts)
- Remove audio_diarization_pyannote.py, file_diarization_pyannote.py and tests
- Remove pyannote-audio from server local deps

* fix: set source_kind to FILE on audio file upload

The upload endpoint left source_kind as the default LIVE even when
a file was uploaded. Now sets it to FILE when the upload completes.

* Add hatchet env vars

* fix: improve port conflict detection and ollama model check in standalone setup

- Filter OrbStack/Docker Desktop PIDs from port conflict check (false positives on Mac)
- Check all infra ports (5432, 6379, 3900, 3903) not just app ports
- Fix ollama model detection to match on name column only
- Document OrbStack and cross-project port conflicts in troubleshooting

* fix: processing page auto-redirect after file upload completes

Three fixes for the processing page not redirecting when status becomes "ended":

- Add useWebSockets to processing page so it receives STATUS events
- Remove OAuth2PasswordBearer from auth_none — broke WebSocket endpoints (500)
- Reconnect stale Redis in ws_manager when Celery worker reuses dead event loop

* fix: mock Celery broker in idle transcript validation test

test_validation_idle_transcript_with_recording_allowed called
validate_transcript_for_processing without mocking
task_is_scheduled_or_active, which attempts a real Celery
broker connection (AMQP port 5672). Other tests in the same
file already mock this — apply the same pattern here.

* Enable server host mode

* Fix webrtc connection

* Remove turbopack

* fix: standalone GPU service connectivity with host network mode

Server runs with network_mode: host and can't resolve Docker service
names. Publish cpu port as 8100 on host, point server at localhost:8100.
Worker stays on bridge network using cpu:8000. Add dummy
TRANSCRIPT_MODAL_API_KEY since OpenAI SDK requires it even for local
endpoints.

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
Co-authored-by: Sergey Mankovsky <sergey@mankovsky.dev>
2026-02-11 18:20:36 -05:00
99 changed files with 6616 additions and 3958 deletions

.gitignore

@@ -23,3 +23,5 @@ www/.env.production
docs/pnpm-lock.yaml
.secrets
opencode.json
vibedocs/

.pre-commit-config.yaml

@@ -6,7 +6,7 @@ repos:
- id: format
name: run format
language: system
entry: bash -c 'cd www && pnpm format'
entry: bash -c 'source "$HOME/.nvm/nvm.sh" && cd www && pnpm format'
pass_filenames: false
files: ^www/

CHANGELOG.md

@@ -1,5 +1,35 @@
# Changelog
## [0.35.0](https://github.com/Monadical-SAS/reflector/compare/v0.34.0...v0.35.0) (2026-02-23)
### Features
* Add Single User authentication to Selfhosted ([#870](https://github.com/Monadical-SAS/reflector/issues/870)) ([c8db373](https://github.com/Monadical-SAS/reflector/commit/c8db37362b6cfd8f772aee8857de2909f283c029))
## [0.34.0](https://github.com/Monadical-SAS/reflector/compare/v0.33.0...v0.34.0) (2026-02-20)
### Features
* add Caddy reverse proxy with auto HTTPS for LAN access and auto-derive WebSocket URL ([#863](https://github.com/Monadical-SAS/reflector/issues/863)) ([7f2a401](https://github.com/Monadical-SAS/reflector/commit/7f2a4013cbb3d3ee3e76885f28d73331dcaf325c))
* add change_seq to transcripts for ingestion support ([#868](https://github.com/Monadical-SAS/reflector/issues/868)) ([d4cc6be](https://github.com/Monadical-SAS/reflector/commit/d4cc6be1fed56ea7fba06acb8d50c9de43b26b07))
* local llm support + standalone-script doc/draft ([#856](https://github.com/Monadical-SAS/reflector/issues/856)) ([b468427](https://github.com/Monadical-SAS/reflector/commit/b468427f1bb12634f5840990e9d64b2c145d7c1a))
* remove network_mode host for standalone WebRTC ([#864](https://github.com/Monadical-SAS/reflector/issues/864)) ([9dbf155](https://github.com/Monadical-SAS/reflector/commit/9dbf155be4de7c059035a75f90c7bf0845344b74))
* standalone frontend uses production build instead of dev server ([#862](https://github.com/Monadical-SAS/reflector/issues/862)) ([5bca925](https://github.com/Monadical-SAS/reflector/commit/5bca92510a5c33f8baeeaac2c346fb1978366ac8))
### Bug Fixes
* auto-rebuild standalone images and blank Hatchet vars ([3d13e5d](https://github.com/Monadical-SAS/reflector/commit/3d13e5d42fc53ce3c005841265ed1e8735a61518))
* check compose version output, not just exit code ([e57c618](https://github.com/Monadical-SAS/reflector/commit/e57c6186f92d66e4525786e56b018c08cf792d2f))
* check for Docker BuildKit (buildx) before building images ([14a8b58](https://github.com/Monadical-SAS/reflector/commit/14a8b5808e5aed860e55aaed35a0fdf8b2f4afa3))
* check for Docker Compose plugin before running standalone setup ([36a8dae](https://github.com/Monadical-SAS/reflector/commit/36a8daee61c2b7a0937fd0914d51fb4ea8212ae7))
* live flow real-time updates during processing ([#861](https://github.com/Monadical-SAS/reflector/issues/861)) ([972a52d](https://github.com/Monadical-SAS/reflector/commit/972a52d22f989f9e2c6f52362b3f1a4e17773663))
* remove max_tokens cap to support thinking models (Kimi-K2.5) ([#869](https://github.com/Monadical-SAS/reflector/issues/869)) ([527a069](https://github.com/Monadical-SAS/reflector/commit/527a069ba9eff6717ccd4bb1e839674edebffceb))
* standalone on ubuntu ([#865](https://github.com/Monadical-SAS/reflector/issues/865)) ([a8ad237](https://github.com/Monadical-SAS/reflector/commit/a8ad237d8571d5ef5c78fb4427c538592d6a7b43))
* standalone server networking and setup diagnostics ([695f3c4](https://github.com/Monadical-SAS/reflector/commit/695f3c49285254869f6a6cbd5f860d1169fa4daa))
## [0.33.0](https://github.com/Monadical-SAS/reflector/compare/v0.32.2...v0.33.0) (2026-02-05)

Caddyfile.selfhosted.example

@@ -0,0 +1,25 @@
# Reflector self-hosted production — HTTPS via Caddy reverse proxy
# Copy to Caddyfile: cp Caddyfile.selfhosted.example Caddyfile
# Run: ./scripts/setup-selfhosted.sh --ollama-gpu --garage --caddy
#
# DOMAIN defaults to localhost (self-signed cert).
# Set to your real domain for automatic Let's Encrypt:
# export DOMAIN=reflector.example.com
#
# TLS_MODE defaults to "internal" (self-signed).
# Set to "" for automatic Let's Encrypt (requires real domain + ports 80/443 open):
# export TLS_MODE=""
{$DOMAIN:localhost} {
tls {$TLS_MODE:internal}
handle /v1/* {
reverse_proxy server:1250
}
handle /health {
reverse_proxy server:1250
}
handle {
reverse_proxy web:3000
}
}

Caddyfile.standalone.example

@@ -0,0 +1,42 @@
# Reflector standalone — HTTPS via Caddy (droplet / IP access)
# Copy to Caddyfile: cp Caddyfile.standalone.example Caddyfile
# Run: docker compose -f docker-compose.standalone.yml --profile ollama-cpu up -d
#
# :443 = catch-all inside container; Docker maps host port 3043 → container 443
# on_demand = generate self-signed cert for IP/SNI on first request (required for bare IP access)
# Browser will warn. Click Advanced → Proceed.
# Access at https://localhost:3043 (or https://YOUR_IP:3043 on droplet)
# Update www/.env.local with: API_URL=https://YOUR_IP:3043, WEBSOCKET_URL=wss://YOUR_IP:3043, SITE_URL=https://YOUR_IP:3043, NEXTAUTH_URL=https://YOUR_IP:3043
:443 {
tls internal {
on_demand
}
handle /v1/* {
reverse_proxy server:1250
}
handle /health {
reverse_proxy server:1250
}
handle {
reverse_proxy web:3000
}
}
# Option B: localhost (comment Option A, uncomment this)
# app.localhost {
# tls internal
# reverse_proxy web:3000
# }
# api.localhost {
# tls internal
# reverse_proxy server:1250
# }
# Option C: Real domain (uncomment and replace example.com)
# app.example.com {
# reverse_proxy web:3000
# }
# api.example.com {
# reverse_proxy server:1250
# }

README.md

@@ -44,22 +44,100 @@ Reflector is a web application that utilizes local models to process audio conte
- **Topic Detection & Summarization**: Extract key topics and generate concise summaries using LLMs
- **Meeting Recording**: Create permanent records of meetings with searchable transcripts
Currently we provide [modal.com](https://modal.com/) gpu template to deploy.
## Architecture
## Background
The project consists of three primary components:
The project architecture consists of three primary components:
- **Back-End**: Python FastAPI server with async database operations and background processing, found in `server/`.
- **Front-End**: Next.js 14 React application with Chakra UI, located in `www/`.
- **GPU Models**: Specialized ML models for transcription, diarization, translation, and summarization.
- **Back-End**: Python server that offers an API and data persistence, found in `server/`.
- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **GPU implementation**: Providing services such as speech-to-text transcription, topic generation, automated summaries, and translations.
Currently, Reflector supports two input methods:
- **Screenshare capture**: Real-time audio capture from your browser via WebRTC
- **Audio file upload**: Upload pre-recorded audio files for processing
It also uses authentik for authentication if activated.
## Installation
## Contribution Guidelines
For full deployment instructions, see the [Self-Hosted Production Guide](docsv2/selfhosted-production.md) and the [Architecture Reference](docsv2/selfhosted-architecture.md).
All new contributions should be made in a separate branch, and goes through a Pull Request.
[Conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) must be used for the PR title and commits.
### Self-Hosted Deployment
The self-hosted setup script configures and launches everything on a single server:
```bash
# GPU with local Ollama LLM, local S3 storage, and Caddy reverse proxy
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy
# With a custom domain (enables Let's Encrypt auto-HTTPS)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --domain reflector.example.com
# CPU-only mode (slower, no NVIDIA GPU required)
./scripts/setup-selfhosted.sh --cpu --ollama-cpu --garage --caddy
# With password authentication
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --password mysecretpass
```
The script is idempotent and safe to re-run. See `./scripts/setup-selfhosted.sh --help` for all options.
### Authentication
Reflector supports three authentication modes:
- **Password authentication (recommended for self-hosted / single-user)**: Use the `--password` flag in the setup script. This creates an `admin@localhost` user with the provided password. Users must log in to create, edit, or delete transcripts.
```bash
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --password mysecretpass
```
- **Authentik OIDC**: For multi-user or enterprise deployments, Reflector supports [Authentik](https://goauthentik.io/) as an OAuth/OIDC provider. This enables SSO, LDAP/AD integration, and centralized user management. Requires configuring `AUTH_BACKEND=jwt` on the backend and `AUTH_PROVIDER=authentik` on the frontend. See the [Self-Hosted Production Guide](docsv2/selfhosted-production.md) for details.
- **Public mode (default when no auth is configured)**: If neither password nor Authentik is set up, Reflector runs in public mode. In this mode, no login is required — anyone with access to the URL can use the application. Transcripts are created anonymously (not tied to any user account), which means they **cannot be edited or deleted** through the UI or API. Anonymous transcripts are automatically cleaned up after 7 days. This mode is suitable for demos or testing but not recommended for production use.
### Development Setup
```bash
# Backend
cd server
uv sync
docker compose up -d redis
uv run alembic upgrade head
uv run -m reflector.app --reload
# In a separate terminal — start the worker
cd server
uv run celery -A reflector.worker.app worker --loglevel=info
# Frontend
cd www
pnpm install
cp .env_template .env
pnpm dev
```
### Modal.com GPU (Optional)
Reflector also supports deploying specialized models (transcription, diarization) to [Modal.com](https://modal.com/) for serverless GPU processing. This is **not integrated into the self-hosted setup script** and must be configured manually.
See [Modal.com Setup Guide](docs/docs/installation/modal-setup.md) for deployment instructions.
## Audio Processing Commands
### Process a local audio file
```bash
cd server
uv run python -m reflector.tools.process path/to/audio.wav
```
### Reprocess an existing transcription
Re-run the processing pipeline on a previously uploaded transcription by its UUID:
```bash
cd server
uv run -m reflector.tools.process_transcript <transcript-uuid> --sync
```
## Usage
@@ -87,96 +165,9 @@ Note: We currently do not have instructions for Windows users.
- Then goto `System Preferences -> Sound` and choose the devices created from the Output and Input tabs.
- The input from your local microphone, the browser run meeting should be aggregated into one virtual stream to listen to and the output should be fed back to your specified output devices if everything is configured properly.
## Installation
*Note: we're working toward better installation, theses instructions are not accurate for now*
### Frontend
Start with `cd www`.
**Installation**
```bash
pnpm install
cp .env.example .env
```
Then, fill in the environment variables in `.env` as needed. If you are unsure on how to proceed, ask in Zulip.
**Run in development mode**
```bash
pnpm dev
```
Then (after completing server setup and starting it) open [http://localhost:3000](http://localhost:3000) to view it in the browser.
**OpenAPI Code Generation**
To generate the TypeScript files from the openapi.json file, make sure the python server is running, then run:
```bash
pnpm openapi
```
### Backend
Start with `cd server`.
**Run in development mode**
```bash
docker compose up -d redis
# on the first run, or if the schemas changed
uv run alembic upgrade head
# start the worker
uv run celery -A reflector.worker.app worker --loglevel=info
# start the app
uv run -m reflector.app --reload
```
Then fill `.env` with the omitted values (ask in Zulip).
**Crontab (optional)**
For crontab (only healthcheck for now), start the celery beat (you don't need it on your local dev environment):
```bash
uv run celery -A reflector.worker.app beat
```
### GPU models
Currently, reflector heavily use custom local models, deployed on modal. All the micro services are available in server/gpu/
To deploy llm changes to modal, you need:
- a modal account
- set up the required secret in your modal account (REFLECTOR_GPU_APIKEY)
- install the modal cli
- connect your modal cli to your account if not done previously
- `modal run path/to/required/llm`
## Using local files
You can manually process an audio file by calling the process tool:
```bash
uv run python -m reflector.tools.process path/to/audio.wav
```
## Reprocessing any transcription
```bash
uv run -m reflector.tools.process_transcript 81ec38d1-9dd7-43d2-b3f8-51f4d34a07cd --sync
```
## Build-time env variables
Next.js projects are more used to NEXT_PUBLIC_ prefixed buildtime vars. We don't have those for the reason we need to serve a ccustomizable prebuild docker container.
Next.js projects are more used to NEXT_PUBLIC_ prefixed buildtime vars. We don't have those for the reason we need to serve a customizable prebuilt docker container.
Instead, all the variables are runtime. Variables needed to the frontend are served to the frontend app at initial render.
@@ -211,3 +202,16 @@ FEATURE_BROWSE=false
# Enable Zulip integration
FEATURE_SEND_TO_ZULIP=true
```
## Contribution Guidelines
All new contributions should be made in a separate branch, and goes through a Pull Request.
[Conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) must be used for the PR title and commits.
## Future Plans
- **Daily.co integration with multitrack processing**: Support for Daily.co live rooms with per-participant audio tracks for improved diarization and transcription quality.
## Legacy Documentation
The `docs/` folder contains an older Docusaurus-based documentation site. These docs are **no longer actively maintained** and may be outdated. For current installation and deployment instructions, refer to the [`docsv2/`](docsv2/) folder instead.

docker-compose.selfhosted.yml

@@ -0,0 +1,321 @@
# Self-hosted production Docker Compose — single file for everything.
#
# Usage: ./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy
# or: docker compose -f docker-compose.selfhosted.yml --profile gpu [--profile ollama-gpu] [--profile garage] [--profile caddy] up -d
#
# Specialized models (pick ONE — required):
# --profile gpu NVIDIA GPU for transcription/diarization/translation
# --profile cpu CPU-only for transcription/diarization/translation
#
# Local LLM (optional — for summarization/topics):
# --profile ollama-gpu Local Ollama with NVIDIA GPU
# --profile ollama-cpu Local Ollama on CPU only
#
# Other optional services:
# --profile garage Local S3-compatible storage (Garage)
# --profile caddy Reverse proxy with auto-SSL
#
# Prerequisites:
# 1. Run ./scripts/setup-selfhosted.sh to generate env files and secrets
# 2. Or manually create server/.env and www/.env from the .selfhosted.example templates
services:
# ===========================================================
# Always-on core services (no profile required)
# ===========================================================
server:
build:
context: ./server
dockerfile: Dockerfile
image: monadicalsas/reflector-backend:latest
restart: unless-stopped
ports:
- "127.0.0.1:1250:1250"
- "50000-50100:50000-50100/udp"
env_file:
- ./server/.env
environment:
ENTRYPOINT: server
# Docker-internal overrides (always correct inside compose network)
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
# Specialized models via gpu/cpu container (aliased as "transcription")
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://transcription:8000
TRANSCRIPT_MODAL_API_KEY: selfhosted
DIARIZATION_BACKEND: modal
DIARIZATION_URL: http://transcription:8000
TRANSLATION_BACKEND: modal
TRANSLATE_URL: http://transcription:8000
# WebRTC: fixed UDP port range for ICE candidates (mapped above)
WEBRTC_PORT_RANGE: "50000-50100"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
volumes:
- server_data:/app/data
worker:
build:
context: ./server
dockerfile: Dockerfile
image: monadicalsas/reflector-backend:latest
restart: unless-stopped
env_file:
- ./server/.env
environment:
ENTRYPOINT: worker
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://transcription:8000
TRANSCRIPT_MODAL_API_KEY: selfhosted
DIARIZATION_BACKEND: modal
DIARIZATION_URL: http://transcription:8000
TRANSLATION_BACKEND: modal
TRANSLATE_URL: http://transcription:8000
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
volumes:
- server_data:/app/data
beat:
build:
context: ./server
dockerfile: Dockerfile
image: monadicalsas/reflector-backend:latest
restart: unless-stopped
env_file:
- ./server/.env
environment:
ENTRYPOINT: beat
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
web:
build:
context: ./www
dockerfile: Dockerfile
image: monadicalsas/reflector-frontend:latest
restart: unless-stopped
ports:
- "127.0.0.1:3000:3000"
env_file:
- ./www/.env
environment:
NODE_ENV: production
NODE_TLS_REJECT_UNAUTHORIZED: "0"
SERVER_API_URL: http://server:1250
KV_URL: redis://redis:6379
KV_USE_TLS: "false"
NEXTAUTH_URL_INTERNAL: http://localhost:3000
depends_on:
- redis
redis:
image: redis:7.2-alpine
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 3s
retries: 3
volumes:
- redis_data:/data
postgres:
image: postgres:17-alpine
restart: unless-stopped
environment:
POSTGRES_USER: reflector
POSTGRES_PASSWORD: reflector
POSTGRES_DB: reflector
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U reflector"]
interval: 30s
timeout: 3s
retries: 3
# ===========================================================
# Specialized model containers (transcription, diarization, translation)
# Both gpu and cpu get alias "transcription" so server config never changes.
# ===========================================================
gpu:
build:
context: ./gpu/self_hosted
dockerfile: Dockerfile
profiles: [gpu]
restart: unless-stopped
ports:
- "127.0.0.1:8000:8000"
environment:
HF_TOKEN: ${HF_TOKEN:-}
volumes:
- gpu_cache:/root/.cache
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
interval: 15s
timeout: 5s
retries: 10
start_period: 120s
networks:
default:
aliases:
- transcription
cpu:
build:
context: ./gpu/self_hosted
dockerfile: Dockerfile.cpu
profiles: [cpu]
restart: unless-stopped
ports:
- "127.0.0.1:8000:8000"
environment:
HF_TOKEN: ${HF_TOKEN:-}
volumes:
- gpu_cache:/root/.cache
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
interval: 15s
timeout: 5s
retries: 10
start_period: 120s
networks:
default:
aliases:
- transcription
# ===========================================================
# Ollama — local LLM for summarization & topic detection
# Only started with --ollama-gpu or --ollama-cpu modes.
# ===========================================================
ollama:
image: ollama/ollama:latest
profiles: [ollama-gpu]
restart: unless-stopped
ports:
- "127.0.0.1:11435:11435"
volumes:
- ollama_data:/root/.ollama
environment:
OLLAMA_HOST: "0.0.0.0:11435"
OLLAMA_KEEP_ALIVE: "24h"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:11435/api/tags"]
interval: 10s
timeout: 5s
retries: 5
ollama-cpu:
image: ollama/ollama:latest
profiles: [ollama-cpu]
restart: unless-stopped
ports:
- "127.0.0.1:11435:11435"
volumes:
- ollama_data:/root/.ollama
environment:
OLLAMA_HOST: "0.0.0.0:11435"
OLLAMA_KEEP_ALIVE: "24h" # keep model loaded to avoid reload delays
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:11435/api/tags"]
interval: 10s
timeout: 5s
retries: 5
# ===========================================================
# Garage — local S3-compatible object storage (optional)
# ===========================================================
garage:
image: dxflrs/garage:v1.1.0
profiles: [garage]
restart: unless-stopped
ports:
- "3900:3900" # S3 API
- "3903:3903" # Admin API
volumes:
- garage_data:/var/lib/garage/data
- garage_meta:/var/lib/garage/meta
- ./data/garage.toml:/etc/garage.toml:ro
healthcheck:
test: ["CMD", "/garage", "stats"]
interval: 10s
timeout: 5s
retries: 5
start_period: 5s
# ===========================================================
# Caddy — reverse proxy with automatic SSL (optional)
# Maps 80:80 and 443:443 — only exposed ports in the stack.
# ===========================================================
caddy:
image: caddy:2-alpine
profiles: [caddy]
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
depends_on:
- web
- server
volumes:
postgres_data:
redis_data:
server_data:
gpu_cache:
garage_data:
garage_meta:
ollama_data:
caddy_data:
caddy_config:
networks:
default:
attachable: true

docker-compose.standalone.yml

@@ -0,0 +1,241 @@
# Self-contained standalone compose for fully local deployment (no external dependencies).
# Usage: docker compose -f docker-compose.standalone.yml up -d
#
# On Linux with NVIDIA GPU, also pass: --profile ollama-gpu
# On Linux without GPU: --profile ollama-cpu
# On Mac: Ollama runs natively (Metal GPU) — no profile needed, services here unused.
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
ports:
- "3043:443"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
depends_on:
- web
- server
server:
build:
context: server
ports:
- "1250:1250"
- "50000-50100:50000-50100/udp"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
ENTRYPOINT: server
# Docker DNS names instead of localhost
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
# Standalone doesn't run Hatchet
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
# Self-hosted transcription/diarization via CPU service
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://cpu:8000
TRANSCRIPT_MODAL_API_KEY: local
DIARIZATION_BACKEND: modal
DIARIZATION_URL: http://cpu:8000
# Caddy reverse proxy prefix
ROOT_PATH: /server-api
# WebRTC: fixed UDP port range for ICE candidates (mapped above).
# WEBRTC_HOST is set by setup-standalone.sh in server/.env (LAN IP detection).
WEBRTC_PORT_RANGE: "50000-50100"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_started
worker:
build:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
ENTRYPOINT: worker
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://cpu:8000
TRANSCRIPT_MODAL_API_KEY: local
DIARIZATION_BACKEND: modal
DIARIZATION_URL: http://cpu:8000
depends_on:
redis:
condition: service_started
beat:
build:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
ENTRYPOINT: beat
depends_on:
redis:
condition: service_started
redis:
image: redis:7.2
ports:
- 6379:6379
postgres:
image: postgres:17
command: postgres -c 'max_connections=200'
ports:
- 5432:5432
environment:
POSTGRES_USER: reflector
POSTGRES_PASSWORD: reflector
POSTGRES_DB: reflector
volumes:
- ./data/postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -d reflector -U reflector"]
interval: 5s
timeout: 5s
retries: 10
start_period: 15s
web:
image: reflector-frontend-standalone
build:
context: ./www
ports:
- "3000:3000"
command: ["node", "server.js"]
env_file:
- ./www/.env.local
environment:
NODE_ENV: production
# API_URL, WEBSOCKET_URL, SITE_URL, NEXTAUTH_URL from www/.env.local (allows HTTPS)
# Server-side URLs (docker-network internal)
SERVER_API_URL: http://server:1250
KV_URL: redis://redis:6379
KV_USE_TLS: "false"
# Standalone: no external auth provider
FEATURE_REQUIRE_LOGIN: "false"
FEATURE_ROOMS: "false"
NEXTAUTH_SECRET: standalone-local-secret
# Nullify partial auth vars inherited from base env_file
AUTHENTIK_ISSUER: ""
AUTHENTIK_REFRESH_TOKEN_URL: ""
garage:
image: dxflrs/garage:v1.1.0
ports:
- "3900:3900" # S3 API
- "3903:3903" # Admin API
volumes:
- garage_data:/var/lib/garage/data
- garage_meta:/var/lib/garage/meta
- ./data/garage.toml:/etc/garage.toml:ro
restart: unless-stopped
healthcheck:
test: ["CMD", "/garage", "stats"]
interval: 10s
timeout: 5s
retries: 5
start_period: 5s
cpu:
build:
context: ./gpu/self_hosted
dockerfile: Dockerfile.cpu
ports:
- "8100:8000"
volumes:
- gpu_cache:/root/.cache
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
interval: 15s
timeout: 5s
retries: 10
start_period: 120s
gpu-nvidia:
build:
context: ./gpu/self_hosted
profiles: ["gpu-nvidia"]
volumes:
- gpu_cache:/root/.cache
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/docs"]
interval: 15s
timeout: 5s
retries: 10
start_period: 120s
ollama:
image: ollama/ollama:latest
profiles: ["ollama-gpu"]
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
interval: 10s
timeout: 5s
retries: 5
ollama-cpu:
image: ollama/ollama:latest
profiles: ["ollama-cpu"]
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
interval: 10s
timeout: 5s
retries: 5
volumes:
garage_data:
garage_meta:
ollama_data:
gpu_cache:
caddy_data:
caddy_config:

docker-compose.yml

@@ -2,8 +2,7 @@ services:
server:
build:
context: server
ports:
- 1250:1250
network_mode: host
volumes:
- ./server/:/app/
- /app/.venv
@@ -11,6 +10,12 @@ services:
- ./server/.env
environment:
ENTRYPOINT: server
DATABASE_URL: postgresql+asyncpg://reflector:reflector@localhost:5432/reflector
REDIS_HOST: localhost
CELERY_BROKER_URL: redis://localhost:6379/1
CELERY_RESULT_BACKEND: redis://localhost:6379/1
HATCHET_CLIENT_SERVER_URL: http://localhost:8889
HATCHET_CLIENT_HOST_PORT: localhost:7078
worker:
build:
@@ -22,6 +27,11 @@ services:
- ./server/.env
environment:
ENTRYPOINT: worker
HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
HATCHET_CLIENT_HOST_PORT: hatchet:7077
depends_on:
redis:
condition: service_started
beat:
build:
@@ -33,6 +43,9 @@ services:
- ./server/.env
environment:
ENTRYPOINT: beat
depends_on:
redis:
condition: service_started
hatchet-worker-cpu:
build:
@@ -44,6 +57,8 @@ services:
- ./server/.env
environment:
ENTRYPOINT: hatchet-worker-cpu
HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
HATCHET_CLIENT_HOST_PORT: hatchet:7077
depends_on:
hatchet:
condition: service_healthy
@@ -57,6 +72,8 @@ services:
- ./server/.env
environment:
ENTRYPOINT: hatchet-worker-llm
HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
HATCHET_CLIENT_HOST_PORT: hatchet:7077
depends_on:
hatchet:
condition: service_healthy
@@ -75,10 +92,16 @@ services:
volumes:
- ./www:/app/
- /app/node_modules
- next_cache:/app/.next
env_file:
- ./www/.env.local
environment:
- NODE_ENV=development
- SERVER_API_URL=http://host.docker.internal:1250
extra_hosts:
- "host.docker.internal:host-gateway"
depends_on:
- server
postgres:
image: postgres:17
@@ -94,13 +117,14 @@ services:
- ./server/docker/init-hatchet-db.sql:/docker-entrypoint-initdb.d/init-hatchet-db.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -d reflector -U reflector"]
interval: 10s
timeout: 10s
retries: 5
start_period: 10s
interval: 5s
timeout: 5s
retries: 10
start_period: 15s
hatchet:
image: ghcr.io/hatchet-dev/hatchet/hatchet-lite:latest
restart: on-failure
ports:
- "8889:8888"
- "7078:7077"
@@ -108,7 +132,7 @@ services:
postgres:
condition: service_healthy
environment:
DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable"
DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable&connect_timeout=30"
SERVER_AUTH_COOKIE_DOMAIN: localhost
SERVER_AUTH_COOKIE_INSECURE: "t"
SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
@@ -128,6 +152,5 @@ services:
retries: 5
start_period: 30s
networks:
default:
attachable: true
volumes:
next_cache:

standalone-local-setup.md

@@ -0,0 +1,310 @@
---
sidebar_position: 2
title: Standalone Local Setup
---
# Standalone Local Setup
**The goal**: a clueless user clones the repo, runs one script, and has a working Reflector instance locally. No cloud accounts, no API keys, no manual env file editing.
```bash
git clone https://github.com/monadical-sas/reflector.git
cd reflector
./scripts/setup-standalone.sh
```
On Ubuntu, the setup script installs Docker automatically if missing.
The script is idempotent — safe to re-run at any time. It detects what's already set up and skips completed steps.
## Prerequisites
- Docker with Compose V2 plugin (Docker Desktop, OrbStack, or Docker Engine + compose plugin)
- Mac (Apple Silicon) or Linux
- 16GB+ RAM (32GB recommended for 14B LLM models)
- **Mac only**: [Ollama](https://ollama.com/download) installed (`brew install ollama`)
### Installing Docker (if not already installed)
**Ubuntu**: The setup script runs `install-docker-ubuntu.sh` automatically when Docker is missing. Or run it manually:
```bash
./scripts/install-docker-ubuntu.sh
```
**Mac**: Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [OrbStack](https://orbstack.dev/).
## What the script does
### 1. LLM inference via Ollama
**Mac**: starts Ollama natively (Metal GPU acceleration). Pulls the LLM model. Docker containers reach it via `host.docker.internal:11435`.
**Linux**: starts containerized Ollama via `docker-compose.standalone.yml` profile (`ollama-gpu` with NVIDIA, `ollama-cpu` without). Pulls model inside the container.
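To confirm the LLM endpoint is reachable from the host before moving on, a quick curl works (a sketch; the Mac setup above listens on 11435, while containerized Ollama publishes the default 11434):

```bash
# One of these should return a JSON list of pulled models.
curl -sf http://localhost:11435/api/tags || curl -sf http://localhost:11434/api/tags
```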
### 2. Environment files
Generates `server/.env` and `www/.env.local` with standalone defaults:
**`server/.env`** — key settings:
| Variable | Value | Why |
| --------------------- | -------------------------------------------------- | ----------------------------------- |
| `DATABASE_URL` | `postgresql+asyncpg://...@postgres:5432/reflector` | Docker-internal hostname |
| `REDIS_HOST` | `redis` | Docker-internal hostname |
| `CELERY_BROKER_URL` | `redis://redis:6379/1` | Docker-internal hostname |
| `AUTH_BACKEND` | `none` | No Authentik in standalone |
| `TRANSCRIPT_BACKEND` | `modal` | HTTP API to self-hosted CPU service |
| `TRANSCRIPT_URL` | `http://cpu:8000` | Docker-internal CPU service |
| `DIARIZATION_BACKEND` | `modal` | HTTP API to self-hosted CPU service |
| `DIARIZATION_URL` | `http://cpu:8000` | Docker-internal CPU service |
| `TRANSLATION_BACKEND` | `passthrough` | No Modal |
| `LLM_URL` | `http://host.docker.internal:11435/v1` (Mac) | Ollama endpoint |
**`www/.env.local`** — key settings:
| Variable | Value |
| ----------------------- | ------------------------------------------ |
| `API_URL` | `https://localhost:3043` or `https://YOUR_IP:3043` (Linux) |
| `SERVER_API_URL` | `http://server:1250` |
| `WEBSOCKET_URL` | `auto` |
| `FEATURE_REQUIRE_LOGIN` | `false` |
| `NEXTAUTH_SECRET` | `standalone-dev-secret-not-for-production` |
If env files already exist (including symlinks from worktree setup), the script resolves symlinks and ensures all standalone-critical vars are set. Existing vars not related to standalone are preserved.
### 3. Object storage (Garage)
Standalone uses [Garage](https://garagehq.deuxfleurs.fr/) — a lightweight S3-compatible object store running in Docker. The setup script starts Garage, initializes the layout, creates a bucket and access key, and writes the credentials to `server/.env`.
**`server/.env`** — storage settings added by the script:
| Variable | Value | Why |
| ------------------------------------------ | -------------------- | ------------------------------------- |
| `TRANSCRIPT_STORAGE_BACKEND` | `aws` | Uses the S3-compatible storage driver |
| `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL` | `http://garage:3900` | Docker-internal Garage S3 API |
| `TRANSCRIPT_STORAGE_AWS_BUCKET_NAME` | `reflector-media` | Created by the script |
| `TRANSCRIPT_STORAGE_AWS_REGION` | `garage` | Must match Garage config |
| `TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID` | _(auto-generated)_ | Created by `garage key create` |
| `TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY` | _(auto-generated)_ | Created by `garage key create` |
The `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL` setting enables S3-compatible backends. When set, the storage driver uses path-style addressing and routes all requests to the custom endpoint. When unset (production AWS), behavior is unchanged.
Garage config template lives at `scripts/garage.toml`. The setup script generates `data/garage.toml` (gitignored) with a random RPC secret and mounts it read-only into the container. Single-node, `replication_factor=1`.
> **Note**: Presigned URLs embed the Garage Docker hostname (`http://garage:3900`). This is fine — the server proxies S3 responses to the browser. Modal GPU workers cannot reach internal Garage, but standalone doesn't use Modal.
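A quick way to confirm Garage is up (not part of the setup script, just a sanity check): hit the host-mapped S3 port — any HTTP response, even an access-denied XML error, means the S3 API is listening.

```bash
curl -s -o /dev/null -w 'garage S3 API -> HTTP %{http_code}\n' http://localhost:3900
```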
### 4. Transcription and diarization
Standalone runs the self-hosted ML service (`gpu/self_hosted/`) in a CPU-only Docker container named `cpu`. This is the same FastAPI service used for Modal.com GPU deployments, but built with `Dockerfile.cpu` (no NVIDIA CUDA dependencies). The compose service is named `cpu` (not `gpu`) to make clear it runs without GPU acceleration; the source code lives in `gpu/self_hosted/` because it's shared with the GPU deployment.
The `modal` backend name is reused — it just means "HTTP API client". Setting `TRANSCRIPT_URL` / `DIARIZATION_URL` to `http://cpu:8000` routes requests to the local container instead of Modal.com.
On first start, the service downloads pyannote speaker diarization models (~1GB) from a public S3 bundle. Models are cached in a Docker volume (`gpu_cache`) so subsequent starts are fast. No HuggingFace token or API key needed.
> **Performance**: CPU-only transcription and diarization work but are slow (~15 min for a 3 min file). For faster processing on Linux with NVIDIA GPU, use `--profile gpu-nvidia` instead (see `docker-compose.standalone.yml`).
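An optional check that the service container is serving requests (host port 8100 maps to container port 8000 in `docker-compose.standalone.yml`):

```bash
curl -sf http://localhost:8100/docs > /dev/null && echo "cpu service is up"
```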
### 5. Docker services
```bash
docker compose up -d postgres redis garage cpu server worker beat web
```
All services start in a single command. Garage and `cpu` are already started by earlier steps but included for idempotency. No Hatchet in standalone mode — LLM processing (summaries, topics, titles) runs via Celery tasks.
### 6. Database migrations
Run automatically by the `server` container on startup (`runserver.sh` calls `alembic upgrade head`). No manual step needed.
### 7. Health check
Verifies (equivalent `curl` checks are shown below):
- CPU service responds (transcription + diarization ready)
- Server responds at `http://localhost:1250/health`
- Frontend serves at `http://localhost:3000` (or via Caddy at `https://localhost:3043`)
- LLM endpoint reachable from inside containers
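The same checks can be run manually (a sketch; `-k` accepts Caddy's self-signed certificate):

```bash
curl -sf http://localhost:1250/health             # backend
curl -sf http://localhost:3000 > /dev/null        # frontend, direct
curl -skf https://localhost:3043 > /dev/null      # frontend via Caddy
```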
## Services
| Service | Port | Purpose |
| ---------- | ---------- | -------------------------------------------------- |
| `caddy` | 3043 | Reverse proxy (HTTPS, self-signed cert) |
| `server` | 1250 | FastAPI backend (runs migrations on start) |
| `web` | 3000 | Next.js frontend |
| `postgres` | 5432 | PostgreSQL database |
| `redis` | 6379 | Cache + Celery broker |
| `garage` | 3900, 3903 | S3-compatible object storage (S3 API + admin API) |
| `cpu` | — | Self-hosted transcription + diarization (CPU-only) |
| `worker` | — | Celery worker (live pipeline post-processing) |
| `beat` | — | Celery beat (scheduled tasks) |
## Testing programmatically
After the setup script completes, verify the full pipeline (upload, transcription, diarization, LLM summary) via the API:
```bash
# 1. Create a transcript
TRANSCRIPT_ID=$(curl -s -X POST 'http://localhost:1250/v1/transcripts' \
-H 'Content-Type: application/json' \
-d '{"name":"test-upload"}' | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")
echo "Created: $TRANSCRIPT_ID"
# 2. Upload an audio file (single-chunk upload)
curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}/record/upload?chunk_number=0&total_chunks=1" \
-X POST -F "chunk=@/path/to/audio.mp3"
# 3. Poll until processing completes (status: ended or error)
while true; do
STATUS=$(curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}" \
| python3 -c "import sys,json; print(json.load(sys.stdin)['status'])")
echo "Status: $STATUS"
case "$STATUS" in ended|error) break;; esac
sleep 10
done
# 4. Check the result
curl -s "http://localhost:1250/v1/transcripts/${TRANSCRIPT_ID}" | python3 -m json.tool
```
Expected result: status `ended`, auto-generated `title`, `short_summary`, `long_summary`, and `transcript` text with `Speaker 0` / `Speaker 1` labels.
CPU-only processing is slow (~15 min for a 3 min audio file). Diarization finishes in ~3 min, transcription takes the rest.
## Enabling HTTPS (droplet via IP)
To serve Reflector over HTTPS on a droplet accessed by IP (self-signed certificate):
1. **Copy the Caddyfile** (no edits needed — `:443` catches all HTTPS inside the container, which Docker maps to host port 3043):
```bash
cp Caddyfile.standalone.example Caddyfile
```
2. **Update `www/.env.local`** with HTTPS URLs (port 3043):
```env
API_URL=https://YOUR_IP:3043
WEBSOCKET_URL=wss://YOUR_IP:3043
SITE_URL=https://YOUR_IP:3043
NEXTAUTH_URL=https://YOUR_IP:3043
```
3. **Restart services**:
```bash
docker compose -f docker-compose.standalone.yml --profile ollama-cpu up -d
```
(Use `ollama-gpu` instead of `ollama-cpu` if you have an NVIDIA GPU.)
4. **Access** at `https://YOUR_IP:3043`. The browser will warn about the self-signed cert — click **Advanced** → **Proceed to YOUR_IP (unsafe)**. All traffic (page, API, WebSocket) uses the same origin, so accepting once is enough.
## Troubleshooting
### ERR_SSL_PROTOCOL_ERROR when accessing https://YOUR_IP
You do **not** need a domain — the setup works with an IP address. This error usually means Caddy isn't serving TLS on port 3043. Check in order:
1. **Caddyfile** — must use the `:443` catch-all (container-internal; Docker maps host 3043 → container 443):
```bash
cp Caddyfile.standalone.example Caddyfile
```
2. **Firewall** — allow port 3043 (common on DigitalOcean):
```bash
sudo ufw allow 3043
sudo ufw status
```
3. **Caddy running** — verify and restart:
```bash
docker compose -f docker-compose.standalone.yml ps
docker compose -f docker-compose.standalone.yml logs caddy --tail 20
docker compose -f docker-compose.standalone.yml --profile ollama-cpu up -d
```
4. **Test from the droplet** — if this works, the issue is external (firewall, network):
```bash
curl -vk https://localhost:3043
```
5. **localhost works but external IP fails** — re-run the setup script; it generates a Caddyfile with your droplet IP explicitly, so Caddy provisions the cert at startup:
```bash
./scripts/setup-standalone.sh
```
Or manually create `Caddyfile` with your IP (replace 138.197.162.116):
```
https://138.197.162.116, localhost {
tls internal
handle /v1/* { reverse_proxy server:1250 }
handle /health { reverse_proxy server:1250 }
handle { reverse_proxy web:3000 }
}
```
Then restart: `docker compose -f docker-compose.standalone.yml --profile ollama-cpu up -d`
6. **Still failing?** Try HTTP (no TLS) — create `Caddyfile`:
```
:80 {
handle /v1/* { reverse_proxy server:1250 }
handle /health { reverse_proxy server:1250 }
handle { reverse_proxy web:3000 }
}
```
Update `www/.env.local`: `API_URL=http://YOUR_IP:3043`, `WEBSOCKET_URL=ws://YOUR_IP:3043`, `SITE_URL=http://YOUR_IP:3043`, `NEXTAUTH_URL=http://YOUR_IP:3043`. Restart, then access `http://YOUR_IP:3043`.
### Docker not ready
If setup fails with "Docker not ready", on Ubuntu run `./scripts/install-docker-ubuntu.sh`. If Docker is installed but you're not root, run `newgrp docker` then run the setup script again.
### Port conflicts (most common issue)
If the frontend or backend behaves unexpectedly (e.g., env vars seem ignored, changes don't take effect), **check for port conflicts first**:
```bash
# Check what's listening on key ports
lsof -i :3000 # frontend
lsof -i :1250 # backend
lsof -i :5432 # postgres
lsof -i :3900 # Garage S3 API
lsof -i :6379 # Redis
# Kill stale processes on a port
lsof -ti :3000 | xargs kill
```
Common causes:
- A stale `next dev` or `pnpm dev` process from another terminal/worktree
- Another Docker Compose project (different worktree) with containers on the same ports — the setup script only manages its own project; containers from other projects must be stopped manually (`docker ps` to find them, `docker stop` to kill them)
The setup script checks ports 3000, 1250, 5432, 6379, 3900, 3903 for conflicts before starting services. It ignores OrbStack/Docker Desktop port forwarding processes (which always bind these ports but are not real conflicts).
### OrbStack false port-conflict warnings (Mac)
If you use OrbStack as your Docker runtime, `lsof` will show OrbStack binding ports like 3000, 1250, etc. even when no containers are running. This is OrbStack's port forwarding mechanism — not a real conflict. The setup script filters these out automatically.
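A hedged sketch of that kind of filtering (the `OrbStack`, `com.docke`, and `vpnkit` process names are assumptions about how the forwarders appear in `lsof`):
```bash
# Show listeners on port 3000 while hiding Docker/OrbStack port-forwarding helpers (names assumed)
lsof -nP -iTCP:3000 -sTCP:LISTEN | grep -vE 'OrbStack|com\.docke|vpnkit' || echo "no real conflict on 3000"
```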
### Re-enabling authentication
Standalone runs without authentication (`FEATURE_REQUIRE_LOGIN=false`, `AUTH_BACKEND=none`). To re-enable:
1. In `www/.env.local`: set `FEATURE_REQUIRE_LOGIN=true`, uncomment `AUTHENTIK_ISSUER` and `AUTHENTIK_REFRESH_TOKEN_URL` (see the combined snippet after this list)
2. In `server/.env`: set `AUTH_BACKEND=authentik` (or your backend), configure `AUTH_JWT_AUDIENCE`
3. Restart: `docker compose -f docker-compose.standalone.yml up -d --force-recreate web server`
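Combined, the three steps amount to something like this (the Authentik URLs and audience are placeholders):
```bash
# www/.env.local
FEATURE_REQUIRE_LOGIN=true
AUTHENTIK_ISSUER=https://authentik.example.com/application/o/reflector
AUTHENTIK_REFRESH_TOKEN_URL=https://authentik.example.com/application/o/token/

# server/.env
AUTH_BACKEND=authentik
AUTH_JWT_AUDIENCE=your-client-id

# then restart the affected services
docker compose -f docker-compose.standalone.yml up -d --force-recreate web server
```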
## What's NOT covered
These require external accounts and infrastructure that can't be scripted:
- **Live meeting rooms** — requires Daily.co account, S3 bucket, IAM roles
- **Authentication** — requires Authentik deployment and OAuth configuration
- **Hatchet workflows** — requires separate Hatchet setup for multitrack processing
- **Production deployment** — see [Deployment Guide](./overview)
## Current status
All steps implemented. The setup script handles everything end-to-end:
- Step 1 (Ollama/LLM) — implemented
- Step 2 (environment files) — implemented
- Step 3 (object storage / Garage) — implemented
- Step 4 (transcription/diarization) — implemented (self-hosted ML service from `gpu/self_hosted/`, run as the CPU-only container)
- Steps 5-7 (Docker, migrations, health) — implemented
- **Unified script**: `scripts/setup-standalone.sh`


@@ -0,0 +1,472 @@
# How the Self-Hosted Setup Works
This document explains the internals of the self-hosted deployment: how the setup script orchestrates everything, how the Docker Compose profiles work, how services communicate, and how configuration flows from flags to running containers.
> For quick-start instructions and flag reference, see [Self-Hosted Production Deployment](selfhosted-production.md).
## Table of Contents
- [Overview](#overview)
- [The Setup Script Step by Step](#the-setup-script-step-by-step)
- [Docker Compose Profile System](#docker-compose-profile-system)
- [Service Architecture](#service-architecture)
- [Configuration Flow](#configuration-flow)
- [Storage Architecture](#storage-architecture)
- [SSL/TLS and Reverse Proxy](#ssltls-and-reverse-proxy)
- [Build vs Pull Workflow](#build-vs-pull-workflow)
- [Background Task Processing](#background-task-processing)
- [Network and Port Layout](#network-and-port-layout)
---
## Overview
The self-hosted deployment runs the entire Reflector platform on a single server using Docker Compose. A single bash script (`scripts/setup-selfhosted.sh`) handles all configuration and orchestration. The key design principles are:
- **One command to deploy** — flags select which features to enable
- **Idempotent** — safe to re-run without losing existing configuration
- **Profile-based composition** — Docker Compose profiles activate optional services
- **No external dependencies required** — with `--garage` and `--ollama-*`, everything runs locally
## The Setup Script Step by Step
The script (`scripts/setup-selfhosted.sh`) runs through Steps 0–7 in sequence. Here's what each one does and why.
### Step 0: Prerequisites
Validates the environment before doing anything:
- **Docker Compose V2** — checks `docker compose version` output (not the legacy `docker-compose`)
- **Docker daemon** — verifies `docker info` succeeds
- **NVIDIA GPU** — only checked when `--gpu` or `--ollama-gpu` is used; runs `nvidia-smi` to confirm drivers are installed
- **Compose file** — verifies `docker-compose.selfhosted.yml` exists at the expected path
If any check fails, the script exits with a clear error message and remediation steps.
### Step 1: Generate Secrets
Creates cryptographic secrets needed by the backend and frontend:
- **`SECRET_KEY`** — used by the FastAPI server for session signing (64 hex chars via `openssl rand -hex 32`)
- **`NEXTAUTH_SECRET`** — used by Next.js NextAuth for JWT signing
Secrets are only generated if they don't already exist or are still set to the placeholder value `changeme`. This is what makes the script idempotent for secrets.
If `--password` is passed, this step also generates a PBKDF2-SHA256 password hash from the provided password. The hash is computed using Python's stdlib (`hashlib.pbkdf2_hmac`) with 100,000 iterations and a random 16-byte salt, producing a hash in the format `pbkdf2:sha256:100000$<salt_hex>$<hash_hex>`.
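A hedged sketch of both generations (only the `openssl` flags, iteration count, and hash format come from the description above; the script's actual code may differ):
```bash
SECRET_KEY=$(openssl rand -hex 32)        # 64 hex chars, as described above
NEXTAUTH_SECRET=$(openssl rand -hex 32)   # generation method assumed to match

# PBKDF2-SHA256 hash in the pbkdf2:sha256:100000$<salt_hex>$<hash_hex> format
PASSWORD=mysecretpass
python3 - "$PASSWORD" <<'EOF'
import hashlib, os, sys
salt = os.urandom(16)
dk = hashlib.pbkdf2_hmac("sha256", sys.argv[1].encode(), salt, 100_000)
print(f"pbkdf2:sha256:100000${salt.hex()}${dk.hex()}")
EOF
```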
### Step 2: Generate `server/.env`
Creates or updates the backend environment file from `server/.env.selfhosted.example`. Sets:
- **Infrastructure** — PostgreSQL URL, Redis host, Celery broker (all pointing to Docker-internal hostnames)
- **Public URLs** — `BASE_URL` and `CORS_ORIGIN` computed from the domain (if `--domain`), IP (if detected on Linux), or `localhost`
- **WebRTC** — `WEBRTC_HOST` set to the server's LAN IP so browsers can reach UDP ICE candidates
- **Specialized models** — always points to `http://transcription:8000` (the Docker network alias shared by GPU and CPU containers)
- **HuggingFace token** — prompts interactively for pyannote model access; writes to root `.env` so Docker Compose can inject it into GPU/CPU containers
- **LLM** — if `--ollama-*` is used, configures `LLM_URL` pointing to the Ollama container. Otherwise, warns that the user needs to configure an external LLM
- **Public mode** — sets `PUBLIC_MODE=true` so the app is accessible without authentication by default
- **Password auth** — if `--password` is passed, sets `AUTH_BACKEND=password`, `PUBLIC_MODE=false`, `ADMIN_EMAIL=admin@localhost`, and `ADMIN_PASSWORD_HASH` (the hash generated in Step 1). The admin user is provisioned in the database on container startup via `runserver.sh`
The script uses `env_set` for each variable, which either updates an existing line or appends a new one. This means re-running the script updates values in-place without duplicating keys.
### Step 3: Generate `www/.env`
Creates or updates the frontend environment file from `www/.env.selfhosted.example`. Sets:
- **`SITE_URL` / `NEXTAUTH_URL` / `API_URL`** — all set to the same public-facing URL (with `https://` if Caddy is enabled)
- **`WEBSOCKET_URL`** — set to `auto`, which tells the frontend to derive the WebSocket URL from the page URL automatically
- **`SERVER_API_URL`** — always `http://server:1250` (Docker-internal, used for server-side rendering)
- **`KV_URL`** — Redis URL for Next.js caching
- **`FEATURE_REQUIRE_LOGIN`** — `false` by default (matches `PUBLIC_MODE=true` on the backend)
- **Password auth** — if `--password` is passed, sets `FEATURE_REQUIRE_LOGIN=true` and `AUTH_PROVIDER=credentials`, which tells the frontend to use a local email/password login form instead of Authentik OAuth
### Step 4: Storage Setup
Branches based on whether `--garage` was passed:
**With `--garage` (local S3):**
1. Generates `data/garage.toml` from a template, injecting a random RPC secret
2. Starts only the Garage container (`docker compose --profile garage up -d garage`)
3. Waits for the Garage admin API to respond on port 3903
4. Assigns the node to a storage layout (1GB capacity, zone `dc1`)
5. Creates the `reflector-media` bucket
6. Creates an access key named `reflector` and grants it read/write on the bucket
7. Writes all S3 credentials (`ENDPOINT_URL`, `BUCKET_NAME`, `REGION`, `ACCESS_KEY_ID`, `SECRET_ACCESS_KEY`) to `server/.env`
The Garage endpoint is `http://garage:3900` (Docker-internal), and the region is set to `garage` (arbitrary, Garage ignores it). The boto3 client uses path-style addressing when an endpoint URL is configured, which is required for S3-compatible services like Garage.
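For reference, a hedged sketch of the same provisioning done by hand with the `garage` CLI inside the container (the script may instead use the admin API on port 3903, and Garage subcommand names vary a little between versions):
```bash
COMPOSE="docker compose -f docker-compose.selfhosted.yml --profile garage"
$COMPOSE up -d garage

# Node ID is needed for the layout; `garage node id -q` prints <id>@<addr>
NODE_ID=$($COMPOSE exec -T garage /garage node id -q | cut -d@ -f1)

$COMPOSE exec -T garage /garage layout assign -z dc1 -c 1G "$NODE_ID"
$COMPOSE exec -T garage /garage layout apply --version 1
$COMPOSE exec -T garage /garage bucket create reflector-media
$COMPOSE exec -T garage /garage key create reflector           # older releases: key new --name reflector
$COMPOSE exec -T garage /garage bucket allow --read --write reflector-media --key reflector
```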
**Without `--garage` (external S3):**
1. Checks `server/.env` for the four required S3 variables
2. If any are missing, prompts interactively for each one
3. Optionally prompts for an endpoint URL (for MinIO, Backblaze B2, etc.)
### Step 5: Caddyfile
Only runs when `--caddy` or `--domain` is used. Generates a Caddy configuration file:
**With `--domain`:** Creates a named site block (`reflector.example.com { ... }`). Caddy automatically provisions a Let's Encrypt certificate for this domain. Requires DNS pointing to the server and ports 80/443 open.
**Without `--domain` (IP access):** Creates a catch-all `:443 { tls internal ... }` block. Caddy generates a self-signed certificate. Browsers will show a security warning.
Both configurations route:
- `/v1/*` and `/health` to the backend (`server:1250`)
- Everything else to the frontend (`web:3000`)
### Step 6: Start Services
1. **Always builds the GPU/CPU model image** — these are never prebuilt because they contain ML model download logic specific to the host's hardware
2. **With `--build`:** Also builds backend (server, worker, beat) and frontend (web) images from source
3. **Without `--build`:** Pulls prebuilt images from the Docker registry (`monadicalsas/reflector-backend:latest`, `monadicalsas/reflector-frontend:latest`)
4. **Starts all services** — `docker compose up -d` with the active profiles
5. **Quick sanity check** — after 3 seconds, checks for any containers that exited immediately
### Step 7: Health Checks
Waits for each service in order, with generous timeouts:
| Service | Check | Timeout | Notes |
|---------|-------|---------|-------|
| GPU/CPU models | `curl http://localhost:8000/docs` | 10 min (120 x 5s) | First start downloads ~1GB of models |
| Ollama | `curl http://localhost:11435/api/tags` | 3 min (60 x 3s) | Then pulls the selected model |
| Server API | `curl http://localhost:1250/health` | 7.5 min (90 x 5s) | First start runs database migrations |
| Frontend | `curl http://localhost:3000` | 1.5 min (30 x 3s) | Next.js build on first start |
| Caddy | `curl -k https://localhost` | Quick check | After other services are up |
If the server container exits during the health check, the script dumps diagnostics (container statuses + logs) before exiting.
After the Ollama health check passes, the script checks if the selected model is already pulled. If not, it runs `ollama pull <model>` inside the container.
---
## Docker Compose Profile System
The compose file (`docker-compose.selfhosted.yml`) uses Docker Compose profiles to make services optional. Only services whose profiles match the active `--profile` flags are started.
### Always-on Services (no profile)
These start regardless of which flags you pass:
| Service | Role | Image |
|---------|------|-------|
| `server` | FastAPI backend, API endpoints, WebRTC | `monadicalsas/reflector-backend:latest` |
| `worker` | Celery worker for background processing | Same image, `ENTRYPOINT=worker` |
| `beat` | Celery beat scheduler for periodic tasks | Same image, `ENTRYPOINT=beat` |
| `web` | Next.js frontend | `monadicalsas/reflector-frontend:latest` |
| `redis` | Message broker + caching | `redis:7.2-alpine` |
| `postgres` | Primary database | `postgres:17-alpine` |
### Profile-Based Services
| Profile | Service | Role |
|---------|---------|------|
| `gpu` | `gpu` | NVIDIA GPU-accelerated transcription/diarization/translation |
| `cpu` | `cpu` | CPU-only transcription/diarization/translation |
| `ollama-gpu` | `ollama` | Local Ollama LLM with GPU |
| `ollama-cpu` | `ollama-cpu` | Local Ollama LLM on CPU |
| `garage` | `garage` | Local S3-compatible object storage |
| `caddy` | `caddy` | Reverse proxy with SSL |
### The "transcription" Alias
Both the `gpu` and `cpu` services define a Docker network alias of `transcription`. This means the backend always connects to `http://transcription:8000` regardless of which profile is active. The alias is defined in the compose file's `networks.default.aliases` section.
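To confirm the alias from the backend container (curl is available there, as the debug commands later in this document assume):
```bash
# Should succeed whichever profile (gpu or cpu) is active
docker compose -f docker-compose.selfhosted.yml exec server \
  curl -sf http://transcription:8000/docs > /dev/null && echo "transcription alias reachable"
```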
---
## Service Architecture
```
┌─────────────┐
Internet ────────>│ Caddy │ :80/:443 (profile: caddy)
└──────┬──────┘
┌────────────┼────────────┐
│ │ │
v v │
┌─────────┐ ┌─────────┐ │
│ web │ │ server │ │
│ :3000 │ │ :1250 │ │
└─────────┘ └────┬────┘ │
│ │
┌────┴────┐ │
│ worker │ │
│ beat │ │
└────┬────┘ │
│ │
┌──────────────┼────────────┤
│ │ │
v v v
┌───────────┐ ┌─────────┐ ┌─────────┐
│transcription│ │postgres │ │ redis │
│ (gpu/cpu) │ │ :5432 │ │ :6379 │
│ :8000 │ └─────────┘ └─────────┘
└───────────┘
┌─────┴─────┐ ┌─────────┐
│ ollama │ │ garage │
│(optional) │ │(optional│
│ :11435 │ │ S3) │
└───────────┘ └─────────┘
```
### How Services Interact
1. **User request** hits Caddy (if enabled), which routes to `web` (pages) or `server` (API)
2. **`web`** renders pages server-side using `SERVER_API_URL=http://server:1250` and client-side using the public `API_URL`
3. **`server`** handles API requests, file uploads, WebRTC streaming. Dispatches background work to Celery via Redis
4. **`worker`** picks up Celery tasks (transcription pipelines, audio processing). Calls `transcription:8000` for ML inference and uploads results to S3 storage
5. **`beat`** schedules periodic tasks (cleanup, webhook retries) by pushing them onto the Celery queue
6. **`transcription` (gpu/cpu)** runs Whisper/Parakeet (transcription), Pyannote (diarization), and translation models. Stateless HTTP API
7. **`ollama`** provides an OpenAI-compatible API for summarization and topic detection. Called by the worker during post-processing
8. **`garage`** provides S3-compatible storage for audio files and processed results. Accessed by the worker via boto3
---
## Configuration Flow
Environment variables flow through multiple layers. Understanding this prevents confusion when debugging:
```
Flags (--gpu, --garage, etc.)
├── setup-selfhosted.sh interprets flags
│ │
│ ├── Writes server/.env (backend config)
│ ├── Writes www/.env (frontend config)
│ ├── Writes .env (HF_TOKEN for compose interpolation)
│ └── Writes Caddyfile (proxy routes)
└── docker-compose.selfhosted.yml reads:
├── env_file: ./server/.env (loaded into server, worker, beat)
├── env_file: ./www/.env (loaded into web)
├── .env (compose variable interpolation, e.g. ${HF_TOKEN})
└── environment: {...} (hardcoded overrides, always win over env_file)
```
### Precedence Rules
Docker Compose `environment:` keys **always override** `env_file:` values. This is by design — the compose file hardcodes infrastructure values that must be correct inside the Docker network (like `DATABASE_URL=postgresql+asyncpg://...@postgres:5432/...`) regardless of what's in `server/.env`.
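To confirm which value a container actually sees, inspect its environment (same pattern as the debug commands at the end of this document):
```bash
# The compose-file override should win for infrastructure values
docker compose -f docker-compose.selfhosted.yml exec server env | grep -E '^(DATABASE_URL|REDIS_HOST)='
```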
The `server/.env` file is still useful for:
- Values not overridden in the compose file (LLM config, storage credentials, auth settings)
- Running the server outside Docker during development
### The Three `.env` Files
| File | Used By | Contains |
|------|---------|----------|
| `server/.env` | server, worker, beat | Backend config: database, Redis, S3, LLM, auth, public URLs |
| `www/.env` | web | Frontend config: site URL, auth, feature flags |
| `.env` (root) | Docker Compose interpolation | Only `HF_TOKEN` — injected into GPU/CPU container env |
---
## Storage Architecture
All audio files and processing results are stored in S3-compatible object storage. The backend uses boto3 (via aioboto3) with automatic path-style addressing when a custom endpoint URL is configured.
### How Garage Works
Garage is a lightweight, self-hosted S3-compatible storage engine. In this deployment:
- Runs as a single-node cluster with 1GB capacity allocation
- Listens on port 3900 (S3 API) and 3903 (admin API)
- Data persists in Docker volumes (`garage_data`, `garage_meta`)
- Accessed by the worker at `http://garage:3900` (Docker-internal)
The setup script creates:
- A bucket called `reflector-media`
- An access key called `reflector` with read/write permissions on that bucket
### Path-Style vs Virtual-Hosted Addressing
AWS S3 uses virtual-hosted addressing by default (`bucket.s3.amazonaws.com`). S3-compatible services like Garage require path-style addressing (`endpoint/bucket`). The `AwsStorage` class detects this automatically: when `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL` is set, it configures boto3 with `addressing_style: "path"`.
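A hedged sketch of that configuration with boto3 directly (not the project's `AwsStorage` code), using the `TRANSCRIPT_STORAGE_AWS_*` variable names from the deployment guide:
```bash
# Run on a host with boto3 installed and the storage variables exported
python3 - <<'EOF'
import os
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL", "http://localhost:3900"),
    aws_access_key_id=os.environ["TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY"],
    region_name=os.environ.get("TRANSCRIPT_STORAGE_AWS_REGION", "garage"),
    config=Config(s3={"addressing_style": "path"}),  # path-style, as Garage and MinIO require
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
EOF
```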
---
## SSL/TLS and Reverse Proxy
### With `--domain` (Production)
Caddy automatically obtains and renews a Let's Encrypt certificate. Requirements:
- DNS A record pointing to the server
- Ports 80 (HTTP challenge) and 443 (HTTPS) open to the internet
The generated Caddyfile uses the domain as the site address, which triggers Caddy's automatic HTTPS.
### Without `--domain` (Development/LAN)
Caddy generates a self-signed certificate and listens on `:443` as a catch-all. Browsers will show a security warning that must be accepted manually.
### Without `--caddy` (BYO Proxy)
No ports are exposed to the internet. The services listen on `127.0.0.1` only:
- Frontend: `localhost:3000`
- Backend API: `localhost:1250`
You can point your own reverse proxy (nginx, Traefik, etc.) at these ports.
### WebRTC and UDP
The server exposes UDP ports 50000-50100 for WebRTC ICE candidates. The `WEBRTC_HOST` variable tells the server which IP to advertise in ICE candidates — this must be the server's actual IP address (not a domain), because WebRTC uses UDP which doesn't go through the HTTP reverse proxy.
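Two related spot checks (the `ufw` rule is just an example for hosts that use ufw):
```bash
grep '^WEBRTC_HOST=' server/.env      # should be the server's reachable IP, not a domain
sudo ufw allow 50000:50100/udp        # open the ICE candidate range if ufw is active
```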
---
## Build vs Pull Workflow
### Default (no `--build` flag)
```
GPU/CPU model image: Always built from source (./gpu/self_hosted/)
Backend image: Pulled from monadicalsas/reflector-backend:latest
Frontend image: Pulled from monadicalsas/reflector-frontend:latest
```
The GPU/CPU image is always built because it contains hardware-specific build steps and ML model download logic.
### With `--build`
```
GPU/CPU model image: Built from source (./gpu/self_hosted/)
Backend image: Built from source (./server/)
Frontend image: Built from source (./www/)
```
Use `--build` when:
- You've made local code changes
- The prebuilt registry images are outdated
- You want to verify the build works on your hardware
### Rebuilding Individual Services
```bash
# Rebuild just the backend
docker compose -f docker-compose.selfhosted.yml build server worker beat
# Rebuild just the frontend
docker compose -f docker-compose.selfhosted.yml build web
# Rebuild the GPU model container
docker compose -f docker-compose.selfhosted.yml build gpu
# Force a clean rebuild (no cache)
docker compose -f docker-compose.selfhosted.yml build --no-cache server
```
---
## Background Task Processing
### Celery Architecture
The backend uses Celery for all background work, with Redis as the message broker:
- **`worker`** — picks up tasks from the Redis queue and executes them
- **`beat`** — schedules periodic tasks (cron-like) by pushing them onto the queue
- **`Redis`** — acts as both message broker and result backend
### The Audio Processing Pipeline
When a file is uploaded, the worker runs a multi-step pipeline:
```
Upload → Extract Audio → Upload to S3
┌──────┼──────┐
│ │ │
v v v
Transcribe Diarize Waveform
│ │ │
└──────┼──────┘
Assemble
┌──────┼──────┐
v v v
Topics Title Summaries
Done
```
Transcription, diarization, and waveform generation run in parallel. After assembly, topic detection, title generation, and summarization also run in parallel. Each step calls the appropriate service (transcription container for ML, Ollama/external LLM for text generation, S3 for storage).
### Event Loop Management
Each Celery task runs in its own `asyncio.run()` call, which creates a fresh event loop. The `asynctask` decorator in `server/reflector/asynctask.py` handles:
1. **Database connections** — resets the connection pool before each task (connections from a previous event loop would cause "Future attached to a different loop" errors)
2. **Redis connections** — resets the WebSocket manager singleton so Redis pub/sub reconnects on the current loop
3. **Cleanup** — disconnects the database and clears the context variable in the `finally` block
---
## Network and Port Layout
All services communicate over Docker's default bridge network. Only specific ports are exposed to the host:
| Port | Service | Binding | Purpose |
|------|---------|---------|---------|
| 80 | Caddy | `0.0.0.0:80` | HTTP (redirect to HTTPS / Let's Encrypt challenge) |
| 443 | Caddy | `0.0.0.0:443` | HTTPS (main entry point) |
| 1250 | Server | `127.0.0.1:1250` | Backend API (localhost only) |
| 3000 | Web | `127.0.0.1:3000` | Frontend (localhost only) |
| 3900 | Garage | `0.0.0.0:3900` | S3 API (for admin/debug access) |
| 3903 | Garage | `0.0.0.0:3903` | Garage admin API |
| 8000 | GPU/CPU | `127.0.0.1:8000` | ML model API (localhost only) |
| 11435 | Ollama | `127.0.0.1:11435` | Ollama API (localhost only) |
| 50000-50100/udp | Server | `0.0.0.0:50000-50100` | WebRTC ICE candidates |
Services bound to `127.0.0.1` are only accessible from the host itself (not from the network). Caddy is the only service exposed to the internet on standard HTTP/HTTPS ports.
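To verify the bindings from the host (using `ss`, which ships with Ubuntu):
```bash
sudo ss -tlnp | grep -E ':(1250|3000|8000|11435)\b'   # expect 127.0.0.1
sudo ss -tlnp | grep -E ':(80|443)\b'                  # expect 0.0.0.0 when Caddy is enabled
sudo ss -ulnp | grep -E ':(50000|50100)\b'             # WebRTC UDP range endpoints
```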
### Docker-Internal Hostnames
Inside the Docker network, services reach each other by their compose service name:
| Hostname | Resolves To |
|----------|-------------|
| `server` | Backend API container |
| `web` | Frontend container |
| `postgres` | PostgreSQL container |
| `redis` | Redis container |
| `transcription` | GPU or CPU container (network alias) |
| `ollama` / `ollama-cpu` | Ollama container |
| `garage` | Garage S3 container |
---
## Diagnostics and Error Handling
The setup script includes an `ERR` trap that automatically dumps diagnostics when any command fails:
1. Lists all container statuses
2. Shows the last 30 lines of logs for any stopped/exited containers
3. Shows the last 40 lines of the specific failing service
This means if something goes wrong during setup, you'll see the relevant logs immediately without having to run manual debug commands.
### Common Debug Commands
```bash
# Overall status
docker compose -f docker-compose.selfhosted.yml ps
# Logs for a specific service
docker compose -f docker-compose.selfhosted.yml logs server --tail 50
docker compose -f docker-compose.selfhosted.yml logs worker --tail 50
# Check environment inside a container
docker compose -f docker-compose.selfhosted.yml exec server env | grep TRANSCRIPT
# Health check from inside the network
docker compose -f docker-compose.selfhosted.yml exec server curl http://localhost:1250/health
# Check S3 storage connectivity
docker compose -f docker-compose.selfhosted.yml exec server curl http://garage:3900
# Database access
docker compose -f docker-compose.selfhosted.yml exec postgres psql -U reflector -c "SELECT id, status FROM transcript ORDER BY created_at DESC LIMIT 5;"
# List files in server data directory
docker compose -f docker-compose.selfhosted.yml exec server ls -la /app/data/
```


@@ -0,0 +1,519 @@
# Self-Hosted Production Deployment
Deploy Reflector on a single server with everything running in Docker. Transcription, diarization, and translation use specialized ML models (Whisper/Parakeet, Pyannote); only summarization and topic detection require an LLM.
> For a detailed walkthrough of how the setup script and infrastructure work under the hood, see [How the Self-Hosted Setup Works](selfhosted-architecture.md).
## Prerequisites
### Hardware
- **With GPU**: Linux server with NVIDIA GPU (8GB+ VRAM recommended), 16GB+ RAM, 50GB+ disk
- **CPU-only**: 8+ cores, 32GB+ RAM (transcription is slower but works)
- Disk space for ML models (~2GB on first run) + audio storage
### Software
- Docker Engine 24+ with Compose V2
- NVIDIA drivers + `nvidia-container-toolkit` (GPU modes only)
- `curl`, `openssl` (usually pre-installed)
### Accounts & Credentials (depending on options)
**Always recommended:**
- **HuggingFace token** — For downloading pyannote speaker diarization models. Get one at https://huggingface.co/settings/tokens and accept the model licenses:
- https://huggingface.co/pyannote/speaker-diarization-3.1
- https://huggingface.co/pyannote/segmentation-3.0
- The setup script will prompt for this. If skipped, diarization falls back to a public model bundle (may be less reliable).
**LLM for summarization & topic detection (pick one):**
- **With `--ollama-gpu` or `--ollama-cpu`**: Nothing extra — Ollama runs locally and pulls the model automatically
- **Without `--ollama-*`**: An OpenAI-compatible LLM API key and endpoint. Examples:
- OpenAI: `LLM_URL=https://api.openai.com/v1`, `LLM_API_KEY=sk-...`, `LLM_MODEL=gpt-4o-mini`
- Anthropic, Together, Groq, or any OpenAI-compatible API
- A self-managed vLLM or Ollama instance elsewhere on the network
**Object storage (pick one):**
- **With `--garage`**: Nothing extra — Garage (local S3-compatible storage) is auto-configured by the script
- **Without `--garage`**: S3-compatible storage credentials. The script will prompt for these, or you can pre-fill `server/.env`. Options include:
- **AWS S3**: Access Key ID, Secret Access Key, bucket name, region
- **MinIO**: Same credentials + `TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL=http://your-minio:9000`
- **Any S3-compatible provider** (Backblaze B2, Cloudflare R2, DigitalOcean Spaces, etc.): same fields + custom endpoint URL
**Optional add-ons (configure after initial setup):**
- **Daily.co** (live meeting rooms): Requires a Daily.co account (https://www.daily.co/), API key, subdomain, and an AWS S3 bucket + IAM Role for recording storage. See [Enabling Daily.co Live Rooms](#enabling-dailyco-live-rooms) below.
- **Authentik** (user authentication): Requires an Authentik instance with an OAuth2/OIDC application configured for Reflector. See [Enabling Authentication](#enabling-authentication-authentik) below.
## Quick Start
```bash
git clone https://github.com/Monadical-SAS/reflector.git
cd reflector
# GPU + local Ollama LLM + local Garage storage + Caddy SSL (with domain):
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --domain reflector.example.com
# Same but without a domain (self-signed cert, access via IP):
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy
# CPU-only (same, but slower):
./scripts/setup-selfhosted.sh --cpu --ollama-cpu --garage --caddy
# With password authentication (single admin user):
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --password mysecretpass
# Build from source instead of pulling prebuilt images:
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --build
```
That's it. The script generates env files, secrets, starts all containers, waits for health checks, and prints the URL.
## Specialized Models (Required)
Pick `--gpu` or `--cpu`. This determines how **transcription, diarization, and translation** run:
| Flag | What it does | Requires |
|------|-------------|----------|
| `--gpu` | NVIDIA GPU acceleration for ML models | NVIDIA GPU + drivers + `nvidia-container-toolkit` |
| `--cpu` | CPU-only (slower but works without GPU) | 8+ cores, 32GB+ RAM recommended |
## Local LLM (Optional)
Optionally add `--ollama-gpu` or `--ollama-cpu` for a **local Ollama instance** that handles summarization and topic detection. If omitted, configure an external OpenAI-compatible LLM in `server/.env`.
| Flag | What it does | Requires |
|------|-------------|----------|
| `--ollama-gpu` | Local Ollama with NVIDIA GPU acceleration | NVIDIA GPU |
| `--ollama-cpu` | Local Ollama on CPU only | Nothing extra |
| `--llm-model MODEL` | Choose which Ollama model to download (default: `qwen2.5:14b`) | `--ollama-gpu` or `--ollama-cpu` |
| *(omitted)* | User configures external LLM (OpenAI, Anthropic, etc.) | LLM API key |
### Choosing an Ollama model
The default model is `qwen2.5:14b` (~9GB download, good multilingual support and summary quality). Override with `--llm-model`:
```bash
# Default (qwen2.5:14b)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy
# Mistral — good balance of speed and quality (~4.1GB)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --llm-model mistral --garage --caddy
# Phi-4 — smaller and faster (~9.1GB)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --llm-model phi4 --garage --caddy
# Llama 3.3 70B — best quality, needs 48GB+ RAM or GPU VRAM (~43GB)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --llm-model llama3.3:70b --garage --caddy
# Gemma 2 9B (~5.4GB)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --llm-model gemma2 --garage --caddy
# DeepSeek R1 8B — reasoning model, verbose but thorough summaries (~4.9GB)
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --llm-model deepseek-r1:8b --garage --caddy
```
Browse all available models at https://ollama.com/library.
### Recommended combinations
- **`--gpu --ollama-gpu`**: Best for servers with NVIDIA GPU. Fully self-contained, no external API keys needed.
- **`--cpu --ollama-cpu`**: No GPU available but want everything self-contained. Slower but works.
- **`--gpu --ollama-cpu`**: GPU for transcription, CPU for LLM. Saves GPU VRAM for ML models.
- **`--gpu`**: Have NVIDIA GPU but prefer a cloud LLM (faster/better summaries with GPT-4, Claude, etc.).
- **`--cpu`**: No GPU, prefer cloud LLM. Slowest transcription but best summary quality.
## Other Optional Flags
| Flag | What it does |
|------|-------------|
| `--garage` | Starts Garage (local S3-compatible storage). Auto-configures bucket, keys, and env vars. |
| `--caddy` | Starts Caddy reverse proxy on ports 80/443 with self-signed cert. |
| `--domain DOMAIN` | Use a real domain with Let's Encrypt auto-HTTPS (implies `--caddy`). Requires DNS A record pointing to this server and ports 80/443 open. |
| `--password PASS` | Enable password authentication with an `admin@localhost` user. Sets `AUTH_BACKEND=password`, `PUBLIC_MODE=false`. See [Enabling Password Authentication](#enabling-password-authentication). |
| `--build` | Build backend (server, worker, beat) and frontend (web) Docker images from source instead of pulling prebuilt images from the registry. Useful for development or when running a version with local changes. |
Without `--garage`, you **must** provide S3-compatible credentials (the script will prompt interactively or you can pre-fill `server/.env`).
Without `--caddy` or `--domain`, no ports are exposed. Point your own reverse proxy at `web:3000` (frontend) and `server:1250` (API).
**Using a domain (recommended for production):** Point a DNS A record at your server's IP, then pass `--domain your.domain.com`. Caddy will automatically obtain and renew a Let's Encrypt certificate. Ports 80 and 443 must be open.
**Without a domain:** `--caddy` alone uses a self-signed certificate. Browsers will show a security warning that must be accepted.
## What the Script Does
1. **Prerequisites check** — Docker, NVIDIA GPU (if needed), compose file exists
2. **Generate secrets** — `SECRET_KEY`, `NEXTAUTH_SECRET` via `openssl rand`
3. **Generate `server/.env`** — From template, sets infrastructure defaults, configures LLM based on mode, enables `PUBLIC_MODE`
4. **Generate `www/.env`** — Auto-detects server IP, sets URLs
5. **Storage setup** — Either initializes Garage (bucket, keys, permissions) or prompts for external S3 credentials
6. **Caddyfile** — Generates domain-specific (Let's Encrypt) or IP-specific (self-signed) configuration
7. **Build & start** — Always builds GPU/CPU model image from source. With `--build`, also builds backend and frontend from source; otherwise pulls prebuilt images from the registry
8. **Health checks** — Waits for each service, pulls Ollama model if needed, warns about missing LLM config
> For a deeper dive into each step, see [How the Self-Hosted Setup Works](selfhosted-architecture.md).
## Configuration Reference
### Server Environment (`server/.env`)
| Variable | Description | Default |
|----------|-------------|---------|
| `DATABASE_URL` | PostgreSQL connection | Auto-set (Docker internal) |
| `REDIS_HOST` | Redis hostname | Auto-set (`redis`) |
| `SECRET_KEY` | App secret | Auto-generated |
| `AUTH_BACKEND` | Authentication method (`none`, `password`, `jwt`) | `none` |
| `PUBLIC_MODE` | Allow unauthenticated access | `true` |
| `ADMIN_EMAIL` | Admin email for password auth | *(unset)* |
| `ADMIN_PASSWORD_HASH` | PBKDF2 hash for password auth | *(unset)* |
| `WEBRTC_HOST` | IP advertised in WebRTC ICE candidates | Auto-detected (server IP) |
| `TRANSCRIPT_URL` | Specialized model endpoint | `http://transcription:8000` |
| `LLM_URL` | OpenAI-compatible LLM endpoint | Auto-set for Ollama modes |
| `LLM_API_KEY` | LLM API key | `not-needed` for Ollama |
| `LLM_MODEL` | LLM model name | `qwen2.5:14b` for Ollama (override with `--llm-model`) |
| `CELERY_BEAT_POLL_INTERVAL` | Override all worker polling intervals (seconds). `0` = use individual defaults | `300` (selfhosted), `0` (other) |
| `TRANSCRIPT_STORAGE_BACKEND` | Storage backend | `aws` |
| `TRANSCRIPT_STORAGE_AWS_*` | S3 credentials | Auto-set for Garage |
### Frontend Environment (`www/.env`)
| Variable | Description | Default |
|----------|-------------|---------|
| `SITE_URL` | Public-facing URL | Auto-detected |
| `API_URL` | API URL (browser-side) | Same as SITE_URL |
| `SERVER_API_URL` | API URL (server-side) | `http://server:1250` |
| `NEXTAUTH_SECRET` | Auth secret | Auto-generated |
| `FEATURE_REQUIRE_LOGIN` | Require authentication | `false` |
| `AUTH_PROVIDER` | Auth provider (`authentik` or `credentials`) | *(unset)* |
## Storage Options
### Garage (Recommended for Self-Hosted)
Use `--garage` flag. The script automatically:
- Generates `data/garage.toml` with a random RPC secret
- Starts the Garage container
- Creates the `reflector-media` bucket
- Creates an access key with read/write permissions
- Writes all S3 credentials to `server/.env`
### External S3 (AWS, MinIO, etc.)
Don't use `--garage`. The script will prompt for:
- Access Key ID
- Secret Access Key
- Bucket Name
- Region
- Endpoint URL (for non-AWS like MinIO)
Or pre-fill in `server/.env`:
```env
TRANSCRIPT_STORAGE_BACKEND=aws
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=your-key
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=your-secret
TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-media
TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
# For non-AWS S3 (MinIO, etc.):
TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL=http://minio:9000
```
## What Authentication Enables
By default, Reflector runs in **public mode** (`AUTH_BACKEND=none`, `PUBLIC_MODE=true`) — anyone can create and view transcripts without logging in. Transcripts are anonymous (not linked to any user) and cannot be edited or deleted after creation.
Enabling authentication (either password or Authentik) unlocks:
| Feature | Public mode (no auth) | With authentication |
|---------|----------------------|---------------------|
| Create transcripts (record/upload) | Yes (anonymous, unowned) | Yes (owned by user) |
| View transcripts | All transcripts visible | Own transcripts + shared rooms |
| Edit/delete transcripts | No | Yes (owner only) |
| Privacy controls (private/semi-private/public) | No (everything public) | Yes (owner can set share mode) |
| Speaker reassignment and merging | No | Yes (owner only) |
| Participant management (add/edit/delete) | Read-only | Full CRUD (owner only) |
| Create rooms | No | Yes |
| Edit/delete rooms | No | Yes (owner only) |
| Room calendar (ICS) sync | No | Yes (owner only) |
| API key management | No | Yes |
| Post to Zulip | No | Yes (owner only) |
| Real-time WebSocket notifications | No (connection closed) | Yes (transcript create/delete events) |
| Meeting host access (Daily.co token) | No | Yes (room owner) |
In short: public mode is "demo-friendly" — great for trying Reflector out. Authentication adds **ownership, privacy, and management** of your data.
## Authentication Options
Reflector supports three authentication backends:
| Backend | `AUTH_BACKEND` | Use case |
|---------|---------------|----------|
| `none` | `none` | Public/demo mode, no login required |
| `password` | `password` | Single-user self-hosted, simple email/password login |
| `jwt` | `jwt` | Multi-user via Authentik (OAuth2/OIDC) |
## Enabling Password Authentication
The simplest way to add authentication. Creates a single admin user with email/password login — no external identity provider needed.
### Quick setup (recommended)
Pass `--password` to the setup script:
```bash
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy --password mysecretpass
```
This automatically:
- Sets `AUTH_BACKEND=password` and `PUBLIC_MODE=false` in `server/.env`
- Creates an `admin@localhost` user with the given password
- Sets `FEATURE_REQUIRE_LOGIN=true` and `AUTH_PROVIDER=credentials` in `www/.env`
- Provisions the admin user in the database on container startup
### Manual setup
If you prefer to configure manually or want to change the admin email:
1. Generate a password hash:
```bash
cd server
uv run python -m reflector.tools.create_admin --hash-only --password yourpassword
```
2. Update `server/.env`:
```env
AUTH_BACKEND=password
PUBLIC_MODE=false
ADMIN_EMAIL=admin@yourdomain.com
ADMIN_PASSWORD_HASH=pbkdf2:sha256:100000$<salt>$<hash>
```
3. Update `www/.env`:
```env
FEATURE_REQUIRE_LOGIN=true
AUTH_PROVIDER=credentials
```
4. Restart:
```bash
docker compose -f docker-compose.selfhosted.yml down
./scripts/setup-selfhosted.sh <same-flags>
```
### How it works
- The backend issues HS256 JWTs (signed with `SECRET_KEY`) on successful login via `POST /v1/auth/login` (see the example request after this list)
- Tokens expire after 24 hours; the user must log in again after expiry
- The frontend shows a login page at `/login` with email and password fields
- A rate limiter blocks IPs after 10 failed login attempts within 5 minutes
- The admin user is provisioned automatically on container startup from `ADMIN_EMAIL` and `ADMIN_PASSWORD_HASH` environment variables
- Passwords are hashed with PBKDF2-SHA256 (100,000 iterations) — no additional dependencies required
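For illustration only, a login request might look like the following; the JSON field names and response shape are assumptions, not confirmed here:
```bash
curl -s -X POST 'https://reflector.example.com/v1/auth/login' \
  -H 'Content-Type: application/json' \
  -d '{"email":"admin@localhost","password":"yourpassword"}'
# Expected: a JSON body containing a token to send as "Authorization: Bearer <token>" (assumed)
```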
### Changing the admin password
```bash
cd server
uv run python -m reflector.tools.create_admin --email admin@localhost --password newpassword
```
Or update `ADMIN_PASSWORD_HASH` in `server/.env` and restart the containers.
## Enabling Authentication (Authentik)
For multi-user deployments with SSO. Requires an external Authentik instance.
By default, authentication is disabled (`AUTH_BACKEND=none`, `FEATURE_REQUIRE_LOGIN=false`). To enable:
1. Deploy an Authentik instance (see [Authentik docs](https://goauthentik.io/docs/installation))
2. Create an OAuth2/OIDC application for Reflector
3. Update `server/.env`:
```env
AUTH_BACKEND=jwt
AUTH_JWT_AUDIENCE=your-client-id
```
4. Update `www/.env`:
```env
FEATURE_REQUIRE_LOGIN=true
AUTH_PROVIDER=authentik
AUTHENTIK_ISSUER=https://authentik.example.com/application/o/reflector
AUTHENTIK_REFRESH_TOKEN_URL=https://authentik.example.com/application/o/token/
AUTHENTIK_CLIENT_ID=your-client-id
AUTHENTIK_CLIENT_SECRET=your-client-secret
```
5. Restart: `docker compose -f docker-compose.selfhosted.yml down && ./scripts/setup-selfhosted.sh <same-flags>`
## Enabling Daily.co Live Rooms
Daily.co enables real-time meeting rooms with automatic recording and transcription.
1. Create a [Daily.co](https://www.daily.co/) account
2. Add to `server/.env`:
```env
DEFAULT_VIDEO_PLATFORM=daily
DAILY_API_KEY=your-daily-api-key
DAILY_SUBDOMAIN=your-subdomain
DAILY_WEBHOOK_SECRET=your-webhook-secret
DAILYCO_STORAGE_AWS_BUCKET_NAME=reflector-dailyco
DAILYCO_STORAGE_AWS_REGION=us-east-1
DAILYCO_STORAGE_AWS_ROLE_ARN=arn:aws:iam::role/DailyCoAccess
```
3. Restart the server: `docker compose -f docker-compose.selfhosted.yml restart server worker`
## Enabling Real Domain with Let's Encrypt
By default, Caddy uses self-signed certificates. For a real domain:
1. Point your domain's DNS to your server's IP
2. Ensure ports 80 and 443 are open
3. Edit `Caddyfile`:
```
reflector.example.com {
handle /v1/* {
reverse_proxy server:1250
}
handle /health {
reverse_proxy server:1250
}
handle {
reverse_proxy web:3000
}
}
```
4. Update `www/.env`:
```env
SITE_URL=https://reflector.example.com
NEXTAUTH_URL=https://reflector.example.com
API_URL=https://reflector.example.com
```
5. Restart Caddy: `docker compose -f docker-compose.selfhosted.yml restart caddy web`
## Worker Polling Frequency
The selfhosted setup defaults all background worker polling intervals to **300 seconds (5 minutes)** to reduce CPU and memory usage. This controls how often the beat scheduler triggers tasks like recording discovery, meeting reconciliation, and calendar sync.
To change the interval, edit `server/.env`:
```env
# Poll every 60 seconds (more responsive, uses more resources)
CELERY_BEAT_POLL_INTERVAL=60
# Poll every 5 minutes (default for selfhosted)
CELERY_BEAT_POLL_INTERVAL=300
# Use individual per-task defaults (production SaaS behavior)
CELERY_BEAT_POLL_INTERVAL=0
```
After changing, restart the beat and worker containers:
```bash
docker compose -f docker-compose.selfhosted.yml restart beat worker
```
**Affected tasks when `CELERY_BEAT_POLL_INTERVAL` is set:**
| Task | Default (no override) | With override |
|------|-----------------------|---------------|
| SQS message polling | 60s | Override value |
| Daily.co recording discovery | 15s (no webhook) / 180s (webhook) | Override value |
| Meeting reconciliation | 30s | Override value |
| ICS calendar sync | 60s | Override value |
| Upcoming meeting creation | 30s | Override value |
> **Note:** Daily crontab tasks (failed recording reprocessing at 05:00 UTC, public data cleanup at 03:00 UTC) and healthcheck pings (10 min) are **not** affected by this setting.
## Troubleshooting
### Check service status
```bash
docker compose -f docker-compose.selfhosted.yml ps
```
### View logs for a specific service
```bash
docker compose -f docker-compose.selfhosted.yml logs server --tail 50
docker compose -f docker-compose.selfhosted.yml logs gpu --tail 50
docker compose -f docker-compose.selfhosted.yml logs web --tail 50
```
### GPU service taking too long
First start downloads ~1-2GB of ML models. Check progress:
```bash
docker compose -f docker-compose.selfhosted.yml logs gpu -f
```
### Server exits immediately
Usually a database migration issue. Check:
```bash
docker compose -f docker-compose.selfhosted.yml logs server --tail 50
```
### Caddy certificate issues
For self-signed certs, your browser will warn. Click Advanced > Proceed.
For Let's Encrypt, ensure ports 80/443 are open and DNS is pointed correctly.
### Summaries/topics not generating
Check LLM configuration:
```bash
grep LLM_ server/.env
```
If you didn't use `--ollama-gpu` or `--ollama-cpu`, you must set `LLM_URL`, `LLM_API_KEY`, and `LLM_MODEL`.
### Health check from inside containers
```bash
docker compose -f docker-compose.selfhosted.yml exec server curl http://localhost:1250/health
docker compose -f docker-compose.selfhosted.yml exec gpu curl http://localhost:8000/docs
```
## Updating
```bash
# Option A: Pull latest prebuilt images and restart
docker compose -f docker-compose.selfhosted.yml down
./scripts/setup-selfhosted.sh <same-flags-as-before>
# Option B: Build from source (after git pull) and restart
git pull
docker compose -f docker-compose.selfhosted.yml down
./scripts/setup-selfhosted.sh <same-flags-as-before> --build
# Rebuild only the GPU/CPU model image (picks up model updates)
docker compose -f docker-compose.selfhosted.yml build gpu # or cpu
```
The setup script is idempotent — it won't overwrite existing secrets or env vars that are already set.
## Architecture Overview
```
┌─────────┐
Internet ────────>│ Caddy │ :80/:443
└────┬────┘
┌────────────┼────────────┐
│ │ │
v v │
┌─────────┐ ┌─────────┐ │
│ web │ │ server │ │
│ :3000 │ │ :1250 │ │
└─────────┘ └────┬────┘ │
│ │
┌────┴────┐ │
│ worker │ │
│ beat │ │
└────┬────┘ │
│ │
┌──────────────┼────────────┤
│ │ │
v v v
┌───────────┐ ┌─────────┐ ┌─────────┐
│transcription│ │postgres │ │ redis │
│(gpu/cpu) │ │ :5432 │ │ :6379 │
│ :8000 │ └─────────┘ └─────────┘
└───────────┘
┌─────┴─────┐ ┌─────────┐
│ ollama │ │ garage │
│ (optional)│ │(optional│
│ :11435 │ │ S3) │
└───────────┘ └─────────┘
```
All services communicate over Docker's internal network. Only Caddy (if enabled) exposes ports to the internet.


@@ -0,0 +1,39 @@
FROM python:3.12-slim
ENV PYTHONUNBUFFERED=1 \
UV_LINK_MODE=copy \
UV_NO_CACHE=1
WORKDIR /tmp
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
ffmpeg \
curl \
ca-certificates \
gnupg \
wget
ADD https://astral.sh/uv/install.sh /uv-installer.sh
RUN sh /uv-installer.sh && rm /uv-installer.sh
ENV PATH="/root/.local/bin/:$PATH"
RUN mkdir -p /app
WORKDIR /app
COPY pyproject.toml uv.lock /app/
COPY ./app /app/app
COPY ./main.py /app/
COPY ./runserver.sh /app/
# prevent uv failing with too many open files on big cpus
ENV UV_CONCURRENT_INSTALLS=16
# first install
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --compile-bytecode --locked
EXPOSE 8000
CMD ["sh", "/app/runserver.sh"]


@@ -3,14 +3,14 @@ import os
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
def apikey_auth(apikey: str | None = Depends(oauth2_scheme)):
required_key = os.environ.get("REFLECTOR_GPU_APIKEY")
if not required_key:
return
if apikey == required_key:
if apikey and apikey == required_key:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,


@@ -1,10 +1,65 @@
import logging
import os
import tarfile
import threading
from pathlib import Path
from urllib.request import urlopen
import torch
import torchaudio
import yaml
from pyannote.audio import Pipeline
logger = logging.getLogger(__name__)
S3_BUNDLE_URL = "https://reflector-public.s3.us-east-1.amazonaws.com/pyannote-speaker-diarization-3.1.tar.gz"
BUNDLE_CACHE_DIR = Path("/root/.cache/pyannote-bundle")
def _ensure_model(cache_dir: Path) -> str:
"""Download and extract S3 model bundle if not cached."""
model_dir = cache_dir / "pyannote-speaker-diarization-3.1"
config_path = model_dir / "config.yaml"
if config_path.exists():
logger.info("Using cached model bundle at %s", model_dir)
return str(model_dir)
cache_dir.mkdir(parents=True, exist_ok=True)
tarball_path = cache_dir / "model.tar.gz"
logger.info("Downloading model bundle from %s", S3_BUNDLE_URL)
with urlopen(S3_BUNDLE_URL) as response, open(tarball_path, "wb") as f:
while chunk := response.read(8192):
f.write(chunk)
logger.info("Extracting model bundle")
with tarfile.open(tarball_path, "r:gz") as tar:
tar.extractall(path=cache_dir, filter="data")
tarball_path.unlink()
_patch_config(model_dir, cache_dir)
return str(model_dir)
def _patch_config(model_dir: Path, cache_dir: Path) -> None:
"""Rewrite config.yaml to reference local pytorch_model.bin paths."""
config_path = model_dir / "config.yaml"
with open(config_path) as f:
config = yaml.safe_load(f)
config["pipeline"]["params"]["segmentation"] = str(
cache_dir / "pyannote-segmentation-3.0" / "pytorch_model.bin"
)
config["pipeline"]["params"]["embedding"] = str(
cache_dir / "pyannote-wespeaker-voxceleb-resnet34-LM" / "pytorch_model.bin"
)
with open(config_path, "w") as f:
yaml.dump(config, f)
logger.info("Patched config.yaml with local model paths")
class PyannoteDiarizationService:
def __init__(self):
@@ -14,10 +69,20 @@ class PyannoteDiarizationService:
def load(self):
self._device = "cuda" if torch.cuda.is_available() else "cpu"
self._pipeline = Pipeline.from_pretrained(
"pyannote/speaker-diarization-3.1",
use_auth_token=os.environ.get("HF_TOKEN"),
)
hf_token = os.environ.get("HF_TOKEN")
if hf_token:
logger.info("Loading pyannote model from HuggingFace (HF_TOKEN set)")
self._pipeline = Pipeline.from_pretrained(
"pyannote/speaker-diarization-3.1",
use_auth_token=hf_token,
)
else:
logger.info("HF_TOKEN not set — loading model from S3 bundle")
model_path = _ensure_model(BUNDLE_CACHE_DIR)
config_path = Path(model_path) / "config.yaml"
self._pipeline = Pipeline.from_pretrained(str(config_path))
self._pipeline.to(torch.device(self._device))
def diarize_file(self, file_path: str, timestamp: float = 0.0) -> dict:

10
node_modules/.yarn-integrity generated vendored Normal file

@@ -0,0 +1,10 @@
{
"systemParams": "darwin-x64-83",
"modulesFolders": [],
"flags": [],
"linkedModules": [],
"topLevelPatterns": [],
"lockfileEntries": {},
"files": [],
"artifacts": {}
}

14
scripts/garage.toml Normal file

@@ -0,0 +1,14 @@
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
replication_factor = 1
rpc_secret = "__GARAGE_RPC_SECRET__"
rpc_bind_addr = "[::]:3901"
[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "garage"
root_domain = ".s3.garage.localhost"
[admin]
api_bind_addr = "[::]:3903"


@@ -0,0 +1,87 @@
#!/usr/bin/env bash
#
# Install Docker Engine + Compose plugin on Ubuntu.
# Ubuntu's default repos don't include docker-compose-plugin, so we add Docker's official repo.
#
# Usage:
# ./scripts/install-docker-ubuntu.sh
#
# Requires: root or sudo
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# --- Colors ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'
info() { echo -e "${CYAN}==>${NC} $*"; }
ok() { echo -e "${GREEN}✓${NC} $*"; }
warn() { echo -e "${YELLOW} !${NC} $*"; }
err() { echo -e "${RED}✗${NC} $*" >&2; }
# Use sudo if available and not root; otherwise run directly
if [[ $(id -u) -eq 0 ]]; then
MAYBE_SUDO=""
elif command -v sudo &>/dev/null; then
MAYBE_SUDO="sudo "
else
err "Need root. Run as root or install sudo: apt install sudo"
exit 1
fi
# Check Ubuntu
if [[ ! -f /etc/os-release ]]; then
err "Cannot detect OS. This script is for Ubuntu."
exit 1
fi
source /etc/os-release
if [[ "${ID:-}" != "ubuntu" ]] && [[ "${ID_LIKE:-}" != *"ubuntu"* ]]; then
err "This script is for Ubuntu. Detected: ${ID:-unknown}"
exit 1
fi
info "Adding Docker's official repository..."
${MAYBE_SUDO}apt update
${MAYBE_SUDO}apt install -y ca-certificates curl
${MAYBE_SUDO}install -m 0755 -d /etc/apt/keyrings
${MAYBE_SUDO}rm -f /etc/apt/sources.list.d/docker.list /etc/apt/sources.list.d/docker.sources
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | ${MAYBE_SUDO}tee /etc/apt/keyrings/docker.asc > /dev/null
${MAYBE_SUDO}chmod a+r /etc/apt/keyrings/docker.asc
CODENAME="$(. /etc/os-release && echo "${UBUNTU_CODENAME:-${VERSION_CODENAME:-}}")"
[[ -z "$CODENAME" ]] && { err "Could not detect Ubuntu version codename."; exit 1; }
${MAYBE_SUDO}tee /etc/apt/sources.list.d/docker.sources > /dev/null <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: ${CODENAME}
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
info "Installing Docker Engine and Compose plugin..."
${MAYBE_SUDO}apt update
${MAYBE_SUDO}apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
if [[ -d /run/systemd/system ]]; then
info "Enabling and starting Docker..."
${MAYBE_SUDO}systemctl enable --now docker
else
err "No systemd. This script requires Ubuntu with systemd (e.g. DigitalOcean droplet)."
exit 1
fi
DOCKER_USER="${SUDO_USER:-${USER:-root}}"
if [[ "$DOCKER_USER" != "root" ]]; then
info "Adding $DOCKER_USER to docker group..."
${MAYBE_SUDO}usermod -aG docker "$DOCKER_USER"
fi
ok "Docker installed successfully."
echo ""
echo " Log out and back in (or run: newgrp docker) so the group change takes effect."
echo " Then verify with: docker compose version"
echo ""

1004
scripts/setup-selfhosted.sh Executable file

File diff suppressed because it is too large

675
scripts/setup-standalone.sh Executable file

@@ -0,0 +1,675 @@
#!/usr/bin/env bash
#
# Standalone local development setup for Reflector.
# Takes a fresh clone to a working instance — no cloud accounts, no API keys.
#
# Usage:
# ./scripts/setup-standalone.sh
#
# Idempotent — safe to re-run at any time.
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
SERVER_ENV="$ROOT_DIR/server/.env"
WWW_ENV="$ROOT_DIR/www/.env.local"
MODEL="${LLM_MODEL:-qwen2.5:14b}"
OLLAMA_PORT="${OLLAMA_PORT:-11435}"
OS="$(uname -s)"
# --- Colors ---
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'
info() { echo -e "${CYAN}==>${NC} $*"; }
ok() { echo -e "${GREEN}✓${NC} $*"; }
warn() { echo -e "${YELLOW} !${NC} $*"; }
err() { echo -e "${RED}✗${NC} $*" >&2; }
# --- Helpers ---
dump_diagnostics() {
local failed_svc="${1:-}"
echo ""
err "========== DIAGNOSTICS =========="
err "Container status:"
compose_cmd ps -a --format "table {{.Name}}\t{{.Status}}" 2>/dev/null || true
echo ""
# Show logs for any container that exited
local stopped
stopped=$(compose_cmd ps -a --format '{{.Name}}\t{{.Status}}' 2>/dev/null \
| grep -iv 'up\|running' | awk -F'\t' '{print $1}' || true)
for c in $stopped; do
err "--- Logs for $c (exited/unhealthy) ---"
docker logs --tail 30 "$c" 2>&1 || true
echo ""
done
# If a specific service failed, always show its logs
if [[ -n "$failed_svc" ]]; then
err "--- Logs for $failed_svc (last 40) ---"
compose_cmd logs "$failed_svc" --tail 40 2>&1 || true
echo ""
# Try health check from inside the container as extra signal
err "--- Internal health check ($failed_svc) ---"
compose_cmd exec -T "$failed_svc" \
curl -sf http://localhost:1250/health 2>&1 || echo "(not reachable internally either)"
fi
err "================================="
}
trap 'dump_diagnostics' ERR
# Get the image ID for a compose service (works even when containers are not running).
svc_image_id() {
local svc="$1"
# Extract image name from compose config YAML, fall back to <project>-<service>
local img_name
img_name=$(compose_cmd config 2>/dev/null \
| sed -n "/^ ${svc}:/,/^ [a-z]/p" | grep '^\s*image:' | awk '{print $2}')
img_name="${img_name:-reflector-$svc}"
docker images -q "$img_name" 2>/dev/null | head -1
}
# Ensure images with build contexts are up-to-date.
# Docker layer cache makes this fast (~seconds) when source hasn't changed.
rebuild_images() {
local svc
for svc in web cpu; do
local old_id
old_id=$(svc_image_id "$svc")
old_id="${old_id:-<none>}"
info "Building $svc..."
compose_cmd build "$svc"
local new_id
new_id=$(svc_image_id "$svc")
if [[ "$old_id" == "$new_id" ]]; then
ok "$svc unchanged (${new_id:0:12})"
else
ok "$svc rebuilt (${old_id:0:12} -> ${new_id:0:12})"
fi
done
}
detect_lan_ip() {
# Returns the host's LAN IP — used for WebRTC ICE candidate rewriting.
case "$OS" in
Darwin)
# Try common interfaces: en0 (Wi-Fi), en1 (Ethernet)
for iface in en0 en1 en2 en3; do
local ip
ip=$(ipconfig getifaddr "$iface" 2>/dev/null || true)
if [[ -n "$ip" ]]; then
echo "$ip"
return
fi
done
;;
Linux)
ip route get 1.1.1.1 2>/dev/null | sed -n 's/.*src \([^ ]*\).*/\1/p'
return
;;
esac
# Fallback — empty means "not detected"
echo ""
}
wait_for_url() {
local url="$1" label="$2" retries="${3:-30}" interval="${4:-2}"
for i in $(seq 1 "$retries"); do
if curl -sf "$url" > /dev/null 2>&1; then
return 0
fi
echo -ne "\r Waiting for $label... ($i/$retries)"
sleep "$interval"
done
echo ""
err "$label not responding at $url after $retries attempts"
return 1
}
env_has_key() {
local file="$1" key="$2"
grep -q "^${key}=" "$file" 2>/dev/null
}
env_set() {
local file="$1" key="$2" value="$3"
if env_has_key "$file" "$key"; then
# Replace existing value (portable sed)
if [[ "$OS" == "Darwin" ]]; then
sed -i '' "s|^${key}=.*|${key}=${value}|" "$file"
else
sed -i "s|^${key}=.*|${key}=${value}|" "$file"
fi
else
echo "${key}=${value}" >> "$file"
fi
}
resolve_symlink() {
local file="$1"
if [[ -L "$file" ]]; then
warn "$(basename "$file") is a symlink — creating standalone copy"
cp -L "$file" "$file.tmp"
rm "$file"
mv "$file.tmp" "$file"
fi
}
compose_cmd() {
local compose_files="-f $ROOT_DIR/docker-compose.standalone.yml"
if [[ "$OS" == "Linux" ]] && [[ -n "${OLLAMA_PROFILE:-}" ]]; then
docker compose $compose_files --profile "$OLLAMA_PROFILE" "$@"
else
docker compose $compose_files "$@"
fi
}
# =========================================================
# Step 1: LLM / Ollama
# =========================================================
step_llm() {
info "Step 1: LLM setup (Ollama + $MODEL)"
case "$OS" in
Darwin)
if ! command -v ollama &> /dev/null; then
err "Ollama not found. Install it:"
err " brew install ollama"
err " # or https://ollama.com/download"
exit 1
fi
# Start if not running
if ! curl -sf "http://localhost:$OLLAMA_PORT/api/tags" > /dev/null 2>&1; then
info "Starting Ollama..."
ollama serve &
disown
fi
wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
echo ""
# Pull model if not already present
if ollama list 2>/dev/null | awk '{print $1}' | grep -qxF "$MODEL"; then
ok "Model $MODEL already pulled"
else
info "Pulling model $MODEL (this may take a while)..."
ollama pull "$MODEL"
fi
LLM_URL_VALUE="http://host.docker.internal:$OLLAMA_PORT/v1"
;;
Linux)
if command -v nvidia-smi &> /dev/null && nvidia-smi > /dev/null 2>&1; then
ok "NVIDIA GPU detected — using ollama-gpu profile"
OLLAMA_PROFILE="ollama-gpu"
OLLAMA_SVC="ollama"
LLM_URL_VALUE="http://ollama:$OLLAMA_PORT/v1"
else
warn "No NVIDIA GPU — using ollama-cpu profile"
OLLAMA_PROFILE="ollama-cpu"
OLLAMA_SVC="ollama-cpu"
LLM_URL_VALUE="http://ollama-cpu:$OLLAMA_PORT/v1"
fi
info "Starting Ollama container..."
compose_cmd up -d
wait_for_url "http://localhost:$OLLAMA_PORT/api/tags" "Ollama"
echo ""
# Pull model inside container
if compose_cmd exec "$OLLAMA_SVC" ollama list 2>/dev/null | awk '{print $1}' | grep -qxF "$MODEL"; then
ok "Model $MODEL already pulled"
else
info "Pulling model $MODEL inside container (this may take a while)..."
compose_cmd exec "$OLLAMA_SVC" ollama pull "$MODEL"
fi
;;
*)
err "Unsupported OS: $OS"
exit 1
;;
esac
ok "LLM ready ($MODEL via Ollama)"
}
# =========================================================
# Step 2: Generate server/.env
# =========================================================
step_server_env() {
info "Step 2: Generating server/.env"
resolve_symlink "$SERVER_ENV"
if [[ -f "$SERVER_ENV" ]]; then
ok "server/.env already exists — ensuring standalone vars"
else
cat > "$SERVER_ENV" << 'ENVEOF'
# Generated by setup-standalone.sh — standalone local development
# Source of truth for settings: server/reflector/settings.py
ENVEOF
ok "Created server/.env"
fi
# Ensure all standalone-critical vars (appends if missing, replaces if present)
env_set "$SERVER_ENV" "DATABASE_URL" "postgresql+asyncpg://reflector:reflector@postgres:5432/reflector"
env_set "$SERVER_ENV" "REDIS_HOST" "redis"
env_set "$SERVER_ENV" "CELERY_BROKER_URL" "redis://redis:6379/1"
env_set "$SERVER_ENV" "CELERY_RESULT_BACKEND" "redis://redis:6379/1"
env_set "$SERVER_ENV" "AUTH_BACKEND" "none"
env_set "$SERVER_ENV" "PUBLIC_MODE" "true"
# TRANSCRIPT_BACKEND, TRANSCRIPT_URL, DIARIZATION_BACKEND, DIARIZATION_URL
# are set via docker-compose.standalone.yml `environment:` overrides — not written here
# so we don't clobber the user's server/.env for non-standalone use.
env_set "$SERVER_ENV" "TRANSLATION_BACKEND" "passthrough"
env_set "$SERVER_ENV" "LLM_URL" "$LLM_URL_VALUE"
env_set "$SERVER_ENV" "LLM_MODEL" "$MODEL"
env_set "$SERVER_ENV" "LLM_API_KEY" "not-needed"
# WebRTC: detect LAN IP for ICE candidate rewriting (bridge networking)
local lan_ip
lan_ip=$(detect_lan_ip)
if [[ -n "$lan_ip" ]]; then
env_set "$SERVER_ENV" "WEBRTC_HOST" "$lan_ip"
ok "WebRTC host IP: $lan_ip"
else
warn "Could not detect LAN IP — WebRTC recording from other devices may not work"
warn "Set WEBRTC_HOST=<your-lan-ip> in server/.env manually"
fi
ok "Standalone vars set (LLM_URL=$LLM_URL_VALUE)"
}
# =========================================================
# Step 3: Object storage (Garage)
# =========================================================
step_storage() {
info "Step 3: Object storage (Garage)"
# Generate garage.toml from template (fill in RPC secret)
GARAGE_TOML="$ROOT_DIR/scripts/garage.toml"
GARAGE_TOML_RUNTIME="$ROOT_DIR/data/garage.toml"
mkdir -p "$ROOT_DIR/data"
if [[ -d "$GARAGE_TOML_RUNTIME" ]]; then
rm -rf "$GARAGE_TOML_RUNTIME"
fi
if [[ ! -f "$GARAGE_TOML_RUNTIME" ]]; then
RPC_SECRET=$(openssl rand -hex 32)
sed "s|__GARAGE_RPC_SECRET__|${RPC_SECRET}|" "$GARAGE_TOML" > "$GARAGE_TOML_RUNTIME"
fi
compose_cmd up -d garage
# Use /metrics for readiness — /health returns 503 until layout is applied
if ! wait_for_url "http://localhost:3903/metrics" "Garage admin API"; then
echo ""
err "Garage container logs:"
compose_cmd logs garage --tail 30 2>&1 || true
exit 1
fi
echo ""
# Layout: get node ID, assign, apply (skip if already applied)
NODE_ID=$(compose_cmd exec -T garage /garage node id -q 2>/dev/null | tr -d '[:space:]')
LAYOUT_STATUS=$(compose_cmd exec -T garage /garage layout show 2>&1 || true)
if echo "$LAYOUT_STATUS" | grep -q "No nodes"; then
compose_cmd exec -T garage /garage layout assign "$NODE_ID" -c 1G -z dc1
compose_cmd exec -T garage /garage layout apply --version 1
fi
# Create bucket (idempotent — skip if exists)
if ! compose_cmd exec -T garage /garage bucket info reflector-media &>/dev/null; then
compose_cmd exec -T garage /garage bucket create reflector-media
fi
# Create key (idempotent — skip if exists)
CREATED_KEY=false
if compose_cmd exec -T garage /garage key info reflector &>/dev/null; then
ok "Key 'reflector' already exists"
else
KEY_OUTPUT=$(compose_cmd exec -T garage /garage key create reflector)
CREATED_KEY=true
fi
# Grant bucket permissions (idempotent)
compose_cmd exec -T garage /garage bucket allow reflector-media --read --write --key reflector
# Set env vars (only parse key on first create — key info redacts the secret)
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_BACKEND" "aws"
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL" "http://garage:3900"
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_BUCKET_NAME" "reflector-media"
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_REGION" "garage"
if [[ "$CREATED_KEY" == "true" ]]; then
KEY_ID=$(echo "$KEY_OUTPUT" | grep -i "key id" | awk '{print $NF}')
KEY_SECRET=$(echo "$KEY_OUTPUT" | grep -i "secret key" | awk '{print $NF}')
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID" "$KEY_ID"
env_set "$SERVER_ENV" "TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY" "$KEY_SECRET"
fi
ok "Object storage ready (Garage)"
}
# =========================================================
# Step 4: Generate www/.env.local
# =========================================================
step_www_env() {
info "Step 4: Generating www/.env.local"
resolve_symlink "$WWW_ENV"
if [[ -f "$WWW_ENV" ]]; then
ok "www/.env.local already exists — ensuring standalone vars"
else
cat > "$WWW_ENV" << 'ENVEOF'
# Generated by setup-standalone.sh — standalone local development
ENVEOF
ok "Created www/.env.local"
fi
# Caddyfile.standalone.example serves API at /v1, /health — use base URL
if [[ -n "${PRIMARY_IP:-}" ]]; then
BASE_URL="https://$PRIMARY_IP:3043"
else
BASE_URL="https://localhost:3043"
fi
env_set "$WWW_ENV" "SITE_URL" "$BASE_URL"
env_set "$WWW_ENV" "NEXTAUTH_URL" "$BASE_URL"
env_set "$WWW_ENV" "NEXTAUTH_SECRET" "standalone-dev-secret-not-for-production"
env_set "$WWW_ENV" "API_URL" "$BASE_URL"
env_set "$WWW_ENV" "WEBSOCKET_URL" "auto"
env_set "$WWW_ENV" "SERVER_API_URL" "http://server:1250"
env_set "$WWW_ENV" "FEATURE_REQUIRE_LOGIN" "false"
ok "Standalone www vars set"
}
# =========================================================
# Step 5: Start all services
# =========================================================
step_services() {
info "Step 5: Starting Docker services"
# Check for port conflicts — stale processes silently shadow Docker port mappings.
# OrbStack/Docker Desktop bind ports for forwarding; ignore those PIDs.
local ports_ok=true
for port in 3043 3000 1250 5432 6379 3900 3903; do
local pids
pids=$(lsof -ti :"$port" 2>/dev/null || true)
for pid in $pids; do
local pname
pname=$(ps -p "$pid" -o comm= 2>/dev/null || true)
# OrbStack and Docker Desktop own port forwarding — not real conflicts
if [[ "$pname" == *"OrbStack"* ]] || [[ "$pname" == *"com.docker"* ]] || [[ "$pname" == *"vpnkit"* ]]; then
continue
fi
warn "Port $port already in use by PID $pid ($pname)"
warn "Kill it with: lsof -ti :$port | xargs kill"
ports_ok=false
done
done
if [[ "$ports_ok" == "false" ]]; then
warn "Port conflicts detected — Docker containers may not be reachable"
warn "Continuing anyway (services will start but may be shadowed)"
fi
# Rebuild images if source has changed (Docker layer cache makes this fast when unchanged)
rebuild_images
# server runs alembic migrations on startup automatically (see runserver.sh)
compose_cmd up -d postgres redis garage cpu server worker beat web caddy
ok "Containers started"
# Quick sanity check — catch containers that exit immediately (bad image, missing file, etc.)
sleep 3
local exited
exited=$(compose_cmd ps -a --format '{{.Name}} {{.Status}}' 2>/dev/null \
| grep -i 'exit' || true)
if [[ -n "$exited" ]]; then
warn "Some containers exited immediately:"
echo "$exited" | while read -r line; do warn " $line"; done
dump_diagnostics
fi
info "Server is running migrations (alembic upgrade head)..."
}
# =========================================================
# Step 6: Health checks
# =========================================================
step_health() {
info "Step 6: Health checks"
# CPU service may take a while on first start (model download + load).
# No host port exposed — check via docker exec.
info "Waiting for CPU service (first start downloads ~1GB of models)..."
local cpu_ok=false
for i in $(seq 1 120); do
if compose_cmd exec -T cpu curl -sf http://localhost:8000/docs > /dev/null 2>&1; then
cpu_ok=true
break
fi
echo -ne "\r Waiting for CPU service... ($i/120)"
sleep 5
done
echo ""
if [[ "$cpu_ok" == "true" ]]; then
ok "CPU service healthy (transcription + diarization)"
else
warn "CPU service not ready yet — it will keep loading in the background"
warn "Check with: docker compose logs cpu"
fi
# Server may take a long time on first run — alembic migrations run before uvicorn starts.
# Use docker exec so this works regardless of network_mode or port mapping.
info "Waiting for Server API (first run includes database migrations)..."
local server_ok=false
for i in $(seq 1 90); do
# Check if container is still running
local svc_status
svc_status=$(compose_cmd ps server --format '{{.Status}}' 2>/dev/null || true)
if [[ -z "$svc_status" ]] || echo "$svc_status" | grep -qi 'exit'; then
echo ""
err "Server container exited unexpectedly"
dump_diagnostics server
exit 1
fi
# Health check from inside container (avoids host networking issues)
if compose_cmd exec -T server curl -sf http://localhost:1250/health > /dev/null 2>&1; then
server_ok=true
break
fi
echo -ne "\r Waiting for Server API... ($i/90)"
sleep 5
done
echo ""
if [[ "$server_ok" == "true" ]]; then
ok "Server API healthy"
else
err "Server API not ready after ~7 minutes"
dump_diagnostics server
exit 1
fi
wait_for_url "http://localhost:3000" "Frontend" 90 3
echo ""
ok "Frontend responding"
# Caddy reverse proxy (self-signed TLS — curl needs -k)
if curl -sfk "https://localhost:3043" > /dev/null 2>&1; then
ok "Caddy proxy healthy (https://localhost:3043)"
else
warn "Caddy proxy not responding on https://localhost:3043"
warn "Check with: docker compose logs caddy"
fi
# Check LLM reachability from inside a container
if compose_cmd exec -T server \
curl -sf "$LLM_URL_VALUE/models" > /dev/null 2>&1; then
ok "LLM reachable from containers"
else
warn "LLM not reachable from containers at $LLM_URL_VALUE"
warn "Summaries/topics/titles won't work until LLM is accessible"
fi
}
# =========================================================
# Main
# =========================================================
main() {
echo ""
echo "=========================================="
echo " Reflector — Standalone Local Setup"
echo "=========================================="
echo ""
# Ensure we're in the repo root
if [[ ! -f "$ROOT_DIR/docker-compose.yml" ]]; then
err "docker-compose.yml not found in $ROOT_DIR"
err "Run this script from the repo root: ./scripts/setup-standalone.sh"
exit 1
fi
# Docker: Compose plugin, buildx, and daemon. On Ubuntu, auto-install if missing.
docker_ready() {
docker compose version 2>/dev/null | grep -qi compose \
&& docker buildx version &>/dev/null \
&& docker info &>/dev/null
}
if ! docker_ready; then
RAN_INSTALL=false
if [[ "$OS" == "Linux" ]] && [[ -f /etc/os-release ]] && (source /etc/os-release 2>/dev/null; [[ "${ID:-}" == "ubuntu" || "${ID_LIKE:-}" == *"ubuntu"* ]]); then
info "Docker not ready. Running install-docker-ubuntu.sh..."
"$SCRIPT_DIR/install-docker-ubuntu.sh" || true
RAN_INSTALL=true
[[ -d /run/systemd/system ]] && command -v systemctl &>/dev/null && systemctl start docker 2>/dev/null || true
sleep 2
fi
if ! docker_ready; then
# Docker may be installed but current shell lacks docker group (needs newgrp)
if [[ "$RAN_INSTALL" == "true" ]] && [[ $(id -u) -ne 0 ]] && command -v sg &>/dev/null && getent group docker &>/dev/null; then
info "Re-running with docker group..."
exec sg docker -c "$(printf '%q' "$0" && printf ' %q' "$@")"
fi
if [[ "$OS" == "Darwin" ]]; then
err "Docker not ready. Install Docker Desktop or OrbStack."
elif [[ "$OS" == "Linux" ]]; then
err "Docker not ready. Run: ./scripts/install-docker-ubuntu.sh"
err "Then run: newgrp docker (or log out and back in), then run this script again."
else
err "Docker not ready. Install Docker with Compose V2 and buildx."
fi
exit 1
fi
fi
# LLM_URL_VALUE is set by step_llm, used by later steps
LLM_URL_VALUE=""
OLLAMA_PROFILE=""
# docker-compose.yml may reference env_files that don't exist yet;
# touch them so compose_cmd works before the steps that populate them.
touch "$SERVER_ENV" "$WWW_ENV"
# Ensure garage.toml exists before any compose up (step_llm starts all services including garage)
GARAGE_TOML="$ROOT_DIR/scripts/garage.toml"
GARAGE_TOML_RUNTIME="$ROOT_DIR/data/garage.toml"
mkdir -p "$ROOT_DIR/data"
if [[ -d "$GARAGE_TOML_RUNTIME" ]]; then
rm -rf "$GARAGE_TOML_RUNTIME"
fi
if [[ ! -f "$GARAGE_TOML_RUNTIME" ]]; then
RPC_SECRET=$(openssl rand -hex 32)
sed "s|__GARAGE_RPC_SECRET__|${RPC_SECRET}|" "$GARAGE_TOML" > "$GARAGE_TOML_RUNTIME"
fi
# Remove containers that may have bad mounts (was directory); force recreate
compose_cmd rm -f -s garage caddy 2>/dev/null || true
# Detect primary IP for droplet (used for Caddyfile, step_www_env, success message)
PRIMARY_IP=""
if [[ "$OS" == "Linux" ]]; then
PRIMARY_IP=$(hostname -I 2>/dev/null | awk '{print $1}' || true)
if [[ "$PRIMARY_IP" == "127."* ]] || [[ -z "$PRIMARY_IP" ]]; then
PRIMARY_IP=$(ip -4 route get 1 2>/dev/null | sed -n 's/.*src \([0-9.]*\).*/\1/p' || true)
fi
fi
# Ensure Caddyfile exists before any compose up (step_llm starts caddy)
# On droplet: explicit IP + localhost so Caddy provisions cert at startup (avoids on_demand/SNI issues)
CADDYFILE="$ROOT_DIR/Caddyfile"
if [[ -d "$CADDYFILE" ]]; then
rm -rf "$CADDYFILE"
fi
if [[ -n "$PRIMARY_IP" ]]; then
cat > "$CADDYFILE" << CADDYEOF
# Generated by setup-standalone.sh — explicit IP for droplet (provisions cert at startup)
https://$PRIMARY_IP, localhost {
tls internal
handle /v1/* {
reverse_proxy server:1250
}
handle /health {
reverse_proxy server:1250
}
handle {
reverse_proxy web:3000
}
}
CADDYEOF
ok "Created Caddyfile for $PRIMARY_IP and localhost"
elif [[ ! -f "$CADDYFILE" ]]; then
cp "$ROOT_DIR/Caddyfile.standalone.example" "$CADDYFILE"
fi
step_llm
echo ""
step_server_env
echo ""
step_storage
echo ""
step_www_env
echo ""
step_services
echo ""
step_health
echo ""
echo "=========================================="
echo -e " ${GREEN}Reflector is running!${NC}"
echo "=========================================="
echo ""
if [[ -n "$PRIMARY_IP" ]]; then
echo " App: https://$PRIMARY_IP:3043 (accept self-signed cert in browser)"
echo " API: https://$PRIMARY_IP:3043/v1/"
echo " Local: https://localhost:3043"
else
echo " App: https://localhost:3043 (accept self-signed cert in browser)"
echo " API: https://localhost:3043/v1/"
fi
echo ""
echo " To stop: docker compose down"
echo " To re-run: ./scripts/setup-standalone.sh"
echo ""
}
main "$@"

View File

@@ -66,15 +66,22 @@ TRANSLATE_URL=https://monadical-sas--reflector-translator-web.modal.run
## LLM backend (Required)
##
## Responsible for generating titles, summaries, and topic detection
## Requires OpenAI API key
## Supports any OpenAI-compatible endpoint.
## =======================================================
## OpenAI API key - get from https://platform.openai.com/account/api-keys
LLM_API_KEY=sk-your-openai-api-key
LLM_MODEL=gpt-4o-mini
## --- Option A: Local LLM via Ollama (recommended for dev) ---
## Setup: ./scripts/setup-standalone.sh
## Mac: Ollama runs natively (Metal GPU). Containers reach it via host.docker.internal.
## Linux: docker compose --profile ollama-gpu up -d (or ollama-cpu for no GPU)
LLM_URL=http://host.docker.internal:11435/v1
LLM_MODEL=qwen2.5:14b
LLM_API_KEY=not-needed
## Linux with containerized Ollama: LLM_URL=http://ollama:11435/v1
## Optional: Custom endpoint (defaults to OpenAI)
# LLM_URL=https://api.openai.com/v1
## --- Option B: Remote/cloud LLM ---
#LLM_API_KEY=sk-your-openai-api-key
#LLM_MODEL=gpt-4o-mini
## LLM_URL defaults to OpenAI when unset
## Context size for summary generation (tokens)
LLM_CONTEXT_WINDOW=16000
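
As a quick way to confirm the Option A endpoint answers, here is a minimal sketch using the openai Python client (an assumption for illustration — Reflector's own calls go through reflector.llm). Run it on the host, where Ollama listens on localhost:11435; the model and key mirror the defaults above.

```python
# Sanity check that an OpenAI-compatible endpoint (here: local Ollama) responds.
# Adjust base_url/model to match your server/.env.
from openai import OpenAI  # pip install openai

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="qwen2.5:14b",
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(resp.choices[0].message.content)
```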

View File

@@ -0,0 +1,115 @@
# =======================================================
# Reflector Self-Hosted Production — Backend Configuration
# Generated by: ./scripts/setup-selfhosted.sh
# Reference: server/reflector/settings.py
# =======================================================
# =======================================================
# Database & Infrastructure
# Pre-filled for Docker internal networking (docker-compose.selfhosted.yml)
# =======================================================
DATABASE_URL=postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST=redis
REDIS_PORT=6379
CELERY_BROKER_URL=redis://redis:6379/1
CELERY_RESULT_BACKEND=redis://redis:6379/1
# Secret key — auto-generated by setup script
# Generate manually with: openssl rand -hex 32
SECRET_KEY=changeme-generate-a-secure-random-string
# =======================================================
# Authentication
# Disabled by default. Enable Authentik for multi-user access.
# See docsv2/selfhosted-production.md for setup instructions.
# =======================================================
AUTH_BACKEND=none
# AUTH_BACKEND=jwt
# AUTH_JWT_AUDIENCE=
# AUTH_BACKEND=password
# ADMIN_EMAIL=admin@localhost
# ADMIN_PASSWORD_HASH=pbkdf2:sha256:100000$<salt>$<hash>
# =======================================================
# Specialized Models (Transcription, Diarization, Translation)
# These run in the gpu/cpu container — NOT an LLM.
# The "modal" backend means "HTTP API client" — it talks to
# the self-hosted container, not Modal.com cloud.
# =======================================================
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=http://transcription:8000
TRANSCRIPT_MODAL_API_KEY=selfhosted
DIARIZATION_ENABLED=true
DIARIZATION_BACKEND=modal
DIARIZATION_URL=http://transcription:8000
TRANSLATION_BACKEND=modal
TRANSLATE_URL=http://transcription:8000
# HuggingFace token — optional, for gated models (e.g. pyannote).
# Falls back to public S3 model bundle if not set.
# HF_TOKEN=hf_xxxxx
# =======================================================
# LLM for Summarization & Topic Detection
# Only summaries and topics use an LLM. Everything else
# (transcription, diarization, translation) uses specialized models above.
#
# Supports any OpenAI-compatible endpoint.
# Auto-configured by setup script if using --ollama-gpu or --ollama-cpu.
# For --gpu or --cpu modes, you MUST configure an external LLM.
# =======================================================
# --- Option A: External OpenAI-compatible API ---
# LLM_URL=https://api.openai.com/v1
# LLM_API_KEY=sk-your-api-key
# LLM_MODEL=gpt-4o-mini
# --- Option B: Local Ollama (auto-set by --ollama-gpu/--ollama-cpu) ---
# LLM_URL=http://ollama:11435/v1
# LLM_API_KEY=not-needed
# LLM_MODEL=llama3.1
LLM_CONTEXT_WINDOW=16000
# =======================================================
# S3 Storage (REQUIRED)
# Where to store audio files and transcripts.
#
# Option A: Use --garage flag (auto-configured by setup script)
# Option B: Any S3-compatible endpoint (AWS, MinIO, etc.)
# Set TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL for non-AWS endpoints.
# =======================================================
TRANSCRIPT_STORAGE_BACKEND=aws
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=
TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-media
TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
# For non-AWS S3-compatible endpoints (Garage, MinIO, etc.):
# TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL=http://garage:3900
# =======================================================
# Daily.co Live Rooms (Optional)
# Enable real-time meeting rooms with Daily.co integration.
# Requires a Daily.co account: https://www.daily.co/
# =======================================================
# DEFAULT_VIDEO_PLATFORM=daily
# DAILY_API_KEY=your-daily-api-key
# DAILY_SUBDOMAIN=your-subdomain
# DAILY_WEBHOOK_SECRET=your-daily-webhook-secret
# DAILYCO_STORAGE_AWS_BUCKET_NAME=reflector-dailyco
# DAILYCO_STORAGE_AWS_REGION=us-east-1
# DAILYCO_STORAGE_AWS_ROLE_ARN=arn:aws:iam::role/DailyCoAccess
# =======================================================
# Feature Flags
# =======================================================
PUBLIC_MODE=true
# FEATURE_ROOMS=true
# =======================================================
# Sentry (Optional)
# =======================================================
# SENTRY_DSN=
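
For the password backend above, ADMIN_PASSWORD_HASH expects the pbkdf2:sha256:<iterations>$<salt>$<hash> format. A minimal sketch for generating it, assuming you run it in the server environment where the password_utils helper introduced later in this diff is importable:

```python
# Print a value suitable for ADMIN_PASSWORD_HASH in this file.
from reflector.auth.password_utils import hash_password

print(hash_password("choose-a-strong-admin-password"))
```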

View File

@@ -0,0 +1,74 @@
"""add_change_seq_to_transcript
Revision ID: 623af934249a
Revises: 3aa20b96d963
Create Date: 2026-02-19 18:53:12.315440
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "623af934249a"
down_revision: Union[str, None] = "3aa20b96d963"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Sequence
op.execute("CREATE SEQUENCE IF NOT EXISTS transcript_change_seq;")
# Column (nullable first for backfill)
op.add_column("transcript", sa.Column("change_seq", sa.BigInteger(), nullable=True))
# Backfill existing rows with sequential values (ordered by created_at for determinism)
op.execute("""
UPDATE transcript SET change_seq = sub.seq FROM (
SELECT id, nextval('transcript_change_seq') AS seq
FROM transcript ORDER BY created_at ASC
) sub WHERE transcript.id = sub.id;
""")
# Now make NOT NULL
op.alter_column("transcript", "change_seq", nullable=False)
# Default for any inserts between now and trigger creation
op.alter_column(
"transcript",
"change_seq",
server_default=sa.text("nextval('transcript_change_seq')"),
)
# Trigger function
op.execute("""
CREATE OR REPLACE FUNCTION set_transcript_change_seq()
RETURNS TRIGGER AS $$
BEGIN
NEW.change_seq := nextval('transcript_change_seq');
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
""")
# Trigger (fires on every INSERT or UPDATE)
op.execute("""
CREATE TRIGGER trigger_transcript_change_seq
BEFORE INSERT OR UPDATE ON transcript
FOR EACH ROW
EXECUTE FUNCTION set_transcript_change_seq();
""")
# Index for efficient polling
op.create_index("idx_transcript_change_seq", "transcript", ["change_seq"])
def downgrade() -> None:
op.execute("DROP TRIGGER IF EXISTS trigger_transcript_change_seq ON transcript;")
op.execute("DROP FUNCTION IF EXISTS set_transcript_change_seq();")
op.drop_index("idx_transcript_change_seq", table_name="transcript")
op.drop_column("transcript", "change_seq")
op.execute("DROP SEQUENCE IF EXISTS transcript_change_seq;")

View File

@@ -0,0 +1,25 @@
"""add password_hash to user table
Revision ID: e1f093f7f124
Revises: 623af934249a
Create Date: 2026-02-19 00:00:00.000000
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
revision: str = "e1f093f7f124"
down_revision: Union[str, None] = "623af934249a"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column("user", sa.Column("password_hash", sa.String(), nullable=True))
def downgrade() -> None:
op.drop_column("user", "password_hash")

View File

@@ -68,7 +68,6 @@ evaluation = [
"pydantic>=2.1.1",
]
local = [
"pyannote-audio>=3.3.2",
"faster-whisper>=0.10.0",
]
silero-vad = [

View File

@@ -8,6 +8,7 @@ from prometheus_fastapi_instrumentator import Instrumentator
import reflector.auth # noqa
import reflector.db # noqa
from reflector.auth import router as auth_router
from reflector.events import subscribers_shutdown, subscribers_startup
from reflector.logger import logger
from reflector.metrics import metrics_init
@@ -37,6 +38,13 @@ try:
except ImportError:
sentry_sdk = None
# Patch aioice port range if configured (must happen before any RTCPeerConnection)
if settings.WEBRTC_PORT_RANGE:
from reflector.webrtc_ports import parse_port_range, patch_aioice_port_range
_min, _max = parse_port_range(settings.WEBRTC_PORT_RANGE)
patch_aioice_port_range(_min, _max)
# lifespan events
@asynccontextmanager
@@ -59,7 +67,7 @@ else:
logger.info("Sentry disabled")
# build app
app = FastAPI(lifespan=lifespan)
app = FastAPI(lifespan=lifespan, root_path=settings.ROOT_PATH)
app.add_middleware(
CORSMiddleware,
allow_credentials=settings.CORS_ALLOW_CREDENTIALS or False,
@@ -98,6 +106,8 @@ app.include_router(user_ws_router, prefix="/v1")
app.include_router(zulip_router, prefix="/v1")
app.include_router(whereby_router, prefix="/v1")
app.include_router(daily_router, prefix="/v1/daily")
if auth_router:
app.include_router(auth_router, prefix="/v1")
add_pagination(app)
# prepare celery

View File

@@ -4,8 +4,9 @@ from uuid import uuid4
from celery import current_task
from reflector.db import get_database
from reflector.db import _database_context, get_database
from reflector.llm import llm_session_id
from reflector.ws_manager import reset_ws_manager
def asynctask(f):
@@ -20,8 +21,18 @@ def asynctask(f):
return await f(*args, **kwargs)
finally:
await database.disconnect()
_database_context.set(None)
if current_task:
# Reset cached connections before each Celery task.
# Each asyncio.run() creates a new event loop, making connections
# from previous tasks stale ("Future attached to a different loop").
_database_context.set(None)
reset_ws_manager()
coro = run_with_db()
if current_task:
return asyncio.run(coro)
try:
loop = asyncio.get_running_loop()
except RuntimeError:

View File

@@ -12,3 +12,8 @@ AccessTokenInfo = auth_module.AccessTokenInfo
authenticated = auth_module.authenticated
current_user = auth_module.current_user
current_user_optional = auth_module.current_user_optional
parse_ws_bearer_token = auth_module.parse_ws_bearer_token
current_user_ws_optional = auth_module.current_user_ws_optional
# Optional router (e.g. for /auth/login in password backend)
router = getattr(auth_module, "router", None)

View File

@@ -1,6 +1,9 @@
from typing import Annotated, List, Optional
from typing import TYPE_CHECKING, Annotated, List, Optional
from fastapi import Depends, HTTPException
if TYPE_CHECKING:
from fastapi import WebSocket
from fastapi.security import APIKeyHeader, OAuth2PasswordBearer
from jose import JWTError, jwt
from pydantic import BaseModel
@@ -124,3 +127,20 @@ async def current_user_optional(
jwtauth: JWTAuth = Depends(),
):
return await _authenticate_user(jwt_token, api_key, jwtauth)
def parse_ws_bearer_token(
websocket: "WebSocket",
) -> tuple[Optional[str], Optional[str]]:
raw = websocket.headers.get("sec-websocket-protocol") or ""
parts = [p.strip() for p in raw.split(",") if p.strip()]
if len(parts) >= 2 and parts[0].lower() == "bearer":
return parts[1], "bearer"
return None, None
async def current_user_ws_optional(websocket: "WebSocket") -> Optional[UserInfo]:
token, _ = parse_ws_bearer_token(websocket)
if not token:
return None
return await _authenticate_user(token, None, JWTAuth())
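
parse_ws_bearer_token expects the client to pass the JWT through the Sec-WebSocket-Protocol header as "bearer, <token>". A client-side sketch with the Python websockets library; the endpoint path is illustrative, and the handshake assumes the server or proxy tolerates not echoing a subprotocol back:

```python
# Offer the JWT as the second WebSocket subprotocol, matching the parser above.
import asyncio
import websockets  # pip install websockets

async def listen(url: str, token: str) -> None:
    async with websockets.connect(url, subprotocols=["bearer", token]) as ws:
        async for message in ws:
            print(message)

# asyncio.run(listen("wss://localhost:3043/v1/transcripts/<id>/events", "<jwt>"))
```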

View File

@@ -1,11 +1,5 @@
from typing import Annotated
from fastapi import Depends
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token", auto_error=False)
class UserInfo(BaseModel):
sub: str
@@ -15,13 +9,21 @@ class AccessTokenInfo(BaseModel):
pass
def authenticated(token: Annotated[str, Depends(oauth2_scheme)]):
def authenticated():
return None
def current_user(token: Annotated[str, Depends(oauth2_scheme)]):
def current_user():
return None
def current_user_optional(token: Annotated[str, Depends(oauth2_scheme)]):
def current_user_optional():
return None
def parse_ws_bearer_token(websocket):
return None, None
async def current_user_ws_optional(websocket):
return None

View File

@@ -0,0 +1,198 @@
"""Password-based authentication backend for selfhosted deployments.
Issues HS256 JWTs signed with settings.SECRET_KEY. Provides a POST /auth/login
endpoint for email/password authentication.
"""
import time
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, Annotated, Optional
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.security import APIKeyHeader, OAuth2PasswordBearer
from jose import JWTError, jwt
from pydantic import BaseModel
from reflector.auth.password_utils import verify_password
from reflector.db.user_api_keys import user_api_keys_controller
from reflector.db.users import user_controller
from reflector.logger import logger
from reflector.settings import settings
if TYPE_CHECKING:
from fastapi import WebSocket
# --- FastAPI security schemes (same pattern as auth_jwt.py) ---
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/v1/auth/login", auto_error=False)
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
# --- JWT configuration ---
JWT_ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 60 * 24 # 24 hours
# --- Rate limiting (in-memory) ---
_login_attempts: dict[str, list[float]] = defaultdict(list)
RATE_LIMIT_WINDOW = 300 # 5 minutes
RATE_LIMIT_MAX = 10 # max attempts per window
def _check_rate_limit(key: str) -> bool:
"""Return True if request is allowed, False if rate-limited."""
now = time.monotonic()
attempts = _login_attempts[key]
_login_attempts[key] = [t for t in attempts if now - t < RATE_LIMIT_WINDOW]
if len(_login_attempts[key]) >= RATE_LIMIT_MAX:
return False
_login_attempts[key].append(now)
return True
# --- Pydantic models ---
class UserInfo(BaseModel):
sub: str
email: Optional[str] = None
def __getitem__(self, key):
return getattr(self, key)
class AccessTokenInfo(BaseModel):
exp: Optional[int] = None
sub: Optional[str] = None
class LoginRequest(BaseModel):
email: str
password: str
class LoginResponse(BaseModel):
access_token: str
token_type: str = "bearer"
expires_in: int
# --- JWT token creation and verification ---
def _create_access_token(user_id: str, email: str) -> tuple[str, int]:
"""Create an HS256 JWT. Returns (token, expires_in_seconds)."""
expires_delta = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
expire = datetime.now(timezone.utc) + expires_delta
payload = {
"sub": user_id,
"email": email,
"exp": expire,
}
token = jwt.encode(payload, settings.SECRET_KEY, algorithm=JWT_ALGORITHM)
return token, int(expires_delta.total_seconds())
def _verify_token(token: str) -> dict:
"""Verify and decode an HS256 JWT."""
return jwt.decode(token, settings.SECRET_KEY, algorithms=[JWT_ALGORITHM])
# --- Authentication logic (mirrors auth_jwt._authenticate_user) ---
async def _authenticate_user(
jwt_token: Optional[str],
api_key: Optional[str],
) -> UserInfo | None:
user_infos: list[UserInfo] = []
if api_key:
user_api_key = await user_api_keys_controller.verify_key(api_key)
if user_api_key:
user_infos.append(UserInfo(sub=user_api_key.user_id, email=None))
if jwt_token:
try:
payload = _verify_token(jwt_token)
user_id = payload["sub"]
email = payload.get("email")
user_infos.append(UserInfo(sub=user_id, email=email))
except JWTError as e:
logger.error(f"JWT error: {e}")
raise HTTPException(status_code=401, detail="Invalid authentication")
if len(user_infos) == 0:
return None
if len(set(x.sub for x in user_infos)) > 1:
raise HTTPException(
status_code=401,
detail="Invalid authentication: more than one user provided",
)
return user_infos[0]
# --- FastAPI dependencies (exported, required by auth/__init__.py) ---
def authenticated(token: Annotated[str, Depends(oauth2_scheme)]):
if token is None:
raise HTTPException(status_code=401, detail="Not authenticated")
return None
async def current_user(
jwt_token: Annotated[Optional[str], Depends(oauth2_scheme)],
api_key: Annotated[Optional[str], Depends(api_key_header)],
):
user = await _authenticate_user(jwt_token, api_key)
if user is None:
raise HTTPException(status_code=401, detail="Not authenticated")
return user
async def current_user_optional(
jwt_token: Annotated[Optional[str], Depends(oauth2_scheme)],
api_key: Annotated[Optional[str], Depends(api_key_header)],
):
return await _authenticate_user(jwt_token, api_key)
# --- WebSocket auth (same pattern as auth_jwt.py) ---
def parse_ws_bearer_token(
websocket: "WebSocket",
) -> tuple[Optional[str], Optional[str]]:
raw = websocket.headers.get("sec-websocket-protocol") or ""
parts = [p.strip() for p in raw.split(",") if p.strip()]
if len(parts) >= 2 and parts[0].lower() == "bearer":
return parts[1], "bearer"
return None, None
async def current_user_ws_optional(websocket: "WebSocket") -> Optional[UserInfo]:
token, _ = parse_ws_bearer_token(websocket)
if not token:
return None
return await _authenticate_user(token, None)
# --- Login router ---
router = APIRouter(prefix="/auth", tags=["auth"])
@router.post("/login", response_model=LoginResponse)
async def login(request: Request, body: LoginRequest):
client_ip = request.client.host if request.client else "unknown"
if not _check_rate_limit(client_ip):
raise HTTPException(
status_code=429,
detail="Too many login attempts. Try again later.",
)
user = await user_controller.get_by_email(body.email)
if not user or not user.password_hash:
print("invalid email")
raise HTTPException(status_code=401, detail="Invalid email or password")
if not verify_password(body.password, user.password_hash):
print("invalid pass")
raise HTTPException(status_code=401, detail="Invalid email or password")
access_token, expires_in = _create_access_token(user.id, user.email)
return LoginResponse(
access_token=access_token,
token_type="bearer",
expires_in=expires_in,
)
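
A hedged client flow against this backend: POST /v1/auth/login, then reuse the bearer token. The base URL matches the standalone/selfhosted Caddy proxy (self-signed TLS, hence verify=False); the authenticated GET path afterwards is illustrative.

```python
import httpx

base = "https://localhost:3043"
login = httpx.post(
    f"{base}/v1/auth/login",
    json={"email": "admin@localhost", "password": "choose-a-strong-admin-password"},
    verify=False,
)
login.raise_for_status()
token = login.json()["access_token"]

# Any authenticated request then carries the bearer token (path below is hypothetical).
resp = httpx.get(f"{base}/v1/transcripts", headers={"Authorization": f"Bearer {token}"}, verify=False)
print(resp.status_code)
```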

View File

@@ -0,0 +1,41 @@
"""Password hashing utilities using PBKDF2-SHA256 (stdlib only)."""
import hashlib
import hmac
import os
PBKDF2_ITERATIONS = 100_000
SALT_LENGTH = 16 # bytes, hex-encoded to 32 chars
def hash_password(password: str) -> str:
"""Hash a password using PBKDF2-SHA256 with a random salt.
Format: pbkdf2:sha256:<iterations>$<salt_hex>$<hash_hex>
"""
salt = os.urandom(SALT_LENGTH).hex()
dk = hashlib.pbkdf2_hmac(
"sha256",
password.encode("utf-8"),
salt.encode("utf-8"),
PBKDF2_ITERATIONS,
)
return f"pbkdf2:sha256:{PBKDF2_ITERATIONS}${salt}${dk.hex()}"
def verify_password(password: str, password_hash: str) -> bool:
"""Verify a password against its hash using constant-time comparison."""
try:
header, salt, stored_hash = password_hash.split("$", 2)
_, algo, iterations_str = header.split(":")
iterations = int(iterations_str)
dk = hashlib.pbkdf2_hmac(
algo,
password.encode("utf-8"),
salt.encode("utf-8"),
iterations,
)
return hmac.compare_digest(dk.hex(), stored_hash)
except (ValueError, AttributeError):
return False
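
A quick round-trip check of the two helpers above, useful when verifying an ADMIN_PASSWORD_HASH value by hand:

```python
from reflector.auth.password_utils import hash_password, verify_password

h = hash_password("s3cret")
assert h.startswith("pbkdf2:sha256:100000$")
assert verify_password("s3cret", h)
assert not verify_password("wrong", h)
```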

View File

@@ -146,6 +146,8 @@ class DailyApiClient:
)
raise DailyApiError(operation, response)
if not response.content:
return {}
return response.json()
# ============================================================================

View File

@@ -99,7 +99,7 @@ def extract_room_name(event: DailyWebhookEvent) -> str | None:
>>> event = DailyWebhookEvent(**webhook_payload)
>>> room_name = extract_room_name(event)
"""
room = event.payload.get("room_name")
room = event.payload.get("room_name") or event.payload.get("room")
# Ensure we return a string, not any falsy value that might be in payload
return room if isinstance(room, str) else None

View File

@@ -6,7 +6,7 @@ Reference: https://docs.daily.co/reference/rest-api/webhooks
from typing import Annotated, Any, Dict, Literal, Union
from pydantic import BaseModel, Field, field_validator
from pydantic import AliasChoices, BaseModel, ConfigDict, Field, field_validator
from reflector.utils.string import NonEmptyString
@@ -41,6 +41,8 @@ class DailyTrack(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/recordings
"""
model_config = ConfigDict(extra="ignore")
type: Literal["audio", "video"]
s3Key: NonEmptyString = Field(description="S3 object key for the track file")
size: int = Field(description="File size in bytes")
@@ -54,6 +56,8 @@ class DailyWebhookEvent(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks
"""
model_config = ConfigDict(extra="ignore")
version: NonEmptyString = Field(
description="Represents the version of the event. This uses semantic versioning to inform a consumer if the payload has introduced any breaking changes"
)
@@ -82,7 +86,13 @@ class ParticipantJoinedPayload(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-joined
"""
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
model_config = ConfigDict(extra="ignore")
room_name: NonEmptyString | None = Field(
None,
description="Daily.co room name",
validation_alias=AliasChoices("room_name", "room"),
)
session_id: NonEmptyString = Field(description="Daily.co session identifier")
user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
user_name: NonEmptyString | None = Field(None, description="User display name")
@@ -100,7 +110,13 @@ class ParticipantLeftPayload(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/participant-left
"""
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
model_config = ConfigDict(extra="ignore")
room_name: NonEmptyString | None = Field(
None,
description="Daily.co room name",
validation_alias=AliasChoices("room_name", "room"),
)
session_id: NonEmptyString = Field(description="Daily.co session identifier")
user_id: NonEmptyString = Field(description="User identifier (may be encoded)")
user_name: NonEmptyString | None = Field(None, description="User display name")
@@ -112,6 +128,9 @@ class ParticipantLeftPayload(BaseModel):
_normalize_joined_at = field_validator("joined_at", mode="before")(
normalize_timestamp_to_int
)
_normalize_duration = field_validator("duration", mode="before")(
normalize_timestamp_to_int
)
class RecordingStartedPayload(BaseModel):
@@ -121,6 +140,8 @@ class RecordingStartedPayload(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-started
"""
model_config = ConfigDict(extra="ignore")
room_name: NonEmptyString | None = Field(None, description="Daily.co room name")
recording_id: NonEmptyString = Field(description="Recording identifier")
start_ts: int | None = Field(None, description="Recording start timestamp")
@@ -138,7 +159,9 @@ class RecordingReadyToDownloadPayload(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-ready-to-download
"""
type: Literal["cloud", "raw-tracks"] = Field(
model_config = ConfigDict(extra="ignore")
type: Literal["cloud", "cloud-audio-only", "raw-tracks"] = Field(
description="The type of recording that was generated"
)
recording_id: NonEmptyString = Field(
@@ -153,8 +176,9 @@ class RecordingReadyToDownloadPayload(BaseModel):
status: Literal["finished"] = Field(
description="The status of the given recording (always 'finished' in ready-to-download webhook, see RecordingStatus in responses.py for full API statuses)"
)
max_participants: int = Field(
description="The number of participants on the call that were recorded"
max_participants: int | None = Field(
None,
description="The number of participants on the call that were recorded (optional; Daily may omit it in some webhook versions)",
)
duration: int = Field(description="The duration in seconds of the call")
s3_key: NonEmptyString = Field(
@@ -180,6 +204,8 @@ class RecordingErrorPayload(BaseModel):
Reference: https://docs.daily.co/reference/rest-api/webhooks/events/recording-error
"""
model_config = ConfigDict(extra="ignore")
action: Literal["clourd-recording-err", "cloud-recording-error"] = Field(
description="A string describing the event that was emitted (both variants are documented)"
)
@@ -200,6 +226,8 @@ class RecordingErrorPayload(BaseModel):
class ParticipantJoinedEvent(BaseModel):
model_config = ConfigDict(extra="ignore")
version: NonEmptyString
type: Literal["participant.joined"]
id: NonEmptyString
@@ -212,6 +240,8 @@ class ParticipantJoinedEvent(BaseModel):
class ParticipantLeftEvent(BaseModel):
model_config = ConfigDict(extra="ignore")
version: NonEmptyString
type: Literal["participant.left"]
id: NonEmptyString
@@ -224,6 +254,8 @@ class ParticipantLeftEvent(BaseModel):
class RecordingStartedEvent(BaseModel):
model_config = ConfigDict(extra="ignore")
version: NonEmptyString
type: Literal["recording.started"]
id: NonEmptyString
@@ -236,6 +268,8 @@ class RecordingStartedEvent(BaseModel):
class RecordingReadyEvent(BaseModel):
model_config = ConfigDict(extra="ignore")
version: NonEmptyString
type: Literal["recording.ready-to-download"]
id: NonEmptyString
@@ -248,6 +282,8 @@ class RecordingReadyEvent(BaseModel):
class RecordingErrorEvent(BaseModel):
model_config = ConfigDict(extra="ignore")
version: NonEmptyString
type: Literal["recording.error"]
id: NonEmptyString

View File

@@ -1,7 +1,6 @@
"""Search functionality for transcripts and other entities."""
import itertools
import json
from dataclasses import dataclass
from datetime import datetime
from io import StringIO
@@ -27,6 +26,7 @@ from reflector.db.rooms import rooms
from reflector.db.transcripts import SourceKind, TranscriptStatus, transcripts
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.settings import settings
from reflector.utils.string import NonEmptyString, try_parse_non_empty_string
DEFAULT_SEARCH_LIMIT = 20
@@ -151,6 +151,7 @@ class SearchResultDB(BaseModel):
title: str | None = None
source_kind: SourceKind
room_id: str | None = None
change_seq: int | None = None
rank: float = Field(..., ge=0, le=1)
@@ -173,9 +174,7 @@ class SearchResult(BaseModel):
total_match_count: NonNegativeInt = Field(
default=0, description="Total number of matches found in the transcript"
)
dag_status: list[dict] | None = Field(
default=None, description="Latest DAG task status for processing transcripts"
)
change_seq: int | None = None
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
@@ -332,42 +331,6 @@ class SnippetGenerator:
return summary_snippets + webvtt_snippets, total_matches
async def _fetch_dag_statuses(transcript_ids: list[str]) -> dict[str, list[dict]]:
"""Fetch latest DAG_STATUS event data for given transcript IDs.
Returns dict mapping transcript_id -> tasks list from the last DAG_STATUS event.
"""
if not transcript_ids:
return {}
db = get_database()
query = sqlalchemy.select(
[
transcripts.c.id,
transcripts.c.events,
]
).where(transcripts.c.id.in_(transcript_ids))
rows = await db.fetch_all(query)
result: dict[str, list[dict]] = {}
for row in rows:
events_raw = row["events"]
if not events_raw:
continue
# events is stored as JSON list
events = events_raw if isinstance(events_raw, list) else json.loads(events_raw)
# Find last DAG_STATUS event
for ev in reversed(events):
if isinstance(ev, dict) and ev.get("event") == "DAG_STATUS":
tasks = ev.get("data", {}).get("tasks")
if tasks:
result[row["id"]] = tasks
break
return result
class SearchController:
"""Controller for search operations across different entities."""
@@ -395,6 +358,7 @@ class SearchController:
transcripts.c.user_id,
transcripts.c.room_id,
transcripts.c.source_kind,
transcripts.c.change_seq,
transcripts.c.webvtt,
transcripts.c.long_summary,
sqlalchemy.case(
@@ -436,7 +400,7 @@ class SearchController:
transcripts.c.user_id == params.user_id, rooms.c.is_shared
)
)
else:
elif not settings.PUBLIC_MODE:
base_query = base_query.where(rooms.c.is_shared)
if params.room_id:
base_query = base_query.where(transcripts.c.room_id == params.room_id)
@@ -510,14 +474,6 @@ class SearchController:
logger.error(f"Error processing search results: {e}", exc_info=True)
raise
# Enrich processing transcripts with DAG status
processing_ids = [r.id for r in results if r.status == "processing"]
if processing_ids:
dag_statuses = await _fetch_dag_statuses(processing_ids)
for r in results:
if r.id in dag_statuses:
r.dag_status = dag_statuses[r.id]
return results, total

View File

@@ -5,7 +5,10 @@ import shutil
from contextlib import asynccontextmanager
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Literal, Sequence
from typing import TYPE_CHECKING, Any, Literal, Sequence
if TYPE_CHECKING:
from reflector.ws_events import TranscriptEventName
import sqlalchemy
from fastapi import HTTPException
@@ -32,6 +35,8 @@ class SourceKind(enum.StrEnum):
FILE = enum.auto()
transcript_change_seq = sqlalchemy.Sequence("transcript_change_seq", metadata=metadata)
transcripts = sqlalchemy.Table(
"transcript",
metadata,
@@ -86,6 +91,12 @@ transcripts = sqlalchemy.Table(
sqlalchemy.Column("webvtt", sqlalchemy.Text),
# Hatchet workflow run ID for resumption of failed workflows
sqlalchemy.Column("workflow_run_id", sqlalchemy.String),
sqlalchemy.Column(
"change_seq",
sqlalchemy.BigInteger,
transcript_change_seq,
server_default=transcript_change_seq.next_value(),
),
sqlalchemy.Index("idx_transcript_recording_id", "recording_id"),
sqlalchemy.Index("idx_transcript_user_id", "user_id"),
sqlalchemy.Index("idx_transcript_created_at", "created_at"),
@@ -184,7 +195,7 @@ class TranscriptWaveform(BaseModel):
class TranscriptEvent(BaseModel):
event: str
event: str # Typed at call sites via ws_events.TranscriptEventName; str here for DB compat
data: dict
@@ -226,6 +237,7 @@ class Transcript(BaseModel):
audio_deleted: bool | None = None
webvtt: str | None = None
workflow_run_id: str | None = None # Hatchet workflow run ID for resumption
change_seq: int | None = None
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
@@ -233,8 +245,10 @@ class Transcript(BaseModel):
dt = dt.replace(tzinfo=timezone.utc)
return dt.isoformat()
def add_event(self, event: str, data: BaseModel) -> TranscriptEvent:
ev = TranscriptEvent(event=event, data=data.model_dump(mode="json"))
def add_event(
self, event: "TranscriptEventName", data: BaseModel
) -> TranscriptEvent:
ev = TranscriptEvent(event=event, data=data.model_dump())
self.events.append(ev)
return ev
@@ -376,6 +390,7 @@ class TranscriptController:
source_kind: SourceKind | None = None,
room_id: str | None = None,
search_term: str | None = None,
change_seq_from: int | None = None,
return_query: bool = False,
exclude_columns: list[str] = [
"topics",
@@ -396,6 +411,7 @@ class TranscriptController:
- `filter_recording`: filter out transcripts that are currently recording
- `room_id`: filter transcripts by room ID
- `search_term`: filter transcripts by search term
- `change_seq_from`: filter transcripts with change_seq > this value
"""
query = transcripts.select().join(
@@ -406,7 +422,7 @@ class TranscriptController:
query = query.where(
or_(transcripts.c.user_id == user_id, rooms.c.is_shared)
)
else:
elif not settings.PUBLIC_MODE:
query = query.where(rooms.c.is_shared)
if source_kind:
@@ -418,6 +434,9 @@ class TranscriptController:
if search_term:
query = query.where(transcripts.c.title.ilike(f"%{search_term}%"))
if change_seq_from is not None:
query = query.where(transcripts.c.change_seq > change_seq_from)
# Exclude heavy JSON columns from list queries
transcript_columns = [
col for col in transcripts.c if col.name not in exclude_columns
@@ -431,9 +450,10 @@ class TranscriptController:
)
if order_by is not None:
field = getattr(transcripts.c, order_by[1:])
if order_by.startswith("-"):
field = field.desc()
field = getattr(transcripts.c, order_by[1:]).desc()
else:
field = getattr(transcripts.c, order_by)
query = query.order_by(field)
if filter_empty:
@@ -688,7 +708,7 @@ class TranscriptController:
async def append_event(
self,
transcript: Transcript,
event: str,
event: "TranscriptEventName",
data: Any,
) -> TranscriptEvent:
"""

View File

@@ -1,4 +1,4 @@
"""User table for storing Authentik user information."""
"""User table for storing user information."""
from datetime import datetime, timezone
@@ -15,6 +15,7 @@ users = sqlalchemy.Table(
sqlalchemy.Column("id", sqlalchemy.String, primary_key=True),
sqlalchemy.Column("email", sqlalchemy.String, nullable=False),
sqlalchemy.Column("authentik_uid", sqlalchemy.String, nullable=False),
sqlalchemy.Column("password_hash", sqlalchemy.String, nullable=True),
sqlalchemy.Column("created_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Column("updated_at", sqlalchemy.DateTime(timezone=True), nullable=False),
sqlalchemy.Index("idx_user_authentik_uid", "authentik_uid", unique=True),
@@ -26,6 +27,7 @@ class User(BaseModel):
id: NonEmptyString = Field(default_factory=generate_uuid4)
email: NonEmptyString
authentik_uid: NonEmptyString
password_hash: str | None = None
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
@@ -51,22 +53,29 @@ class UserController:
@staticmethod
async def create_or_update(
id: NonEmptyString, authentik_uid: NonEmptyString, email: NonEmptyString
id: NonEmptyString,
authentik_uid: NonEmptyString,
email: NonEmptyString,
password_hash: str | None = None,
) -> User:
existing = await UserController.get_by_authentik_uid(authentik_uid)
now = datetime.now(timezone.utc)
if existing:
update_values: dict = {"email": email, "updated_at": now}
if password_hash is not None:
update_values["password_hash"] = password_hash
query = (
users.update()
.where(users.c.authentik_uid == authentik_uid)
.values(email=email, updated_at=now)
.values(**update_values)
)
await get_database().execute(query)
return User(
id=existing.id,
authentik_uid=authentik_uid,
email=email,
password_hash=password_hash or existing.password_hash,
created_at=existing.created_at,
updated_at=now,
)
@@ -75,6 +84,7 @@ class UserController:
id=id,
authentik_uid=authentik_uid,
email=email,
password_hash=password_hash,
created_at=now,
updated_at=now,
)
@@ -82,6 +92,16 @@ class UserController:
await get_database().execute(query)
return user
@staticmethod
async def set_password_hash(user_id: NonEmptyString, password_hash: str) -> None:
now = datetime.now(timezone.utc)
query = (
users.update()
.where(users.c.id == user_id)
.values(password_hash=password_hash, updated_at=now)
)
await get_database().execute(query)
@staticmethod
async def list_all() -> list[User]:
query = users.select().order_by(users.c.created_at.desc())

View File

@@ -12,10 +12,11 @@ import structlog
from reflector.db.transcripts import Transcript, TranscriptEvent, transcripts_controller
from reflector.utils.string import NonEmptyString
from reflector.ws_events import TranscriptEventName
from reflector.ws_manager import get_ws_manager
# Events that should also be sent to user room (matches Celery behavior)
USER_ROOM_EVENTS = {"STATUS", "FINAL_TITLE", "DURATION", "DAG_STATUS"}
USER_ROOM_EVENTS: set[TranscriptEventName] = {"STATUS", "FINAL_TITLE", "DURATION"}
async def broadcast_event(
@@ -81,8 +82,7 @@ async def set_status_and_broadcast(
async def append_event_and_broadcast(
transcript_id: NonEmptyString,
transcript: Transcript,
event_name: NonEmptyString,
# TODO proper dictionary event => type
event_name: TranscriptEventName,
data: Any,
logger: structlog.BoundLogger,
) -> TranscriptEvent:

View File

@@ -12,7 +12,9 @@ import threading
from hatchet_sdk import ClientConfig, Hatchet
from hatchet_sdk.clients.rest.models import V1TaskStatus
from hatchet_sdk.rate_limit import RateLimitDuration
from reflector.hatchet.constants import LLM_RATE_LIMIT_KEY, LLM_RATE_LIMIT_PER_SECOND
from reflector.logger import logger
from reflector.settings import settings
@@ -113,3 +115,26 @@ class HatchetClientManager:
"""Reset the client instance (for testing)."""
with cls._lock:
cls._instance = None
@classmethod
async def ensure_rate_limit(cls) -> None:
"""Ensure the LLM rate limit exists in Hatchet.
Uses the Hatchet SDK rate_limits client (aio_put). See:
https://docs.hatchet.run/sdks/python/feature-clients/rate_limits
"""
logger.info(
"[Hatchet] Ensuring rate limit exists",
rate_limit_key=LLM_RATE_LIMIT_KEY,
limit=LLM_RATE_LIMIT_PER_SECOND,
)
client = cls.get_client()
await client.rate_limits.aio_put(
key=LLM_RATE_LIMIT_KEY,
limit=LLM_RATE_LIMIT_PER_SECOND,
duration=RateLimitDuration.SECOND,
)
logger.info(
"[Hatchet] Rate limit put successfully",
rate_limit_key=LLM_RATE_LIMIT_KEY,
)
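
A sketch of how ensure_rate_limit might be invoked once at worker startup so the LLM rate limit exists before any workflow references it; the import path is an assumption, since the file name is not shown in this diff.

```python
import asyncio
from reflector.hatchet.client import HatchetClientManager  # module path is an assumption

asyncio.run(HatchetClientManager.ensure_rate_limit())
```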

View File

@@ -1,230 +0,0 @@
"""
DAG Progress Reporting — models and transform.
Converts Hatchet V1WorkflowRunDetails into structured DagTask list
for frontend WebSocket/REST consumption.
Ported from render_hatchet_run.py (feat-dag-zulip) which renders markdown;
this module produces structured Pydantic models instead.
"""
from datetime import datetime
from enum import StrEnum
from hatchet_sdk.clients.rest.models import (
V1TaskStatus,
V1WorkflowRunDetails,
WorkflowRunShapeItemForWorkflowRunDetails,
)
from pydantic import BaseModel
class DagTaskStatus(StrEnum):
QUEUED = "queued"
RUNNING = "running"
COMPLETED = "completed"
FAILED = "failed"
CANCELLED = "cancelled"
_HATCHET_TO_DAG_STATUS: dict[V1TaskStatus, DagTaskStatus] = {
V1TaskStatus.QUEUED: DagTaskStatus.QUEUED,
V1TaskStatus.RUNNING: DagTaskStatus.RUNNING,
V1TaskStatus.COMPLETED: DagTaskStatus.COMPLETED,
V1TaskStatus.FAILED: DagTaskStatus.FAILED,
V1TaskStatus.CANCELLED: DagTaskStatus.CANCELLED,
}
class DagTask(BaseModel):
name: str
status: DagTaskStatus
started_at: datetime | None
finished_at: datetime | None
duration_seconds: float | None
parents: list[str]
error: str | None
children_total: int | None
children_completed: int | None
progress_pct: float | None
class DagStatusData(BaseModel):
workflow_run_id: str
tasks: list[DagTask]
def _topo_sort(
shape: list[WorkflowRunShapeItemForWorkflowRunDetails],
) -> list[str]:
"""Topological sort of step_ids from shape DAG (Kahn's algorithm).
Ported from render_hatchet_run.py.
"""
step_ids = {s.step_id for s in shape}
children_map: dict[str, list[str]] = {}
in_degree: dict[str, int] = {sid: 0 for sid in step_ids}
for s in shape:
children = [c for c in (s.children_step_ids or []) if c in step_ids]
children_map[s.step_id] = children
for c in children:
in_degree[c] += 1
queue = sorted(sid for sid, deg in in_degree.items() if deg == 0)
result: list[str] = []
while queue:
node = queue.pop(0)
result.append(node)
for c in children_map.get(node, []):
in_degree[c] -= 1
if in_degree[c] == 0:
queue.append(c)
queue.sort()
return result
def _extract_error_summary(error_message: str | None) -> str | None:
"""Extract first meaningful line from error message, skipping traceback frames."""
if not error_message or not error_message.strip():
return None
err_lines = error_message.strip().split("\n")
err_summary = err_lines[0]
for line in err_lines:
stripped = line.strip()
if stripped and not stripped.startswith(("Traceback", "File ", "{", ")")):
err_summary = stripped
return err_summary
def extract_dag_tasks(details: V1WorkflowRunDetails) -> list[DagTask]:
"""Extract structured DagTask list from Hatchet workflow run details.
Returns tasks in topological order with status, timestamps, parents,
error summaries, and fan-out children counts.
"""
shape = details.shape or []
tasks = details.tasks or []
if not shape:
return []
# Build lookups
step_to_shape: dict[str, WorkflowRunShapeItemForWorkflowRunDetails] = {
s.step_id: s for s in shape
}
step_to_name: dict[str, str] = {s.step_id: s.task_name for s in shape}
# Reverse edges: child -> parent names
parents_by_step: dict[str, list[str]] = {s.step_id: [] for s in shape}
for s in shape:
for child_id in s.children_step_ids or []:
if child_id in parents_by_step:
parents_by_step[child_id].append(step_to_name[s.step_id])
# Join tasks by step_id
from hatchet_sdk.clients.rest.models import V1TaskSummary # noqa: PLC0415
task_by_step: dict[str, V1TaskSummary] = {}
for t in tasks:
if t.step_id and t.step_id in step_to_name:
task_by_step[t.step_id] = t
ordered = _topo_sort(shape)
result: list[DagTask] = []
for step_id in ordered:
name = step_to_name[step_id]
t = task_by_step.get(step_id)
if not t:
result.append(
DagTask(
name=name,
status=DagTaskStatus.QUEUED,
started_at=None,
finished_at=None,
duration_seconds=None,
parents=parents_by_step.get(step_id, []),
error=None,
children_total=None,
children_completed=None,
progress_pct=None,
)
)
continue
status = _HATCHET_TO_DAG_STATUS.get(t.status, DagTaskStatus.QUEUED)
duration_seconds: float | None = None
if t.duration is not None:
duration_seconds = t.duration / 1000.0
# Fan-out children
children_total: int | None = None
children_completed: int | None = None
if t.num_spawned_children and t.num_spawned_children > 0:
children_total = t.num_spawned_children
children_completed = sum(
1 for c in (t.children or []) if c.status == V1TaskStatus.COMPLETED
)
result.append(
DagTask(
name=name,
status=status,
started_at=t.started_at,
finished_at=t.finished_at,
duration_seconds=duration_seconds,
parents=parents_by_step.get(step_id, []),
error=_extract_error_summary(t.error_message),
children_total=children_total,
children_completed=children_completed,
progress_pct=None,
)
)
return result
async def broadcast_dag_status(transcript_id: str, workflow_run_id: str) -> None:
"""Fetch current DAG state from Hatchet and broadcast via WebSocket.
Fire-and-forget: exceptions are logged but never raised.
All imports are deferred for fork-safety (Hatchet workers fork processes).
"""
try:
from reflector.db.transcripts import transcripts_controller # noqa: I001, PLC0415
from reflector.hatchet.broadcast import append_event_and_broadcast # noqa: PLC0415
from reflector.hatchet.client import HatchetClientManager # noqa: PLC0415
from reflector.hatchet.workflows.daily_multitrack_pipeline import ( # noqa: PLC0415
fresh_db_connection,
)
from reflector.logger import logger # noqa: PLC0415
async with fresh_db_connection():
client = HatchetClientManager.get_client()
details = await client.runs.aio_get(workflow_run_id)
dag_tasks = extract_dag_tasks(details)
dag_status = DagStatusData(workflow_run_id=workflow_run_id, tasks=dag_tasks)
transcript = await transcripts_controller.get_by_id(transcript_id)
if transcript:
await append_event_and_broadcast(
transcript_id,
transcript,
"DAG_STATUS",
dag_status,
logger,
)
except Exception:
from reflector.logger import logger # noqa: PLC0415
logger.warning(
"[DAG Progress] Failed to broadcast DAG status",
transcript_id=transcript_id,
workflow_run_id=workflow_run_id,
exc_info=True,
)

View File

@@ -3,6 +3,8 @@ LLM/I/O worker pool for all non-CPU tasks.
Handles: all tasks except mixdown_tracks (transcription, LLM inference, orchestration)
"""
import asyncio
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
daily_multitrack_pipeline,
@@ -20,6 +22,15 @@ POOL = "llm-io"
def main():
hatchet = HatchetClientManager.get_client()
try:
asyncio.run(HatchetClientManager.ensure_rate_limit())
except Exception as e:
logger.warning(
"[Hatchet] Rate limit initialization failed, but continuing. "
"If workflows fail to register, rate limits may need to be created manually.",
error=str(e),
)
logger.info(
"Starting Hatchet LLM worker pool (all tasks except mixdown)",
worker_name=WORKER_NAME,

View File

@@ -171,11 +171,13 @@ async def set_workflow_error_status(transcript_id: NonEmptyString) -> bool:
def _spawn_storage():
"""Create fresh storage instance."""
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
return AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
)
@@ -184,10 +186,7 @@ class Loggable(Protocol):
def make_audio_progress_logger(
ctx: Loggable,
task_name: TaskName,
interval: float = 5.0,
transcript_id: str | None = None,
ctx: Loggable, task_name: TaskName, interval: float = 5.0
) -> Callable[[float | None, float], None]:
"""Create a throttled progress logger callback for audio processing.
@@ -195,7 +194,6 @@ def make_audio_progress_logger(
ctx: Object with .log() method (e.g., Hatchet Context).
task_name: Name to prefix in log messages.
interval: Minimum seconds between log messages.
transcript_id: If provided, broadcasts transient DAG_TASK_PROGRESS events.
Returns:
Callback(progress_pct, audio_position) that logs at most every `interval` seconds.
@@ -217,27 +215,6 @@ def make_audio_progress_logger(
)
last_log_time[0] = now
if transcript_id and progress_pct is not None:
try:
import asyncio # noqa: PLC0415
from reflector.db.transcripts import TranscriptEvent # noqa: PLC0415
from reflector.hatchet.broadcast import broadcast_event # noqa: PLC0415
loop = asyncio.get_event_loop()
loop.create_task(
broadcast_event(
transcript_id,
TranscriptEvent(
event="DAG_TASK_PROGRESS",
data={"task_name": task_name, "progress_pct": progress_pct},
),
logger=logger,
)
)
except Exception:
pass # transient, never fail the callback
return callback
@@ -262,15 +239,8 @@ def with_error_handling(
) -> Callable[[PipelineInput, Context], Coroutine[Any, Any, R]]:
@functools.wraps(func)
async def wrapper(input: PipelineInput, ctx: Context) -> R:
from reflector.hatchet.dag_progress import broadcast_dag_status # noqa: I001, PLC0415
try:
result = await func(input, ctx)
try:
await broadcast_dag_status(input.transcript_id, ctx.workflow_run_id)
except Exception:
pass
return result
return await func(input, ctx)
except Exception as e:
logger.error(
f"[Hatchet] {step_name} failed",
@@ -278,10 +248,6 @@ def with_error_handling(
error=str(e),
exc_info=True,
)
try:
await broadcast_dag_status(input.transcript_id, ctx.workflow_run_id)
except Exception:
pass
if set_error_status:
await set_workflow_error_status(input.transcript_id)
raise
@@ -596,9 +562,7 @@ async def mixdown_tracks(input: PipelineInput, ctx: Context) -> MixdownResult:
target_sample_rate,
offsets_seconds=None,
logger=logger,
progress_callback=make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, transcript_id=input.transcript_id
),
progress_callback=make_audio_progress_logger(ctx, TaskName.MIXDOWN_TRACKS),
expected_duration_sec=recording_duration if recording_duration > 0 else None,
)
await writer.flush()

View File

@@ -49,11 +49,13 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
)
source_url = await storage.get_file_url(

View File

@@ -71,7 +71,7 @@ async def detect_chunk_topic(input: TopicChunkInput, ctx: Context) -> TopicChunk
from reflector.settings import settings # noqa: PLC0415
from reflector.utils.text import clean_title # noqa: PLC0415
llm = LLM(settings=settings, temperature=0.9, max_tokens=500)
llm = LLM(settings=settings, temperature=0.9)
prompt = TOPIC_PROMPT.format(text=input.chunk_text)
response = await llm.get_structured_response(

View File

@@ -60,6 +60,7 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
try:
# Create fresh storage instance to avoid aioboto3 fork issues
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
@@ -68,6 +69,7 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
)
source_url = await storage.get_file_url(
@@ -159,6 +161,7 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
raise ValueError("Missing padded_key from pad_track")
# Presign URL on demand (avoids stale URLs on workflow replay)
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
@@ -167,6 +170,7 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
)
audio_url = await storage.get_file_url(

View File

@@ -144,7 +144,18 @@ class StructuredOutputWorkflow(Workflow, Generic[OutputT]):
)
# Network retries handled by OpenAILike (max_retries=3)
response = await Settings.llm.acomplete(json_prompt)
# response_format enables grammar-based constrained decoding on backends
# that support it (DMR/llama.cpp, vLLM, Ollama, OpenAI).
response = await Settings.llm.acomplete(
json_prompt,
response_format={
"type": "json_schema",
"json_schema": {
"name": self.output_cls.__name__,
"schema": self.output_cls.model_json_schema(),
},
},
)
return ExtractionDone(output=response.text)
@step
@@ -191,7 +202,9 @@ class StructuredOutputWorkflow(Workflow, Generic[OutputT]):
class LLM:
def __init__(self, settings, temperature: float = 0.4, max_tokens: int = 2048):
def __init__(
self, settings, temperature: float = 0.4, max_tokens: int | None = None
):
self.settings_obj = settings
self.model_name = settings.LLM_MODEL
self.url = settings.LLM_URL
@@ -215,6 +228,7 @@ class LLM:
is_function_calling_model=False,
temperature=self.temperature,
max_tokens=self.max_tokens,
timeout=self.settings_obj.LLM_REQUEST_TIMEOUT,
additional_kwargs={"extra_body": {"litellm_session_id": session_id}},
)
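In practice the new default simply means callers stop passing max_tokens; a short before/after sketch (the import path for LLM is assumed, the rest mirrors the callsites in this changeset):

    from reflector.settings import settings
    from reflector.llm import LLM  # import path assumed

    # Old behaviour: an explicit cap (500/2048) starved thinking models, whose
    # chain-of-thought consumed the whole budget and left an empty response.
    capped = LLM(settings=settings, temperature=0.9, max_tokens=500)

    # New default: max_tokens=None, so the model manages its own output budget.
    # Pass max_tokens only when a hard cap is genuinely wanted.
    llm = LLM(settings=settings, temperature=0.9)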

View File

@@ -62,6 +62,8 @@ from reflector.processors.types import (
from reflector.processors.types import Transcript as TranscriptProcessorType
from reflector.settings import settings
from reflector.storage import get_transcripts_storage
from reflector.views.transcripts import GetTranscriptTopic
from reflector.ws_events import TranscriptEventName
from reflector.ws_manager import WebsocketManager, get_ws_manager
from reflector.zulip import (
get_zulip_message,
@@ -89,7 +91,11 @@ def broadcast_to_sockets(func):
if transcript and transcript.user_id:
# Emit only relevant events to the user room to avoid noisy updates.
# Allowed: STATUS, FINAL_TITLE, DURATION. All are prefixed with TRANSCRIPT_
allowed_user_events = {"STATUS", "FINAL_TITLE", "DURATION"}
allowed_user_events: set[TranscriptEventName] = {
"STATUS",
"FINAL_TITLE",
"DURATION",
}
if resp.event in allowed_user_events:
await self.ws_manager.send_json(
room_id=f"user:{transcript.user_id}",
@@ -244,13 +250,14 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
)
if isinstance(data, TitleSummaryWithIdProcessorType):
topic.id = data.id
get_topic = GetTranscriptTopic.from_transcript_topic(topic)
async with self.transaction():
transcript = await self.get_transcript()
await transcripts_controller.upsert_topic(transcript, topic)
return await transcripts_controller.append_event(
transcript=transcript,
event="TOPIC",
data=topic,
data=get_topic,
)
@broadcast_to_sockets

View File

@@ -1,74 +0,0 @@
import os
import torch
import torchaudio
from pyannote.audio import Pipeline
from reflector.processors.audio_diarization import AudioDiarizationProcessor
from reflector.processors.audio_diarization_auto import AudioDiarizationAutoProcessor
from reflector.processors.types import AudioDiarizationInput, DiarizationSegment
class AudioDiarizationPyannoteProcessor(AudioDiarizationProcessor):
"""Local diarization processor using pyannote.audio library"""
def __init__(
self,
model_name: str = "pyannote/speaker-diarization-3.1",
pyannote_auth_token: str | None = None,
device: str | None = None,
**kwargs,
):
super().__init__(**kwargs)
self.model_name = model_name
self.auth_token = pyannote_auth_token or os.environ.get("HF_TOKEN")
self.device = device
if device is None:
self.device = "cuda" if torch.cuda.is_available() else "cpu"
self.logger.info(f"Loading pyannote diarization model: {self.model_name}")
self.diarization_pipeline = Pipeline.from_pretrained(
self.model_name, use_auth_token=self.auth_token
)
self.diarization_pipeline.to(torch.device(self.device))
self.logger.info(f"Diarization model loaded on device: {self.device}")
async def _diarize(self, data: AudioDiarizationInput) -> list[DiarizationSegment]:
try:
# Load audio file (audio_url is assumed to be a local file path)
self.logger.info(f"Loading local audio file: {data.audio_url}")
waveform, sample_rate = torchaudio.load(data.audio_url)
audio_input = {"waveform": waveform, "sample_rate": sample_rate}
self.logger.info("Running speaker diarization")
diarization = self.diarization_pipeline(audio_input)
# Convert pyannote diarization output to our format
segments = []
for segment, _, speaker in diarization.itertracks(yield_label=True):
# Extract speaker number from label (e.g., "SPEAKER_00" -> 0)
speaker_id = 0
if speaker.startswith("SPEAKER_"):
try:
speaker_id = int(speaker.split("_")[-1])
except (ValueError, IndexError):
# Fallback to hash-based ID if parsing fails
speaker_id = hash(speaker) % 1000
segments.append(
{
"start": round(segment.start, 3),
"end": round(segment.end, 3),
"speaker": speaker_id,
}
)
self.logger.info(f"Diarization completed with {len(segments)} segments")
return segments
except Exception as e:
self.logger.exception(f"Diarization failed: {e}")
raise
AudioDiarizationAutoProcessor.register("pyannote", AudioDiarizationPyannoteProcessor)

View File

@@ -39,7 +39,7 @@ class TranscriptFinalTitleProcessor(Processor):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.chunks: list[TitleSummary] = []
self.llm = LLM(settings=settings, temperature=0.5, max_tokens=200)
self.llm = LLM(settings=settings, temperature=0.5)
async def _push(self, data: TitleSummary):
self.chunks.append(data)

View File

@@ -35,7 +35,7 @@ class TranscriptTopicDetectorProcessor(Processor):
super().__init__(**kwargs)
self.transcript = None
self.min_transcript_length = min_transcript_length
self.llm = LLM(settings=settings, temperature=0.9, max_tokens=500)
self.llm = LLM(settings=settings, temperature=0.9)
async def _push(self, data: Transcript):
if self.transcript is None:

View File

@@ -97,8 +97,11 @@ async def validate_transcript_for_processing(
if transcript.locked:
return ValidationLocked(detail="Recording is locked")
# Check if recording is ready for processing
if transcript.status == "idle" and not transcript.workflow_run_id:
if (
transcript.status == "idle"
and not transcript.workflow_run_id
and not transcript.recording_id
):
return ValidationNotReady(detail="Recording is not ready for processing")
# Check Celery tasks
@@ -267,19 +270,6 @@ async def dispatch_transcript_processing(
)
logger.info("Hatchet workflow dispatched", workflow_id=workflow_id)
try:
from reflector.hatchet.dag_progress import broadcast_dag_status # noqa: I001, PLC0415
await broadcast_dag_status(config.transcript_id, workflow_id)
except Exception:
logger.warning(
"[DAG Progress] Failed initial broadcast",
transcript_id=config.transcript_id,
workflow_id=workflow_id,
exc_info=True,
)
return None
elif isinstance(config, FileProcessingConfig):

View File

@@ -12,6 +12,17 @@ class Settings(BaseSettings):
extra="ignore",
)
ROOT_PATH: str = "/"
# WebRTC port range for ICE candidates (e.g. "50000-50100").
# When set, monkey-patches aioice to bind UDP sockets within this range,
# allowing Docker port mapping instead of network_mode: host.
WEBRTC_PORT_RANGE: str | None = None
# Host IP or hostname to advertise in ICE candidates instead of the
# container's internal IP. Use "host.docker.internal" in Docker with
# extra_hosts, or a specific LAN IP. Resolved at connection time.
WEBRTC_HOST: str | None = None
# CORS
UI_BASE_URL: str = "http://localhost:3000"
CORS_ORIGIN: str = "*"
@@ -49,6 +60,7 @@ class Settings(BaseSettings):
TRANSCRIPT_STORAGE_AWS_REGION: str = "us-east-1"
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None
TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL: str | None = None
# Platform-specific recording storage (follows {PREFIX}_STORAGE_AWS_{CREDENTIAL} pattern)
# Whereby storage configuration
@@ -75,6 +87,7 @@ class Settings(BaseSettings):
LLM_URL: str | None = None
LLM_API_KEY: str | None = None
LLM_CONTEXT_WINDOW: int = 16000
LLM_REQUEST_TIMEOUT: float = 300.0 # HTTP request timeout for LLM calls (seconds)
LLM_PARSE_MAX_RETRIES: int = (
3 # Max retries for JSON/validation errors (total attempts = retries + 1)
@@ -84,9 +97,7 @@ class Settings(BaseSettings):
)
# Diarization
# backends:
# - pyannote: in-process model loading (no HTTP, runs in same process)
# - modal: HTTP API client (works with Modal.com OR self-hosted gpu/self_hosted/)
# backend: modal — HTTP API client (works with Modal.com OR self-hosted gpu/self_hosted/)
DIARIZATION_ENABLED: bool = True
DIARIZATION_BACKEND: str = "modal"
DIARIZATION_URL: str | None = None
@@ -95,9 +106,6 @@ class Settings(BaseSettings):
# Diarization: modal backend
DIARIZATION_MODAL_API_KEY: str | None = None
# Diarization: local pyannote.audio
DIARIZATION_PYANNOTE_AUTH_TOKEN: str | None = None
# Audio Padding (Modal.com backend)
PADDING_URL: str | None = None
PADDING_MODAL_API_KEY: str | None = None
@@ -105,7 +113,7 @@ class Settings(BaseSettings):
# Sentry
SENTRY_DSN: str | None = None
# User authentication (none, jwt)
# User authentication (none, jwt, password)
AUTH_BACKEND: str = "none"
# User authentication using JWT
@@ -113,6 +121,10 @@ class Settings(BaseSettings):
AUTH_JWT_PUBLIC_KEY: str | None = "authentik.monadical.com_public.pem"
AUTH_JWT_AUDIENCE: str | None = None
# User authentication using password (selfhosted)
ADMIN_EMAIL: str | None = None
ADMIN_PASSWORD_HASH: str | None = None
PUBLIC_MODE: bool = False
PUBLIC_DATA_RETENTION_DAYS: PositiveInt = 7
@@ -146,6 +158,9 @@ class Settings(BaseSettings):
WHEREBY_WEBHOOK_SECRET: str | None = None
AWS_PROCESS_RECORDING_QUEUE_URL: str | None = None
SQS_POLLING_TIMEOUT_SECONDS: int = 60
CELERY_BEAT_POLL_INTERVAL: int = (
0 # 0 = use individual defaults; set e.g. 300 for 5-min polling
)
# Daily.co integration
DAILY_API_KEY: str | None = None

View File

@@ -53,6 +53,7 @@ class AwsStorage(Storage):
aws_access_key_id: str | None = None,
aws_secret_access_key: str | None = None,
aws_role_arn: str | None = None,
aws_endpoint_url: str | None = None,
):
if not aws_bucket_name:
raise ValueError("Storage `aws_storage` requires `aws_bucket_name`")
@@ -73,17 +74,26 @@ class AwsStorage(Storage):
self._access_key_id = aws_access_key_id
self._secret_access_key = aws_secret_access_key
self._role_arn = aws_role_arn
self._endpoint_url = aws_endpoint_url
self.aws_folder = ""
if "/" in aws_bucket_name:
self._bucket_name, self.aws_folder = aws_bucket_name.split("/", 1)
self.boto_config = Config(retries={"max_attempts": 3, "mode": "adaptive"})
config_kwargs: dict = {"retries": {"max_attempts": 3, "mode": "adaptive"}}
if aws_endpoint_url:
config_kwargs["s3"] = {"addressing_style": "path"}
self.boto_config = Config(**config_kwargs)
self.session = aioboto3.Session(
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
region_name=aws_region,
)
self.base_url = f"https://{self._bucket_name}.s3.amazonaws.com/"
if aws_endpoint_url:
self.base_url = f"{aws_endpoint_url}/{self._bucket_name}/"
else:
self.base_url = f"https://{self._bucket_name}.s3.amazonaws.com/"
# Implement credential properties
@property
@@ -139,7 +149,9 @@ class AwsStorage(Storage):
s3filename = f"{folder}/{filename}" if folder else filename
logger.info(f"Uploading {filename} to S3 {actual_bucket}/{folder}")
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
if isinstance(data, bytes):
await client.put_object(Bucket=actual_bucket, Key=s3filename, Body=data)
else:
@@ -162,7 +174,9 @@ class AwsStorage(Storage):
actual_bucket = bucket or self._bucket_name
folder = self.aws_folder
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
presigned_url = await client.generate_presigned_url(
operation,
Params={"Bucket": actual_bucket, "Key": s3filename},
@@ -177,7 +191,9 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Deleting {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
await client.delete_object(Bucket=actual_bucket, Key=s3filename)
@handle_s3_client_errors("download")
@@ -186,7 +202,9 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Downloading {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
response = await client.get_object(Bucket=actual_bucket, Key=s3filename)
return await response["Body"].read()
@@ -201,7 +219,9 @@ class AwsStorage(Storage):
logger.info(f"Listing objects from S3 {actual_bucket} with prefix '{s3prefix}'")
keys = []
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
paginator = client.get_paginator("list_objects_v2")
async for page in paginator.paginate(Bucket=actual_bucket, Prefix=s3prefix):
if "Contents" in page:
@@ -227,7 +247,9 @@ class AwsStorage(Storage):
folder = self.aws_folder
logger.info(f"Streaming {filename} from S3 {actual_bucket}/{folder}")
s3filename = f"{folder}/{filename}" if folder else filename
async with self.session.client("s3", config=self.boto_config) as client:
async with self.session.client(
"s3", config=self.boto_config, endpoint_url=self._endpoint_url
) as client:
await client.download_fileobj(
Bucket=actual_bucket, Key=s3filename, Fileobj=fileobj
)
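A rough usage sketch of the new parameter against an S3-compatible endpoint (the endpoint URL below is a placeholder, not a value from this changeset):

    from reflector.settings import settings
    from reflector.storage.storage_aws import AwsStorage

    # With aws_endpoint_url set, the client switches to path-style addressing
    # and builds URLs as {endpoint}/{bucket}/ instead of the virtual-hosted
    # https://{bucket}.s3.amazonaws.com/ form.
    storage = AwsStorage(
        aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
        aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
        aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
        aws_endpoint_url="http://garage:3900",  # e.g. a local Garage/MinIO endpoint
    )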

View File

@@ -0,0 +1,80 @@
"""Create or update an admin user with password authentication.
Usage:
uv run python -m reflector.tools.create_admin --email admin@localhost --password <pass>
uv run python -m reflector.tools.create_admin --email admin@localhost # prompts for password
uv run python -m reflector.tools.create_admin --hash-only --password <pass> # print hash only
"""
import argparse
import asyncio
import getpass
import sys
from reflector.auth.password_utils import hash_password
from reflector.db.users import user_controller
from reflector.utils import generate_uuid4
async def create_admin(email: str, password: str) -> None:
from reflector.db import get_database
database = get_database()
await database.connect()
try:
password_hash = hash_password(password)
existing = await user_controller.get_by_email(email)
if existing:
await user_controller.set_password_hash(existing.id, password_hash)
print(f"Updated password for existing user: {email} (id={existing.id})")
else:
user = await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid=f"local:{email}",
email=email,
password_hash=password_hash,
)
print(f"Created admin user: {email} (id={user.id})")
finally:
await database.disconnect()
def main():
parser = argparse.ArgumentParser(description="Create or update an admin user")
parser.add_argument(
"--email", default="admin@localhost", help="Admin email address"
)
parser.add_argument(
"--password",
help="Admin password (will prompt if not provided)",
)
parser.add_argument(
"--hash-only",
action="store_true",
help="Print the password hash and exit (for ADMIN_PASSWORD_HASH env var)",
)
args = parser.parse_args()
password = args.password
if not password:
password = getpass.getpass("Password: ")
confirm = getpass.getpass("Confirm password: ")
if password != confirm:
print("Passwords do not match", file=sys.stderr)
sys.exit(1)
if not password:
print("Password cannot be empty", file=sys.stderr)
sys.exit(1)
if args.hash_only:
print(hash_password(password))
sys.exit(0)
asyncio.run(create_admin(args.email, password))
if __name__ == "__main__":
main()

View File

@@ -24,6 +24,9 @@ from reflector.pipelines.main_live_pipeline import (
pipeline_process as live_pipeline_process,
)
from reflector.storage import Storage
from reflector.worker.app import (
app as celery_app, # noqa: F401 - ensure Celery uses Redis broker
)
def validate_s3_bucket_name(bucket: str) -> None:

View File

@@ -0,0 +1,43 @@
"""Provision admin user on server startup using environment variables.
Reads ADMIN_EMAIL and ADMIN_PASSWORD_HASH from settings and creates or updates
the admin user. Intended to be called from runserver.sh on container startup.
"""
import asyncio
from reflector.db.users import user_controller
from reflector.settings import settings
from reflector.utils import generate_uuid4
async def provision() -> None:
if not settings.ADMIN_EMAIL or not settings.ADMIN_PASSWORD_HASH:
return
from reflector.db import get_database
database = get_database()
await database.connect()
try:
existing = await user_controller.get_by_email(settings.ADMIN_EMAIL)
if existing:
await user_controller.set_password_hash(
existing.id, settings.ADMIN_PASSWORD_HASH
)
print(f"Updated admin user: {settings.ADMIN_EMAIL}")
else:
await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid=f"local:{settings.ADMIN_EMAIL}",
email=settings.ADMIN_EMAIL,
password_hash=settings.ADMIN_PASSWORD_HASH,
)
print(f"Created admin user: {settings.ADMIN_EMAIL}")
finally:
await database.disconnect()
if __name__ == "__main__":
asyncio.run(provision())

View File

@@ -80,7 +80,14 @@ async def webhook(request: Request):
try:
event = event_adapter.validate_python(body_json)
except Exception as e:
logger.error("Failed to parse webhook event", error=str(e), body=body.decode())
err_detail = str(e)
if hasattr(e, "errors"):
err_detail = f"{err_detail}; errors={e.errors()!r}"
logger.error(
"Failed to parse webhook event",
error=err_detail,
body=body.decode(),
)
raise HTTPException(status_code=422, detail="Invalid event format")
match event:

View File

@@ -10,6 +10,7 @@ from pydantic import BaseModel
from reflector.events import subscribers_shutdown
from reflector.logger import logger
from reflector.pipelines.runner import PipelineRunner
from reflector.settings import settings
sessions = []
router = APIRouter()
@@ -123,7 +124,16 @@ async def rtc_offer_base(
# update metrics
m_rtc_sessions.inc()
return RtcOffer(sdp=pc.localDescription.sdp, type=pc.localDescription.type)
sdp = pc.localDescription.sdp
# Rewrite ICE candidate IPs when running behind Docker bridge networking
if settings.WEBRTC_HOST:
from reflector.webrtc_ports import resolve_webrtc_host, rewrite_sdp_host
host_ip = resolve_webrtc_host(settings.WEBRTC_HOST)
sdp = rewrite_sdp_host(sdp, host_ip)
return RtcOffer(sdp=sdp, type=pc.localDescription.type)
@subscribers_shutdown.append

View File

@@ -111,7 +111,7 @@ class GetTranscriptMinimal(BaseModel):
room_id: str | None = None
room_name: str | None = None
audio_deleted: bool | None = None
dag_status: list[dict] | None = None
change_seq: int | None = None
class TranscriptParticipantWithEmail(TranscriptParticipant):
@@ -267,12 +267,22 @@ async def transcripts_list(
source_kind: SourceKind | None = None,
room_id: str | None = None,
search_term: str | None = None,
change_seq_from: int | None = None,
sort_by: Literal["created_at", "change_seq"] | None = None,
):
if not user and not settings.PUBLIC_MODE:
raise HTTPException(status_code=401, detail="Not authenticated")
user_id = user["sub"] if user else None
# Default behavior preserved: sort_by=None → "-created_at"
if sort_by == "change_seq":
order_by = "change_seq" # ASC (ascending for checkpoint-based polling)
elif sort_by == "created_at":
order_by = "-created_at" # DESC (newest first, same as current default)
else:
order_by = "-created_at" # default, backward compatible
return await apaginate(
get_database(),
await transcripts_controller.get_all(
@@ -280,7 +290,8 @@ async def transcripts_list(
source_kind=SourceKind(source_kind) if source_kind else None,
room_id=room_id,
search_term=search_term,
order_by="-created_at",
order_by=order_by,
change_seq_from=change_seq_from,
return_query=True,
),
)
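Not part of this changeset, but for orientation: an external ingester would use these parameters for checkpoint-based polling roughly as in the sketch below (the URL path, auth header, and the fastapi-pagination "items" envelope are assumptions):

    import httpx

    async def poll_changes(base_url: str, token: str, checkpoint: int) -> int:
        """Fetch transcripts changed since `checkpoint`; return the new checkpoint."""
        async with httpx.AsyncClient(
            headers={"Authorization": f"Bearer {token}"}
        ) as client:
            resp = await client.get(
                f"{base_url}/v1/transcripts",  # path assumed
                params={"sort_by": "change_seq", "change_seq_from": checkpoint},
            )
            resp.raise_for_status()
            for item in resp.json()["items"]:  # pagination envelope assumed
                checkpoint = max(checkpoint, item["change_seq"])
        return checkpoint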
@@ -492,13 +503,6 @@ async def transcript_get(
)
)
dag_status = None
if transcript.status == "processing" and transcript.events:
for ev in reversed(transcript.events):
if ev.event == "DAG_STATUS":
dag_status = ev.data.get("tasks") if isinstance(ev.data, dict) else None
break
base_data = {
"id": transcript.id,
"user_id": transcript.user_id,
@@ -520,7 +524,7 @@ async def transcript_get(
"room_id": transcript.room_id,
"room_name": room_name,
"audio_deleted": transcript.audio_deleted,
"dag_status": dag_status,
"change_seq": transcript.change_seq,
"participants": participants,
}

View File

@@ -5,7 +5,7 @@ from fastapi import APIRouter, Depends, HTTPException, UploadFile
from pydantic import BaseModel
import reflector.auth as auth
from reflector.db.transcripts import transcripts_controller
from reflector.db.transcripts import SourceKind, transcripts_controller
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
router = APIRouter()
@@ -88,8 +88,10 @@ async def transcript_record_upload(
finally:
container.close()
# set the status to "uploaded"
await transcripts_controller.update(transcript, {"status": "uploaded"})
# set the status to "uploaded" and mark as file source
await transcripts_controller.update(
transcript, {"status": "uploaded", "source_kind": SourceKind.FILE}
)
# launch a background task to process the file
task_pipeline_file_process.delay(transcript_id=transcript_id)

View File

@@ -4,18 +4,22 @@ Transcripts websocket API
"""
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, WebSocket, WebSocketDisconnect
from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
import reflector.auth as auth
from reflector.db.transcripts import transcripts_controller
from reflector.ws_events import TranscriptWsEvent
from reflector.ws_manager import get_ws_manager
router = APIRouter()
@router.get("/transcripts/{transcript_id}/events")
@router.get(
"/transcripts/{transcript_id}/events",
response_model=TranscriptWsEvent,
summary="Transcript WebSocket event schema",
description="Stub exposing the discriminated union of all transcript-level WS events for OpenAPI type generation. Real events are delivered over the WebSocket at the same path.",
)
async def transcript_get_websocket_events(transcript_id: str):
pass
@@ -24,8 +28,9 @@ async def transcript_get_websocket_events(transcript_id: str):
async def transcript_events_websocket(
transcript_id: str,
websocket: WebSocket,
user: Optional[auth.UserInfo] = Depends(auth.current_user_optional),
):
_, negotiated_subprotocol = auth.parse_ws_bearer_token(websocket)
user = await auth.current_user_ws_optional(websocket)
user_id = user["sub"] if user else None
transcript = await transcripts_controller.get_by_id_for_http(
transcript_id, user_id=user_id
@@ -37,23 +42,19 @@ async def transcript_events_websocket(
# use ts:transcript_id as room id
room_id = f"ts:{transcript_id}"
ws_manager = get_ws_manager()
await ws_manager.add_user_to_room(room_id, websocket)
await ws_manager.add_user_to_room(
room_id, websocket, subprotocol=negotiated_subprotocol
)
try:
# on first connection, send all events only to the current user
# Find the last DAG_STATUS to send after other historical events
last_dag_status = None
for event in transcript.events:
# for now, do not send TRANSCRIPT or STATUS events - these are live events
# that do not need to be replayed to the client; keep the rest
name = event.event
if name in ("TRANSCRIPT", "STATUS"):
continue
if name == "DAG_STATUS":
last_dag_status = event
continue
await websocket.send_json(event.model_dump(mode="json"))
# Send only the most recent DAG_STATUS so reconnecting clients get current state
if last_dag_status is not None:
await websocket.send_json(last_dag_status.model_dump(mode="json"))
# XXX if transcript is final (locked=True and status=ended)
# XXX send a final event to the client and close the connection

View File

@@ -1,55 +1,48 @@
from typing import Optional
from fastapi import APIRouter, WebSocket
from fastapi import APIRouter, WebSocket, WebSocketDisconnect
from reflector.auth.auth_jwt import JWTAuth # type: ignore
from reflector.db.users import user_controller
import reflector.auth as auth
from reflector.ws_events import UserWsEvent
from reflector.ws_manager import get_ws_manager
router = APIRouter()
@router.get(
"/events",
response_model=UserWsEvent,
summary="User WebSocket event schema",
description="Stub exposing the discriminated union of all user-level WS events for OpenAPI type generation. Real events are delivered over the WebSocket at the same path.",
)
async def user_get_websocket_events():
pass
# Close code for unauthorized WebSocket connections
UNAUTHORISED = 4401
@router.websocket("/events")
async def user_events_websocket(websocket: WebSocket):
# Browser can't send Authorization header for WS; use subprotocol: ["bearer", token]
raw_subprotocol = websocket.headers.get("sec-websocket-protocol") or ""
parts = [p.strip() for p in raw_subprotocol.split(",") if p.strip()]
token: Optional[str] = None
negotiated_subprotocol: Optional[str] = None
if len(parts) >= 2 and parts[0].lower() == "bearer":
negotiated_subprotocol = "bearer"
token = parts[1]
token, negotiated_subprotocol = auth.parse_ws_bearer_token(websocket)
user_id: Optional[str] = None
if not token:
await websocket.close(code=UNAUTHORISED)
return
try:
payload = JWTAuth().verify_token(token)
authentik_uid = payload.get("sub")
if authentik_uid:
user = await user_controller.get_by_authentik_uid(authentik_uid)
if user:
user_id = user.id
else:
await websocket.close(code=UNAUTHORISED)
return
else:
await websocket.close(code=UNAUTHORISED)
return
user = await auth.current_user_ws_optional(websocket)
except Exception:
await websocket.close(code=UNAUTHORISED)
return
if not user_id:
if not user:
await websocket.close(code=UNAUTHORISED)
return
user_id: Optional[str] = user.sub if hasattr(user, "sub") else user["sub"]
room_id = f"user:{user_id}"
ws_manager = get_ws_manager()
@@ -60,6 +53,8 @@ async def user_events_websocket(websocket: WebSocket):
try:
while True:
await websocket.receive()
except (RuntimeError, WebSocketDisconnect):
pass
finally:
if room_id:
await ws_manager.remove_user_from_room(room_id, websocket)
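For reference, a client connects to this endpoint by carrying the token in the WebSocket subprotocol list, since browsers cannot set an Authorization header on a WebSocket. A minimal sketch using the third-party websockets package (the URL is a placeholder):

    import websockets

    async def listen(url: str, token: str) -> None:
        # The server negotiates the "bearer" subprotocol and reads the token
        # from the second entry (see auth.parse_ws_bearer_token above).
        async with websockets.connect(url, subprotocols=["bearer", token]) as ws:
            async for frame in ws:
                print(frame)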

View File

@@ -0,0 +1,111 @@
"""
Monkey-patch aioice to use a fixed UDP port range for ICE candidates,
and optionally rewrite SDP to advertise a different host IP.
This allows running the server in Docker with bridge networking
(no network_mode: host) by:
1. Restricting ICE UDP ports to a known range that can be mapped in Docker
2. Replacing container-internal IPs with the Docker host IP in SDP answers
"""
import asyncio
import socket
from reflector.logger import logger
def parse_port_range(range_str: str) -> tuple[int, int]:
"""Parse a 'min-max' string into (min_port, max_port)."""
parts = range_str.split("-")
if len(parts) != 2:
raise ValueError(f"WEBRTC_PORT_RANGE must be 'min-max', got: {range_str!r}")
min_port, max_port = int(parts[0]), int(parts[1])
if not (1024 <= min_port <= max_port <= 65535):
raise ValueError(
f"Invalid port range: {min_port}-{max_port} "
"(must be 1024-65535 with min <= max)"
)
return min_port, max_port
def patch_aioice_port_range(min_port: int, max_port: int) -> None:
"""
Monkey-patch aioice so that ICE candidate UDP sockets bind to ports
within [min_port, max_port] instead of OS-assigned ephemeral ports.
Works by temporarily wrapping loop.create_datagram_endpoint() during
aioice's get_component_candidates() to intercept bind(addr, 0) calls.
"""
import aioice.ice as _ice
_original = _ice.Connection.get_component_candidates
_state = {"next_port": min_port}
async def _patched_get_component_candidates(self, component, addresses, timeout=5):
loop = asyncio.get_event_loop()
_orig_create = loop.create_datagram_endpoint
async def _create_with_port_range(*args, **kwargs):
local_addr = kwargs.get("local_addr")
if local_addr and local_addr[1] == 0:
addr = local_addr[0]
# Try each port in the range (wrapping around)
attempts = max_port - min_port + 1
for _ in range(attempts):
port = _state["next_port"]
_state["next_port"] = (
min_port
if _state["next_port"] >= max_port
else _state["next_port"] + 1
)
try:
kwargs["local_addr"] = (addr, port)
return await _orig_create(*args, **kwargs)
except OSError:
continue
# All ports exhausted, fall back to OS assignment
logger.warning(
"All WebRTC ports in range exhausted, falling back to OS",
min_port=min_port,
max_port=max_port,
)
kwargs["local_addr"] = (addr, 0)
return await _orig_create(*args, **kwargs)
loop.create_datagram_endpoint = _create_with_port_range
try:
return await _original(self, component, addresses, timeout)
finally:
loop.create_datagram_endpoint = _orig_create
_ice.Connection.get_component_candidates = _patched_get_component_candidates
logger.info(
"aioice patched for WebRTC port range",
min_port=min_port,
max_port=max_port,
)
def resolve_webrtc_host(host: str) -> str:
"""Resolve a hostname or IP to an IP address for ICE candidate rewriting."""
try:
ip = socket.gethostbyname(host)
logger.info("Resolved WEBRTC_HOST", host=host, ip=ip)
return ip
except socket.gaierror:
logger.warning("Could not resolve WEBRTC_HOST, using as-is", host=host)
return host
def rewrite_sdp_host(sdp: str, target_ip: str) -> str:
"""
Replace container-internal IPs in SDP with target_ip so that
ICE candidates advertise a routable address.
"""
import aioice.ice
container_ips = aioice.ice.get_host_addresses(use_ipv4=True, use_ipv6=False)
for ip in container_ips:
if ip != "127.0.0.1" and ip != target_ip:
sdp = sdp.replace(ip, target_ip)
return sdp
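The SDP-rewrite half of this module is wired up in the RTC view above; the port-range half is expected to be applied once at process startup. A minimal sketch of that wiring (where exactly the hook lives is an assumption, not code from this changeset):

    from reflector.settings import settings
    from reflector.webrtc_ports import parse_port_range, patch_aioice_port_range

    # Hypothetical startup hook: restrict ICE UDP sockets to a range that
    # docker-compose can map, e.g. "50000-50100:50000-50100/udp".
    if settings.WEBRTC_PORT_RANGE:
        min_port, max_port = parse_port_range(settings.WEBRTC_PORT_RANGE)
        patch_aioice_port_range(min_port, max_port)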

View File

@@ -8,8 +8,21 @@ from reflector.settings import settings
logger = structlog.get_logger(__name__)
# Polling intervals (seconds)
# CELERY_BEAT_POLL_INTERVAL overrides all sub-5-min intervals (e.g. 300 for selfhosted)
_override = (
float(settings.CELERY_BEAT_POLL_INTERVAL)
if settings.CELERY_BEAT_POLL_INTERVAL > 0
else 0
)
# Webhook-aware: 180s when webhook configured (backup mode), 15s when no webhook (primary discovery)
POLL_DAILY_RECORDINGS_INTERVAL_SEC = 180.0 if settings.DAILY_WEBHOOK_SECRET else 15.0
POLL_DAILY_RECORDINGS_INTERVAL_SEC = _override or (
180.0 if settings.DAILY_WEBHOOK_SECRET else 15.0
)
SQS_POLL_INTERVAL = _override or float(settings.SQS_POLLING_TIMEOUT_SECONDS)
RECONCILIATION_INTERVAL = _override or 30.0
ICS_SYNC_INTERVAL = _override or 60.0
UPCOMING_MEETINGS_INTERVAL = _override or 30.0
if celery.current_app.main != "default":
logger.info(f"Celery already configured ({celery.current_app})")
@@ -33,11 +46,11 @@ else:
app.conf.beat_schedule = {
"process_messages": {
"task": "reflector.worker.process.process_messages",
"schedule": float(settings.SQS_POLLING_TIMEOUT_SECONDS),
"schedule": SQS_POLL_INTERVAL,
},
"process_meetings": {
"task": "reflector.worker.process.process_meetings",
"schedule": float(settings.SQS_POLLING_TIMEOUT_SECONDS),
"schedule": SQS_POLL_INTERVAL,
},
"reprocess_failed_recordings": {
"task": "reflector.worker.process.reprocess_failed_recordings",
@@ -53,15 +66,15 @@ else:
},
"trigger_daily_reconciliation": {
"task": "reflector.worker.process.trigger_daily_reconciliation",
"schedule": 30.0, # Every 30 seconds (queues poll tasks for all active meetings)
"schedule": RECONCILIATION_INTERVAL,
},
"sync_all_ics_calendars": {
"task": "reflector.worker.ics_sync.sync_all_ics_calendars",
"schedule": 60.0, # Run every minute to check which rooms need sync
"schedule": ICS_SYNC_INTERVAL,
},
"create_upcoming_meetings": {
"task": "reflector.worker.ics_sync.create_upcoming_meetings",
"schedule": 30.0, # Run every 30 seconds to create upcoming meetings
"schedule": UPCOMING_MEETINGS_INTERVAL,
},
}

View File

@@ -0,0 +1,188 @@
"""Typed WebSocket event models.
Defines Pydantic models with Literal discriminators for all WS events.
Exposed via stub GET endpoints so ``pnpm openapi`` generates TS discriminated unions.
"""
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Discriminator
from reflector.db.transcripts import (
TranscriptActionItems,
TranscriptDuration,
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
TranscriptFinalTitle,
TranscriptStatus,
TranscriptText,
TranscriptWaveform,
)
from reflector.utils.string import NonEmptyString
from reflector.views.transcripts import GetTranscriptTopic
# ---------------------------------------------------------------------------
# Transcript-level event name literal
# ---------------------------------------------------------------------------
TranscriptEventName = Literal[
"TRANSCRIPT",
"TOPIC",
"STATUS",
"FINAL_TITLE",
"FINAL_LONG_SUMMARY",
"FINAL_SHORT_SUMMARY",
"ACTION_ITEMS",
"DURATION",
"WAVEFORM",
]
# ---------------------------------------------------------------------------
# Transcript-level WS event wrappers
# ---------------------------------------------------------------------------
class TranscriptWsTranscript(BaseModel):
event: Literal["TRANSCRIPT"] = "TRANSCRIPT"
data: TranscriptText
class TranscriptWsTopic(BaseModel):
event: Literal["TOPIC"] = "TOPIC"
data: GetTranscriptTopic
class TranscriptWsStatusData(BaseModel):
value: TranscriptStatus
class TranscriptWsStatus(BaseModel):
event: Literal["STATUS"] = "STATUS"
data: TranscriptWsStatusData
class TranscriptWsFinalTitle(BaseModel):
event: Literal["FINAL_TITLE"] = "FINAL_TITLE"
data: TranscriptFinalTitle
class TranscriptWsFinalLongSummary(BaseModel):
event: Literal["FINAL_LONG_SUMMARY"] = "FINAL_LONG_SUMMARY"
data: TranscriptFinalLongSummary
class TranscriptWsFinalShortSummary(BaseModel):
event: Literal["FINAL_SHORT_SUMMARY"] = "FINAL_SHORT_SUMMARY"
data: TranscriptFinalShortSummary
class TranscriptWsActionItems(BaseModel):
event: Literal["ACTION_ITEMS"] = "ACTION_ITEMS"
data: TranscriptActionItems
class TranscriptWsDuration(BaseModel):
event: Literal["DURATION"] = "DURATION"
data: TranscriptDuration
class TranscriptWsWaveform(BaseModel):
event: Literal["WAVEFORM"] = "WAVEFORM"
data: TranscriptWaveform
TranscriptWsEvent = Annotated[
Union[
TranscriptWsTranscript,
TranscriptWsTopic,
TranscriptWsStatus,
TranscriptWsFinalTitle,
TranscriptWsFinalLongSummary,
TranscriptWsFinalShortSummary,
TranscriptWsActionItems,
TranscriptWsDuration,
TranscriptWsWaveform,
],
Discriminator("event"),
]
# ---------------------------------------------------------------------------
# User-level event name literal
# ---------------------------------------------------------------------------
UserEventName = Literal[
"TRANSCRIPT_CREATED",
"TRANSCRIPT_DELETED",
"TRANSCRIPT_STATUS",
"TRANSCRIPT_FINAL_TITLE",
"TRANSCRIPT_DURATION",
]
# ---------------------------------------------------------------------------
# User-level WS event data models
# ---------------------------------------------------------------------------
class UserTranscriptCreatedData(BaseModel):
id: NonEmptyString
class UserTranscriptDeletedData(BaseModel):
id: NonEmptyString
class UserTranscriptStatusData(BaseModel):
id: NonEmptyString
value: TranscriptStatus
class UserTranscriptFinalTitleData(BaseModel):
id: NonEmptyString
title: NonEmptyString
class UserTranscriptDurationData(BaseModel):
id: NonEmptyString
duration: float
# ---------------------------------------------------------------------------
# User-level WS event wrappers
# ---------------------------------------------------------------------------
class UserWsTranscriptCreated(BaseModel):
event: Literal["TRANSCRIPT_CREATED"] = "TRANSCRIPT_CREATED"
data: UserTranscriptCreatedData
class UserWsTranscriptDeleted(BaseModel):
event: Literal["TRANSCRIPT_DELETED"] = "TRANSCRIPT_DELETED"
data: UserTranscriptDeletedData
class UserWsTranscriptStatus(BaseModel):
event: Literal["TRANSCRIPT_STATUS"] = "TRANSCRIPT_STATUS"
data: UserTranscriptStatusData
class UserWsTranscriptFinalTitle(BaseModel):
event: Literal["TRANSCRIPT_FINAL_TITLE"] = "TRANSCRIPT_FINAL_TITLE"
data: UserTranscriptFinalTitleData
class UserWsTranscriptDuration(BaseModel):
event: Literal["TRANSCRIPT_DURATION"] = "TRANSCRIPT_DURATION"
data: UserTranscriptDurationData
UserWsEvent = Annotated[
Union[
UserWsTranscriptCreated,
UserWsTranscriptDeleted,
UserWsTranscriptStatus,
UserWsTranscriptFinalTitle,
UserWsTranscriptDuration,
],
Discriminator("event"),
]
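Since these are plain Pydantic discriminated unions, they can also be used server-side to validate outbound or replayed frames; a small sketch (the payload values are illustrative):

    from pydantic import TypeAdapter

    from reflector.ws_events import TranscriptWsEvent

    adapter = TypeAdapter(TranscriptWsEvent)

    # The "event" literal selects the concrete model, so the data payload is
    # validated against the matching schema (TranscriptWsStatusData here).
    # "ended" is assumed to be a valid TranscriptStatus value.
    msg = adapter.validate_python({"event": "STATUS", "data": {"value": "ended"}})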

View File

@@ -48,7 +48,15 @@ class RedisPubSubManager:
if not self.redis_connection:
await self.connect()
message = json.dumps(message)
await self.redis_connection.publish(room_id, message)
try:
await self.redis_connection.publish(room_id, message)
except RuntimeError:
# Celery workers run each task in a new event loop (asyncio.run),
# which closes the previous loop. Cached Redis connection is dead.
# Reconnect on the current loop and retry.
self.redis_connection = None
await self.connect()
await self.redis_connection.publish(room_id, message)
async def subscribe(self, room_id: str) -> redis.Redis:
await self.pubsub.subscribe(room_id)

View File

@@ -2,6 +2,10 @@
if [ "${ENTRYPOINT}" = "server" ]; then
uv run alembic upgrade head
# Provision admin user if password auth is configured
if [ -n "${ADMIN_EMAIL:-}" ] && [ -n "${ADMIN_PASSWORD_HASH:-}" ]; then
uv run python -m reflector.tools.provision_admin
fi
uv run uvicorn reflector.app:app --host 0.0.0.0 --port 1250
elif [ "${ENTRYPOINT}" = "worker" ]; then
uv run celery -A reflector.worker.app worker --loglevel=info

View File

@@ -15,8 +15,7 @@ from reflector.settings import settings
async def setup_webhook(webhook_url: str):
"""
Create or update Daily.co webhook for this environment using dailyco_api module.
Uses DAILY_WEBHOOK_UUID to identify existing webhook.
Create Daily.co webhook. Deletes any existing webhooks first, then creates the new one.
"""
if not settings.DAILY_API_KEY:
print("Error: DAILY_API_KEY not set")
@@ -35,79 +34,37 @@ async def setup_webhook(webhook_url: str):
]
async with DailyApiClient(api_key=settings.DAILY_API_KEY) as client:
webhook_uuid = settings.DAILY_WEBHOOK_UUID
webhooks = await client.list_webhooks()
for wh in webhooks:
await client.delete_webhook(wh.uuid)
print(f"Deleted webhook {wh.uuid}")
if webhook_uuid:
print(f"Updating existing webhook {webhook_uuid}...")
try:
# Note: Daily.co doesn't support PATCH well, so we delete + recreate
await client.delete_webhook(webhook_uuid)
print(f"Deleted old webhook {webhook_uuid}")
request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)
webhook_uuid = result.uuid
request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)
print(f"Created webhook {webhook_uuid} (state: {result.state})")
print(f" URL: {result.url}")
print(
f"✓ Created replacement webhook {result.uuid} (state: {result.state})"
)
print(f" URL: {result.url}")
env_file = Path(__file__).parent.parent / ".env"
if env_file.exists():
lines = env_file.read_text().splitlines()
updated = False
for i, line in enumerate(lines):
if line.startswith("DAILY_WEBHOOK_UUID="):
lines[i] = f"DAILY_WEBHOOK_UUID={webhook_uuid}"
updated = True
break
if not updated:
lines.append(f"DAILY_WEBHOOK_UUID={webhook_uuid}")
env_file.write_text("\n".join(lines) + "\n")
print("✓ Saved DAILY_WEBHOOK_UUID to .env")
webhook_uuid = result.uuid
except Exception as e:
if hasattr(e, "response") and e.response.status_code == 404:
print(f"Webhook {webhook_uuid} not found, creating new one...")
webhook_uuid = None # Fall through to creation
else:
print(f"Error updating webhook: {e}")
return 1
if not webhook_uuid:
print("Creating new webhook...")
request = CreateWebhookRequest(
url=webhook_url,
eventTypes=event_types,
hmac=settings.DAILY_WEBHOOK_SECRET,
)
result = await client.create_webhook(request)
webhook_uuid = result.uuid
print(f"✓ Created webhook {webhook_uuid} (state: {result.state})")
print(f" URL: {result.url}")
print()
print("=" * 60)
print("IMPORTANT: Add this to your environment variables:")
print("=" * 60)
print(f"DAILY_WEBHOOK_UUID: {webhook_uuid}")
print("=" * 60)
print()
# Try to write UUID to .env file
env_file = Path(__file__).parent.parent / ".env"
if env_file.exists():
lines = env_file.read_text().splitlines()
updated = False
# Update existing DAILY_WEBHOOK_UUID line or add it
for i, line in enumerate(lines):
if line.startswith("DAILY_WEBHOOK_UUID="):
lines[i] = f"DAILY_WEBHOOK_UUID={webhook_uuid}"
updated = True
break
if not updated:
lines.append(f"DAILY_WEBHOOK_UUID={webhook_uuid}")
env_file.write_text("\n".join(lines) + "\n")
print(f"✓ Also saved to local .env file")
else:
print(f"⚠ Local .env file not found - please add manually")
return 0
return 0
if __name__ == "__main__":
@@ -117,11 +74,7 @@ if __name__ == "__main__":
"Example: python recreate_daily_webhook.py https://example.com/v1/daily/webhook"
)
print()
print("Behavior:")
print(" - If DAILY_WEBHOOK_UUID set: Deletes old webhook, creates new one")
print(
" - If DAILY_WEBHOOK_UUID empty: Creates new webhook, saves UUID to .env"
)
print("Deletes all existing webhooks, then creates a new one.")
sys.exit(1)
sys.exit(asyncio.run(setup_webhook(sys.argv[1])))

View File

@@ -0,0 +1,201 @@
"""Tests for the password auth backend."""
import pytest
from httpx import AsyncClient
from jose import jwt
from reflector.auth.password_utils import hash_password
from reflector.settings import settings
@pytest.fixture
async def password_app():
"""Create a minimal FastAPI app with the password auth router."""
from fastapi import FastAPI
from reflector.auth import auth_password
app = FastAPI()
app.include_router(auth_password.router, prefix="/v1")
# Reset rate limiter between tests
auth_password._login_attempts.clear()
return app
@pytest.fixture
async def password_client(password_app):
"""Create a test client for the password auth app."""
async with AsyncClient(app=password_app, base_url="http://test/v1") as client:
yield client
async def _create_user_with_password(email: str, password: str):
"""Helper to create a user with a password hash in the DB."""
from reflector.db.users import user_controller
from reflector.utils import generate_uuid4
pw_hash = hash_password(password)
return await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid=f"local:{email}",
email=email,
password_hash=pw_hash,
)
@pytest.mark.asyncio
async def test_login_success(password_client, setup_database):
await _create_user_with_password("admin@test.com", "testpass123")
response = await password_client.post(
"/auth/login",
json={"email": "admin@test.com", "password": "testpass123"},
)
assert response.status_code == 200
data = response.json()
assert "access_token" in data
assert data["token_type"] == "bearer"
assert data["expires_in"] > 0
# Verify the JWT is valid
payload = jwt.decode(
data["access_token"],
settings.SECRET_KEY,
algorithms=["HS256"],
)
assert payload["email"] == "admin@test.com"
assert "sub" in payload
assert "exp" in payload
@pytest.mark.asyncio
async def test_login_wrong_password(password_client, setup_database):
await _create_user_with_password("user@test.com", "correctpassword")
response = await password_client.post(
"/auth/login",
json={"email": "user@test.com", "password": "wrongpassword"},
)
assert response.status_code == 401
@pytest.mark.asyncio
async def test_login_nonexistent_user(password_client, setup_database):
response = await password_client.post(
"/auth/login",
json={"email": "nobody@test.com", "password": "anything"},
)
assert response.status_code == 401
@pytest.mark.asyncio
async def test_login_user_without_password_hash(password_client, setup_database):
"""User exists but has no password_hash (e.g. Authentik user)."""
from reflector.db.users import user_controller
from reflector.utils import generate_uuid4
await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid="authentik:abc123",
email="oidc@test.com",
)
response = await password_client.post(
"/auth/login",
json={"email": "oidc@test.com", "password": "anything"},
)
assert response.status_code == 401
@pytest.mark.asyncio
async def test_login_rate_limiting(password_client, setup_database):
from reflector.auth import auth_password
# Reset rate limiter
auth_password._login_attempts.clear()
for _ in range(10):
await password_client.post(
"/auth/login",
json={"email": "fake@test.com", "password": "wrong"},
)
# 11th attempt should be rate-limited
response = await password_client.post(
"/auth/login",
json={"email": "fake@test.com", "password": "wrong"},
)
assert response.status_code == 429
@pytest.mark.asyncio
async def test_jwt_create_and_verify():
from reflector.auth.auth_password import _create_access_token, _verify_token
token, expires_in = _create_access_token("user-123", "test@example.com")
assert expires_in > 0
payload = _verify_token(token)
assert payload["sub"] == "user-123"
assert payload["email"] == "test@example.com"
assert "exp" in payload
@pytest.mark.asyncio
async def test_authenticate_user_with_jwt():
from reflector.auth.auth_password import (
_authenticate_user,
_create_access_token,
)
token, _ = _create_access_token("user-abc", "abc@test.com")
user = await _authenticate_user(token, None)
assert user is not None
assert user.sub == "user-abc"
assert user.email == "abc@test.com"
@pytest.mark.asyncio
async def test_authenticate_user_invalid_jwt():
from fastapi import HTTPException
from reflector.auth.auth_password import _authenticate_user
with pytest.raises(HTTPException) as exc_info:
await _authenticate_user("invalid.jwt.token", None)
assert exc_info.value.status_code == 401
@pytest.mark.asyncio
async def test_authenticate_user_no_credentials():
from reflector.auth.auth_password import _authenticate_user
user = await _authenticate_user(None, None)
assert user is None
@pytest.mark.asyncio
async def test_current_user_raises_without_token():
"""Verify that current_user dependency raises 401 without token."""
from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient
from reflector.auth import auth_password
app = FastAPI()
@app.get("/test")
async def test_endpoint(user=Depends(auth_password.current_user)):
return {"user": user.sub}
# Use sync TestClient for simplicity
client = TestClient(app)
response = client.get("/test")
# OAuth2PasswordBearer with auto_error=False returns None, then current_user raises 401
assert response.status_code == 401
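The token helpers exercised above are easy to picture from the assertions: an HS256 JWT carrying sub, email, and exp, signed with settings.SECRET_KEY. A rough sketch using python-jose follows; the 15-minute lifetime and the helper bodies are assumptions, only the claim set and the algorithm come from the tests.

from datetime import datetime, timedelta, timezone

from jose import jwt

from reflector.settings import settings

ACCESS_TOKEN_LIFETIME = timedelta(minutes=15)  # assumed default, not taken from the tests


def _create_access_token(user_id: str, email: str) -> tuple[str, int]:
    """Return a signed HS256 JWT plus its lifetime in seconds."""
    expires = datetime.now(timezone.utc) + ACCESS_TOKEN_LIFETIME
    claims = {"sub": user_id, "email": email, "exp": int(expires.timestamp())}
    token = jwt.encode(claims, settings.SECRET_KEY, algorithm="HS256")
    return token, int(ACCESS_TOKEN_LIFETIME.total_seconds())


def _verify_token(token: str) -> dict:
    """Decode the JWT, raising if the signature or expiry is invalid."""
    return jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"])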

View File

@@ -0,0 +1,97 @@
"""Tests for admin user creation logic (used by create_admin CLI tool)."""
import pytest
from reflector.auth.password_utils import hash_password, verify_password
from reflector.db.users import user_controller
from reflector.utils import generate_uuid4
async def _provision_admin(email: str, password: str):
"""Mirrors the logic in create_admin.create_admin() without managing DB connections."""
password_hash = hash_password(password)
existing = await user_controller.get_by_email(email)
if existing:
await user_controller.set_password_hash(existing.id, password_hash)
else:
await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid=f"local:{email}",
email=email,
password_hash=password_hash,
)
@pytest.mark.asyncio
async def test_create_admin_new_user(setup_database):
await _provision_admin("newadmin@test.com", "password123")
user = await user_controller.get_by_email("newadmin@test.com")
assert user is not None
assert user.email == "newadmin@test.com"
assert user.authentik_uid == "local:newadmin@test.com"
assert user.password_hash is not None
assert verify_password("password123", user.password_hash)
@pytest.mark.asyncio
async def test_create_admin_updates_existing(setup_database):
# Create first
await _provision_admin("admin@test.com", "oldpassword")
user1 = await user_controller.get_by_email("admin@test.com")
# Update password
await _provision_admin("admin@test.com", "newpassword")
user2 = await user_controller.get_by_email("admin@test.com")
assert user1.id == user2.id # same user, not duplicated
assert verify_password("newpassword", user2.password_hash)
assert not verify_password("oldpassword", user2.password_hash)
@pytest.mark.asyncio
async def test_create_admin_idempotent(setup_database):
await _provision_admin("admin@test.com", "samepassword")
await _provision_admin("admin@test.com", "samepassword")
# Should only have one user
users = await user_controller.list_all()
admin_users = [u for u in users if u.email == "admin@test.com"]
assert len(admin_users) == 1
@pytest.mark.asyncio
async def test_create_or_update_with_password_hash(setup_database):
"""Test the extended create_or_update method with password_hash parameter."""
pw_hash = hash_password("test123")
user = await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid="local:test@example.com",
email="test@example.com",
password_hash=pw_hash,
)
assert user.password_hash == pw_hash
fetched = await user_controller.get_by_email("test@example.com")
assert fetched is not None
assert verify_password("test123", fetched.password_hash)
@pytest.mark.asyncio
async def test_set_password_hash(setup_database):
"""Test the set_password_hash method."""
user = await user_controller.create_or_update(
id=generate_uuid4(),
authentik_uid="local:pw@test.com",
email="pw@test.com",
)
assert user.password_hash is None
pw_hash = hash_password("newpass")
await user_controller.set_password_hash(user.id, pw_hash)
updated = await user_controller.get_by_email("pw@test.com")
assert updated is not None
assert verify_password("newpass", updated.password_hash)

View File

@@ -1,959 +0,0 @@
"""Tests for DAG progress models and transform function.
Tests the extract_dag_tasks function that converts Hatchet V1WorkflowRunDetails
into structured DagTask list for frontend consumption.
"""
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from reflector.hatchet.constants import TaskName
from reflector.hatchet.dag_progress import (
DagStatusData,
DagTask,
DagTaskStatus,
extract_dag_tasks,
)
def _make_shape_item(
step_id: str,
task_name: str,
children_step_ids: list[str] | None = None,
) -> MagicMock:
"""Create a mock WorkflowRunShapeItemForWorkflowRunDetails."""
item = MagicMock()
item.step_id = step_id
item.task_name = task_name
item.children_step_ids = children_step_ids or []
return item
def _make_task_summary(
step_id: str,
status: str = "QUEUED",
started_at: datetime | None = None,
finished_at: datetime | None = None,
duration: int | None = None,
error_message: str | None = None,
task_external_id: str | None = None,
num_spawned_children: int | None = None,
children: list | None = None,
) -> MagicMock:
"""Create a mock V1TaskSummary."""
from hatchet_sdk.clients.rest.models import V1TaskStatus
task = MagicMock()
task.step_id = step_id
task.status = V1TaskStatus(status)
task.started_at = started_at
task.finished_at = finished_at
task.duration = duration
task.error_message = error_message
task.task_external_id = task_external_id or f"ext-{step_id}"
task.num_spawned_children = num_spawned_children
task.children = children or []
return task
def _make_details(
shape: list,
tasks: list,
run_id: str = "test-run-id",
) -> MagicMock:
"""Create a mock V1WorkflowRunDetails."""
details = MagicMock()
details.shape = shape
details.tasks = tasks
details.task_events = []
details.run = MagicMock()
details.run.metadata = MagicMock()
details.run.metadata.id = run_id
return details
class TestExtractDagTasksBasic:
"""Test basic extraction of DAG tasks from workflow run details."""
def test_empty_shape_returns_empty_list(self):
details = _make_details(shape=[], tasks=[])
result = extract_dag_tasks(details)
assert result == []
def test_single_task_queued(self):
shape = [_make_shape_item("s1", "get_recording")]
tasks = [_make_task_summary("s1", status="QUEUED")]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert len(result) == 1
assert result[0].name == "get_recording"
assert result[0].status == DagTaskStatus.QUEUED
assert result[0].parents == []
assert result[0].started_at is None
assert result[0].finished_at is None
assert result[0].duration_seconds is None
assert result[0].error is None
assert result[0].children_total is None
assert result[0].children_completed is None
assert result[0].progress_pct is None
def test_completed_task_with_duration(self):
now = datetime.now(timezone.utc)
shape = [_make_shape_item("s1", "get_recording")]
tasks = [
_make_task_summary(
"s1",
status="COMPLETED",
started_at=now,
finished_at=now,
duration=1500, # milliseconds
)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].status == DagTaskStatus.COMPLETED
assert result[0].duration_seconds == 1.5
assert result[0].started_at == now
assert result[0].finished_at == now
def test_failed_task_with_error(self):
shape = [_make_shape_item("s1", "get_recording")]
tasks = [
_make_task_summary(
"s1",
status="FAILED",
error_message="Traceback (most recent call last):\n File something\nConnectionError: connection refused",
)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].status == DagTaskStatus.FAILED
assert result[0].error == "ConnectionError: connection refused"
def test_running_task(self):
now = datetime.now(timezone.utc)
shape = [_make_shape_item("s1", "mixdown_tracks")]
tasks = [
_make_task_summary(
"s1",
status="RUNNING",
started_at=now,
duration=5000,
)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].status == DagTaskStatus.RUNNING
assert result[0].started_at == now
assert result[0].duration_seconds == 5.0
def test_cancelled_task(self):
shape = [_make_shape_item("s1", "post_zulip")]
tasks = [_make_task_summary("s1", status="CANCELLED")]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].status == DagTaskStatus.CANCELLED
class TestExtractDagTasksTopology:
"""Test topological ordering and parent extraction."""
def test_linear_chain_parents(self):
"""A -> B -> C should produce correct parents."""
shape = [
_make_shape_item("s1", "get_recording", children_step_ids=["s2"]),
_make_shape_item("s2", "get_participants", children_step_ids=["s3"]),
_make_shape_item("s3", "process_tracks"),
]
tasks = [
_make_task_summary("s1", status="COMPLETED"),
_make_task_summary("s2", status="COMPLETED"),
_make_task_summary("s3", status="QUEUED"),
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert [t.name for t in result] == [
"get_recording",
"get_participants",
"process_tracks",
]
assert result[0].parents == []
assert result[1].parents == ["get_recording"]
assert result[2].parents == ["get_participants"]
def test_diamond_dag(self):
"""
A -> B, A -> C, B -> D, C -> D
D should have parents [B, C] (or [C, B] depending on sort).
"""
shape = [
_make_shape_item("s1", "get_recording", children_step_ids=["s2", "s3"]),
_make_shape_item("s2", "mixdown_tracks", children_step_ids=["s4"]),
_make_shape_item("s3", "detect_topics", children_step_ids=["s4"]),
_make_shape_item("s4", "finalize"),
]
tasks = [
_make_task_summary("s1", status="COMPLETED"),
_make_task_summary("s2", status="RUNNING"),
_make_task_summary("s3", status="RUNNING"),
_make_task_summary("s4", status="QUEUED"),
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
# Topological: s1 first, s2/s3 in some order, s4 last
assert result[0].name == "get_recording"
assert result[-1].name == "finalize"
finalize = result[-1]
assert set(finalize.parents) == {"mixdown_tracks", "detect_topics"}
def test_topological_order_is_stable(self):
"""Verify deterministic ordering (sorted queue in Kahn's)."""
shape = [
_make_shape_item("s_c", "task_c"),
_make_shape_item("s_a", "task_a", children_step_ids=["s_c"]),
_make_shape_item("s_b", "task_b", children_step_ids=["s_c"]),
]
tasks = [
_make_task_summary("s_c", status="QUEUED"),
_make_task_summary("s_a", status="COMPLETED"),
_make_task_summary("s_b", status="COMPLETED"),
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
# s_a and s_b both roots with in-degree 0; sorted alphabetically by step_id
names = [t.name for t in result]
assert names[-1] == "task_c"
# First two should be task_a, task_b (sorted by step_id: s_a < s_b)
assert names[0] == "task_a"
assert names[1] == "task_b"
def test_production_dag_shape(self):
"""Test the real 15-task pipeline topology with mixed statuses.
Simulates a mid-pipeline state where early tasks completed,
middle tasks running, and later tasks still queued.
"""
# Production DAG edges (parent -> children):
# get_recording -> get_participants
# get_participants -> process_tracks
# process_tracks -> mixdown_tracks, detect_topics, finalize
# mixdown_tracks -> generate_waveform
# detect_topics -> generate_title, extract_subjects
# extract_subjects -> process_subjects, identify_action_items
# process_subjects -> generate_recap
# generate_title -> finalize
# generate_recap -> finalize
# identify_action_items -> finalize
# finalize -> cleanup_consent
# cleanup_consent -> post_zulip, send_webhook
shape = [
_make_shape_item(
"s_get_recording", TaskName.GET_RECORDING, ["s_get_participants"]
),
_make_shape_item(
"s_get_participants", TaskName.GET_PARTICIPANTS, ["s_process_tracks"]
),
_make_shape_item(
"s_process_tracks",
TaskName.PROCESS_TRACKS,
["s_mixdown_tracks", "s_detect_topics", "s_finalize"],
),
_make_shape_item(
"s_mixdown_tracks", TaskName.MIXDOWN_TRACKS, ["s_generate_waveform"]
),
_make_shape_item("s_generate_waveform", TaskName.GENERATE_WAVEFORM),
_make_shape_item(
"s_detect_topics",
TaskName.DETECT_TOPICS,
["s_generate_title", "s_extract_subjects"],
),
_make_shape_item(
"s_generate_title", TaskName.GENERATE_TITLE, ["s_finalize"]
),
_make_shape_item(
"s_extract_subjects",
TaskName.EXTRACT_SUBJECTS,
["s_process_subjects", "s_identify_action_items"],
),
_make_shape_item(
"s_process_subjects", TaskName.PROCESS_SUBJECTS, ["s_generate_recap"]
),
_make_shape_item(
"s_generate_recap", TaskName.GENERATE_RECAP, ["s_finalize"]
),
_make_shape_item(
"s_identify_action_items",
TaskName.IDENTIFY_ACTION_ITEMS,
["s_finalize"],
),
_make_shape_item("s_finalize", TaskName.FINALIZE, ["s_cleanup_consent"]),
_make_shape_item(
"s_cleanup_consent",
TaskName.CLEANUP_CONSENT,
["s_post_zulip", "s_send_webhook"],
),
_make_shape_item("s_post_zulip", TaskName.POST_ZULIP),
_make_shape_item("s_send_webhook", TaskName.SEND_WEBHOOK),
]
# Mid-pipeline: early tasks done, middle running, later queued
tasks = [
_make_task_summary("s_get_recording", status="COMPLETED"),
_make_task_summary("s_get_participants", status="COMPLETED"),
_make_task_summary("s_process_tracks", status="COMPLETED"),
_make_task_summary("s_mixdown_tracks", status="RUNNING"),
_make_task_summary("s_generate_waveform", status="QUEUED"),
_make_task_summary("s_detect_topics", status="RUNNING"),
_make_task_summary("s_generate_title", status="QUEUED"),
_make_task_summary("s_extract_subjects", status="QUEUED"),
_make_task_summary("s_process_subjects", status="QUEUED"),
_make_task_summary("s_generate_recap", status="QUEUED"),
_make_task_summary("s_identify_action_items", status="QUEUED"),
_make_task_summary("s_finalize", status="QUEUED"),
_make_task_summary("s_cleanup_consent", status="QUEUED"),
_make_task_summary("s_post_zulip", status="QUEUED"),
_make_task_summary("s_send_webhook", status="QUEUED"),
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
# All 15 tasks present
assert len(result) == 15
result_names = [t.name for t in result]
assert set(result_names) == {
TaskName.GET_RECORDING,
TaskName.GET_PARTICIPANTS,
TaskName.PROCESS_TRACKS,
TaskName.MIXDOWN_TRACKS,
TaskName.GENERATE_WAVEFORM,
TaskName.DETECT_TOPICS,
TaskName.GENERATE_TITLE,
TaskName.EXTRACT_SUBJECTS,
TaskName.PROCESS_SUBJECTS,
TaskName.GENERATE_RECAP,
TaskName.IDENTIFY_ACTION_ITEMS,
TaskName.FINALIZE,
TaskName.CLEANUP_CONSENT,
TaskName.POST_ZULIP,
TaskName.SEND_WEBHOOK,
}
# Topological order invariant: no task appears before its parents
name_to_index = {t.name: i for i, t in enumerate(result)}
for task in result:
for parent_name in task.parents:
assert name_to_index[parent_name] < name_to_index[task.name], (
f"Parent {parent_name} (idx {name_to_index[parent_name]}) "
f"must appear before {task.name} (idx {name_to_index[task.name]})"
)
# finalize has exactly 4 parents
finalize = next(t for t in result if t.name == TaskName.FINALIZE)
assert set(finalize.parents) == {
TaskName.PROCESS_TRACKS,
TaskName.GENERATE_TITLE,
TaskName.GENERATE_RECAP,
TaskName.IDENTIFY_ACTION_ITEMS,
}
# cleanup_consent has 1 parent (finalize)
cleanup = next(t for t in result if t.name == TaskName.CLEANUP_CONSENT)
assert cleanup.parents == [TaskName.FINALIZE]
# post_zulip and send_webhook both have cleanup_consent as parent
post_zulip = next(t for t in result if t.name == TaskName.POST_ZULIP)
send_webhook = next(t for t in result if t.name == TaskName.SEND_WEBHOOK)
assert post_zulip.parents == [TaskName.CLEANUP_CONSENT]
assert send_webhook.parents == [TaskName.CLEANUP_CONSENT]
# Verify statuses propagated correctly
assert (
next(t for t in result if t.name == TaskName.GET_RECORDING).status
== DagTaskStatus.COMPLETED
)
assert (
next(t for t in result if t.name == TaskName.MIXDOWN_TRACKS).status
== DagTaskStatus.RUNNING
)
assert (
next(t for t in result if t.name == TaskName.FINALIZE).status
== DagTaskStatus.QUEUED
)
def test_topological_sort_invariant_complex_dag(self):
"""For a complex DAG, every task's parents appear earlier in the list.
Uses a wider branching/merging DAG than diamond to stress the invariant.
"""
# DAG: A -> B, A -> C, A -> D, B -> E, C -> E, C -> F, D -> F, E -> G, F -> G
shape = [
_make_shape_item("s_a", "task_a", ["s_b", "s_c", "s_d"]),
_make_shape_item("s_b", "task_b", ["s_e"]),
_make_shape_item("s_c", "task_c", ["s_e", "s_f"]),
_make_shape_item("s_d", "task_d", ["s_f"]),
_make_shape_item("s_e", "task_e", ["s_g"]),
_make_shape_item("s_f", "task_f", ["s_g"]),
_make_shape_item("s_g", "task_g"),
]
tasks = [
_make_task_summary("s_a", status="COMPLETED"),
_make_task_summary("s_b", status="COMPLETED"),
_make_task_summary("s_c", status="RUNNING"),
_make_task_summary("s_d", status="COMPLETED"),
_make_task_summary("s_e", status="QUEUED"),
_make_task_summary("s_f", status="QUEUED"),
_make_task_summary("s_g", status="QUEUED"),
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert len(result) == 7
name_to_index = {t.name: i for i, t in enumerate(result)}
# Verify invariant: every parent appears before its child
for task in result:
for parent_name in task.parents:
assert name_to_index[parent_name] < name_to_index[task.name], (
f"Parent {parent_name} (idx {name_to_index[parent_name]}) "
f"must appear before {task.name} (idx {name_to_index[task.name]})"
)
# task_g has 2 parents
task_g = next(t for t in result if t.name == "task_g")
assert set(task_g.parents) == {"task_e", "task_f"}
# task_e has 2 parents
task_e = next(t for t in result if t.name == "task_e")
assert set(task_e.parents) == {"task_b", "task_c"}
# task_a is root (first in topological order)
assert result[0].name == "task_a"
assert result[0].parents == []
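The stable-ordering tests above lean on Kahn's algorithm with a sorted ready queue, which is what makes ties between roots resolve alphabetically by step_id. A minimal sketch of that ordering over the mocked shape items (the helper name is illustrative, not the function under test):

from collections import defaultdict


def topological_step_order(shape) -> list[str]:
    """Deterministic Kahn's sort over step_ids, smallest ready id first."""
    children = {item.step_id: list(item.children_step_ids or []) for item in shape}
    in_degree = defaultdict(int)
    for step_id in children:
        in_degree.setdefault(step_id, 0)
    for kids in children.values():
        for child in kids:
            in_degree[child] += 1
    ready = sorted(step_id for step_id, deg in in_degree.items() if deg == 0)
    order: list[str] = []
    while ready:
        step_id = ready.pop(0)  # smallest ready step_id first => stable order
        order.append(step_id)
        for child in children.get(step_id, []):
            in_degree[child] -= 1
            if in_degree[child] == 0:
                ready.append(child)
                ready.sort()
    return order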
class TestExtractDagTasksFanOut:
"""Test fan-out tasks with spawned children."""
def test_fan_out_children_counts(self):
from hatchet_sdk.clients.rest.models import V1TaskStatus
child_mocks = []
for status in ["COMPLETED", "COMPLETED", "RUNNING", "QUEUED"]:
child = MagicMock()
child.status = V1TaskStatus(status)
child_mocks.append(child)
shape = [_make_shape_item("s1", "process_tracks")]
tasks = [
_make_task_summary(
"s1",
status="RUNNING",
num_spawned_children=4,
children=child_mocks,
)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].children_total == 4
assert result[0].children_completed == 2
def test_no_children_when_no_spawn(self):
shape = [_make_shape_item("s1", "get_recording")]
tasks = [
_make_task_summary("s1", status="COMPLETED", num_spawned_children=None)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].children_total is None
assert result[0].children_completed is None
def test_zero_spawned_children(self):
shape = [_make_shape_item("s1", "process_tracks")]
tasks = [_make_task_summary("s1", status="COMPLETED", num_spawned_children=0)]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].children_total is None
assert result[0].children_completed is None
class TestExtractDagTasksErrorExtraction:
"""Test error message extraction logic."""
def test_simple_error(self):
shape = [_make_shape_item("s1", "mixdown_tracks")]
tasks = [
_make_task_summary(
"s1", status="FAILED", error_message="ValueError: no tracks"
)
]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].error == "ValueError: no tracks"
def test_traceback_extracts_meaningful_line(self):
error = (
"Traceback (most recent call last):\n"
' File "/app/something.py", line 42\n'
"RuntimeError: out of memory"
)
shape = [_make_shape_item("s1", "mixdown_tracks")]
tasks = [_make_task_summary("s1", status="FAILED", error_message=error)]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].error == "RuntimeError: out of memory"
def test_no_error_when_none(self):
shape = [_make_shape_item("s1", "get_recording")]
tasks = [_make_task_summary("s1", status="COMPLETED", error_message=None)]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].error is None
def test_empty_error_message(self):
shape = [_make_shape_item("s1", "get_recording")]
tasks = [_make_task_summary("s1", status="FAILED", error_message="")]
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert result[0].error is None
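Taken together, the error cases above describe a simple trimming rule: keep only the last non-empty line of error_message and treat an empty string as no error at all. A sketch of that rule (the function name is assumed, not the real extractor):

def _last_meaningful_error_line(error_message: str | None) -> str | None:
    """Reduce a multi-line traceback to its final exception line."""
    if not error_message:
        return None
    lines = [line.strip() for line in error_message.splitlines() if line.strip()]
    return lines[-1] if lines else None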
class TestExtractDagTasksMissingData:
"""Test edge cases with missing task data."""
def test_shape_without_matching_task(self):
"""Shape has a step but tasks list doesn't contain it."""
shape = [_make_shape_item("s1", "get_recording")]
tasks = [] # No matching task
details = _make_details(shape, tasks)
result = extract_dag_tasks(details)
assert len(result) == 1
assert result[0].name == "get_recording"
assert result[0].status == DagTaskStatus.QUEUED # default when no task data
assert result[0].started_at is None
def test_none_shape_returns_empty(self):
details = _make_details(shape=[], tasks=[])
details.shape = None
result = extract_dag_tasks(details)
assert result == []
class TestDagStatusData:
"""Test DagStatusData model serialization."""
def test_serialization(self):
task = DagTask(
name="get_recording",
status=DagTaskStatus.COMPLETED,
started_at=datetime(2025, 1, 1, tzinfo=timezone.utc),
finished_at=datetime(2025, 1, 1, 0, 0, 1, tzinfo=timezone.utc),
duration_seconds=1.0,
parents=[],
error=None,
children_total=None,
children_completed=None,
progress_pct=None,
)
data = DagStatusData(workflow_run_id="test-123", tasks=[task])
dumped = data.model_dump(mode="json")
assert dumped["workflow_run_id"] == "test-123"
assert len(dumped["tasks"]) == 1
assert dumped["tasks"][0]["name"] == "get_recording"
assert dumped["tasks"][0]["status"] == "completed"
assert dumped["tasks"][0]["duration_seconds"] == 1.0
class AsyncContextManager:
"""No-op async context manager for mocking fresh_db_connection."""
async def __aenter__(self):
return None
async def __aexit__(self, *args):
return None
class TestBroadcastDagStatus:
"""Test broadcast_dag_status function.
broadcast_dag_status uses deferred imports inside its function body.
We mock the source modules/objects before calling the function.
Importing daily_multitrack_pipeline triggers a cascade
(subject_processing -> HatchetClientManager.get_client at module level),
so we set _instance before the import to prevent real SDK init.
"""
@pytest.fixture(autouse=True)
def _setup_hatchet_mock(self):
"""Set HatchetClientManager._instance to a mock to prevent real SDK init.
Module-level code in workflow files calls get_client() during import.
Setting _instance before import avoids ClientConfig validation.
"""
from reflector.hatchet.client import HatchetClientManager
original = HatchetClientManager._instance
HatchetClientManager._instance = MagicMock()
yield
HatchetClientManager._instance = original
@pytest.mark.asyncio
async def test_broadcasts_dag_status(self):
"""broadcast_dag_status fetches run, transforms, and broadcasts."""
mock_transcript = MagicMock()
mock_transcript.id = "t-123"
mock_details = _make_details(
shape=[_make_shape_item("s1", "get_recording")],
tasks=[_make_task_summary("s1", status="COMPLETED")],
run_id="wf-abc",
)
mock_client = MagicMock()
mock_client.runs.aio_get = AsyncMock(return_value=mock_details)
with (
patch(
"reflector.hatchet.client.HatchetClientManager.get_client",
return_value=mock_client,
),
patch(
"reflector.hatchet.broadcast.append_event_and_broadcast",
new_callable=AsyncMock,
) as mock_broadcast,
patch(
"reflector.db.transcripts.transcripts_controller.get_by_id",
new_callable=AsyncMock,
return_value=mock_transcript,
),
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.fresh_db_connection",
return_value=AsyncContextManager(),
),
):
from reflector.hatchet.dag_progress import broadcast_dag_status
await broadcast_dag_status("t-123", "wf-abc")
mock_client.runs.aio_get.assert_called_once_with("wf-abc")
mock_broadcast.assert_called_once()
call_args = mock_broadcast.call_args
assert call_args[0][0] == "t-123" # transcript_id
assert call_args[0][1] is mock_transcript # transcript
assert call_args[0][2] == "DAG_STATUS" # event_name
data = call_args[0][3]
assert isinstance(data, DagStatusData)
assert data.workflow_run_id == "wf-abc"
assert len(data.tasks) == 1
@pytest.mark.asyncio
async def test_swallows_exceptions(self):
"""broadcast_dag_status never raises even when internals fail."""
from reflector.hatchet.dag_progress import broadcast_dag_status
with patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.fresh_db_connection",
side_effect=RuntimeError("db exploded"),
):
# Should not raise
await broadcast_dag_status("t-123", "wf-abc")
@pytest.mark.asyncio
async def test_no_broadcast_when_transcript_not_found(self):
"""broadcast_dag_status does not broadcast if transcript is None."""
mock_details = _make_details(
shape=[_make_shape_item("s1", "get_recording")],
tasks=[_make_task_summary("s1", status="COMPLETED")],
)
mock_client = MagicMock()
mock_client.runs.aio_get = AsyncMock(return_value=mock_details)
with (
patch(
"reflector.hatchet.client.HatchetClientManager.get_client",
return_value=mock_client,
),
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.fresh_db_connection",
return_value=AsyncContextManager(),
),
patch(
"reflector.db.transcripts.transcripts_controller.get_by_id",
new_callable=AsyncMock,
return_value=None,
),
patch(
"reflector.hatchet.broadcast.append_event_and_broadcast",
new_callable=AsyncMock,
) as mock_broadcast,
):
from reflector.hatchet.dag_progress import broadcast_dag_status
await broadcast_dag_status("t-123", "wf-abc")
mock_broadcast.assert_not_called()
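Read together, the TestBroadcastDagStatus cases pin down a control flow along these lines; this reconstruction is inferred from the assertions and patch targets, not copied from reflector.hatchet.dag_progress.

from reflector.hatchet.dag_progress import DagStatusData, extract_dag_tasks  # local names in the real module


async def broadcast_dag_status(transcript_id: str, workflow_run_id: str) -> None:
    """Fetch the Hatchet run, transform it, and broadcast DAG_STATUS; never raise."""
    try:
        # Deferred imports keep module import cheap and keep these patchable.
        from reflector.db.transcripts import transcripts_controller
        from reflector.hatchet.broadcast import append_event_and_broadcast
        from reflector.hatchet.client import HatchetClientManager
        from reflector.hatchet.workflows.daily_multitrack_pipeline import (
            fresh_db_connection,
        )

        client = HatchetClientManager.get_client()
        details = await client.runs.aio_get(workflow_run_id)
        tasks = extract_dag_tasks(details)
        async with fresh_db_connection():
            transcript = await transcripts_controller.get_by_id(transcript_id)
            if transcript is None:
                return  # nothing to broadcast against
            await append_event_and_broadcast(
                transcript_id,
                transcript,
                "DAG_STATUS",
                DagStatusData(workflow_run_id=workflow_run_id, tasks=tasks),
            )
    except Exception:
        pass  # a progress broadcast must never break the pipeline task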
class TestMakeAudioProgressLoggerWithBroadcast:
"""Test make_audio_progress_logger with transcript_id for transient broadcasts."""
@pytest.fixture(autouse=True)
def _setup_hatchet_mock(self):
"""Set HatchetClientManager._instance to prevent real SDK init on import."""
from reflector.hatchet.client import HatchetClientManager
original = HatchetClientManager._instance
if original is None:
HatchetClientManager._instance = MagicMock()
yield
HatchetClientManager._instance = original
def test_broadcasts_transient_progress_event(self):
"""When transcript_id provided and progress_pct not None, broadcasts event."""
import asyncio
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
make_audio_progress_logger,
)
ctx = MagicMock()
ctx.log = MagicMock()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
mock_broadcast = AsyncMock()
tasks_created = []
original_create_task = loop.create_task
def capture_create_task(coro):
task = original_create_task(coro)
tasks_created.append(task)
return task
try:
with (
patch(
"reflector.hatchet.broadcast.broadcast_event",
mock_broadcast,
),
patch.object(loop, "create_task", side_effect=capture_create_task),
):
callback = make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, interval=0.0, transcript_id="t-123"
)
callback(50.0, 100.0)
# Run pending tasks
if tasks_created:
loop.run_until_complete(asyncio.gather(*tasks_created))
mock_broadcast.assert_called_once()
event_arg = mock_broadcast.call_args[0][1]
assert event_arg.event == "DAG_TASK_PROGRESS"
assert event_arg.data["task_name"] == TaskName.MIXDOWN_TRACKS
assert event_arg.data["progress_pct"] == 50.0
finally:
loop.close()
def test_no_broadcast_without_transcript_id(self):
"""When transcript_id is None, no broadcast happens."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
make_audio_progress_logger,
)
ctx = MagicMock()
with patch(
"reflector.hatchet.broadcast.broadcast_event",
new_callable=AsyncMock,
) as mock_broadcast:
callback = make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, interval=0.0, transcript_id=None
)
callback(50.0, 100.0)
mock_broadcast.assert_not_called()
def test_no_broadcast_when_progress_pct_is_none(self):
"""When progress_pct is None, no broadcast happens even with transcript_id."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
make_audio_progress_logger,
)
ctx = MagicMock()
with patch(
"reflector.hatchet.broadcast.broadcast_event",
new_callable=AsyncMock,
) as mock_broadcast:
callback = make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, interval=0.0, transcript_id="t-123"
)
callback(None, 100.0)
mock_broadcast.assert_not_called()
def test_logging_throttled_by_interval(self):
"""With interval=5.0, rapid calls only log once until interval elapses.
The throttle applies to ctx.log() calls. Broadcasts (fire-and-forget)
are not throttled — they occur every call when transcript_id + progress_pct set.
"""
import asyncio
import time as time_mod
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
make_audio_progress_logger,
)
ctx = MagicMock()
ctx.log = MagicMock()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
mock_broadcast = AsyncMock()
tasks_created = []
original_create_task = loop.create_task
def capture_create_task(coro):
task = original_create_task(coro)
tasks_created.append(task)
return task
# Controlled monotonic values for the 4 calls from make_audio_progress_logger:
# init (start_time, last_log_time), call1 (now), call2 (now), call3 (now)
# After those, fall back to real time.monotonic() for asyncio internals.
controlled_values = [100.0, 100.0, 101.0, 106.0]
call_index = [0]
real_monotonic = time_mod.monotonic
def mock_monotonic():
if call_index[0] < len(controlled_values):
val = controlled_values[call_index[0]]
call_index[0] += 1
return val
return real_monotonic()
try:
with (
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.time.monotonic",
side_effect=mock_monotonic,
),
patch(
"reflector.hatchet.broadcast.broadcast_event",
mock_broadcast,
),
patch.object(loop, "create_task", side_effect=capture_create_task),
):
callback = make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, interval=5.0, transcript_id="t-123"
)
# Call 1 at t=100.0: 100.0 - 100.0 = 0.0 < 5.0 => no log
callback(25.0, 50.0)
assert ctx.log.call_count == 0
# Call 2 at t=101.0: 101.0 - 100.0 = 1.0 < 5.0 => no log
callback(50.0, 100.0)
assert ctx.log.call_count == 0
# Call 3 at t=106.0: 106.0 - 100.0 = 6.0 >= 5.0 => logs
callback(75.0, 150.0)
assert ctx.log.call_count == 1
# Run pending broadcast tasks
if tasks_created:
loop.run_until_complete(asyncio.gather(*tasks_created))
# Broadcasts happen on every call (not throttled) — 3 calls total
assert mock_broadcast.call_count == 3
finally:
loop.close()
def test_uses_broadcast_event_not_append_event_and_broadcast(self):
"""Progress events use broadcast_event (transient), not append_event_and_broadcast (persisted)."""
import asyncio
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
make_audio_progress_logger,
)
ctx = MagicMock()
ctx.log = MagicMock()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
mock_broadcast_event = AsyncMock()
mock_append = AsyncMock()
tasks_created = []
original_create_task = loop.create_task
def capture_create_task(coro):
task = original_create_task(coro)
tasks_created.append(task)
return task
try:
with (
patch(
"reflector.hatchet.broadcast.broadcast_event",
mock_broadcast_event,
),
patch(
"reflector.hatchet.broadcast.append_event_and_broadcast",
mock_append,
),
patch.object(loop, "create_task", side_effect=capture_create_task),
):
callback = make_audio_progress_logger(
ctx, TaskName.MIXDOWN_TRACKS, interval=0.0, transcript_id="t-123"
)
callback(50.0, 100.0)
if tasks_created:
loop.run_until_complete(asyncio.gather(*tasks_created))
# broadcast_event (transient) IS called
mock_broadcast_event.assert_called_once()
# append_event_and_broadcast (persisted) is NOT called
mock_append.assert_not_called()
finally:
loop.close()
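The progress-logger tests above fix three behaviors: ctx.log() is throttled by the interval (measured with time.monotonic through the pipeline module), a transient broadcast fires on every call when both transcript_id and progress_pct are set, and the event is named DAG_TASK_PROGRESS with task_name and progress_pct in its data. A rough sketch consistent with those assertions; the event object shape and the broadcast_event argument list are assumptions.

import asyncio
import time
from types import SimpleNamespace


def make_audio_progress_logger(ctx, task_name, interval=5.0, transcript_id=None):
    """Return a (progress_pct, total) callback that logs (throttled) and broadcasts."""
    last_log_time = time.monotonic()

    def callback(progress_pct, total):
        nonlocal last_log_time
        now = time.monotonic()
        if now - last_log_time >= interval:
            ctx.log(f"{task_name}: {progress_pct}% of {total}")
            last_log_time = now
        if transcript_id is not None and progress_pct is not None:
            from reflector.hatchet import broadcast  # deferred, patchable in tests

            event = SimpleNamespace(  # hypothetical event shape
                event="DAG_TASK_PROGRESS",
                data={"task_name": task_name, "progress_pct": progress_pct},
            )
            # Fire-and-forget: never block audio processing on the websocket path.
            loop = asyncio.get_event_loop()
            loop.create_task(broadcast.broadcast_event(transcript_id, event))

    return callback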

View File

@@ -1,181 +0,0 @@
"""Tests for with_error_handling decorator integration with broadcast_dag_status.
The decorator wraps each pipeline task and calls broadcast_dag_status on both
success and failure paths. These tests verify that integration rather than
testing broadcast_dag_status in isolation (which test_dag_progress.py covers).
"""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from reflector.hatchet.constants import TaskName
class TestWithErrorHandlingBroadcast:
"""Test with_error_handling decorator's integration with broadcast_dag_status."""
@pytest.fixture(autouse=True)
def _setup_hatchet_mock(self):
"""Set HatchetClientManager._instance to a mock to prevent real SDK init.
Module-level code in workflow files calls get_client() during import.
Setting _instance before import avoids ClientConfig validation.
"""
from reflector.hatchet.client import HatchetClientManager
original = HatchetClientManager._instance
HatchetClientManager._instance = MagicMock()
yield
HatchetClientManager._instance = original
def _make_input(self, transcript_id: str = "t-123") -> MagicMock:
"""Create a mock PipelineInput with transcript_id."""
inp = MagicMock()
inp.transcript_id = transcript_id
return inp
def _make_ctx(self, workflow_run_id: str = "wf-abc") -> MagicMock:
"""Create a mock Context with workflow_run_id."""
ctx = MagicMock()
ctx.workflow_run_id = workflow_run_id
return ctx
@pytest.mark.asyncio
async def test_calls_broadcast_on_success(self):
"""Decorator calls broadcast_dag_status once when task succeeds."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(return_value="ok")
wrapped = with_error_handling(TaskName.GET_RECORDING)(inner)
with patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
) as mock_broadcast:
result = await wrapped(self._make_input(), self._make_ctx())
assert result == "ok"
mock_broadcast.assert_called_once_with("t-123", "wf-abc")
@pytest.mark.asyncio
async def test_calls_broadcast_on_failure(self):
"""Decorator calls broadcast_dag_status once when task raises."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(side_effect=RuntimeError("boom"))
wrapped = with_error_handling(TaskName.GET_RECORDING)(inner)
with (
patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
) as mock_broadcast,
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.set_workflow_error_status",
new_callable=AsyncMock,
),
):
with pytest.raises(RuntimeError, match="boom"):
await wrapped(self._make_input(), self._make_ctx())
mock_broadcast.assert_called_once_with("t-123", "wf-abc")
@pytest.mark.asyncio
async def test_swallows_broadcast_exception_on_success(self):
"""Broadcast failure does not crash the task on the success path."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(return_value="ok")
wrapped = with_error_handling(TaskName.GET_RECORDING)(inner)
with patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
side_effect=RuntimeError("broadcast exploded"),
):
result = await wrapped(self._make_input(), self._make_ctx())
assert result == "ok"
@pytest.mark.asyncio
async def test_swallows_broadcast_exception_on_failure(self):
"""Original task exception propagates even when broadcast also fails."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(side_effect=ValueError("original error"))
wrapped = with_error_handling(TaskName.GET_RECORDING)(inner)
with (
patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
side_effect=RuntimeError("broadcast exploded"),
),
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.set_workflow_error_status",
new_callable=AsyncMock,
),
):
with pytest.raises(ValueError, match="original error"):
await wrapped(self._make_input(), self._make_ctx())
@pytest.mark.asyncio
async def test_calls_set_workflow_error_status_on_failure(self):
"""On task failure with set_error_status=True (default), calls set_workflow_error_status."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(side_effect=RuntimeError("boom"))
wrapped = with_error_handling(TaskName.GET_RECORDING)(inner)
with (
patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
),
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.set_workflow_error_status",
new_callable=AsyncMock,
) as mock_set_error,
):
with pytest.raises(RuntimeError, match="boom"):
await wrapped(self._make_input(), self._make_ctx())
mock_set_error.assert_called_once_with("t-123")
@pytest.mark.asyncio
async def test_no_set_workflow_error_status_when_disabled(self):
"""With set_error_status=False, set_workflow_error_status is NOT called on failure."""
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
with_error_handling,
)
inner = AsyncMock(side_effect=RuntimeError("boom"))
wrapped = with_error_handling(TaskName.GET_RECORDING, set_error_status=False)(
inner
)
with (
patch(
"reflector.hatchet.dag_progress.broadcast_dag_status",
new_callable=AsyncMock,
),
patch(
"reflector.hatchet.workflows.daily_multitrack_pipeline.set_workflow_error_status",
new_callable=AsyncMock,
) as mock_set_error,
):
with pytest.raises(RuntimeError, match="boom"):
await wrapped(self._make_input(), self._make_ctx())
mock_set_error.assert_not_called()
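Collectively these decorator tests specify the wrapper contract: broadcast DAG status on both the success and failure paths, swallow broadcast failures so they can never mask the task result, and only mark the workflow errored when set_error_status is left at its default. A hedged reconstruction follows; the module layout and names track the patch targets above, but the body is inferred rather than copied.

import functools


async def set_workflow_error_status(transcript_id: str) -> None:
    """Placeholder for the real status setter defined in the pipeline module."""


def with_error_handling(task_name, set_error_status=True):
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(inp, ctx):
            from reflector.hatchet import dag_progress  # deferred, patchable

            async def _broadcast():
                try:
                    await dag_progress.broadcast_dag_status(
                        inp.transcript_id, ctx.workflow_run_id
                    )
                except Exception:
                    pass  # broadcast failures never change the task outcome

            try:
                result = await fn(inp, ctx)
            except Exception:
                if set_error_status:
                    await set_workflow_error_status(inp.transcript_id)
                await _broadcast()
                raise
            await _broadcast()
            return result

        return wrapper

    return decorator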

View File

@@ -1,421 +0,0 @@
"""Tests for DAG status REST enrichment on search and transcript GET endpoints."""
from datetime import datetime, timezone
from types import SimpleNamespace
from unittest.mock import AsyncMock, patch
import pytest
import reflector.db.search as search_module
from reflector.db.search import SearchResult, _fetch_dag_statuses
from reflector.db.transcripts import TranscriptEvent
class TestFetchDagStatuses:
"""Test the _fetch_dag_statuses helper."""
@pytest.mark.asyncio
async def test_returns_empty_for_empty_ids(self):
result = await _fetch_dag_statuses([])
assert result == {}
@pytest.mark.asyncio
async def test_extracts_last_dag_status(self):
events = [
{"event": "STATUS", "data": {"value": "processing"}},
{
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r1",
"tasks": [{"name": "get_recording", "status": "completed"}],
},
},
{
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r1",
"tasks": [
{"name": "get_recording", "status": "completed"},
{"name": "process_tracks", "status": "running"},
],
},
},
]
mock_row = {"id": "t1", "events": events}
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=[mock_row])
result = await _fetch_dag_statuses(["t1"])
assert "t1" in result
assert len(result["t1"]) == 2 # Last DAG_STATUS had 2 tasks
@pytest.mark.asyncio
async def test_skips_transcripts_without_events(self):
mock_row = {"id": "t1", "events": None}
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=[mock_row])
result = await _fetch_dag_statuses(["t1"])
assert result == {}
@pytest.mark.asyncio
async def test_skips_transcripts_without_dag_status(self):
events = [
{"event": "STATUS", "data": {"value": "processing"}},
{"event": "DURATION", "data": {"duration": 1000}},
]
mock_row = {"id": "t1", "events": events}
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=[mock_row])
result = await _fetch_dag_statuses(["t1"])
assert result == {}
@pytest.mark.asyncio
async def test_handles_json_string_events(self):
"""Events stored as JSON string rather than already-parsed list."""
import json
events = [
{
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r1",
"tasks": [{"name": "transcribe", "status": "running"}],
},
},
]
mock_row = {"id": "t1", "events": json.dumps(events)}
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=[mock_row])
result = await _fetch_dag_statuses(["t1"])
assert "t1" in result
assert len(result["t1"]) == 1
assert result["t1"][0]["name"] == "transcribe"
@pytest.mark.asyncio
async def test_multiple_transcripts(self):
"""Handles multiple transcripts in one call."""
events_t1 = [
{
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r1",
"tasks": [{"name": "a", "status": "completed"}],
},
},
]
events_t2 = [
{
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r2",
"tasks": [{"name": "b", "status": "running"}],
},
},
]
mock_rows = [
{"id": "t1", "events": events_t1},
{"id": "t2", "events": events_t2},
]
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=mock_rows)
result = await _fetch_dag_statuses(["t1", "t2"])
assert "t1" in result
assert "t2" in result
assert result["t1"][0]["name"] == "a"
assert result["t2"][0]["name"] == "b"
@pytest.mark.asyncio
async def test_dag_status_without_tasks_key_skipped(self):
"""DAG_STATUS event with no tasks key in data should be skipped."""
events = [
{"event": "DAG_STATUS", "data": {"workflow_run_id": "r1"}},
]
mock_row = {"id": "t1", "events": events}
with patch("reflector.db.search.get_database") as mock_db:
mock_db.return_value.fetch_all = AsyncMock(return_value=[mock_row])
result = await _fetch_dag_statuses(["t1"])
assert result == {}
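These cases describe the parsing contract of _fetch_dag_statuses: empty input short-circuits, events may arrive either as a parsed list or as a JSON string, and only the most recent DAG_STATUS event that actually carries a tasks list counts. An approximate reconstruction (the SQL is illustrative; only the parsing rules are taken from the assertions):

import json

from reflector.db import get_database


async def _fetch_dag_statuses(transcript_ids: list[str]) -> dict[str, list[dict]]:
    """Map transcript id -> tasks from its latest DAG_STATUS event."""
    if not transcript_ids:
        return {}
    rows = await get_database().fetch_all(
        "SELECT id, events FROM transcript WHERE id = ANY(:ids)",  # illustrative query
        values={"ids": list(transcript_ids)},
    )
    statuses: dict[str, list[dict]] = {}
    for row in rows:
        events = row["events"]
        if not events:
            continue
        if isinstance(events, str):
            events = json.loads(events)
        for event in reversed(events):
            if event.get("event") != "DAG_STATUS":
                continue
            tasks = (event.get("data") or {}).get("tasks")
            if tasks:
                statuses[row["id"]] = tasks
            break  # only the most recent DAG_STATUS matters
    return statuses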
def _extract_dag_status_from_transcript(transcript):
"""Replicate the dag_status extraction logic from transcript_get view.
This mirrors the code in reflector/views/transcripts.py lines 495-500:
dag_status = None
if transcript.status == "processing" and transcript.events:
for ev in reversed(transcript.events):
if ev.event == "DAG_STATUS":
dag_status = ev.data.get("tasks") if isinstance(ev.data, dict) else None
break
"""
dag_status = None
if transcript.status == "processing" and transcript.events:
for ev in reversed(transcript.events):
if ev.event == "DAG_STATUS":
dag_status = ev.data.get("tasks") if isinstance(ev.data, dict) else None
break
return dag_status
class TestTranscriptGetDagStatusExtraction:
"""Test dag_status extraction logic from transcript_get endpoint.
The actual endpoint is complex to set up, so we test the extraction
logic directly using the same code pattern from the view.
"""
def test_processing_transcript_with_dag_status_events(self):
"""Processing transcript with DAG_STATUS events returns tasks from last event."""
transcript = SimpleNamespace(
status="processing",
events=[
TranscriptEvent(event="STATUS", data={"value": "processing"}),
TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "r1",
"tasks": [{"name": "get_recording", "status": "completed"}],
},
),
TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "r1",
"tasks": [
{"name": "get_recording", "status": "completed"},
{"name": "transcribe", "status": "running"},
],
},
),
],
)
result = _extract_dag_status_from_transcript(transcript)
assert result is not None
assert len(result) == 2
assert result[0]["name"] == "get_recording"
assert result[1]["name"] == "transcribe"
assert result[1]["status"] == "running"
def test_processing_transcript_without_dag_status_events(self):
"""Processing transcript with only non-DAG_STATUS events returns None."""
transcript = SimpleNamespace(
status="processing",
events=[
TranscriptEvent(event="STATUS", data={"value": "processing"}),
TranscriptEvent(event="DURATION", data={"duration": 1000}),
],
)
result = _extract_dag_status_from_transcript(transcript)
assert result is None
def test_ended_transcript_with_dag_status_events(self):
"""Ended transcript with DAG_STATUS events returns None (status check)."""
transcript = SimpleNamespace(
status="ended",
events=[
TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "r1",
"tasks": [{"name": "transcribe", "status": "completed"}],
},
),
],
)
result = _extract_dag_status_from_transcript(transcript)
assert result is None
def test_processing_transcript_with_empty_events(self):
"""Processing transcript with empty events list returns None."""
transcript = SimpleNamespace(
status="processing",
events=[],
)
result = _extract_dag_status_from_transcript(transcript)
assert result is None
def test_processing_transcript_with_none_events(self):
"""Processing transcript with None events returns None."""
transcript = SimpleNamespace(
status="processing",
events=None,
)
result = _extract_dag_status_from_transcript(transcript)
assert result is None
def test_extracts_last_dag_status_not_first(self):
"""Should pick the last DAG_STATUS event (most recent), not the first."""
transcript = SimpleNamespace(
status="processing",
events=[
TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "r1",
"tasks": [{"name": "a", "status": "running"}],
},
),
TranscriptEvent(event="STATUS", data={"value": "processing"}),
TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "r1",
"tasks": [
{"name": "a", "status": "completed"},
{"name": "b", "status": "running"},
],
},
),
],
)
result = _extract_dag_status_from_transcript(transcript)
assert len(result) == 2
assert result[0]["status"] == "completed"
assert result[1]["name"] == "b"
class TestSearchEnrichmentIntegration:
"""Test DAG status enrichment in search results.
The search function enriches processing transcripts with dag_status
by calling _fetch_dag_statuses for processing IDs and assigning results.
We test this enrichment logic by mocking _fetch_dag_statuses.
"""
def _make_search_result(self, id: str, status: str) -> SearchResult:
"""Create a minimal SearchResult for testing."""
return SearchResult(
id=id,
title=f"Transcript {id}",
user_id="u1",
room_id=None,
room_name=None,
source_kind="live",
created_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
status=status,
rank=1.0,
duration=60.0,
search_snippets=[],
total_match_count=0,
dag_status=None,
)
@pytest.mark.asyncio
async def test_processing_result_gets_dag_status(self):
"""SearchResult with status='processing' and matching DAG_STATUS events
gets dag_status populated."""
results = [self._make_search_result("t1", "processing")]
dag_tasks = [
{"name": "get_recording", "status": "completed"},
{"name": "transcribe", "status": "running"},
]
with patch.object(
search_module,
"_fetch_dag_statuses",
new_callable=AsyncMock,
return_value={"t1": dag_tasks},
) as mock_fetch:
# Replicate the enrichment logic from SearchController.search_transcripts
processing_ids = [r.id for r in results if r.status == "processing"]
if processing_ids:
dag_statuses = await search_module._fetch_dag_statuses(processing_ids)
for r in results:
if r.id in dag_statuses:
r.dag_status = dag_statuses[r.id]
mock_fetch.assert_called_once_with(["t1"])
assert results[0].dag_status == dag_tasks
@pytest.mark.asyncio
async def test_ended_result_does_not_trigger_fetch(self):
"""SearchResult with status='ended' does NOT trigger _fetch_dag_statuses."""
results = [self._make_search_result("t1", "ended")]
with patch.object(
search_module,
"_fetch_dag_statuses",
new_callable=AsyncMock,
return_value={},
) as mock_fetch:
processing_ids = [r.id for r in results if r.status == "processing"]
if processing_ids:
dag_statuses = await search_module._fetch_dag_statuses(processing_ids)
for r in results:
if r.id in dag_statuses:
r.dag_status = dag_statuses[r.id]
mock_fetch.assert_not_called()
assert results[0].dag_status is None
@pytest.mark.asyncio
async def test_mixed_processing_and_ended_results(self):
"""Only processing results get enriched; ended results stay None."""
results = [
self._make_search_result("t1", "processing"),
self._make_search_result("t2", "ended"),
self._make_search_result("t3", "processing"),
]
dag_tasks_t1 = [{"name": "transcribe", "status": "running"}]
dag_tasks_t3 = [{"name": "diarize", "status": "completed"}]
with patch.object(
search_module,
"_fetch_dag_statuses",
new_callable=AsyncMock,
return_value={"t1": dag_tasks_t1, "t3": dag_tasks_t3},
) as mock_fetch:
processing_ids = [r.id for r in results if r.status == "processing"]
if processing_ids:
dag_statuses = await search_module._fetch_dag_statuses(processing_ids)
for r in results:
if r.id in dag_statuses:
r.dag_status = dag_statuses[r.id]
mock_fetch.assert_called_once_with(["t1", "t3"])
assert results[0].dag_status == dag_tasks_t1
assert results[1].dag_status is None
assert results[2].dag_status == dag_tasks_t3
@pytest.mark.asyncio
async def test_processing_result_without_dag_events_stays_none(self):
"""Processing result with no DAG_STATUS events in DB stays dag_status=None."""
results = [self._make_search_result("t1", "processing")]
with patch.object(
search_module,
"_fetch_dag_statuses",
new_callable=AsyncMock,
return_value={},
) as mock_fetch:
processing_ids = [r.id for r in results if r.status == "processing"]
if processing_ids:
dag_statuses = await search_module._fetch_dag_statuses(processing_ids)
for r in results:
if r.id in dag_statuses:
r.dag_status = dag_statuses[r.id]
mock_fetch.assert_called_once_with(["t1"])
assert results[0].dag_status is None

View File

@@ -255,7 +255,7 @@ async def test_validation_locked_transcript():
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_validation_idle_transcript():
"""Test that validation rejects idle transcripts (not ready)."""
"""Test that validation rejects idle transcripts without recording (file upload not ready)."""
from reflector.services.transcript_process import (
ValidationNotReady,
validate_transcript_for_processing,
@@ -274,6 +274,34 @@ async def test_validation_idle_transcript():
assert "not ready" in result.detail.lower()
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_validation_idle_transcript_with_recording_allowed():
"""Test that validation allows idle transcripts with recording_id (multitrack ready/retry)."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="idle",
source_kind="room",
recording_id="test-recording-id",
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationOk)
assert result.recording_id == "test-recording-id"
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_prepare_multitrack_config():

View File

@@ -0,0 +1,58 @@
"""Tests for password hashing utilities."""
from reflector.auth.password_utils import hash_password, verify_password
def test_hash_and_verify():
pw = "my-secret-password"
h = hash_password(pw)
assert verify_password(pw, h) is True
def test_wrong_password():
h = hash_password("correct")
assert verify_password("wrong", h) is False
def test_hash_format():
h = hash_password("test")
parts = h.split("$")
assert len(parts) == 3
assert parts[0] == "pbkdf2:sha256:100000"
assert len(parts[1]) == 32 # 16 bytes hex = 32 chars
assert len(parts[2]) == 64 # sha256 hex = 64 chars
def test_different_salts():
h1 = hash_password("same")
h2 = hash_password("same")
assert h1 != h2 # different salts produce different hashes
assert verify_password("same", h1) is True
assert verify_password("same", h2) is True
def test_malformed_hash():
assert verify_password("test", "garbage") is False
assert verify_password("test", "") is False
assert verify_password("test", "pbkdf2:sha256:100000$short") is False
def test_empty_password():
h = hash_password("")
assert verify_password("", h) is True
assert verify_password("notempty", h) is False
def test_unicode_password():
pw = "p\u00e4ssw\u00f6rd\U0001f512"
h = hash_password(pw)
assert verify_password(pw, h) is True
assert verify_password("password", h) is False
def test_constant_time_comparison():
"""Verify that hmac.compare_digest is used (structural test)."""
import inspect
source = inspect.getsource(verify_password)
assert "hmac.compare_digest" in source

View File

@@ -319,3 +319,51 @@ def test_aws_storage_constructor_rejects_mixed_auth():
aws_secret_access_key="test-secret",
aws_role_arn="arn:aws:iam::123456789012:role/test-role",
)
@pytest.mark.asyncio
async def test_aws_storage_custom_endpoint_url():
"""Test that custom endpoint_url configures path-style addressing and passes endpoint to client."""
storage = AwsStorage(
aws_bucket_name="reflector-media",
aws_region="garage",
aws_access_key_id="GKtest",
aws_secret_access_key="secret",
aws_endpoint_url="http://garage:3900",
)
assert storage._endpoint_url == "http://garage:3900"
assert storage.boto_config.s3["addressing_style"] == "path"
assert storage.base_url == "http://garage:3900/reflector-media/"
# retries config preserved (merge, not replace)
assert storage.boto_config.retries["max_attempts"] == 3
mock_client = AsyncMock()
mock_client.put_object = AsyncMock()
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
mock_client.__aexit__ = AsyncMock(return_value=None)
mock_client.generate_presigned_url = AsyncMock(
return_value="http://garage:3900/reflector-media/test.txt"
)
with patch.object(
storage.session, "client", return_value=mock_client
) as mock_session_client:
await storage.put_file("test.txt", b"data")
mock_session_client.assert_called_with(
"s3", config=storage.boto_config, endpoint_url="http://garage:3900"
)
@pytest.mark.asyncio
async def test_aws_storage_none_endpoint_url():
"""Test that None endpoint preserves current AWS behavior."""
storage = AwsStorage(
aws_bucket_name="reflector-bucket",
aws_region="us-east-1",
aws_access_key_id="AKIAtest",
aws_secret_access_key="secret",
)
assert storage._endpoint_url is None
assert storage.base_url == "https://reflector-bucket.s3.amazonaws.com/"
# No s3 addressing_style override — boto_config should only have retries
assert not hasattr(storage.boto_config, "s3") or storage.boto_config.s3 is None
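The endpoint tests above imply three things about the constructor: a custom endpoint switches S3 to path-style addressing, that override is merged into the retry config rather than replacing it, and base_url becomes <endpoint>/<bucket>/. A botocore-based sketch of just that branch (function names are illustrative, not the AwsStorage internals):

from botocore.config import Config


def build_boto_config(endpoint_url: str | None) -> Config:
    """Retries always; path-style addressing only for custom endpoints (Garage, MinIO)."""
    config = Config(retries={"max_attempts": 3})
    if endpoint_url:
        # Config.merge returns a new Config; retries survive, the s3 block is added.
        config = config.merge(Config(s3={"addressing_style": "path"}))
    return config


def build_base_url(endpoint_url: str | None, bucket: str) -> str:
    if endpoint_url:
        return f"{endpoint_url}/{bucket}/"
    return f"https://{bucket}.s3.amazonaws.com/"

When the endpoint is set, the sketch's config and endpoint would then be passed through to the client, which is what the put_file assertion checks: session.client("s3", config=boto_config, endpoint_url=endpoint_url).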

View File

@@ -1,331 +0,0 @@
"""WebSocket broadcast delivery tests for STATUS and DAG_STATUS events.
Tests the full chain identified in DEBUG.md:
broadcast_event() → ws_manager.send_json() → Redis/in-memory pub/sub
→ _pubsub_data_reader() → socket.send_json() → WebSocket client
Covers:
1. STATUS event delivery to transcript room WS
2. DAG_STATUS event delivery to transcript room WS
3. Full broadcast_event() chain (requires broadcast.py patching)
4. _pubsub_data_reader resilience when a client disconnects
"""
import asyncio
import threading
import time
import pytest
from httpx import AsyncClient
from httpx_ws import aconnect_ws
from uvicorn import Config, Server
@pytest.fixture
def appserver_ws_broadcast(setup_database, monkeypatch):
"""Start real uvicorn server for WebSocket broadcast tests.
Also patches broadcast.py's get_ws_manager (missing from conftest autouse fixture).
"""
# Patch broadcast.py's get_ws_manager — conftest.py misses this module.
# Without this, broadcast_event() creates a real Redis ws_manager.
import reflector.ws_manager as ws_mod
from reflector.app import app
from reflector.db import get_database
monkeypatch.setattr(
"reflector.hatchet.broadcast.get_ws_manager", ws_mod.get_ws_manager
)
host = "127.0.0.1"
port = 1259
server_started = threading.Event()
server_exception = None
server_instance = None
def run_server():
nonlocal server_exception, server_instance
try:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
config = Config(app=app, host=host, port=port, loop=loop)
server_instance = Server(config)
async def start_server():
database = get_database()
await database.connect()
try:
await server_instance.serve()
finally:
await database.disconnect()
server_started.set()
loop.run_until_complete(start_server())
except Exception as e:
server_exception = e
server_started.set()
finally:
loop.close()
server_thread = threading.Thread(target=run_server, daemon=True)
server_thread.start()
server_started.wait(timeout=30)
if server_exception:
raise server_exception
time.sleep(0.5)
yield host, port
if server_instance:
server_instance.should_exit = True
server_thread.join(timeout=2.0)
from reflector.ws_manager import reset_ws_manager
reset_ws_manager()
async def _create_transcript(host: str, port: int, name: str) -> str:
"""Create a transcript via ASGI transport and return its ID."""
from reflector.app import app
async with AsyncClient(app=app, base_url=f"http://{host}:{port}/v1") as ac:
resp = await ac.post("/transcripts", json={"name": name})
assert resp.status_code == 200, f"Failed to create transcript: {resp.text}"
return resp.json()["id"]
async def _drain_historical_events(ws, timeout: float = 0.5) -> list[dict]:
"""Read all historical events sent on WS connect (non-blocking drain)."""
events = []
deadline = asyncio.get_event_loop().time() + timeout
while asyncio.get_event_loop().time() < deadline:
try:
msg = await asyncio.wait_for(ws.receive_json(), timeout=0.1)
events.append(msg)
except (asyncio.TimeoutError, Exception):
break
return events
# ---------------------------------------------------------------------------
# Test 1: STATUS event delivery via ws_manager.send_json
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_transcript_ws_receives_status_via_send_json(appserver_ws_broadcast):
"""STATUS event published via ws_manager.send_json() arrives at transcript room WS."""
host, port = appserver_ws_broadcast
transcript_id = await _create_transcript(host, port, "Status send_json test")
ws_url = f"http://{host}:{port}/v1/transcripts/{transcript_id}/events"
async with aconnect_ws(ws_url) as ws:
await _drain_historical_events(ws)
import reflector.ws_manager as ws_mod
ws_manager = ws_mod.get_ws_manager()
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message={"event": "STATUS", "data": {"value": "processing"}},
)
msg = await asyncio.wait_for(ws.receive_json(), timeout=5.0)
assert msg["event"] == "STATUS"
assert msg["data"]["value"] == "processing"
# ---------------------------------------------------------------------------
# Test 2: DAG_STATUS event delivery via ws_manager.send_json
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_transcript_ws_receives_dag_status_via_send_json(appserver_ws_broadcast):
"""DAG_STATUS event published via ws_manager.send_json() arrives at transcript room WS."""
host, port = appserver_ws_broadcast
transcript_id = await _create_transcript(host, port, "DAG_STATUS send_json test")
dag_payload = {
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "test-run-123",
"tasks": [
{
"name": "get_recording",
"status": "completed",
"started_at": "2025-01-01T00:00:00Z",
"finished_at": "2025-01-01T00:00:05Z",
"duration_seconds": 5.0,
"parents": [],
"error": None,
"children_total": None,
"children_completed": None,
"progress_pct": None,
},
{
"name": "process_tracks",
"status": "running",
"started_at": "2025-01-01T00:00:05Z",
"finished_at": None,
"duration_seconds": None,
"parents": ["get_recording"],
"error": None,
"children_total": 3,
"children_completed": 1,
"progress_pct": 33.3,
},
],
},
}
ws_url = f"http://{host}:{port}/v1/transcripts/{transcript_id}/events"
async with aconnect_ws(ws_url) as ws:
await _drain_historical_events(ws)
import reflector.ws_manager as ws_mod
ws_manager = ws_mod.get_ws_manager()
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message=dag_payload,
)
msg = await asyncio.wait_for(ws.receive_json(), timeout=5.0)
assert msg["event"] == "DAG_STATUS"
assert msg["data"]["workflow_run_id"] == "test-run-123"
assert len(msg["data"]["tasks"]) == 2
assert msg["data"]["tasks"][0]["name"] == "get_recording"
assert msg["data"]["tasks"][0]["status"] == "completed"
assert msg["data"]["tasks"][1]["name"] == "process_tracks"
assert msg["data"]["tasks"][1]["children_completed"] == 1
# ---------------------------------------------------------------------------
# Test 3: Full broadcast_event() chain for STATUS
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_broadcast_event_delivers_status_to_transcript_ws(appserver_ws_broadcast):
"""broadcast_event() end-to-end: STATUS event reaches transcript room WS."""
host, port = appserver_ws_broadcast
transcript_id = await _create_transcript(host, port, "broadcast_event STATUS test")
ws_url = f"http://{host}:{port}/v1/transcripts/{transcript_id}/events"
async with aconnect_ws(ws_url) as ws:
await _drain_historical_events(ws)
from reflector.db.transcripts import TranscriptEvent
from reflector.hatchet.broadcast import broadcast_event
from reflector.logger import logger
log = logger.bind(transcript_id=transcript_id)
event = TranscriptEvent(event="STATUS", data={"value": "processing"})
await broadcast_event(transcript_id, event, logger=log)
msg = await asyncio.wait_for(ws.receive_json(), timeout=5.0)
assert msg["event"] == "STATUS"
assert msg["data"]["value"] == "processing"
# ---------------------------------------------------------------------------
# Test 4: Full broadcast_event() chain for DAG_STATUS
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_broadcast_event_delivers_dag_status_to_transcript_ws(
appserver_ws_broadcast,
):
"""broadcast_event() end-to-end: DAG_STATUS event reaches transcript room WS."""
host, port = appserver_ws_broadcast
transcript_id = await _create_transcript(host, port, "broadcast_event DAG test")
ws_url = f"http://{host}:{port}/v1/transcripts/{transcript_id}/events"
async with aconnect_ws(ws_url) as ws:
await _drain_historical_events(ws)
from reflector.db.transcripts import TranscriptEvent
from reflector.hatchet.broadcast import broadcast_event
from reflector.logger import logger
log = logger.bind(transcript_id=transcript_id)
event = TranscriptEvent(
event="DAG_STATUS",
data={
"workflow_run_id": "test-run-456",
"tasks": [
{
"name": "get_recording",
"status": "running",
"started_at": None,
"finished_at": None,
"duration_seconds": None,
"parents": [],
"error": None,
"children_total": None,
"children_completed": None,
"progress_pct": None,
}
],
},
)
await broadcast_event(transcript_id, event, logger=log)
msg = await asyncio.wait_for(ws.receive_json(), timeout=5.0)
assert msg["event"] == "DAG_STATUS"
assert msg["data"]["tasks"][0]["name"] == "get_recording"
# ---------------------------------------------------------------------------
# Test 5: Multiple rapid events arrive in order
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_multiple_events_arrive_in_order(appserver_ws_broadcast):
"""Multiple STATUS then DAG_STATUS events arrive in correct order."""
host, port = appserver_ws_broadcast
transcript_id = await _create_transcript(host, port, "ordering test")
ws_url = f"http://{host}:{port}/v1/transcripts/{transcript_id}/events"
async with aconnect_ws(ws_url) as ws:
await _drain_historical_events(ws)
import reflector.ws_manager as ws_mod
ws_manager = ws_mod.get_ws_manager()
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message={"event": "STATUS", "data": {"value": "processing"}},
)
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message={
"event": "DAG_STATUS",
"data": {"workflow_run_id": "r1", "tasks": []},
},
)
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message={
"event": "DAG_STATUS",
"data": {
"workflow_run_id": "r1",
"tasks": [{"name": "a", "status": "running"}],
},
},
)
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message={"event": "STATUS", "data": {"value": "ended"}},
)
msgs = []
for _ in range(4):
msg = await asyncio.wait_for(ws.receive_json(), timeout=5.0)
msgs.append(msg)
assert msgs[0]["event"] == "STATUS"
assert msgs[0]["data"]["value"] == "processing"
assert msgs[1]["event"] == "DAG_STATUS"
assert msgs[1]["data"]["tasks"] == []
assert msgs[2]["event"] == "DAG_STATUS"
assert len(msgs[2]["data"]["tasks"]) == 1
assert msgs[3]["event"] == "STATUS"
assert msgs[3]["data"]["value"] == "ended"

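The deleted module above exercised the delivery chain spelled out in its docstring: broadcast_event() publishes into ws_manager.send_json(), a Redis or in-memory pub/sub layer fans the message out per room, and each reader pushes it to its WebSocket client. As a minimal sketch of that room-based fan-out (a toy in-memory stand-in written for illustration, not reflector's actual ws_manager), the core mechanism the tests relied on looks roughly like this:

import asyncio
from collections import defaultdict

class ToyWsManager:
    """Toy stand-in for the Redis/in-memory pub/sub layer (illustration only)."""

    def __init__(self) -> None:
        self._rooms: dict[str, list[asyncio.Queue]] = defaultdict(list)

    def subscribe(self, room_id: str) -> asyncio.Queue:
        # Each WebSocket reader gets its own queue for the room.
        queue: asyncio.Queue = asyncio.Queue()
        self._rooms[room_id].append(queue)
        return queue

    async def send_json(self, room_id: str, message: dict) -> None:
        # Fan the event out to every subscriber of the room; in the real
        # chain each reader then forwards it with socket.send_json().
        for queue in self._rooms[room_id]:
            await queue.put(message)

The tests published to rooms named f"ts:{transcript_id}" and asserted strict ordering; a per-subscriber FIFO queue like the one above is the property that makes such an ordering guarantee reasonable.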
676
server/uv.lock generated
View File

@@ -235,12 +235,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
]
[[package]]
name = "antlr4-python3-runtime"
version = "4.9.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/3e/38/7859ff46355f76f8d19459005ca000b6e7012f2f1ca597746cbcd1fbfe5e/antlr4-python3-runtime-4.9.3.tar.gz", hash = "sha256:f224469b4168294902bb1efa80a8bf7855f24c99aef99cbefc1bcd3cce77881b", size = 117034 }
[[package]]
name = "anyio"
version = "4.9.0"
@@ -267,21 +261,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2f/f5/c36551e93acba41a59939ae6a0fb77ddb3f2e8e8caa716410c65f7341f72/asgi_lifespan-2.1.0-py3-none-any.whl", hash = "sha256:ed840706680e28428c01e14afb3875d7d76d3206f3d5b2f2294e059b5c23804f", size = 10895 },
]
[[package]]
name = "asteroid-filterbanks"
version = "0.4.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/90/fa/5c2be1f96dc179f83cdd3bb267edbd1f47d08f756785c016d5c2163901a7/asteroid-filterbanks-0.4.0.tar.gz", hash = "sha256:415f89d1dcf2b13b35f03f7a9370968ac4e6fa6800633c522dac992b283409b9", size = 24599 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c5/7c/83ff6046176a675e6a1e8aeefed8892cd97fe7c46af93cc540d1b24b8323/asteroid_filterbanks-0.4.0-py3-none-any.whl", hash = "sha256:4932ac8b6acc6e08fb87cbe8ece84215b5a74eee284fe83acf3540a72a02eaf5", size = 29912 },
]
[[package]]
name = "async-timeout"
version = "5.0.1"
@@ -603,56 +582,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/06/3d6badcf13db419e25b07041d9c7b4a2c331d3f4e7134445ec5df57714cd/coloredlogs-15.0.1-py2.py3-none-any.whl", hash = "sha256:612ee75c546f53e92e70049c9dbfcc18c935a2b9a53b66085ce9ef6a6e5c0934", size = 46018 },
]
[[package]]
name = "colorlog"
version = "6.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d3/7a/359f4d5df2353f26172b3cc39ea32daa39af8de522205f512f458923e677/colorlog-6.9.0.tar.gz", hash = "sha256:bfba54a1b93b94f54e1f4fe48395725a3d92fd2a4af702f6bd70946bdc0c6ac2", size = 16624 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/51/9b208e85196941db2f0654ad0357ca6388ab3ed67efdbfc799f35d1f83aa/colorlog-6.9.0-py3-none-any.whl", hash = "sha256:5906e71acd67cb07a71e779c47c4bcb45fb8c2993eebe9e5adcd6a6f1b283eff", size = 11424 },
]
[[package]]
name = "contourpy"
version = "1.3.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/58/01/1253e6698a07380cd31a736d248a3f2a50a7c88779a1813da27503cadc2a/contourpy-1.3.3.tar.gz", hash = "sha256:083e12155b210502d0bca491432bb04d56dc3432f95a979b429f2848c3dbe880", size = 13466174 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/91/2e/c4390a31919d8a78b90e8ecf87cd4b4c4f05a5b48d05ec17db8e5404c6f4/contourpy-1.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:709a48ef9a690e1343202916450bc48b9e51c049b089c7f79a267b46cffcdaa1", size = 288773 },
{ url = "https://files.pythonhosted.org/packages/0d/44/c4b0b6095fef4dc9c420e041799591e3b63e9619e3044f7f4f6c21c0ab24/contourpy-1.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:23416f38bfd74d5d28ab8429cc4d63fa67d5068bd711a85edb1c3fb0c3e2f381", size = 270149 },
{ url = "https://files.pythonhosted.org/packages/30/2e/dd4ced42fefac8470661d7cb7e264808425e6c5d56d175291e93890cce09/contourpy-1.3.3-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:929ddf8c4c7f348e4c0a5a3a714b5c8542ffaa8c22954862a46ca1813b667ee7", size = 329222 },
{ url = "https://files.pythonhosted.org/packages/f2/74/cc6ec2548e3d276c71389ea4802a774b7aa3558223b7bade3f25787fafc2/contourpy-1.3.3-cp311-cp311-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:9e999574eddae35f1312c2b4b717b7885d4edd6cb46700e04f7f02db454e67c1", size = 377234 },
{ url = "https://files.pythonhosted.org/packages/03/b3/64ef723029f917410f75c09da54254c5f9ea90ef89b143ccadb09df14c15/contourpy-1.3.3-cp311-cp311-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf67e0e3f482cb69779dd3061b534eb35ac9b17f163d851e2a547d56dba0a3a", size = 380555 },
{ url = "https://files.pythonhosted.org/packages/5f/4b/6157f24ca425b89fe2eb7e7be642375711ab671135be21e6faa100f7448c/contourpy-1.3.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:51e79c1f7470158e838808d4a996fa9bac72c498e93d8ebe5119bc1e6becb0db", size = 355238 },
{ url = "https://files.pythonhosted.org/packages/98/56/f914f0dd678480708a04cfd2206e7c382533249bc5001eb9f58aa693e200/contourpy-1.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:598c3aaece21c503615fd59c92a3598b428b2f01bfb4b8ca9c4edeecc2438620", size = 1326218 },
{ url = "https://files.pythonhosted.org/packages/fb/d7/4a972334a0c971acd5172389671113ae82aa7527073980c38d5868ff1161/contourpy-1.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:322ab1c99b008dad206d406bb61d014cf0174df491ae9d9d0fac6a6fda4f977f", size = 1392867 },
{ url = "https://files.pythonhosted.org/packages/75/3e/f2cc6cd56dc8cff46b1a56232eabc6feea52720083ea71ab15523daab796/contourpy-1.3.3-cp311-cp311-win32.whl", hash = "sha256:fd907ae12cd483cd83e414b12941c632a969171bf90fc937d0c9f268a31cafff", size = 183677 },
{ url = "https://files.pythonhosted.org/packages/98/4b/9bd370b004b5c9d8045c6c33cf65bae018b27aca550a3f657cdc99acdbd8/contourpy-1.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:3519428f6be58431c56581f1694ba8e50626f2dd550af225f82fb5f5814d2a42", size = 225234 },
{ url = "https://files.pythonhosted.org/packages/d9/b6/71771e02c2e004450c12b1120a5f488cad2e4d5b590b1af8bad060360fe4/contourpy-1.3.3-cp311-cp311-win_arm64.whl", hash = "sha256:15ff10bfada4bf92ec8b31c62bf7c1834c244019b4a33095a68000d7075df470", size = 193123 },
{ url = "https://files.pythonhosted.org/packages/be/45/adfee365d9ea3d853550b2e735f9d66366701c65db7855cd07621732ccfc/contourpy-1.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b08a32ea2f8e42cf1d4be3169a98dd4be32bafe4f22b6c4cb4ba810fa9e5d2cb", size = 293419 },
{ url = "https://files.pythonhosted.org/packages/53/3e/405b59cfa13021a56bba395a6b3aca8cec012b45bf177b0eaf7a202cde2c/contourpy-1.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:556dba8fb6f5d8742f2923fe9457dbdd51e1049c4a43fd3986a0b14a1d815fc6", size = 273979 },
{ url = "https://files.pythonhosted.org/packages/d4/1c/a12359b9b2ca3a845e8f7f9ac08bdf776114eb931392fcad91743e2ea17b/contourpy-1.3.3-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:92d9abc807cf7d0e047b95ca5d957cf4792fcd04e920ca70d48add15c1a90ea7", size = 332653 },
{ url = "https://files.pythonhosted.org/packages/63/12/897aeebfb475b7748ea67b61e045accdfcf0d971f8a588b67108ed7f5512/contourpy-1.3.3-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2e8faa0ed68cb29af51edd8e24798bb661eac3bd9f65420c1887b6ca89987c8", size = 379536 },
{ url = "https://files.pythonhosted.org/packages/43/8a/a8c584b82deb248930ce069e71576fc09bd7174bbd35183b7943fb1064fd/contourpy-1.3.3-cp312-cp312-manylinux_2_26_s390x.manylinux_2_28_s390x.whl", hash = "sha256:626d60935cf668e70a5ce6ff184fd713e9683fb458898e4249b63be9e28286ea", size = 384397 },
{ url = "https://files.pythonhosted.org/packages/cc/8f/ec6289987824b29529d0dfda0d74a07cec60e54b9c92f3c9da4c0ac732de/contourpy-1.3.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4d00e655fcef08aba35ec9610536bfe90267d7ab5ba944f7032549c55a146da1", size = 362601 },
{ url = "https://files.pythonhosted.org/packages/05/0a/a3fe3be3ee2dceb3e615ebb4df97ae6f3828aa915d3e10549ce016302bd1/contourpy-1.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:451e71b5a7d597379ef572de31eeb909a87246974d960049a9848c3bc6c41bf7", size = 1331288 },
{ url = "https://files.pythonhosted.org/packages/33/1d/acad9bd4e97f13f3e2b18a3977fe1b4a37ecf3d38d815333980c6c72e963/contourpy-1.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:459c1f020cd59fcfe6650180678a9993932d80d44ccde1fa1868977438f0b411", size = 1403386 },
{ url = "https://files.pythonhosted.org/packages/cf/8f/5847f44a7fddf859704217a99a23a4f6417b10e5ab1256a179264561540e/contourpy-1.3.3-cp312-cp312-win32.whl", hash = "sha256:023b44101dfe49d7d53932be418477dba359649246075c996866106da069af69", size = 185018 },
{ url = "https://files.pythonhosted.org/packages/19/e8/6026ed58a64563186a9ee3f29f41261fd1828f527dd93d33b60feca63352/contourpy-1.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:8153b8bfc11e1e4d75bcb0bff1db232f9e10b274e0929de9d608027e0d34ff8b", size = 226567 },
{ url = "https://files.pythonhosted.org/packages/d1/e2/f05240d2c39a1ed228d8328a78b6f44cd695f7ef47beb3e684cf93604f86/contourpy-1.3.3-cp312-cp312-win_arm64.whl", hash = "sha256:07ce5ed73ecdc4a03ffe3e1b3e3c1166db35ae7584be76f65dbbe28a7791b0cc", size = 193655 },
{ url = "https://files.pythonhosted.org/packages/a5/29/8dcfe16f0107943fa92388c23f6e05cff0ba58058c4c95b00280d4c75a14/contourpy-1.3.3-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:cd5dfcaeb10f7b7f9dc8941717c6c2ade08f587be2226222c12b25f0483ed497", size = 278809 },
{ url = "https://files.pythonhosted.org/packages/85/a9/8b37ef4f7dafeb335daee3c8254645ef5725be4d9c6aa70b50ec46ef2f7e/contourpy-1.3.3-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:0c1fc238306b35f246d61a1d416a627348b5cf0648648a031e14bb8705fcdfe8", size = 261593 },
{ url = "https://files.pythonhosted.org/packages/0a/59/ebfb8c677c75605cc27f7122c90313fd2f375ff3c8d19a1694bda74aaa63/contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70f9aad7de812d6541d29d2bbf8feb22ff7e1c299523db288004e3157ff4674e", size = 302202 },
{ url = "https://files.pythonhosted.org/packages/3c/37/21972a15834d90bfbfb009b9d004779bd5a07a0ec0234e5ba8f64d5736f4/contourpy-1.3.3-pp311-pypy311_pp73-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ed3657edf08512fc3fe81b510e35c2012fbd3081d2e26160f27ca28affec989", size = 329207 },
{ url = "https://files.pythonhosted.org/packages/0c/58/bd257695f39d05594ca4ad60df5bcb7e32247f9951fd09a9b8edb82d1daa/contourpy-1.3.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:3d1a3799d62d45c18bafd41c5fa05120b96a28079f2393af559b843d1a966a77", size = 225315 },
]
[[package]]
name = "coverage"
version = "7.9.2"
@@ -753,15 +682,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/4c/0ecd260233290bee4b2facec4d8e755e57d8781d68f276e1248433993c9f/ctranslate2-4.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:511cdf810a5bf6a2cec735799e5cd47966e63f8f7688fdee1b97fed621abda00", size = 19470040 },
]
[[package]]
name = "cycler"
version = "0.12.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a9/95/a3dbbb5028f35eafb79008e7522a75244477d2838f38cbb722248dabc2a8/cycler-0.12.1.tar.gz", hash = "sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c", size = 7615 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321 },
]
[[package]]
name = "databases"
version = "0.8.0"
@@ -874,12 +794,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/26/57c6fb270950d476074c087527a558ccb6f4436657314bfb6cdf484114c4/docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0", size = 147774 },
]
[[package]]
name = "docopt"
version = "0.6.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz", hash = "sha256:49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491", size = 25901 }
[[package]]
name = "ecdsa"
version = "0.19.1"
@@ -892,15 +806,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/a3/460c57f094a4a165c84a1341c373b0a4f5ec6ac244b998d5021aade89b77/ecdsa-0.19.1-py2.py3-none-any.whl", hash = "sha256:30638e27cf77b7e15c4c4cc1973720149e1033827cfd00661ca5c8cc0cdb24c3", size = 150607 },
]
[[package]]
name = "einops"
version = "0.8.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e5/81/df4fbe24dff8ba3934af99044188e20a98ed441ad17a274539b74e82e126/einops-0.8.1.tar.gz", hash = "sha256:de5d960a7a761225532e0f1959e5315ebeafc0cd43394732f103ca44b9837e84", size = 54805 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/87/62/9773de14fe6c45c23649e98b83231fffd7b9892b6cf863251dc2afa73643/einops-0.8.1-py3-none-any.whl", hash = "sha256:919387eb55330f5757c6bea9165c5ff5cfe63a642682ea788a6d472576d81737", size = 64359 },
]
[[package]]
name = "email-validator"
version = "2.2.0"
@@ -1034,31 +939,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b8/25/155f9f080d5e4bc0082edfda032ea2bc2b8fab3f4d25d46c1e9dd22a1a89/flatbuffers-25.2.10-py2.py3-none-any.whl", hash = "sha256:ebba5f4d5ea615af3f7fd70fc310636fbb2bbd1f566ac0a23d98dd412de50051", size = 30953 },
]
[[package]]
name = "fonttools"
version = "4.59.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0d/a5/fba25f9fbdab96e26dedcaeeba125e5f05a09043bf888e0305326e55685b/fonttools-4.59.2.tar.gz", hash = "sha256:e72c0749b06113f50bcb80332364c6be83a9582d6e3db3fe0b280f996dc2ef22", size = 3540889 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f8/53/742fcd750ae0bdc74de4c0ff923111199cc2f90a4ee87aaddad505b6f477/fonttools-4.59.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:511946e8d7ea5c0d6c7a53c4cb3ee48eda9ab9797cd9bf5d95829a398400354f", size = 2774961 },
{ url = "https://files.pythonhosted.org/packages/57/2a/976f5f9fa3b4dd911dc58d07358467bec20e813d933bc5d3db1a955dd456/fonttools-4.59.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8e5e2682cf7be766d84f462ba8828d01e00c8751a8e8e7ce12d7784ccb69a30d", size = 2344690 },
{ url = "https://files.pythonhosted.org/packages/c1/8f/b7eefc274fcf370911e292e95565c8253b0b87c82a53919ab3c795a4f50e/fonttools-4.59.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5729e12a982dba3eeae650de48b06f3b9ddb51e9aee2fcaf195b7d09a96250e2", size = 5026910 },
{ url = "https://files.pythonhosted.org/packages/69/95/864726eaa8f9d4e053d0c462e64d5830ec7c599cbdf1db9e40f25ca3972e/fonttools-4.59.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c52694eae5d652361d59ecdb5a2246bff7cff13b6367a12da8499e9df56d148d", size = 4971031 },
{ url = "https://files.pythonhosted.org/packages/24/4c/b8c4735ebdea20696277c70c79e0de615dbe477834e5a7c2569aa1db4033/fonttools-4.59.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f1f1bbc23ba1312bd8959896f46f667753b90216852d2a8cfa2d07e0cb234144", size = 5006112 },
{ url = "https://files.pythonhosted.org/packages/3b/23/f9ea29c292aa2fc1ea381b2e5621ac436d5e3e0a5dee24ffe5404e58eae8/fonttools-4.59.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:1a1bfe5378962825dabe741720885e8b9ae9745ec7ecc4a5ec1f1ce59a6062bf", size = 5117671 },
{ url = "https://files.pythonhosted.org/packages/ba/07/cfea304c555bf06e86071ff2a3916bc90f7c07ec85b23bab758d4908c33d/fonttools-4.59.2-cp311-cp311-win32.whl", hash = "sha256:e937790f3c2c18a1cbc7da101550a84319eb48023a715914477d2e7faeaba570", size = 2218157 },
{ url = "https://files.pythonhosted.org/packages/d7/de/35d839aa69db737a3f9f3a45000ca24721834d40118652a5775d5eca8ebb/fonttools-4.59.2-cp311-cp311-win_amd64.whl", hash = "sha256:9836394e2f4ce5f9c0a7690ee93bd90aa1adc6b054f1a57b562c5d242c903104", size = 2265846 },
{ url = "https://files.pythonhosted.org/packages/ba/3d/1f45db2df51e7bfa55492e8f23f383d372200be3a0ded4bf56a92753dd1f/fonttools-4.59.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:82906d002c349cad647a7634b004825a7335f8159d0d035ae89253b4abf6f3ea", size = 2769711 },
{ url = "https://files.pythonhosted.org/packages/29/df/cd236ab32a8abfd11558f296e064424258db5edefd1279ffdbcfd4fd8b76/fonttools-4.59.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a10c1bd7644dc58f8862d8ba0cf9fb7fef0af01ea184ba6ce3f50ab7dfe74d5a", size = 2340225 },
{ url = "https://files.pythonhosted.org/packages/98/12/b6f9f964fe6d4b4dd4406bcbd3328821c3de1f909ffc3ffa558fe72af48c/fonttools-4.59.2-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:738f31f23e0339785fd67652a94bc69ea49e413dfdb14dcb8c8ff383d249464e", size = 4912766 },
{ url = "https://files.pythonhosted.org/packages/73/78/82bde2f2d2c306ef3909b927363170b83df96171f74e0ccb47ad344563cd/fonttools-4.59.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ec99f9bdfee9cdb4a9172f9e8fd578cce5feb231f598909e0aecf5418da4f25", size = 4955178 },
{ url = "https://files.pythonhosted.org/packages/92/77/7de766afe2d31dda8ee46d7e479f35c7d48747e558961489a2d6e3a02bd4/fonttools-4.59.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0476ea74161322e08c7a982f83558a2b81b491509984523a1a540baf8611cc31", size = 4897898 },
{ url = "https://files.pythonhosted.org/packages/c5/77/ce0e0b905d62a06415fda9f2b2e109a24a5db54a59502b769e9e297d2242/fonttools-4.59.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:95922a922daa1f77cc72611747c156cfb38030ead72436a2c551d30ecef519b9", size = 5049144 },
{ url = "https://files.pythonhosted.org/packages/d9/ea/870d93aefd23fff2e07cbeebdc332527868422a433c64062c09d4d5e7fe6/fonttools-4.59.2-cp312-cp312-win32.whl", hash = "sha256:39ad9612c6a622726a6a130e8ab15794558591f999673f1ee7d2f3d30f6a3e1c", size = 2206473 },
{ url = "https://files.pythonhosted.org/packages/61/c4/e44bad000c4a4bb2e9ca11491d266e857df98ab6d7428441b173f0fe2517/fonttools-4.59.2-cp312-cp312-win_amd64.whl", hash = "sha256:980fd7388e461b19a881d35013fec32c713ffea1fc37aef2f77d11f332dfd7da", size = 2254706 },
{ url = "https://files.pythonhosted.org/packages/65/a4/d2f7be3c86708912c02571db0b550121caab8cd88a3c0aacb9cfa15ea66e/fonttools-4.59.2-py3-none-any.whl", hash = "sha256:8bd0f759020e87bb5d323e6283914d9bf4ae35a7307dafb2cbd1e379e720ad37", size = 1132315 },
]
[[package]]
name = "frozenlist"
version = "1.7.0"
@@ -1111,11 +991,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2f/e0/014d5d9d7a4564cf1c40b5039bc882db69fd881111e03ab3657ac0b218e2/fsspec-2025.7.0-py3-none-any.whl", hash = "sha256:8b012e39f63c7d5f10474de957f3ab793b47b45ae7d39f2fb735f8bbe25c0e21", size = 199597 },
]
[package.optional-dependencies]
http = [
{ name = "aiohttp" },
]
[[package]]
name = "google-crc32c"
version = "1.7.1"
@@ -1380,19 +1255,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f0/0f/310fb31e39e2d734ccaa2c0fb981ee41f7bd5056ce9bc29b2248bd569169/humanfriendly-10.0-py2.py3-none-any.whl", hash = "sha256:1697e1a8a8f550fd43c2865cd84542fc175a61dcb779b6fee18cf6b6ccba1477", size = 86794 },
]
[[package]]
name = "hyperpyyaml"
version = "1.2.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyyaml" },
{ name = "ruamel-yaml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/52/e3/3ac46d9a662b037f699a6948b39c8d03bfcff0b592335d5953ba0c55d453/HyperPyYAML-1.2.2.tar.gz", hash = "sha256:bdb734210d18770a262f500fe5755c7a44a5d3b91521b06e24f7a00a36ee0f87", size = 17085 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/33/c9/751b6401887f4b50f9307cc1e53d287b3dc77c375c126aeb6335aff73ccb/HyperPyYAML-1.2.2-py3-none-any.whl", hash = "sha256:3c5864bdc8864b2f0fbd7bc495e7e8fdf2dfd5dd80116f72da27ca96a128bdeb", size = 16118 },
]
[[package]]
name = "icalendar"
version = "6.3.1"
@@ -1535,55 +1397,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/01/0e/b27cdbaccf30b890c40ed1da9fd4a3593a5cf94dae54fb34f8a4b74fcd3f/jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af", size = 18437 },
]
[[package]]
name = "julius"
version = "0.2.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a1/19/c9e1596b5572c786b93428d0904280e964c930fae7e6c9368ed9e1b63922/julius-0.2.7.tar.gz", hash = "sha256:3c0f5f5306d7d6016fcc95196b274cae6f07e2c9596eed314e4e7641554fbb08", size = 59640 }
[[package]]
name = "kiwisolver"
version = "1.4.9"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/5c/3c/85844f1b0feb11ee581ac23fe5fce65cd049a200c1446708cc1b7f922875/kiwisolver-1.4.9.tar.gz", hash = "sha256:c3b22c26c6fd6811b0ae8363b95ca8ce4ea3c202d3d0975b2914310ceb1bcc4d", size = 97564 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6f/ab/c80b0d5a9d8a1a65f4f815f2afff9798b12c3b9f31f1d304dd233dd920e2/kiwisolver-1.4.9-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:eb14a5da6dc7642b0f3a18f13654847cd8b7a2550e2645a5bda677862b03ba16", size = 124167 },
{ url = "https://files.pythonhosted.org/packages/a0/c0/27fe1a68a39cf62472a300e2879ffc13c0538546c359b86f149cc19f6ac3/kiwisolver-1.4.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:39a219e1c81ae3b103643d2aedb90f1ef22650deb266ff12a19e7773f3e5f089", size = 66579 },
{ url = "https://files.pythonhosted.org/packages/31/a2/a12a503ac1fd4943c50f9822678e8015a790a13b5490354c68afb8489814/kiwisolver-1.4.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2405a7d98604b87f3fc28b1716783534b1b4b8510d8142adca34ee0bc3c87543", size = 65309 },
{ url = "https://files.pythonhosted.org/packages/66/e1/e533435c0be77c3f64040d68d7a657771194a63c279f55573188161e81ca/kiwisolver-1.4.9-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dc1ae486f9abcef254b5618dfb4113dd49f94c68e3e027d03cf0143f3f772b61", size = 1435596 },
{ url = "https://files.pythonhosted.org/packages/67/1e/51b73c7347f9aabdc7215aa79e8b15299097dc2f8e67dee2b095faca9cb0/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8a1f570ce4d62d718dce3f179ee78dac3b545ac16c0c04bb363b7607a949c0d1", size = 1246548 },
{ url = "https://files.pythonhosted.org/packages/21/aa/72a1c5d1e430294f2d32adb9542719cfb441b5da368d09d268c7757af46c/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:cb27e7b78d716c591e88e0a09a2139c6577865d7f2e152488c2cc6257f460872", size = 1263618 },
{ url = "https://files.pythonhosted.org/packages/a3/af/db1509a9e79dbf4c260ce0cfa3903ea8945f6240e9e59d1e4deb731b1a40/kiwisolver-1.4.9-cp311-cp311-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:15163165efc2f627eb9687ea5f3a28137217d217ac4024893d753f46bce9de26", size = 1317437 },
{ url = "https://files.pythonhosted.org/packages/e0/f2/3ea5ee5d52abacdd12013a94130436e19969fa183faa1e7c7fbc89e9a42f/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bdee92c56a71d2b24c33a7d4c2856bd6419d017e08caa7802d2963870e315028", size = 2195742 },
{ url = "https://files.pythonhosted.org/packages/6f/9b/1efdd3013c2d9a2566aa6a337e9923a00590c516add9a1e89a768a3eb2fc/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:412f287c55a6f54b0650bd9b6dce5aceddb95864a1a90c87af16979d37c89771", size = 2290810 },
{ url = "https://files.pythonhosted.org/packages/fb/e5/cfdc36109ae4e67361f9bc5b41323648cb24a01b9ade18784657e022e65f/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:2c93f00dcba2eea70af2be5f11a830a742fe6b579a1d4e00f47760ef13be247a", size = 2461579 },
{ url = "https://files.pythonhosted.org/packages/62/86/b589e5e86c7610842213994cdea5add00960076bef4ae290c5fa68589cac/kiwisolver-1.4.9-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f117e1a089d9411663a3207ba874f31be9ac8eaa5b533787024dc07aeb74f464", size = 2268071 },
{ url = "https://files.pythonhosted.org/packages/3b/c6/f8df8509fd1eee6c622febe54384a96cfaf4d43bf2ccec7a0cc17e4715c9/kiwisolver-1.4.9-cp311-cp311-win_amd64.whl", hash = "sha256:be6a04e6c79819c9a8c2373317d19a96048e5a3f90bec587787e86a1153883c2", size = 73840 },
{ url = "https://files.pythonhosted.org/packages/e2/2d/16e0581daafd147bc11ac53f032a2b45eabac897f42a338d0a13c1e5c436/kiwisolver-1.4.9-cp311-cp311-win_arm64.whl", hash = "sha256:0ae37737256ba2de764ddc12aed4956460277f00c4996d51a197e72f62f5eec7", size = 65159 },
{ url = "https://files.pythonhosted.org/packages/86/c9/13573a747838aeb1c76e3267620daa054f4152444d1f3d1a2324b78255b5/kiwisolver-1.4.9-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ac5a486ac389dddcc5bef4f365b6ae3ffff2c433324fb38dd35e3fab7c957999", size = 123686 },
{ url = "https://files.pythonhosted.org/packages/51/ea/2ecf727927f103ffd1739271ca19c424d0e65ea473fbaeea1c014aea93f6/kiwisolver-1.4.9-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2ba92255faa7309d06fe44c3a4a97efe1c8d640c2a79a5ef728b685762a6fd2", size = 66460 },
{ url = "https://files.pythonhosted.org/packages/5b/5a/51f5464373ce2aeb5194508298a508b6f21d3867f499556263c64c621914/kiwisolver-1.4.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4a2899935e724dd1074cb568ce7ac0dce28b2cd6ab539c8e001a8578eb106d14", size = 64952 },
{ url = "https://files.pythonhosted.org/packages/70/90/6d240beb0f24b74371762873e9b7f499f1e02166a2d9c5801f4dbf8fa12e/kiwisolver-1.4.9-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f6008a4919fdbc0b0097089f67a1eb55d950ed7e90ce2cc3e640abadd2757a04", size = 1474756 },
{ url = "https://files.pythonhosted.org/packages/12/42/f36816eaf465220f683fb711efdd1bbf7a7005a2473d0e4ed421389bd26c/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:67bb8b474b4181770f926f7b7d2f8c0248cbcb78b660fdd41a47054b28d2a752", size = 1276404 },
{ url = "https://files.pythonhosted.org/packages/2e/64/bc2de94800adc830c476dce44e9b40fd0809cddeef1fde9fcf0f73da301f/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2327a4a30d3ee07d2fbe2e7933e8a37c591663b96ce42a00bc67461a87d7df77", size = 1294410 },
{ url = "https://files.pythonhosted.org/packages/5f/42/2dc82330a70aa8e55b6d395b11018045e58d0bb00834502bf11509f79091/kiwisolver-1.4.9-cp312-cp312-manylinux_2_24_s390x.manylinux_2_28_s390x.whl", hash = "sha256:7a08b491ec91b1d5053ac177afe5290adacf1f0f6307d771ccac5de30592d198", size = 1343631 },
{ url = "https://files.pythonhosted.org/packages/22/fd/f4c67a6ed1aab149ec5a8a401c323cee7a1cbe364381bb6c9c0d564e0e20/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d8fc5c867c22b828001b6a38d2eaeb88160bf5783c6cb4a5e440efc981ce286d", size = 2224963 },
{ url = "https://files.pythonhosted.org/packages/45/aa/76720bd4cb3713314677d9ec94dcc21ced3f1baf4830adde5bb9b2430a5f/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:3b3115b2581ea35bb6d1f24a4c90af37e5d9b49dcff267eeed14c3893c5b86ab", size = 2321295 },
{ url = "https://files.pythonhosted.org/packages/80/19/d3ec0d9ab711242f56ae0dc2fc5d70e298bb4a1f9dfab44c027668c673a1/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:858e4c22fb075920b96a291928cb7dea5644e94c0ee4fcd5af7e865655e4ccf2", size = 2487987 },
{ url = "https://files.pythonhosted.org/packages/39/e9/61e4813b2c97e86b6fdbd4dd824bf72d28bcd8d4849b8084a357bc0dd64d/kiwisolver-1.4.9-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ed0fecd28cc62c54b262e3736f8bb2512d8dcfdc2bcf08be5f47f96bf405b145", size = 2291817 },
{ url = "https://files.pythonhosted.org/packages/a0/41/85d82b0291db7504da3c2defe35c9a8a5c9803a730f297bd823d11d5fb77/kiwisolver-1.4.9-cp312-cp312-win_amd64.whl", hash = "sha256:f68208a520c3d86ea51acf688a3e3002615a7f0238002cccc17affecc86a8a54", size = 73895 },
{ url = "https://files.pythonhosted.org/packages/e2/92/5f3068cf15ee5cb624a0c7596e67e2a0bb2adee33f71c379054a491d07da/kiwisolver-1.4.9-cp312-cp312-win_arm64.whl", hash = "sha256:2c1a4f57df73965f3f14df20b80ee29e6a7930a57d2d9e8491a25f676e197c60", size = 64992 },
{ url = "https://files.pythonhosted.org/packages/a3/0f/36d89194b5a32c054ce93e586d4049b6c2c22887b0eb229c61c68afd3078/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:720e05574713db64c356e86732c0f3c5252818d05f9df320f0ad8380641acea5", size = 60104 },
{ url = "https://files.pythonhosted.org/packages/52/ba/4ed75f59e4658fd21fe7dde1fee0ac397c678ec3befba3fe6482d987af87/kiwisolver-1.4.9-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:17680d737d5335b552994a2008fab4c851bcd7de33094a82067ef3a576ff02fa", size = 58592 },
{ url = "https://files.pythonhosted.org/packages/33/01/a8ea7c5ea32a9b45ceeaee051a04c8ed4320f5add3c51bfa20879b765b70/kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:85b5352f94e490c028926ea567fc569c52ec79ce131dadb968d3853e809518c2", size = 80281 },
{ url = "https://files.pythonhosted.org/packages/da/e3/dbd2ecdce306f1d07a1aaf324817ee993aab7aee9db47ceac757deabafbe/kiwisolver-1.4.9-pp311-pypy311_pp73-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:464415881e4801295659462c49461a24fb107c140de781d55518c4b80cb6790f", size = 78009 },
{ url = "https://files.pythonhosted.org/packages/da/e9/0d4add7873a73e462aeb45c036a2dead2562b825aa46ba326727b3f31016/kiwisolver-1.4.9-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:fb940820c63a9590d31d88b815e7a3aa5915cad3ce735ab45f0c730b39547de1", size = 73929 },
]
[[package]]
name = "kombu"
version = "5.5.4"
@@ -1646,41 +1459,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/1e/408fd10217eac0e43aea0604be22b4851a09e03d761d44d4ea12089dd70e/levenshtein-0.27.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:7987ef006a3cf56a4532bd4c90c2d3b7b4ca9ad3bf8ae1ee5713c4a3bdfda913", size = 98045 },
]
[[package]]
name = "lightning"
version = "2.5.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "fsspec", extra = ["http"] },
{ name = "lightning-utilities" },
{ name = "packaging" },
{ name = "pytorch-lightning" },
{ name = "pyyaml" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchmetrics" },
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0f/dd/86bb3bebadcdbc6e6e5a63657f0a03f74cd065b5ea965896679f76fec0b4/lightning-2.5.5.tar.gz", hash = "sha256:4d3d66c5b1481364a7e6a1ce8ddde1777a04fa740a3145ec218a9941aed7dd30", size = 640770 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/d0/4b4fbafc3b18df91207a6e46782d9fd1905f9f45cb2c3b8dfbb239aef781/lightning-2.5.5-py3-none-any.whl", hash = "sha256:69eb248beadd7b600bf48eff00a0ec8af171ec7a678d23787c4aedf12e225e8f", size = 828490 },
]
[[package]]
name = "lightning-utilities"
version = "0.15.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
{ name = "setuptools" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b8/39/6fc58ca81492db047149b4b8fd385aa1bfb8c28cd7cacb0c7eb0c44d842f/lightning_utilities-0.15.2.tar.gz", hash = "sha256:cdf12f530214a63dacefd713f180d1ecf5d165338101617b4742e8f22c032e24", size = 31090 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/de/73/3d757cb3fc16f0f9794dd289bcd0c4a031d9cf54d8137d6b984b2d02edf3/lightning_utilities-0.15.2-py3-none-any.whl", hash = "sha256:ad3ab1703775044bbf880dbf7ddaaac899396c96315f3aa1779cec9d618a9841", size = 29431 },
]
[[package]]
name = "llama-cloud"
version = "0.1.32"
@@ -2028,42 +1806,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/34/75/51952c7b2d3873b44a0028b1bd26a25078c18f92f256608e8d1dc61b39fd/marshmallow-3.26.1-py3-none-any.whl", hash = "sha256:3350409f20a70a7e4e11a27661187b77cdcaeb20abca41c1454fe33636bea09c", size = 50878 },
]
[[package]]
name = "matplotlib"
version = "3.10.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "contourpy" },
{ name = "cycler" },
{ name = "fonttools" },
{ name = "kiwisolver" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "pillow" },
{ name = "pyparsing" },
{ name = "python-dateutil" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a0/59/c3e6453a9676ffba145309a73c462bb407f4400de7de3f2b41af70720a3c/matplotlib-3.10.6.tar.gz", hash = "sha256:ec01b645840dd1996df21ee37f208cd8ba57644779fa20464010638013d3203c", size = 34804264 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/80/d6/5d3665aa44c49005aaacaa68ddea6fcb27345961cd538a98bb0177934ede/matplotlib-3.10.6-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:905b60d1cb0ee604ce65b297b61cf8be9f4e6cfecf95a3fe1c388b5266bc8f4f", size = 8257527 },
{ url = "https://files.pythonhosted.org/packages/8c/af/30ddefe19ca67eebd70047dabf50f899eaff6f3c5e6a1a7edaecaf63f794/matplotlib-3.10.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7bac38d816637343e53d7185d0c66677ff30ffb131044a81898b5792c956ba76", size = 8119583 },
{ url = "https://files.pythonhosted.org/packages/d3/29/4a8650a3dcae97fa4f375d46efcb25920d67b512186f8a6788b896062a81/matplotlib-3.10.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:942a8de2b5bfff1de31d95722f702e2966b8a7e31f4e68f7cd963c7cd8861cf6", size = 8692682 },
{ url = "https://files.pythonhosted.org/packages/aa/d3/b793b9cb061cfd5d42ff0f69d1822f8d5dbc94e004618e48a97a8373179a/matplotlib-3.10.6-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a3276c85370bc0dfca051ec65c5817d1e0f8f5ce1b7787528ec8ed2d524bbc2f", size = 9521065 },
{ url = "https://files.pythonhosted.org/packages/f7/c5/53de5629f223c1c66668d46ac2621961970d21916a4bc3862b174eb2a88f/matplotlib-3.10.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9df5851b219225731f564e4b9e7f2ac1e13c9e6481f941b5631a0f8e2d9387ce", size = 9576888 },
{ url = "https://files.pythonhosted.org/packages/fc/8e/0a18d6d7d2d0a2e66585032a760d13662e5250c784d53ad50434e9560991/matplotlib-3.10.6-cp311-cp311-win_amd64.whl", hash = "sha256:abb5d9478625dd9c9eb51a06d39aae71eda749ae9b3138afb23eb38824026c7e", size = 8115158 },
{ url = "https://files.pythonhosted.org/packages/07/b3/1a5107bb66c261e23b9338070702597a2d374e5aa7004b7adfc754fbed02/matplotlib-3.10.6-cp311-cp311-win_arm64.whl", hash = "sha256:886f989ccfae63659183173bb3fced7fd65e9eb793c3cc21c273add368536951", size = 7992444 },
{ url = "https://files.pythonhosted.org/packages/ea/1a/7042f7430055d567cc3257ac409fcf608599ab27459457f13772c2d9778b/matplotlib-3.10.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:31ca662df6a80bd426f871105fdd69db7543e28e73a9f2afe80de7e531eb2347", size = 8272404 },
{ url = "https://files.pythonhosted.org/packages/a9/5d/1d5f33f5b43f4f9e69e6a5fe1fb9090936ae7bc8e2ff6158e7a76542633b/matplotlib-3.10.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1678bb61d897bb4ac4757b5ecfb02bfb3fddf7f808000fb81e09c510712fda75", size = 8128262 },
{ url = "https://files.pythonhosted.org/packages/67/c3/135fdbbbf84e0979712df58e5e22b4f257b3f5e52a3c4aacf1b8abec0d09/matplotlib-3.10.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:56cd2d20842f58c03d2d6e6c1f1cf5548ad6f66b91e1e48f814e4fb5abd1cb95", size = 8697008 },
{ url = "https://files.pythonhosted.org/packages/9c/be/c443ea428fb2488a3ea7608714b1bd85a82738c45da21b447dc49e2f8e5d/matplotlib-3.10.6-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:662df55604a2f9a45435566d6e2660e41efe83cd94f4288dfbf1e6d1eae4b0bb", size = 9530166 },
{ url = "https://files.pythonhosted.org/packages/a9/35/48441422b044d74034aea2a3e0d1a49023f12150ebc58f16600132b9bbaf/matplotlib-3.10.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:08f141d55148cd1fc870c3387d70ca4df16dee10e909b3b038782bd4bda6ea07", size = 9593105 },
{ url = "https://files.pythonhosted.org/packages/45/c3/994ef20eb4154ab84cc08d033834555319e4af970165e6c8894050af0b3c/matplotlib-3.10.6-cp312-cp312-win_amd64.whl", hash = "sha256:590f5925c2d650b5c9d813c5b3b5fc53f2929c3f8ef463e4ecfa7e052044fb2b", size = 8122784 },
{ url = "https://files.pythonhosted.org/packages/57/b8/5c85d9ae0e40f04e71bedb053aada5d6bab1f9b5399a0937afb5d6b02d98/matplotlib-3.10.6-cp312-cp312-win_arm64.whl", hash = "sha256:f44c8d264a71609c79a78d50349e724f5d5fc3684ead7c2a473665ee63d868aa", size = 7992823 },
{ url = "https://files.pythonhosted.org/packages/12/bb/02c35a51484aae5f49bd29f091286e7af5f3f677a9736c58a92b3c78baeb/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:f2d684c3204fa62421bbf770ddfebc6b50130f9cad65531eeba19236d73bb488", size = 8252296 },
{ url = "https://files.pythonhosted.org/packages/7d/85/41701e3092005aee9a2445f5ee3904d9dbd4a7df7a45905ffef29b7ef098/matplotlib-3.10.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:6f4a69196e663a41d12a728fab8751177215357906436804217d6d9cf0d4d6cf", size = 8116749 },
{ url = "https://files.pythonhosted.org/packages/16/53/8d8fa0ea32a8c8239e04d022f6c059ee5e1b77517769feccd50f1df43d6d/matplotlib-3.10.6-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d6ca6ef03dfd269f4ead566ec6f3fb9becf8dab146fb999022ed85ee9f6b3eb", size = 8693933 },
]
[[package]]
name = "mdurl"
version = "0.1.2"
@@ -2205,19 +1947,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/48/6b/1c6b515a83d5564b1698a61efa245727c8feecf308f4091f565988519d20/numpy-2.3.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:e610832418a2bc09d974cc9fecebfa51e9532d6190223bc5ef6a7402ebf3b5cb", size = 12927246 },
]
[[package]]
name = "omegaconf"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "antlr4-python3-runtime" },
{ name = "pyyaml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/09/48/6388f1bb9da707110532cb70ec4d2822858ddfb44f1cdf1233c20a80ea4b/omegaconf-2.3.0.tar.gz", hash = "sha256:d5d4b6d29955cc50ad50c46dc269bcd92c6e00f5f90d23ab5fee7bfca4ba4cc7", size = 3298120 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/94/1843518e420fa3ed6919835845df698c7e27e183cb997394e4a670973a65/omegaconf-2.3.0-py3-none-any.whl", hash = "sha256:7b4df175cdb08ba400f45cae3bdcae7ba8365db4d165fc65fd04b050ab63b46b", size = 79500 },
]
[[package]]
name = "onnxruntime"
version = "1.22.1"
@@ -2260,24 +1989,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/91/1f1cf577f745e956b276a8b1d3d76fa7a6ee0c2b05db3b001b900f2c71db/openai-1.97.0-py3-none-any.whl", hash = "sha256:a1c24d96f4609f3f7f51c9e1c2606d97cc6e334833438659cfd687e9c972c610", size = 764953 },
]
[[package]]
name = "optuna"
version = "4.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "alembic" },
{ name = "colorlog" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "pyyaml" },
{ name = "sqlalchemy" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/53/a3/bcd1e5500de6ec794c085a277e5b624e60b4fac1790681d7cdbde25b93a2/optuna-4.5.0.tar.gz", hash = "sha256:264844da16dad744dea295057d8bc218646129c47567d52c35a201d9f99942ba", size = 472338 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7f/12/cba81286cbaf0f0c3f0473846cfd992cb240bdcea816bf2ef7de8ed0f744/optuna-4.5.0-py3-none-any.whl", hash = "sha256:5b8a783e84e448b0742501bc27195344a28d2c77bd2feef5b558544d954851b0", size = 400872 },
]
[[package]]
name = "packaging"
version = "25.0"
@@ -2379,15 +2090,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538 },
]
[[package]]
name = "primepy"
version = "1.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/35/77/0cfa1b4697cfb5336f3a96e8bc73327f64610be3a64c97275f1801afb395/primePy-1.3.tar.gz", hash = "sha256:25fd7e25344b0789a5984c75d89f054fcf1f180bef20c998e4befbac92de4669", size = 3914 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/74/c1/bb7e334135859c3a92ec399bc89293ea73f28e815e35b43929c8db6af030/primePy-1.3-py3-none-any.whl", hash = "sha256:5ed443718765be9bf7e2ff4c56cdff71b42140a15b39d054f9d99f0009e2317a", size = 4040 },
]
[[package]]
name = "prometheus-client"
version = "0.22.1"
@@ -2524,109 +2226,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/92/29/06261ea000e2dc1e22907dbbc483a1093665509ea586b29b8986a0e56733/psycopg2_binary-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:18c5ee682b9c6dd3696dad6e54cc7ff3a1a9020df6a5c0f861ef8bfd338c3ca0", size = 1164031 },
]
[[package]]
name = "pyannote-audio"
version = "3.3.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "asteroid-filterbanks" },
{ name = "einops" },
{ name = "huggingface-hub" },
{ name = "lightning" },
{ name = "omegaconf" },
{ name = "pyannote-core" },
{ name = "pyannote-database" },
{ name = "pyannote-metrics" },
{ name = "pyannote-pipeline" },
{ name = "pytorch-metric-learning" },
{ name = "rich" },
{ name = "semver" },
{ name = "soundfile" },
{ name = "speechbrain" },
{ name = "tensorboardx" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torch-audiomentations" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "torchmetrics" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e9/00/3b96ca7ad0641e4f64cfaa2af153dc7da0998ff972280e1c1681b1fcc243/pyannote_audio-3.3.2.tar.gz", hash = "sha256:b2115e86b0db5faedb9f36ee1a150cebd07f7758e65e815accdac1a12ca9c777", size = 13664309 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/17/e6/76049470d90217f9a15a34abf3e92d782cabc3fb4ab27515c9baaa5495d1/pyannote.audio-3.3.2-py2.py3-none-any.whl", hash = "sha256:599c694acd5d193215147ff82d0bf638bb191204ed502bd9fde8ff582e20aa1c", size = 898707 },
{ url = "https://files.pythonhosted.org/packages/b7/9a/98a8992727e762b031ed30451d5726ece46cf8bb7b872a9dba5cef011e5d/pyannote_audio-3.3.2-py2.py3-none-any.whl", hash = "sha256:23e0dcedda920cb2e154e146bcd9663289ee7942d0e012663dad76f2e571ebeb", size = 897827 },
]
[[package]]
name = "pyannote-core"
version = "5.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "scipy" },
{ name = "sortedcontainers" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/65/03/feaf7534206f02c75baf151ce4b8c322b402a6f477c2be82f69d9269cbe6/pyannote.core-5.0.0.tar.gz", hash = "sha256:1a55bcc8bd680ba6be5fa53efa3b6f3d2cdd67144c07b6b4d8d66d5cb0d2096f", size = 59247 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/84/c4/370bc8ba66815a5832ece753a1009388bb07ea353d21c83f2d5a1a436f2c/pyannote.core-5.0.0-py3-none-any.whl", hash = "sha256:04920a6754492242ce0dc6017545595ab643870fe69a994f20c1a5f2da0544d0", size = 58475 },
]
[[package]]
name = "pyannote-database"
version = "5.1.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pandas" },
{ name = "pyannote-core" },
{ name = "pyyaml" },
{ name = "typer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a9/ae/de36413d69a46be87cb612ebbcdc4eacbeebce3bc809124603e44a88fe26/pyannote.database-5.1.3.tar.gz", hash = "sha256:0eaf64c1cc506718de60d2d702f1359b1ae7ff252ee3e4799f1c5e378cd52c31", size = 49957 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a1/64/92d51a3a05615ba58be8ba62a43f9f9f952d9f3646f7e4fb7826e5a3a24e/pyannote.database-5.1.3-py3-none-any.whl", hash = "sha256:37887844c7dfbcc075cb591eddc00aff45fae1ed905344e1f43e0090e63bd40a", size = 48127 },
]
[[package]]
name = "pyannote-metrics"
version = "3.2.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "docopt" },
{ name = "matplotlib" },
{ name = "numpy" },
{ name = "pandas" },
{ name = "pyannote-core" },
{ name = "pyannote-database" },
{ name = "scikit-learn" },
{ name = "scipy" },
{ name = "sympy" },
{ name = "tabulate" },
]
sdist = { url = "https://files.pythonhosted.org/packages/39/2b/6c5f01d3c49aa1c160765946e23782ca6436ae8b9bc514b56319ff5f16e7/pyannote.metrics-3.2.1.tar.gz", hash = "sha256:08024255a3550e96a8e9da4f5f4af326886548480de891414567c8900920ee5c", size = 49086 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6c/7d/035b370ab834b30e849fe9cd092b7bd7f321fcc4a2c56b84e96476b7ede5/pyannote.metrics-3.2.1-py3-none-any.whl", hash = "sha256:46be797cdade26c82773e5018659ae610145260069c7c5bf3d3c8a029ade8e22", size = 51386 },
]
[[package]]
name = "pyannote-pipeline"
version = "3.0.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "docopt" },
{ name = "filelock" },
{ name = "optuna" },
{ name = "pyannote-core" },
{ name = "pyannote-database" },
{ name = "pyyaml" },
{ name = "scikit-learn" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/35/04/4bcfe0dd588577a188328b806f3a7213d8cead0ce5fe5784d01fd57df93f/pyannote.pipeline-3.0.1.tar.gz", hash = "sha256:021794e26a2cf5d8fb5bb1835951e71f5fac33eb14e23dfb7468e16b1b805151", size = 34486 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/83/42/1bf7cbf061ed05c580bfb63bffdd3f3474cbd5c02bee4fac518eea9e9d9e/pyannote.pipeline-3.0.1-py3-none-any.whl", hash = "sha256:819bde4c4dd514f740f2373dfec794832b9fc8e346a35e43a7681625ee187393", size = 31517 },
]
[[package]]
name = "pyasn1"
version = "0.6.1"
@@ -2806,15 +2405,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/80/28/2659c02301b9500751f8d42f9a6632e1508aa5120de5e43042b8b30f8d5d/pyopenssl-25.1.0-py3-none-any.whl", hash = "sha256:2b11f239acc47ac2e5aca04fd7fa829800aeee22a2eb30d744572a157bd8a1ab", size = 56771 },
]
[[package]]
name = "pyparsing"
version = "3.2.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/bb/22/f1129e69d94ffff626bdb5c835506b3a5b4f3d070f17ea295e12c2c6f60f/pyparsing-3.2.3.tar.gz", hash = "sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be", size = 1088608 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/05/e7/df2285f3d08fee213f2d041540fa4fc9ca6c2d44cf36d3a035bf2a8d2bcc/pyparsing-3.2.3-py3-none-any.whl", hash = "sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf", size = 111120 },
]
[[package]]
name = "pypdf"
version = "5.8.0"
@@ -3022,42 +2612,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/45/58/38b5afbc1a800eeea951b9285d3912613f2603bdf897a4ab0f4bd7f405fc/python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104", size = 24546 },
]
[[package]]
name = "pytorch-lightning"
version = "2.5.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "fsspec", extra = ["http"] },
{ name = "lightning-utilities" },
{ name = "packaging" },
{ name = "pyyaml" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchmetrics" },
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/16/78/bce84aab9a5b3b2e9d087d4f1a6be9b481adbfaac4903bc9daaaf09d49a3/pytorch_lightning-2.5.5.tar.gz", hash = "sha256:d6fc8173d1d6e49abfd16855ea05d2eb2415e68593f33d43e59028ecb4e64087", size = 643703 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/04/f6/99a5c66478f469598dee25b0e29b302b5bddd4e03ed0da79608ac964056e/pytorch_lightning-2.5.5-py3-none-any.whl", hash = "sha256:0b533991df2353c0c6ea9ca10a7d0728b73631fd61f5a15511b19bee2aef8af0", size = 832431 },
]
[[package]]
name = "pytorch-metric-learning"
version = "2.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "scikit-learn" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9b/80/6e61b1a91debf4c1b47d441f9a9d7fe2aabcdd9575ed70b2811474eb95c3/pytorch-metric-learning-2.9.0.tar.gz", hash = "sha256:27a626caf5e2876a0fd666605a78cb67ef7597e25d7a68c18053dd503830701f", size = 84530 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/7d/73ef5052f57b7720cad00e16598db3592a5ef4826745ffca67a2f085d4dc/pytorch_metric_learning-2.9.0-py3-none-any.whl", hash = "sha256:d51646006dc87168f00cf954785db133a4c5aac81253877248737aa42ef6432a", size = 127801 },
]
[[package]]
name = "pytz"
version = "2025.2"
@@ -3234,7 +2788,6 @@ evaluation = [
]
local = [
{ name = "faster-whisper" },
{ name = "pyannote-audio" },
]
silero-vad = [
{ name = "silero-vad" },
@@ -3307,10 +2860,7 @@ evaluation = [
{ name = "pydantic", specifier = ">=2.1.1" },
{ name = "tqdm", specifier = ">=4.66.0" },
]
local = [
{ name = "faster-whisper", specifier = ">=0.10.0" },
{ name = "pyannote-audio", specifier = ">=3.3.2" },
]
local = [{ name = "faster-whisper", specifier = ">=0.10.0" }]
silero-vad = [
{ name = "silero-vad", specifier = ">=5.1.2" },
{ name = "torch", specifier = ">=2.8.0", index = "https://download.pytorch.org/whl/cpu" },
@@ -3514,44 +3064,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696 },
]
[[package]]
name = "ruamel-yaml"
version = "0.18.15"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "ruamel-yaml-clib", marker = "platform_python_implementation == 'CPython'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3e/db/f3950f5e5031b618aae9f423a39bf81a55c148aecd15a34527898e752cf4/ruamel.yaml-0.18.15.tar.gz", hash = "sha256:dbfca74b018c4c3fba0b9cc9ee33e53c371194a9000e694995e620490fd40700", size = 146865 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/e5/f2a0621f1781b76a38194acae72f01e37b1941470407345b6e8653ad7640/ruamel.yaml-0.18.15-py3-none-any.whl", hash = "sha256:148f6488d698b7a5eded5ea793a025308b25eca97208181b6a026037f391f701", size = 119702 },
]
[[package]]
name = "ruamel-yaml-clib"
version = "0.2.12"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/20/84/80203abff8ea4993a87d823a5f632e4d92831ef75d404c9fc78d0176d2b5/ruamel.yaml.clib-0.2.12.tar.gz", hash = "sha256:6c8fbb13ec503f99a91901ab46e0b07ae7941cd527393187039aec586fdfd36f", size = 225315 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fb/8f/683c6ad562f558cbc4f7c029abcd9599148c51c54b5ef0f24f2638da9fbb/ruamel.yaml.clib-0.2.12-cp311-cp311-macosx_13_0_arm64.whl", hash = "sha256:4a6679521a58256a90b0d89e03992c15144c5f3858f40d7c18886023d7943db6", size = 132224 },
{ url = "https://files.pythonhosted.org/packages/3c/d2/b79b7d695e2f21da020bd44c782490578f300dd44f0a4c57a92575758a76/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:d84318609196d6bd6da0edfa25cedfbabd8dbde5140a0a23af29ad4b8f91fb1e", size = 641480 },
{ url = "https://files.pythonhosted.org/packages/68/6e/264c50ce2a31473a9fdbf4fa66ca9b2b17c7455b31ef585462343818bd6c/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb43a269eb827806502c7c8efb7ae7e9e9d0573257a46e8e952f4d4caba4f31e", size = 739068 },
{ url = "https://files.pythonhosted.org/packages/86/29/88c2567bc893c84d88b4c48027367c3562ae69121d568e8a3f3a8d363f4d/ruamel.yaml.clib-0.2.12-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:811ea1594b8a0fb466172c384267a4e5e367298af6b228931f273b111f17ef52", size = 703012 },
{ url = "https://files.pythonhosted.org/packages/11/46/879763c619b5470820f0cd6ca97d134771e502776bc2b844d2adb6e37753/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cf12567a7b565cbf65d438dec6cfbe2917d3c1bdddfce84a9930b7d35ea59642", size = 704352 },
{ url = "https://files.pythonhosted.org/packages/02/80/ece7e6034256a4186bbe50dee28cd032d816974941a6abf6a9d65e4228a7/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7dd5adc8b930b12c8fc5b99e2d535a09889941aa0d0bd06f4749e9a9397c71d2", size = 737344 },
{ url = "https://files.pythonhosted.org/packages/f0/ca/e4106ac7e80efbabdf4bf91d3d32fc424e41418458251712f5672eada9ce/ruamel.yaml.clib-0.2.12-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1492a6051dab8d912fc2adeef0e8c72216b24d57bd896ea607cb90bb0c4981d3", size = 714498 },
{ url = "https://files.pythonhosted.org/packages/67/58/b1f60a1d591b771298ffa0428237afb092c7f29ae23bad93420b1eb10703/ruamel.yaml.clib-0.2.12-cp311-cp311-win32.whl", hash = "sha256:bd0a08f0bab19093c54e18a14a10b4322e1eacc5217056f3c063bd2f59853ce4", size = 100205 },
{ url = "https://files.pythonhosted.org/packages/b4/4f/b52f634c9548a9291a70dfce26ca7ebce388235c93588a1068028ea23fcc/ruamel.yaml.clib-0.2.12-cp311-cp311-win_amd64.whl", hash = "sha256:a274fb2cb086c7a3dea4322ec27f4cb5cc4b6298adb583ab0e211a4682f241eb", size = 118185 },
{ url = "https://files.pythonhosted.org/packages/48/41/e7a405afbdc26af961678474a55373e1b323605a4f5e2ddd4a80ea80f628/ruamel.yaml.clib-0.2.12-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:20b0f8dc160ba83b6dcc0e256846e1a02d044e13f7ea74a3d1d56ede4e48c632", size = 133433 },
{ url = "https://files.pythonhosted.org/packages/ec/b0/b850385604334c2ce90e3ee1013bd911aedf058a934905863a6ea95e9eb4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux2014_aarch64.whl", hash = "sha256:943f32bc9dedb3abff9879edc134901df92cfce2c3d5c9348f172f62eb2d771d", size = 647362 },
{ url = "https://files.pythonhosted.org/packages/44/d0/3f68a86e006448fb6c005aee66565b9eb89014a70c491d70c08de597f8e4/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95c3829bb364fdb8e0332c9931ecf57d9be3519241323c5274bd82f709cebc0c", size = 754118 },
{ url = "https://files.pythonhosted.org/packages/52/a9/d39f3c5ada0a3bb2870d7db41901125dbe2434fa4f12ca8c5b83a42d7c53/ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:749c16fcc4a2b09f28843cda5a193e0283e47454b63ec4b81eaa2242f50e4ccd", size = 706497 },
{ url = "https://files.pythonhosted.org/packages/b0/fa/097e38135dadd9ac25aecf2a54be17ddf6e4c23e43d538492a90ab3d71c6/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:bf165fef1f223beae7333275156ab2022cffe255dcc51c27f066b4370da81e31", size = 698042 },
{ url = "https://files.pythonhosted.org/packages/ec/d5/a659ca6f503b9379b930f13bc6b130c9f176469b73b9834296822a83a132/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:32621c177bbf782ca5a18ba4d7af0f1082a3f6e517ac2a18b3974d4edf349680", size = 745831 },
{ url = "https://files.pythonhosted.org/packages/db/5d/36619b61ffa2429eeaefaab4f3374666adf36ad8ac6330d855848d7d36fd/ruamel.yaml.clib-0.2.12-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b82a7c94a498853aa0b272fd5bc67f29008da798d4f93a2f9f289feb8426a58d", size = 715692 },
{ url = "https://files.pythonhosted.org/packages/b1/82/85cb92f15a4231c89b95dfe08b09eb6adca929ef7df7e17ab59902b6f589/ruamel.yaml.clib-0.2.12-cp312-cp312-win32.whl", hash = "sha256:e8c4ebfcfd57177b572e2040777b8abc537cdef58a2120e830124946aa9b42c5", size = 98777 },
{ url = "https://files.pythonhosted.org/packages/d7/8f/c3654f6f1ddb75daf3922c3d8fc6005b1ab56671ad56ffb874d908bfa668/ruamel.yaml.clib-0.2.12-cp312-cp312-win_amd64.whl", hash = "sha256:0467c5965282c62203273b838ae77c0d29d7638c8a4e3a1c8bdd3602c10904e4", size = 115523 },
]
[[package]]
name = "s3transfer"
version = "0.13.0"
@@ -3586,68 +3098,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/69/e2/b011c38e5394c4c18fb5500778a55ec43ad6106126e74723ffaee246f56e/safetensors-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:836cbbc320b47e80acd40e44c8682db0e8ad7123209f69b093def21ec7cafd11", size = 308878 },
]
[[package]]
name = "scikit-learn"
version = "1.7.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "joblib" },
{ name = "numpy" },
{ name = "scipy" },
{ name = "threadpoolctl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/41/84/5f4af978fff619706b8961accac84780a6d298d82a8873446f72edb4ead0/scikit_learn-1.7.1.tar.gz", hash = "sha256:24b3f1e976a4665aa74ee0fcaac2b8fccc6ae77c8e07ab25da3ba6d3292b9802", size = 7190445 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b4/bd/a23177930abd81b96daffa30ef9c54ddbf544d3226b8788ce4c3ef1067b4/scikit_learn-1.7.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:90c8494ea23e24c0fb371afc474618c1019dc152ce4a10e4607e62196113851b", size = 9334838 },
{ url = "https://files.pythonhosted.org/packages/8d/a1/d3a7628630a711e2ac0d1a482910da174b629f44e7dd8cfcd6924a4ef81a/scikit_learn-1.7.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:bb870c0daf3bf3be145ec51df8ac84720d9972170786601039f024bf6d61a518", size = 8651241 },
{ url = "https://files.pythonhosted.org/packages/26/92/85ec172418f39474c1cd0221d611345d4f433fc4ee2fc68e01f524ccc4e4/scikit_learn-1.7.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:40daccd1b5623f39e8943ab39735cadf0bdce80e67cdca2adcb5426e987320a8", size = 9718677 },
{ url = "https://files.pythonhosted.org/packages/df/ce/abdb1dcbb1d2b66168ec43b23ee0cee356b4cc4100ddee3943934ebf1480/scikit_learn-1.7.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:30d1f413cfc0aa5a99132a554f1d80517563c34a9d3e7c118fde2d273c6fe0f7", size = 9511189 },
{ url = "https://files.pythonhosted.org/packages/b2/3b/47b5eaee01ef2b5a80ba3f7f6ecf79587cb458690857d4777bfd77371c6f/scikit_learn-1.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:c711d652829a1805a95d7fe96654604a8f16eab5a9e9ad87b3e60173415cb650", size = 8914794 },
{ url = "https://files.pythonhosted.org/packages/cb/16/57f176585b35ed865f51b04117947fe20f130f78940c6477b6d66279c9c2/scikit_learn-1.7.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3cee419b49b5bbae8796ecd690f97aa412ef1674410c23fc3257c6b8b85b8087", size = 9260431 },
{ url = "https://files.pythonhosted.org/packages/67/4e/899317092f5efcab0e9bc929e3391341cec8fb0e816c4789686770024580/scikit_learn-1.7.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:2fd8b8d35817b0d9ebf0b576f7d5ffbbabdb55536b0655a8aaae629d7ffd2e1f", size = 8637191 },
{ url = "https://files.pythonhosted.org/packages/f3/1b/998312db6d361ded1dd56b457ada371a8d8d77ca2195a7d18fd8a1736f21/scikit_learn-1.7.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:588410fa19a96a69763202f1d6b7b91d5d7a5d73be36e189bc6396bfb355bd87", size = 9486346 },
{ url = "https://files.pythonhosted.org/packages/ad/09/a2aa0b4e644e5c4ede7006748f24e72863ba2ae71897fecfd832afea01b4/scikit_learn-1.7.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e3142f0abe1ad1d1c31a2ae987621e41f6b578144a911ff4ac94781a583adad7", size = 9290988 },
{ url = "https://files.pythonhosted.org/packages/15/fa/c61a787e35f05f17fc10523f567677ec4eeee5f95aa4798dbbbcd9625617/scikit_learn-1.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3ddd9092c1bd469acab337d87930067c87eac6bd544f8d5027430983f1e1ae88", size = 8735568 },
]
[[package]]
name = "scipy"
version = "1.16.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f5/4a/b927028464795439faec8eaf0b03b011005c487bb2d07409f28bf30879c4/scipy-1.16.1.tar.gz", hash = "sha256:44c76f9e8b6e8e488a586190ab38016e4ed2f8a038af7cd3defa903c0a2238b3", size = 30580861 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/da/91/812adc6f74409b461e3a5fa97f4f74c769016919203138a3bf6fc24ba4c5/scipy-1.16.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:c033fa32bab91dc98ca59d0cf23bb876454e2bb02cbe592d5023138778f70030", size = 36552519 },
{ url = "https://files.pythonhosted.org/packages/47/18/8e355edcf3b71418d9e9f9acd2708cc3a6c27e8f98fde0ac34b8a0b45407/scipy-1.16.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:6e5c2f74e5df33479b5cd4e97a9104c511518fbd979aa9b8f6aec18b2e9ecae7", size = 28638010 },
{ url = "https://files.pythonhosted.org/packages/d9/eb/e931853058607bdfbc11b86df19ae7a08686121c203483f62f1ecae5989c/scipy-1.16.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0a55ffe0ba0f59666e90951971a884d1ff6f4ec3275a48f472cfb64175570f77", size = 20909790 },
{ url = "https://files.pythonhosted.org/packages/45/0c/be83a271d6e96750cd0be2e000f35ff18880a46f05ce8b5d3465dc0f7a2a/scipy-1.16.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:f8a5d6cd147acecc2603fbd382fed6c46f474cccfcf69ea32582e033fb54dcfe", size = 23513352 },
{ url = "https://files.pythonhosted.org/packages/7c/bf/fe6eb47e74f762f933cca962db7f2c7183acfdc4483bd1c3813cfe83e538/scipy-1.16.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cb18899127278058bcc09e7b9966d41a5a43740b5bb8dcba401bd983f82e885b", size = 33534643 },
{ url = "https://files.pythonhosted.org/packages/bb/ba/63f402e74875486b87ec6506a4f93f6d8a0d94d10467280f3d9d7837ce3a/scipy-1.16.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:adccd93a2fa937a27aae826d33e3bfa5edf9aa672376a4852d23a7cd67a2e5b7", size = 35376776 },
{ url = "https://files.pythonhosted.org/packages/c3/b4/04eb9d39ec26a1b939689102da23d505ea16cdae3dbb18ffc53d1f831044/scipy-1.16.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:18aca1646a29ee9a0625a1be5637fa798d4d81fdf426481f06d69af828f16958", size = 35698906 },
{ url = "https://files.pythonhosted.org/packages/04/d6/bb5468da53321baeb001f6e4e0d9049eadd175a4a497709939128556e3ec/scipy-1.16.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:d85495cef541729a70cdddbbf3e6b903421bc1af3e8e3a9a72a06751f33b7c39", size = 38129275 },
{ url = "https://files.pythonhosted.org/packages/c4/94/994369978509f227cba7dfb9e623254d0d5559506fe994aef4bea3ed469c/scipy-1.16.1-cp311-cp311-win_amd64.whl", hash = "sha256:226652fca853008119c03a8ce71ffe1b3f6d2844cc1686e8f9806edafae68596", size = 38644572 },
{ url = "https://files.pythonhosted.org/packages/f8/d9/ec4864f5896232133f51382b54a08de91a9d1af7a76dfa372894026dfee2/scipy-1.16.1-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:81b433bbeaf35728dad619afc002db9b189e45eebe2cd676effe1fb93fef2b9c", size = 36575194 },
{ url = "https://files.pythonhosted.org/packages/5c/6d/40e81ecfb688e9d25d34a847dca361982a6addf8e31f0957b1a54fbfa994/scipy-1.16.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:886cc81fdb4c6903a3bb0464047c25a6d1016fef77bb97949817d0c0d79f9e04", size = 28594590 },
{ url = "https://files.pythonhosted.org/packages/0e/37/9f65178edfcc629377ce9a64fc09baebea18c80a9e57ae09a52edf84880b/scipy-1.16.1-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:15240c3aac087a522b4eaedb09f0ad061753c5eebf1ea430859e5bf8640d5919", size = 20866458 },
{ url = "https://files.pythonhosted.org/packages/2c/7b/749a66766871ea4cb1d1ea10f27004db63023074c22abed51f22f09770e0/scipy-1.16.1-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:65f81a25805f3659b48126b5053d9e823d3215e4a63730b5e1671852a1705921", size = 23539318 },
{ url = "https://files.pythonhosted.org/packages/c4/db/8d4afec60eb833a666434d4541a3151eedbf2494ea6d4d468cbe877f00cd/scipy-1.16.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:6c62eea7f607f122069b9bad3f99489ddca1a5173bef8a0c75555d7488b6f725", size = 33292899 },
{ url = "https://files.pythonhosted.org/packages/51/1e/79023ca3bbb13a015d7d2757ecca3b81293c663694c35d6541b4dca53e98/scipy-1.16.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f965bbf3235b01c776115ab18f092a95aa74c271a52577bcb0563e85738fd618", size = 35162637 },
{ url = "https://files.pythonhosted.org/packages/b6/49/0648665f9c29fdaca4c679182eb972935b3b4f5ace41d323c32352f29816/scipy-1.16.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f006e323874ffd0b0b816d8c6a8e7f9a73d55ab3b8c3f72b752b226d0e3ac83d", size = 35490507 },
{ url = "https://files.pythonhosted.org/packages/62/8f/66cbb9d6bbb18d8c658f774904f42a92078707a7c71e5347e8bf2f52bb89/scipy-1.16.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e8fd15fc5085ab4cca74cb91fe0a4263b1f32e4420761ddae531ad60934c2119", size = 37923998 },
{ url = "https://files.pythonhosted.org/packages/14/c3/61f273ae550fbf1667675701112e380881905e28448c080b23b5a181df7c/scipy-1.16.1-cp312-cp312-win_amd64.whl", hash = "sha256:f7b8013c6c066609577d910d1a2a077021727af07b6fab0ee22c2f901f22352a", size = 38508060 },
]
[[package]]
name = "semver"
version = "3.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/d1/d3159231aec234a59dd7d601e9dd9fe96f3afff15efd33c1070019b26132/semver-3.0.4.tar.gz", hash = "sha256:afc7d8c584a5ed0a11033af086e8af226a9c0b206f313e0301f8dd7b6b589602", size = 269730 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/24/4d91e05817e92e3a61c8a21e08fd0f390f5301f1c448b137c57c4bc6e543/semver-3.0.4-py3-none-any.whl", hash = "sha256:9c824d87ba7f7ab4a1890799cec8596f15c1241cb473404ea1cb0c55e4b04746", size = 17912 },
]
[[package]]
name = "sentencepiece"
version = "0.2.0"
@@ -3751,25 +3201,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/32/46/9cb0e58b2deb7f82b84065f37f3bffeb12413f947f9388e4cac22c4621ce/sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0", size = 29575 },
]
[[package]]
name = "soundfile"
version = "0.13.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi" },
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/41/9b873a8c055582859b239be17902a85339bec6a30ad162f98c9b0288a2cc/soundfile-0.13.1.tar.gz", hash = "sha256:b2c68dab1e30297317080a5b43df57e302584c49e2942defdde0acccc53f0e5b", size = 46156 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/64/28/e2a36573ccbcf3d57c00626a21fe51989380636e821b341d36ccca0c1c3a/soundfile-0.13.1-py2.py3-none-any.whl", hash = "sha256:a23c717560da2cf4c7b5ae1142514e0fd82d6bbd9dfc93a50423447142f2c445", size = 25751 },
{ url = "https://files.pythonhosted.org/packages/ea/ab/73e97a5b3cc46bba7ff8650a1504348fa1863a6f9d57d7001c6b67c5f20e/soundfile-0.13.1-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:82dc664d19831933fe59adad199bf3945ad06d84bc111a5b4c0d3089a5b9ec33", size = 1142250 },
{ url = "https://files.pythonhosted.org/packages/a0/e5/58fd1a8d7b26fc113af244f966ee3aecf03cb9293cb935daaddc1e455e18/soundfile-0.13.1-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:743f12c12c4054921e15736c6be09ac26b3b3d603aef6fd69f9dde68748f2593", size = 1101406 },
{ url = "https://files.pythonhosted.org/packages/58/ae/c0e4a53d77cf6e9a04179535766b3321b0b9ced5f70522e4caf9329f0046/soundfile-0.13.1-py2.py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:9c9e855f5a4d06ce4213f31918653ab7de0c5a8d8107cd2427e44b42df547deb", size = 1235729 },
{ url = "https://files.pythonhosted.org/packages/57/5e/70bdd9579b35003a489fc850b5047beeda26328053ebadc1fb60f320f7db/soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl", hash = "sha256:03267c4e493315294834a0870f31dbb3b28a95561b80b134f0bd3cf2d5f0e618", size = 1313646 },
{ url = "https://files.pythonhosted.org/packages/fe/df/8c11dc4dfceda14e3003bb81a0d0edcaaf0796dd7b4f826ea3e532146bba/soundfile-0.13.1-py2.py3-none-win32.whl", hash = "sha256:c734564fab7c5ddf8e9be5bf70bab68042cd17e9c214c06e365e20d64f9a69d5", size = 899881 },
{ url = "https://files.pythonhosted.org/packages/14/e9/6b761de83277f2f02ded7e7ea6f07828ec78e4b229b80e4ca55dd205b9dc/soundfile-0.13.1-py2.py3-none-win_amd64.whl", hash = "sha256:1e70a05a0626524a69e9f0f4dd2ec174b4e9567f4d8b6c11d38b5c289be36ee9", size = 1019162 },
]
[[package]]
name = "soupsieve"
version = "2.7"
@@ -3779,29 +3210,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/9c/0e6afc12c269578be5c0c1c9f4b49a8d32770a080260c333ac04cc1c832d/soupsieve-2.7-py3-none-any.whl", hash = "sha256:6e60cc5c1ffaf1cebcc12e8188320b72071e922c2e897f737cadce79ad5d30c4", size = 36677 },
]
[[package]]
name = "speechbrain"
version = "1.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "huggingface-hub" },
{ name = "hyperpyyaml" },
{ name = "joblib" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "scipy" },
{ name = "sentencepiece" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ab/10/87e666544a4e0cec7cbdc09f26948994831ae0f8bbc58de3bf53b68285ff/speechbrain-1.0.3.tar.gz", hash = "sha256:fcab3c6e90012cecb1eed40ea235733b550137e73da6bfa2340ba191ec714052", size = 747735 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/58/13/e61f1085aebee17d5fc2df19fcc5177c10379be52578afbecdd615a831c9/speechbrain-1.0.3-py3-none-any.whl", hash = "sha256:9859d4c1b1fb3af3b85523c0c89f52e45a04f305622ed55f31aa32dd2fba19e9", size = 864091 },
]
[[package]]
name = "sqlalchemy"
version = "1.4.54"
@@ -3883,15 +3291,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a2/09/77d55d46fd61b4a135c444fc97158ef34a095e5681d0a6c10b75bf356191/sympy-1.14.0-py3-none-any.whl", hash = "sha256:e091cc3e99d2141a0ba2847328f5479b05d94a6635cb96148ccb3f34671bd8f5", size = 6299353 },
]
[[package]]
name = "tabulate"
version = "0.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252 },
]
[[package]]
name = "tenacity"
version = "9.1.2"
@@ -3901,29 +3300,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/30/643397144bfbfec6f6ef821f36f33e57d35946c44a2352d3c9f0ae847619/tenacity-9.1.2-py3-none-any.whl", hash = "sha256:f77bf36710d8b73a50b2dd155c97b870017ad21afe6ab300326b0371b3b05138", size = 28248 },
]
[[package]]
name = "tensorboardx"
version = "2.6.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "packaging" },
{ name = "protobuf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2b/c5/d4cc6e293fb837aaf9f76dd7745476aeba8ef7ef5146c3b3f9ee375fe7a5/tensorboardx-2.6.4.tar.gz", hash = "sha256:b163ccb7798b31100b9f5fa4d6bc22dad362d7065c2f24b51e50731adde86828", size = 4769801 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/1d/b5d63f1a6b824282b57f7b581810d20b7a28ca951f2d5b59f1eb0782c12b/tensorboardx-2.6.4-py3-none-any.whl", hash = "sha256:5970cf3a1f0a6a6e8b180ccf46f3fe832b8a25a70b86e5a237048a7c0beb18e2", size = 87201 },
]
[[package]]
name = "threadpoolctl"
version = "3.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b7/4d/08c89e34946fce2aec4fbb45c9016efd5f4d7f24af8e5d93296e935631d8/threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e", size = 21274 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/d5/f9a850d79b0851d1d4ef6456097579a9005b31fea68726a4ae5f2d82ddd9/threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb", size = 18638 },
]
[[package]]
name = "tiktoken"
version = "0.9.0"
@@ -4064,40 +3440,6 @@ wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_arm64.whl", hash = "sha256:99fc421a5d234580e45957a7b02effbf3e1c884a5dd077afc85352c77bf41434" },
]
[[package]]
name = "torch-audiomentations"
version = "0.12.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "julius" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torch-pitch-shift" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
sdist = { url = "https://files.pythonhosted.org/packages/31/8d/2f8fd7e34c75f5ee8de4310c3bd3f22270acd44d1f809e2fe7c12fbf35f8/torch_audiomentations-0.12.0.tar.gz", hash = "sha256:b02d4c5eb86376986a53eb405cca5e34f370ea9284411237508e720c529f7888", size = 52094 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/21/9d/1ee04f49c15d2d632f6f7102061d7c07652858e6d91b58a091531034e84f/torch_audiomentations-0.12.0-py3-none-any.whl", hash = "sha256:1b80b91d2016ccf83979622cac8f702072a79b7dcc4c2bee40f00b26433a786b", size = 48506 },
]
[[package]]
name = "torch-pitch-shift"
version = "1.2.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
{ name = "primepy" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
{ name = "torchaudio", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or sys_platform == 'darwin'" },
{ name = "torchaudio", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
sdist = { url = "https://files.pythonhosted.org/packages/79/a6/722a832bca75d5079f6731e005b3d0c2eec7c6c6863d030620952d143d57/torch_pitch_shift-1.2.5.tar.gz", hash = "sha256:6e1c7531f08d0f407a4c55e5ff8385a41355c5c5d27ab7fa08632e51defbd0ed", size = 4725 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/27/4c/96ac2a09efb56cc3c41fb3ce9b6f4d8c0604499f7481d4a13a7b03e21382/torch_pitch_shift-1.2.5-py3-none-any.whl", hash = "sha256:6f8500cbc13f1c98b11cde1805ce5084f82cdd195c285f34287541f168a7c6a7", size = 5005 },
]
[[package]]
name = "torchaudio"
version = "2.8.0"
@@ -4145,22 +3487,6 @@ wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torchaudio-2.8.0%2Bcpu-cp312-cp312-win_amd64.whl", hash = "sha256:9b302192b570657c1cc787a4d487ae4bbb7f2aab1c01b1fcc46757e7f86f391e" },
]
[[package]]
name = "torchmetrics"
version = "1.8.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "lightning-utilities" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "torch", version = "2.8.0", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform == 'darwin'" },
{ name = "torch", version = "2.8.0+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "sys_platform != 'darwin'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/85/2e/48a887a59ecc4a10ce9e8b35b3e3c5cef29d902c4eac143378526e7485cb/torchmetrics-1.8.2.tar.gz", hash = "sha256:cf64a901036bf107f17a524009eea7781c9c5315d130713aeca5747a686fe7a5", size = 580679 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/02/21/aa0f434434c48490f91b65962b1ce863fdcce63febc166ca9fe9d706c2b6/torchmetrics-1.8.2-py3-none-any.whl", hash = "sha256:08382fd96b923e39e904c4d570f3d49e2cc71ccabd2a94e0f895d1f0dac86242", size = 983161 },
]
[[package]]
name = "tqdm"
version = "4.67.1"

View File

@@ -0,0 +1,53 @@
# =======================================================
# Reflector Self-Hosted Production — Frontend Configuration
# Generated by: ./scripts/setup-selfhosted.sh
# =======================================================
# Site URL — set to your domain or server IP
# The setup script auto-detects this on Linux.
SITE_URL=https://localhost
NEXTAUTH_URL=https://localhost
NEXTAUTH_SECRET=changeme-generate-a-secure-random-string
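# For example, generate one with: openssl rand -base64 32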
# API URLs
# Public-facing (what the browser uses):
API_URL=https://localhost
WEBSOCKET_URL=auto
# Internal Docker network (server-side rendering):
SERVER_API_URL=http://server:1250
KV_URL=redis://redis:6379
# Authentication
# Set to true when Authentik or password auth is configured
FEATURE_REQUIRE_LOGIN=false
# Auth provider: "authentik" or "credentials"
# Set to "credentials" when using the password auth backend
# AUTH_PROVIDER=credentials
# Leave the Authentik variables below empty when not using Authentik
AUTHENTIK_ISSUER=
AUTHENTIK_REFRESH_TOKEN_URL=
# =======================================================
# Authentik OAuth/OIDC (Optional)
# Uncomment and configure when enabling authentication.
# See docsv2/selfhosted-production.md for setup instructions.
# =======================================================
# FEATURE_REQUIRE_LOGIN=true
# AUTHENTIK_ISSUER=https://authentik.example.com/application/o/reflector
# AUTHENTIK_REFRESH_TOKEN_URL=https://authentik.example.com/application/o/token/
# AUTHENTIK_CLIENT_ID=your-client-id
# AUTHENTIK_CLIENT_SECRET=your-client-secret
# =======================================================
# Feature Flags
# =======================================================
# FEATURE_ROOMS=true
# FEATURE_BROWSE=true
# =======================================================
# Sentry (Optional)
# =======================================================
# SENTRY_DSN=

View File

@@ -1,61 +0,0 @@
import React from "react";
import { Box, Flex } from "@chakra-ui/react";
import type { DagTask } from "../../../lib/UserEventsProvider";
const pulseKeyframes = `
@keyframes dagDotPulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.3; }
}
`;
function humanizeTaskName(name: string): string {
return name
.split("_")
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(" ");
}
function dotProps(status: DagTask["status"]): Record<string, unknown> {
switch (status) {
case "completed":
return { bg: "green.500" };
case "running":
return {
bg: "blue.500",
style: { animation: "dagDotPulse 1.5s ease-in-out infinite" },
};
case "failed":
return { bg: "red.500" };
case "cancelled":
return { bg: "gray.400" };
case "queued":
default:
return {
bg: "transparent",
border: "1px solid",
borderColor: "gray.400",
};
}
}
export default function DagProgressDots({ tasks }: { tasks: DagTask[] }) {
return (
<>
<style>{pulseKeyframes}</style>
<Flex gap="2px" alignItems="center" flexWrap="wrap">
{tasks.map((task) => (
<Box
key={task.name}
w="4px"
h="4px"
borderRadius="full"
flexShrink={0}
title={humanizeTaskName(task.name)}
{...dotProps(task.status)}
/>
))}
</Flex>
</>
);
}

View File

@@ -19,7 +19,6 @@ import {
generateTextFragment,
} from "../../../lib/textHighlight";
import type { components } from "../../../reflector-api";
import type { DagTask } from "../../../lib/UserEventsProvider";
type SearchResult = components["schemas"]["SearchResult"];
type SourceKind = components["schemas"]["SourceKind"];
@@ -30,7 +29,6 @@ interface TranscriptCardsProps {
isLoading?: boolean;
onDelete: (transcriptId: string) => void;
onReprocess: (transcriptId: string) => void;
dagStatusMap?: Map<string, DagTask[]>;
}
function highlightText(text: string, query: string): React.ReactNode {
@@ -104,13 +102,11 @@ function TranscriptCard({
query,
onDelete,
onReprocess,
dagStatusMap,
}: {
result: SearchResult;
query: string;
onDelete: (transcriptId: string) => void;
onReprocess: (transcriptId: string) => void;
dagStatusMap?: Map<string, DagTask[]>;
}) {
const [isExpanded, setIsExpanded] = useState(false);
@@ -141,16 +137,7 @@ function TranscriptCard({
<Box borderWidth={1} p={4} borderRadius="md" fontSize="sm">
<Flex justify="space-between" alignItems="flex-start" gap="2">
<Box>
<TranscriptStatusIcon
status={result.status}
dagStatus={
dagStatusMap?.get(result.id) ??
((result as Record<string, unknown>).dag_status as
| DagTask[]
| null) ??
null
}
/>
<TranscriptStatusIcon status={result.status} />
</Box>
<Box flex="1">
{/* Title with highlighting and text fragment for deep linking */}
@@ -297,7 +284,6 @@ export default function TranscriptCards({
isLoading,
onDelete,
onReprocess,
dagStatusMap,
}: TranscriptCardsProps) {
return (
<Box position="relative">
@@ -329,7 +315,6 @@ export default function TranscriptCards({
query={query}
onDelete={onDelete}
onReprocess={onReprocess}
dagStatusMap={dagStatusMap}
/>
))}
</Stack>

View File

@@ -8,17 +8,13 @@ import {
FaGear,
} from "react-icons/fa6";
import { TranscriptStatus } from "../../../lib/transcript";
import type { DagTask } from "../../../lib/UserEventsProvider";
import DagProgressDots from "./DagProgressDots";
interface TranscriptStatusIconProps {
status: TranscriptStatus;
dagStatus?: DagTask[] | null;
}
export default function TranscriptStatusIcon({
status,
dagStatus,
}: TranscriptStatusIconProps) {
switch (status) {
case "ended":
@@ -40,9 +36,6 @@ export default function TranscriptStatusIcon({
</Box>
);
case "processing":
if (dagStatus && dagStatus.length > 0) {
return <DagProgressDots tasks={dagStatus} />;
}
return (
<Box as="span" title="Processing in progress">
<Icon color="gray.500" as={FaGear} />

View File

@@ -43,7 +43,6 @@ import DeleteTranscriptDialog from "./_components/DeleteTranscriptDialog";
import { formatLocalDate } from "../../lib/time";
import { RECORD_A_MEETING_URL } from "../../api/urls";
import { useUserName } from "../../lib/useUserName";
import { useDagStatusMap } from "../../lib/UserEventsProvider";
const SEARCH_FORM_QUERY_INPUT_NAME = "query" as const;
@@ -274,7 +273,6 @@ export default function TranscriptBrowser() {
}, [JSON.stringify(searchFilters)]);
const userName = useUserName();
const dagStatusMap = useDagStatusMap();
const [deletionLoading, setDeletionLoading] = useState(false);
const cancelRef = React.useRef(null);
const [transcriptToDeleteId, setTranscriptToDeleteId] =
@@ -410,7 +408,6 @@ export default function TranscriptBrowser() {
isLoading={searchLoading}
onDelete={setTranscriptToDeleteId}
onReprocess={handleProcessTranscript}
dagStatusMap={dagStatusMap}
/>
{!searchLoading && results.length === 0 && (

View File

@@ -1,190 +0,0 @@
"use client";
import { useEffect, useState } from "react";
import { Table, Box, Icon, Spinner, Text, Badge } from "@chakra-ui/react";
import { FaCheck, FaXmark, FaClock, FaMinus } from "react-icons/fa6";
import type { DagTask, DagTaskStatus } from "../../useWebSockets";
function humanizeTaskName(name: string): string {
return name
.split("_")
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(" ");
}
function formatDuration(seconds: number): string {
if (seconds < 60) {
return `${Math.round(seconds)}s`;
}
const minutes = Math.floor(seconds / 60);
const remainingSeconds = Math.round(seconds % 60);
return `${minutes}m ${remainingSeconds}s`;
}
function StatusIcon({ status }: { status: DagTaskStatus }) {
switch (status) {
case "completed":
return (
<Box as="span" title="Completed">
<Icon color="green.500" as={FaCheck} />
</Box>
);
case "running":
return <Spinner size="sm" color="blue.500" />;
case "failed":
return (
<Box as="span" title="Failed">
<Icon color="red.500" as={FaXmark} />
</Box>
);
case "queued":
return (
<Box as="span" title="Queued">
<Icon color="gray.400" as={FaClock} />
</Box>
);
case "cancelled":
return (
<Box as="span" title="Cancelled">
<Icon color="gray.400" as={FaMinus} />
</Box>
);
default:
return null;
}
}
function ElapsedTimer({ startedAt }: { startedAt: string }) {
const [elapsed, setElapsed] = useState<number>(() => {
return (Date.now() - new Date(startedAt).getTime()) / 1000;
});
useEffect(() => {
const interval = setInterval(() => {
setElapsed((Date.now() - new Date(startedAt).getTime()) / 1000);
}, 1000);
return () => clearInterval(interval);
}, [startedAt]);
return <Text fontSize="sm">{formatDuration(elapsed)}</Text>;
}
function DurationCell({ task }: { task: DagTask }) {
if (task.status === "completed" && task.duration_seconds !== null) {
return <Text fontSize="sm">{formatDuration(task.duration_seconds)}</Text>;
}
if (task.status === "running" && task.started_at) {
return <ElapsedTimer startedAt={task.started_at} />;
}
return (
<Text fontSize="sm" color="gray.400">
--
</Text>
);
}
function ProgressCell({ task }: { task: DagTask }) {
if (task.progress_pct === null && task.children_total === null) {
return null;
}
return (
<Box>
{task.progress_pct !== null && (
<Box
w="100%"
h="6px"
bg="gray.200"
borderRadius="full"
overflow="hidden"
>
<Box
h="100%"
w={`${Math.min(100, Math.max(0, task.progress_pct))}%`}
bg={task.status === "failed" ? "red.400" : "blue.400"}
borderRadius="full"
transition="width 0.3s ease"
/>
</Box>
)}
{task.children_total !== null && (
<Badge
size="sm"
colorPalette="gray"
mt={task.progress_pct !== null ? 1 : 0}
>
{task.children_completed ?? 0}/{task.children_total}
</Badge>
)}
</Box>
);
}
function TaskRow({ task }: { task: DagTask }) {
const [expanded, setExpanded] = useState(false);
const hasFailed = task.status === "failed" && task.error;
return (
<>
<Table.Row
cursor={hasFailed ? "pointer" : "default"}
onClick={hasFailed ? () => setExpanded((prev) => !prev) : undefined}
_hover={hasFailed ? { bg: "gray.50" } : undefined}
>
<Table.Cell>
<Text fontSize="sm" fontWeight="medium">
{humanizeTaskName(task.name)}
</Text>
</Table.Cell>
<Table.Cell>
<StatusIcon status={task.status} />
</Table.Cell>
<Table.Cell>
<DurationCell task={task} />
</Table.Cell>
<Table.Cell>
<ProgressCell task={task} />
</Table.Cell>
</Table.Row>
{hasFailed && expanded && (
<Table.Row>
<Table.Cell colSpan={4}>
<Box bg="red.50" p={3} borderRadius="md">
<Text fontSize="xs" color="red.700" whiteSpace="pre-wrap">
{task.error}
</Text>
</Box>
</Table.Cell>
</Table.Row>
)}
</>
);
}
export default function DagProgressTable({ tasks }: { tasks: DagTask[] }) {
return (
<Box w="100%" overflowX="auto">
<Table.Root size="sm">
<Table.Header>
<Table.Row>
<Table.ColumnHeader fontWeight="600">Task</Table.ColumnHeader>
<Table.ColumnHeader fontWeight="600" width="80px">
Status
</Table.ColumnHeader>
<Table.ColumnHeader fontWeight="600" width="100px">
Duration
</Table.ColumnHeader>
<Table.ColumnHeader fontWeight="600" width="140px">
Progress
</Table.ColumnHeader>
</Table.Row>
</Table.Header>
<Table.Body>
{tasks.map((task) => (
<TaskRow key={task.name} task={task} />
))}
</Table.Body>
</Table.Root>
</Box>
);
}

View File

@@ -12,9 +12,6 @@ import { useRouter } from "next/navigation";
import { useTranscriptGet } from "../../../../lib/apiHooks";
import { parseNonEmptyString } from "../../../../lib/utils";
import { useWebSockets } from "../../useWebSockets";
import type { DagTask } from "../../useWebSockets";
import { useDagStatusMap } from "../../../../lib/UserEventsProvider";
import DagProgressTable from "./DagProgressTable";
type TranscriptProcessing = {
params: Promise<{
@@ -28,21 +25,10 @@ export default function TranscriptProcessing(details: TranscriptProcessing) {
const router = useRouter();
const transcript = useTranscriptGet(transcriptId);
const { status: wsStatus, dagStatus: wsDagStatus } =
useWebSockets(transcriptId);
const userDagStatusMap = useDagStatusMap();
const userDagStatus = userDagStatusMap.get(transcriptId) ?? null;
const restDagStatus: DagTask[] | null =
((transcript.data as Record<string, unknown>)?.dag_status as
| DagTask[]
| null) ?? null;
// Prefer transcript room WS (most granular), then user room WS, then REST
const dagStatus = wsDagStatus ?? userDagStatus ?? restDagStatus;
useWebSockets(transcriptId);
useEffect(() => {
const status = wsStatus?.value ?? transcript.data?.status;
const status = transcript.data?.status;
if (!status) return;
if (status === "ended" || status === "error") {
@@ -57,7 +43,6 @@ export default function TranscriptProcessing(details: TranscriptProcessing) {
router.replace(dest);
}
}, [
wsStatus?.value,
transcript.data?.status,
transcript.data?.source_kind,
router,
@@ -91,29 +76,11 @@ export default function TranscriptProcessing(details: TranscriptProcessing) {
w={{ base: "full", md: "container.xl" }}
>
<Center h={"full"} w="full">
<VStack
gap={10}
bg="gray.100"
p={10}
borderRadius="md"
maxW="600px"
w="full"
>
{dagStatus ? (
<>
<Heading size={"md"} textAlign="center">
Processing recording
</Heading>
<DagProgressTable tasks={dagStatus} />
</>
) : (
<>
<Spinner size="xl" color="blue.500" />
<Heading size={"md"} textAlign="center">
Processing recording
</Heading>
</>
)}
<VStack gap={10} bg="gray.100" p={10} borderRadius="md" maxW="500px">
<Spinner size="xl" color="blue.500" />
<Heading size={"md"} textAlign="center">
Processing recording
</Heading>
<Text color="gray.600" textAlign="center">
You can safely return to the library while your recording is being
processed.

View File

@@ -78,7 +78,10 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
// Audio is not deleted, proceed to load it
audioElement = document.createElement("audio");
audioElement.src = `${API_URL}/v1/transcripts/${transcriptId}/audio/mp3`;
const audioUrl = `${API_URL}/v1/transcripts/${transcriptId}/audio/mp3`;
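// <audio> element requests cannot carry an Authorization header, so the access token (when present) is appended as a query parameter.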
audioElement.src = accessTokenInfo
? `${audioUrl}?token=${encodeURIComponent(accessTokenInfo)}`
: audioUrl;
audioElement.crossOrigin = "anonymous";
audioElement.preload = "auto";

View File

@@ -23,7 +23,16 @@ const useWebRTC = (
let p: Peer;
try {
p = new Peer({ initiator: true, stream: stream });
p = new Peer({
initiator: true,
stream: stream,
// Disable trickle ICE: single SDP exchange (offer + answer) with all candidates.
// Required for HTTP-based signaling; trickle needs WebSocket for candidate exchange.
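// With trickle disabled, the peer emits a single "signal" event containing the complete offer
// once ICE gathering finishes; the server's single answer is applied back via peer.signal(...).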
trickle: false,
config: {
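// Public STUN server so clients behind NAT can discover their server-reflexive address.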
iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
},
});
} catch (error) {
setError(error as Error, "Error creating WebRTC");
return;

View File

@@ -1,21 +1,22 @@
import { useEffect, useState } from "react";
import { Topic, FinalSummary, Status } from "./webSocketTypes";
import { useError } from "../../(errors)/errorContext";
import type { components } from "../../reflector-api";
import type { components, operations } from "../../reflector-api";
type AudioWaveform = components["schemas"]["AudioWaveform"];
type GetTranscriptSegmentTopic =
components["schemas"]["GetTranscriptSegmentTopic"];
import { useQueryClient } from "@tanstack/react-query";
import { $api, WEBSOCKET_URL } from "../../lib/apiClient";
import { WEBSOCKET_URL } from "../../lib/apiClient";
import {
invalidateTranscript,
invalidateTranscriptTopics,
invalidateTranscriptWaveform,
} from "../../lib/apiHooks";
import { NonEmptyString } from "../../lib/utils";
import { useAuth } from "../../lib/AuthProvider";
import { parseNonEmptyString } from "../../lib/utils";
import type { DagTask } from "../../lib/dagTypes";
export type { DagTask, DagTaskStatus } from "../../lib/dagTypes";
type TranscriptWsEvent =
operations["v1_transcript_get_websocket_events"]["responses"][200]["content"]["application/json"];
export type UseWebSockets = {
transcriptTextLive: string;
@@ -27,10 +28,10 @@ export type UseWebSockets = {
status: Status | null;
waveform: AudioWaveform | null;
duration: number | null;
dagStatus: DagTask[] | null;
};
export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
const auth = useAuth();
const [transcriptTextLive, setTranscriptTextLive] = useState<string>("");
const [translateText, setTranslateText] = useState<string>("");
const [title, setTitle] = useState<string>("");
@@ -44,7 +45,6 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
summary: "",
});
const [status, setStatus] = useState<Status | null>(null);
const [dagStatus, setDagStatus] = useState<DagTask[] | null>(null);
const { setError } = useError();
const queryClient = useQueryClient();
@@ -336,175 +336,168 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
};
if (!transcriptId) return;
const tsId = parseNonEmptyString(transcriptId);
const MAX_RETRIES = 10;
const url = `${WEBSOCKET_URL}/v1/transcripts/${transcriptId}/events`;
let ws = new WebSocket(url);
let ws: WebSocket | null = null;
let retryCount = 0;
let retryTimeout: ReturnType<typeof setTimeout> | null = null;
let intentionalClose = false;
ws.onopen = () => {
console.debug("WebSocket connection opened");
};
const connect = () => {
const subprotocols = auth.accessToken
? ["bearer", auth.accessToken]
: undefined;
ws = new WebSocket(url, subprotocols);
ws.onmessage = (event) => {
const message = JSON.parse(event.data);
ws.onopen = () => {
console.debug("WebSocket connection opened");
retryCount = 0;
};
try {
switch (message.event) {
case "TRANSCRIPT":
const newText = (message.data.text ?? "").trim();
const newTranslation = (message.data.translation ?? "").trim();
ws.onmessage = (event) => {
const message: TranscriptWsEvent = JSON.parse(event.data);
if (!newText) break;
try {
switch (message.event) {
case "TRANSCRIPT": {
const newText = (message.data.text ?? "").trim();
const newTranslation = (message.data.translation ?? "").trim();
console.debug("TRANSCRIPT event:", newText);
setTextQueue((prevQueue) => [...prevQueue, newText]);
setTranslationQueue((prevQueue) => [...prevQueue, newTranslation]);
if (!newText) break;
setAccumulatedText((prevText) => prevText + " " + newText);
break;
console.debug("TRANSCRIPT event:", newText);
setTextQueue((prevQueue) => [...prevQueue, newText]);
setTranslationQueue((prevQueue) => [
...prevQueue,
newTranslation,
]);
case "TOPIC":
setTopics((prevTopics) => {
const topic = message.data as Topic;
const index = prevTopics.findIndex(
(prevTopic) => prevTopic.id === topic.id,
);
if (index >= 0) {
prevTopics[index] = topic;
return prevTopics;
}
setAccumulatedText((prevText) =>
prevText.slice(topic.transcript.length),
);
return [...prevTopics, topic];
});
console.debug("TOPIC event:", message.data);
// Invalidate topics query to sync with WebSocket data
invalidateTranscriptTopics(
queryClient,
transcriptId as NonEmptyString,
);
break;
case "FINAL_SHORT_SUMMARY":
console.debug("FINAL_SHORT_SUMMARY event:", message.data);
break;
case "FINAL_LONG_SUMMARY":
if (message.data) {
setFinalSummary(message.data);
// Invalidate transcript query to sync summary
invalidateTranscript(queryClient, transcriptId as NonEmptyString);
setAccumulatedText((prevText) => prevText + " " + newText);
break;
}
break;
case "FINAL_TITLE":
console.debug("FINAL_TITLE event:", message.data);
if (message.data) {
case "TOPIC":
setTopics((prevTopics) => {
const topic = message.data;
const index = prevTopics.findIndex(
(prevTopic) => prevTopic.id === topic.id,
);
if (index >= 0) {
prevTopics[index] = topic;
return prevTopics;
}
setAccumulatedText((prevText) =>
prevText.slice(topic.transcript?.length ?? 0),
);
return [...prevTopics, topic];
});
console.debug("TOPIC event:", message.data);
invalidateTranscriptTopics(queryClient, tsId);
break;
case "FINAL_SHORT_SUMMARY":
console.debug("FINAL_SHORT_SUMMARY event:", message.data);
break;
case "FINAL_LONG_SUMMARY":
setFinalSummary({ summary: message.data.long_summary });
invalidateTranscript(queryClient, tsId);
break;
case "FINAL_TITLE":
console.debug("FINAL_TITLE event:", message.data);
setTitle(message.data.title);
// Invalidate transcript query to sync title
invalidateTranscript(queryClient, transcriptId as NonEmptyString);
}
break;
invalidateTranscript(queryClient, tsId);
break;
case "WAVEFORM":
console.debug(
"WAVEFORM event length:",
message.data.waveform.length,
);
if (message.data) {
setWaveForm(message.data.waveform);
invalidateTranscriptWaveform(
queryClient,
transcriptId as NonEmptyString,
case "WAVEFORM":
console.debug(
"WAVEFORM event length:",
message.data.waveform.length,
);
}
break;
case "DURATION":
console.debug("DURATION event:", message.data);
if (message.data) {
setWaveForm({ data: message.data.waveform });
invalidateTranscriptWaveform(queryClient, tsId);
break;
case "DURATION":
console.debug("DURATION event:", message.data);
setDuration(message.data.duration);
}
break;
break;
case "STATUS":
console.log("STATUS event:", message.data);
if (message.data.value === "error") {
setError(
Error("Websocket error status"),
"There was an error processing this meeting.",
case "STATUS":
console.log("STATUS event:", message.data);
if (message.data.value === "error") {
setError(
Error("Websocket error status"),
"There was an error processing this meeting.",
);
}
setStatus(message.data);
invalidateTranscript(queryClient, tsId);
if (message.data.value === "ended") {
intentionalClose = true;
ws?.close();
}
break;
case "ACTION_ITEMS":
console.debug("ACTION_ITEMS event:", message.data);
invalidateTranscript(queryClient, tsId);
break;
default: {
const _exhaustive: never = message;
console.warn(
`Received unknown WebSocket event: ${(_exhaustive as TranscriptWsEvent).event}`,
);
}
setStatus(message.data);
invalidateTranscript(queryClient, transcriptId as NonEmptyString);
if (message.data.value === "ended") {
ws.close();
}
break;
case "DAG_STATUS":
if (message.data?.tasks) {
setDagStatus(message.data.tasks);
}
break;
case "DAG_TASK_PROGRESS":
if (message.data) {
setDagStatus(
(prev) =>
prev?.map((t) =>
t.name === message.data.task_name
? { ...t, progress_pct: message.data.progress_pct }
: t,
) ?? null,
);
}
break;
default:
setError(
new Error(`Received unknown WebSocket event: ${message.event}`),
);
}
} catch (error) {
setError(error);
}
} catch (error) {
setError(error);
}
};
};
ws.onerror = (error) => {
console.error("WebSocket error:", error);
setError(new Error("A WebSocket error occurred."));
};
ws.onerror = (error) => {
console.error("WebSocket error:", error);
};
ws.onclose = (event) => {
console.debug("WebSocket connection closed");
switch (event.code) {
case 1000: // Normal Closure:
break;
case 1005: // Closure by client FF
break;
case 1001: // Navigate away
break;
case 1006: // Closed by client Chrome
console.warn(
"WebSocket closed by client, likely duplicated connection in react dev mode",
ws.onclose = (event) => {
console.debug("WebSocket connection closed, code:", event.code);
if (intentionalClose) return;
const normalCodes = [1000, 1001, 1005];
if (normalCodes.includes(event.code)) return;
if (retryCount < MAX_RETRIES) {
const delay = Math.min(1000 * Math.pow(2, retryCount), 30000);
console.log(
`WebSocket reconnecting in ${delay}ms (attempt ${retryCount + 1}/${MAX_RETRIES})`,
);
break;
default:
if (retryCount === 0) {
setError(
new Error("WebSocket connection lost"),
"Connection lost. Reconnecting...",
);
}
retryCount++;
retryTimeout = setTimeout(connect, delay);
} else {
setError(
new Error(`WebSocket closed unexpectedly with code: ${event.code}`),
"Disconnected from the server. Please refresh the page.",
);
console.log(
"Socket is closed. Reconnect will be attempted in 1 second.",
event.reason,
);
// todo handle reconnect with socket.io
}
}
};
};
connect();
return () => {
ws.close();
intentionalClose = true;
if (retryTimeout) clearTimeout(retryTimeout);
ws?.close();
};
}, [transcriptId]);
@@ -518,6 +511,5 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
status,
waveform,
duration,
dagStatus,
};
};
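
The refactor above swaps the one-shot socket for a reconnecting one: intentional closes and normal close codes (1000, 1001, 1005) are ignored, while other closures retry with exponential backoff (1s, 2s, 4s, ..., capped at 30s) for up to 10 attempts. A minimal sketch of just that retry loop, with simplified names and without the bearer subprotocol negotiation or event dispatch the real hook adds:

```typescript
// Minimal sketch (assumed names) of the reconnect behavior introduced in the hook above.
const MAX_RETRIES = 10;
const NORMAL_CLOSE_CODES = [1000, 1001, 1005];

function connectWithRetry(
  url: string,
  onMessage: (event: MessageEvent) => void,
): () => void {
  let ws: WebSocket | null = null;
  let retryCount = 0;
  let retryTimeout: ReturnType<typeof setTimeout> | null = null;
  let intentionalClose = false;

  const connect = () => {
    ws = new WebSocket(url);
    ws.onopen = () => {
      retryCount = 0; // a successful connection resets the backoff
    };
    ws.onmessage = onMessage;
    ws.onclose = (event) => {
      if (intentionalClose || NORMAL_CLOSE_CODES.includes(event.code)) return;
      if (retryCount >= MAX_RETRIES) return; // give up; let the caller surface an error
      const delay = Math.min(1000 * Math.pow(2, retryCount), 30000); // 1s, 2s, 4s, ... capped at 30s
      retryCount++;
      retryTimeout = setTimeout(connect, delay);
    };
  };

  connect();
  // The cleanup marks the close as intentional so onclose does not schedule another attempt.
  return () => {
    intentionalClose = true;
    if (retryTimeout) clearTimeout(retryTimeout);
    ws?.close();
  };
}
```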

View File

@@ -23,7 +23,7 @@ export default function UserInfo() {
className="font-light px-2"
onClick={(e) => {
e.preventDefault();
auth.signIn("authentik");
auth.signIn();
}}
>
Log in

View File

@@ -1,31 +1,15 @@
"use client";
import React, { useEffect, useRef, useState } from "react";
import React, { useEffect, useRef } from "react";
import { useQueryClient } from "@tanstack/react-query";
import { WEBSOCKET_URL } from "./apiClient";
import { useAuth } from "./AuthProvider";
import { z } from "zod";
import {
invalidateTranscript,
invalidateTranscriptLists,
TRANSCRIPT_SEARCH_URL,
} from "./apiHooks";
import type { NonEmptyString } from "./utils";
import { invalidateTranscript, invalidateTranscriptLists } from "./apiHooks";
import { parseNonEmptyString } from "./utils";
import type { operations } from "../reflector-api";
import type { DagTask } from "./dagTypes";
export type { DagTask, DagTaskStatus } from "./dagTypes";
const DagStatusContext = React.createContext<Map<string, DagTask[]>>(new Map());
export function useDagStatusMap() {
return React.useContext(DagStatusContext);
}
const UserEvent = z.object({
event: z.string(),
});
type UserEvent = z.TypeOf<typeof UserEvent>;
type UserWsEvent =
operations["v1_user_get_websocket_events"]["responses"][200]["content"]["application/json"];
class UserEventsStore {
private socket: WebSocket | null = null;
@@ -109,9 +93,6 @@ export function UserEventsProvider({
const queryClient = useQueryClient();
const tokenRef = useRef<string | null>(null);
const detachRef = useRef<(() => void) | null>(null);
const [dagStatusMap, setDagStatusMap] = useState<Map<string, DagTask[]>>(
new Map(),
);
useEffect(() => {
// Only tear down when the user is truly unauthenticated
@@ -150,55 +131,26 @@ export function UserEventsProvider({
if (!detachRef.current) {
const onMessage = (event: MessageEvent) => {
try {
const fullMsg = JSON.parse(event.data);
const msg = UserEvent.parse(fullMsg);
const eventName = msg.event;
const invalidateList = () => invalidateTranscriptLists(queryClient);
const msg: UserWsEvent = JSON.parse(event.data);
switch (eventName) {
switch (msg.event) {
case "TRANSCRIPT_CREATED":
case "TRANSCRIPT_DELETED":
case "TRANSCRIPT_STATUS":
case "TRANSCRIPT_FINAL_TITLE":
case "TRANSCRIPT_DURATION":
invalidateList().then(() => {});
break;
case "TRANSCRIPT_STATUS": {
invalidateList().then(() => {});
const transcriptId = fullMsg.data?.id as string | undefined;
if (transcriptId) {
invalidateTranscript(
queryClient,
transcriptId as NonEmptyString,
).then(() => {});
}
const status = fullMsg.data?.value as string | undefined;
if (transcriptId && status && status !== "processing") {
setDagStatusMap((prev) => {
const next = new Map(prev);
next.delete(transcriptId);
return next;
});
}
invalidateTranscriptLists(queryClient).then(() => {});
invalidateTranscript(
queryClient,
parseNonEmptyString(msg.data.id),
).then(() => {});
break;
default: {
const _exhaustive: never = msg;
console.warn(
`Unknown user event: ${(_exhaustive as UserWsEvent).event}`,
);
}
case "TRANSCRIPT_DAG_STATUS": {
const transcriptId = fullMsg.data?.id as string | undefined;
const tasks = fullMsg.data?.tasks as DagTask[] | undefined;
if (transcriptId && tasks) {
setDagStatusMap((prev) => {
const next = new Map(prev);
next.set(transcriptId, tasks);
return next;
});
}
break;
}
default:
// Ignore other content events for list updates
break;
}
} catch (err) {
console.warn("Invalid user event message", event.data);
@@ -225,9 +177,5 @@ export function UserEventsProvider({
};
}, []);
return (
<DagStatusContext.Provider value={dagStatusMap}>
{children}
</DagStatusContext.Provider>
);
return <>{children}</>;
}

View File

@@ -13,9 +13,33 @@ export const API_URL = !isBuildPhase
? getClientEnv().API_URL
: "http://localhost";
export const WEBSOCKET_URL = !isBuildPhase
? getClientEnv().WEBSOCKET_URL || "ws://127.0.0.1:1250"
: "ws://localhost";
/**
* Derive a WebSocket URL from the API_URL.
* Handles full URLs (http://host/api, https://host/api) and relative paths (/api).
* For full URLs, ws/wss is derived from the URL's own protocol.
* For relative URLs, ws/wss is derived from window.location.protocol.
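* Illustrative examples: "https://example.com/api" -> "wss://example.com/api";
* "/api" on an https page -> "wss://<current-host>/api".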
*/
const deriveWebSocketUrl = (apiUrl: string): string => {
if (typeof window === "undefined") {
return "ws://localhost";
}
const parsed = new URL(apiUrl, window.location.origin);
const wsProtocol = parsed.protocol === "https:" ? "wss:" : "ws:";
// Normalize: remove trailing slash from pathname
const pathname = parsed.pathname.replace(/\/+$/, "");
return `${wsProtocol}//${parsed.host}${pathname}`;
};
const resolveWebSocketUrl = (): string => {
if (isBuildPhase) return "ws://localhost";
const raw = getClientEnv().WEBSOCKET_URL;
if (!raw || raw === "auto") {
return deriveWebSocketUrl(API_URL);
}
return raw;
};
export const WEBSOCKET_URL = resolveWebSocketUrl();
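// Worked examples of the resolution above (hostnames are illustrative assumptions):
//   WEBSOCKET_URL unset or "auto", API_URL = "https://reflector.example.com/api"
//     -> protocol taken from the URL itself: "wss://reflector.example.com/api"
//   WEBSOCKET_URL unset, API_URL = "/api", page served from http://localhost:3000
//     -> new URL("/api", window.location.origin) resolves to "ws://localhost:3000/api"
//   WEBSOCKET_URL = "ws://127.0.0.1:1250"
//     -> used verbatim, no derivation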
export const client = createClient<paths>({
baseUrl: API_URL,


@@ -7,6 +7,7 @@ import type { components } from "../reflector-api";
import { useAuth } from "./AuthProvider";
import { MeetingId } from "./types";
import { NonEmptyString } from "./utils";
import type { TranscriptStatus } from "./transcript";
/*
* XXX error types returned from the hooks are not always correct; declared types are ValidationError but real type could be string or any other
@@ -104,6 +105,12 @@ export function useTranscriptProcess() {
});
}
const ACTIVE_TRANSCRIPT_STATUSES = new Set<TranscriptStatus>([
"processing",
"uploaded",
"recording",
]);
export function useTranscriptGet(transcriptId: NonEmptyString | null) {
return $api.useQuery(
"get",
@@ -117,6 +124,10 @@ export function useTranscriptGet(transcriptId: NonEmptyString | null) {
},
{
enabled: !!transcriptId,
refetchInterval: (query) => {
const status = query.state.data?.status;
return status && ACTIVE_TRANSCRIPT_STATUSES.has(status) ? 5000 : false;
},
},
);
}
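// Effect of the refetchInterval above (descriptive sketch): React Query re-runs the GET every
// 5 seconds while the fetched status is still active ("processing", "uploaded", "recording");
// once it leaves that set (e.g. "ended" or "error") the callback returns false and polling stops.
//   query.state.data?.status === "processing" -> 5000 (poll again in 5 s)
//   query.state.data?.status === "ended"      -> false (stop polling)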


@@ -1,5 +1,6 @@
import { AuthOptions } from "next-auth";
import AuthentikProvider from "next-auth/providers/authentik";
import CredentialsProvider from "next-auth/providers/credentials";
import type { JWT } from "next-auth/jwt";
import { JWTWithAccessToken, CustomSession } from "./types";
import {
@@ -52,7 +53,7 @@ const TOKEN_CACHE_TTL = REFRESH_ACCESS_TOKEN_BEFORE;
const getAuthentikClientId = () => getNextEnvVar("AUTHENTIK_CLIENT_ID");
const getAuthentikClientSecret = () => getNextEnvVar("AUTHENTIK_CLIENT_SECRET");
const getAuthentikRefreshTokenUrl = () =>
getNextEnvVar("AUTHENTIK_REFRESH_TOKEN_URL");
getNextEnvVar("AUTHENTIK_REFRESH_TOKEN_URL").replace(/\/+$/, "");
const getAuthentikIssuer = () => {
const stringUrl = getNextEnvVar("AUTHENTIK_ISSUER");
@@ -61,113 +62,194 @@ const getAuthentikIssuer = () => {
} catch (e) {
throw new Error("AUTHENTIK_ISSUER is not a valid URL: " + stringUrl);
}
return stringUrl;
return stringUrl.replace(/\/+$/, "");
};
export const authOptions = (): AuthOptions =>
featureEnabled("requireLogin")
? {
providers: [
AuthentikProvider({
...(() => {
const [clientId, clientSecret, issuer] = sequenceThrows(
getAuthentikClientId,
getAuthentikClientSecret,
getAuthentikIssuer,
);
return {
clientId,
clientSecret,
issuer,
};
})(),
authorization: {
params: {
scope: "openid email profile offline_access",
},
},
}),
],
session: {
strategy: "jwt",
export const authOptions = (): AuthOptions => {
if (!featureEnabled("requireLogin")) {
return { providers: [] };
}
const authProvider = process.env.AUTH_PROVIDER;
if (authProvider === "credentials") {
return credentialsAuthOptions();
}
return authentikAuthOptions();
};
function credentialsAuthOptions(): AuthOptions {
return {
providers: [
CredentialsProvider({
name: "Password",
credentials: {
email: { label: "Email", type: "email" },
password: { label: "Password", type: "password" },
},
callbacks: {
async jwt({ token, account, user }) {
if (account && !account.access_token) {
async authorize(credentials) {
if (!credentials?.email || !credentials?.password) return null;
const apiUrl = getNextEnvVar("SERVER_API_URL");
const response = await fetch(`${apiUrl}/v1/auth/login`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
email: credentials.email,
password: credentials.password,
}),
});
if (!response.ok) return null;
const data = await response.json();
return {
id: "pending",
email: credentials.email,
accessToken: data.access_token,
expiresIn: data.expires_in,
};
},
}),
],
session: { strategy: "jwt" },
pages: {
signIn: "/login",
},
callbacks: {
async jwt({ token, user }) {
if (user) {
// First login - user comes from authorize()
const typedUser = user as any;
token.accessToken = typedUser.accessToken;
token.accessTokenExpires = Date.now() + typedUser.expiresIn * 1000;
// Resolve actual user ID from backend
const userId = await getUserId(typedUser.accessToken);
if (userId) {
token.sub = userId;
}
token.email = typedUser.email;
}
return token;
},
async session({ session, token }) {
const extendedToken = token as JWTWithAccessToken;
return {
...session,
accessToken: extendedToken.accessToken,
accessTokenExpires: extendedToken.accessTokenExpires,
error: extendedToken.error,
user: {
id: assertExistsAndNonEmptyString(token.sub, "User ID required"),
name: extendedToken.name,
email: extendedToken.email,
},
} satisfies CustomSession;
},
},
};
}
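// Assumed shape of the single-user login exchange behind authorize() above (illustrative;
// only the access_token / expires_in field names come from the code):
//   POST {SERVER_API_URL}/v1/auth/login   body: { "email": "...", "password": "..." }
//   200 -> { "access_token": "<jwt>", "expires_in": 3600 }   // expires_in in seconds
// authorize() returns a provisional user (id: "pending"); the jwt callback then resolves the
// real id via getUserId(accessToken) and writes it to token.sub.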
function authentikAuthOptions(): AuthOptions {
return {
providers: [
AuthentikProvider({
...(() => {
const [clientId, clientSecret, issuer] = sequenceThrows(
getAuthentikClientId,
getAuthentikClientSecret,
getAuthentikIssuer,
);
return {
clientId,
clientSecret,
issuer,
};
})(),
authorization: {
params: {
scope: "openid email profile offline_access",
},
},
}),
],
session: {
strategy: "jwt",
},
callbacks: {
async jwt({ token, account, user }) {
if (account && !account.access_token) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
}
if (account && user) {
// called only on first login
// XXX account.expires_in used in example is not defined for authentik backend, but expires_at is
if (account.access_token) {
const expiresAtS = assertExists(account.expires_at);
const expiresAtMs = expiresAtS * 1000;
const jwtToken: JWTWithAccessToken = {
...token,
accessToken: account.access_token,
accessTokenExpires: expiresAtMs,
refreshToken: account.refresh_token,
};
if (jwtToken.error) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
} else {
assertNotExists(
jwtToken.error,
`panic! trying to cache token with error in jwt: ${jwtToken.error}`,
);
await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
token: jwtToken,
timestamp: Date.now(),
});
return jwtToken;
}
}
}
if (account && user) {
// called only on first login
// XXX account.expires_in used in example is not defined for authentik backend, but expires_at is
if (account.access_token) {
const expiresAtS = assertExists(account.expires_at);
const expiresAtMs = expiresAtS * 1000;
const jwtToken: JWTWithAccessToken = {
...token,
accessToken: account.access_token,
accessTokenExpires: expiresAtMs,
refreshToken: account.refresh_token,
};
if (jwtToken.error) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
} else {
assertNotExists(
jwtToken.error,
`panic! trying to cache token with error in jwt: ${jwtToken.error}`,
);
await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
token: jwtToken,
timestamp: Date.now(),
});
return jwtToken;
}
}
}
const currentToken = await getTokenCache(
tokenCacheRedis,
`token:${token.sub}`,
);
console.debug(
"currentToken from cache",
JSON.stringify(currentToken, null, 2),
"will be returned?",
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires),
);
if (
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires)
) {
return currentToken.token;
}
const currentToken = await getTokenCache(
tokenCacheRedis,
`token:${token.sub}`,
);
console.debug(
"currentToken from cache",
JSON.stringify(currentToken, null, 2),
"will be returned?",
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires),
);
if (
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires)
) {
return currentToken.token;
}
// access token has expired, try to update it
return await lockedRefreshAccessToken(token);
},
async session({ session, token }) {
const extendedToken = token as JWTWithAccessToken;
console.log("extendedToken", extendedToken);
const userId = await getUserId(extendedToken.accessToken);
// access token has expired, try to update it
return await lockedRefreshAccessToken(token);
return {
...session,
accessToken: extendedToken.accessToken,
accessTokenExpires: extendedToken.accessTokenExpires,
error: extendedToken.error,
user: {
id: assertExistsAndNonEmptyString(userId, "User ID required"),
name: extendedToken.name,
email: extendedToken.email,
},
async session({ session, token }) {
const extendedToken = token as JWTWithAccessToken;
console.log("extendedToken", extendedToken);
const userId = await getUserId(extendedToken.accessToken);
return {
...session,
accessToken: extendedToken.accessToken,
accessTokenExpires: extendedToken.accessTokenExpires,
error: extendedToken.error,
user: {
id: assertExistsAndNonEmptyString(userId, "User ID required"),
name: extendedToken.name,
email: extendedToken.email,
},
} satisfies CustomSession;
},
},
}
: {
providers: [],
};
} satisfies CustomSession;
},
},
};
}
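// Refresh flow of the jwt() callback above, in brief (descriptive only):
//   1. First login (account && user): cache the fresh token in Redis under `token:${token.sub}`.
//   2. Later calls: return the cached token while shouldRefreshToken() says it is not yet close
//      to expiry.
//   3. Otherwise fall through to lockedRefreshAccessToken(), which (as the name suggests)
//      serializes the refresh so concurrent requests do not race the same refresh token.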
async function lockedRefreshAccessToken(
token: JWT,


@@ -2,6 +2,7 @@ import {
assertExists,
assertExistsAndNonEmptyString,
NonEmptyString,
parseMaybeNonEmptyString,
parseNonEmptyString,
} from "./utils";
import { isBuildPhase } from "./next";
@@ -27,10 +28,13 @@ export type EnvFeaturePartial = {
[key in FeatureEnvName]: boolean | null;
};
export type AuthProviderType = "authentik" | "credentials" | null;
// CONTRACT: isomorphic with JSON.stringify
export type ClientEnvCommon = EnvFeaturePartial & {
API_URL: NonEmptyString;
WEBSOCKET_URL: NonEmptyString | null;
AUTH_PROVIDER: AuthProviderType;
};
let clientEnv: ClientEnvCommon | null = null;
@@ -58,6 +62,12 @@ const parseBooleanString = (str: string | undefined): boolean | null => {
return str === "true";
};
const parseAuthProvider = (): AuthProviderType => {
const val = process.env.AUTH_PROVIDER;
if (val === "authentik" || val === "credentials") return val;
return null;
};
export const getClientEnvServer = (): ClientEnvCommon => {
if (typeof window !== "undefined") {
throw new Error(
@@ -74,14 +84,16 @@ export const getClientEnvServer = (): ClientEnvCommon => {
if (isBuildPhase) {
return {
API_URL: getNextEnvVar("API_URL"),
WEBSOCKET_URL: getNextEnvVar("WEBSOCKET_URL"),
WEBSOCKET_URL: parseMaybeNonEmptyString(process.env.WEBSOCKET_URL ?? ""),
AUTH_PROVIDER: parseAuthProvider(),
...features,
};
}
clientEnv = {
API_URL: getNextEnvVar("API_URL"),
WEBSOCKET_URL: getNextEnvVar("WEBSOCKET_URL"),
WEBSOCKET_URL: parseMaybeNonEmptyString(process.env.WEBSOCKET_URL ?? ""),
AUTH_PROVIDER: parseAuthProvider(),
...features,
};
return clientEnv;


@@ -1,19 +0,0 @@
export type DagTaskStatus =
| "queued"
| "running"
| "completed"
| "failed"
| "cancelled";
export type DagTask = {
name: string;
status: DagTaskStatus;
started_at: string | null;
finished_at: string | null;
duration_seconds: number | null;
parents: string[];
error: string | null;
children_total: number | null;
children_completed: number | null;
progress_pct: number | null;
};

www/app/login/page.tsx (new file, 76 lines)

@@ -0,0 +1,76 @@
"use client";
import { useState } from "react";
import { signIn } from "next-auth/react";
import { useRouter } from "next/navigation";
import {
Box,
Button,
Field,
Input,
VStack,
Text,
Heading,
} from "@chakra-ui/react";
export default function LoginPage() {
const router = useRouter();
const [email, setEmail] = useState("");
const [password, setPassword] = useState("");
const [error, setError] = useState<string | null>(null);
const [loading, setLoading] = useState(false);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setError(null);
setLoading(true);
const result = await signIn("credentials", {
email,
password,
redirect: false,
});
setLoading(false);
if (result?.error) {
console.log(result?.error);
setError("Invalid email or password");
} else {
router.push("/");
}
};
return (
<Box maxW="400px" mx="auto" mt="100px" p={6}>
<VStack gap={6} as="form" onSubmit={handleSubmit}>
<Heading size="lg">Log in</Heading>
{error && <Text color="red.500">{error}</Text>}
<Field.Root required>
<Field.Label>Email</Field.Label>
<Input
type="email"
value={email}
onChange={(e) => setEmail(e.target.value)}
/>
</Field.Root>
<Field.Root required>
<Field.Label>Password</Field.Label>
<Input
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
/>
</Field.Root>
<Button
type="submit"
colorPalette="blue"
width="full"
loading={loading}
>
Log in
</Button>
</VStack>
</Box>
);
}


@@ -568,7 +568,10 @@ export interface paths {
path?: never;
cookie?: never;
};
/** Transcript Get Websocket Events */
/**
* Transcript WebSocket event schema
* @description Stub exposing the discriminated union of all transcript-level WS events for OpenAPI type generation. Real events are delivered over the WebSocket at the same path.
*/
get: operations["v1_transcript_get_websocket_events"];
put?: never;
post?: never;
@@ -664,6 +667,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/events": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/**
* User WebSocket event schema
* @description Stub exposing the discriminated union of all user-level WS events for OpenAPI type generation. Real events are delivered over the WebSocket at the same path.
*/
get: operations["v1_user_get_websocket_events"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/zulip/streams": {
parameters: {
query?: never;
@@ -1009,6 +1032,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
};
/** GetTranscriptSegmentTopic */
GetTranscriptSegmentTopic: {
@@ -1155,6 +1180,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1218,6 +1245,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1282,6 +1311,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1353,6 +1384,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1426,6 +1459,8 @@ export interface components {
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1811,6 +1846,8 @@ export interface components {
* @default 0
*/
total_match_count: number;
/** Change Seq */
change_seq?: number | null;
};
/**
* SourceKind
@@ -1877,6 +1914,33 @@ export interface components {
/** Name */
name: string;
};
/** TranscriptActionItems */
TranscriptActionItems: {
/** Action Items */
action_items: {
[key: string]: unknown;
};
};
/** TranscriptDuration */
TranscriptDuration: {
/** Duration */
duration: number;
};
/** TranscriptFinalLongSummary */
TranscriptFinalLongSummary: {
/** Long Summary */
long_summary: string;
};
/** TranscriptFinalShortSummary */
TranscriptFinalShortSummary: {
/** Short Summary */
short_summary: string;
};
/** TranscriptFinalTitle */
TranscriptFinalTitle: {
/** Title */
title: string;
};
/** TranscriptParticipant */
TranscriptParticipant: {
/** Id */
@@ -1917,6 +1981,113 @@ export interface components {
/** End */
end: number;
};
/** TranscriptText */
TranscriptText: {
/** Text */
text: string;
/** Translation */
translation: string | null;
};
/** TranscriptWaveform */
TranscriptWaveform: {
/** Waveform */
waveform: number[];
};
/** TranscriptWsActionItems */
TranscriptWsActionItems: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "ACTION_ITEMS";
data: components["schemas"]["TranscriptActionItems"];
};
/** TranscriptWsDuration */
TranscriptWsDuration: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "DURATION";
data: components["schemas"]["TranscriptDuration"];
};
/** TranscriptWsFinalLongSummary */
TranscriptWsFinalLongSummary: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "FINAL_LONG_SUMMARY";
data: components["schemas"]["TranscriptFinalLongSummary"];
};
/** TranscriptWsFinalShortSummary */
TranscriptWsFinalShortSummary: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "FINAL_SHORT_SUMMARY";
data: components["schemas"]["TranscriptFinalShortSummary"];
};
/** TranscriptWsFinalTitle */
TranscriptWsFinalTitle: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "FINAL_TITLE";
data: components["schemas"]["TranscriptFinalTitle"];
};
/** TranscriptWsStatus */
TranscriptWsStatus: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "STATUS";
data: components["schemas"]["TranscriptWsStatusData"];
};
/** TranscriptWsStatusData */
TranscriptWsStatusData: {
/**
* Value
* @enum {string}
*/
value:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
};
/** TranscriptWsTopic */
TranscriptWsTopic: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TOPIC";
data: components["schemas"]["GetTranscriptTopic"];
};
/** TranscriptWsTranscript */
TranscriptWsTranscript: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT";
data: components["schemas"]["TranscriptText"];
};
/** TranscriptWsWaveform */
TranscriptWsWaveform: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "WAVEFORM";
data: components["schemas"]["TranscriptWaveform"];
};
/** UpdateParticipant */
UpdateParticipant: {
/** Speaker */
@@ -1987,6 +2158,109 @@ export interface components {
/** Email */
email: string | null;
};
/** UserTranscriptCreatedData */
UserTranscriptCreatedData: {
/**
* Id
* @description A non-empty string
*/
id: string;
};
/** UserTranscriptDeletedData */
UserTranscriptDeletedData: {
/**
* Id
* @description A non-empty string
*/
id: string;
};
/** UserTranscriptDurationData */
UserTranscriptDurationData: {
/**
* Id
* @description A non-empty string
*/
id: string;
/** Duration */
duration: number;
};
/** UserTranscriptFinalTitleData */
UserTranscriptFinalTitleData: {
/**
* Id
* @description A non-empty string
*/
id: string;
/**
* Title
* @description A non-empty string
*/
title: string;
};
/** UserTranscriptStatusData */
UserTranscriptStatusData: {
/**
* Id
* @description A non-empty string
*/
id: string;
/**
* Value
* @enum {string}
*/
value:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
};
/** UserWsTranscriptCreated */
UserWsTranscriptCreated: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT_CREATED";
data: components["schemas"]["UserTranscriptCreatedData"];
};
/** UserWsTranscriptDeleted */
UserWsTranscriptDeleted: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT_DELETED";
data: components["schemas"]["UserTranscriptDeletedData"];
};
/** UserWsTranscriptDuration */
UserWsTranscriptDuration: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT_DURATION";
data: components["schemas"]["UserTranscriptDurationData"];
};
/** UserWsTranscriptFinalTitle */
UserWsTranscriptFinalTitle: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT_FINAL_TITLE";
data: components["schemas"]["UserTranscriptFinalTitleData"];
};
/** UserWsTranscriptStatus */
UserWsTranscriptStatus: {
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
event: "TRANSCRIPT_STATUS";
data: components["schemas"]["UserTranscriptStatusData"];
};
/** ValidationError */
ValidationError: {
/** Location */
@@ -2693,6 +2967,8 @@ export interface operations {
source_kind?: components["schemas"]["SourceKind"] | null;
room_id?: string | null;
search_term?: string | null;
change_seq_from?: number | null;
sort_by?: ("created_at" | "change_seq") | null;
/** @description Page number */
page?: number;
/** @description Page size */
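// Illustrative ingester checkpoint loop using the new change_seq_from / sort_by query params.
// Everything except those two params is an assumption: the list path, the bearer token, the
// page shape, and treating change_seq_from as an inclusive lower bound are placeholders.
declare const ACCESS_TOKEN: string; // assumption: a valid bearer token for the API
async function ingestChangedTranscripts(lastSeenSeq: number): Promise<number> {
  const res = await fetch(
    // API_URL as exported from the config change above; the path itself is assumed
    `${API_URL}/v1/transcripts?change_seq_from=${lastSeenSeq + 1}&sort_by=change_seq`,
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } },
  );
  const page = (await res.json()) as { items: { id: string; change_seq?: number | null }[] }; // shape assumed
  let checkpoint = lastSeenSeq;
  for (const t of page.items) {
    // ...hand the changed transcript to the external ingester here...
    if (t.change_seq != null) checkpoint = Math.max(checkpoint, t.change_seq);
  }
  return checkpoint; // persist this so the next poll never misses a mutation
}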
@@ -3423,7 +3699,16 @@ export interface operations {
[name: string]: unknown;
};
content: {
"application/json": unknown;
"application/json":
| components["schemas"]["TranscriptWsTranscript"]
| components["schemas"]["TranscriptWsTopic"]
| components["schemas"]["TranscriptWsStatus"]
| components["schemas"]["TranscriptWsFinalTitle"]
| components["schemas"]["TranscriptWsFinalLongSummary"]
| components["schemas"]["TranscriptWsFinalShortSummary"]
| components["schemas"]["TranscriptWsActionItems"]
| components["schemas"]["TranscriptWsDuration"]
| components["schemas"]["TranscriptWsWaveform"];
};
};
/** @description Validation Error */
@@ -3607,6 +3892,31 @@ export interface operations {
};
};
};
v1_user_get_websocket_events: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json":
| components["schemas"]["UserWsTranscriptCreated"]
| components["schemas"]["UserWsTranscriptDeleted"]
| components["schemas"]["UserWsTranscriptStatus"]
| components["schemas"]["UserWsTranscriptFinalTitle"]
| components["schemas"]["UserWsTranscriptDuration"];
};
};
};
};
v1_zulip_get_streams: {
parameters: {
query?: never;
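// Client-side narrowing sketch for the transcript WS union above, mirroring how the user-level
// union is consumed in the provider earlier in this change (handler and import path assumed).
import type { operations } from "./reflector-api"; // path assumed

type TranscriptWsEvent =
  operations["v1_transcript_get_websocket_events"]["responses"][200]["content"]["application/json"];

function handleTranscriptEvent(msg: TranscriptWsEvent) {
  switch (msg.event) {
    case "FINAL_TITLE":
      // narrowed to TranscriptWsFinalTitle, so msg.data.title is a string
      console.log("title:", msg.data.title);
      break;
    case "DURATION":
      console.log("duration (s):", msg.data.duration);
      break;
    default:
      break; // remaining events ignored in this sketch
  }
}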

yarn.lock (new file, 4 lines)

@@ -0,0 +1,4 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1