Compare commits


9 Commits

Author SHA1 Message Date
Juan Diego García
4235ab4293 chore(main): release 0.37.0 (#889) 2026-03-03 13:14:15 -05:00
Juan Diego García
f5ec2d28cf fix: aws storage construction (#895) 2026-03-03 13:04:22 -05:00
dependabot[bot]
ac46c60a7c build(deps): bump pypdf in /server in the uv group across 1 directory (#893)
Bumps the uv group with 1 update in the /server directory: [pypdf](https://github.com/py-pdf/pypdf).


Updates `pypdf` from 6.7.4 to 6.7.5
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.7.4...6.7.5)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 6.7.5
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 18:35:16 +01:00
Juan Diego García
1d1a520be9 fix audio permissions (#894) 2026-03-03 12:11:25 -05:00
dependabot[bot]
9e64d52461 build(deps): bump pypdf in /server in the uv group across 1 directory (#891)
Bumps the uv group with 1 update in the /server directory: [pypdf](https://github.com/py-pdf/pypdf).


Updates `pypdf` from 6.7.3 to 6.7.4
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.7.3...6.7.4)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 6.7.4
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-02 18:29:25 +01:00
Sergey Mankovsky
0931095f49 fix: remaining dependabot security issues (#890)
* Upgrade docs deps

* Upgrade frontend to latest deps

* Update package overrides

* Remove redundant deps

* Add tailwind postcss plugin

* Replace language select with chakra

* Fix main nav

* Patch gray matter

* Fix webpack override

* Replace python-jose with pyjwt

* Override kv url for frontend in compose

* Upgrade hatchet sdk

* Update docs

* Suppress pydantic warnings
2026-03-02 17:17:40 +01:00
Sergey Mankovsky
4d915e2a9f fix: test selfhosted script (#892)
* Test selfhosted script

* Don't ask for hugging face token on ci
2026-03-02 17:17:16 +01:00
Juan Diego García
045eae8ff2 feat: enable daily co in selfhosted + only schedule tasks when necessary (#883)
* feat: enable daily co in selfhosted + only schedule tasks when necessary

* feat: refactor aws storage to be platform agnostic + add local pad tracking with selfhosted support
2026-03-02 11:08:20 -05:00
Sergey Mankovsky
f6cc03286b fix: upgrade to nextjs 16 (#888)
* Upgrade to nextjs 16

* Update sentry config

* Force dynamic for health route

* Upgrade eslint config

* Upgrade jest

* Move types to dev dependencies

* Remove pages from tailwind config

* Replace img with next image
2026-02-27 17:18:03 +01:00
76 changed files with 22675 additions and 27496 deletions

36
.github/workflows/selfhost-script.yml vendored Normal file

@@ -0,0 +1,36 @@
# Validates the self-hosted setup script: runs with --cpu and --garage,
# brings up services, runs health checks, then tears down.
name: Selfhost script (CPU + Garage)
on:
workflow_dispatch: {}
push:
branches:
- main
pull_request: {}
jobs:
selfhost-cpu-garage:
runs-on: ubuntu-latest
timeout-minutes: 25
concurrency:
group: selfhost-${{ github.ref }}
cancel-in-progress: true
steps:
- uses: actions/checkout@v4
- name: Run setup-selfhosted.sh (CPU + Garage)
run: |
./scripts/setup-selfhosted.sh --cpu --garage
- name: Quick health checks
run: |
curl -sf http://localhost:1250/health && echo " Server OK"
curl -sf http://localhost:3000 > /dev/null && echo " Frontend OK"
curl -sf http://localhost:3903/metrics > /dev/null && echo " Garage admin OK"
- name: Teardown
if: always()
run: |
docker compose -f docker-compose.selfhosted.yml --profile cpu --profile garage down -v --remove-orphans 2>/dev/null || true
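The `Quick health checks` step above curls each endpoint exactly once, which assumes the services are already listening when the setup script returns. If that assumption ever needs relaxing, the retry pattern is simple — this is an illustrative sketch, not code from the repository:

```python
import time

def wait_healthy(check, attempts=10, delay=3.0):
    """Call `check` (a zero-arg callable returning bool) until it succeeds
    or `attempts` runs out; returns True on success, False otherwise."""
    for i in range(attempts):
        if check():
            return True
        if i < attempts - 1:
            time.sleep(delay)
    return False

# Stub that becomes healthy on the third probe; in the workflow,
# `check` would wrap a curl/HTTP request against the service instead.
state = {"probes": 0}

def probe():
    state["probes"] += 1
    return state["probes"] >= 3

ok = wait_healthy(probe, attempts=5, delay=0)  # → True after 3 probes
```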

2
.gitignore vendored

@@ -3,6 +3,7 @@ server/.env
server/.env.production
.env
Caddyfile
.env.hatchet
server/exportdanswer
.vercel
.env*.local
@@ -20,7 +21,6 @@ CLAUDE.local.md
www/.env.development
www/.env.production
.playwright-mcp
docs/pnpm-lock.yaml
.secrets
opencode.json


@@ -1,5 +1,20 @@
# Changelog
## [0.37.0](https://github.com/GreyhavenHQ/reflector/compare/v0.36.0...v0.37.0) (2026-03-03)
### Features
* enable daily co in selfhosted + only schedule tasks when necessary ([#883](https://github.com/GreyhavenHQ/reflector/issues/883)) ([045eae8](https://github.com/GreyhavenHQ/reflector/commit/045eae8ff2014a7b83061045e3c8cb25cce9d60a))
### Bug Fixes
* aws storage construction ([#895](https://github.com/GreyhavenHQ/reflector/issues/895)) ([f5ec2d2](https://github.com/GreyhavenHQ/reflector/commit/f5ec2d28cfa2de9b2b4aeec81966737b740689c2))
* remaining dependabot security issues ([#890](https://github.com/GreyhavenHQ/reflector/issues/890)) ([0931095](https://github.com/GreyhavenHQ/reflector/commit/0931095f49e61216e651025ce92be460e6a9df9e))
* test selfhosted script ([#892](https://github.com/GreyhavenHQ/reflector/issues/892)) ([4d915e2](https://github.com/GreyhavenHQ/reflector/commit/4d915e2a9fe9f05f31cbd0018d9c2580daf7854f))
* upgrade to nextjs 16 ([#888](https://github.com/GreyhavenHQ/reflector/issues/888)) ([f6cc032](https://github.com/GreyhavenHQ/reflector/commit/f6cc03286baf3e3a115afd3b22ae993ad7a4b7e3))
## [0.35.1](https://github.com/GreyhavenHQ/reflector/compare/v0.35.0...v0.35.1) (2026-02-25)


@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
Reflector is an AI-powered audio transcription and meeting analysis platform with real-time processing capabilities. The system consists of:
- **Frontend**: Next.js 14 React application (`www/`) with Chakra UI, real-time WebSocket integration
- **Frontend**: Next.js 16 React application (`www/`) with Chakra UI, real-time WebSocket integration
- **Backend**: Python FastAPI server (`server/`) with async database operations and background processing
- **Processing**: GPU-accelerated ML pipeline for transcription, diarization, summarization via Modal.com
- **Infrastructure**: Redis, PostgreSQL/SQLite, Celery workers, WebRTC streaming


@@ -11,6 +11,9 @@
# --profile ollama-gpu Local Ollama with NVIDIA GPU
# --profile ollama-cpu Local Ollama on CPU only
#
# Daily.co multitrack processing (auto-detected from server/.env):
# --profile dailyco Hatchet workflow engine + CPU/LLM workers
#
# Other optional services:
# --profile garage Local S3-compatible storage (Garage)
# --profile caddy Reverse proxy with auto-SSL
@@ -32,7 +35,7 @@ services:
restart: unless-stopped
ports:
- "127.0.0.1:1250:1250"
- "50000-50100:50000-50100/udp"
- "51000-51100:51000-51100/udp"
env_file:
- ./server/.env
environment:
@@ -42,8 +45,6 @@ services:
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
# Specialized models via gpu/cpu container (aliased as "transcription")
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://transcription:8000
@@ -52,8 +53,10 @@ services:
DIARIZATION_URL: http://transcription:8000
TRANSLATION_BACKEND: modal
TRANSLATE_URL: http://transcription:8000
PADDING_BACKEND: modal
PADDING_URL: http://transcription:8000
# WebRTC: fixed UDP port range for ICE candidates (mapped above)
WEBRTC_PORT_RANGE: "50000-50100"
WEBRTC_PORT_RANGE: "51000-51100"
depends_on:
postgres:
condition: service_healthy
@@ -76,8 +79,6 @@ services:
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: ""
HATCHET_CLIENT_HOST_PORT: ""
TRANSCRIPT_BACKEND: modal
TRANSCRIPT_URL: http://transcription:8000
TRANSCRIPT_MODAL_API_KEY: selfhosted
@@ -85,6 +86,8 @@ services:
DIARIZATION_URL: http://transcription:8000
TRANSLATION_BACKEND: modal
TRANSLATE_URL: http://transcription:8000
PADDING_BACKEND: modal
PADDING_URL: http://transcription:8000
depends_on:
postgres:
condition: service_healthy
@@ -153,6 +156,7 @@ services:
POSTGRES_DB: reflector
volumes:
- postgres_data:/var/lib/postgresql/data
- ./server/docker/init-hatchet-db.sql:/docker-entrypoint-initdb.d/init-hatchet-db.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U reflector"]
interval: 30s
@@ -305,6 +309,87 @@ services:
- web
- server
# ===========================================================
# Hatchet + Daily.co workers (optional — for Daily.co multitrack processing)
# Auto-enabled when DAILY_API_KEY is configured in server/r
# ===========================================================
hatchet:
image: ghcr.io/hatchet-dev/hatchet/hatchet-lite:latest
profiles: [dailyco]
restart: on-failure
depends_on:
postgres:
condition: service_healthy
ports:
- "8888:8888"
- "7078:7077"
env_file:
- ./.env.hatchet
environment:
DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable&connect_timeout=30"
SERVER_AUTH_COOKIE_INSECURE: "t"
SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
SERVER_GRPC_INSECURE: "t"
SERVER_GRPC_BROADCAST_ADDRESS: hatchet:7077
SERVER_GRPC_PORT: "7077"
SERVER_AUTH_SET_EMAIL_VERIFIED: "t"
SERVER_INTERNAL_CLIENT_INTERNAL_GRPC_BROADCAST_ADDRESS: hatchet:7077
volumes:
- hatchet_config:/config
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8888/api/live"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
hatchet-worker-cpu:
build:
context: ./server
dockerfile: Dockerfile
image: monadicalsas/reflector-backend:latest
profiles: [dailyco]
restart: unless-stopped
env_file:
- ./server/.env
environment:
ENTRYPOINT: hatchet-worker-cpu
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
HATCHET_CLIENT_HOST_PORT: hatchet:7077
depends_on:
hatchet:
condition: service_healthy
volumes:
- server_data:/app/data
hatchet-worker-llm:
build:
context: ./server
dockerfile: Dockerfile
image: monadicalsas/reflector-backend:latest
profiles: [dailyco]
restart: unless-stopped
env_file:
- ./server/.env
environment:
ENTRYPOINT: hatchet-worker-llm
DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
REDIS_HOST: redis
CELERY_BROKER_URL: redis://redis:6379/1
CELERY_RESULT_BACKEND: redis://redis:6379/1
HATCHET_CLIENT_SERVER_URL: http://hatchet:8888
HATCHET_CLIENT_HOST_PORT: hatchet:7077
depends_on:
hatchet:
condition: service_healthy
volumes:
- server_data:/app/data
volumes:
postgres_data:
redis_data:
@@ -315,6 +400,7 @@ volumes:
ollama_data:
caddy_data:
caddy_config:
hatchet_config:
networks:
default:


@@ -93,6 +93,7 @@ services:
environment:
NODE_ENV: development
SERVER_API_URL: http://host.docker.internal:1250
KV_URL: redis://redis:6379
extra_hosts:
- "host.docker.internal:host-gateway"
depends_on:

7
docs/.dockerignore Normal file

@@ -0,0 +1,7 @@
node_modules
build
.git
.gitignore
*.log
.DS_Store
.env*


@@ -1,14 +1,17 @@
FROM node:18-alpine AS builder
FROM node:20-alpine AS builder
WORKDIR /app
# Install curl for fetching OpenAPI spec
RUN apk add --no-cache curl
# Copy package files
COPY package*.json ./
# Enable pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
# Copy package files and lockfile
COPY package.json pnpm-lock.yaml* ./
# Install dependencies
RUN npm ci
RUN pnpm install --frozen-lockfile
# Copy source
COPY . .
@@ -21,7 +24,7 @@ RUN mkdir -p ./static && curl -sf "${OPENAPI_URL}" -o ./static/openapi.json || e
RUN sed -i "s/onBrokenLinks: 'throw'/onBrokenLinks: 'warn'/g" docusaurus.config.ts
# Build static site (skip prebuild hook by calling docusaurus directly)
RUN npx docusaurus build
RUN pnpm exec docusaurus build
# Production image
FROM nginx:alpine


@@ -5,13 +5,13 @@ This website is built using [Docusaurus](https://docusaurus.io/), a modern stati
### Installation
```
$ yarn
$ pnpm install
```
### Local Development
```
$ yarn start
$ pnpm start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
@@ -19,7 +19,7 @@ This command starts a local development server and opens up a browser window. Mo
### Build
```
$ yarn build
$ pnpm build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
@@ -29,13 +29,13 @@ This command generates static content into the `build` directory and can be serv
Using SSH:
```
$ USE_SSH=true yarn deploy
$ USE_SSH=true pnpm deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
$ GIT_USER=<Your GitHub username> pnpm deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.


@@ -11,7 +11,7 @@ Reflector is built as a modern, scalable, microservices-based application design
### Frontend Application
The user interface is built with **Next.js 15** using the App Router pattern, providing:
The user interface is built with **Next.js 16** using the App Router pattern, providing:
- Server-side rendering for optimal performance
- Real-time WebSocket connections for live transcription


@@ -36,14 +36,15 @@ This creates `docs/static/openapi.json` (should be ~70KB) which will be copied d
The Dockerfile is already in `docs/Dockerfile`:
```dockerfile
FROM node:18-alpine AS builder
FROM node:20-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Enable pnpm and copy package files + lockfile
RUN corepack enable && corepack prepare pnpm@latest --activate
COPY package.json pnpm-lock.yaml* ./
# Inshall dependencies
RUN npm ci
# Install dependencies
RUN pnpm install --frozen-lockfile
# Copy source (includes static/openapi.json if pre-fetched)
COPY . .
@@ -52,7 +53,7 @@ COPY . .
RUN sed -i "s/onBrokenLinks: 'throw'/onBrokenLinks: 'warn'/g" docusaurus.config.ts
# Build static site
RUN npx docusaurus build
RUN pnpm exec docusaurus build
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html


@@ -46,7 +46,7 @@ Reflector consists of three main components:
Ready to deploy Reflector? Head over to our [Installation Guide](./installation/overview) to set up your own instance.
For a quick overview of how Reflector processes audio, check out our [Pipeline Documentation](./pipelines/overview).
For a quick overview of how Reflector processes audio, check out our [Pipeline Documentation](./concepts/pipeline).
## Open Source


@@ -124,11 +124,11 @@ const config: Config = {
items: [
{
label: 'Architecture',
to: '/docs/reference/architecture/overview',
to: '/docs/concepts/overview',
},
{
label: 'Pipelines',
to: '/docs/pipelines/overview',
to: '/docs/concepts/pipeline',
},
{
label: 'Roadmap',

23526
docs/package-lock.json generated

File diff suppressed because it is too large


@@ -14,26 +14,26 @@
"write-heading-ids": "docusaurus write-heading-ids",
"typecheck": "tsc",
"fetch-openapi": "./scripts/fetch-openapi.sh",
"gen-api-docs": "npm run fetch-openapi && docusaurus gen-api-docs reflector",
"prebuild": "npm run fetch-openapi"
"gen-api-docs": "pnpm run fetch-openapi && docusaurus gen-api-docs reflector",
"prebuild": "pnpm run fetch-openapi"
},
"dependencies": {
"@docusaurus/core": "3.6.3",
"@docusaurus/preset-classic": "3.6.3",
"@mdx-js/react": "^3.0.0",
"clsx": "^2.0.0",
"docusaurus-plugin-openapi-docs": "^4.5.1",
"docusaurus-theme-openapi-docs": "^4.5.1",
"@docusaurus/theme-mermaid": "3.6.3",
"prism-react-renderer": "^2.3.0",
"react": "^18.0.0",
"react-dom": "^18.0.0"
"@docusaurus/core": "3.9.2",
"@docusaurus/preset-classic": "3.9.2",
"@docusaurus/theme-mermaid": "3.9.2",
"@mdx-js/react": "^3.1.1",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.7.1",
"docusaurus-theme-openapi-docs": "^4.7.1",
"prism-react-renderer": "^2.4.1",
"react": "^19.2.4",
"react-dom": "^19.2.4"
},
"devDependencies": {
"@docusaurus/module-type-aliases": "3.6.3",
"@docusaurus/tsconfig": "3.6.3",
"@docusaurus/types": "3.6.3",
"typescript": "~5.6.2"
"@docusaurus/module-type-aliases": "3.9.2",
"@docusaurus/tsconfig": "3.9.2",
"@docusaurus/types": "3.9.2",
"typescript": "~5.9.3"
},
"browserslist": {
"production": [
@@ -49,5 +49,15 @@
},
"engines": {
"node": ">=18.0"
},
"pnpm": {
"overrides": {
"minimatch@<3.1.4": "3.1.5",
"minimatch@>=5.0.0 <5.1.8": "5.1.8",
"minimatch@>=9.0.0 <9.0.7": "9.0.7",
"lodash@<4.17.23": "4.17.23",
"js-yaml@<4.1.1": "4.1.1",
"gray-matter": "github:jonschlinkert/gray-matter#234163e"
}
}
}

13976
docs/pnpm-lock.yaml generated Normal file

File diff suppressed because it is too large

4151
docs/static/openapi.json vendored

File diff suppressed because it is too large


@@ -67,7 +67,7 @@ That's it. The script generates env files, secrets, starts all containers, waits
## Specialized Models (Required)
Pick `--gpu` or `--cpu`. This determines how **transcription, diarization, and translation** run:
Pick `--gpu` or `--cpu`. This determines how **transcription, diarization, translation, and audio padding** run:
| Flag | What it does | Requires |
|------|-------------|----------|
@@ -161,7 +161,8 @@ Without `--caddy` or `--domain`, no ports are exposed. Point your own reverse pr
5. **Storage setup** — Either initializes Garage (bucket, keys, permissions) or prompts for external S3 credentials
6. **Caddyfile** — Generates domain-specific (Let's Encrypt) or IP-specific (self-signed) configuration
7. **Build & start** — Always builds GPU/CPU model image from source. With `--build`, also builds backend and frontend from source; otherwise pulls prebuilt images from the registry
8. **Health checks** — Waits for each service, pulls Ollama model if needed, warns about missing LLM config
8. **Auto-detects video platforms** — If `DAILY_API_KEY` is found in `server/.env`, generates `.env.hatchet` (dashboard URL/cookie config), starts Hatchet workflow engine, and generates an API token. If any video platform is configured, enables the Rooms feature
9. **Health checks** — Waits for each service, pulls Ollama model if needed, warns about missing LLM config
> For a deeper dive into each step, see [How the Self-Hosted Setup Works](selfhosted-architecture.md).
@@ -180,12 +181,23 @@ Without `--caddy` or `--domain`, no ports are exposed. Point your own reverse pr
| `ADMIN_PASSWORD_HASH` | PBKDF2 hash for password auth | *(unset)* |
| `WEBRTC_HOST` | IP advertised in WebRTC ICE candidates | Auto-detected (server IP) |
| `TRANSCRIPT_URL` | Specialized model endpoint | `http://transcription:8000` |
| `PADDING_BACKEND` | Audio padding backend (`local` or `modal`) | `modal` (selfhosted), `local` (default) |
| `PADDING_URL` | Audio padding endpoint (when `PADDING_BACKEND=modal`) | `http://transcription:8000` |
| `LLM_URL` | OpenAI-compatible LLM endpoint | Auto-set for Ollama modes |
| `LLM_API_KEY` | LLM API key | `not-needed` for Ollama |
| `LLM_MODEL` | LLM model name | `qwen2.5:14b` for Ollama (override with `--llm-model`) |
| `CELERY_BEAT_POLL_INTERVAL` | Override all worker polling intervals (seconds). `0` = use individual defaults | `300` (selfhosted), `0` (other) |
| `TRANSCRIPT_STORAGE_BACKEND` | Storage backend | `aws` |
| `TRANSCRIPT_STORAGE_AWS_*` | S3 credentials | Auto-set for Garage |
| `DAILY_API_KEY` | Daily.co API key (enables live rooms) | *(unset)* |
| `DAILY_SUBDOMAIN` | Daily.co subdomain | *(unset)* |
| `DAILYCO_STORAGE_AWS_ACCESS_KEY_ID` | AWS access key for reading Daily's recording bucket | *(unset)* |
| `DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY` | AWS secret key for reading Daily's recording bucket | *(unset)* |
| `HATCHET_CLIENT_TOKEN` | Hatchet API token (auto-generated) | *(unset)* |
| `HATCHET_CLIENT_SERVER_URL` | Hatchet server URL | Auto-set when Daily.co configured |
| `HATCHET_CLIENT_HOST_PORT` | Hatchet gRPC address | Auto-set when Daily.co configured |
| `TRANSCRIPT_FILE_TIMEOUT` | HTTP timeout (seconds) for file transcription requests | `600` (`3600` in CPU mode) |
| `DIARIZATION_FILE_TIMEOUT` | HTTP timeout (seconds) for file diarization requests | `600` (`3600` in CPU mode) |
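`ADMIN_PASSWORD_HASH` above is described as a PBKDF2 hash. The exact format the server expects is not shown in this diff; the sketch below only illustrates deriving and verifying a salted PBKDF2-SHA256 hash with the Python standard library — the `salt:iterations:digest` layout and the iteration count are assumptions:

```python
import base64
import hashlib
import hmac
import os

def pbkdf2_hash(password: str, iterations: int = 600_000) -> str:
    """Derive a salted PBKDF2-SHA256 hash (the layout here is illustrative)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return ":".join(
        [
            base64.b64encode(salt).decode(),
            str(iterations),
            base64.b64encode(digest).decode(),
        ]
    )

def pbkdf2_verify(password: str, stored: str) -> bool:
    """Recompute the digest from the stored salt/iterations, compare in constant time."""
    salt_b64, iters, digest_b64 = stored.split(":")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), base64.b64decode(salt_b64), int(iters)
    )
    return hmac.compare_digest(candidate, base64.b64decode(digest_b64))
```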
### Frontend Environment (`www/.env`)
@@ -197,6 +209,7 @@ Without `--caddy` or `--domain`, no ports are exposed. Point your own reverse pr
| `NEXTAUTH_SECRET` | Auth secret | Auto-generated |
| `FEATURE_REQUIRE_LOGIN` | Require authentication | `false` |
| `AUTH_PROVIDER` | Auth provider (`authentik` or `credentials`) | *(unset)* |
| `FEATURE_ROOMS` | Enable meeting rooms UI | Auto-set when video platform configured |
## Storage Options
@@ -353,6 +366,87 @@ By default, authentication is disabled (`AUTH_BACKEND=none`, `FEATURE_REQUIRE_LO
```
5. Restart: `docker compose -f docker-compose.selfhosted.yml down && ./scripts/setup-selfhosted.sh <same-flags>`
## Enabling Daily.co Live Rooms
Daily.co enables real-time meeting rooms with automatic recording and per-participant
audio tracks for improved diarization. When configured, the setup script automatically
starts the Hatchet workflow engine for multitrack recording processing.
### Prerequisites
- **Daily.co account** — Sign up at https://www.daily.co/
- **API key** — From Daily.co Dashboard → Developers → API Keys
- **Subdomain** — The `yourname` part of `yourname.daily.co`
- **AWS S3 bucket** — For Daily.co to store recordings. See [Daily.co recording storage docs](https://docs.daily.co/guides/products/live-streaming-recording/storing-recordings-in-a-custom-s3-bucket)
- **IAM role ARN** — An AWS IAM role that Daily.co assumes to write recordings to your bucket
### Setup
1. Configure Daily.co env vars in `server/.env` **before** running the setup script:
```env
DAILY_API_KEY=your-daily-api-key
DAILY_SUBDOMAIN=your-subdomain
DEFAULT_VIDEO_PLATFORM=daily
DAILYCO_STORAGE_AWS_BUCKET_NAME=your-recordings-bucket
DAILYCO_STORAGE_AWS_REGION=us-east-1
DAILYCO_STORAGE_AWS_ROLE_ARN=arn:aws:iam::123456789:role/DailyCoAccess
# Worker credentials for reading/deleting recordings from Daily's S3 bucket.
# Required when transcript storage is separate from Daily's bucket
# (e.g., selfhosted with Garage or a different S3 account).
DAILYCO_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
```
> **Important:** The `DAILYCO_STORAGE_AWS_ACCESS_KEY_ID` and `SECRET_ACCESS_KEY` are AWS IAM
> credentials that allow the Hatchet workers to **read and delete** recording files from Daily's
> S3 bucket. These are separate from the `ROLE_ARN` (which Daily's API uses to *write* recordings).
> Without these keys, multitrack processing will fail with 404 errors when transcript storage
> (e.g., Garage) uses different credentials than the Daily recording bucket.
2. Run the setup script as normal:
```bash
./scripts/setup-selfhosted.sh --gpu --ollama-gpu --garage --caddy
```
The script detects `DAILY_API_KEY` and automatically:
- Starts the Hatchet workflow engine (`hatchet` container)
- Starts Hatchet CPU and LLM workers (`hatchet-worker-cpu`, `hatchet-worker-llm`)
- Generates a `HATCHET_CLIENT_TOKEN` and saves it to `server/.env`
- Sets `HATCHET_CLIENT_SERVER_URL` and `HATCHET_CLIENT_HOST_PORT`
- Enables `FEATURE_ROOMS=true` in `www/.env`
- Registers Daily.co beat tasks (recording polling, presence reconciliation)
3. (Optional) For faster recording discovery, configure a Daily.co webhook:
- In the Daily.co dashboard, add a webhook pointing to `https://your-domain/v1/daily/webhook`
- Set `DAILY_WEBHOOK_SECRET` in `server/.env` (the signing secret from Daily.co)
- Without webhooks, the system polls the Daily.co API every 15 seconds
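Step 3 mentions `DAILY_WEBHOOK_SECRET` as the signing secret. The actual header names and signing scheme are defined by Daily.co's webhook documentation, not reproduced here; as a generic illustration only, constant-time HMAC-SHA256 verification of a raw request body looks like this:

```python
import hashlib
import hmac

def verify_webhook(secret: str, payload: bytes, signature_hex: str) -> bool:
    """Generic HMAC-SHA256 check. Hex encoding and signing the raw body are
    assumptions for illustration; consult Daily.co's docs for the real scheme."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```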
### What Gets Started
| Service | Purpose |
|---------|---------|
| `hatchet` | Workflow orchestration engine (manages multitrack processing pipelines) |
| `hatchet-worker-cpu` | CPU-heavy audio tasks (track mixdown, waveform generation) |
| `hatchet-worker-llm` | Transcription, LLM inference (summaries, topics, titles), orchestration |
### Hatchet Dashboard
The Hatchet workflow engine includes a web dashboard for monitoring workflow runs and debugging. The setup script auto-generates `.env.hatchet` at the project root with the dashboard URL and cookie domain configuration. This file is git-ignored.
- **With Caddy**: Accessible at `https://your-domain:8888` (TLS via Caddy)
- **Without Caddy**: Accessible at `http://your-ip:8888` (direct port mapping)
### Conditional Beat Tasks
Beat tasks are registered based on which services are configured:
- **Whereby tasks** (only if `WHEREBY_API_KEY` or `AWS_PROCESS_RECORDING_QUEUE_URL`): `process_messages`, `reprocess_failed_recordings`
- **Daily.co tasks** (only if `DAILY_API_KEY`): `poll_daily_recordings`, `trigger_daily_reconciliation`, `reprocess_failed_daily_recordings`
- **Platform tasks** (if any video platform configured): `process_meetings`, `sync_all_ics_calendars`, `create_upcoming_meetings`
- **Always registered**: `cleanup_old_public_data` (if `PUBLIC_MODE`), `healthcheck_ping` (if `HEALTHCHECK_URL`)
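The registration rules above amount to a pure function of the configured environment. A sketch (the task names come from this document; the function itself is illustrative, not the server's actual code):

```python
def select_beat_tasks(env: dict) -> set:
    """Return the beat task names implied by the configuration rules above."""
    tasks = set()
    if env.get("WHEREBY_API_KEY") or env.get("AWS_PROCESS_RECORDING_QUEUE_URL"):
        tasks |= {"process_messages", "reprocess_failed_recordings"}
    if env.get("DAILY_API_KEY"):
        tasks |= {
            "poll_daily_recordings",
            "trigger_daily_reconciliation",
            "reprocess_failed_daily_recordings",
        }
    if env.get("WHEREBY_API_KEY") or env.get("DAILY_API_KEY"):
        # "if any video platform configured"
        tasks |= {"process_meetings", "sync_all_ics_calendars", "create_upcoming_meetings"}
    if env.get("PUBLIC_MODE"):
        tasks.add("cleanup_old_public_data")
    if env.get("HEALTHCHECK_URL"):
        tasks.add("healthcheck_ping")
    return tasks
```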
## Enabling Real Domain with Let's Encrypt
By default, Caddy uses self-signed certificates. For a real domain:
@@ -446,6 +540,15 @@ docker compose -f docker-compose.selfhosted.yml logs server --tail 50
For self-signed certs, your browser will warn. Click Advanced > Proceed.
For Let's Encrypt, ensure ports 80/443 are open and DNS is pointed correctly.
### File processing timeout on CPU
CPU transcription and diarization are significantly slower than GPU. A 20-minute audio file can take 20-40 minutes to process on CPU. The setup script automatically sets `TRANSCRIPT_FILE_TIMEOUT=3600` and `DIARIZATION_FILE_TIMEOUT=3600` (1 hour) for `--cpu` mode. If you still hit timeouts with very long files, increase these values in `server/.env`:
```bash
# Increase to 2 hours for files over 1 hour
TRANSCRIPT_FILE_TIMEOUT=7200
DIARIZATION_FILE_TIMEOUT=7200
```
Then restart the worker: `docker compose -f docker-compose.selfhosted.yml restart worker`
### Summaries/topics not generating
Check LLM configuration:
```bash
@@ -511,12 +614,15 @@ The setup script is idempotent — it won't overwrite existing secrets or env va
│ (optional)│ │(optional│
│ :11435 │ │ S3) │
└───────────┘ └─────────┘
┌───────────────────────────────────┐
│ Hatchet (optional — Daily.co) │
│ ┌─────────┐ ┌───────────────┐ │
│ │ hatchet │ │ hatchet-worker│ │
│ │ :8888 │──│ -cpu / -llm │ │
│ └─────────┘ └───────────────┘ │
└───────────────────────────────────┘
```
All services communicate over Docker's internal network. Only Caddy (if enabled) exposes ports to the internet.
All services communicate over Docker's internal network. Only Caddy (if enabled) exposes ports to the internet. Hatchet services are only started when `DAILY_API_KEY` is configured.
## Future Plans for the Self-Hosted Script
The following features are supported by Reflector but are **not yet integrated into the self-hosted setup script** and require manual configuration:
- **Daily.co live rooms with multitrack processing**: Daily.co enables real-time meeting rooms with automatic recording and per-participant audio tracks for improved diarization. Requires a Daily.co account, API key, and an AWS S3 bucket for recording storage. Currently not automated in the script because the worker orchestration (hatchet) is not yet supported in the selfhosted compose setup.


@@ -3,6 +3,7 @@ from contextlib import asynccontextmanager
from fastapi import FastAPI
from .routers.diarization import router as diarization_router
from .routers.padding import router as padding_router
from .routers.transcription import router as transcription_router
from .routers.translation import router as translation_router
from .services.transcriber import WhisperService
@@ -27,4 +28,5 @@ def create_app() -> FastAPI:
app.include_router(transcription_router)
app.include_router(translation_router)
app.include_router(diarization_router)
app.include_router(padding_router)
return app


@@ -0,0 +1,199 @@
"""
Audio padding endpoint for selfhosted GPU service.
CPU-intensive audio padding service for adding silence to audio tracks.
Uses PyAV filter graph (adelay) for precise track synchronization.
IMPORTANT: This padding logic is duplicated from server/reflector/utils/audio_padding.py
for deployment isolation (self_hosted can't import from server/reflector/). If you modify
the PyAV filter graph or padding algorithm, you MUST update both:
- gpu/self_hosted/app/routers/padding.py (this file)
- server/reflector/utils/audio_padding.py
Constants duplicated from server/reflector/utils/audio_constants.py for same reason.
"""
import logging
import math
import os
import tempfile
from fractions import Fraction
import av
import requests
from av.audio.resampler import AudioResampler
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from ..auth import apikey_auth
logger = logging.getLogger(__name__)
router = APIRouter(tags=["padding"])
# ref B0F71CE8-FC59-4AA5-8414-DAFB836DB711
OPUS_STANDARD_SAMPLE_RATE = 48000
OPUS_DEFAULT_BIT_RATE = 128000
S3_TIMEOUT = 60
class PaddingRequest(BaseModel):
track_url: str
output_url: str
start_time_seconds: float
track_index: int
class PaddingResponse(BaseModel):
size: int
cancelled: bool = False
@router.post("/pad", dependencies=[Depends(apikey_auth)], response_model=PaddingResponse)
def pad_track(req: PaddingRequest):
"""Pad audio track with silence using PyAV adelay filter graph."""
if not req.track_url:
raise HTTPException(status_code=400, detail="track_url cannot be empty")
if not req.output_url:
raise HTTPException(status_code=400, detail="output_url cannot be empty")
if req.start_time_seconds <= 0:
raise HTTPException(
status_code=400,
detail=f"start_time_seconds must be positive, got {req.start_time_seconds}",
)
if req.start_time_seconds > 18000:
    raise HTTPException(
        status_code=400,
        detail="start_time_seconds exceeds maximum 18000s (5 hours)",
    )
logger.info(
    "Padding request: track %d, delay=%.3fs", req.track_index, req.start_time_seconds
)
temp_dir = tempfile.mkdtemp()
input_path = None
output_path = None
try:
    # Download source audio
    logger.info("Downloading track for padding")
    response = requests.get(req.track_url, stream=True, timeout=S3_TIMEOUT)
    response.raise_for_status()
    input_path = os.path.join(temp_dir, "track.webm")
    total_bytes = 0
    with open(input_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
                total_bytes += len(chunk)
    logger.info("Track downloaded: %d bytes", total_bytes)
    # Apply padding using PyAV
    output_path = os.path.join(temp_dir, "padded.webm")
    delay_ms = math.floor(req.start_time_seconds * 1000)
    logger.info("Padding track %d with %dms delay using PyAV", req.track_index, delay_ms)
    in_container = av.open(input_path)
    in_stream = next((s for s in in_container.streams if s.type == "audio"), None)
    if in_stream is None:
        in_container.close()
        raise HTTPException(status_code=400, detail="No audio stream in input")
    with av.open(output_path, "w", format="webm") as out_container:
        out_stream = out_container.add_stream("libopus", rate=OPUS_STANDARD_SAMPLE_RATE)
        out_stream.bit_rate = OPUS_DEFAULT_BIT_RATE
        graph = av.filter.Graph()
        abuf_args = (
            f"time_base=1/{OPUS_STANDARD_SAMPLE_RATE}:"
            f"sample_rate={OPUS_STANDARD_SAMPLE_RATE}:"
            f"sample_fmt=s16:"
            f"channel_layout=stereo"
        )
        src = graph.add("abuffer", args=abuf_args, name="src")
        aresample_f = graph.add("aresample", args="async=1", name="ares")
        delays_arg = f"{delay_ms}|{delay_ms}"
        adelay_f = graph.add(
            "adelay", args=f"delays={delays_arg}:all=1", name="delay"
        )
        sink = graph.add("abuffersink", name="sink")
        src.link_to(aresample_f)
        aresample_f.link_to(adelay_f)
        adelay_f.link_to(sink)
        graph.configure()
        resampler = AudioResampler(
            format="s16", layout="stereo", rate=OPUS_STANDARD_SAMPLE_RATE
        )
        for frame in in_container.decode(in_stream):
            out_frames = resampler.resample(frame) or []
            for rframe in out_frames:
                rframe.sample_rate = OPUS_STANDARD_SAMPLE_RATE
                rframe.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
                src.push(rframe)
                while True:
                    try:
                        f_out = sink.pull()
                    except Exception:
                        break
                    f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
                    f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
                    for packet in out_stream.encode(f_out):
                        out_container.mux(packet)
        # Flush filter graph
        src.push(None)
        while True:
            try:
                f_out = sink.pull()
            except Exception:
                break
            f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
            f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
            for packet in out_stream.encode(f_out):
                out_container.mux(packet)
        # Flush encoder
        for packet in out_stream.encode(None):
            out_container.mux(packet)
    in_container.close()
    file_size = os.path.getsize(output_path)
    logger.info("Padding complete: %d bytes", file_size)
    # Upload padded track
    logger.info("Uploading padded track to S3")
    with open(output_path, "rb") as f:
        upload_response = requests.put(req.output_url, data=f, timeout=S3_TIMEOUT)
    upload_response.raise_for_status()
    logger.info("Upload complete: %d bytes", file_size)
    return PaddingResponse(size=file_size)
except HTTPException:
    raise
except Exception as e:
    logger.error("Padding failed for track %d: %s", req.track_index, e, exc_info=True)
    raise HTTPException(status_code=500, detail=f"Padding failed: {e}") from e
finally:
    if input_path and os.path.exists(input_path):
        try:
            os.unlink(input_path)
        except Exception as e:
            logger.warning("Failed to cleanup input file: %s", e)
    if output_path and os.path.exists(output_path):
        try:
            os.unlink(output_path)
        except Exception as e:
            logger.warning("Failed to cleanup output file: %s", e)
    try:
        os.rmdir(temp_dir)
    except Exception as e:
        logger.warning("Failed to cleanup temp directory: %s", e)
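The timing-sensitive part of this endpoint is the `adelay` argument string built from `start_time_seconds`. A standalone sketch of that construction, mirroring the code above (the helper name is illustrative, not part of the endpoint):

```python
import math

def build_adelay_args(start_time_seconds: float) -> str:
    """Mirror the endpoint's delay computation: milliseconds are
    floored, the same delay is applied to both stereo channels,
    and all=1 delays any additional channels as well."""
    delay_ms = math.floor(start_time_seconds * 1000)
    return f"delays={delay_ms}|{delay_ms}:all=1"

print(build_adelay_args(2.5))     # delays=2500|2500:all=1
print(build_adelay_args(0.0334))  # delays=33|33:all=1
```

Flooring rather than rounding keeps the inserted silence from overshooting the track's true start offset by a millisecond.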

View File

@@ -16,4 +16,5 @@ dependencies = [
"sentencepiece",
"pyannote.audio==3.1.0",
"torchaudio>=2.3.0",
"av>=13.1.0",
]

View File

@@ -726,7 +726,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/44/69/9b804adb5fd0671f367781560eb5eb586c4d495277c93bde4307b9e28068/greenlet-3.2.4-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:3b67ca49f54cede0186854a008109d6ee71f66bd57bb36abd6d0a0267b540cdd", size = 274079, upload-time = "2025-08-07T13:15:45.033Z" },
{ url = "https://files.pythonhosted.org/packages/46/e9/d2a80c99f19a153eff70bc451ab78615583b8dac0754cfb942223d2c1a0d/greenlet-3.2.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddf9164e7a5b08e9d22511526865780a576f19ddd00d62f8a665949327fde8bb", size = 640997, upload-time = "2025-08-07T13:42:56.234Z" },
{ url = "https://files.pythonhosted.org/packages/3b/16/035dcfcc48715ccd345f3a93183267167cdd162ad123cd93067d86f27ce4/greenlet-3.2.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f28588772bb5fb869a8eb331374ec06f24a83a9c25bfa1f38b6993afe9c1e968", size = 655185, upload-time = "2025-08-07T13:45:27.624Z" },
{ url = "https://files.pythonhosted.org/packages/31/da/0386695eef69ffae1ad726881571dfe28b41970173947e7c558d9998de0f/greenlet-3.2.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:5c9320971821a7cb77cfab8d956fa8e39cd07ca44b6070db358ceb7f8797c8c9", size = 649926, upload-time = "2025-08-07T13:53:15.251Z" },
{ url = "https://files.pythonhosted.org/packages/68/88/69bf19fd4dc19981928ceacbc5fd4bb6bc2215d53199e367832e98d1d8fe/greenlet-3.2.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c60a6d84229b271d44b70fb6e5fa23781abb5d742af7b808ae3f6efd7c9c60f6", size = 651839, upload-time = "2025-08-07T13:18:30.281Z" },
{ url = "https://files.pythonhosted.org/packages/19/0d/6660d55f7373b2ff8152401a83e02084956da23ae58cddbfb0b330978fe9/greenlet-3.2.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b3812d8d0c9579967815af437d96623f45c0f2ae5f04e366de62a12d83a8fb0", size = 607586, upload-time = "2025-08-07T13:18:28.544Z" },
{ url = "https://files.pythonhosted.org/packages/8e/1a/c953fdedd22d81ee4629afbb38d2f9d71e37d23caace44775a3a969147d4/greenlet-3.2.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:abbf57b5a870d30c4675928c37278493044d7c14378350b3aa5d484fa65575f0", size = 1123281, upload-time = "2025-08-07T13:42:39.858Z" },
@@ -737,7 +736,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/49/e8/58c7f85958bda41dafea50497cbd59738c5c43dbbea5ee83d651234398f4/greenlet-3.2.4-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31", size = 272814, upload-time = "2025-08-07T13:15:50.011Z" },
{ url = "https://files.pythonhosted.org/packages/62/dd/b9f59862e9e257a16e4e610480cfffd29e3fae018a68c2332090b53aac3d/greenlet-3.2.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945", size = 641073, upload-time = "2025-08-07T13:42:57.23Z" },
{ url = "https://files.pythonhosted.org/packages/f7/0b/bc13f787394920b23073ca3b6c4a7a21396301ed75a655bcb47196b50e6e/greenlet-3.2.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:710638eb93b1fa52823aa91bf75326f9ecdfd5e0466f00789246a5280f4ba0fc", size = 655191, upload-time = "2025-08-07T13:45:29.752Z" },
{ url = "https://files.pythonhosted.org/packages/f2/d6/6adde57d1345a8d0f14d31e4ab9c23cfe8e2cd39c3baf7674b4b0338d266/greenlet-3.2.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:c5111ccdc9c88f423426df3fd1811bfc40ed66264d35aa373420a34377efc98a", size = 649516, upload-time = "2025-08-07T13:53:16.314Z" },
{ url = "https://files.pythonhosted.org/packages/7f/3b/3a3328a788d4a473889a2d403199932be55b1b0060f4ddd96ee7cdfcad10/greenlet-3.2.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d76383238584e9711e20ebe14db6c88ddcedc1829a9ad31a584389463b5aa504", size = 652169, upload-time = "2025-08-07T13:18:32.861Z" },
{ url = "https://files.pythonhosted.org/packages/ee/43/3cecdc0349359e1a527cbf2e3e28e5f8f06d3343aaf82ca13437a9aa290f/greenlet-3.2.4-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671", size = 610497, upload-time = "2025-08-07T13:18:31.636Z" },
{ url = "https://files.pythonhosted.org/packages/b8/19/06b6cf5d604e2c382a6f31cafafd6f33d5dea706f4db7bdab184bad2b21d/greenlet-3.2.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b", size = 1121662, upload-time = "2025-08-07T13:42:41.117Z" },
@@ -748,7 +746,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/22/5c/85273fd7cc388285632b0498dbbab97596e04b154933dfe0f3e68156c68c/greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0", size = 273586, upload-time = "2025-08-07T13:16:08.004Z" },
{ url = "https://files.pythonhosted.org/packages/d1/75/10aeeaa3da9332c2e761e4c50d4c3556c21113ee3f0afa2cf5769946f7a3/greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f", size = 686346, upload-time = "2025-08-07T13:42:59.944Z" },
{ url = "https://files.pythonhosted.org/packages/c0/aa/687d6b12ffb505a4447567d1f3abea23bd20e73a5bed63871178e0831b7a/greenlet-3.2.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:c17b6b34111ea72fc5a4e4beec9711d2226285f0386ea83477cbb97c30a3f3a5", size = 699218, upload-time = "2025-08-07T13:45:30.969Z" },
{ url = "https://files.pythonhosted.org/packages/dc/8b/29aae55436521f1d6f8ff4e12fb676f3400de7fcf27fccd1d4d17fd8fecd/greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1", size = 694659, upload-time = "2025-08-07T13:53:17.759Z" },
{ url = "https://files.pythonhosted.org/packages/92/2e/ea25914b1ebfde93b6fc4ff46d6864564fba59024e928bdc7de475affc25/greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735", size = 695355, upload-time = "2025-08-07T13:18:34.517Z" },
{ url = "https://files.pythonhosted.org/packages/72/60/fc56c62046ec17f6b0d3060564562c64c862948c9d4bc8aa807cf5bd74f4/greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337", size = 657512, upload-time = "2025-08-07T13:18:33.969Z" },
{ url = "https://files.pythonhosted.org/packages/23/6e/74407aed965a4ab6ddd93a7ded3180b730d281c77b765788419484cdfeef/greenlet-3.2.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269", size = 1612508, upload-time = "2025-11-04T12:42:23.427Z" },
@@ -2072,6 +2069,7 @@ name = "reflector-gpu"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "av" },
{ name = "fastapi", extra = ["standard"] },
{ name = "faster-whisper" },
{ name = "librosa" },
@@ -2087,6 +2085,7 @@ dependencies = [
[package.metadata]
requires-dist = [
{ name = "av", specifier = ">=13.1.0" },
{ name = "fastapi", extras = ["standard"], specifier = ">=0.116.1" },
{ name = "faster-whisper", specifier = ">=1.1.0" },
{ name = "librosa", specifier = "==0.10.1" },

View File

@@ -34,6 +34,10 @@
# ./scripts/setup-selfhosted.sh --gpu --garage --caddy
# ./scripts/setup-selfhosted.sh --cpu
#
# The script auto-detects Daily.co (DAILY_API_KEY) and Whereby (WHEREBY_API_KEY)
# from server/.env. If Daily.co is configured, Hatchet workflow services are
# started automatically for multitrack recording processing.
#
# Idempotent — safe to re-run at any time.
#
set -euo pipefail
@@ -427,6 +431,8 @@ step_server_env() {
env_set "$SERVER_ENV" "DIARIZATION_URL" "http://transcription:8000"
env_set "$SERVER_ENV" "TRANSLATION_BACKEND" "modal"
env_set "$SERVER_ENV" "TRANSLATE_URL" "http://transcription:8000"
env_set "$SERVER_ENV" "PADDING_BACKEND" "modal"
env_set "$SERVER_ENV" "PADDING_URL" "http://transcription:8000"
# HuggingFace token for gated models (pyannote diarization)
# Written to root .env so docker compose picks it up for gpu/cpu containers
@@ -440,7 +446,9 @@ step_server_env() {
warn "HF_TOKEN not set. Diarization will use a public model fallback."
warn "For best results, get a token at https://huggingface.co/settings/tokens"
warn "and accept pyannote licenses at https://huggingface.co/pyannote/speaker-diarization-3.1"
read -rp " HuggingFace token (or press Enter to skip): " current_hf_token
if [[ -t 0 ]]; then
read -rp " HuggingFace token (or press Enter to skip): " current_hf_token
fi
fi
if [[ -n "$current_hf_token" ]]; then
touch "$root_env"
@@ -466,7 +474,7 @@ step_server_env() {
if env_has_key "$SERVER_ENV" "LLM_URL"; then
current_llm_url=$(env_get "$SERVER_ENV" "LLM_URL")
fi
if [[ -z "$current_llm_url" ]] || [[ "$current_llm_url" == "http://host.docker.internal"* ]]; then
if [[ -z "$current_llm_url" ]]; then
warn "LLM not configured. Summarization and topic detection will NOT work."
warn "Edit server/.env and set LLM_URL, LLM_API_KEY, LLM_MODEL"
warn "Example: LLM_URL=https://api.openai.com/v1 LLM_MODEL=gpt-4o-mini"
@@ -475,6 +483,20 @@ step_server_env() {
fi
fi
# CPU mode: increase file processing timeouts (default 600s is too short for long audio on CPU)
if [[ "$MODEL_MODE" == "cpu" ]]; then
env_set "$SERVER_ENV" "TRANSCRIPT_FILE_TIMEOUT" "3600"
env_set "$SERVER_ENV" "DIARIZATION_FILE_TIMEOUT" "3600"
ok "CPU mode — file processing timeouts set to 3600s (1 hour)"
fi
# If Daily.co is manually configured, ensure Hatchet connectivity vars are set
if env_has_key "$SERVER_ENV" "DAILY_API_KEY" && [[ -n "$(env_get "$SERVER_ENV" "DAILY_API_KEY")" ]]; then
env_set "$SERVER_ENV" "HATCHET_CLIENT_SERVER_URL" "http://hatchet:8888"
env_set "$SERVER_ENV" "HATCHET_CLIENT_HOST_PORT" "hatchet:7077"
ok "Daily.co detected — Hatchet connectivity configured"
fi
ok "server/.env ready"
}
@@ -535,6 +557,19 @@ step_www_env() {
fi
fi
# Enable rooms if any video platform is configured in server/.env
local _daily_key="" _whereby_key=""
if env_has_key "$SERVER_ENV" "DAILY_API_KEY"; then
_daily_key=$(env_get "$SERVER_ENV" "DAILY_API_KEY")
fi
if env_has_key "$SERVER_ENV" "WHEREBY_API_KEY"; then
_whereby_key=$(env_get "$SERVER_ENV" "WHEREBY_API_KEY")
fi
if [[ -n "$_daily_key" ]] || [[ -n "$_whereby_key" ]]; then
env_set "$WWW_ENV" "FEATURE_ROOMS" "true"
ok "Rooms feature enabled (video platform configured)"
fi
ok "www/.env ready (URL=$base_url)"
}
@@ -739,6 +774,23 @@ CADDYEOF
else
ok "Caddyfile already exists"
fi
# Add Hatchet dashboard route if Daily.co is detected
if [[ "$DAILY_DETECTED" == "true" ]]; then
if ! grep -q "hatchet" "$caddyfile" 2>/dev/null; then
cat >> "$caddyfile" << CADDYEOF
# Hatchet workflow dashboard (Daily.co multitrack processing)
:8888 {
tls internal
reverse_proxy hatchet:8888
}
CADDYEOF
ok "Added Hatchet dashboard route to Caddyfile (port 8888)"
else
ok "Hatchet dashboard route already in Caddyfile"
fi
fi
}
# =========================================================
@@ -766,6 +818,37 @@ step_services() {
compose_cmd pull server web || warn "Pull failed — using cached images"
fi
# Build hatchet workers if Daily.co is configured (same backend image)
if [[ "$DAILY_DETECTED" == "true" ]] && [[ "$BUILD_IMAGES" == "true" ]]; then
info "Building Hatchet worker images..."
compose_cmd build hatchet-worker-cpu hatchet-worker-llm
ok "Hatchet worker images built"
fi
# Ensure hatchet database exists before starting hatchet (init-hatchet-db.sql only runs on fresh postgres volumes)
if [[ "$DAILY_DETECTED" == "true" ]]; then
info "Ensuring postgres is running for Hatchet database setup..."
compose_cmd up -d postgres
local pg_ready=false
for i in $(seq 1 30); do
if compose_cmd exec -T postgres pg_isready -U reflector > /dev/null 2>&1; then
pg_ready=true
break
fi
sleep 2
done
if [[ "$pg_ready" == "true" ]]; then
compose_cmd exec -T postgres psql -U reflector -tc \
"SELECT 1 FROM pg_database WHERE datname = 'hatchet'" 2>/dev/null \
| grep -q 1 \
|| compose_cmd exec -T postgres psql -U reflector -c "CREATE DATABASE hatchet" 2>/dev/null \
|| true
ok "Hatchet database ready"
else
warn "Postgres not ready — hatchet database may need to be created manually"
fi
fi
# Start all services
compose_cmd up -d
ok "Containers started"
@@ -894,6 +977,26 @@ step_health() {
fi
fi
# Hatchet (if Daily.co detected)
if [[ "$DAILY_DETECTED" == "true" ]]; then
info "Waiting for Hatchet workflow engine..."
local hatchet_ok=false
for i in $(seq 1 60); do
if curl -sf http://localhost:8888/api/live > /dev/null 2>&1; then
hatchet_ok=true
break
fi
echo -ne "\r Waiting for Hatchet... ($i/60)"
sleep 3
done
echo ""
if [[ "$hatchet_ok" == "true" ]]; then
ok "Hatchet workflow engine healthy"
else
warn "Hatchet not ready yet. Check: docker compose logs hatchet"
fi
fi
# LLM warning for non-Ollama modes
if [[ "$USES_OLLAMA" == "false" ]]; then
local llm_url=""
@@ -911,6 +1014,71 @@ step_health() {
fi
}
# =========================================================
# Step 8: Hatchet token generation (Daily.co only)
# =========================================================
step_hatchet_token() {
if [[ "$DAILY_DETECTED" != "true" ]]; then
return
fi
# Skip if token already set
if env_has_key "$SERVER_ENV" "HATCHET_CLIENT_TOKEN" && [[ -n "$(env_get "$SERVER_ENV" "HATCHET_CLIENT_TOKEN")" ]]; then
ok "HATCHET_CLIENT_TOKEN already set — skipping generation"
return
fi
info "Step 8: Generating Hatchet API token"
# Wait for hatchet to be healthy
local hatchet_ok=false
for i in $(seq 1 60); do
if curl -sf http://localhost:8888/api/live > /dev/null 2>&1; then
hatchet_ok=true
break
fi
echo -ne "\r Waiting for Hatchet API... ($i/60)"
sleep 3
done
echo ""
if [[ "$hatchet_ok" != "true" ]]; then
err "Hatchet not responding — cannot generate token"
err "Check: docker compose logs hatchet"
return
fi
# Get tenant ID from hatchet database
local tenant_id
tenant_id=$(compose_cmd exec -T postgres psql -U reflector -d hatchet -t -c \
"SELECT id FROM \"Tenant\" WHERE slug = 'default';" 2>/dev/null | tr -d ' \n')
if [[ -z "$tenant_id" ]]; then
err "Could not find default tenant in Hatchet database"
err "Hatchet may still be initializing. Try re-running the script."
return
fi
# Generate token via hatchet-admin
local token
token=$(compose_cmd exec -T hatchet /hatchet-admin token create \
--config /config --tenant-id "$tenant_id" 2>/dev/null | tr -d '\n')
if [[ -z "$token" ]]; then
err "Failed to generate Hatchet token"
err "Try generating manually: see server/README.md"
return
fi
env_set "$SERVER_ENV" "HATCHET_CLIENT_TOKEN" "$token"
ok "HATCHET_CLIENT_TOKEN generated and saved to server/.env"
# Restart services that need the token
info "Restarting services with new Hatchet token..."
compose_cmd restart server worker hatchet-worker-cpu hatchet-worker-llm
ok "Services restarted with Hatchet token"
}
# =========================================================
# Main
# =========================================================
@@ -957,6 +1125,48 @@ main() {
echo ""
step_server_env
echo ""
# Auto-detect video platforms from server/.env (after step_server_env so file exists)
DAILY_DETECTED=false
WHEREBY_DETECTED=false
if env_has_key "$SERVER_ENV" "DAILY_API_KEY" && [[ -n "$(env_get "$SERVER_ENV" "DAILY_API_KEY")" ]]; then
DAILY_DETECTED=true
fi
if env_has_key "$SERVER_ENV" "WHEREBY_API_KEY" && [[ -n "$(env_get "$SERVER_ENV" "WHEREBY_API_KEY")" ]]; then
WHEREBY_DETECTED=true
fi
ANY_PLATFORM_DETECTED=false
[[ "$DAILY_DETECTED" == "true" || "$WHEREBY_DETECTED" == "true" ]] && ANY_PLATFORM_DETECTED=true
# Conditional profile activation for Daily.co
if [[ "$DAILY_DETECTED" == "true" ]]; then
COMPOSE_PROFILES+=("dailyco")
ok "Daily.co detected — enabling Hatchet workflow services"
fi
# Generate .env.hatchet for hatchet dashboard config
if [[ "$DAILY_DETECTED" == "true" ]]; then
local hatchet_server_url hatchet_cookie_domain
if [[ -n "$CUSTOM_DOMAIN" ]]; then
hatchet_server_url="https://${CUSTOM_DOMAIN}:8888"
hatchet_cookie_domain="$CUSTOM_DOMAIN"
elif [[ -n "$PRIMARY_IP" ]]; then
hatchet_server_url="http://${PRIMARY_IP}:8888"
hatchet_cookie_domain="$PRIMARY_IP"
else
hatchet_server_url="http://localhost:8888"
hatchet_cookie_domain="localhost"
fi
cat > "$ROOT_DIR/.env.hatchet" << EOF
SERVER_URL=$hatchet_server_url
SERVER_AUTH_COOKIE_DOMAIN=$hatchet_cookie_domain
EOF
ok "Generated .env.hatchet (dashboard URL=$hatchet_server_url)"
else
# Create empty .env.hatchet so compose doesn't fail if dailyco profile is ever activated manually
touch "$ROOT_DIR/.env.hatchet"
fi
step_www_env
echo ""
step_storage
@@ -966,6 +1176,8 @@ main() {
step_services
echo ""
step_health
echo ""
step_hatchet_token
echo ""
echo "=========================================="
@@ -995,6 +1207,9 @@ main() {
[[ "$USE_GARAGE" != "true" ]] && echo " Storage: External S3"
[[ "$USES_OLLAMA" == "true" ]] && echo " LLM: Ollama ($OLLAMA_MODEL) for summarization/topics"
[[ "$USES_OLLAMA" != "true" ]] && echo " LLM: External (configure in server/.env)"
[[ "$DAILY_DETECTED" == "true" ]] && echo " Video: Daily.co (live rooms + multitrack processing via Hatchet)"
[[ "$WHEREBY_DETECTED" == "true" ]] && echo " Video: Whereby (live rooms)"
[[ "$ANY_PLATFORM_DETECTED" != "true" ]] && echo " Video: None (rooms disabled)"
echo ""
echo " To stop: docker compose -f docker-compose.selfhosted.yml down"
echo " To re-run: ./scripts/setup-selfhosted.sh $*"

View File

@@ -86,6 +86,18 @@ LLM_API_KEY=not-needed
## Context size for summary generation (tokens)
LLM_CONTEXT_WINDOW=16000
## =======================================================
## Audio Padding
##
## backends: local (in-process PyAV), modal (HTTP API client)
## Default is "local" — no external service needed.
## Set to "modal" when using Modal.com or the self-hosted gpu/self_hosted/ container.
## =======================================================
#PADDING_BACKEND=local
#PADDING_BACKEND=modal
#PADDING_URL=https://xxxxx--reflector-padding-web.modal.run
#PADDING_MODAL_API_KEY=xxxxx
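The `AudioPaddingAutoProcessor` introduced elsewhere in this change presumably dispatches on these settings; a hypothetical sketch of that selection logic (processor names and the `env` dict are illustrative, not the real reflector API):

```python
def select_padding_backend(env: dict) -> str:
    """Hypothetical dispatch on PADDING_BACKEND, mirroring the
    documented options: "local" (in-process PyAV, the default)
    or "modal" (HTTP API client, requires PADDING_URL)."""
    backend = env.get("PADDING_BACKEND", "local")
    if backend == "local":
        return "AudioPaddingLocalProcessor"
    if backend == "modal":
        if not env.get("PADDING_URL"):
            raise ValueError("PADDING_URL is required when PADDING_BACKEND=modal")
        return "AudioPaddingModalProcessor"
    raise ValueError(f"unknown PADDING_BACKEND: {backend}")
```

With no keys set, the local backend is chosen, which is why both lines above are commented out by default.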
## =======================================================
## Diarization
##
@@ -137,6 +149,10 @@ TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
#DAILYCO_STORAGE_AWS_ROLE_ARN=... # IAM role ARN for Daily.co S3 access
#DAILYCO_STORAGE_AWS_BUCKET_NAME=reflector-dailyco
#DAILYCO_STORAGE_AWS_REGION=us-west-2
# Worker credentials for reading/deleting from Daily's recording bucket
# Required when transcript storage is separate from Daily's bucket (e.g., selfhosted with Garage)
#DAILYCO_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
#DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
## Whereby (optional separate bucket)
#WHEREBY_STORAGE_AWS_BUCKET_NAME=reflector-whereby

View File

@@ -47,6 +47,9 @@ DIARIZATION_URL=http://transcription:8000
TRANSLATION_BACKEND=modal
TRANSLATE_URL=http://transcription:8000
PADDING_BACKEND=modal
PADDING_URL=http://transcription:8000
# HuggingFace token — optional, for gated models (e.g. pyannote).
# Falls back to public S3 model bundle if not set.
# HF_TOKEN=hf_xxxxx
@@ -93,15 +96,42 @@ TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
# =======================================================
# Daily.co Live Rooms (Optional)
# Enable real-time meeting rooms with Daily.co integration.
# Requires a Daily.co account: https://www.daily.co/
# Configure these BEFORE running setup-selfhosted.sh; the script
# will auto-detect them and start Hatchet workflow services.
#
# Prerequisites:
# 1. Daily.co account: https://www.daily.co/
# 2. API key: Dashboard → Developers → API Keys
# 3. S3 bucket for recordings: https://docs.daily.co/guides/products/live-streaming-recording/storing-recordings-in-a-custom-s3-bucket
# 4. IAM role ARN for Daily.co to write recordings to your bucket
#
# After configuring, run: ./scripts/setup-selfhosted.sh <your-flags>
# The script will detect DAILY_API_KEY and automatically:
# - Start Hatchet workflow engine + CPU/LLM workers
# - Generate a Hatchet API token
# - Enable FEATURE_ROOMS in the frontend
# =======================================================
# DEFAULT_VIDEO_PLATFORM=daily
# DAILY_API_KEY=your-daily-api-key
# DAILY_SUBDOMAIN=your-subdomain
# DAILY_WEBHOOK_SECRET=your-daily-webhook-secret
# DEFAULT_VIDEO_PLATFORM=daily
# DAILYCO_STORAGE_AWS_BUCKET_NAME=reflector-dailyco
# DAILYCO_STORAGE_AWS_REGION=us-east-1
# DAILYCO_STORAGE_AWS_ROLE_ARN=arn:aws:iam::role/DailyCoAccess
# Worker credentials for reading/deleting from Daily's recording bucket
# Required when transcript storage is separate from Daily's bucket (e.g., selfhosted with Garage)
# DAILYCO_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
# DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
# DAILY_WEBHOOK_SECRET=your-daily-webhook-secret # optional, for faster recording discovery
# =======================================================
# Hatchet Workflow Engine (Auto-configured for Daily.co)
# Required for Daily.co multitrack recording processing.
# The setup script generates HATCHET_CLIENT_TOKEN automatically.
# Do not set these manually unless you know what you're doing.
# =======================================================
# HATCHET_CLIENT_TOKEN=<auto-generated-by-script>
# HATCHET_CLIENT_SERVER_URL=http://hatchet:8888
# HATCHET_CLIENT_HOST_PORT=hatchet:7077
# =======================================================
# Feature Flags

View File

@@ -27,7 +27,7 @@ dependencies = [
"protobuf>=4.24.3",
"celery>=5.3.4",
"redis>=5.0.1",
"python-jose[cryptography]>=3.3.0",
"pyjwt[crypto]>=2.8.0",
"python-multipart>=0.0.6",
"transformers>=4.36.2",
"jsonschema>=4.23.0",

View File

@@ -0,0 +1,13 @@
"""
Suppress known dependency warnings. Import this before any reflector/hatchet_sdk
imports that pull in pydantic (e.g. llama_index) to hide UnsupportedFieldAttributeWarning
about validate_default.
"""
import warnings
warnings.filterwarnings(
    "ignore",
    message=".*validate_default.*",
    category=UserWarning,
)
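The effect of this filter can be checked in isolation. A small demonstration (the suppression is re-applied inside the catch block because `simplefilter` resets the filter list):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Same filter as the module above: ignore any UserWarning whose
    # message mentions validate_default
    warnings.filterwarnings(
        "ignore",
        message=".*validate_default.*",
        category=UserWarning,
    )
    warnings.warn("Field uses validate_default in an unsupported way", UserWarning)
    warnings.warn("unrelated warning", UserWarning)

# Only the non-matching warning is recorded
print([str(w.message) for w in caught])
```

Note that the `message` pattern is matched against the start of the warning text, so the leading `.*` matters when the phrase appears mid-message.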

View File

@@ -4,8 +4,8 @@ from fastapi import Depends, HTTPException
if TYPE_CHECKING:
from fastapi import WebSocket
import jwt
from fastapi.security import APIKeyHeader, OAuth2PasswordBearer
from jose import JWTError, jwt
from pydantic import BaseModel
from reflector.db.user_api_keys import user_api_keys_controller
@@ -54,7 +54,7 @@ class JWTAuth:
audience=jwt_audience,
)
return payload
except JWTError as e:
except jwt.PyJWTError as e:
logger.error(f"JWT error: {e}")
raise
@@ -94,7 +94,7 @@ async def _authenticate_user(
)
user_infos.append(UserInfo(sub=user.id, email=email))
except JWTError as e:
except jwt.PyJWTError as e:
logger.error(f"JWT error: {e}")
raise HTTPException(status_code=401, detail="Invalid authentication")

View File

@@ -9,9 +9,9 @@ from collections import defaultdict
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, Annotated, Optional
import jwt
from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.security import APIKeyHeader, OAuth2PasswordBearer
from jose import JWTError, jwt
from pydantic import BaseModel
from reflector.auth.password_utils import verify_password
@@ -110,7 +110,7 @@ async def _authenticate_user(
user_id = payload["sub"]
email = payload.get("email")
user_infos.append(UserInfo(sub=user_id, email=email))
except JWTError as e:
except jwt.PyJWTError as e:
logger.error(f"JWT error: {e}")
raise HTTPException(status_code=401, detail="Invalid authentication")

View File

@@ -7,6 +7,7 @@ Configuration:
- Worker affinity: pool=cpu-heavy
"""
import reflector._warnings_filter # noqa: F401 -- side effect: suppress pydantic validate_default warning
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
daily_multitrack_pipeline,

View File

@@ -5,6 +5,7 @@ Handles: all tasks except mixdown_tracks (transcription, LLM inference, orchestr
import asyncio
import reflector._warnings_filter # noqa: F401 -- side effect: suppress pydantic validate_default warning
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
daily_multitrack_pipeline,

View File

@@ -90,7 +90,6 @@ from reflector.processors.summary.summary_builder import SummaryBuilder
from reflector.processors.types import TitleSummary, Word
from reflector.processors.types import Transcript as TranscriptType
from reflector.settings import settings
from reflector.storage.storage_aws import AwsStorage
from reflector.utils.audio_constants import (
PRESIGNED_URL_EXPIRATION_SECONDS,
WAVEFORM_SEGMENTS,
@@ -117,6 +116,7 @@ class PipelineInput(BaseModel):
bucket_name: NonEmptyString
transcript_id: NonEmptyString
room_id: NonEmptyString | None = None
source_platform: str = "daily"
hatchet = HatchetClientManager.get_client()
@@ -170,15 +170,10 @@ async def set_workflow_error_status(transcript_id: NonEmptyString) -> bool:
def _spawn_storage():
"""Create fresh storage instance."""
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
return AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
)
"""Create fresh storage instance for writing to our transcript bucket."""
from reflector.storage import get_transcripts_storage # noqa: PLC0415
return get_transcripts_storage()
class Loggable(Protocol):
@@ -434,6 +429,7 @@ async def process_tracks(input: PipelineInput, ctx: Context) -> ProcessTracksRes
bucket_name=input.bucket_name,
transcript_id=input.transcript_id,
language=source_language,
source_platform=input.source_platform,
)
)
for i, track in enumerate(input.tracks)
@@ -1195,7 +1191,10 @@ async def cleanup_consent(input: PipelineInput, ctx: Context) -> ConsentResult:
)
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.storage import get_transcripts_storage # noqa: PLC0415
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
@@ -1245,7 +1244,7 @@ async def cleanup_consent(input: PipelineInput, ctx: Context) -> ConsentResult:
deletion_errors = []
if input_track_keys and input.bucket_name:
master_storage = get_transcripts_storage()
master_storage = get_source_storage(input.source_platform)
for key in input_track_keys:
try:
await master_storage.delete_file(key, bucket=input.bucket_name)
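The read/write split used above — platform-specific credentials for source tracks, our own bucket for outputs — can be sketched as a small factory. This is a hypothetical stand-in for `reflector.storage`, with invented settings values; it shows only the dispatch shape, not the real implementation:

```python
from dataclasses import dataclass

# Stand-ins for reflector.settings values (illustrative only)
SETTINGS = {
    "TRANSCRIPT_STORAGE_AWS_BUCKET_NAME": "reflector-transcripts",
    "DAILYCO_STORAGE_AWS_BUCKET_NAME": "reflector-dailyco",
    "WHEREBY_STORAGE_AWS_BUCKET_NAME": "reflector-whereby",
}

@dataclass
class Storage:
    bucket: str

def get_transcripts_storage() -> Storage:
    """Writes always target our own transcript bucket."""
    return Storage(SETTINGS["TRANSCRIPT_STORAGE_AWS_BUCKET_NAME"])

_SOURCE_BUCKET_KEYS = {
    "daily": "DAILYCO_STORAGE_AWS_BUCKET_NAME",
    "whereby": "WHEREBY_STORAGE_AWS_BUCKET_NAME",
}

def get_source_storage(platform: str) -> Storage:
    """Reads and deletes use the recording platform's bucket."""
    try:
        return Storage(SETTINGS[_SOURCE_BUCKET_KEYS[platform]])
    except KeyError:
        raise ValueError(f"unknown source platform: {platform}") from None
```

Keeping `source_platform` on the workflow input lets cleanup delete from Daily's bucket with the right credentials even when transcripts live in a separate store (e.g. self-hosted Garage).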

View File

@@ -24,6 +24,7 @@ class PaddingInput(BaseModel):
s3_key: str
bucket_name: str
transcript_id: str
source_platform: str = "daily"
hatchet = HatchetClientManager.get_client()
@@ -45,20 +46,14 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
)
try:
# Create fresh storage instance to avoid aioboto3 fork issues
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
source_url = await storage.get_file_url(
# Source reads: use platform-specific credentials
source_storage = get_source_storage(input.source_platform)
source_url = await source_storage.get_file_url(
input.s3_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
@@ -96,52 +91,28 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{input.track_index}.webm"
# Presign PUT URL for output (Modal will upload directly)
output_url = await storage.get_file_url(
# Output writes: use transcript storage (our own bucket)
output_storage = get_transcripts_storage()
output_url = await output_storage.get_file_url(
storage_path,
operation="put_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
import httpx # noqa: PLC0415
from reflector.processors.audio_padding_modal import ( # noqa: PLC0415
AudioPaddingModalProcessor,
from reflector.processors.audio_padding_auto import ( # noqa: PLC0415
AudioPaddingAutoProcessor,
)
try:
processor = AudioPaddingModalProcessor()
result = await processor.pad_track(
track_url=source_url,
output_url=output_url,
start_time_seconds=start_time_seconds,
track_index=input.track_index,
)
file_size = result.size
processor = AudioPaddingAutoProcessor()
result = await processor.pad_track(
track_url=source_url,
output_url=output_url,
start_time_seconds=start_time_seconds,
track_index=input.track_index,
)
file_size = result.size
ctx.log(f"pad_track: Modal returned size={file_size}")
except httpx.HTTPStatusError as e:
error_detail = e.response.text if hasattr(e.response, "text") else str(e)
logger.error(
"[Hatchet] Modal padding HTTP error",
transcript_id=input.transcript_id,
track_index=input.track_index,
status_code=e.response.status_code if hasattr(e, "response") else None,
error=error_detail,
exc_info=True,
)
raise Exception(
f"Modal padding failed: HTTP {e.response.status_code}"
) from e
except httpx.TimeoutException as e:
logger.error(
"[Hatchet] Modal padding timeout",
transcript_id=input.transcript_id,
track_index=input.track_index,
error=str(e),
exc_info=True,
)
raise Exception("Modal padding timeout") from e
ctx.log(f"pad_track: padding returned size={file_size}")
logger.info(
"[Hatchet] pad_track complete",


@@ -36,6 +36,7 @@ class TrackInput(BaseModel):
bucket_name: str
transcript_id: str
language: str = "en"
source_platform: str = "daily"
hatchet = HatchetClientManager.get_client()
@@ -59,20 +60,14 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
)
try:
# Create fresh storage instance to avoid aioboto3 fork issues
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
source_url = await storage.get_file_url(
# Source reads: use platform-specific credentials
source_storage = get_source_storage(input.source_platform)
source_url = await source_storage.get_file_url(
input.s3_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
@@ -99,18 +94,19 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{input.track_index}.webm"
# Presign PUT URL for output (Modal uploads directly)
output_url = await storage.get_file_url(
# Output writes: use transcript storage (our own bucket)
output_storage = get_transcripts_storage()
output_url = await output_storage.get_file_url(
storage_path,
operation="put_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
from reflector.processors.audio_padding_modal import ( # noqa: PLC0415
AudioPaddingModalProcessor,
from reflector.processors.audio_padding_auto import ( # noqa: PLC0415
AudioPaddingAutoProcessor,
)
processor = AudioPaddingModalProcessor()
processor = AudioPaddingAutoProcessor()
result = await processor.pad_track(
track_url=source_url,
output_url=output_url,
@@ -161,18 +157,18 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
raise ValueError("Missing padded_key from pad_track")
# Presign URL on demand (avoids stale URLs on workflow replay)
# TODO: replace direct AwsStorage construction with get_transcripts_storage() factory
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
aws_endpoint_url=settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL,
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
# If bucket_name is set, file is still in the platform's source bucket (no padding applied).
# If bucket_name is None, padded file was written to our transcript storage.
if bucket_name:
storage = get_source_storage(input.source_platform)
else:
storage = get_transcripts_storage()
audio_url = await storage.get_file_url(
padded_key,
operation="get_object",


@@ -0,0 +1,31 @@
import importlib
from reflector.settings import settings
class AudioPaddingAutoProcessor:
_registry = {}
@classmethod
def register(cls, name, kclass):
cls._registry[name] = kclass
def __new__(cls, name: str | None = None, **kwargs):
if name is None:
name = settings.PADDING_BACKEND
if name not in cls._registry:
module_name = f"reflector.processors.audio_padding_{name}"
importlib.import_module(module_name)
        # Gather backend-specific configuration: settings named
        # `PADDING_XXX_YYY` are passed to the constructor as `xxx_yyy`
config = {}
name_upper = name.upper()
settings_prefix = "PADDING_"
config_prefix = f"{settings_prefix}{name_upper}_"
for key, value in settings:
if key.startswith(config_prefix):
config_name = key[len(settings_prefix) :].lower()
config[config_name] = value
return cls._registry[name](**config | kwargs)
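The registry dispatch above can be exercised standalone. In this sketch, `FakeSettings` and `LocalBackend` are illustrative stand-ins for `reflector.settings` and a real backend, and the dynamic `importlib` loading is omitted:

```python
class FakeSettings:
    # Stand-in for reflector.settings; iteration yields (name, value) pairs
    # the way a pydantic BaseSettings instance does.
    PADDING_BACKEND = "local"
    PADDING_LOCAL_CHUNK_SIZE = 8192

    def __iter__(self):
        for key in dir(self):
            if key.isupper():
                yield key, getattr(self, key)


class AutoProcessor:
    _registry = {}

    @classmethod
    def register(cls, name, kclass):
        cls._registry[name] = kclass

    def __new__(cls, name=None, **kwargs):
        settings = FakeSettings()
        if name is None:
            name = settings.PADDING_BACKEND
        # PADDING_<NAME>_* settings become <name>_* constructor kwargs.
        config = {}
        config_prefix = f"PADDING_{name.upper()}_"
        for key, value in settings:
            if key.startswith(config_prefix):
                config[key[len("PADDING_"):].lower()] = value
        return cls._registry[name](**(config | kwargs))


class LocalBackend:
    def __init__(self, local_chunk_size=0):
        self.local_chunk_size = local_chunk_size


AutoProcessor.register("local", LocalBackend)

proc = AutoProcessor()
```

Because `__new__` returns an instance of a class that is not a subclass of `AutoProcessor`, Python skips `AutoProcessor.__init__`, so the factory hands back the backend instance directly.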


@@ -0,0 +1,133 @@
"""
Local audio padding processor using PyAV.
Pads audio tracks with silence directly in-process (no HTTP).
Reuses the shared PyAV utilities from reflector.utils.audio_padding.
"""
import asyncio
import os
import tempfile
import av
from reflector.logger import logger
from reflector.processors.audio_padding_auto import AudioPaddingAutoProcessor
from reflector.processors.audio_padding_modal import PaddingResponse
from reflector.utils.audio_padding import apply_audio_padding_to_file
S3_TIMEOUT = 60
class AudioPaddingLocalProcessor:
"""Audio padding processor using local PyAV (no HTTP backend)."""
async def pad_track(
self,
track_url: str,
output_url: str,
start_time_seconds: float,
track_index: int,
) -> PaddingResponse:
"""Pad audio track with silence locally via PyAV.
Args:
track_url: Presigned GET URL for source audio track
output_url: Presigned PUT URL for output WebM
start_time_seconds: Amount of silence to prepend
track_index: Track index for logging
"""
if not track_url:
raise ValueError("track_url cannot be empty")
if start_time_seconds <= 0:
raise ValueError(
f"start_time_seconds must be positive, got {start_time_seconds}"
)
log = logger.bind(track_index=track_index, padding_seconds=start_time_seconds)
log.info("Starting local PyAV padding")
        loop = asyncio.get_running_loop()
return await loop.run_in_executor(
None,
self._pad_track_blocking,
track_url,
output_url,
start_time_seconds,
track_index,
)
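The wrapper above is the standard pattern for pushing blocking work (requests, PyAV) off the event loop and into the default thread pool. A minimal standalone version, with a trivial function standing in for the download/pad/upload work:

```python
import asyncio


def blocking_double(x: int) -> int:
    # Stand-in for the blocking download/pad/upload work.
    return x * 2


async def pad() -> int:
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor.
    return await loop.run_in_executor(None, blocking_double, 21)


result = asyncio.run(pad())
```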
def _pad_track_blocking(
self,
track_url: str,
output_url: str,
start_time_seconds: float,
track_index: int,
) -> PaddingResponse:
"""Blocking padding work: download, pad with PyAV, upload."""
        import requests  # noqa: PLC0415
log = logger.bind(track_index=track_index, padding_seconds=start_time_seconds)
temp_dir = tempfile.mkdtemp()
input_path = None
output_path = None
try:
# Download source audio
log.info("Downloading track for local padding")
response = requests.get(track_url, stream=True, timeout=S3_TIMEOUT)
response.raise_for_status()
input_path = os.path.join(temp_dir, "track.webm")
total_bytes = 0
with open(input_path, "wb") as f:
for chunk in response.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
total_bytes += len(chunk)
log.info("Track downloaded", bytes=total_bytes)
# Apply padding using shared PyAV utility
output_path = os.path.join(temp_dir, "padded.webm")
with av.open(input_path) as in_container:
apply_audio_padding_to_file(
in_container,
output_path,
start_time_seconds,
track_index,
logger=logger,
)
file_size = os.path.getsize(output_path)
log.info("Local padding complete", size=file_size)
# Upload padded track
log.info("Uploading padded track to S3")
with open(output_path, "rb") as f:
upload_response = requests.put(output_url, data=f, timeout=S3_TIMEOUT)
upload_response.raise_for_status()
log.info("Upload complete", size=file_size)
return PaddingResponse(size=file_size)
except Exception as e:
log.error("Local padding failed", error=str(e), exc_info=True)
raise
finally:
if input_path and os.path.exists(input_path):
try:
os.unlink(input_path)
except Exception as e:
log.warning("Failed to cleanup input file", error=str(e))
if output_path and os.path.exists(output_path):
try:
os.unlink(output_path)
except Exception as e:
log.warning("Failed to cleanup output file", error=str(e))
try:
os.rmdir(temp_dir)
except Exception as e:
log.warning("Failed to cleanup temp directory", error=str(e))
AudioPaddingAutoProcessor.register("local", AudioPaddingLocalProcessor)
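The manual `finally`-block cleanup above could alternatively lean on `tempfile.TemporaryDirectory`, which removes the directory tree automatically when the context exits; a quick sketch:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as temp_dir:
    input_path = os.path.join(temp_dir, "track.webm")
    with open(input_path, "wb") as f:
        f.write(b"\x00" * 16)  # stand-in for the downloaded track
    exists_inside = os.path.exists(input_path)

# The directory and everything in it are gone once the block exits.
exists_after = os.path.exists(temp_dir)
```

The explicit `os.unlink`/`os.rmdir` version keeps going (with warnings) when one file fails to delete, which is a reasonable trade-off for long-running workers.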


@@ -10,6 +10,7 @@ from pydantic import BaseModel
from reflector.hatchet.constants import TIMEOUT_AUDIO
from reflector.logger import logger
from reflector.processors.audio_padding_auto import AudioPaddingAutoProcessor
class PaddingResponse(BaseModel):
@@ -111,3 +112,6 @@ class AudioPaddingModalProcessor:
except Exception as e:
log.error("Modal padding unexpected error", error=str(e), exc_info=True)
raise
AudioPaddingAutoProcessor.register("modal", AudioPaddingModalProcessor)


@@ -40,6 +40,7 @@ class MultitrackProcessingConfig:
track_keys: list[str]
recording_id: NonEmptyString | None = None
room_id: NonEmptyString | None = None
source_platform: str = "daily"
mode: Literal["multitrack"] = "multitrack"
@@ -256,6 +257,7 @@ async def dispatch_transcript_processing(
"bucket_name": config.bucket_name,
"transcript_id": config.transcript_id,
"room_id": config.room_id,
"source_platform": config.source_platform,
},
additional_metadata={
"transcript_id": config.transcript_id,


@@ -73,6 +73,9 @@ class Settings(BaseSettings):
DAILYCO_STORAGE_AWS_BUCKET_NAME: str | None = None
DAILYCO_STORAGE_AWS_REGION: str | None = None
DAILYCO_STORAGE_AWS_ROLE_ARN: str | None = None
# Worker credentials for reading/deleting from Daily's recording bucket
DAILYCO_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None
# Translate into the target language
TRANSLATION_BACKEND: str = "passthrough"
@@ -106,7 +109,11 @@ class Settings(BaseSettings):
# Diarization: modal backend
DIARIZATION_MODAL_API_KEY: str | None = None
# Audio Padding (Modal.com backend)
# Audio Padding
# backends:
# - local: in-process PyAV padding (no HTTP, runs in same process)
# - modal: HTTP API client (works with Modal.com OR self-hosted gpu/self_hosted/)
PADDING_BACKEND: str = "local"
PADDING_URL: str | None = None
PADDING_MODAL_API_KEY: str | None = None


@@ -17,6 +17,49 @@ def get_transcripts_storage() -> Storage:
)
def get_source_storage(platform: str) -> Storage:
"""Get storage for reading/deleting source recording files from the platform's bucket.
Returns an AwsStorage configured with the platform's worker credentials
(access keys), or falls back to get_transcripts_storage() when platform-specific
credentials aren't configured (e.g., single-bucket setups).
Args:
platform: Recording platform name ("daily", "whereby", or other).
"""
if platform == "daily":
if (
settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID
and settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY
and settings.DAILYCO_STORAGE_AWS_BUCKET_NAME
):
from reflector.storage.storage_aws import AwsStorage
return AwsStorage(
aws_bucket_name=settings.DAILYCO_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.DAILYCO_STORAGE_AWS_REGION or "us-east-1",
aws_access_key_id=settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY,
)
elif platform == "whereby":
if (
settings.WHEREBY_STORAGE_AWS_ACCESS_KEY_ID
and settings.WHEREBY_STORAGE_AWS_SECRET_ACCESS_KEY
and settings.WHEREBY_STORAGE_AWS_BUCKET_NAME
):
from reflector.storage.storage_aws import AwsStorage
return AwsStorage(
aws_bucket_name=settings.WHEREBY_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.WHEREBY_STORAGE_AWS_REGION or "us-east-1",
aws_access_key_id=settings.WHEREBY_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.WHEREBY_STORAGE_AWS_SECRET_ACCESS_KEY,
)
return get_transcripts_storage()
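The gating logic above reduces to: use platform-specific keys only when every required setting is present, otherwise fall back to transcript storage. A standalone sketch, where the setting names and storage factories are illustrative callables rather than reflector APIs:

```python
def pick_storage(platform, settings, make_platform_storage, make_transcripts_storage):
    # All listed settings must be non-empty for the platform credentials
    # to be used (setting names here are illustrative).
    required = {
        "daily": ("DAILY_KEY_ID", "DAILY_SECRET", "DAILY_BUCKET"),
        "whereby": ("WHEREBY_KEY_ID", "WHEREBY_SECRET", "WHEREBY_BUCKET"),
    }.get(platform)
    if required and all(settings.get(k) for k in required):
        return make_platform_storage(platform)
    # Unknown platform or incomplete credentials: single-bucket fallback.
    return make_transcripts_storage()
```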
def get_whereby_storage() -> Storage:
"""
Get storage config for Whereby (for passing to Whereby API).
@@ -47,6 +90,9 @@ def get_dailyco_storage() -> Storage:
"""
Get storage config for Daily.co (for passing to Daily API).
Uses role_arn only — access keys are excluded because they're for
worker reads (get_source_storage), not for the Daily API.
Usage:
daily_storage = get_dailyco_storage()
daily_api.create_meeting(
@@ -57,13 +103,15 @@ def get_dailyco_storage() -> Storage:
Do NOT use for our file operations - use get_transcripts_storage() instead.
"""
# Fail fast if platform-specific config missing
if not settings.DAILYCO_STORAGE_AWS_BUCKET_NAME:
raise ValueError(
"DAILYCO_STORAGE_AWS_BUCKET_NAME required for Daily.co with AWS storage"
)
return Storage.get_instance(
name="aws",
settings_prefix="DAILYCO_STORAGE_",
from reflector.storage.storage_aws import AwsStorage
return AwsStorage(
aws_bucket_name=settings.DAILYCO_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.DAILYCO_STORAGE_AWS_REGION or "us-east-1",
aws_role_arn=settings.DAILYCO_STORAGE_AWS_ROLE_ARN,
)
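The reason access keys are excluded here is that a single storage config should carry either a role ARN or static keys, never both; the tests later in this diff guard exactly this ("cannot use both aws_role_arn and access keys"). A sketch of that rule (`validate_auth` is a hypothetical helper, not a reflector function):

```python
def validate_auth(role_arn=None, access_key_id=None, secret_access_key=None):
    # Mixing role assumption with static keys is ambiguous: reject it.
    if role_arn and (access_key_id or secret_access_key):
        raise ValueError("cannot use both aws_role_arn and access keys")
    if role_arn:
        return "role"
    if access_key_id and secret_access_key:
        return "keys"
    # Neither supplied: defer to the environment's credential chain.
    return "default-chain"
```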


@@ -17,6 +17,7 @@ from typing import Callable
from celery.result import AsyncResult
from hatchet_sdk.clients.rest.models import V1TaskStatus
import reflector._warnings_filter # noqa: F401 -- side effect: suppress pydantic validate_default warning
from reflector.db import get_database
from reflector.db.transcripts import Transcript, transcripts_controller
from reflector.hatchet.client import HatchetClientManager


@@ -1,10 +1,10 @@
from datetime import datetime, timedelta, timezone
from typing import Annotated, Literal, Optional, assert_never
import jwt
from fastapi import APIRouter, Depends, HTTPException, Query
from fastapi_pagination import Page
from fastapi_pagination.ext.databases import apaginate
from jose import jwt
from pydantic import (
AwareDatetime,
BaseModel,


@@ -7,8 +7,8 @@ Transcripts audio related endpoints
from typing import Annotated, Optional
import httpx
import jwt
from fastapi import APIRouter, Depends, HTTPException, Request, Response, status
from jose import jwt
import reflector.auth as auth
from reflector.db.transcripts import AudioWaveform, transcripts_controller
@@ -44,7 +44,7 @@ async def transcript_get_audio_mp3(
try:
payload = jwt.decode(token, settings.SECRET_KEY, algorithms=[ALGORITHM])
user_id: str = payload.get("sub")
except jwt.JWTError:
except jwt.PyJWTError:
raise unauthorized_exception
transcript = await transcripts_controller.get_by_id_for_http(
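The jose-to-PyJWT switch above keeps the decode call shape but changes the exception base class from `jose.JWTError` to `jwt.PyJWTError`. A minimal PyJWT round-trip (the secret is illustrative, standing in for `settings.SECRET_KEY`):

```python
import jwt  # PyJWT

SECRET = "illustrative-secret"  # stand-in for settings.SECRET_KEY
ALGORITHM = "HS256"

token = jwt.encode({"sub": "user-123"}, SECRET, algorithm=ALGORITHM)
payload = jwt.decode(token, SECRET, algorithms=[ALGORITHM])
user_id = payload.get("sub")

# Tampered tokens raise a subclass of jwt.PyJWTError (not jose's JWTError).
try:
    jwt.decode(token + "tamper", SECRET, algorithms=[ALGORITHM])
    rejected = False
except jwt.PyJWTError:
    rejected = True
```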


@@ -24,6 +24,118 @@ RECONCILIATION_INTERVAL = _override or 30.0
ICS_SYNC_INTERVAL = _override or 60.0
UPCOMING_MEETINGS_INTERVAL = _override or 30.0
def build_beat_schedule(
*,
whereby_api_key=None,
aws_process_recording_queue_url=None,
daily_api_key=None,
public_mode=False,
public_data_retention_days=None,
healthcheck_url=None,
):
"""Build the Celery beat schedule based on configured services.
Only registers tasks for services that are actually configured,
    avoiding unnecessary worker wake-ups in self-hosted deployments.
"""
beat_schedule = {}
_whereby_enabled = bool(whereby_api_key) or bool(aws_process_recording_queue_url)
if _whereby_enabled:
beat_schedule["process_messages"] = {
"task": "reflector.worker.process.process_messages",
"schedule": SQS_POLL_INTERVAL,
}
beat_schedule["reprocess_failed_recordings"] = {
"task": "reflector.worker.process.reprocess_failed_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
}
logger.info(
"Whereby beat tasks enabled",
tasks=["process_messages", "reprocess_failed_recordings"],
)
else:
logger.info("Whereby beat tasks disabled (no WHEREBY_API_KEY or SQS URL)")
_daily_enabled = bool(daily_api_key)
if _daily_enabled:
beat_schedule["poll_daily_recordings"] = {
"task": "reflector.worker.process.poll_daily_recordings",
"schedule": POLL_DAILY_RECORDINGS_INTERVAL_SEC,
}
beat_schedule["trigger_daily_reconciliation"] = {
"task": "reflector.worker.process.trigger_daily_reconciliation",
"schedule": RECONCILIATION_INTERVAL,
}
beat_schedule["reprocess_failed_daily_recordings"] = {
"task": "reflector.worker.process.reprocess_failed_daily_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
}
logger.info(
"Daily.co beat tasks enabled",
tasks=[
"poll_daily_recordings",
"trigger_daily_reconciliation",
"reprocess_failed_daily_recordings",
],
)
else:
logger.info("Daily.co beat tasks disabled (no DAILY_API_KEY)")
_any_platform = _whereby_enabled or _daily_enabled
if _any_platform:
beat_schedule["process_meetings"] = {
"task": "reflector.worker.process.process_meetings",
"schedule": SQS_POLL_INTERVAL,
}
beat_schedule["sync_all_ics_calendars"] = {
"task": "reflector.worker.ics_sync.sync_all_ics_calendars",
"schedule": ICS_SYNC_INTERVAL,
}
beat_schedule["create_upcoming_meetings"] = {
"task": "reflector.worker.ics_sync.create_upcoming_meetings",
"schedule": UPCOMING_MEETINGS_INTERVAL,
}
logger.info(
"Platform tasks enabled",
tasks=[
"process_meetings",
"sync_all_ics_calendars",
"create_upcoming_meetings",
],
)
else:
logger.info("Platform tasks disabled (no video platform configured)")
if public_mode:
beat_schedule["cleanup_old_public_data"] = {
"task": "reflector.worker.cleanup.cleanup_old_public_data_task",
"schedule": crontab(hour=3, minute=0),
}
logger.info(
"Public mode cleanup enabled",
retention_days=public_data_retention_days,
)
if healthcheck_url:
beat_schedule["healthcheck_ping"] = {
"task": "reflector.worker.healthcheck.healthcheck_ping",
"schedule": 60.0 * 10,
}
logger.info("Healthcheck enabled", url=healthcheck_url)
else:
logger.warning("Healthcheck disabled, no url configured")
logger.info(
"Beat schedule configured",
total_tasks=len(beat_schedule),
task_names=sorted(beat_schedule.keys()),
)
return beat_schedule
if celery.current_app.main != "default":
logger.info(f"Celery already configured ({celery.current_app})")
app = celery.current_app
@@ -42,57 +154,11 @@ else:
]
)
# crontab
app.conf.beat_schedule = {
"process_messages": {
"task": "reflector.worker.process.process_messages",
"schedule": SQS_POLL_INTERVAL,
},
"process_meetings": {
"task": "reflector.worker.process.process_meetings",
"schedule": SQS_POLL_INTERVAL,
},
"reprocess_failed_recordings": {
"task": "reflector.worker.process.reprocess_failed_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
},
"reprocess_failed_daily_recordings": {
"task": "reflector.worker.process.reprocess_failed_daily_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
},
"poll_daily_recordings": {
"task": "reflector.worker.process.poll_daily_recordings",
"schedule": POLL_DAILY_RECORDINGS_INTERVAL_SEC,
},
"trigger_daily_reconciliation": {
"task": "reflector.worker.process.trigger_daily_reconciliation",
"schedule": RECONCILIATION_INTERVAL,
},
"sync_all_ics_calendars": {
"task": "reflector.worker.ics_sync.sync_all_ics_calendars",
"schedule": ICS_SYNC_INTERVAL,
},
"create_upcoming_meetings": {
"task": "reflector.worker.ics_sync.create_upcoming_meetings",
"schedule": UPCOMING_MEETINGS_INTERVAL,
},
}
if settings.PUBLIC_MODE:
app.conf.beat_schedule["cleanup_old_public_data"] = {
"task": "reflector.worker.cleanup.cleanup_old_public_data_task",
"schedule": crontab(hour=3, minute=0),
}
logger.info(
"Public mode cleanup enabled",
retention_days=settings.PUBLIC_DATA_RETENTION_DAYS,
)
if settings.HEALTHCHECK_URL:
app.conf.beat_schedule["healthcheck_ping"] = {
"task": "reflector.worker.healthcheck.healthcheck_ping",
"schedule": 60.0 * 10,
}
logger.info("Healthcheck enabled", url=settings.HEALTHCHECK_URL)
else:
logger.warning("Healthcheck disabled, no url configured")
app.conf.beat_schedule = build_beat_schedule(
whereby_api_key=settings.WHEREBY_API_KEY,
aws_process_recording_queue_url=settings.AWS_PROCESS_RECORDING_QUEUE_URL,
daily_api_key=settings.DAILY_API_KEY,
public_mode=settings.PUBLIC_MODE,
public_data_retention_days=settings.PUBLIC_DATA_RETENTION_DAYS,
healthcheck_url=settings.HEALTHCHECK_URL,
)


@@ -357,6 +357,7 @@ async def _process_multitrack_recording_inner(
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id,
"source_platform": "daily",
},
additional_metadata={
"transcript_id": transcript.id,
@@ -1068,6 +1069,7 @@ async def reprocess_failed_daily_recordings():
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id if room else None,
"source_platform": "daily",
},
additional_metadata={
"transcript_id": transcript.id,


@@ -1,8 +1,8 @@
"""Tests for the password auth backend."""
import jwt
import pytest
from httpx import AsyncClient
from jose import jwt
from reflector.auth.password_utils import hash_password
from reflector.settings import settings


@@ -0,0 +1,247 @@
"""Tests for conditional Celery beat schedule registration.
Verifies that beat tasks are only registered when their corresponding
services are configured (WHEREBY_API_KEY, DAILY_API_KEY, etc.).
"""
import pytest
from reflector.worker.app import build_beat_schedule
# Override autouse fixtures from conftest — these tests don't need database or websockets
@pytest.fixture(autouse=True)
def setup_database():
yield
@pytest.fixture(autouse=True)
def ws_manager_in_memory():
yield
@pytest.fixture(autouse=True)
def reset_hatchet_client():
yield
# Task name sets for each group
WHEREBY_TASKS = {"process_messages", "reprocess_failed_recordings"}
DAILY_TASKS = {
"poll_daily_recordings",
"trigger_daily_reconciliation",
"reprocess_failed_daily_recordings",
}
PLATFORM_TASKS = {
"process_meetings",
"sync_all_ics_calendars",
"create_upcoming_meetings",
}
class TestNoPlatformConfigured:
"""When no video platform is configured, no platform tasks should be registered."""
def test_no_platform_tasks(self):
schedule = build_beat_schedule()
task_names = set(schedule.keys())
assert not task_names & WHEREBY_TASKS
assert not task_names & DAILY_TASKS
assert not task_names & PLATFORM_TASKS
def test_only_healthcheck_disabled_warning(self):
"""With no config at all, schedule should be empty (healthcheck needs URL)."""
schedule = build_beat_schedule()
assert len(schedule) == 0
def test_healthcheck_only(self):
schedule = build_beat_schedule(healthcheck_url="https://hc.example.com/ping")
assert set(schedule.keys()) == {"healthcheck_ping"}
def test_public_mode_only(self):
schedule = build_beat_schedule(public_mode=True)
assert set(schedule.keys()) == {"cleanup_old_public_data"}
class TestWherebyOnly:
"""When only Whereby is configured."""
def test_whereby_api_key(self):
schedule = build_beat_schedule(whereby_api_key="test-key")
task_names = set(schedule.keys())
assert WHEREBY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & DAILY_TASKS
def test_whereby_sqs_url(self):
schedule = build_beat_schedule(
aws_process_recording_queue_url="https://sqs.us-east-1.amazonaws.com/123/queue"
)
task_names = set(schedule.keys())
assert WHEREBY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & DAILY_TASKS
def test_whereby_task_count(self):
schedule = build_beat_schedule(whereby_api_key="test-key")
# Whereby (2) + Platform (3) = 5
assert len(schedule) == 5
class TestDailyOnly:
"""When only Daily.co is configured."""
def test_daily_api_key(self):
schedule = build_beat_schedule(daily_api_key="test-daily-key")
task_names = set(schedule.keys())
assert DAILY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & WHEREBY_TASKS
def test_daily_task_count(self):
schedule = build_beat_schedule(daily_api_key="test-daily-key")
# Daily (3) + Platform (3) = 6
assert len(schedule) == 6
class TestBothPlatforms:
"""When both Whereby and Daily.co are configured."""
def test_all_tasks_registered(self):
schedule = build_beat_schedule(
whereby_api_key="test-key",
daily_api_key="test-daily-key",
)
task_names = set(schedule.keys())
assert WHEREBY_TASKS <= task_names
assert DAILY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
def test_combined_task_count(self):
schedule = build_beat_schedule(
whereby_api_key="test-key",
daily_api_key="test-daily-key",
)
# Whereby (2) + Daily (3) + Platform (3) = 8
assert len(schedule) == 8
class TestConditionalFlags:
"""Test PUBLIC_MODE and HEALTHCHECK_URL interact correctly with platform tasks."""
def test_all_flags_enabled(self):
schedule = build_beat_schedule(
whereby_api_key="test-key",
daily_api_key="test-daily-key",
public_mode=True,
healthcheck_url="https://hc.example.com/ping",
)
task_names = set(schedule.keys())
assert "cleanup_old_public_data" in task_names
assert "healthcheck_ping" in task_names
assert WHEREBY_TASKS <= task_names
assert DAILY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
# Whereby (2) + Daily (3) + Platform (3) + cleanup (1) + healthcheck (1) = 10
assert len(schedule) == 10
def test_public_mode_with_whereby(self):
schedule = build_beat_schedule(
whereby_api_key="test-key",
public_mode=True,
)
task_names = set(schedule.keys())
assert "cleanup_old_public_data" in task_names
assert WHEREBY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
def test_healthcheck_with_daily(self):
schedule = build_beat_schedule(
daily_api_key="test-daily-key",
healthcheck_url="https://hc.example.com/ping",
)
task_names = set(schedule.keys())
assert "healthcheck_ping" in task_names
assert DAILY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
class TestTaskDefinitions:
"""Verify task definitions have correct structure."""
def test_whereby_task_paths(self):
schedule = build_beat_schedule(whereby_api_key="test-key")
assert (
schedule["process_messages"]["task"]
== "reflector.worker.process.process_messages"
)
assert (
schedule["reprocess_failed_recordings"]["task"]
== "reflector.worker.process.reprocess_failed_recordings"
)
def test_daily_task_paths(self):
schedule = build_beat_schedule(daily_api_key="test-daily-key")
assert (
schedule["poll_daily_recordings"]["task"]
== "reflector.worker.process.poll_daily_recordings"
)
assert (
schedule["trigger_daily_reconciliation"]["task"]
== "reflector.worker.process.trigger_daily_reconciliation"
)
assert (
schedule["reprocess_failed_daily_recordings"]["task"]
== "reflector.worker.process.reprocess_failed_daily_recordings"
)
def test_platform_task_paths(self):
schedule = build_beat_schedule(daily_api_key="test-daily-key")
assert (
schedule["process_meetings"]["task"]
== "reflector.worker.process.process_meetings"
)
assert (
schedule["sync_all_ics_calendars"]["task"]
== "reflector.worker.ics_sync.sync_all_ics_calendars"
)
assert (
schedule["create_upcoming_meetings"]["task"]
== "reflector.worker.ics_sync.create_upcoming_meetings"
)
def test_all_tasks_have_schedule(self):
"""Every registered task must have a 'schedule' key."""
schedule = build_beat_schedule(
whereby_api_key="test-key",
daily_api_key="test-daily-key",
public_mode=True,
healthcheck_url="https://hc.example.com/ping",
)
for name, config in schedule.items():
assert "schedule" in config, f"Task '{name}' missing 'schedule' key"
assert "task" in config, f"Task '{name}' missing 'task' key"
class TestEmptyStringValues:
"""Empty strings should be treated as not configured (falsy)."""
def test_empty_whereby_key(self):
schedule = build_beat_schedule(whereby_api_key="")
assert not set(schedule.keys()) & WHEREBY_TASKS
def test_empty_daily_key(self):
schedule = build_beat_schedule(daily_api_key="")
assert not set(schedule.keys()) & DAILY_TASKS
def test_empty_sqs_url(self):
schedule = build_beat_schedule(aws_process_recording_queue_url="")
assert not set(schedule.keys()) & WHEREBY_TASKS
def test_none_values(self):
schedule = build_beat_schedule(
whereby_api_key=None,
daily_api_key=None,
aws_process_recording_queue_url=None,
)
assert len(schedule) == 0


@@ -367,3 +367,390 @@ async def test_aws_storage_none_endpoint_url():
assert storage.base_url == "https://reflector-bucket.s3.amazonaws.com/"
# No s3 addressing_style override — boto_config should only have retries
assert not hasattr(storage.boto_config, "s3") or storage.boto_config.s3 is None
# --- Tests for get_source_storage() ---
def test_get_source_storage_daily_with_credentials():
"""Daily platform with access keys returns AwsStorage with Daily credentials."""
with patch("reflector.storage.settings") as mock_settings:
mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = "daily-key"
mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = "daily-secret"
mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-west-2"
from reflector.storage import get_source_storage
storage = get_source_storage("daily")
assert isinstance(storage, AwsStorage)
assert storage._bucket_name == "daily-bucket"
assert storage._region == "us-west-2"
assert storage._access_key_id == "daily-key"
assert storage._secret_access_key == "daily-secret"
assert storage._endpoint_url is None
def test_get_source_storage_daily_falls_back_without_credentials():
"""Daily platform without access keys falls back to transcript storage."""
with patch("reflector.storage.settings") as mock_settings:
mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = None
mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = None
mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
mock_settings.TRANSCRIPT_STORAGE_BACKEND = "aws"
mock_settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME = "transcript-bucket"
mock_settings.TRANSCRIPT_STORAGE_AWS_REGION = "us-east-1"
mock_settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID = "transcript-key"
mock_settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY = "transcript-secret"
mock_settings.TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL = None
from reflector.storage import get_source_storage
with patch("reflector.storage.get_transcripts_storage") as mock_get_transcripts:
fallback = AwsStorage(
aws_bucket_name="transcript-bucket",
aws_region="us-east-1",
aws_access_key_id="transcript-key",
aws_secret_access_key="transcript-secret",
)
mock_get_transcripts.return_value = fallback
storage = get_source_storage("daily")
mock_get_transcripts.assert_called_once()
assert storage is fallback
def test_get_source_storage_whereby_with_credentials():
"""Whereby platform with access keys returns AwsStorage with Whereby credentials."""
with patch("reflector.storage.settings") as mock_settings:
mock_settings.WHEREBY_STORAGE_AWS_ACCESS_KEY_ID = "whereby-key"
mock_settings.WHEREBY_STORAGE_AWS_SECRET_ACCESS_KEY = "whereby-secret"
mock_settings.WHEREBY_STORAGE_AWS_BUCKET_NAME = "whereby-bucket"
mock_settings.WHEREBY_STORAGE_AWS_REGION = "eu-west-1"
from reflector.storage import get_source_storage
storage = get_source_storage("whereby")
assert isinstance(storage, AwsStorage)
assert storage._bucket_name == "whereby-bucket"
assert storage._region == "eu-west-1"
assert storage._access_key_id == "whereby-key"
assert storage._secret_access_key == "whereby-secret"
def test_get_source_storage_unknown_platform_falls_back():
"""Unknown platform falls back to transcript storage."""
with patch("reflector.storage.settings"):
from reflector.storage import get_source_storage
with patch("reflector.storage.get_transcripts_storage") as mock_get_transcripts:
fallback = MagicMock()
mock_get_transcripts.return_value = fallback
storage = get_source_storage("unknown-platform")
mock_get_transcripts.assert_called_once()
assert storage is fallback
@pytest.mark.asyncio
async def test_source_storage_presigns_for_correct_bucket():
"""Source storage presigns URLs using the platform's credentials and the override bucket."""
with patch("reflector.storage.settings") as mock_settings:
mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = "daily-key"
mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = "daily-secret"
mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-west-2"
from reflector.storage import get_source_storage
storage = get_source_storage("daily")
mock_client = AsyncMock()
mock_client.generate_presigned_url = AsyncMock(
return_value="https://daily-bucket.s3.amazonaws.com/track.webm?signed"
)
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
mock_client.__aexit__ = AsyncMock(return_value=None)
with patch.object(storage.session, "client", return_value=mock_client):
url = await storage.get_file_url(
"track.webm",
operation="get_object",
expires_in=3600,
bucket="override-bucket",
)
assert "track.webm" in url
mock_client.generate_presigned_url.assert_called_once()
call_kwargs = mock_client.generate_presigned_url.call_args
params = call_kwargs[1].get("Params") or call_kwargs[0][1]
assert params["Bucket"] == "override-bucket"
assert params["Key"] == "track.webm"


def test_get_source_storage_daily_default_region():
    """Daily platform without region falls back to us-east-1."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = "daily-key"
        mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = "daily-secret"
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
        mock_settings.DAILYCO_STORAGE_AWS_REGION = None

        from reflector.storage import get_source_storage

        storage = get_source_storage("daily")

        assert isinstance(storage, AwsStorage)
        assert storage._region == "us-east-1"


# --- Tests for get_dailyco_storage() ---


def test_get_dailyco_storage_with_role_arn():
    """get_dailyco_storage returns AwsStorage with role_arn for Daily API."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
        mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-west-2"
        mock_settings.DAILYCO_STORAGE_AWS_ROLE_ARN = "arn:aws:iam::123:role/DailyAccess"

        from reflector.storage import get_dailyco_storage

        storage = get_dailyco_storage()

        assert isinstance(storage, AwsStorage)
        assert storage._bucket_name == "daily-bucket"
        assert storage._region == "us-west-2"
        assert storage._role_arn == "arn:aws:iam::123:role/DailyAccess"
        assert storage._access_key_id is None
        assert storage._secret_access_key is None


def test_get_dailyco_storage_no_conflict_when_access_keys_also_set():
    """get_dailyco_storage ignores access keys even when set (avoids mixed-auth error).

    This is the key regression test: DAILYCO_STORAGE_AWS_ACCESS_KEY_ID and
    SECRET_ACCESS_KEY are for get_source_storage(), not for get_dailyco_storage().
    """
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
        mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-west-2"
        mock_settings.DAILYCO_STORAGE_AWS_ROLE_ARN = "arn:aws:iam::123:role/DailyAccess"
        # These are set for get_source_storage but must NOT leak into get_dailyco_storage
        mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = "AKIA-worker-key"
        mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = "worker-secret"

        from reflector.storage import get_dailyco_storage

        # Must NOT raise "cannot use both aws_role_arn and access keys"
        storage = get_dailyco_storage()

        assert isinstance(storage, AwsStorage)
        assert storage._role_arn == "arn:aws:iam::123:role/DailyAccess"
        assert storage._access_key_id is None
        assert storage._secret_access_key is None


def test_get_dailyco_storage_default_region():
    """get_dailyco_storage falls back to us-east-1 when region is None."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
        mock_settings.DAILYCO_STORAGE_AWS_REGION = None
        mock_settings.DAILYCO_STORAGE_AWS_ROLE_ARN = "arn:aws:iam::123:role/DailyAccess"

        from reflector.storage import get_dailyco_storage

        storage = get_dailyco_storage()

        assert storage._region == "us-east-1"


def test_get_dailyco_storage_raises_without_bucket():
    """get_dailyco_storage raises ValueError when bucket is not configured."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = None

        from reflector.storage import get_dailyco_storage

        with pytest.raises(
            ValueError, match="DAILYCO_STORAGE_AWS_BUCKET_NAME required"
        ):
            get_dailyco_storage()


def test_get_dailyco_storage_exposes_role_credential():
    """get_dailyco_storage().role_credential returns the role ARN."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-bucket"
        mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-east-1"
        mock_settings.DAILYCO_STORAGE_AWS_ROLE_ARN = "arn:aws:iam::123:role/DailyAccess"

        from reflector.storage import get_dailyco_storage

        storage = get_dailyco_storage()

        assert storage.role_credential == "arn:aws:iam::123:role/DailyAccess"
        assert storage.bucket_name == "daily-bucket"
        assert storage.region == "us-east-1"


# --- Tests for get_whereby_storage() ---


def test_get_whereby_storage_with_access_keys():
    """get_whereby_storage returns AwsStorage with Whereby access keys."""
    whereby_settings = [
        ("WHEREBY_STORAGE_AWS_BUCKET_NAME", "whereby-bucket"),
        ("WHEREBY_STORAGE_AWS_REGION", "eu-west-1"),
        ("WHEREBY_STORAGE_AWS_ACCESS_KEY_ID", "whereby-key"),
        ("WHEREBY_STORAGE_AWS_SECRET_ACCESS_KEY", "whereby-secret"),
    ]
    mock_settings = MagicMock()
    mock_settings.WHEREBY_STORAGE_AWS_BUCKET_NAME = "whereby-bucket"
    mock_settings.__iter__ = MagicMock(return_value=iter(whereby_settings))

    # Patch both settings references: __init__.py and base.py
    with (
        patch("reflector.storage.settings", mock_settings),
        patch("reflector.storage.base.settings", mock_settings),
    ):
        from reflector.storage import get_whereby_storage

        storage = get_whereby_storage()

        assert isinstance(storage, AwsStorage)
        assert storage._bucket_name == "whereby-bucket"
        assert storage._region == "eu-west-1"
        assert storage._access_key_id == "whereby-key"
        assert storage._secret_access_key == "whereby-secret"


def test_get_whereby_storage_raises_without_bucket():
    """get_whereby_storage raises ValueError when bucket is not configured."""
    with patch("reflector.storage.settings") as mock_settings:
        mock_settings.WHEREBY_STORAGE_AWS_BUCKET_NAME = None

        from reflector.storage import get_whereby_storage

        with pytest.raises(
            ValueError, match="WHEREBY_STORAGE_AWS_BUCKET_NAME required"
        ):
            get_whereby_storage()


# --- Tests for get_transcripts_storage() ---


def test_get_transcripts_storage_with_garage():
    """get_transcripts_storage returns AwsStorage configured for Garage (custom endpoint)."""
    garage_settings = [
        ("TRANSCRIPT_STORAGE_AWS_BUCKET_NAME", "reflector-media"),
        ("TRANSCRIPT_STORAGE_AWS_REGION", "garage"),
        ("TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID", "GK-garage-key"),
        ("TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY", "garage-secret"),
        ("TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL", "http://garage:3900"),
    ]
    mock_settings = MagicMock()
    mock_settings.TRANSCRIPT_STORAGE_BACKEND = "aws"
    mock_settings.__iter__ = MagicMock(return_value=iter(garage_settings))

    with (
        patch("reflector.storage.settings", mock_settings),
        patch("reflector.storage.base.settings", mock_settings),
    ):
        from reflector.storage import get_transcripts_storage

        storage = get_transcripts_storage()

        assert isinstance(storage, AwsStorage)
        assert storage._bucket_name == "reflector-media"
        assert storage._endpoint_url == "http://garage:3900"
        assert storage._access_key_id == "GK-garage-key"
        assert storage.boto_config.s3["addressing_style"] == "path"


def test_get_transcripts_storage_with_vanilla_aws():
    """get_transcripts_storage returns AwsStorage configured for real AWS S3."""
    aws_settings = [
        ("TRANSCRIPT_STORAGE_AWS_BUCKET_NAME", "prod-transcripts"),
        ("TRANSCRIPT_STORAGE_AWS_REGION", "us-east-1"),
        ("TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID", "AKIA-prod-key"),
        ("TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY", "prod-secret"),
    ]
    mock_settings = MagicMock()
    mock_settings.TRANSCRIPT_STORAGE_BACKEND = "aws"
    mock_settings.__iter__ = MagicMock(return_value=iter(aws_settings))

    with (
        patch("reflector.storage.settings", mock_settings),
        patch("reflector.storage.base.settings", mock_settings),
    ):
        from reflector.storage import get_transcripts_storage

        storage = get_transcripts_storage()

        assert isinstance(storage, AwsStorage)
        assert storage._bucket_name == "prod-transcripts"
        assert storage._endpoint_url is None
        assert storage._access_key_id == "AKIA-prod-key"


# --- Tests for coexistence (selfhosted scenario) ---


def test_all_factories_coexist_selfhosted_scenario():
    """All storage factories work simultaneously with selfhosted config.

    Simulates the real selfhosted setup:
    - Transcript storage → Garage (http://garage:3900)
    - Daily API storage → role_arn (for Daily to write recordings)
    - Source storage → access keys (for workers to read Daily's S3 bucket)
    """
    transcript_settings = [
        ("TRANSCRIPT_STORAGE_AWS_BUCKET_NAME", "reflector-media"),
        ("TRANSCRIPT_STORAGE_AWS_REGION", "garage"),
        ("TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID", "GK-garage-key"),
        ("TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY", "garage-secret"),
        ("TRANSCRIPT_STORAGE_AWS_ENDPOINT_URL", "http://garage:3900"),
    ]
    mock_settings = MagicMock()
    # Transcript storage: Garage
    mock_settings.TRANSCRIPT_STORAGE_BACKEND = "aws"
    mock_settings.__iter__ = MagicMock(return_value=iter(transcript_settings))
    # Daily.co: both role_arn AND access keys configured
    mock_settings.DAILYCO_STORAGE_AWS_BUCKET_NAME = "daily-recordings"
    mock_settings.DAILYCO_STORAGE_AWS_REGION = "us-west-2"
    mock_settings.DAILYCO_STORAGE_AWS_ROLE_ARN = "arn:aws:iam::123:role/DailyAccess"
    mock_settings.DAILYCO_STORAGE_AWS_ACCESS_KEY_ID = "AKIA-daily-worker"
    mock_settings.DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY = "daily-worker-secret"

    with (
        patch("reflector.storage.settings", mock_settings),
        patch("reflector.storage.base.settings", mock_settings),
    ):
        from reflector.storage import (
            get_dailyco_storage,
            get_source_storage,
            get_transcripts_storage,
        )

        # 1. Transcript storage → Garage
        transcript_storage = get_transcripts_storage()
        assert transcript_storage._endpoint_url == "http://garage:3900"
        assert transcript_storage._access_key_id == "GK-garage-key"

        # 2. Daily API storage → role_arn only (no access keys)
        daily_api_storage = get_dailyco_storage()
        assert daily_api_storage._role_arn == "arn:aws:iam::123:role/DailyAccess"
        assert daily_api_storage._access_key_id is None

        # 3. Source storage → access keys only (no role_arn)
        source_storage = get_source_storage("daily")
        assert source_storage._access_key_id == "AKIA-daily-worker"
        assert source_storage._role_arn is None
        assert source_storage._endpoint_url is None
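The factory tests above all guard the same invariant: a storage object may authenticate with a role ARN or with static access keys, but never both. The sketch below illustrates that rule in isolation; `AwsStorageSketch` and its parameter names are hypothetical stand-ins, not the real `reflector.storage.AwsStorage` API.

```python
# Hypothetical sketch of the mixed-auth invariant guarded by the tests above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AwsStorageSketch:
    bucket_name: str
    region: str = "us-east-1"
    role_arn: Optional[str] = None
    access_key_id: Optional[str] = None
    secret_access_key: Optional[str] = None

    def __post_init__(self) -> None:
        # Exactly one auth mode: either assume a role, or use static keys.
        if self.role_arn and (self.access_key_id or self.secret_access_key):
            raise ValueError("cannot use both aws_role_arn and access keys")


# Role-based auth alone is fine...
AwsStorageSketch(
    bucket_name="daily-bucket",
    role_arn="arn:aws:iam::123:role/DailyAccess",
)

# ...but mixing both modes fails fast at construction time.
try:
    AwsStorageSketch(
        bucket_name="daily-bucket",
        role_arn="arn:aws:iam::123:role/DailyAccess",
        access_key_id="AKIA-worker-key",
    )
except ValueError as exc:
    print(exc)  # cannot use both aws_role_arn and access keys
```

This is why the regression test insists that `get_dailyco_storage()` drop the access keys: with both settings populated, forwarding everything to the constructor would raise.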


@@ -67,7 +67,7 @@ def appserver_ws_user(setup_database):
@pytest.fixture(autouse=True)
def patch_jwt_verification(monkeypatch):
"""Patch JWT verification to accept HS256 tokens signed with SECRET_KEY for tests."""
from jose import jwt
import jwt
from reflector.settings import settings
@@ -84,7 +84,7 @@ def _make_dummy_jwt(sub: str = "user123") -> str:
# Create a short HS256 JWT using the app secret to pass verification in tests
from datetime import datetime, timedelta, timezone
from jose import jwt
import jwt
from reflector.settings import settings
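The two hunks above swap `from jose import jwt` (python-jose) for `import jwt` (PyJWT) in the test fixtures. For HS256 the round-trip API is nearly identical; a minimal sketch, with an illustrative secret and claims:

```python
# Minimal HS256 round-trip with PyJWT, mirroring the python-jose -> pyjwt
# swap above. SECRET_KEY and the claims are illustrative stand-ins.
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

SECRET_KEY = "test-secret"  # stand-in for settings.SECRET_KEY

claims = {
    "sub": "user123",
    # PyJWT converts datetime exp claims to Unix timestamps automatically
    "exp": datetime.now(timezone.utc) + timedelta(minutes=5),
}

# PyJWT 2.x returns the encoded token as str, as python-jose did.
token = jwt.encode(claims, SECRET_KEY, algorithm="HS256")

# decode() verifies the signature and exp; unlike python-jose, the accepted
# algorithms must be listed explicitly.
decoded = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
print(decoded["sub"])  # user123
```

Listing `algorithms` explicitly on decode is the notable difference: PyJWT refuses to infer the algorithm from the token header, which closes the classic `alg`-confusion attack.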

server/uv.lock generated

@@ -861,18 +861,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/26/57c6fb270950d476074c087527a558ccb6f4436657314bfb6cdf484114c4/docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0", size = 147774, upload-time = "2024-05-23T11:13:55.01Z" },
]
[[package]]
name = "ecdsa"
version = "0.19.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c0/1f/924e3caae75f471eae4b26bd13b698f6af2c44279f67af317439c2f4c46a/ecdsa-0.19.1.tar.gz", hash = "sha256:478cba7b62555866fcb3bb3fe985e06decbdb68ef55713c4e5ab98c57d508e61", size = 201793, upload-time = "2025-03-13T11:52:43.25Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/a3/460c57f094a4a165c84a1341c373b0a4f5ec6ac244b998d5021aade89b77/ecdsa-0.19.1-py2.py3-none-any.whl", hash = "sha256:30638e27cf77b7e15c4c4cc1973720149e1033827cfd00661ca5c8cc0cdb24c3", size = 150607, upload-time = "2025-03-13T11:52:41.757Z" },
]
[[package]]
name = "email-validator"
version = "2.2.0"
@@ -1195,7 +1183,7 @@ wheels = [
[[package]]
name = "hatchet-sdk"
version = "1.21.6"
version = "1.27.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -1207,11 +1195,12 @@ dependencies = [
{ name = "pydantic-settings" },
{ name = "python-dateutil" },
{ name = "tenacity" },
{ name = "typing-inspection" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7c/df/75dd02e1dc6b99f7151a57f084876c50f739ad4d643b060078f65d51d717/hatchet_sdk-1.21.6.tar.gz", hash = "sha256:b65741324ad721ce57f5fe3f960e2942c4ac2ceec6ca483dd35f84137ff7c46c", size = 219345, upload-time = "2025-12-11T15:04:24.899Z" }
sdist = { url = "https://files.pythonhosted.org/packages/5b/02/e8bcc42654f03af3a39f9319d21fc42ab36abca9514cee275c04b2810186/hatchet_sdk-1.27.0.tar.gz", hash = "sha256:c312a83c8e6c13040cc2512a6ed7e60085af2496587a2dbd5c18a62d84217cb8", size = 246838, upload-time = "2026-02-27T18:21:40.236Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/00/86/e4cd7928bcabd33c634c33d4e878e2454e03f97c87b72947c7ff5762d813/hatchet_sdk-1.21.6-py3-none-any.whl", hash = "sha256:589fba9104a6517e1ba677b9865fa0a20e221863a8c2a2724051198994c11399", size = 529167, upload-time = "2025-12-11T15:04:23.697Z" },
{ url = "https://files.pythonhosted.org/packages/ef/5b/3c2a8b6908a68d42489d903c41fa460cd6d61e07a27252737fcec8d97b31/hatchet_sdk-1.27.0-py3-none-any.whl", hash = "sha256:3cea10e68d3551881588ec941b50f0e383855b191eb79905ee57ee806b08430b", size = 574642, upload-time = "2026-02-27T18:21:37.611Z" },
]
[[package]]
@@ -2240,15 +2229,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/92/29/06261ea000e2dc1e22907dbbc483a1093665509ea586b29b8986a0e56733/psycopg2_binary-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:18c5ee682b9c6dd3696dad6e54cc7ff3a1a9020df6a5c0f861ef8bfd338c3ca0", size = 1164031, upload-time = "2024-10-16T11:21:34.211Z" },
]
[[package]]
name = "pyasn1"
version = "0.6.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/fe/b6/6e630dff89739fcd427e3f72b3d905ce0acb85a45d4ec3e2678718a3487f/pyasn1-0.6.2.tar.gz", hash = "sha256:9b59a2b25ba7e4f8197db7686c09fb33e658b98339fadb826e9512629017833b", size = 146586, upload-time = "2026-01-16T18:04:18.534Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/44/b5/a96872e5184f354da9c84ae119971a0a4c221fe9b27a4d94bd43f2596727/pyasn1-0.6.2-py3-none-any.whl", hash = "sha256:1eb26d860996a18e9b6ed05e7aae0e9fc21619fcee6af91cca9bad4fbea224bf", size = 83371, upload-time = "2026-01-16T18:04:17.174Z" },
]
[[package]]
name = "pycparser"
version = "2.22"
@@ -2405,6 +2385,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/7f/113b16d55e8d2dd9143628eec39b138fd6c52f72dcd11b4dae4a3845da4d/pyinstrument-5.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:88df7e3ab11604ae7cef1f576c097a08752bf8fc13c5755803bd3cd92f15aba3", size = 124314, upload-time = "2025-07-02T14:13:26.708Z" },
]
[[package]]
name = "pyjwt"
version = "2.11.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/5c/5a/b46fa56bf322901eee5b0454a34343cdbdae202cd421775a8ee4e42fd519/pyjwt-2.11.0.tar.gz", hash = "sha256:35f95c1f0fbe5d5ba6e43f00271c275f7a1a4db1dab27bf708073b75318ea623", size = 98019, upload-time = "2026-01-30T19:59:55.694Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6f/01/c26ce75ba460d5cd503da9e13b21a33804d38c2165dec7b716d06b13010c/pyjwt-2.11.0-py3-none-any.whl", hash = "sha256:94a6bde30eb5c8e04fee991062b534071fd1439ef58d2adc9ccb823e7bcd0469", size = 28224, upload-time = "2026-01-30T19:59:54.539Z" },
]
[package.optional-dependencies]
crypto = [
{ name = "cryptography" },
]
[[package]]
name = "pylibsrtp"
version = "0.12.0"
@@ -2442,11 +2436,11 @@ wheels = [
[[package]]
name = "pypdf"
version = "6.7.3"
version = "6.7.5"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/53/9b/63e767042fc852384dc71e5ff6f990ee4e1b165b1526cf3f9c23a4eebb47/pypdf-6.7.3.tar.gz", hash = "sha256:eca55c78d0ec7baa06f9288e2be5c4e8242d5cbb62c7a4b94f2716f8e50076d2", size = 5303304, upload-time = "2026-02-24T17:23:11.42Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f6/52/37cc0aa9e9d1bf7729a737a0d83f8b3f851c8eb137373d9f71eafb0a3405/pypdf-6.7.5.tar.gz", hash = "sha256:40bb2e2e872078655f12b9b89e2f900888bb505e88a82150b64f9f34fa25651d", size = 5304278, upload-time = "2026-03-02T09:05:21.464Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b0/90/3308a9b8b46c1424181fdf3f4580d2b423c5471425799e7fc62f92d183f4/pypdf-6.7.3-py3-none-any.whl", hash = "sha256:cd25ac508f20b554a9fafd825186e3ba29591a69b78c156783c5d8a2d63a1c0a", size = 331263, upload-time = "2026-02-24T17:23:09.932Z" },
{ url = "https://files.pythonhosted.org/packages/05/89/336673efd0a88956562658aba4f0bbef7cb92a6fbcbcaf94926dbc82b408/pypdf-6.7.5-py3-none-any.whl", hash = "sha256:07ba7f1d6e6d9aa2a17f5452e320a84718d4ce863367f7ede2fd72280349ab13", size = 331421, upload-time = "2026-03-02T09:05:19.722Z" },
]
[[package]]
@@ -2619,25 +2613,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc", size = 20556, upload-time = "2025-06-24T04:21:06.073Z" },
]
[[package]]
name = "python-jose"
version = "3.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "ecdsa" },
{ name = "pyasn1" },
{ name = "rsa" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c6/77/3a1c9039db7124eb039772b935f2244fbb73fc8ee65b9acf2375da1c07bf/python_jose-3.5.0.tar.gz", hash = "sha256:fb4eaa44dbeb1c26dcc69e4bd7ec54a1cb8dd64d3b4d81ef08d90ff453f2b01b", size = 92726, upload-time = "2025-05-28T17:31:54.288Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d9/c3/0bd11992072e6a1c513b16500a5d07f91a24017c5909b02c72c62d7ad024/python_jose-3.5.0-py2.py3-none-any.whl", hash = "sha256:abd1202f23d34dfad2c3d28cb8617b90acf34132c7afd60abd0b0b7d3cb55771", size = 34624, upload-time = "2025-05-28T17:31:52.802Z" },
]
[package.optional-dependencies]
cryptography = [
{ name = "cryptography" },
]
[[package]]
name = "python-multipart"
version = "0.0.22"
@@ -2791,8 +2766,8 @@ dependencies = [
{ name = "psycopg2-binary" },
{ name = "pydantic" },
{ name = "pydantic-settings" },
{ name = "pyjwt", extra = ["crypto"] },
{ name = "pytest-env" },
{ name = "python-jose", extra = ["cryptography"] },
{ name = "python-multipart" },
{ name = "redis" },
{ name = "requests" },
@@ -2867,8 +2842,8 @@ requires-dist = [
{ name = "psycopg2-binary", specifier = ">=2.9.10" },
{ name = "pydantic", specifier = ">=2.12.5" },
{ name = "pydantic-settings", specifier = ">=2.0.2" },
{ name = "pyjwt", extras = ["crypto"], specifier = ">=2.8.0" },
{ name = "pytest-env", specifier = ">=1.1.5" },
{ name = "python-jose", extras = ["cryptography"], specifier = ">=3.3.0" },
{ name = "python-multipart", specifier = ">=0.0.6" },
{ name = "redis", specifier = ">=5.0.1" },
{ name = "requests", specifier = ">=2.31.0" },
@@ -3087,18 +3062,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c8/ed/9de62c2150ca8e2e5858acf3f4f4d0d180a38feef9fdab4078bea63d8dba/rpds_py-0.26.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:e99685fc95d386da368013e7fb4269dd39c30d99f812a8372d62f244f662709c", size = 555334, upload-time = "2025-07-01T15:56:51.703Z" },
]
[[package]]
name = "rsa"
version = "4.9.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyasn1" },
]
sdist = { url = "https://files.pythonhosted.org/packages/da/8a/22b7beea3ee0d44b1916c0c1cb0ee3af23b700b6da9f04991899d0c555d4/rsa-4.9.1.tar.gz", hash = "sha256:e7bdbfdb5497da4c07dfd35530e1a902659db6ff241e39d9953cad06ebd0ae75", size = 29034, upload-time = "2025-04-16T09:51:18.218Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696, upload-time = "2025-04-16T09:51:17.142Z" },
]
[[package]]
name = "s3transfer"
version = "0.13.0"

www/.gitignore vendored

@@ -46,3 +46,4 @@ openapi-ts-error-*.log
# pnpm
.pnpm-store
/v10


@@ -1,5 +1,6 @@
import { FontAwesomeIcon } from "@fortawesome/react-fontawesome";
import { faClose } from "@fortawesome/free-solid-svg-icons";
import type { JSX } from "react";
import { MouseEventHandler } from "react";
type ModalProps = {


@@ -1,3 +1,5 @@
"use client";
import React from "react";
import { Box, Stack, Link, Heading } from "@chakra-ui/react";
import NextLink from "next/link";


@@ -1,3 +1,5 @@
"use client";
import React, { useState } from "react";
import {
Box,


@@ -1,10 +1,8 @@
import { Container, Flex, Link } from "@chakra-ui/react";
import { featureEnabled } from "../lib/features";
import { Container, Flex } from "@chakra-ui/react";
import NextLink from "next/link";
import Image from "next/image";
import UserInfo from "../(auth)/userInfo";
import AuthWrapper from "./AuthWrapper";
import { RECORD_A_MEETING_URL } from "../api/urls";
import MainNav from "../components/MainNav";
export default async function AppLayout({
children,
@@ -30,7 +28,7 @@ export default async function AppLayout({
mt="1"
>
{/* Logo on the left */}
<Link as={NextLink} href="/" className="flex">
<NextLink href="/" className="flex">
<Image
src="/reach.svg"
width={32}
@@ -46,53 +44,8 @@ export default async function AppLayout({
Capture the signal, not the noise
</p>
</div>
</Link>
<div>
{/* Text link on the right */}
<Link
as={NextLink}
href={RECORD_A_MEETING_URL}
className="font-light px-2"
>
Create
</Link>
{featureEnabled("browse") ? (
<>
&nbsp;·&nbsp;
<Link href="/browse" as={NextLink} className="font-light px-2">
Browse
</Link>
</>
) : (
<></>
)}
{featureEnabled("rooms") ? (
<>
&nbsp;·&nbsp;
<Link href="/rooms" as={NextLink} className="font-light px-2">
Rooms
</Link>
</>
) : (
<></>
)}
{featureEnabled("requireLogin") ? (
<>
&nbsp;·&nbsp;
<Link
href="/settings/api-keys"
as={NextLink}
className="font-light px-2"
>
Settings
</Link>
&nbsp;·&nbsp;
<UserInfo />
</>
) : (
<></>
)}
</div>
</NextLink>
<MainNav />
</Flex>
<AuthWrapper>{children}</AuthWrapper>


@@ -1,13 +1,11 @@
"use client";
import React, { useEffect, useState } from "react";
import useAudioDevice from "../useAudioDevice";
import "react-select-search/style.css";
import "../../../styles/form.scss";
import About from "../../../(aboutAndPrivacy)/about";
import Privacy from "../../../(aboutAndPrivacy)/privacy";
import { useRouter } from "next/navigation";
import useCreateTranscript from "../createTranscript";
import SelectSearch from "react-select-search";
import { supportedLanguages } from "../../../supportedLanguages";
import {
Flex,
@@ -21,6 +19,7 @@ import {
} from "@chakra-ui/react";
import { useAuth } from "../../../lib/AuthProvider";
import { featureEnabled } from "../../../lib/features";
import { SearchableLanguageSelect } from "../../../components/SearchableLanguageSelect";
const TranscriptCreate = () => {
const router = useRouter();
@@ -147,31 +146,27 @@ const TranscriptCreate = () => {
p={8}
flexDir="column"
my={4}
className="form-on-primary"
>
<Heading size="xl" mb={4}>
Try Reflector
</Heading>
<Box mb={4}>
<Text>Recording name</Text>
<div className="select-search-container">
<input
className="select-search-input"
type="text"
onChange={nameChange}
placeholder="Optional"
/>
</div>
<Text mb={1}>Recording name</Text>
<input
className="form-field-input"
type="text"
onChange={nameChange}
placeholder="Optional"
/>
</Box>
<Box mb={4}>
<Text>Do you want to enable live translation?</Text>
<SelectSearch
search
<Text mb={1}>Do you want to enable live translation?</Text>
<SearchableLanguageSelect
options={supportedLanguages}
value={targetLanguage}
onChange={onLanguageChange}
onBlur={() => {}}
onFocus={() => {}}
placeholder="Choose your language"
placeholder="No translation"
/>
</Box>
{!loading ? (


@@ -79,9 +79,7 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
// Audio is not deleted, proceed to load it
audioElement = document.createElement("audio");
const audioUrl = `${API_URL}/v1/transcripts/${transcriptId}/audio/mp3`;
audioElement.src = accessTokenInfo
? `${audioUrl}?token=${encodeURIComponent(accessTokenInfo)}`
: audioUrl;
audioElement.src = audioUrl;
audioElement.crossOrigin = "anonymous";
audioElement.preload = "auto";


@@ -28,7 +28,7 @@ function WherebyConsentDialogButton({
meetingId: MeetingId;
recordingType: Meeting["recording_type"];
skipConsent: boolean;
wherebyRef: React.RefObject<HTMLElement>;
wherebyRef: React.RefObject<HTMLElement | null>;
}) {
const previousFocusRef = useRef<HTMLElement | null>(null);


@@ -49,8 +49,8 @@ export type RoomDetails = {
// stages: we focus on the consent, then whereby steals focus, then we focus on the consent again, then return focus to whoever stole it initially
const useConsentWherebyFocusManagement = (
acceptButtonRef: RefObject<HTMLButtonElement>,
wherebyRef: RefObject<HTMLElement>,
acceptButtonRef: RefObject<HTMLButtonElement | null>,
wherebyRef: RefObject<HTMLElement | null>,
) => {
const currentFocusRef = useRef<HTMLElement | null>(null);
useEffect(() => {
@@ -87,7 +87,7 @@ const useConsentWherebyFocusManagement = (
const useConsentDialog = (
meetingId: MeetingId,
wherebyRef: RefObject<HTMLElement> /*accessibility*/,
wherebyRef: RefObject<HTMLElement | null> /*accessibility*/,
) => {
const { state: consentState, touch, hasAnswered } = useRecordingConsent();
// toast would open duplicates, even with using "id=" prop
@@ -220,7 +220,7 @@ function ConsentDialogButton({
wherebyRef,
}: {
meetingId: MeetingId;
wherebyRef: React.RefObject<HTMLElement>;
wherebyRef: React.RefObject<HTMLElement | null>;
}) {
const { showConsentModal, consentState, hasAnswered, consentLoading } =
useConsentDialog(meetingId, wherebyRef);


@@ -1,6 +1,14 @@
import NextAuth from "next-auth";
import { authOptions } from "../../../lib/authBackend";
const handler = NextAuth(authOptions());
export const dynamic = "force-dynamic";
export { handler as GET, handler as POST };
// authOptions() is deferred to request time to avoid calling getNextEnvVar
// during Turbopack's build-phase module evaluation (Next.js 16+)
export function GET(req: Request, ctx: any) {
return NextAuth(authOptions())(req as any, ctx);
}
export function POST(req: Request, ctx: any) {
return NextAuth(authOptions())(req as any, ctx);
}


@@ -1,5 +1,7 @@
import { NextResponse } from "next/server";
export const dynamic = "force-dynamic";
export async function GET() {
const health = {
status: "healthy",


@@ -0,0 +1,46 @@
import NextLink from "next/link";
import { featureEnabled } from "../lib/features";
import UserInfo from "../(auth)/userInfo";
import { RECORD_A_MEETING_URL } from "../api/urls";
function NavLink({
href,
children,
}: {
href: string;
children: React.ReactNode;
}) {
return (
<NextLink href={href} className="font-light px-10">
{children}
</NextLink>
);
}
export default function MainNav() {
return (
<nav>
<NavLink href={RECORD_A_MEETING_URL}>Create</NavLink>
{featureEnabled("browse") && (
<>
&nbsp;·&nbsp;
<NavLink href="/browse">Browse</NavLink>
</>
)}
{featureEnabled("rooms") && (
<>
&nbsp;·&nbsp;
<NavLink href="/rooms">Rooms</NavLink>
</>
)}
{featureEnabled("requireLogin") && (
<>
&nbsp;·&nbsp;
<NavLink href="/settings/api-keys">Settings</NavLink>
&nbsp;·&nbsp;
<UserInfo />
</>
)}
</nav>
);
}


@@ -0,0 +1,98 @@
"use client";
import React, { useMemo } from "react";
import {
Combobox,
createListCollection,
useComboboxContext,
} from "@chakra-ui/react";
export type LangOption = { value: string | undefined; name: string };
type Item = { label: string; value: string };
function FilteredComboboxItems({ items }: { items: Item[] }) {
const ctx = useComboboxContext();
const inputValue = (ctx as { inputValue?: string }).inputValue ?? "";
const filtered = useMemo(() => {
const q = inputValue.trim().toLowerCase();
if (!q) return items;
return items.filter((item) => item.label.toLowerCase().includes(q));
}, [items, inputValue]);
return (
<>
<Combobox.Empty>No matches</Combobox.Empty>
{filtered.map((item) => (
<Combobox.Item key={item.value} item={item}>
{item.label}
</Combobox.Item>
))}
</>
);
}
type Props = {
options: LangOption[];
value: string;
onChange: (value: string) => void;
placeholder: string;
};
export function SearchableLanguageSelect({
options,
value,
onChange,
placeholder,
}: Props) {
const items = useMemo(() => {
const result: Item[] = [];
let addedNone = false;
for (const opt of options) {
const val = opt.value ?? "NOTRANSLATION";
if (val === "NOTRANSLATION" || val === "") {
if (addedNone) continue;
addedNone = true;
result.push({ label: "No translation", value: "NOTRANSLATION" });
} else {
result.push({ label: opt.name, value: val });
}
}
return result.sort((a, b) => {
if (a.value === "NOTRANSLATION") return -1;
if (b.value === "NOTRANSLATION") return 1;
return a.label.localeCompare(b.label);
});
}, [options]);
const collection = useMemo(() => createListCollection({ items }), [items]);
const selectedValues = value && value !== "NOTRANSLATION" ? [value] : [];
return (
<Combobox.Root
collection={collection}
value={selectedValues}
onValueChange={(e) => onChange(e.value[0] ?? "NOTRANSLATION")}
openOnClick
closeOnSelect
selectionBehavior="replace"
placeholder={placeholder}
className="form-combobox"
size="md"
positioning={{ strategy: "fixed", hideWhenDetached: true }}
>
<Combobox.Control>
<Combobox.Input />
<Combobox.IndicatorGroup>
<Combobox.Trigger />
</Combobox.IndicatorGroup>
</Combobox.Control>
<Combobox.Positioner>
<Combobox.Content>
<FilteredComboboxItems items={items} />
</Combobox.Content>
</Combobox.Positioner>
</Combobox.Root>
);
}


@@ -24,55 +24,63 @@ export const viewport: Viewport = {
maximumScale: 1,
};
const SITE_URL = getNextEnvVar("SITE_URL");
const env = getClientEnv();
export const metadata: Metadata = {
metadataBase: new URL(SITE_URL),
title: {
template: "%s Reflector",
default: "Reflector - AI-Powered Meeting Transcriptions by Monadical",
},
description:
"Reflector is an AI-powered tool that transcribes your meetings with unparalleled accuracy, divides content by topics, and provides insightful summaries. Maximize your productivity with Reflector, brought to you by Monadical. Capture the signal, not the noise",
applicationName: "Reflector",
referrer: "origin-when-cross-origin",
keywords: ["Reflector", "Monadical", "AI", "Meetings", "Transcription"],
authors: [{ name: "Monadical Team", url: "https://monadical.com/team.html" }],
formatDetection: {
email: false,
address: false,
telephone: false,
},
openGraph: {
title: "Reflector",
export function generateMetadata(): Metadata {
  const SITE_URL = getNextEnvVar("SITE_URL");
  return {
    metadataBase: new URL(SITE_URL),
    title: {
      template: "%s Reflector",
      default: "Reflector - AI-Powered Meeting Transcriptions by Monadical",
    },
    description:
      "Reflector is an AI-powered tool that transcribes your meetings with unparalleled accuracy, divides content by topics, and provides insightful summaries. Maximize your productivity with Reflector, brought to you by Monadical. Capture the signal, not the noise.",
    applicationName: "Reflector",
    referrer: "origin-when-cross-origin",
    keywords: ["Reflector", "Monadical", "AI", "Meetings", "Transcription"],
    authors: [
      { name: "Monadical Team", url: "https://monadical.com/team.html" },
    ],
    formatDetection: {
      email: false,
      address: false,
      telephone: false,
    },
    twitter: {
      card: "summary_large_image",
      title: "Reflector",
      description:
        "Reflector is an AI-powered tool that transcribes your meetings with unparalleled accuracy, divides content by topics, and provides insightful summaries. Maximize your productivity with Reflector, brought to you by Monadical. Capture the signal, not the noise.",
      images: ["/r-icon.png"],
    },
    openGraph: {
      title: "Reflector",
      description:
        "Reflector is an AI-powered tool that transcribes your meetings with unparalleled accuracy, divides content by topics, and provides insightful summaries. Maximize your productivity with Reflector, brought to you by Monadical. Capture the signal, not the noise.",
      type: "website",
    },
    icons: {
      icon: "/r-icon.png",
      shortcut: "/r-icon.png",
      apple: "/r-icon.png",
    },
    robots: {
      index: false,
      follow: false,
      noarchive: true,
      noimageindex: true,
    },
  };
}
export default async function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
const env = getClientEnv();
return (
<html lang="en" className={poppins.className} suppressHydrationWarning>
<body

View File

@@ -84,7 +84,7 @@ export const getClientEnvServer = (): ClientEnvCommon => {
if (isBuildPhase) {
return {
API_URL: parseNonEmptyString(process.env.API_URL ?? ""),
WEBSOCKET_URL: parseMaybeNonEmptyString(process.env.WEBSOCKET_URL ?? ""),
AUTH_PROVIDER: parseAuthProvider(),
SENTRY_DSN: parseMaybeNonEmptyString(
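
The hunk above swaps `getNextEnvVar` for explicit string parsers during the build phase. The real `parseNonEmptyString`/`parseMaybeNonEmptyString` helpers live elsewhere in the repo and are not shown in this diff; as a rough sketch under that assumption, they likely behave along these lines:

```typescript
// Hypothetical reconstructions of the env parsers referenced in the diff.
// The actual implementations in the repo may differ (e.g. zod-based parsing).
const parseNonEmptyString = (value: string): string => {
  if (value.trim() === "") {
    // Fail fast at build time: a required env var was missing or blank.
    throw new Error("expected a non-empty environment variable");
  }
  return value;
};

const parseMaybeNonEmptyString = (value: string): string | undefined =>
  value.trim() === "" ? undefined : value;
```

With `process.env.API_URL ?? ""`, a missing variable reaches the parser as an empty string and fails loudly, instead of silently leaking `undefined` into the client environment object.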

View File

@@ -1,3 +1,5 @@
import type { JSX } from "react";
type SimpleProps = {
children: JSX.Element | string | (JSX.Element | string)[];
className?: string;

View File

@@ -1,42 +1,74 @@
/* Form fields on primary (blue) card white inputs like previous react-select */
.form-on-primary {
  --form-bg: #fff;
  --form-border: #dce0e8;
  --form-focus-border: #1e66f5;
  --form-text: #000;
  --form-placeholder: #6c6f85;
  --form-option-bg: #fff;
  --form-option-hover: #eff1f5;
  --form-dropdown-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
  --form-radius: 0.5rem; /* 8px, matches rounded-lg elsewhere */
}

.form-on-primary .form-field-input,
.form-on-primary .form-field-select,
.form-on-primary .form-field-search-input {
  box-sizing: border-box;
  width: 100%;
  padding: 0.5rem 0.75rem;
  border-radius: var(--form-radius);
  border: 1px solid var(--form-border);
  background-color: var(--form-bg);
  color: var(--form-text);
  font-size: 0.9375rem;
  outline: none;
  cursor: pointer;

  &::placeholder {
    color: var(--form-placeholder);
  }

  &:focus {
    border-color: var(--form-focus-border);
  }
}

.form-on-primary .form-field-select option {
  background: var(--form-option-bg);
  color: var(--form-text);
}

/* Chakra Combobox inside form-on-primary: white input + dropdown, dark text */
.form-on-primary .form-combobox {
  width: 100%;

  [data-part="control"],
  & input {
    border-radius: var(--form-radius);
    border-color: var(--form-border);
    background-color: var(--form-bg);
    color: var(--form-text);

    &:focus,
    &[data-focus] {
      border-color: var(--form-focus-border);
    }
  }

  [data-part="content"] {
    border-radius: var(--form-radius);
    border: 1px solid var(--form-border);
    background: var(--form-bg);
    box-shadow: var(--form-dropdown-shadow);
    color: var(--form-text);
  }

  [data-part="item"] {
    color: var(--form-text);

    &:hover {
      background: var(--form-option-hover);
    }
  }
}

View File

@@ -159,7 +159,7 @@ export default function WebinarPage(details: WebinarDetails) {
<div className="max-w-4xl mx-auto px-2 py-8 bg-gray-50">
<div className="bg-white rounded-3xl px-4 md:px-36 py-4 shadow-md mx-auto">
<Link href="https://www.monadical.com" target="_blank">
<Image
src="/monadical-black-white 1.svg"
alt="Monadical Logo"
className="mx-auto mb-8"
@@ -355,7 +355,7 @@ export default function WebinarPage(details: WebinarDetails) {
<div className="max-w-4xl mx-auto px-2 py-8 bg-gray-50">
<div className="bg-white rounded-3xl px-4 md:px-36 py-4 shadow-md mx-auto">
<Link href="https://www.monadical.com" target="_blank">
<Image
src="/monadical-black-white 1.svg"
alt="Monadical Logo"
className="mx-auto mb-8"

View File

@@ -4,47 +4,20 @@ const nextConfig = {
  env: {
    IS_CI: process.env.IS_CI,
  },
  experimental: {
    optimizePackageImports: ["@chakra-ui/react"],
  },
};

// Injected content via Sentry wizard below
const { withSentryConfig } = require("@sentry/nextjs");

module.exports = withSentryConfig(nextConfig, {
  // For all available options, see:
  // https://github.com/getsentry/sentry-webpack-plugin#options
  silent: true,
  org: "monadical",
  project: "reflector-www",
  widenClientFileUpload: true,
  tunnelRoute: "/monitoring",
  bundleSizeOptimizations: {
    excludeDebugStatements: true,
  },
});

View File

@@ -13,60 +13,66 @@
"test": "jest"
},
"dependencies": {
"@chakra-ui/react": "^3.24.2",
"@daily-co/daily-js": "^0.84.0",
"@chakra-ui/react": "^3.33.0",
"@daily-co/daily-js": "^0.87.0",
"@emotion/react": "^11.14.0",
"@fortawesome/fontawesome-svg-core": "^6.4.0",
"@fortawesome/free-solid-svg-icons": "^6.4.0",
"@fortawesome/react-fontawesome": "^0.2.0",
"@fortawesome/fontawesome-svg-core": "^7.2.0",
"@fortawesome/free-solid-svg-icons": "^7.2.0",
"@fortawesome/react-fontawesome": "^3.2.0",
"@sentry/nextjs": "^10.40.0",
"@tanstack/react-query": "^5.85.9",
"@types/ioredis": "^5.0.0",
"@whereby.com/browser-sdk": "^3.3.4",
"autoprefixer": "10.4.20",
"axios": "^1.13.5",
"eslint": "^9.33.0",
"eslint-config-next": "^15.5.3",
"@tanstack/react-query": "^5.90.21",
"@whereby.com/browser-sdk": "^3.18.21",
"autoprefixer": "10.4.27",
"axios": "^1.13.6",
"eslint": "^10.0.2",
"eslint-config-next": "^16.1.6",
"fontawesome": "^5.6.3",
"ioredis": "^5.7.0",
"jest-worker": "^29.6.2",
"lucide-react": "^0.525.0",
"next": "^15.5.10",
"next-auth": "^4.24.12",
"ioredis": "^5.10.0",
"jest-worker": "^30.2.0",
"lucide-react": "^0.575.0",
"next": "^16.1.6",
"next-auth": "^4.24.13",
"next-themes": "^0.4.6",
"nuqs": "^2.4.3",
"openapi-fetch": "^0.14.0",
"openapi-react-query": "^0.5.0",
"postcss": "8.4.31",
"nuqs": "^2.8.9",
"openapi-fetch": "^0.17.0",
"openapi-react-query": "^0.5.4",
"postcss": "8.5.6",
"prop-types": "^15.8.1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react": "^19.2.4",
"react-dom": "^19.2.4",
"react-dropdown": "^1.11.0",
"react-icons": "^5.0.1",
"react-markdown": "^9.0.0",
"react-qr-code": "^2.0.12",
"react-select-search": "^4.1.7",
"react-icons": "^5.5.0",
"react-markdown": "^10.1.0",
"react-qr-code": "^2.0.18",
"react-uuid-hook": "^0.0.6",
"redlock": "5.0.0-beta.2",
"remeda": "^2.31.1",
"sass": "^1.63.6",
"remeda": "^2.33.6",
"sass": "^1.97.3",
"simple-peer": "^9.11.1",
"tailwindcss": "^3.3.2",
"typescript": "^5.1.6",
"wavesurfer.js": "^7.4.2",
"zod": "^4.1.5"
"tailwindcss": "^4.2.1",
"typescript": "^5.9.3",
"wavesurfer.js": "^7.12.1",
"zod": "^4.3.6"
},
"main": "index.js",
"repository": "https://github.com/Monadical-SAS/reflector-ui.git",
"author": "Andreas <andreas@monadical.com>",
"license": "All Rights Reserved",
"devDependencies": {
"@tailwindcss/postcss": "^4.2.1",
"@types/jest": "^30.0.0",
"@types/react": "18.2.20",
"jest": "^30.1.3",
"openapi-typescript": "^7.9.1",
"prettier": "^3.0.0",
"ts-jest": "^29.4.1"
"@types/react": "19.2.14",
"@types/react-dom": "^19.2.3",
"jest": "^30.2.0",
"openapi-typescript": "^7.13.0",
"prettier": "^3.8.1",
"ts-jest": "^29.4.6"
},
"packageManager": "pnpm@10.14.0+sha512.ad27a79641b49c3e481a16a805baa71817a04bbe06a38d17e60e2eaee83f6a146c6a688125f5792e48dd5ba30e7da52a5cda4c3992b9ccf333f9ce223af84748"
"pnpm": {
"overrides": {
"minimatch@>=5.0.0 <5.1.8": "5.1.8",
"js-yaml@<4.1.1": "4.1.1",
"webpack": "5.105.3"
}
}
}
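
The new `pnpm.overrides` block is the mechanism behind the "remaining dependabot security issues" fix: it forces every transitive occurrence of a package to resolve to a patched version, regardless of what intermediate dependencies request. A minimal standalone sketch (the `name` field and trimmed-down contents here are illustrative; the override keys mirror the diff):

```json
{
  "name": "example-app",
  "pnpm": {
    "overrides": {
      "js-yaml@<4.1.1": "4.1.1"
    }
  }
}
```

After editing overrides, running `pnpm install` rewrites the lockfile so any dependency that previously pulled in `js-yaml` below 4.1.1 now receives 4.1.1; the `pkg@range` key syntax scopes the override to versions matching that range only.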

www/pnpm-lock.yaml (generated, 5747 changed lines): file diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
module.exports = {
  plugins: {
    "@tailwindcss/postcss": {},
    autoprefixer: {},
  },
};
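
For context on this hunk: in Tailwind v4 the PostCSS plugin moved out of the `tailwindcss` package into `@tailwindcss/postcss`, and the CSS entry point changed from the v3 `@tailwind` directives to a single import. A sketch of the accompanying stylesheet change (the file name is an assumption; this diff does not show it):

```css
/* globals.css, Tailwind v3 style (before):
   @tailwind base;
   @tailwind components;
   @tailwind utilities; */

/* Tailwind v4 style (after): one import, handled by @tailwindcss/postcss */
@import "tailwindcss";
```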

View File

@@ -5,7 +5,6 @@ module.exports = {
preflight: false,
},
content: [
"./pages/**/*.{js,ts,jsx,tsx,mdx}",
"./components/**/*.{js,ts,jsx,tsx,mdx}",
"./app/**/*.{js,ts,jsx,tsx,mdx}",
],

View File

@@ -13,7 +13,7 @@
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"jsx": "react-jsx",
"plugins": [
{
"name": "next"
@@ -22,6 +22,12 @@
"strictNullChecks": true,
"downlevelIteration": true
},
"include": ["next-env.d.ts", ".next/types/**/*.ts", "**/*.ts", "**/*.tsx"],
"include": [
"next-env.d.ts",
".next/types/**/*.ts",
"**/*.ts",
"**/*.tsx",
".next/dev/types/**/*.ts"
],
"exclude": ["node_modules"]
}

View File

@@ -1,4 +0,0 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1