Compare commits

..

21 Commits

Author SHA1 Message Date
dependabot[bot]
080895be74 build(deps): bump axios
Bumps the npm_and_yarn group with 1 update in the /www directory: [axios](https://github.com/axios/axios).


Updates `axios` from 1.11.0 to 1.12.0
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.11.0...v1.12.0)

---
updated-dependencies:
- dependency-name: axios
  dependency-version: 1.12.0
  dependency-type: direct:production
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-16 00:20:37 +00:00
b42f7cfc60 feat: remove profanity filter that was there for conference (#652) 2025-09-15 18:19:19 -06:00
c546e69739 fix: zulip stream and topic selection in share dialog (#644)
* fix: zulip stream and topic selection in share dialog

Replace useListCollection with createListCollection to match the working
room edit implementation. This ensures collections update when data loads,
fixing the issue where streams and topics wouldn't appear until navigation.

* fix: wrap createListCollection in useMemo to prevent recreation on every render

Both streamCollection and topicCollection are now memoized to improve performance
and prevent unnecessary re-renders of Combobox components
2025-09-15 12:34:51 -06:00
Igor Monadical
3f1fe8c9bf chore: remove timeout-based auth session logic (#649)
* remove timeout-based auth session logic

* remove timeout-based auth session logic

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-15 14:19:10 -04:00
5f143fe364 fix: zulip and consent handler on the file pipeline (#645) 2025-09-15 10:49:20 -06:00
Igor Monadical
79f161436e chore: meeting user id removal and room id requirement (#635)
* chore: remove meeting user id and make meeting room id required

* meeting room_id optional

* orphaned meeting room ids DATA migration

* ci fix

* fix meeting_room_id_fkey downgrade

* fix migration rollback

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* fix: put index back (meeting room id)

* remove noop migrations

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 13:07:58 -04:00
Igor Monadical
5cba5d310d chore: sentry and nextjs major bumps (#633)
* chore: remove nextjs-config

* build fix

* sentry update

* nextjs update

* feature flags doc

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* no sentry backward compat

* better .env.example

* AUTHENTIK_REFRESH_TOKEN_URL not so required

* accommodate auth system to requiredLogin feature

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-12 12:41:44 -04:00
43ea9349f5 chore(main): release 0.10.0 (#616) 2025-09-11 20:57:19 -06:00
Igor Monadical
b3a8e9739d chore: whereby & s3 settings env error reporting (#637)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 17:52:34 -04:00
Igor Monadical
369ecdff13 feat: replace nextjs-config with environment variables (#632)
* chore: remove nextjs-config

* build fix

* update readme

* explicit nextjs env vars + remove feature-unrelated things and obsolete vars from config

* full config removal

* remove force-dynamic from pages

* compile fix

* restore claude-deleted tests

* better .env.example

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-11 11:20:41 -04:00
fc363bd49b fix: missing follow_redirects=True on modal endpoint (#630) 2025-09-10 08:15:47 -06:00
Igor Monadical
962038ee3f fix: auth post (#627)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:46:57 -04:00
Igor Monadical
3b85ff3bdf fix: auth post (#626)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 16:27:46 -04:00
Igor Monadical
cde99ca271 fix: auth post (#624)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 15:48:07 -04:00
Igor Monadical
f81fe9948a fix: anonymous users transcript permissions (#621)
* fix: public transcript visibility

* fix: transcript permissions frontend

* dead code removal

* chore: remove unused code

* fix search tests

* fix search tests

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-09 10:50:29 -04:00
Igor Monadical
5a5b323382 fix: sync backend and frontend token refresh logic (#614)
* sync backend and frontend token refresh logic

* return react strict mode

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-08 10:40:18 -04:00
02a3938822 chore(main): release 0.9.0 (#603) 2025-09-05 22:50:10 -06:00
Igor Monadical
7f5a4c9ddc fix: token refresh locking (#613)
* fix: kv use tls explicit

* fix: token refresh locking

* remove logs

* compile fix

* compile fix

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 23:03:24 -04:00
Igor Monadical
08d88ec349 fix: kv use tls explicit (#610)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 18:39:32 -04:00
Igor Monadical
c4d2825c81 feat: frontend openapi react query (#606)
* refactor: migrate from @hey-api/openapi-ts to openapi-react-query

- Replace @hey-api/openapi-ts with openapi-typescript and openapi-react-query
- Generate TypeScript types from OpenAPI spec
- Set up React Query infrastructure with QueryClientProvider
- Migrate all API hooks to use React Query patterns
- Maintain backward compatibility for existing components
- Remove old API infrastructure and dependencies

* fix: resolve import errors and add missing api hooks

- Create constants.ts for RECORD_A_MEETING_URL
- Add api-types.ts for backward compatible type exports
- Update all imports from deleted api folder to new locations
- Add missing React Query hooks for rooms and zulip operations
- Create useApi compatibility layer for unmigrated components

* feat: migrate components to React Query hooks

- Add comprehensive API hooks for all operations
- Migrate rooms page to use React Query mutations
- Update transcript title component to use mutation hook
- Refactor share/privacy component with proper error handling
- Remove direct API client usage in favor of hooks

* feat: complete migration from @hey-api/openapi-ts to openapi-react-query

- Migrated all components from useApi compatibility layer to direct React Query hooks
- Added new hooks for participant operations, room meetings, and speaker operations
- Updated all imports from old api module to api-types
- Fixed TypeScript types and API endpoint signatures
- Removed deprecated useApi.ts compatibility layer
- Fixed SourceKind enum values to match OpenAPI spec
- Added @ts-ignore for Zulip endpoints not in OpenAPI spec yet
- Fixed all compilation errors and type issues

* fix: authentication flow with React Query migration

- Fix middleware management in apiClient to properly handle auth tokens
- Update ApiAuthProvider to correctly configure base URL and auth
- Add missing NextAuth API route handler at app/api/auth/[...nextauth]/route.ts
- Remove middleware ejection attempts (not supported by openapi-fetch)
- Use global variables to store current auth token and API URL
- Setup middleware once on initialization instead of repeatedly adding

This fixes the login/logout flow that was broken after migrating from
the useApi compatibility layer to native React Query hooks.

* fix: prevent unauthorized API calls before authentication

- Add global AuthGuard component to handle authentication at layout level
- Make all API query hooks conditional on authentication status
- Define public routes (like /transcripts/new) that don't require auth
- Fix login flow to use NextAuth signIn instead of non-existent /login route
- Prevent 401 errors by waiting for auth token before making API calls

Previously, all routes under (app) were publicly accessible with each page
handling auth individually. Now authentication is enforced globally while
still allowing specific routes to remain public.

* refactor: remove redundant client-side AuthGuard

The authentication is already properly handled by Next.js middleware
in middleware.ts with LOGIN_REQUIRED_PAGES. The middleware approach is
superior as it:
- Provides server-side protection before page loads
- Prevents flash of unauthorized content
- Centralizes auth logic in one place
- Better performance (no client-side JS needed)

Keep the API hooks conditional to prevent 401 errors before token is ready.

* fix: use direct status check for API query authentication

Changed all query hooks to use direct `status === "authenticated"` check
instead of derived `isAuthenticated && !isLoading` to avoid race conditions
where queries might fire before the authentication token is properly set.

This prevents the brief 401 errors that occur on page refresh when the
session is being restored.

* fix: correct content-type header for FormData uploads

Previously, the API client was setting a default Content-Type of application/json
for all requests, which broke file uploads that need multipart/form-data.

Now the client only sets application/json when the body is not FormData,
allowing FormData to automatically set the correct multipart boundary.

* fix: resolve authentication race condition with React Query

Previously, API calls were being made before the auth token was configured,
causing initial 401 errors that would retry with 200 after token setup.

Changes:
- Add global auth readiness tracking in apiClient
- Create useAuthReady hook that checks both session and token state
- Update all API hooks to use isAuthReady instead of just session status
- Add AuthWrapper component at layout level for consistent loading UX
- Show spinner while authentication initializes across all pages

This ensures API calls only fire after authentication is fully configured,
eliminating the 401/retry pattern and improving user experience.

* refactor: clean up api-hooks.ts comments and improve search invalidation

- Remove redundant function category comments (exports are self-explanatory)
- Remove obvious inline comments for query invalidation
- Fix search endpoint invalidation to clear all queries regardless of parameters

* refactor: remove api-types.ts compatibility layer

- Migrated all 29 files from api-types.ts to use reflector-api.d.ts directly
- Removed $SourceKind manual enum in favor of OpenAPI-generated types
- Fixed unrelated Spinner component TypeScript error in AuthWrapper.tsx
- All imports now use: import type { components } from "path/to/reflector-api"
- Deleted api-types.ts file completely

* refactor: rename api-hooks.ts to apiHooks.ts for consistency

- Renamed api-hooks.ts to apiHooks.ts to follow camelCase convention
- Updated all 21 import statements across the codebase
- Maintains consistency with other non-component files (apiClient.tsx, useAuthReady.ts, etc.)
- Follows established naming pattern: PascalCase for components, camelCase for utilities/hooks

* chore: add .playwright-mcp to .gitignore

* refactor: remove SK helper object and use inline type casting in FilterSidebar

Replace the SK (SourceKind) helper object with direct inline type casting
to simplify the code and reduce unnecessary abstraction.

* chore: clean up migration comments from React Query refactoring

- Remove temporary "// Use new React Query hooks" comments
- Remove "// React Query hooks" comments from browse and rooms pages
- Update package.json script name from codegen to openapi for consistency

* refactor: remove Redis dependencies from frontend authentication

- Replace Redis/Redlock with in-memory cache for token management
- Remove @vercel/kv, ioredis, and redlock dependencies from package.json
- Implement simple lock mechanism for concurrent token refresh prevention
- Use Map-based cache with TTL for token storage
- Maintain same authentication flow without external dependencies

This simplifies the infrastructure requirements and removes the need for
Redis while maintaining the same functionality through in-memory caching.

* fix: add staleTime to prevent cross-tab stale data

* fix: remove infinite re-render loop in useSessionAccessToken

The hook was maintaining redundant local state that caused re-renders
on every update, which triggered NextAuth to continuously refetch the
session, resulting in hundreds of POST requests to /api/auth/session.

Simplified the hook to directly return session values without
unnecessary state duplication.

* fix: handle undefined access tokens in auth.ts

Added fallback to empty string for potentially undefined access_token
and refresh_token from NextAuth account object to satisfy
JWTWithAccessToken type requirements.

* Igor/mathieu/frontend openapi react query (#597)

* small typing

* typing fixes

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>

* self-review-fix

* authReady callback simplify

* fix auth

* fix compose

* room detail page fix

* compile fix

* room edit fix

* normalize auth provider

* room edition state granular management

* cover TODOs + cross-tab cache

* session auto refresh blink

* schema generator error type doc

* protect from zombie auth

* clarify access token refresh logic a bit

* remove react-query tab sharing cache

* remove react-query tab sharing cache

* websocket dupe react devmode protection

* invalidate room on room update

* redis cache

* test ts server

* ci randomness

* less edgy config (ci)

* less edgy config (ci)

* less edgy config (ci)

* ci randomness

* ci randomness

* ci randomness

* ci randomness

* less edgy config (ci)

* added vs edited room state cleanup

* file upload real-time state management fix

* prettier auth state ternary

* prettier auth state ternary

* proper api address from env

* INTERVAL_REFRESH_MS

* node version 20 for tests

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* github debug

* CI debug

* CI debug

* nextjs magic

* nextjs magic

* doc

* client-side stale auth soft safety net

---------

Co-authored-by: Mathieu Virbel <mat@meltingrocks.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-09-05 16:01:31 -06:00
0663700a61 fix: align whisper transcriber api with parakeet (#602)
* Documents transcriber api

* Update whisper transcriber api to match parakeet

* Update api transcription spec

* Return 400 for unsupported file type

* Add params to api spec

* Update whisper transcriber implementation to match parakeet
2025-09-05 10:52:14 +02:00
76 changed files with 7206 additions and 9712 deletions

View File

@@ -1,5 +1,37 @@
# Changelog
## [0.10.0](https://github.com/Monadical-SAS/reflector/compare/v0.9.0...v0.10.0) (2025-09-11)
### Features
* replace nextjs-config with environment variables ([#632](https://github.com/Monadical-SAS/reflector/issues/632)) ([369ecdf](https://github.com/Monadical-SAS/reflector/commit/369ecdff13f3862d926a9c0b87df52c9d94c4dde))
### Bug Fixes
* anonymous users transcript permissions ([#621](https://github.com/Monadical-SAS/reflector/issues/621)) ([f81fe99](https://github.com/Monadical-SAS/reflector/commit/f81fe9948a9237b3e0001b2d8ca84f54d76878f9))
* auth post ([#624](https://github.com/Monadical-SAS/reflector/issues/624)) ([cde99ca](https://github.com/Monadical-SAS/reflector/commit/cde99ca2716f84ba26798f289047732f0448742e))
* auth post ([#626](https://github.com/Monadical-SAS/reflector/issues/626)) ([3b85ff3](https://github.com/Monadical-SAS/reflector/commit/3b85ff3bdf4fb053b103070646811bc990c0e70a))
* auth post ([#627](https://github.com/Monadical-SAS/reflector/issues/627)) ([962038e](https://github.com/Monadical-SAS/reflector/commit/962038ee3f2a555dc3c03856be0e4409456e0996))
* missing follow_redirects=True on modal endpoint ([#630](https://github.com/Monadical-SAS/reflector/issues/630)) ([fc363bd](https://github.com/Monadical-SAS/reflector/commit/fc363bd49b17b075e64f9186e5e0185abc325ea7))
* sync backend and frontend token refresh logic ([#614](https://github.com/Monadical-SAS/reflector/issues/614)) ([5a5b323](https://github.com/Monadical-SAS/reflector/commit/5a5b3233820df9536da75e87ce6184a983d4713a))
## [0.9.0](https://github.com/Monadical-SAS/reflector/compare/v0.8.2...v0.9.0) (2025-09-06)
### Features
* frontend openapi react query ([#606](https://github.com/Monadical-SAS/reflector/issues/606)) ([c4d2825](https://github.com/Monadical-SAS/reflector/commit/c4d2825c81f81ad8835629fbf6ea8c7383f8c31b))
### Bug Fixes
* align whisper transcriber api with parakeet ([#602](https://github.com/Monadical-SAS/reflector/issues/602)) ([0663700](https://github.com/Monadical-SAS/reflector/commit/0663700a615a4af69a03c96c410f049e23ec9443))
* kv use tls explicit ([#610](https://github.com/Monadical-SAS/reflector/issues/610)) ([08d88ec](https://github.com/Monadical-SAS/reflector/commit/08d88ec349f38b0d13e0fa4cb73486c8dfd31836))
* source kind for file processing ([#601](https://github.com/Monadical-SAS/reflector/issues/601)) ([dc82f8b](https://github.com/Monadical-SAS/reflector/commit/dc82f8bb3bdf3ab3d4088e592a30fd63907319e1))
* token refresh locking ([#613](https://github.com/Monadical-SAS/reflector/issues/613)) ([7f5a4c9](https://github.com/Monadical-SAS/reflector/commit/7f5a4c9ddc7fd098860c8bdda2ca3b57f63ded2f))
## [0.8.2](https://github.com/Monadical-SAS/reflector/compare/v0.8.1...v0.8.2) (2025-08-29)

View File

@@ -66,7 +66,6 @@ pnpm install
# Copy configuration templates
cp .env_template .env
cp config-template.ts config.ts
```
**Development:**

View File

@@ -99,11 +99,10 @@ Start with `cd www`.
```bash
pnpm install
cp .env_template .env
cp config-template.ts config.ts
cp .env.example .env
```
Then, fill in the environment variables in `.env` and the configuration in `config.ts` as needed. If you are unsure on how to proceed, ask in Zulip.
Then, fill in the environment variables in `.env` as needed. If you are unsure on how to proceed, ask in Zulip.
**Run in development mode**
@@ -168,3 +167,34 @@ You can manually process an audio file by calling the process tool:
```bash
uv run python -m reflector.tools.process path/to/audio.wav
```
## Feature Flags
Reflector uses environment variable-based feature flags to control application functionality. These flags allow you to enable or disable features without code changes.
### Available Feature Flags
| Feature Flag | Environment Variable |
|-------------|---------------------|
| `requireLogin` | `NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN` |
| `privacy` | `NEXT_PUBLIC_FEATURE_PRIVACY` |
| `browse` | `NEXT_PUBLIC_FEATURE_BROWSE` |
| `sendToZulip` | `NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP` |
| `rooms` | `NEXT_PUBLIC_FEATURE_ROOMS` |
### Setting Feature Flags
Feature flags are controlled via environment variables using the pattern `NEXT_PUBLIC_FEATURE_{FEATURE_NAME}` where `{FEATURE_NAME}` is the SCREAMING_SNAKE_CASE version of the feature name.
**Examples:**
```bash
# Enable user authentication requirement
NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN=true
# Disable browse functionality
NEXT_PUBLIC_FEATURE_BROWSE=false
# Enable Zulip integration
NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP=true
```

View File

@@ -0,0 +1,194 @@
## Reflector GPU Transcription API (Specification)
This document defines the Reflector GPU transcription API that all implementations must adhere to. Current implementations include NVIDIA Parakeet (NeMo) and Whisper (faster-whisper), both deployed on Modal.com. The API surface and response shapes are OpenAI/Whisper-compatible, so clients can switch implementations by changing only the base URL.
### Base URL and Authentication
- Example base URLs (Modal web endpoints):
- Parakeet: `https://<account>--reflector-transcriber-parakeet-web.modal.run`
- Whisper: `https://<account>--reflector-transcriber-web.modal.run`
- All endpoints are served under `/v1` and require a Bearer token:
```
Authorization: Bearer <REFLECTOR_GPU_APIKEY>
```
Note: To switch implementations, deploy the desired variant and point `TRANSCRIPT_URL` to its base URL. The API is identical.
### Supported file types
`mp3, mp4, mpeg, mpga, m4a, wav, webm`
### Models and languages
- Parakeet (NVIDIA NeMo): default `nvidia/parakeet-tdt-0.6b-v2`
- Language support: only `en`. Other languages return HTTP 400.
- Whisper (faster-whisper): default `large-v2` (or deployment-specific)
- Language support: multilingual (per Whisper model capabilities).
Note: The `model` parameter is accepted by all implementations for interface parity. Some backends may treat it as informational.
### Endpoints
#### POST /v1/audio/transcriptions
Transcribe one or more uploaded audio files.
Request: multipart/form-data
- `file` (File) — optional. Single file to transcribe.
- `files` (File[]) — optional. One or more files to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`.
- Parakeet: only `en` is accepted; other values return HTTP 400
- Whisper: model-dependent; typically multilingual
- `batch` (boolean) — optional, defaults to `false`.
Notes:
- Provide either `file` or `files`, not both. If neither is provided, HTTP 400.
- `batch` requires `files`; using `batch=true` without `files` returns HTTP 400.
- Response shape for multiple files is the same regardless of `batch`.
- Files sent to this endpoint are processed in a single pass (no VAD/chunking). This is intended for short clips (roughly ≤ 30s; depends on GPU memory/model). For longer audio, prefer `/v1/audio/transcriptions-from-url` which supports VAD-based chunking.
Responses
Single file response:
```json
{
  "text": "transcribed text",
  "words": [
    { "word": "hello", "start": 0.0, "end": 0.5 },
    { "word": "world", "start": 0.5, "end": 1.0 }
  ],
  "filename": "audio.mp3"
}
```
Multiple files response:
```json
{
  "results": [
    { "filename": "a1.mp3", "text": "...", "words": [...] },
    { "filename": "a2.mp3", "text": "...", "words": [...] }
  ]
}
```
Notes:
- Word objects always include keys: `word`, `start`, `end`.
- Some implementations may include a trailing space in `word` to match Whisper tokenization behavior; clients should trim if needed.
Example curl (single file):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "file=@/path/to/audio.mp3" \
-F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
Example curl (multiple files, batch):
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-F "files=@/path/a1.mp3" -F "files=@/path/a2.mp3" \
-F "batch=true" -F "language=en" \
"$BASE_URL/v1/audio/transcriptions"
```
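For reference, here is a minimal Python equivalent of the single-file call above. This is an illustrative sketch, not part of the spec; it reuses the same `$BASE_URL` and `$REFLECTOR_GPU_APIKEY` values as the curl examples and trims word tokens per the note on trailing spaces.
```python
# Illustrative sketch only: Python equivalent of the single-file curl call above.
import os

import httpx

BASE_URL = os.environ["BASE_URL"]
API_KEY = os.environ["REFLECTOR_GPU_APIKEY"]


def transcribe_file(path: str, language: str = "en") -> dict:
    with open(path, "rb") as audio:
        response = httpx.post(
            f"{BASE_URL}/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": (os.path.basename(path), audio)},
            data={"language": language},
            timeout=120,
        )
    response.raise_for_status()
    result = response.json()
    # Some backends keep Whisper-style leading/trailing spaces on tokens; trim for display.
    for word in result["words"]:
        word["word"] = word["word"].strip()
    return result
```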
#### POST /v1/audio/transcriptions-from-url
Transcribe a single remote audio file by URL.
Request: application/json
Body parameters:
- `audio_file_url` (string) — required. URL of the audio file to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`. Parakeet only accepts `en`.
- `timestamp_offset` (number) — optional, defaults to `0.0`. Added to each word's `start`/`end` in the response.
```json
{
  "audio_file_url": "https://example.com/audio.mp3",
  "model": "nvidia/parakeet-tdt-0.6b-v2",
  "language": "en",
  "timestamp_offset": 0.0
}
```
Response:
```json
{
  "text": "transcribed text",
  "words": [
    { "word": "hello", "start": 10.0, "end": 10.5 },
    { "word": "world", "start": 10.5, "end": 11.0 }
  ]
}
```
Notes:
- `timestamp_offset` is added to each word's `start`/`end` in the response.
- Implementations may perform VAD-based chunking and batching for long-form audio; word timings are adjusted accordingly.
Example curl:
```bash
curl -X POST \
-H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
-H "Content-Type: application/json" \
-d '{
"audio_file_url": "https://example.com/audio.mp3",
"language": "en",
"timestamp_offset": 0
}' \
"$BASE_URL/v1/audio/transcriptions-from-url"
```
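A corresponding Python sketch for the URL endpoint, again illustrative and assuming the same `BASE_URL` / `REFLECTOR_GPU_APIKEY` values; the long timeout is an arbitrary choice for long-form audio:
```python
# Illustrative sketch only: Python equivalent of the curl call above.
import os

import httpx

BASE_URL = os.environ["BASE_URL"]
API_KEY = os.environ["REFLECTOR_GPU_APIKEY"]


def transcribe_url(
    audio_file_url: str, timestamp_offset: float = 0.0, language: str = "en"
) -> dict:
    response = httpx.post(
        f"{BASE_URL}/v1/audio/transcriptions-from-url",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "audio_file_url": audio_file_url,
            "language": language,
            "timestamp_offset": timestamp_offset,
        },
        timeout=600,  # long-form audio may take several minutes
    )
    response.raise_for_status()
    # With timestamp_offset=10.0, a word spoken 0.5s into the file comes back
    # with start ≈ 10.5, matching the semantics described above.
    return response.json()
```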
### Error handling
- 400 Bad Request
- Parakeet: `language` other than `en`
- Missing required parameters (`file`/`files` for upload; `audio_file_url` for URL endpoint)
- Unsupported file extension
- 401 Unauthorized
- Missing or invalid Bearer token
- 404 Not Found
- `audio_file_url` does not exist
### Implementation details
- GPUs: A10G for small-file/live, L40S for large-file URL transcription (subject to deployment)
- VAD chunking and segment batching; word timings adjusted and overlapping ends constrained
- Pads very short segments (< 0.5s) to avoid model crashes on some backends
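As a schematic illustration of the timing adjustment mentioned above (not the actual implementation, which lives in the transcriber code): each VAD window is transcribed in isolation, so its word times are relative to the window start and must be shifted by that start plus the caller's `timestamp_offset`.
```python
# Schematic only: how per-window word timings map back to absolute positions.
def adjust_word_timings(
    words: list[dict], window_start: float, timestamp_offset: float = 0.0
) -> list[dict]:
    return [
        {
            "word": w["word"],
            "start": round(w["start"] + window_start + timestamp_offset, 2),
            "end": round(w["end"] + window_start + timestamp_offset, 2),
        }
        for w in words
    ]


# adjust_word_timings([{"word": "hi", "start": 0.0, "end": 0.4}], window_start=42.0)
# -> [{"word": "hi", "start": 42.0, "end": 42.4}]
```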
### Server configuration (Reflector API)
Set the Reflector server to use the Modal backend and point `TRANSCRIPT_URL` to your chosen deployment:
```
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://<account>--reflector-transcriber-parakeet-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<REFLECTOR_GPU_APIKEY>
```
### Conformance tests
Use the pytest-based conformance tests to validate any new implementation (including self-hosted) against this spec:
```
TRANSCRIPT_URL=https://<your-deployment-base> \
TRANSCRIPT_MODAL_API_KEY=your-api-key \
uv run -m pytest -m gpu_modal --no-cov server/tests/test_gpu_modal_transcript.py
```
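For orientation, a minimal sketch of the kind of response-shape assertion such a conformance test makes (illustrative only; the real checks live in `server/tests/test_gpu_modal_transcript.py`):
```python
# Illustrative sketch of a conformance-style shape check; the pytest suite above is authoritative.
import os

import httpx


def check_word_shape(sample_path: str) -> None:
    base_url = os.environ["TRANSCRIPT_URL"]
    api_key = os.environ["TRANSCRIPT_MODAL_API_KEY"]
    with open(sample_path, "rb") as audio:
        response = httpx.post(
            f"{base_url}/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": (os.path.basename(sample_path), audio)},
            data={"language": "en"},
            timeout=120,
        )
    response.raise_for_status()
    body = response.json()
    assert isinstance(body["text"], str)
    for word in body["words"]:
        assert set(word) == {"word", "start", "end"}
        assert word["start"] <= word["end"]
```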

View File

@@ -1,41 +1,78 @@
import os
import tempfile
import sys
import threading
import uuid
from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
from urllib.parse import urlparse
import modal
from pydantic import BaseModel
MODELS_DIR = "/models"
MODEL_NAME = "large-v2"
MODEL_COMPUTE_TYPE: str = "float16"
MODEL_NUM_WORKERS: int = 1
MINUTES = 60 # seconds
SAMPLERATE = 16000
UPLOADS_PATH = "/uploads"
CACHE_PATH = "/models"
SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
VAD_CONFIG = {
"batch_max_duration": 30.0,
"silence_padding": 0.5,
"window_size": 512,
}
volume = modal.Volume.from_name("models", create_if_missing=True)
WhisperUniqFilename = NewType("WhisperUniqFilename", str)
AudioFileExtension = NewType("AudioFileExtension", str)
app = modal.App("reflector-transcriber")
model_cache = modal.Volume.from_name("models", create_if_missing=True)
upload_volume = modal.Volume.from_name("whisper-uploads", create_if_missing=True)
class TimeSegment(NamedTuple):
"""Represents a time segment with start and end times."""
start: float
end: float
class AudioSegment(NamedTuple):
"""Represents an audio segment with timing and audio data."""
start: float
end: float
audio: any
class TranscriptResult(NamedTuple):
"""Represents a transcription result with text and word timings."""
text: str
words: list["WordTiming"]
class WordTiming(TypedDict):
"""Represents a word with its timing information."""
word: str
start: float
end: float
def download_model():
from faster_whisper import download_model
volume.reload()
model_cache.reload()
download_model(MODEL_NAME, cache_dir=MODELS_DIR)
download_model(MODEL_NAME, cache_dir=CACHE_PATH)
volume.commit()
model_cache.commit()
image = (
modal.Image.debian_slim(python_version="3.12")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
)
.env(
{
"HF_HUB_ENABLE_HF_TRANSFER": "1",
@@ -45,19 +82,98 @@ image = (
),
}
)
.run_function(download_model, volumes={MODELS_DIR: volume})
.apt_install("ffmpeg")
.pip_install(
"huggingface_hub==0.27.1",
"hf-transfer==0.1.9",
"torch==2.5.1",
"faster-whisper==1.1.1",
"fastapi==0.115.12",
"requests",
"librosa==0.10.1",
"numpy<2",
"silero-vad==5.1.0",
)
.run_function(download_model, volumes={CACHE_PATH: model_cache})
)
def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
parsed_url = urlparse(url)
url_path = parsed_url.path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return AudioFileExtension(ext)
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return AudioFileExtension("mp3")
if "audio/wav" in content_type:
return AudioFileExtension("wav")
if "audio/mp4" in content_type:
return AudioFileExtension("mp4")
raise ValueError(
f"Unsupported audio format for URL: {url}. "
f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
)
def download_audio_to_volume(
audio_file_url: str,
) -> tuple[WhisperUniqFilename, AudioFileExtension]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = WhisperUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
"""Add 0.5s of silence if audio is shorter than the silence_padding window.
Whisper does not require this strictly, but aligning behavior with Parakeet
avoids edge-case crashes on extremely short inputs and makes comparisons easier.
"""
import numpy as np
audio_duration = len(audio_array) / sample_rate
if audio_duration < VAD_CONFIG["silence_padding"]:
silence_samples = int(sample_rate * VAD_CONFIG["silence_padding"])
silence = np.zeros(silence_samples, dtype=np.float32)
return np.concatenate([audio_array, silence])
return audio_array
@app.cls(
gpu="A10G",
timeout=5 * MINUTES,
scaledown_window=5 * MINUTES,
allow_concurrent_inputs=6,
image=image,
volumes={MODELS_DIR: volume},
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
class Transcriber:
@modal.concurrent(max_inputs=10)
class TranscriberWhisperLive:
"""Live transcriber class for small audio segments (A10G).
Mirrors the Parakeet live class API but uses Faster-Whisper under the hood.
"""
@modal.enter()
def enter(self):
import faster_whisper
@@ -71,23 +187,200 @@ class Transcriber:
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=MODELS_DIR,
download_root=CACHE_PATH,
local_files_only=True,
)
print(f"Model is on device: {self.device}")
@modal.method()
def transcribe_segment(
self,
audio_data: str,
audio_suffix: str,
language: str,
filename: str,
language: str = "en",
):
with tempfile.NamedTemporaryFile("wb+", suffix=f".{audio_suffix}") as fp:
fp.write(audio_data)
"""Transcribe a single uploaded audio file by filename."""
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
file_path,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(segment.text for segment in segments).strip()
words = [
{
"word": word.word,
"start": round(float(word.start), 2),
"end": round(float(word.end), 2),
}
for segment in segments
for word in segment.words
]
return {"text": text, "words": words}
@modal.method()
def transcribe_batch(
self,
filenames: list[str],
language: str = "en",
):
"""Transcribe multiple uploaded audio files and return per-file results."""
upload_volume.reload()
results = []
for filename in filenames:
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"Batch file not found: {file_path}")
with self.lock:
with NoStdStreams():
segments, _ = self.model.transcribe(
file_path,
language=language,
beam_size=5,
word_timestamps=True,
vad_filter=True,
vad_parameters={"min_silence_duration_ms": 500},
)
segments = list(segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{
"word": w.word,
"start": round(float(w.start), 2),
"end": round(float(w.end), 2),
}
for seg in segments
for w in seg.words
]
results.append(
{
"filename": filename,
"text": text,
"words": words,
}
)
return results
@app.cls(
gpu="L40S",
timeout=15 * MINUTES,
image=image,
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
)
class TranscriberWhisperFile:
"""File transcriber for larger/longer audio, using VAD-driven batching (L40S)."""
@modal.enter()
def enter(self):
import faster_whisper
import torch
from silero_vad import load_silero_vad
self.lock = threading.Lock()
self.use_gpu = torch.cuda.is_available()
self.device = "cuda" if self.use_gpu else "cpu"
self.model = faster_whisper.WhisperModel(
MODEL_NAME,
device=self.device,
compute_type=MODEL_COMPUTE_TYPE,
num_workers=MODEL_NUM_WORKERS,
download_root=CACHE_PATH,
local_files_only=True,
)
self.vad_model = load_silero_vad(onnx=False)
@modal.method()
def transcribe_segment(
self, filename: str, timestamp_offset: float = 0.0, language: str = "en"
):
import librosa
import numpy as np
from silero_vad import VADIterator
def vad_segments(
audio_array,
sample_rate: int = SAMPLERATE,
window_size: int = VAD_CONFIG["window_size"],
) -> Generator[TimeSegment, None, None]:
"""Generate speech segments as TimeSegment using Silero VAD."""
iterator = VADIterator(self.vad_model, sampling_rate=sample_rate)
start = None
for i in range(0, len(audio_array), window_size):
chunk = audio_array[i : i + window_size]
if len(chunk) < window_size:
chunk = np.pad(
chunk, (0, window_size - len(chunk)), mode="constant"
)
speech = iterator(chunk)
if not speech:
continue
if "start" in speech:
start = speech["start"]
continue
if "end" in speech and start is not None:
end = speech["end"]
yield TimeSegment(
start / float(SAMPLERATE), end / float(SAMPLERATE)
)
start = None
iterator.reset_states()
upload_volume.reload()
file_path = f"{UPLOADS_PATH}/{filename}"
if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
audio_array, _sr = librosa.load(file_path, sr=SAMPLERATE, mono=True)
# Batch segments up to ~30s windows by merging contiguous VAD segments
merged_batches: list[TimeSegment] = []
batch_start = None
batch_end = None
max_duration = VAD_CONFIG["batch_max_duration"]
for segment in vad_segments(audio_array):
seg_start, seg_end = segment.start, segment.end
if batch_start is None:
batch_start, batch_end = seg_start, seg_end
continue
if seg_end - batch_start <= max_duration:
batch_end = seg_end
else:
merged_batches.append(TimeSegment(batch_start, batch_end))
batch_start, batch_end = seg_start, seg_end
if batch_start is not None and batch_end is not None:
merged_batches.append(TimeSegment(batch_start, batch_end))
all_text = []
all_words = []
for segment in merged_batches:
start_time, end_time = segment.start, segment.end
s_idx = int(start_time * SAMPLERATE)
e_idx = int(end_time * SAMPLERATE)
segment = audio_array[s_idx:e_idx]
segment = pad_audio(segment, SAMPLERATE)
with self.lock:
segments, _ = self.model.transcribe(
fp.name,
segment,
language=language,
beam_size=5,
word_timestamps=True,
@@ -96,66 +389,220 @@ class Transcriber:
)
segments = list(segments)
text = "".join(segment.text for segment in segments)
text = "".join(seg.text for seg in segments).strip()
words = [
{"word": word.word, "start": word.start, "end": word.end}
for segment in segments
for word in segment.words
{
"word": w.word,
"start": round(float(w.start) + start_time + timestamp_offset, 2),
"end": round(float(w.end) + start_time + timestamp_offset, 2),
}
for seg in segments
for w in seg.words
]
if text:
all_text.append(text)
all_words.extend(words)
return {"text": text, "words": words}
return {"text": " ".join(all_text), "words": all_words}
def detect_audio_format(url: str, headers: dict) -> str:
from urllib.parse import urlparse
from fastapi import HTTPException
url_path = urlparse(url).path
for ext in SUPPORTED_FILE_EXTENSIONS:
if url_path.lower().endswith(f".{ext}"):
return ext
content_type = headers.get("content-type", "").lower()
if "audio/mpeg" in content_type or "audio/mp3" in content_type:
return "mp3"
if "audio/wav" in content_type:
return "wav"
if "audio/mp4" in content_type:
return "mp4"
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format for URL. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
def download_audio_to_volume(audio_file_url: str) -> tuple[str, str]:
import requests
from fastapi import HTTPException
response = requests.head(audio_file_url, allow_redirects=True)
if response.status_code == 404:
raise HTTPException(status_code=404, detail="Audio file not found")
response = requests.get(audio_file_url, allow_redirects=True)
response.raise_for_status()
audio_suffix = detect_audio_format(audio_file_url, response.headers)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
f.write(response.content)
upload_volume.commit()
return unique_filename, audio_suffix
@app.function(
scaledown_window=60,
timeout=60,
allow_concurrent_inputs=40,
timeout=600,
secrets=[
modal.Secret.from_name("reflector-gpu"),
],
volumes={MODELS_DIR: volume},
volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
image=image,
)
@modal.concurrent(max_inputs=40)
@modal.asgi_app()
def web():
from fastapi import Body, Depends, FastAPI, HTTPException, UploadFile, status
from fastapi import (
Body,
Depends,
FastAPI,
Form,
HTTPException,
UploadFile,
status,
)
from fastapi.security import OAuth2PasswordBearer
from typing_extensions import Annotated
transcriber = Transcriber()
transcriber_live = TranscriberWhisperLive()
transcriber_file = TranscriberWhisperFile()
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
supported_file_types = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
def apikey_auth(apikey: str = Depends(oauth2_scheme)):
if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
return
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid API key",
headers={"WWW-Authenticate": "Bearer"},
)
class TranscriptResponse(BaseModel):
result: dict
class TranscriptResponse(dict):
pass
@app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
def transcribe(
file: UploadFile,
model: str = "whisper-1",
language: Annotated[str, Body(...)] = "en",
) -> TranscriptResponse:
audio_data = file.file.read()
audio_suffix = file.filename.split(".")[-1]
assert audio_suffix in supported_file_types
file: UploadFile = None,
files: list[UploadFile] | None = None,
model: str = Form(MODEL_NAME),
language: str = Form("en"),
batch: bool = Form(False),
):
if not file and not files:
raise HTTPException(
status_code=400, detail="Either 'file' or 'files' parameter is required"
)
if batch and not files:
raise HTTPException(
status_code=400, detail="Batch transcription requires 'files'"
)
func = transcriber.transcribe_segment.spawn(
audio_data=audio_data,
audio_suffix=audio_suffix,
language=language,
)
result = func.get()
return result
upload_files = [file] if file else files
uploaded_filenames: list[str] = []
for upload_file in upload_files:
audio_suffix = upload_file.filename.split(".")[-1]
if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
raise HTTPException(
status_code=400,
detail=(
f"Unsupported audio format. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
),
)
unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
file_path = f"{UPLOADS_PATH}/{unique_filename}"
with open(file_path, "wb") as f:
content = upload_file.file.read()
f.write(content)
uploaded_filenames.append(unique_filename)
upload_volume.commit()
try:
if batch and len(upload_files) > 1:
func = transcriber_live.transcribe_batch.spawn(
filenames=uploaded_filenames,
language=language,
)
results = func.get()
return {"results": results}
results = []
for filename in uploaded_filenames:
func = transcriber_live.transcribe_segment.spawn(
filename=filename,
language=language,
)
result = func.get()
result["filename"] = filename
results.append(result)
return {"results": results} if len(results) > 1 else results[0]
finally:
for filename in uploaded_filenames:
try:
file_path = f"{UPLOADS_PATH}/{filename}"
os.remove(file_path)
except Exception:
pass
upload_volume.commit()
@app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
def transcribe_from_url(
audio_file_url: str = Body(
..., description="URL of the audio file to transcribe"
),
model: str = Body(MODEL_NAME),
language: str = Body("en"),
timestamp_offset: float = Body(0.0),
):
unique_filename, _audio_suffix = download_audio_to_volume(audio_file_url)
try:
func = transcriber_file.transcribe_segment.spawn(
filename=unique_filename,
timestamp_offset=timestamp_offset,
language=language,
)
result = func.get()
return result
finally:
try:
file_path = f"{UPLOADS_PATH}/{unique_filename}"
os.remove(file_path)
upload_volume.commit()
except Exception:
pass
return app
class NoStdStreams:
def __init__(self):
self.devnull = open(os.devnull, "w")
def __enter__(self):
self._stdout, self._stderr = sys.stdout, sys.stderr
self._stdout.flush()
self._stderr.flush()
sys.stdout, sys.stderr = self.devnull, self.devnull
def __exit__(self, exc_type, exc_value, traceback):
sys.stdout, sys.stderr = self._stdout, self._stderr
self.devnull.close()

View File

@@ -0,0 +1,36 @@
"""remove user_id from meeting table
Revision ID: 0ce521cda2ee
Revises: 6dec9fb5b46c
Create Date: 2025-09-10 12:40:55.688899
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0ce521cda2ee"
down_revision: Union[str, None] = "6dec9fb5b46c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_column("user_id")
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.add_column(
sa.Column("user_id", sa.VARCHAR(), autoincrement=False, nullable=True)
)
# ### end Alembic commands ###

View File

@@ -0,0 +1,32 @@
"""clean up orphaned room_id references in meeting table
Revision ID: 2ae3db106d4e
Revises: def1b5867d4c
Create Date: 2025-09-11 10:35:15.759967
"""
from typing import Sequence, Union
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "2ae3db106d4e"
down_revision: Union[str, None] = "def1b5867d4c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Set room_id to NULL for meetings that reference non-existent rooms
op.execute("""
UPDATE meeting
SET room_id = NULL
WHERE room_id IS NOT NULL
AND room_id NOT IN (SELECT id FROM room WHERE id IS NOT NULL)
""")
def downgrade() -> None:
# Cannot restore orphaned references - no operation needed
pass

View File

@@ -0,0 +1,38 @@
"""make meeting room_id required and add foreign key
Revision ID: 6dec9fb5b46c
Revises: 61882a919591
Create Date: 2025-09-10 10:47:06.006819
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "6dec9fb5b46c"
down_revision: Union[str, None] = "61882a919591"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=False)
batch_op.create_foreign_key(
None, "room", ["room_id"], ["id"], ondelete="CASCADE"
)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.drop_constraint("meeting_room_id_fkey", type_="foreignkey")
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=True)
# ### end Alembic commands ###

View File

@@ -0,0 +1,34 @@
"""make meeting room_id nullable but keep foreign key
Revision ID: def1b5867d4c
Revises: 0ce521cda2ee
Create Date: 2025-09-11 09:42:18.697264
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "def1b5867d4c"
down_revision: Union[str, None] = "0ce521cda2ee"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=True)
# ### end Alembic commands ###
def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table("meeting", schema=None) as batch_op:
batch_op.alter_column("room_id", existing_type=sa.VARCHAR(), nullable=False)
# ### end Alembic commands ###

View File

@@ -27,7 +27,6 @@ dependencies = [
"prometheus-fastapi-instrumentator>=6.1.0",
"sentencepiece>=0.1.99",
"protobuf>=4.24.3",
"profanityfilter>=2.0.6",
"celery>=5.3.4",
"redis>=5.0.1",
"python-jose[cryptography]>=3.3.0",

View File

@@ -2,7 +2,6 @@ from datetime import datetime
from typing import Literal
import sqlalchemy as sa
from fastapi import HTTPException
from pydantic import BaseModel, Field
from reflector.db import get_database, metadata
@@ -18,8 +17,12 @@ meetings = sa.Table(
sa.Column("host_room_url", sa.String),
sa.Column("start_date", sa.DateTime(timezone=True)),
sa.Column("end_date", sa.DateTime(timezone=True)),
sa.Column("user_id", sa.String),
sa.Column("room_id", sa.String),
sa.Column(
"room_id",
sa.String,
sa.ForeignKey("room.id", ondelete="CASCADE"),
nullable=True,
),
sa.Column("is_locked", sa.Boolean, nullable=False, server_default=sa.false()),
sa.Column("room_mode", sa.String, nullable=False, server_default="normal"),
sa.Column("recording_type", sa.String, nullable=False, server_default="cloud"),
@@ -81,8 +84,7 @@ class Meeting(BaseModel):
host_room_url: str
start_date: datetime
end_date: datetime
user_id: str | None = None
room_id: str | None = None
room_id: str | None
is_locked: bool = False
room_mode: Literal["normal", "group"] = "normal"
recording_type: Literal["none", "local", "cloud"] = "cloud"
@@ -101,12 +103,8 @@ class MeetingController:
host_room_url: str,
start_date: datetime,
end_date: datetime,
user_id: str,
room: Room,
):
"""
Create a new meeting
"""
meeting = Meeting(
id=id,
room_name=room_name,
@@ -114,7 +112,6 @@ class MeetingController:
host_room_url=host_room_url,
start_date=start_date,
end_date=end_date,
user_id=user_id,
room_id=room.id,
is_locked=room.is_locked,
room_mode=room.room_mode,
@@ -126,19 +123,13 @@ class MeetingController:
return meeting
async def get_all_active(self) -> list[Meeting]:
"""
Get active meetings.
"""
query = meetings.select().where(meetings.c.is_active)
return await get_database().fetch_all(query)
async def get_by_room_name(
self,
room_name: str,
) -> Meeting:
"""
Get a meeting by room name.
"""
) -> Meeting | None:
query = meetings.select().where(meetings.c.room_name == room_name)
result = await get_database().fetch_one(query)
if not result:
@@ -146,10 +137,7 @@ class MeetingController:
return Meeting(**result)
async def get_active(self, room: Room, current_time: datetime) -> Meeting:
"""
Get latest active meeting for a room.
"""
async def get_active(self, room: Room, current_time: datetime) -> Meeting | None:
end_date = getattr(meetings.c, "end_date")
query = (
meetings.select()
@@ -169,32 +157,12 @@ class MeetingController:
return Meeting(**result)
async def get_by_id(self, meeting_id: str, **kwargs) -> Meeting | None:
"""
Get a meeting by id
"""
query = meetings.select().where(meetings.c.id == meeting_id)
result = await get_database().fetch_one(query)
if not result:
return None
return Meeting(**result)
async def get_by_id_for_http(self, meeting_id: str, user_id: str | None) -> Meeting:
"""
Get a meeting by ID for HTTP request.
If not found, it will raise a 404 error.
"""
query = meetings.select().where(meetings.c.id == meeting_id)
result = await get_database().fetch_one(query)
if not result:
raise HTTPException(status_code=404, detail="Meeting not found")
meeting = Meeting(**result)
if result["user_id"] != user_id:
meeting.host_room_url = ""
return meeting
async def update_meeting(self, meeting_id: str, **kwargs):
query = meetings.update().where(meetings.c.id == meeting_id).values(**kwargs)
await get_database().execute(query)
@@ -219,7 +187,7 @@ class MeetingConsentController:
result = await get_database().fetch_one(query)
if result is None:
return None
return MeetingConsent(**result) if result else None
return MeetingConsent(**result)
async def upsert(self, consent: MeetingConsent) -> MeetingConsent:
"""Create new consent or update existing one for authenticated users"""

View File

@@ -23,7 +23,7 @@ from pydantic import (
from reflector.db import get_database
from reflector.db.rooms import rooms
from reflector.db.transcripts import SourceKind, transcripts
from reflector.db.transcripts import SourceKind, TranscriptStatus, transcripts
from reflector.db.utils import is_postgresql
from reflector.logger import logger
from reflector.utils.string import NonEmptyString, try_parse_non_empty_string
@@ -161,7 +161,7 @@ class SearchResult(BaseModel):
room_name: str | None = None
source_kind: SourceKind
created_at: datetime
status: str = Field(..., min_length=1)
status: TranscriptStatus = Field(..., min_length=1)
rank: float = Field(..., ge=0, le=1)
duration: NonNegativeFloat | None = Field(..., description="Duration in seconds")
search_snippets: list[str] = Field(

View File

@@ -12,7 +12,7 @@ from pathlib import Path
import av
import structlog
from celery import shared_task
from celery import chain, shared_task
from reflector.asynctask import asynctask
from reflector.db.rooms import rooms_controller
@@ -26,6 +26,8 @@ from reflector.logger import logger
from reflector.pipelines.main_live_pipeline import (
PipelineMainBase,
broadcast_to_sockets,
task_cleanup_consent,
task_pipeline_post_to_zulip,
)
from reflector.processors import (
AudioFileWriterProcessor,
@@ -379,6 +381,28 @@ class PipelineMainFile(PipelineMainBase):
await processor.flush()
@shared_task
@asynctask
async def task_send_webhook_if_needed(*, transcript_id: str):
"""Send webhook if this is a room recording with webhook configured"""
transcript = await transcripts_controller.get_by_id(transcript_id)
if not transcript:
return
if transcript.source_kind == SourceKind.ROOM and transcript.room_id:
room = await rooms_controller.get_by_id(transcript.room_id)
if room and room.webhook_url:
logger.info(
"Dispatching webhook",
transcript_id=transcript_id,
room_id=room.id,
webhook_url=room.webhook_url,
)
send_transcript_webhook.delay(
transcript_id, room.id, event_id=uuid.uuid4().hex
)
@shared_task
@asynctask
async def task_pipeline_file_process(*, transcript_id: str):
@@ -406,16 +430,10 @@ async def task_pipeline_file_process(*, transcript_id: str):
await pipeline.set_status(transcript_id, "error")
raise
# Trigger webhook if this is a room recording with webhook configured
if transcript.source_kind == SourceKind.ROOM and transcript.room_id:
room = await rooms_controller.get_by_id(transcript.room_id)
if room and room.webhook_url:
logger.info(
"Dispatching webhook task",
transcript_id=transcript_id,
room_id=room.id,
webhook_url=room.webhook_url,
)
send_transcript_webhook.delay(
transcript_id, room.id, event_id=uuid.uuid4().hex
)
# Run post-processing chain: consent cleanup -> zulip -> webhook
post_chain = chain(
task_cleanup_consent.si(transcript_id=transcript_id),
task_pipeline_post_to_zulip.si(transcript_id=transcript_id),
task_send_webhook_if_needed.si(transcript_id=transcript_id),
)
post_chain.delay()

View File

@@ -47,6 +47,7 @@ class FileDiarizationModalProcessor(FileDiarizationProcessor):
"audio_file_url": data.audio_url,
"timestamp": 0,
},
follow_redirects=True,
)
response.raise_for_status()
diarization_data = response.json()["diarization"]

View File

@@ -54,6 +54,7 @@ class FileTranscriptModalProcessor(FileTranscriptProcessor):
"language": data.language,
"batch": True,
},
follow_redirects=True,
)
response.raise_for_status()
result = response.json()

View File

@@ -4,11 +4,8 @@ import tempfile
from pathlib import Path
from typing import Annotated, TypedDict
from profanityfilter import ProfanityFilter
from pydantic import BaseModel, Field, PrivateAttr
from reflector.redis_cache import redis_cache
class DiarizationSegment(TypedDict):
"""Type definition for diarization segment containing speaker information"""
@@ -20,9 +17,6 @@ class DiarizationSegment(TypedDict):
PUNC_RE = re.compile(r"[.;:?!…]")
profanity_filter = ProfanityFilter()
profanity_filter.set_censor("*")
class AudioFile(BaseModel):
name: str
@@ -124,21 +118,11 @@ def words_to_segments(words: list[Word]) -> list[TranscriptSegment]:
class Transcript(BaseModel):
translation: str | None = None
words: list[Word] = None
@property
def raw_text(self):
# Uncensored text
return "".join([word.text for word in self.words])
@redis_cache(prefix="profanity", duration=3600 * 24 * 7)
def _get_censored_text(self, text: str):
return profanity_filter.censor(text).strip()
words: list[Word] = []
@property
def text(self):
# Censored text
return self._get_censored_text(self.raw_text)
return "".join([word.text for word in self.words])
@property
def human_timestamp(self):
@@ -170,12 +154,6 @@ class Transcript(BaseModel):
word.start += offset
word.end += offset
def clone(self):
words = [
Word(text=word.text, start=word.start, end=word.end) for word in self.words
]
return Transcript(text=self.text, translation=self.translation, words=words)
def as_segments(self) -> list[TranscriptSegment]:
return words_to_segments(self.words)

View File

@@ -1,6 +1,8 @@
from pydantic.types import PositiveInt
from pydantic_settings import BaseSettings, SettingsConfigDict
from reflector.utils.string import NonEmptyString
class Settings(BaseSettings):
model_config = SettingsConfigDict(
@@ -120,7 +122,7 @@ class Settings(BaseSettings):
# Whereby integration
WHEREBY_API_URL: str = "https://api.whereby.dev/v1"
WHEREBY_API_KEY: str | None = None
WHEREBY_API_KEY: NonEmptyString | None = None
WHEREBY_WEBHOOK_SECRET: str | None = None
AWS_WHEREBY_ACCESS_KEY_ID: str | None = None
AWS_WHEREBY_ACCESS_KEY_SECRET: str | None = None

View File

@@ -10,8 +10,11 @@ NonEmptyString = Annotated[
non_empty_string_adapter = TypeAdapter(NonEmptyString)
def parse_non_empty_string(s: str) -> NonEmptyString:
return non_empty_string_adapter.validate_python(s)
def parse_non_empty_string(s: str, error: str | None = None) -> NonEmptyString:
try:
return non_empty_string_adapter.validate_python(s)
except Exception as e:
raise ValueError(f"{e}: {error}" if error else e) from e
def try_parse_non_empty_string(s: str) -> NonEmptyString | None:

View File

@@ -209,20 +209,15 @@ async def rooms_create_meeting(
host_room_url=whereby_meeting["hostRoomUrl"],
start_date=parse_datetime_with_timezone(whereby_meeting["startDate"]),
end_date=parse_datetime_with_timezone(whereby_meeting["endDate"]),
user_id=user_id,
room=room,
)
except (asyncpg.exceptions.UniqueViolationError, sqlite3.IntegrityError):
# Another request already created a meeting for this room
# Log this race condition occurrence
logger.info(
"Race condition detected for room %s - fetching existing meeting",
room.name,
)
logger.warning(
"Whereby meeting %s was created but not used (resource leak) for room %s",
whereby_meeting["meetingId"],
"Race condition detected for room %s and meeting %s - fetching existing meeting",
room.name,
whereby_meeting["meetingId"],
)
# Fetch the meeting that was created by the other request
@@ -232,7 +227,9 @@ async def rooms_create_meeting(
if meeting is None:
# Edge case: meeting was created but expired/deleted between checks
logger.error(
"Meeting disappeared after race condition for room %s", room.name
"Meeting disappeared after race condition for room %s",
room.name,
exc_info=True,
)
raise HTTPException(
status_code=503, detail="Unable to join meeting - please try again"

View File

@@ -350,8 +350,6 @@ async def transcript_update(
transcript = await transcripts_controller.get_by_id_for_http(
transcript_id, user_id=user_id
)
if not transcript:
raise HTTPException(status_code=404, detail="Transcript not found")
values = info.dict(exclude_unset=True)
updated_transcript = await transcripts_controller.update(transcript, values)
return updated_transcript

View File

@@ -1,18 +1,60 @@
import logging
from datetime import datetime
import httpx
from reflector.db.rooms import Room
from reflector.settings import settings
from reflector.utils.string import parse_non_empty_string
logger = logging.getLogger(__name__)
def _get_headers():
api_key = parse_non_empty_string(
settings.WHEREBY_API_KEY, "WHEREBY_API_KEY value is required."
)
return {
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {api_key}",
}
HEADERS = {
"Content-Type": "application/json; charset=utf-8",
"Authorization": f"Bearer {settings.WHEREBY_API_KEY}",
}
TIMEOUT = 10 # seconds
def _get_whereby_s3_auth():
errors = []
try:
bucket_name = parse_non_empty_string(
settings.RECORDING_STORAGE_AWS_BUCKET_NAME,
"RECORDING_STORAGE_AWS_BUCKET_NAME value is required.",
)
except Exception as e:
errors.append(e)
try:
key_id = parse_non_empty_string(
settings.AWS_WHEREBY_ACCESS_KEY_ID,
"AWS_WHEREBY_ACCESS_KEY_ID value is required.",
)
except Exception as e:
errors.append(e)
try:
key_secret = parse_non_empty_string(
settings.AWS_WHEREBY_ACCESS_KEY_SECRET,
"AWS_WHEREBY_ACCESS_KEY_SECRET value is required.",
)
except Exception as e:
errors.append(e)
if len(errors) > 0:
raise Exception(
f"Failed to get Whereby auth settings: {', '.join(str(e) for e in errors)}"
)
return bucket_name, key_id, key_secret
async def create_meeting(room_name_prefix: str, end_date: datetime, room: Room):
s3_bucket_name, s3_key_id, s3_key_secret = _get_whereby_s3_auth()
data = {
"isLocked": room.is_locked,
"roomNamePrefix": room_name_prefix,
@@ -23,23 +65,26 @@ async def create_meeting(room_name_prefix: str, end_date: datetime, room: Room):
"type": room.recording_type,
"destination": {
"provider": "s3",
"bucket": settings.RECORDING_STORAGE_AWS_BUCKET_NAME,
"accessKeyId": settings.AWS_WHEREBY_ACCESS_KEY_ID,
"accessKeySecret": settings.AWS_WHEREBY_ACCESS_KEY_SECRET,
"bucket": s3_bucket_name,
"accessKeyId": s3_key_id,
"accessKeySecret": s3_key_secret,
"fileFormat": "mp4",
},
"startTrigger": room.recording_trigger,
},
"fields": ["hostRoomUrl"],
}
async with httpx.AsyncClient() as client:
response = await client.post(
f"{settings.WHEREBY_API_URL}/meetings",
headers=HEADERS,
headers=_get_headers(),
json=data,
timeout=TIMEOUT,
)
if response.status_code == 403:
logger.warning(
f"Failed to create meeting: access denied on Whereby: {response.text}"
)
response.raise_for_status()
return response.json()
@@ -48,7 +93,7 @@ async def get_room_sessions(room_name: str):
async with httpx.AsyncClient() as client:
response = await client.get(
f"{settings.WHEREBY_API_URL}/insights/room-sessions?roomName={room_name}",
headers=HEADERS,
headers=_get_headers(),
timeout=TIMEOUT,
)
response.raise_for_status()

View File

@@ -105,7 +105,6 @@ async def test_cleanup_deletes_associated_meeting_and_recording():
host_room_url="https://example.com/meeting-host",
start_date=old_date,
end_date=old_date + timedelta(hours=1),
user_id=None,
room_id=None,
)
)
@@ -241,7 +240,6 @@ async def test_meeting_consent_cascade_delete():
host_room_url="https://example.com/cascade-test-host",
start_date=datetime.now(timezone.utc),
end_date=datetime.now(timezone.utc) + timedelta(hours=1),
user_id="test-user",
room_id=None,
)
)

View File

@@ -272,6 +272,9 @@ class TestGPUModalTranscript:
for f in temp_files:
Path(f).unlink(missing_ok=True)
@pytest.mark.skipif(
not "parakeet" in get_model_name(), reason="Parakeet only supports English"
)
def test_transcriptions_error_handling(self):
"""Test error handling for invalid requests."""
url = get_modal_transcript_url()

View File

@@ -58,7 +58,7 @@ async def test_empty_transcript_title_only_match():
"id": test_id,
"name": "Empty Transcript",
"title": "Empty Meeting",
"status": "completed",
"status": "ended",
"locked": False,
"duration": 0.0,
"created_at": datetime.now(timezone.utc),
@@ -109,7 +109,7 @@ async def test_search_with_long_summary():
"id": test_id,
"name": "Test Long Summary",
"title": "Regular Meeting",
"status": "completed",
"status": "ended",
"locked": False,
"duration": 1800.0,
"created_at": datetime.now(timezone.utc),
@@ -165,7 +165,7 @@ async def test_postgresql_search_with_data():
"id": test_id,
"name": "Test Search Transcript",
"title": "Engineering Planning Meeting Q4 2024",
"status": "completed",
"status": "ended",
"locked": False,
"duration": 1800.0,
"created_at": datetime.now(timezone.utc),
@@ -221,7 +221,7 @@ We need to implement PostgreSQL tsvector for better performance.""",
test_result = next((r for r in results if r.id == test_id), None)
if test_result:
assert test_result.title == "Engineering Planning Meeting Q4 2024"
assert test_result.status == "completed"
assert test_result.status == "ended"
assert test_result.duration == 1800.0
assert 0 <= test_result.rank <= 1, "Rank should be normalized to 0-1"
@@ -268,7 +268,7 @@ def mock_db_result():
"title": "Test Transcript",
"created_at": datetime(2024, 6, 15, tzinfo=timezone.utc),
"duration": 3600.0,
"status": "completed",
"status": "ended",
"user_id": "test-user",
"room_id": "room1",
"source_kind": SourceKind.LIVE,
@@ -433,7 +433,7 @@ class TestSearchResultModel:
room_id="room-456",
source_kind=SourceKind.ROOM,
created_at=datetime(2024, 6, 15, tzinfo=timezone.utc),
status="completed",
status="ended",
rank=0.85,
duration=1800.5,
search_snippets=["snippet 1", "snippet 2"],
@@ -443,7 +443,7 @@ class TestSearchResultModel:
assert result.title == "Test Title"
assert result.user_id == "user-123"
assert result.room_id == "room-456"
assert result.status == "completed"
assert result.status == "ended"
assert result.rank == 0.85
assert result.duration == 1800.5
assert len(result.search_snippets) == 2
@@ -474,7 +474,7 @@ class TestSearchResultModel:
id="test-id",
source_kind=SourceKind.LIVE,
created_at=datetime(2024, 6, 15, 12, 30, 45, tzinfo=timezone.utc),
status="completed",
status="ended",
rank=0.9,
duration=None,
search_snippets=[],

View File

@@ -25,7 +25,7 @@ async def test_long_summary_snippet_prioritization():
"id": test_id,
"name": "Test Snippet Priority",
"title": "Meeting About Projects",
"status": "completed",
"status": "ended",
"locked": False,
"duration": 1800.0,
"created_at": datetime.now(timezone.utc),
@@ -106,7 +106,7 @@ async def test_long_summary_only_search():
"id": test_id,
"name": "Test Long Only",
"title": "Standard Meeting",
"status": "completed",
"status": "ended",
"locked": False,
"duration": 1800.0,
"created_at": datetime.now(timezone.utc),

server/uv.lock
View File

@@ -1325,15 +1325,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9c/1f/19ebc343cc71a7ffa78f17018535adc5cbdd87afb31d7c34874680148b32/ifaddr-0.2.0-py3-none-any.whl", hash = "sha256:085e0305cfe6f16ab12d72e2024030f5d52674afad6911bb1eee207177b8a748", size = 12314, upload-time = "2022-06-15T21:40:25.756Z" },
]
[[package]]
name = "inflection"
version = "0.5.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e1/7e/691d061b7329bc8d54edbf0ec22fbfb2afe61facb681f9aaa9bff7a27d04/inflection-0.5.1.tar.gz", hash = "sha256:1a29730d366e996aaacffb2f1f1cb9593dc38e2ddd30c91250c6dde09ea9b417", size = 15091, upload-time = "2020-08-22T08:16:29.139Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/59/91/aa6bde563e0085a02a435aa99b49ef75b0a4b062635e606dab23ce18d720/inflection-0.5.1-py2.py3-none-any.whl", hash = "sha256:f38b2b640938a4f35ade69ac3d053042959b62a0f1076a5bbaa1b9526605a8a2", size = 9454, upload-time = "2020-08-22T08:16:27.816Z" },
]
[[package]]
name = "iniconfig"
version = "2.1.0"
@@ -2311,18 +2302,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/74/c1/bb7e334135859c3a92ec399bc89293ea73f28e815e35b43929c8db6af030/primePy-1.3-py3-none-any.whl", hash = "sha256:5ed443718765be9bf7e2ff4c56cdff71b42140a15b39d054f9d99f0009e2317a", size = 4040, upload-time = "2018-05-29T17:18:17.53Z" },
]
[[package]]
name = "profanityfilter"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "inflection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/8d/03/08740b5e0800f9eb9f675c149a497a3f3735e7b04e414bcce64136e7e487/profanityfilter-2.1.0.tar.gz", hash = "sha256:0ede04e92a9d7255faa52b53776518edc6586dda828aca677c74b5994dfdd9d8", size = 7910, upload-time = "2024-11-25T22:31:51.194Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/88/03/eb18f72dc6e6398e75e3762677f18ab3a773a384b18efd3ed9119844e892/profanityfilter-2.1.0-py2.py3-none-any.whl", hash = "sha256:e1bc07012760fd74512a335abb93a36877831ed26abab78bfe31bebb68f8c844", size = 7483, upload-time = "2024-11-25T22:31:50.129Z" },
]
[[package]]
name = "prometheus-client"
version = "0.22.1"
@@ -3131,7 +3110,6 @@ dependencies = [
{ name = "loguru" },
{ name = "nltk" },
{ name = "openai" },
{ name = "profanityfilter" },
{ name = "prometheus-fastapi-instrumentator" },
{ name = "protobuf" },
{ name = "psycopg2-binary" },
@@ -3208,7 +3186,6 @@ requires-dist = [
{ name = "loguru", specifier = ">=0.7.0" },
{ name = "nltk", specifier = ">=3.8.1" },
{ name = "openai", specifier = ">=1.59.7" },
{ name = "profanityfilter", specifier = ">=2.0.6" },
{ name = "prometheus-fastapi-instrumentator", specifier = ">=6.1.0" },
{ name = "protobuf", specifier = ">=4.24.3" },
{ name = "psycopg2-binary", specifier = ">=2.9.10" },
@@ -3954,8 +3931,8 @@ dependencies = [
{ name = "typing-extensions", marker = "platform_python_implementation != 'PyPy' and sys_platform == 'darwin'" },
]
wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:3d05017d19bc99741288e458888283a44b0ee881d53f05f72f8b1cfea8998122" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:a47b7986bee3f61ad217d8a8ce24605809ab425baf349f97de758815edd2ef54" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0-cp311-none-macosx_11_0_arm64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0-cp312-none-macosx_11_0_arm64.whl" },
]
[[package]]
@@ -3980,16 +3957,16 @@ dependencies = [
{ name = "typing-extensions", marker = "platform_python_implementation == 'PyPy' or sys_platform != 'darwin'" },
]
wheels = [
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-linux_s390x.whl", hash = "sha256:2bfc013dd6efdc8f8223a0241d3529af9f315dffefb53ffa3bf14d3f10127da6" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:680129efdeeec3db5da3f88ee5d28c1b1e103b774aef40f9d638e2cce8f8d8d8" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:cb06175284673a581dd91fb1965662ae4ecaba6e5c357aa0ea7bb8b84b6b7eeb" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-win_amd64.whl", hash = "sha256:7631ef49fbd38d382909525b83696dc12a55d68492ade4ace3883c62b9fc140f" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-win_arm64.whl", hash = "sha256:41e6fc5ec0914fcdce44ccf338b1d19a441b55cafdd741fd0bf1af3f9e4cfd14" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-linux_s390x.whl", hash = "sha256:0e34e276722ab7dd0dffa9e12fe2135a9b34a0e300c456ed7ad6430229404eb5" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:610f600c102386e581327d5efc18c0d6edecb9820b4140d26163354a99cd800d" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:cb9a8ba8137ab24e36bf1742cb79a1294bd374db570f09fc15a5e1318160db4e" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_amd64.whl", hash = "sha256:2be20b2c05a0cce10430cc25f32b689259640d273232b2de357c35729132256d" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_arm64.whl", hash = "sha256:99fc421a5d234580e45957a7b02effbf3e1c884a5dd077afc85352c77bf41434" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-linux_s390x.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-manylinux_2_28_aarch64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-manylinux_2_28_x86_64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-win_amd64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp311-cp311-win_arm64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-linux_s390x.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-manylinux_2_28_aarch64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_amd64.whl" },
{ url = "https://download.pytorch.org/whl/cpu/torch-2.8.0%2Bcpu-cp312-cp312-win_arm64.whl" },
]
[[package]]

www/.env.example
View File

@@ -0,0 +1,34 @@
# Environment
ENVIRONMENT=development
NEXT_PUBLIC_ENV=development
# Site Configuration
NEXT_PUBLIC_SITE_URL=http://localhost:3000
# Nextauth envs
# not used in app code but in lib code
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=your-nextauth-secret-here
# / Nextauth envs
# Authentication (Authentik OAuth/OIDC)
AUTHENTIK_ISSUER=https://authentik.example.com/application/o/reflector
AUTHENTIK_REFRESH_TOKEN_URL=https://authentik.example.com/application/o/token/
AUTHENTIK_CLIENT_ID=your-client-id-here
AUTHENTIK_CLIENT_SECRET=your-client-secret-here
# Feature Flags
# NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN=true
# NEXT_PUBLIC_FEATURE_PRIVACY=false
# NEXT_PUBLIC_FEATURE_BROWSE=true
# NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP=true
# NEXT_PUBLIC_FEATURE_ROOMS=true
# API URLs
NEXT_PUBLIC_API_URL=http://127.0.0.1:1250
NEXT_PUBLIC_WEBSOCKET_URL=ws://127.0.0.1:1250
NEXT_PUBLIC_AUTH_CALLBACK_URL=http://localhost:3000/auth-callback
# Sentry
# SENTRY_DSN=https://your-dsn@sentry.io/project-id
# SENTRY_IGNORE_API_RESOLUTION_ERROR=1

www/.gitignore
View File

@@ -40,7 +40,6 @@ next-env.d.ts
# Sentry Auth Token
.sentryclirc
config.ts
# openapi logs
openapi-ts-error-*.log

View File

@@ -2,6 +2,7 @@
import { Flex, Spinner } from "@chakra-ui/react";
import { useAuth } from "../lib/AuthProvider";
import { useLoginRequiredPages } from "../lib/useLoginRequiredPages";
export default function AuthWrapper({
children,
@@ -9,8 +10,10 @@ export default function AuthWrapper({
children: React.ReactNode;
}) {
const auth = useAuth();
const redirectPath = useLoginRequiredPages();
const redirectHappens = !!redirectPath;
if (auth.status === "loading") {
if (auth.status === "loading" || redirectHappens) {
return (
<Flex
flexDir="column"

View File

@@ -7,9 +7,10 @@ import {
FaMicrophone,
FaGear,
} from "react-icons/fa6";
import { TranscriptStatus } from "../../../lib/transcript";
interface TranscriptStatusIconProps {
status: string;
status: TranscriptStatus;
}
export default function TranscriptStatusIcon({
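The new TranscriptStatus type from lib/transcript is not included in this diff; judging from the status strings used across these changes ("idle", "recording", "processing", "uploaded", "ended", "error"), a plausible shape — an assumption, not the repo's actual definition — would be:

// www/app/lib/transcript.ts — assumed shape, inferred from the statuses referenced in this diff
export type TranscriptStatus =
  | "idle"
  | "recording"
  | "processing"
  | "uploaded"
  | "ended"
  | "error";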

View File

@@ -1,5 +1,5 @@
import { Container, Flex, Link } from "@chakra-ui/react";
import { getConfig } from "../lib/edgeConfig";
import { featureEnabled } from "../lib/features";
import NextLink from "next/link";
import Image from "next/image";
import UserInfo from "../(auth)/userInfo";
@@ -11,8 +11,6 @@ export default async function AppLayout({
}: {
children: React.ReactNode;
}) {
const config = await getConfig();
const { requireLogin, privacy, browse, rooms } = config.features;
return (
<Container
minW="100vw"
@@ -58,7 +56,7 @@ export default async function AppLayout({
>
Create
</Link>
{browse ? (
{featureEnabled("browse") ? (
<>
&nbsp;·&nbsp;
<Link href="/browse" as={NextLink} className="font-light px-2">
@@ -68,7 +66,7 @@ export default async function AppLayout({
) : (
<></>
)}
{rooms ? (
{featureEnabled("rooms") ? (
<>
&nbsp;·&nbsp;
<Link href="/rooms" as={NextLink} className="font-light px-2">
@@ -78,7 +76,7 @@ export default async function AppLayout({
) : (
<></>
)}
{requireLogin ? (
{featureEnabled("requireLogin") ? (
<>
&nbsp;·&nbsp;
<UserInfo />
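lib/features itself is not part of this diff; a minimal sketch consistent with the NEXT_PUBLIC_FEATURE_* variables in www/.env.example (the flag names and the env mapping are assumptions) might look like:

// www/app/lib/features.ts — hypothetical sketch; reads build-time env vars, so it works
// in both server components (like this layout) and client components
export type FeatureName =
  | "requireLogin"
  | "privacy"
  | "browse"
  | "sendToZulip"
  | "rooms";

const FLAGS: Record<FeatureName, string | undefined> = {
  requireLogin: process.env.NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN,
  privacy: process.env.NEXT_PUBLIC_FEATURE_PRIVACY,
  browse: process.env.NEXT_PUBLIC_FEATURE_BROWSE,
  sendToZulip: process.env.NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP,
  rooms: process.env.NEXT_PUBLIC_FEATURE_ROOMS,
};

export const featureEnabled = (name: FeatureName): boolean =>
  FLAGS[name] === "true";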

View File

@@ -3,8 +3,10 @@ import ScrollToBottom from "../../scrollToBottom";
import { Topic } from "../../webSocketTypes";
import useParticipants from "../../useParticipants";
import { Box, Flex, Text, Accordion } from "@chakra-ui/react";
import { featureEnabled } from "../../../../domainContext";
import { TopicItem } from "./TopicItem";
import { TranscriptStatus } from "../../../../lib/transcript";
import { featureEnabled } from "../../../../lib/features";
type TopicListProps = {
topics: Topic[];
@@ -14,7 +16,7 @@ type TopicListProps = {
];
autoscroll: boolean;
transcriptId: string;
status: string;
status: TranscriptStatus | null;
currentTranscriptText: any;
};

View File

@@ -1,5 +1,5 @@
"use client";
import { useState } from "react";
import { useState, use } from "react";
import TopicHeader from "./topicHeader";
import TopicWords from "./topicWords";
import TopicPlayer from "./topicPlayer";
@@ -9,23 +9,27 @@ import ParticipantList from "./participantList";
import type { components } from "../../../../reflector-api";
type GetTranscriptTopic = components["schemas"]["GetTranscriptTopic"];
import { SelectedText, selectedTextIsTimeSlice } from "./types";
import { useTranscriptUpdate } from "../../../../lib/apiHooks";
import useTranscript from "../../useTranscript";
import {
useTranscriptGet,
useTranscriptUpdate,
} from "../../../../lib/apiHooks";
import { useError } from "../../../../(errors)/errorContext";
import { useRouter } from "next/navigation";
import { Box, Grid } from "@chakra-ui/react";
export type TranscriptCorrect = {
params: {
params: Promise<{
transcriptId: string;
};
}>;
};
export default function TranscriptCorrect({
params: { transcriptId },
}: TranscriptCorrect) {
export default function TranscriptCorrect(props: TranscriptCorrect) {
const params = use(props.params);
const { transcriptId } = params;
const updateTranscriptMutation = useTranscriptUpdate();
const transcript = useTranscript(transcriptId);
const transcript = useTranscriptGet(transcriptId);
const stateCurrentTopic = useState<GetTranscriptTopic>();
const [currentTopic, _sct] = stateCurrentTopic;
const stateSelectedText = useState<SelectedText>();
@@ -36,7 +40,7 @@ export default function TranscriptCorrect({
const router = useRouter();
const markAsDone = async () => {
if (transcript.response && !transcript.response.reviewed) {
if (transcript.data && !transcript.data.reviewed) {
try {
await updateTranscriptMutation.mutateAsync({
params: {
@@ -114,7 +118,7 @@ export default function TranscriptCorrect({
}}
/>
</Grid>
{transcript.response && !transcript.response?.reviewed && (
{transcript.data && !transcript.data?.reviewed && (
<div className="flex flex-row justify-end">
<button
className="p-2 px-4 rounded bg-green-400"

View File

@@ -1,32 +1,38 @@
"use client";
import Modal from "../modal";
import useTranscript from "../useTranscript";
import useTopics from "../useTopics";
import useWaveform from "../useWaveform";
import useMp3 from "../useMp3";
import { TopicList } from "./_components/TopicList";
import { Topic } from "../webSocketTypes";
import React, { useEffect, useState } from "react";
import React, { useEffect, useState, use } from "react";
import FinalSummary from "./finalSummary";
import TranscriptTitle from "../transcriptTitle";
import Player from "../player";
import { useRouter } from "next/navigation";
import { Box, Flex, Grid, GridItem, Skeleton, Text } from "@chakra-ui/react";
import { useTranscriptGet } from "../../../lib/apiHooks";
import { TranscriptStatus } from "../../../lib/transcript";
type TranscriptDetails = {
params: {
params: Promise<{
transcriptId: string;
};
}>;
};
export default function TranscriptDetails(details: TranscriptDetails) {
const transcriptId = details.params.transcriptId;
const params = use(details.params);
const transcriptId = params.transcriptId;
const router = useRouter();
const statusToRedirect = ["idle", "recording", "processing"];
const statusToRedirect = [
"idle",
"recording",
"processing",
] satisfies TranscriptStatus[] as TranscriptStatus[];
const transcript = useTranscript(transcriptId);
const transcriptStatus = transcript.response?.status;
const waiting = statusToRedirect.includes(transcriptStatus || "");
const transcript = useTranscriptGet(transcriptId);
const waiting =
transcript.data && statusToRedirect.includes(transcript.data.status);
const mp3 = useMp3(transcriptId, waiting);
const topics = useTopics(transcriptId);
@@ -38,7 +44,7 @@ export default function TranscriptDetails(details: TranscriptDetails) {
useEffect(() => {
if (waiting) {
const newUrl = "/transcripts/" + details.params.transcriptId + "/record";
const newUrl = "/transcripts/" + params.transcriptId + "/record";
// Shallow redirection does not work on NextJS 13
// https://github.com/vercel/next.js/discussions/48110
// https://github.com/vercel/next.js/discussions/49540
@@ -56,7 +62,7 @@ export default function TranscriptDetails(details: TranscriptDetails) {
);
}
if (transcript?.loading || topics?.loading) {
if (transcript?.isLoading || topics?.loading) {
return <Modal title="Loading" text={"Loading transcript..."} />;
}
@@ -86,7 +92,7 @@ export default function TranscriptDetails(details: TranscriptDetails) {
useActiveTopic={useActiveTopic}
waveform={waveform.waveform}
media={mp3.media}
mediaDuration={transcript.response?.duration || null}
mediaDuration={transcript.data?.duration || null}
/>
) : !mp3.loading && (waveform.error || mp3.error) ? (
<Box p={4} bg="red.100" borderRadius="md">
@@ -116,10 +122,10 @@ export default function TranscriptDetails(details: TranscriptDetails) {
<Flex direction="column" gap={0}>
<Flex alignItems="center" gap={2}>
<TranscriptTitle
title={transcript.response?.title || "Unnamed Transcript"}
title={transcript.data?.title || "Unnamed Transcript"}
transcriptId={transcriptId}
onUpdate={(newTitle) => {
transcript.reload();
transcript.refetch().then(() => {});
}}
/>
</Flex>
@@ -136,23 +142,23 @@ export default function TranscriptDetails(details: TranscriptDetails) {
useActiveTopic={useActiveTopic}
autoscroll={false}
transcriptId={transcriptId}
status={transcript.response?.status}
status={transcript.data?.status || null}
currentTranscriptText=""
/>
{transcript.response && topics.topics ? (
{transcript.data && topics.topics ? (
<>
<FinalSummary
transcriptResponse={transcript.response}
transcriptResponse={transcript.data}
topicsResponse={topics.topics}
onUpdate={(newSummary) => {
transcript.reload();
onUpdate={() => {
transcript.refetch();
}}
/>
</>
) : (
<Flex justify={"center"} alignItems={"center"} h={"100%"}>
<div className="flex flex-col h-full justify-center content-center">
{transcript.response.status == "processing" ? (
{transcript?.data?.status == "processing" ? (
<Text>Loading Transcript</Text>
) : (
<Text>

View File

@@ -1,8 +1,7 @@
"use client";
import { useEffect, useState } from "react";
import { useEffect, useState, use } from "react";
import Recorder from "../../recorder";
import { TopicList } from "../_components/TopicList";
import useTranscript from "../../useTranscript";
import { useWebSockets } from "../../useWebSockets";
import { Topic } from "../../webSocketTypes";
import { lockWakeState, releaseWakeState } from "../../../../lib/wakeLock";
@@ -11,26 +10,29 @@ import useMp3 from "../../useMp3";
import WaveformLoading from "../../waveformLoading";
import { Box, Text, Grid, Heading, VStack, Flex } from "@chakra-ui/react";
import LiveTrancription from "../../liveTranscription";
import { useTranscriptGet } from "../../../../lib/apiHooks";
import { TranscriptStatus } from "../../../../lib/transcript";
type TranscriptDetails = {
params: {
params: Promise<{
transcriptId: string;
};
}>;
};
const TranscriptRecord = (details: TranscriptDetails) => {
const transcript = useTranscript(details.params.transcriptId);
const params = use(details.params);
const transcript = useTranscriptGet(params.transcriptId);
const [transcriptStarted, setTranscriptStarted] = useState(false);
const useActiveTopic = useState<Topic | null>(null);
const webSockets = useWebSockets(details.params.transcriptId);
const webSockets = useWebSockets(params.transcriptId);
const mp3 = useMp3(details.params.transcriptId, true);
const mp3 = useMp3(params.transcriptId, true);
const router = useRouter();
const [status, setStatus] = useState(
webSockets.status.value || transcript.response?.status || "idle",
const [status, setStatus] = useState<TranscriptStatus>(
webSockets.status?.value || transcript.data?.status || "idle",
);
useEffect(() => {
@@ -41,15 +43,15 @@ const TranscriptRecord = (details: TranscriptDetails) => {
useEffect(() => {
//TODO HANDLE ERROR STATUS BETTER
const newStatus =
webSockets.status.value || transcript.response?.status || "idle";
webSockets.status?.value || transcript.data?.status || "idle";
setStatus(newStatus);
if (newStatus && (newStatus == "ended" || newStatus == "error")) {
console.log(newStatus, "redirecting");
const newUrl = "/transcripts/" + details.params.transcriptId;
const newUrl = "/transcripts/" + params.transcriptId;
router.replace(newUrl);
}
}, [webSockets.status.value, transcript.response?.status]);
}, [webSockets.status?.value, transcript.data?.status]);
useEffect(() => {
if (webSockets.waveform && webSockets.waveform) mp3.getNow();
@@ -74,7 +76,7 @@ const TranscriptRecord = (details: TranscriptDetails) => {
<WaveformLoading />
) : (
// todo: only start recording animation when you get "recorded" status
<Recorder transcriptId={details.params.transcriptId} status={status} />
<Recorder transcriptId={params.transcriptId} status={status} />
)}
<VStack
align={"left"}
@@ -97,7 +99,7 @@ const TranscriptRecord = (details: TranscriptDetails) => {
topics={webSockets.topics}
useActiveTopic={useActiveTopic}
autoscroll={true}
transcriptId={details.params.transcriptId}
transcriptId={params.transcriptId}
status={status}
currentTranscriptText={webSockets.accumulatedText}
/>

View File

@@ -1,37 +1,38 @@
"use client";
import { useEffect, useState } from "react";
import useTranscript from "../../useTranscript";
import { useEffect, useState, use } from "react";
import { useWebSockets } from "../../useWebSockets";
import { lockWakeState, releaseWakeState } from "../../../../lib/wakeLock";
import { useRouter } from "next/navigation";
import useMp3 from "../../useMp3";
import { Center, VStack, Text, Heading, Button } from "@chakra-ui/react";
import FileUploadButton from "../../fileUploadButton";
import { useTranscriptGet } from "../../../../lib/apiHooks";
type TranscriptUpload = {
params: {
params: Promise<{
transcriptId: string;
};
}>;
};
const TranscriptUpload = (details: TranscriptUpload) => {
const transcript = useTranscript(details.params.transcriptId);
const params = use(details.params);
const transcript = useTranscriptGet(params.transcriptId);
const [transcriptStarted, setTranscriptStarted] = useState(false);
const webSockets = useWebSockets(details.params.transcriptId);
const webSockets = useWebSockets(params.transcriptId);
const mp3 = useMp3(details.params.transcriptId, true);
const mp3 = useMp3(params.transcriptId, true);
const router = useRouter();
const [status_, setStatus] = useState(
webSockets.status.value || transcript.response?.status || "idle",
webSockets.status?.value || transcript.data?.status || "idle",
);
// status is obviously done if we have transcript
const status =
!transcript.loading && transcript.response?.status === "ended"
? transcript.response?.status
!transcript.isLoading && transcript.data?.status === "ended"
? transcript.data?.status
: status_;
useEffect(() => {
@@ -43,17 +44,17 @@ const TranscriptUpload = (details: TranscriptUpload) => {
//TODO HANDLE ERROR STATUS BETTER
// TODO deprecate webSockets.status.value / depend on transcript.response?.status from query lib
const newStatus =
transcript.response?.status === "ended"
transcript.data?.status === "ended"
? "ended"
: webSockets.status.value || transcript.response?.status || "idle";
: webSockets.status?.value || transcript.data?.status || "idle";
setStatus(newStatus);
if (newStatus && (newStatus == "ended" || newStatus == "error")) {
console.log(newStatus, "redirecting");
const newUrl = "/transcripts/" + details.params.transcriptId;
const newUrl = "/transcripts/" + params.transcriptId;
router.replace(newUrl);
}
}, [webSockets.status.value, transcript.response?.status]);
}, [webSockets.status?.value, transcript.data?.status]);
useEffect(() => {
if (webSockets.waveform && webSockets.waveform) mp3.getNow();
@@ -84,7 +85,7 @@ const TranscriptUpload = (details: TranscriptUpload) => {
Please select the file, supported formats: .mp3, m4a, .wav,
.mp4, .mov or .webm
</Text>
<FileUploadButton transcriptId={details.params.transcriptId} />
<FileUploadButton transcriptId={params.transcriptId} />
</>
)}
{status && status == "uploaded" && (

View File

@@ -9,7 +9,6 @@ import { useRouter } from "next/navigation";
import useCreateTranscript from "../createTranscript";
import SelectSearch from "react-select-search";
import { supportedLanguages } from "../../../supportedLanguages";
import { featureEnabled } from "../../../domainContext";
import {
Flex,
Box,
@@ -21,10 +20,9 @@ import {
Spacer,
} from "@chakra-ui/react";
import { useAuth } from "../../../lib/AuthProvider";
import type { components } from "../../../reflector-api";
import { featureEnabled } from "../../../lib/features";
const TranscriptCreate = () => {
const isClient = typeof window !== "undefined";
const router = useRouter();
const auth = useAuth();
const isAuthenticated = auth.status === "authenticated";
@@ -176,7 +174,7 @@ const TranscriptCreate = () => {
placeholder="Choose your language"
/>
</Box>
{isClient && !loading ? (
{!loading ? (
permissionOk ? (
<Spacer />
) : permissionDenied ? (

View File

@@ -11,10 +11,11 @@ import useAudioDevice from "./useAudioDevice";
import { Box, Flex, IconButton, Menu, RadioGroup } from "@chakra-ui/react";
import { LuScreenShare, LuMic, LuPlay, LuCircleStop } from "react-icons/lu";
import { RECORD_A_MEETING_URL } from "../../api/urls";
import { TranscriptStatus } from "../../lib/transcript";
type RecorderProps = {
transcriptId: string;
status: string;
status: TranscriptStatus;
};
export default function Recorder(props: RecorderProps) {

View File

@@ -1,5 +1,4 @@
import { useEffect, useState } from "react";
import { featureEnabled } from "../../domainContext";
import { ShareMode, toShareMode } from "../../lib/shareMode";
import type { components } from "../../reflector-api";
@@ -24,6 +23,8 @@ import ShareCopy from "./shareCopy";
import ShareZulip from "./shareZulip";
import { useAuth } from "../../lib/AuthProvider";
import { featureEnabled } from "../../lib/features";
type ShareAndPrivacyProps = {
finalSummaryRef: any;
transcriptResponse: GetTranscript;

View File

@@ -1,8 +1,9 @@
import React, { useState, useRef, useEffect, use } from "react";
import { featureEnabled } from "../../domainContext";
import { Button, Flex, Input, Text } from "@chakra-ui/react";
import QRCode from "react-qr-code";
import { featureEnabled } from "../../lib/features";
type ShareLinkProps = {
transcriptId: string;
};

View File

@@ -1,5 +1,4 @@
import { useState, useEffect, useMemo } from "react";
import { featureEnabled } from "../../domainContext";
import type { components } from "../../reflector-api";
type GetTranscript = components["schemas"]["GetTranscript"];
@@ -15,8 +14,7 @@ import {
Checkbox,
Combobox,
Spinner,
useFilter,
useListCollection,
createListCollection,
} from "@chakra-ui/react";
import { TbBrandZulip } from "react-icons/tb";
import {
@@ -25,6 +23,8 @@ import {
useTranscriptPostToZulip,
} from "../../lib/apiHooks";
import { featureEnabled } from "../../lib/features";
type ShareZulipProps = {
transcriptResponse: GetTranscript;
topicsResponse: GetTranscriptTopic[];
@@ -47,8 +47,6 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
const { data: topics = [] } = useZulipTopics(selectedStreamId);
const postToZulipMutation = useTranscriptPostToZulip();
const { contains } = useFilter({ sensitivity: "base" });
const streamItems = useMemo(() => {
return streams.map((stream: Stream) => ({
label: stream.name,
@@ -63,17 +61,21 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
}));
}, [topics]);
const { collection: streamItemsCollection, filter: streamItemsFilter } =
useListCollection({
initialItems: streamItems,
filter: contains,
});
const streamCollection = useMemo(
() =>
createListCollection({
items: streamItems,
}),
[streamItems],
);
const { collection: topicItemsCollection, filter: topicItemsFilter } =
useListCollection({
initialItems: topicItems,
filter: contains,
});
const topicCollection = useMemo(
() =>
createListCollection({
items: topicItems,
}),
[topicItems],
);
// Update selected stream ID when stream changes
useEffect(() => {
@@ -155,15 +157,12 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
<Flex align="center" gap={2}>
<Text>#</Text>
<Combobox.Root
collection={streamItemsCollection}
collection={streamCollection}
value={stream ? [stream] : []}
onValueChange={(e) => {
setTopic(undefined);
setStream(e.value[0]);
}}
onInputValueChange={(e) =>
streamItemsFilter(e.inputValue)
}
openOnClick={true}
positioning={{
strategy: "fixed",
@@ -180,7 +179,7 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
<Combobox.Positioner>
<Combobox.Content>
<Combobox.Empty>No streams found</Combobox.Empty>
{streamItemsCollection.items.map((item) => (
{streamItems.map((item) => (
<Combobox.Item key={item.value} item={item}>
{item.label}
</Combobox.Item>
@@ -196,12 +195,9 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
<Flex align="center" gap={2}>
<Text visibility="hidden">#</Text>
<Combobox.Root
collection={topicItemsCollection}
collection={topicCollection}
value={topic ? [topic] : []}
onValueChange={(e) => setTopic(e.value[0])}
onInputValueChange={(e) =>
topicItemsFilter(e.inputValue)
}
openOnClick
selectionBehavior="replace"
skipAnimationOnMount={true}
@@ -221,7 +217,7 @@ export default function ShareZulip(props: ShareZulipProps & BoxProps) {
<Combobox.Positioner>
<Combobox.Content>
<Combobox.Empty>No topics found</Combobox.Empty>
{topicItemsCollection.items.map((item) => (
{topicItems.map((item) => (
<Combobox.Item key={item.value} item={item}>
{item.label}
<Combobox.ItemIndicator />
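Wrapping createListCollection in useMemo keyed on the fetched items avoids rebuilding the collection on every render while still rebuilding it once streams or topics load. A small extracted sketch of the same pattern (hook name and item shape are illustrative):

import { useMemo } from "react";
import { createListCollection } from "@chakra-ui/react";

type Item = { label: string; value: string };

// Rebuilds the collection only when the items array changes
const useItemCollection = (items: Item[]) =>
  useMemo(() => createListCollection({ items }), [items]);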

View File

@@ -1,7 +1,7 @@
import { useContext, useEffect, useState } from "react";
import { DomainContext } from "../../domainContext";
import { useEffect, useState } from "react";
import { useTranscriptGet } from "../../lib/apiHooks";
import { useAuth } from "../../lib/AuthProvider";
import { API_URL } from "../../lib/apiClient";
export type Mp3Response = {
media: HTMLMediaElement | null;
@@ -19,7 +19,6 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
null,
);
const [audioDeleted, setAudioDeleted] = useState<boolean | null>(null);
const { api_url } = useContext(DomainContext);
const auth = useAuth();
const accessTokenInfo =
auth.status === "authenticated" ? auth.accessToken : null;
@@ -78,7 +77,7 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
// Audio is not deleted, proceed to load it
audioElement = document.createElement("audio");
audioElement.src = `${api_url}/v1/transcripts/${transcriptId}/audio/mp3`;
audioElement.src = `${API_URL}/v1/transcripts/${transcriptId}/audio/mp3`;
audioElement.crossOrigin = "anonymous";
audioElement.preload = "auto";
@@ -110,7 +109,7 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
if (handleError) audioElement.removeEventListener("error", handleError);
}
};
}, [transcriptId, transcript, later, api_url]);
}, [transcriptId, transcript, later]);
const getNow = () => {
setLater(false);

View File

@@ -1,69 +0,0 @@
import type { components } from "../../reflector-api";
import { useTranscriptGet } from "../../lib/apiHooks";
type GetTranscript = components["schemas"]["GetTranscript"];
type ErrorTranscript = {
error: Error;
loading: false;
response: null;
reload: () => void;
};
type LoadingTranscript = {
response: null;
loading: true;
error: false;
reload: () => void;
};
type SuccessTranscript = {
response: GetTranscript;
loading: false;
error: null;
reload: () => void;
};
const useTranscript = (
id: string | null,
): ErrorTranscript | LoadingTranscript | SuccessTranscript => {
const { data, isLoading, error, refetch } = useTranscriptGet(id);
// Map to the expected return format
if (isLoading) {
return {
response: null,
loading: true,
error: false,
reload: refetch,
};
}
if (error) {
return {
error: error as Error,
loading: false,
response: null,
reload: refetch,
};
}
// Check if data is undefined or null
if (!data) {
return {
response: null,
loading: true,
error: false,
reload: refetch,
};
}
return {
response: data,
loading: false,
error: null,
reload: refetch,
};
};
export default useTranscript;

View File

@@ -1,13 +1,12 @@
import { useContext, useEffect, useState } from "react";
import { useEffect, useState } from "react";
import { Topic, FinalSummary, Status } from "./webSocketTypes";
import { useError } from "../../(errors)/errorContext";
import { DomainContext } from "../../domainContext";
import type { components } from "../../reflector-api";
type AudioWaveform = components["schemas"]["AudioWaveform"];
type GetTranscriptSegmentTopic =
components["schemas"]["GetTranscriptSegmentTopic"];
import { useQueryClient } from "@tanstack/react-query";
import { $api } from "../../lib/apiClient";
import { $api, WEBSOCKET_URL } from "../../lib/apiClient";
export type UseWebSockets = {
transcriptTextLive: string;
@@ -16,7 +15,7 @@ export type UseWebSockets = {
title: string;
topics: Topic[];
finalSummary: FinalSummary;
status: Status;
status: Status | null;
waveform: AudioWaveform | null;
duration: number | null;
};
@@ -34,10 +33,9 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
const [finalSummary, setFinalSummary] = useState<FinalSummary>({
summary: "",
});
const [status, setStatus] = useState<Status>({ value: "" });
const [status, setStatus] = useState<Status | null>(null);
const { setError } = useError();
const { websocket_url: websocketUrl } = useContext(DomainContext);
const queryClient = useQueryClient();
const [accumulatedText, setAccumulatedText] = useState<string>("");
@@ -328,7 +326,7 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
if (!transcriptId) return;
const url = `${websocketUrl}/v1/transcripts/${transcriptId}/events`;
const url = `${WEBSOCKET_URL}/v1/transcripts/${transcriptId}/events`;
let ws = new WebSocket(url);
ws.onopen = () => {
@@ -494,7 +492,7 @@ export const useWebSockets = (transcriptId: string | null): UseWebSockets => {
return () => {
ws.close();
};
}, [transcriptId, websocketUrl]);
}, [transcriptId]);
return {
transcriptTextLive,

View File

@@ -1,4 +1,5 @@
import type { components } from "../../reflector-api";
import type { TranscriptStatus } from "../../lib/transcript";
type GetTranscriptTopic = components["schemas"]["GetTranscriptTopic"];
@@ -13,7 +14,7 @@ export type FinalSummary = {
};
export type Status = {
value: string;
value: TranscriptStatus;
};
export type TranslatedTopic = {

View File

@@ -7,6 +7,7 @@ import {
useState,
useContext,
RefObject,
use,
} from "react";
import {
Box,
@@ -30,9 +31,9 @@ import { FaBars } from "react-icons/fa6";
import { useAuth } from "../lib/AuthProvider";
export type RoomDetails = {
params: {
params: Promise<{
roomName: string;
};
}>;
};
// stages: we focus on the consent, then whereby steals focus, then we focus on the consent again, then return focus to whoever stole it initially
@@ -255,9 +256,10 @@ const useWhereby = () => {
};
export default function Room(details: RoomDetails) {
const params = use(details.params);
const wherebyLoaded = useWhereby();
const wherebyRef = useRef<HTMLElement>(null);
const roomName = details.params.roomName;
const roomName = params.roomName;
const meeting = useRoomMeeting(roomName);
const router = useRouter();
const status = useAuth().status;

View File

@@ -1,6 +1,6 @@
import NextAuth from "next-auth";
import { authOptions } from "../../../lib/authBackend";
const handler = NextAuth(authOptions);
const handler = NextAuth(authOptions());
export { handler as GET, handler as POST };

View File

@@ -1,49 +0,0 @@
"use client";
import { createContext, useContext, useEffect, useState } from "react";
import { DomainConfig } from "./lib/edgeConfig";
type DomainContextType = Omit<DomainConfig, "auth_callback_url">;
export const DomainContext = createContext<DomainContextType>({
features: {
requireLogin: false,
privacy: true,
browse: false,
sendToZulip: false,
},
api_url: "",
websocket_url: "",
});
export const DomainContextProvider = ({
config,
children,
}: {
config: DomainConfig;
children: any;
}) => {
const [context, setContext] = useState<DomainContextType>();
useEffect(() => {
if (!config) return;
const { auth_callback_url, ...others } = config;
setContext(others);
}, [config]);
if (!context) return;
return (
<DomainContext.Provider value={context}>{children}</DomainContext.Provider>
);
};
// Get feature config client-side with
export const featureEnabled = (
featureName: "requireLogin" | "privacy" | "browse" | "sendToZulip",
) => {
const context = useContext(DomainContext);
return context.features[featureName] as boolean | undefined;
};
// Get config server-side (out of react) : see lib/edgeConfig.

View File

@@ -3,11 +3,10 @@ import { Metadata, Viewport } from "next";
import { Poppins } from "next/font/google";
import { ErrorProvider } from "./(errors)/errorContext";
import ErrorMessage from "./(errors)/errorMessage";
import { DomainContextProvider } from "./domainContext";
import { RecordingConsentProvider } from "./recordingConsentContext";
import { getConfig } from "./lib/edgeConfig";
import { ErrorBoundary } from "@sentry/nextjs";
import { Providers } from "./providers";
import { assertExistsAndNonEmptyString } from "./lib/utils";
const poppins = Poppins({
subsets: ["latin"],
@@ -22,8 +21,13 @@ export const viewport: Viewport = {
maximumScale: 1,
};
const NEXT_PUBLIC_SITE_URL = assertExistsAndNonEmptyString(
process.env.NEXT_PUBLIC_SITE_URL,
"NEXT_PUBLIC_SITE_URL required",
);
export const metadata: Metadata = {
metadataBase: new URL(process.env.NEXT_PUBLIC_SITE_URL!),
metadataBase: new URL(NEXT_PUBLIC_SITE_URL),
title: {
template: "%s Reflector",
default: "Reflector - AI-Powered Meeting Transcriptions by Monadical",
@@ -68,21 +72,17 @@ export default async function RootLayout({
}: {
children: React.ReactNode;
}) {
const config = await getConfig();
return (
<html lang="en" className={poppins.className} suppressHydrationWarning>
<body className={"h-[100svh] w-[100svw] overflow-x-hidden relative"}>
<DomainContextProvider config={config}>
<RecordingConsentProvider>
<ErrorBoundary fallback={<p>"something went really wrong"</p>}>
<ErrorProvider>
<ErrorMessage />
<Providers>{children}</Providers>
</ErrorProvider>
</ErrorBoundary>
</RecordingConsentProvider>
</DomainContextProvider>
<RecordingConsentProvider>
<ErrorBoundary fallback={<p>"something went really wrong"</p>}>
<ErrorProvider>
<ErrorMessage />
<Providers>{children}</Providers>
</ErrorProvider>
</ErrorBoundary>
</RecordingConsentProvider>
</body>
</html>
);

View File

@@ -1,17 +1,19 @@
"use client";
import { createContext, useContext, useEffect } from "react";
import { createContext, useContext } from "react";
import { useSession as useNextAuthSession } from "next-auth/react";
import { signOut, signIn } from "next-auth/react";
import { configureApiAuth, configureApiAuthRefresh } from "./apiClient";
import { configureApiAuth } from "./apiClient";
import { assertCustomSession, CustomSession } from "./types";
import { Session } from "next-auth";
import { SessionAutoRefresh } from "./SessionAutoRefresh";
import { REFRESH_ACCESS_TOKEN_ERROR } from "./auth";
import { assertExists } from "./utils";
import { featureEnabled } from "./features";
type AuthContextType = (
| { status: "loading" }
| { status: "refreshing" }
| { status: "refreshing"; user: CustomSession["user"] }
| { status: "unauthenticated"; error?: string }
| {
status: "authenticated";
@@ -26,74 +28,94 @@ type AuthContextType = (
};
const AuthContext = createContext<AuthContextType | undefined>(undefined);
const isAuthEnabled = featureEnabled("requireLogin");
const noopAuthContext: AuthContextType = {
status: "unauthenticated",
update: async () => {
return null;
},
signIn: async () => {
throw new Error("signIn not supposed to be called");
},
signOut: async () => {
throw new Error("signOut not supposed to be called");
},
};
export function AuthProvider({ children }: { children: React.ReactNode }) {
const { data: session, status, update } = useNextAuthSession();
const customSession = session ? assertCustomSession(session) : null;
const contextValue: AuthContextType = {
...(() => {
switch (status) {
case "loading": {
const sessionIsHere = !!customSession;
switch (sessionIsHere) {
case false: {
return { status };
const contextValue: AuthContextType = isAuthEnabled
? {
...(() => {
switch (status) {
case "loading": {
const sessionIsHere = !!session;
// actually exists sometimes; nextAuth types are something else
switch (sessionIsHere as boolean) {
case false: {
return { status };
}
case true: {
return {
status: "refreshing" as const,
user: assertCustomSession(
assertExists(session as unknown as Session),
).user,
};
}
default: {
throw new Error("unreachable");
}
}
}
case true: {
return { status: "refreshing" as const };
case "authenticated": {
const customSession = assertCustomSession(session);
if (customSession?.error === REFRESH_ACCESS_TOKEN_ERROR) {
// token had expired but next auth still returns "authenticated" so show user unauthenticated state
return {
status: "unauthenticated" as const,
};
} else if (customSession?.accessToken) {
return {
status,
accessToken: customSession.accessToken,
accessTokenExpires: customSession.accessTokenExpires,
user: customSession.user,
};
} else {
console.warn(
"illegal state: authenticated but have no session/or access token. ignoring",
);
return { status: "unauthenticated" as const };
}
}
case "unauthenticated": {
return { status: "unauthenticated" as const };
}
default: {
const _: never = sessionIsHere;
const _: never = status;
throw new Error("unreachable");
}
}
}
case "authenticated": {
if (customSession?.error === REFRESH_ACCESS_TOKEN_ERROR) {
// token had expired but next auth still returns "authenticated" so show user unauthenticated state
return {
status: "unauthenticated" as const,
};
} else if (customSession?.accessToken) {
return {
status,
accessToken: customSession.accessToken,
accessTokenExpires: customSession.accessTokenExpires,
user: customSession.user,
};
} else {
console.warn(
"illegal state: authenticated but have no session/or access token. ignoring",
);
return { status: "unauthenticated" as const };
}
}
case "unauthenticated": {
return { status: "unauthenticated" as const };
}
default: {
const _: never = status;
throw new Error("unreachable");
}
})(),
update,
signIn,
signOut,
}
})(),
update,
signIn,
signOut,
};
: noopAuthContext;
// not useEffect, we need it ASAP
// apparently, still no guarantee this code runs before mutations are fired
configureApiAuth(
contextValue.status === "authenticated" ? contextValue.accessToken : null,
contextValue.status === "authenticated"
? contextValue.accessToken
: contextValue.status === "loading"
? undefined
: null,
);
useEffect(() => {
configureApiAuthRefresh(
contextValue.status === "authenticated" ? contextValue.update : null,
);
}, [contextValue.status === "authenticated" && contextValue.update]);
return (
<AuthContext.Provider value={contextValue}>
<SessionAutoRefresh>{children}</SessionAutoRefresh>

View File

@@ -9,12 +9,11 @@
import { useEffect } from "react";
import { useAuth } from "./AuthProvider";
import { REFRESH_ACCESS_TOKEN_BEFORE } from "./auth";
const REFRESH_BEFORE = REFRESH_ACCESS_TOKEN_BEFORE;
import { shouldRefreshToken } from "./auth";
export function SessionAutoRefresh({ children }) {
const auth = useAuth();
const accessTokenExpires =
auth.status === "authenticated" ? auth.accessTokenExpires : null;
@@ -23,18 +22,15 @@ export function SessionAutoRefresh({ children }) {
// and not too slow (debuggable)
const INTERVAL_REFRESH_MS = 5000;
const interval = setInterval(() => {
if (accessTokenExpires !== null) {
const timeLeft = accessTokenExpires - Date.now();
console.log("time left", timeLeft);
// if (timeLeft < REFRESH_BEFORE) {
// auth
// .update()
// .then(() => {})
// .catch((e) => {
// // note: 401 won't be considered error here
// console.error("error refreshing auth token", e);
// });
// }
if (accessTokenExpires === null) return;
if (shouldRefreshToken(accessTokenExpires)) {
auth
.update()
.then(() => {})
.catch((e) => {
// note: 401 won't be considered error here
console.error("error refreshing auth token", e);
});
}
}, INTERVAL_REFRESH_MS);

View File

@@ -2,46 +2,51 @@
import createClient from "openapi-fetch";
import type { paths } from "../reflector-api";
import {
queryOptions,
useMutation,
useQuery,
useSuspenseQuery,
} from "@tanstack/react-query";
import createFetchClient from "openapi-react-query";
import { assertExistsAndNonEmptyString } from "./utils";
import { assertExistsAndNonEmptyString, parseNonEmptyString } from "./utils";
import { isBuildPhase } from "./next";
import { Session } from "next-auth";
import { assertCustomSession } from "./types";
import { HttpMethod, PathsWithMethod } from "openapi-typescript-helpers";
import { getSession } from "next-auth/react";
import { assertExtendedToken } from "./types";
const API_URL = !isBuildPhase
? assertExistsAndNonEmptyString(process.env.NEXT_PUBLIC_API_URL)
export const API_URL = !isBuildPhase
? assertExistsAndNonEmptyString(
process.env.NEXT_PUBLIC_API_URL,
"NEXT_PUBLIC_API_URL required",
)
: "http://localhost";
// Create the base openapi-fetch client with a default URL
// The actual URL will be set via middleware in AuthProvider
// TODO decide strict validation or not
export const WEBSOCKET_URL =
process.env.NEXT_PUBLIC_WEBSOCKET_URL || "ws://127.0.0.1:1250";
export const client = createClient<paths>({
baseUrl: API_URL,
});
export const $api = createFetchClient<paths>(client);
// will assert presence/absence of login initially
const initialSessionPromise = getSession();
let currentAuthToken: string | null | undefined = null;
let refreshAuthCallback: (() => Promise<Session | null>) | null = null;
const injectAuth = (request: Request, accessToken: string | null) => {
if (accessToken) {
request.headers.set("Authorization", `Bearer ${currentAuthToken}`);
} else {
request.headers.delete("Authorization");
const waitForAuthTokenDefinitivePresenceOrAbsence = async () => {
const initialSession = await initialSessionPromise;
if (currentAuthToken === undefined) {
currentAuthToken =
initialSession === null
? null
: assertExtendedToken(initialSession).accessToken;
}
return request;
// otherwise already overwritten by external forces
return currentAuthToken;
};
client.use({
onRequest({ request }) {
request = injectAuth(request, currentAuthToken || null);
async onRequest({ request }) {
const token = await waitForAuthTokenDefinitivePresenceOrAbsence();
if (token !== null) {
request.headers.set(
"Authorization",
`Bearer ${parseNonEmptyString(token)}`,
);
}
// XXX Only set Content-Type if not already set (FormData will set its own boundary)
// This is a work around for uploading file, we're passing a formdata
// but the content type was still application/json
@@ -55,46 +60,13 @@ client.use({
},
});
client.use({
async onResponse({ response, request, params, schemaPath }) {
if (response.status === 401) {
console.log(
"response.status is 401!",
refreshAuthCallback,
request,
schemaPath,
);
}
if (response.status === 401 && refreshAuthCallback) {
try {
const session = await refreshAuthCallback();
if (!session) {
console.warn("Token refresh failed, no session returned");
return response;
}
const customSession = assertCustomSession(session);
currentAuthToken = customSession.accessToken;
const r = await client.request(
request.method as HttpMethod,
schemaPath as PathsWithMethod<paths, HttpMethod>,
...params,
);
return r.response;
} catch (error) {
console.error("Token refresh failed during 401 retry:", error);
}
}
return response;
},
});
export const $api = createFetchClient<paths>(client);
let currentAuthToken: string | null | undefined = undefined;
// the function contract: lightweight, idempotent
export const configureApiAuth = (token: string | null | undefined) => {
// watch only for the initial loading; "reloading" state assumes token presence/absence
if (token === undefined && currentAuthToken !== undefined) return;
currentAuthToken = token;
};
export const configureApiAuthRefresh = (
callback: (() => Promise<Session | null>) | null,
) => {
refreshAuthCallback = callback;
};
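The rewritten apiClient treats the auth token as tri-state: undefined while the session is still loading, null when definitively signed out, and a string once authenticated. An illustrative call sequence against the new configureApiAuth (token value is a placeholder, import path illustrative):

import { configureApiAuth } from "./apiClient";

configureApiAuth(undefined); // still loading: requests wait on the initial getSession()
configureApiAuth("example-access-token"); // authenticated: requests send this bearer token
configureApiAuth(undefined); // ignored: a later "loading" flicker can't undo a definitive value
configureApiAuth(null); // signed out: the Authorization header is no longer attached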

View File

@@ -96,8 +96,6 @@ export function useTranscriptProcess() {
}
export function useTranscriptGet(transcriptId: string | null) {
const { isAuthenticated } = useAuthReady();
return $api.useQuery(
"get",
"/v1/transcripts/{transcript_id}",
@@ -109,7 +107,7 @@ export function useTranscriptGet(transcriptId: string | null) {
},
},
{
enabled: !!transcriptId && isAuthenticated,
enabled: !!transcriptId,
},
);
}
@@ -292,18 +290,16 @@ export function useTranscriptUploadAudio() {
}
export function useTranscriptWaveform(transcriptId: string | null) {
const { isAuthenticated } = useAuthReady();
return $api.useQuery(
"get",
"/v1/transcripts/{transcript_id}/audio/waveform",
{
params: {
path: { transcript_id: transcriptId || "" },
path: { transcript_id: transcriptId! },
},
},
{
enabled: !!transcriptId && isAuthenticated,
enabled: !!transcriptId,
},
);
}
@@ -316,7 +312,7 @@ export function useTranscriptMP3(transcriptId: string | null) {
"/v1/transcripts/{transcript_id}/audio/mp3",
{
params: {
path: { transcript_id: transcriptId || "" },
path: { transcript_id: transcriptId! },
},
},
{
@@ -326,8 +322,6 @@ export function useTranscriptMP3(transcriptId: string | null) {
}
export function useTranscriptTopics(transcriptId: string | null) {
const { isAuthenticated } = useAuthReady();
return $api.useQuery(
"get",
"/v1/transcripts/{transcript_id}/topics",
@@ -337,7 +331,7 @@ export function useTranscriptTopics(transcriptId: string | null) {
},
},
{
enabled: !!transcriptId && isAuthenticated,
enabled: !!transcriptId,
},
);
}
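With token injection handled inside the fetch client, these hooks no longer gate on isAuthenticated; each query is enabled as soon as an id exists. A tiny illustrative consumer (component is not from the repo):

"use client";
import { useTranscriptGet } from "../lib/apiHooks"; // import path illustrative

export function TranscriptTitleLabel({ id }: { id: string | null }) {
  const { data, isLoading } = useTranscriptGet(id); // stays disabled while id is null
  if (isLoading || !data) return null;
  return <span>{data.title || "Unnamed Transcript"}</span>;
}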

www/app/lib/array.ts
View File

@@ -0,0 +1,12 @@
export type NonEmptyArray<T> = [T, ...T[]];
export const isNonEmptyArray = <T>(arr: T[]): arr is NonEmptyArray<T> =>
arr.length > 0;
export const assertNonEmptyArray = <T>(
arr: T[],
err?: string,
): NonEmptyArray<T> => {
if (isNonEmptyArray(arr)) {
return arr;
}
throw new Error(err ?? "Expected non-empty array");
};
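A quick illustration of how the new helper narrows a plain array to NonEmptyArray&lt;T&gt; so the first element can be read without an undefined check (values and import path are illustrative):

import { assertNonEmptyArray, NonEmptyArray } from "./array";

const topics: NonEmptyArray<string> = assertNonEmptyArray(
  ["planning", "retro"],
  "expected at least one topic",
);
const first: string = topics[0]; // tuple type guarantees index 0 exists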

View File

@@ -1,3 +1,20 @@
import { assertExistsAndNonEmptyString } from "./utils";
export const REFRESH_ACCESS_TOKEN_ERROR = "RefreshAccessTokenError" as const;
// 4 min is 1 min less than the default Authentik access-token lifetime. Here we assume Authentik won't be configured with access tokens shorter than 4 min.
export const REFRESH_ACCESS_TOKEN_BEFORE = 4 * 60 * 1000;
export const shouldRefreshToken = (accessTokenExpires: number): boolean => {
const timeLeft = accessTokenExpires - Date.now();
return timeLeft < REFRESH_ACCESS_TOKEN_BEFORE;
};
export const LOGIN_REQUIRED_PAGES = [
"/transcripts/[!new]",
"/browse(.*)",
"/rooms(.*)",
];
export const PROTECTED_PAGES = new RegExp(
LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
);
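shouldRefreshToken centralizes the refresh-window check used by both SessionAutoRefresh and the JWT callback below. An illustrative check of its behaviour (timestamps are examples only, import path illustrative):

import { shouldRefreshToken } from "./auth";

const inTwoMinutes = Date.now() + 2 * 60 * 1000;
const inTenMinutes = Date.now() + 10 * 60 * 1000;

shouldRefreshToken(inTwoMinutes); // true  — inside the 4-minute REFRESH_ACCESS_TOKEN_BEFORE window
shouldRefreshToken(inTenMinutes); // false — token still has plenty of lifetime left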

View File

@@ -2,123 +2,163 @@ import { AuthOptions } from "next-auth";
import AuthentikProvider from "next-auth/providers/authentik";
import type { JWT } from "next-auth/jwt";
import { JWTWithAccessToken, CustomSession } from "./types";
import { assertExists, assertExistsAndNonEmptyString } from "./utils";
import {
assertExists,
assertExistsAndNonEmptyString,
assertNotExists,
} from "./utils";
import {
REFRESH_ACCESS_TOKEN_BEFORE,
REFRESH_ACCESS_TOKEN_ERROR,
shouldRefreshToken,
} from "./auth";
import {
getTokenCache,
setTokenCache,
deleteTokenCache,
} from "./redisTokenCache";
import { tokenCacheRedis } from "./redisClient";
import { tokenCacheRedis, redlock } from "./redisClient";
import { isBuildPhase } from "./next";
import { sequenceThrows } from "./errorUtils";
import { featureEnabled } from "./features";
// REFRESH_ACCESS_TOKEN_BEFORE because refresh is based on access token expiration (imagine we cache it 30 days)
const TOKEN_CACHE_TTL = REFRESH_ACCESS_TOKEN_BEFORE;
const getAuthentikClientId = () =>
assertExistsAndNonEmptyString(
process.env.AUTHENTIK_CLIENT_ID,
"AUTHENTIK_CLIENT_ID required",
);
const getAuthentikClientSecret = () =>
assertExistsAndNonEmptyString(
process.env.AUTHENTIK_CLIENT_SECRET,
"AUTHENTIK_CLIENT_SECRET required",
);
const getAuthentikRefreshTokenUrl = () =>
assertExistsAndNonEmptyString(
process.env.AUTHENTIK_REFRESH_TOKEN_URL,
"AUTHENTIK_REFRESH_TOKEN_URL required",
);
const refreshLocks = new Map<string, Promise<JWTWithAccessToken>>();
const CLIENT_ID = !isBuildPhase
? assertExistsAndNonEmptyString(process.env.AUTHENTIK_CLIENT_ID)
: "noop";
const CLIENT_SECRET = !isBuildPhase
? assertExistsAndNonEmptyString(process.env.AUTHENTIK_CLIENT_SECRET)
: "noop";
export const authOptions: AuthOptions = {
providers: [
AuthentikProvider({
clientId: CLIENT_ID,
clientSecret: CLIENT_SECRET,
issuer: process.env.AUTHENTIK_ISSUER,
authorization: {
params: {
scope: "openid email profile offline_access",
export const authOptions = (): AuthOptions =>
featureEnabled("requireLogin")
? {
providers: [
AuthentikProvider({
...(() => {
const [clientId, clientSecret] = sequenceThrows(
getAuthentikClientId,
getAuthentikClientSecret,
);
return {
clientId,
clientSecret,
};
})(),
issuer: process.env.AUTHENTIK_ISSUER,
authorization: {
params: {
scope: "openid email profile offline_access",
},
},
}),
],
session: {
strategy: "jwt",
},
},
}),
],
session: {
strategy: "jwt",
},
callbacks: {
async jwt({ token, account, user }) {
console.log("token.sub jwt callback", token.sub);
const KEY = `token:${token.sub}`;
callbacks: {
async jwt({ token, account, user }) {
if (account && !account.access_token) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
}
if (account && user) {
// called only on first login
// XXX account.expires_in used in example is not defined for authentik backend, but expires_at is
const expiresAtS = assertExists(account.expires_at);
const expiresAtMs = expiresAtS * 1000;
if (!account.access_token) {
await deleteTokenCache(tokenCacheRedis, KEY);
} else {
const jwtToken: JWTWithAccessToken = {
...token,
accessToken: account.access_token,
accessTokenExpires: expiresAtMs,
refreshToken: account.refresh_token,
};
await setTokenCache(tokenCacheRedis, KEY, {
token: jwtToken,
timestamp: Date.now(),
});
return jwtToken;
}
}
if (account && user) {
// called only on first login
// XXX account.expires_in used in example is not defined for authentik backend, but expires_at is
if (account.access_token) {
const expiresAtS = assertExists(account.expires_at);
const expiresAtMs = expiresAtS * 1000;
const jwtToken: JWTWithAccessToken = {
...token,
accessToken: account.access_token,
accessTokenExpires: expiresAtMs,
refreshToken: account.refresh_token,
};
if (jwtToken.error) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
} else {
assertNotExists(
jwtToken.error,
`panic! trying to cache token with error in jwt: ${jwtToken.error}`,
);
await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
token: jwtToken,
timestamp: Date.now(),
});
return jwtToken;
}
}
}
const currentToken = await getTokenCache(tokenCacheRedis, KEY);
console.log(
"currentToken.token.accessTokenExpires",
currentToken?.token?.accessTokenExpires,
currentToken?.token?.accessTokenExpires
? Date.now() < currentToken?.token?.accessTokenExpires
: "?",
);
if (currentToken && Date.now() < currentToken.token.accessTokenExpires) {
return currentToken.token;
}
const currentToken = await getTokenCache(
tokenCacheRedis,
`token:${token.sub}`,
);
console.debug(
"currentToken from cache",
JSON.stringify(currentToken, null, 2),
"will be returned?",
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires),
);
if (
currentToken &&
!shouldRefreshToken(currentToken.token.accessTokenExpires)
) {
return currentToken.token;
}
// access token has expired, try to update it
return await lockedRefreshAccessToken(token);
},
async session({ session, token }) {
const extendedToken = token as JWTWithAccessToken;
return {
...session,
accessToken: extendedToken.accessToken,
accessTokenExpires: extendedToken.accessTokenExpires,
error: extendedToken.error,
user: {
id: assertExists(extendedToken.sub),
name: extendedToken.name,
email: extendedToken.email,
// access token has expired, try to update it
return await lockedRefreshAccessToken(token);
},
async session({ session, token }) {
const extendedToken = token as JWTWithAccessToken;
return {
...session,
accessToken: extendedToken.accessToken,
accessTokenExpires: extendedToken.accessTokenExpires,
error: extendedToken.error,
user: {
id: assertExists(extendedToken.sub),
name: extendedToken.name,
email: extendedToken.email,
},
} satisfies CustomSession;
},
},
} satisfies CustomSession;
},
},
};
}
: {
providers: [],
};
async function lockedRefreshAccessToken(
token: JWT,
): Promise<JWTWithAccessToken> {
const lockKey = `${token.sub}-refresh`;
const lockKey = `${token.sub}-lock`;
const existingRefresh = refreshLocks.get(lockKey);
if (existingRefresh) {
return await existingRefresh;
}
const refreshPromise = (async () => {
try {
return redlock
.using([lockKey], 10000, async () => {
const cached = await getTokenCache(tokenCacheRedis, `token:${token.sub}`);
if (cached)
console.debug(
"received cached token. to delete?",
Date.now() - cached.timestamp > TOKEN_CACHE_TTL,
);
else console.debug("no cached token received");
if (cached) {
if (Date.now() - cached.timestamp > TOKEN_CACHE_TTL) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
} else if (Date.now() < cached.token.accessTokenExpires) {
} else if (!shouldRefreshToken(cached.token.accessTokenExpires)) {
console.debug("returning cached token", cached.token);
return cached.token;
}
}
@@ -126,32 +166,51 @@ async function lockedRefreshAccessToken(
const currentToken = cached?.token || (token as JWTWithAccessToken);
const newToken = await refreshAccessToken(currentToken);
console.debug("current token during refresh", currentToken);
console.debug("new token during refresh", newToken);
if (newToken.error) {
await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
return newToken;
}
assertNotExists(
newToken.error,
`panic! trying to cache token with error during refresh: ${newToken.error}`,
);
await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
token: newToken,
timestamp: Date.now(),
});
return newToken;
} finally {
setTimeout(() => refreshLocks.delete(lockKey), 100);
}
})();
refreshLocks.set(lockKey, refreshPromise);
return refreshPromise;
})
.catch((e) => {
console.error("error refreshing token", e);
deleteTokenCache(tokenCacheRedis, `token:${token.sub}`).catch((e) => {
console.error("error deleting errored token", e);
});
return {
...token,
error: REFRESH_ACCESS_TOKEN_ERROR,
} as JWTWithAccessToken;
});
}
async function refreshAccessToken(token: JWT): Promise<JWTWithAccessToken> {
const [url, clientId, clientSecret] = sequenceThrows(
getAuthentikRefreshTokenUrl,
getAuthentikClientId,
getAuthentikClientSecret,
);
try {
const url = `${process.env.AUTHENTIK_REFRESH_TOKEN_URL}`;
const options = {
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: new URLSearchParams({
client_id: process.env.AUTHENTIK_CLIENT_ID as string,
client_secret: process.env.AUTHENTIK_CLIENT_SECRET as string,
client_id: clientId,
client_secret: clientSecret,
grant_type: "refresh_token",
refresh_token: token.refreshToken as string,
}).toString(),


@@ -1,54 +0,0 @@
import { get } from "@vercel/edge-config";
import { isBuildPhase } from "./next";
type EdgeConfig = {
[domainWithDash: string]: {
features: {
[featureName in
| "requireLogin"
| "privacy"
| "browse"
| "sendToZulip"]: boolean;
};
auth_callback_url: string;
websocket_url: string;
api_url: string;
};
};
export type DomainConfig = EdgeConfig["domainWithDash"];
// Edge config main keys can only be alphanumeric and _ or -
export function edgeKeyToDomain(key: string) {
return key.replaceAll("_", ".");
}
export function edgeDomainToKey(domain: string) {
return domain.replaceAll(".", "_");
}
// get edge config server-side (prefer DomainContext when available), domain is the hostname
export async function getConfig() {
if (process.env.NEXT_PUBLIC_ENV === "development") {
try {
return require("../../config").localConfig;
} catch (e) {
// next build() WILL try to execute the require above even if conditionally protected
// but thank god it at least runs catch{} block properly
if (!isBuildPhase) throw new Error(e);
return require("../../config-template").localConfig;
}
}
const domain = new URL(process.env.NEXT_PUBLIC_SITE_URL!).hostname;
let config = await get(edgeDomainToKey(domain));
if (typeof config !== "object") {
console.warn("No config for this domain, falling back to default");
config = await get(edgeDomainToKey("default"));
}
if (typeof config !== "object") throw Error("Error fetching config");
return config as DomainConfig;
}


@@ -1,4 +1,6 @@
function shouldShowError(error: Error | null | undefined) {
import { isNonEmptyArray, NonEmptyArray } from "./array";
export function shouldShowError(error: Error | null | undefined) {
if (
error?.name == "ResponseError" &&
(error["response"].status == 404 || error["response"].status == 403)
@@ -8,4 +10,40 @@ function shouldShowError(error: Error | null | undefined) {
return true;
}
export { shouldShowError };
const defaultMergeErrors = (ex: NonEmptyArray<unknown>): unknown => {
try {
return new Error(
ex
.map((e) =>
e ? (e.toString ? e.toString() : JSON.stringify(e)) : `${e}`,
)
.join("\n"),
);
} catch (e) {
console.error("Error merging errors:", e);
return ex[0];
}
};
type ReturnTypes<T extends readonly (() => any)[]> = {
[K in keyof T]: T[K] extends () => infer R ? R : never;
};
// sequence semantics for functions that throw:
// calls each passed function and collects any thrown values
export function sequenceThrows<Fns extends readonly (() => any)[]>(
...fs: Fns
): ReturnTypes<Fns> {
const results: unknown[] = [];
const errors: unknown[] = [];
for (const f of fs) {
try {
results.push(f());
} catch (e) {
errors.push(e);
}
}
if (errors.length) throw defaultMergeErrors(errors as NonEmptyArray<unknown>);
return results as ReturnTypes<Fns>;
}
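A minimal sketch of how sequenceThrows is meant to be consumed: instead of failing on the first missing env var, every getter runs and the thrown errors are merged into one. The getters below are hypothetical stand-ins for the Authentik credential getters in the auth diff above:
import { sequenceThrows } from "./errorUtils";
const getApiUrl = () => {
  const v = process.env.NEXT_PUBLIC_API_URL;
  if (!v) throw new Error("NEXT_PUBLIC_API_URL required");
  return v;
};
const getWsUrl = () => {
  const v = process.env.NEXT_PUBLIC_WEBSOCKET_URL;
  if (!v) throw new Error("NEXT_PUBLIC_WEBSOCKET_URL required");
  return v;
};
// Either both values come back typed, or a single merged Error lists every failure.
const [apiUrl, wsUrl] = sequenceThrows(getApiUrl, getWsUrl);
console.log(apiUrl, wsUrl);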

www/app/lib/features.ts (new file, 55 lines)

@@ -0,0 +1,55 @@
export const FEATURES = [
"requireLogin",
"privacy",
"browse",
"sendToZulip",
"rooms",
] as const;
export type FeatureName = (typeof FEATURES)[number];
export type Features = Readonly<Record<FeatureName, boolean>>;
export const DEFAULT_FEATURES: Features = {
requireLogin: true,
privacy: true,
browse: true,
sendToZulip: true,
rooms: true,
} as const;
function parseBooleanEnv(
value: string | undefined,
defaultValue: boolean = false,
): boolean {
if (!value) return defaultValue;
return value.toLowerCase() === "true";
}
// WARNING: keep the process.env.NEXT_PUBLIC_* references literal; Next.js cannot inline them if the keys are built dynamically
const features: Features = {
requireLogin: parseBooleanEnv(
process.env.NEXT_PUBLIC_FEATURE_REQUIRE_LOGIN,
DEFAULT_FEATURES.requireLogin,
),
privacy: parseBooleanEnv(
process.env.NEXT_PUBLIC_FEATURE_PRIVACY,
DEFAULT_FEATURES.privacy,
),
browse: parseBooleanEnv(
process.env.NEXT_PUBLIC_FEATURE_BROWSE,
DEFAULT_FEATURES.browse,
),
sendToZulip: parseBooleanEnv(
process.env.NEXT_PUBLIC_FEATURE_SEND_TO_ZULIP,
DEFAULT_FEATURES.sendToZulip,
),
rooms: parseBooleanEnv(
process.env.NEXT_PUBLIC_FEATURE_ROOMS,
DEFAULT_FEATURES.rooms,
),
};
export const featureEnabled = (featureName: FeatureName): boolean => {
return features[featureName];
};
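A hypothetical consumer of the flags (the component logic is illustrative); the flags resolve once at module load from the NEXT_PUBLIC_FEATURE_* variables and fall back to DEFAULT_FEATURES when unset:
import { FEATURES, featureEnabled } from "./features";
if (featureEnabled("sendToZulip")) {
  // render the "Send to Zulip" action
}
// Dump the resolved flag table, e.g. for a debug page.
console.table(
  FEATURES.map((name) => ({ feature: name, enabled: featureEnabled(name) })),
);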


@@ -1,30 +1,41 @@
import Redis from "ioredis";
import { isBuildPhase } from "./next";
import Redlock, { ResourceLockedError } from "redlock";
export type RedisClient = Pick<Redis, "get" | "setex" | "del">;
export type RedlockClient = {
using: <T>(
keys: string | string[],
ttl: number,
cb: () => Promise<T>,
) => Promise<T>;
};
const KV_USE_TLS = process.env.KV_USE_TLS
? process.env.KV_USE_TLS === "true"
: undefined;
let redisClient: Redis | null = null;
const getRedisClient = (): RedisClient => {
if (redisClient) return redisClient;
const redisUrl = process.env.KV_URL;
if (!redisUrl) {
throw new Error("KV_URL environment variable is required");
}
const redis = new Redis(redisUrl, {
redisClient = new Redis(redisUrl, {
maxRetriesPerRequest: 3,
lazyConnect: true,
...(KV_USE_TLS === true
? {
tls: {},
}
: {}),
});
redis.on("error", (error) => {
redisClient.on("error", (error) => {
console.error("Redis error:", error);
});
// not strictly necessary, but surfaces Redis configuration errors by failing fast at startup
// this runs only once; afterwards the connection is allowed to drop and the library is expected to restore it eventually
redis.connect().catch((e) => {
console.error("Failed to connect to Redis:", e);
process.exit(1);
});
return redis;
return redisClient;
};
// next.js buildtime usage - we want to isolate next.js "build" time concepts here
@@ -43,4 +54,25 @@ const noopClient: RedisClient = (() => {
del: noopDel,
};
})();
const noopRedlock: RedlockClient = {
using: <T>(resource: string | string[], ttl: number, cb: () => Promise<T>) =>
cb(),
};
export const redlock: RedlockClient = isBuildPhase
? noopRedlock
: (() => {
const r = new Redlock([getRedisClient()], {});
r.on("error", (error) => {
if (error instanceof ResourceLockedError) {
return;
}
// Log all other errors.
console.error(error);
});
return r;
})();
export const tokenCacheRedis = isBuildPhase ? noopClient : getRedisClient();
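A hedged sketch of how the redlock wrapper is consumed by the refresh path in the auth diff above (the key and TTL values are illustrative); at build time both exports are noops, so the callback simply runs unlocked:
import { redlock, tokenCacheRedis } from "./redisClient";
async function withUserLock(sub: string) {
  return redlock.using([`${sub}-lock`], 10_000, async () => {
    // Only one instance at a time runs this section per user.
    return tokenCacheRedis.get(`token:${sub}`);
  });
}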


@@ -9,7 +9,6 @@ const TokenCacheEntrySchema = z.object({
accessToken: z.string(),
accessTokenExpires: z.number(),
refreshToken: z.string().optional(),
error: z.string().optional(),
}),
timestamp: z.number(),
});
@@ -46,14 +45,15 @@ export async function getTokenCache(
}
}
const TTL_SECONDS = 30 * 24 * 60 * 60;
export async function setTokenCache(
redis: KV,
key: string,
value: TokenCacheEntry,
): Promise<void> {
const encodedValue = TokenCacheEntryCodec.encode(value);
const ttlSeconds = Math.floor(REFRESH_ACCESS_TOKEN_BEFORE / 1000);
await redis.setex(key, ttlSeconds, encodedValue);
await redis.setex(key, TTL_SECONDS, encodedValue);
}
export async function deleteTokenCache(redis: KV, key: string): Promise<void> {


@@ -0,0 +1,5 @@
import { components } from "../reflector-api";
type ApiTranscriptStatus = components["schemas"]["GetTranscript"]["status"];
export type TranscriptStatus = ApiTranscriptStatus;


@@ -21,7 +21,7 @@ export interface CustomSession extends Session {
// assumption that JWT is JWTWithAccessToken - we set it in jwt callback of auth; typing isn't strong around there
// but the assumption is crucial to auth working
export const assertExtendedToken = <T>(
t: T,
t: Exclude<T, null | undefined>,
): T & {
accessTokenExpires: number;
accessToken: string;
@@ -45,7 +45,7 @@ export const assertExtendedToken = <T>(
};
export const assertExtendedTokenAndUserId = <U, T extends { user?: U }>(
t: T,
t: Exclude<T, null | undefined>,
): T & {
accessTokenExpires: number;
accessToken: string;
@@ -55,7 +55,7 @@ export const assertExtendedTokenAndUserId = <U, T extends { user?: U }>(
} => {
const extendedToken = assertExtendedToken(t);
if (typeof (extendedToken.user as any)?.id === "string") {
return t as T & {
return t as Exclude<T, null | undefined> & {
accessTokenExpires: number;
accessToken: string;
user: U & {
@@ -67,8 +67,14 @@ export const assertExtendedTokenAndUserId = <U, T extends { user?: U }>(
};
// best attempt to check the session is valid
export const assertCustomSession = <S extends Session>(s: S): CustomSession => {
export const assertCustomSession = <T extends Session>(
s: Exclude<T, null | undefined>,
): CustomSession => {
const r = assertExtendedTokenAndUserId(s);
// no other checks for now
return r as CustomSession;
};
export type Mutable<T> = {
-readonly [P in keyof T]: T[P];
};


@@ -0,0 +1,26 @@
// for paths that are not supposed to be public
import { PROTECTED_PAGES } from "./auth";
import { usePathname } from "next/navigation";
import { useAuth } from "./AuthProvider";
import { useEffect } from "react";
const HOME = "/" as const;
export const useLoginRequiredPages = () => {
const pathname = usePathname();
const isProtected = PROTECTED_PAGES.test(pathname);
const auth = useAuth();
const isNotLoggedIn = auth.status === "unauthenticated";
// safety
const isLastDestination = pathname === HOME;
const shouldRedirect = isNotLoggedIn && isProtected && !isLastDestination;
useEffect(() => {
if (!shouldRedirect) return;
// on the server, next-auth middleware redirects straight to the auth provider, but that redirect isn't available here because it is hidden inside the middleware,
// so we just "softly" send the user to the home page
// warning: if HOME itself redirects somewhere else, the isLastDestination guard no longer protects us
window.location.href = HOME;
}, [shouldRedirect]);
// optionally guard against a flash of protected content, since window.location.href takes a moment to apply
return shouldRedirect ? HOME : null;
};
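A hypothetical client-side wiring for this hook (the component name and import path are assumptions): render nothing while the hard redirect to "/" is in flight so protected content never flashes.
"use client";
import type { ReactNode } from "react";
import { useLoginRequiredPages } from "./useLoginRequiredPages"; // path is an assumption
export function LoginGate({ children }: { children: ReactNode }) {
  const redirectingTo = useLoginRequiredPages();
  if (redirectingTo) return null; // redirect target returned, suppress the page
  return <>{children}</>;
}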


@@ -2,6 +2,7 @@ import { useAuth } from "./AuthProvider";
export const useUserName = (): string | null | undefined => {
const auth = useAuth();
if (auth.status !== "authenticated") return undefined;
if (auth.status !== "authenticated" && auth.status !== "refreshing")
return undefined;
return auth.user?.name || null;
};


@@ -158,7 +158,19 @@ export const assertExists = <T>(
return value;
};
export const assertNotExists = <T>(
value: T | null | undefined,
err?: string,
): void => {
if (value !== null && value !== undefined) {
throw new Error(
`Assertion failed: ${err ?? "value is not null or undefined"}`,
);
}
};
export const assertExistsAndNonEmptyString = (
value: string | null | undefined,
err?: string,
): NonEmptyString =>
parseNonEmptyString(assertExists(value, "Expected non-empty string"));
parseNonEmptyString(assertExists(value, err || "Expected non-empty string"));
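Illustrative calls for the new assertion and the now-customizable message (the values are hypothetical):
import { assertNotExists, assertExistsAndNonEmptyString } from "./utils";
// Passes silently when the value is absent, throws with the given message otherwise.
assertNotExists(undefined, "expected no cached error");
// Returns the non-empty string or throws the custom message instead of the generic one.
const issuer = assertExistsAndNonEmptyString(
  process.env.AUTHENTIK_ISSUER,
  "AUTHENTIK_ISSUER required",
);
console.log(issuer);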


@@ -2,8 +2,8 @@
import { ChakraProvider } from "@chakra-ui/react";
import system from "./styles/theme";
import dynamic from "next/dynamic";
import { WherebyProvider } from "@whereby.com/browser-sdk/react";
import { Toaster } from "./components/ui/toaster";
import { NuqsAdapter } from "nuqs/adapters/next/app";
import { QueryClientProvider } from "@tanstack/react-query";
@@ -11,6 +11,14 @@ import { queryClient } from "./lib/queryClient";
import { AuthProvider } from "./lib/AuthProvider";
import { SessionProvider as SessionProviderNextAuth } from "next-auth/react";
const WherebyProvider = dynamic(
() =>
import("@whereby.com/browser-sdk/react").then((mod) => ({
default: mod.WherebyProvider,
})),
{ ssr: false },
);
export function Providers({ children }: { children: React.ReactNode }) {
return (
<NuqsAdapter>


@@ -926,8 +926,17 @@ export interface components {
source_kind: components["schemas"]["SourceKind"];
/** Created At */
created_at: string;
/** Status */
status: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Rank */
rank: number;
/**

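With the status field narrowed to a string enum, consumers can switch or branch on it exhaustively. A sketch (the union is copied from the hunk above; the alias name and helper are illustrative):
type TranscriptStatus =
  | "idle"
  | "uploaded"
  | "recording"
  | "processing"
  | "error"
  | "ended";
const isTerminal = (s: TranscriptStatus): boolean => s === "ended" || s === "error";
console.log(isTerminal("processing")); // false
console.log(isTerminal("ended")); // true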

@@ -1,5 +1,5 @@
"use client";
import { useEffect, useState } from "react";
import { useEffect, useState, use } from "react";
import Link from "next/link";
import Image from "next/image";
import { notFound } from "next/navigation";
@@ -30,9 +30,9 @@ const FORM_FIELDS = {
};
export type WebinarDetails = {
params: {
params: Promise<{
title: string;
};
}>;
};
export type Webinar = {
@@ -63,7 +63,8 @@ const WEBINARS: Webinar[] = [
];
export default function WebinarPage(details: WebinarDetails) {
const title = details.params.title;
const params = use(details.params);
const title = params.title;
const webinar = WEBINARS.find((webinar) => webinar.title === title);
if (!webinar) {
return notFound();

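The change above follows the Next.js 15 pattern where route params arrive as a Promise and a client component unwraps them with React's use(). A generic sketch under that assumption (the component and param names are illustrative):
"use client";
import { use } from "react";
type PageProps = { params: Promise<{ slug: string }> };
export default function ExamplePage({ params }: PageProps) {
  const { slug } = use(params); // suspends until the params promise resolves
  return <p>{slug}</p>;
}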

@@ -1,13 +0,0 @@
export const localConfig = {
features: {
requireLogin: true,
privacy: true,
browse: true,
sendToZulip: true,
rooms: true,
},
api_url: "http://127.0.0.1:1250",
websocket_url: "ws://127.0.0.1:1250",
auth_callback_url: "http://localhost:3000/auth-callback",
zulip_streams: "", // Find the value on zulip
};


@@ -23,3 +23,5 @@ if (SENTRY_DSN) {
replaysSessionSampleRate: 0.0,
});
}
export const onRouterTransitionStart = Sentry.captureRouterTransitionStart;

www/instrumentation.ts (new file, 9 lines)

@@ -0,0 +1,9 @@
export async function register() {
if (process.env.NEXT_RUNTIME === "nodejs") {
await import("./sentry.server.config");
}
if (process.env.NEXT_RUNTIME === "edge") {
await import("./sentry.edge.config");
}
}


@@ -1,16 +1,7 @@
import { withAuth } from "next-auth/middleware";
import { getConfig } from "./app/lib/edgeConfig";
import { featureEnabled } from "./app/lib/features";
import { NextResponse } from "next/server";
const LOGIN_REQUIRED_PAGES = [
"/transcripts/[!new]",
"/browse(.*)",
"/rooms(.*)",
];
const PROTECTED_PAGES = new RegExp(
LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
);
import { PROTECTED_PAGES } from "./app/lib/auth";
export const config = {
matcher: [
@@ -28,13 +19,12 @@ export const config = {
export default withAuth(
async function middleware(request) {
const config = await getConfig();
const pathname = request.nextUrl.pathname;
// feature-flags protected paths
if (
(!config.features.browse && pathname.startsWith("/browse")) ||
(!config.features.rooms && pathname.startsWith("/rooms"))
(!featureEnabled("browse") && pathname.startsWith("/browse")) ||
(!featureEnabled("rooms") && pathname.startsWith("/rooms"))
) {
return NextResponse.redirect(request.nextUrl.origin);
}
@@ -42,10 +32,8 @@ export default withAuth(
{
callbacks: {
async authorized({ req, token }) {
const config = await getConfig();
if (
config.features.requireLogin &&
featureEnabled("requireLogin") &&
PROTECTED_PAGES.test(req.nextUrl.pathname)
) {
return !!token;


@@ -1,7 +1,6 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
output: "standalone",
experimental: { esmExternals: "loose" },
env: {
IS_CI: process.env.IS_CI,
},


@@ -17,20 +17,19 @@
"@fortawesome/fontawesome-svg-core": "^6.4.0",
"@fortawesome/free-solid-svg-icons": "^6.4.0",
"@fortawesome/react-fontawesome": "^0.2.0",
"@sentry/nextjs": "^7.77.0",
"@sentry/nextjs": "^10.11.0",
"@tanstack/react-query": "^5.85.9",
"@types/ioredis": "^5.0.0",
"@vercel/edge-config": "^0.4.1",
"@whereby.com/browser-sdk": "^3.3.4",
"autoprefixer": "10.4.20",
"axios": "^1.8.2",
"axios": "^1.12.0",
"eslint": "^9.33.0",
"eslint-config-next": "^14.2.31",
"eslint-config-next": "^15.5.3",
"fontawesome": "^5.6.3",
"ioredis": "^5.7.0",
"jest-worker": "^29.6.2",
"lucide-react": "^0.525.0",
"next": "^14.2.30",
"next": "^15.5.3",
"next-auth": "^4.24.7",
"next-themes": "^0.4.6",
"nuqs": "^2.4.3",
@@ -45,6 +44,7 @@
"react-markdown": "^9.0.0",
"react-qr-code": "^2.0.12",
"react-select-search": "^4.1.7",
"redlock": "5.0.0-beta.2",
"sass": "^1.63.6",
"simple-peer": "^9.11.1",
"tailwindcss": "^3.3.2",
@@ -62,8 +62,7 @@
"jest": "^30.1.3",
"openapi-typescript": "^7.9.1",
"prettier": "^3.0.0",
"ts-jest": "^29.4.1",
"vercel": "^37.3.0"
"ts-jest": "^29.4.1"
},
"packageManager": "pnpm@10.14.0+sha512.ad27a79641b49c3e481a16a805baa71817a04bbe06a38d17e60e2eaee83f6a146c6a688125f5792e48dd5ba30e7da52a5cda4c3992b9ccf333f9ce223af84748"
}

www/pnpm-lock.yaml (generated): diff suppressed because it is too large (14286 lines).