Mirror of https://github.com/Monadical-SAS/reflector.git (synced 2025-12-21 04:39:06 +00:00)

Comparing igor/401-r...v0.9.0 (5 commits)
Commits in this range:

- 02a3938822
- 7f5a4c9ddc
- 08d88ec349
- c4d2825c81
- 0663700a61
CHANGELOG.md (15 lines changed)
@@ -1,5 +1,20 @@
# Changelog

## [0.9.0](https://github.com/Monadical-SAS/reflector/compare/v0.8.2...v0.9.0) (2025-09-06)

### Features

* frontend openapi react query ([#606](https://github.com/Monadical-SAS/reflector/issues/606)) ([c4d2825](https://github.com/Monadical-SAS/reflector/commit/c4d2825c81f81ad8835629fbf6ea8c7383f8c31b))

### Bug Fixes

* align whisper transcriber api with parakeet ([#602](https://github.com/Monadical-SAS/reflector/issues/602)) ([0663700](https://github.com/Monadical-SAS/reflector/commit/0663700a615a4af69a03c96c410f049e23ec9443))
* kv use tls explicit ([#610](https://github.com/Monadical-SAS/reflector/issues/610)) ([08d88ec](https://github.com/Monadical-SAS/reflector/commit/08d88ec349f38b0d13e0fa4cb73486c8dfd31836))
* source kind for file processing ([#601](https://github.com/Monadical-SAS/reflector/issues/601)) ([dc82f8b](https://github.com/Monadical-SAS/reflector/commit/dc82f8bb3bdf3ab3d4088e592a30fd63907319e1))
* token refresh locking ([#613](https://github.com/Monadical-SAS/reflector/issues/613)) ([7f5a4c9](https://github.com/Monadical-SAS/reflector/commit/7f5a4c9ddc7fd098860c8bdda2ca3b57f63ded2f))

## [0.8.2](https://github.com/Monadical-SAS/reflector/compare/v0.8.1...v0.8.2) (2025-08-29)
server/docs/gpu/api-transcription.md (new file, 194 lines)
@@ -0,0 +1,194 @@
## Reflector GPU Transcription API (Specification)

This document defines the Reflector GPU transcription API that all implementations must adhere to. Current implementations include NVIDIA Parakeet (NeMo) and Whisper (faster-whisper), both deployed on Modal.com. The API surface and response shapes are OpenAI/Whisper-compatible, so clients can switch implementations by changing only the base URL.

### Base URL and Authentication

- Example base URLs (Modal web endpoints):
  - Parakeet: `https://<account>--reflector-transcriber-parakeet-web.modal.run`
  - Whisper: `https://<account>--reflector-transcriber-web.modal.run`
- All endpoints are served under `/v1` and require a Bearer token:

```
Authorization: Bearer <REFLECTOR_GPU_APIKEY>
```

Note: To switch implementations, deploy the desired variant and point `TRANSCRIPT_URL` to its base URL. The API is identical.

### Supported file types

`mp3, mp4, mpeg, mpga, m4a, wav, webm`

### Models and languages

- Parakeet (NVIDIA NeMo): default `nvidia/parakeet-tdt-0.6b-v2`
  - Language support: only `en`. Other languages return HTTP 400.
- Whisper (faster-whisper): default `large-v2` (or deployment-specific)
  - Language support: multilingual (per Whisper model capabilities).

Note: The `model` parameter is accepted by all implementations for interface parity. Some backends may treat it as informational.

### Endpoints

#### POST /v1/audio/transcriptions

Transcribe one or more uploaded audio files.

Request: multipart/form-data

- `file` (File) — optional. Single file to transcribe.
- `files` (File[]) — optional. One or more files to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`.
  - Parakeet: only `en` is accepted; other values return HTTP 400
  - Whisper: model-dependent; typically multilingual
- `batch` (boolean) — optional, defaults to `false`.

Notes:

- Provide either `file` or `files`, not both. If neither is provided, HTTP 400.
- `batch` requires `files`; using `batch=true` without `files` returns HTTP 400.
- Response shape for multiple files is the same regardless of `batch`.
- Files sent to this endpoint are processed in a single pass (no VAD/chunking). This is intended for short clips (roughly ≤ 30s; depends on GPU memory/model). For longer audio, prefer `/v1/audio/transcriptions-from-url`, which supports VAD-based chunking.

Responses

Single file response:

```json
{
  "text": "transcribed text",
  "words": [
    { "word": "hello", "start": 0.0, "end": 0.5 },
    { "word": "world", "start": 0.5, "end": 1.0 }
  ],
  "filename": "audio.mp3"
}
```

Multiple files response:

```json
{
  "results": [
    { "filename": "a1.mp3", "text": "...", "words": [...] },
    { "filename": "a2.mp3", "text": "...", "words": [...] }
  ]
}
```

Notes:

- Word objects always include the keys `word`, `start`, and `end`.
- Some implementations may include a trailing space in `word` to match Whisper tokenization behavior; clients should trim if needed.

Example curl (single file):

```bash
curl -X POST \
  -H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
  -F "file=@/path/to/audio.mp3" \
  -F "language=en" \
  "$BASE_URL/v1/audio/transcriptions"
```

Example curl (multiple files, batch):

```bash
curl -X POST \
  -H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
  -F "files=@/path/a1.mp3" -F "files=@/path/a2.mp3" \
  -F "batch=true" -F "language=en" \
  "$BASE_URL/v1/audio/transcriptions"
```
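For Python consumers, the same single-file call can be made with the `requests` library. This is a minimal sketch, not part of the spec; the environment-variable names mirror the server configuration section below, and `transcribe_file` is an illustrative helper. The trim step reflects the trailing-space note above.

```python
# Minimal Python sketch of POST /v1/audio/transcriptions (assumes `requests`).
# TRANSCRIPT_URL / REFLECTOR_GPU_APIKEY are placeholders for your deployment.
import os

import requests

BASE_URL = os.environ["TRANSCRIPT_URL"]
API_KEY = os.environ["REFLECTOR_GPU_APIKEY"]


def transcribe_file(path: str, language: str = "en") -> dict:
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            data={"language": language},
        )
    resp.raise_for_status()
    result = resp.json()
    # Some backends keep Whisper's leading/trailing spaces on tokens; trim them.
    for word in result["words"]:
        word["word"] = word["word"].strip()
    return result
```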
#### POST /v1/audio/transcriptions-from-url

Transcribe a single remote audio file by URL.

Request: application/json

Body parameters:

- `audio_file_url` (string) — required. URL of the audio file to transcribe.
- `model` (string) — optional. Defaults to the implementation-specific model (see above).
- `language` (string) — optional, defaults to `en`. Parakeet only accepts `en`.
- `timestamp_offset` (number) — optional, defaults to `0.0`. Added to each word's `start`/`end` in the response.

```json
{
  "audio_file_url": "https://example.com/audio.mp3",
  "model": "nvidia/parakeet-tdt-0.6b-v2",
  "language": "en",
  "timestamp_offset": 0.0
}
```

Response:

```json
{
  "text": "transcribed text",
  "words": [
    { "word": "hello", "start": 10.0, "end": 10.5 },
    { "word": "world", "start": 10.5, "end": 11.0 }
  ]
}
```

Notes:

- `timestamp_offset` is added to each word's `start`/`end` in the response.
- Implementations may perform VAD-based chunking and batching for long-form audio; word timings are adjusted accordingly.

Example curl:

```bash
curl -X POST \
  -H "Authorization: Bearer $REFLECTOR_GPU_APIKEY" \
  -H "Content-Type: application/json" \
  -d '{
    "audio_file_url": "https://example.com/audio.mp3",
    "language": "en",
    "timestamp_offset": 0
  }' \
  "$BASE_URL/v1/audio/transcriptions-from-url"
```
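The same call from Python, as a minimal sketch with the placeholder environment variables used above. `timestamp_offset` is the piece that matters when transcribing one chunk of a longer recording and stitching word timings back together.

```python
# Minimal Python sketch of POST /v1/audio/transcriptions-from-url.
import os

import requests

BASE_URL = os.environ["TRANSCRIPT_URL"]
API_KEY = os.environ["REFLECTOR_GPU_APIKEY"]


def transcribe_url(audio_file_url: str, timestamp_offset: float = 0.0) -> dict:
    resp = requests.post(
        f"{BASE_URL}/v1/audio/transcriptions-from-url",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "audio_file_url": audio_file_url,
            "language": "en",
            "timestamp_offset": timestamp_offset,
        },
    )
    resp.raise_for_status()
    # Word start/end values in the response already include timestamp_offset.
    return resp.json()
```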
### Error handling

- 400 Bad Request
  - Parakeet: `language` other than `en`
  - Missing required parameters (`file`/`files` for upload; `audio_file_url` for the URL endpoint)
  - Unsupported file extension
- 401 Unauthorized
  - Missing or invalid Bearer token
- 404 Not Found
  - `audio_file_url` does not exist
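Client code can branch on these statuses explicitly. A hedged sketch follows; the exception choices are illustrative, not mandated by the spec.

```python
# Sketch of mapping the documented status codes onto client-side errors.
import requests


def check_response(resp: requests.Response) -> dict:
    if resp.status_code == 400:
        # Wrong language (Parakeet), missing file/files/audio_file_url,
        # or an unsupported file extension.
        raise ValueError(f"Bad request: {resp.text}")
    if resp.status_code == 401:
        # Missing or invalid Bearer token.
        raise PermissionError("Check REFLECTOR_GPU_APIKEY")
    if resp.status_code == 404:
        # The audio_file_url does not exist.
        raise FileNotFoundError("Remote audio file not found")
    resp.raise_for_status()  # any other unexpected status
    return resp.json()
```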
### Implementation details

- GPUs: A10G for small-file/live, L40S for large-file URL transcription (subject to deployment)
- VAD chunking and segment batching; word timings adjusted and overlapping ends constrained
- Pads very short segments (< 0.5s) to avoid model crashes on some backends

### Server configuration (Reflector API)

Set the Reflector server to use the Modal backend and point `TRANSCRIPT_URL` to your chosen deployment:

```
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://<account>--reflector-transcriber-parakeet-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<REFLECTOR_GPU_APIKEY>
```

### Conformance tests

Use the pytest-based conformance tests to validate any new implementation (including self-hosted) against this spec:

```
TRANSCRIPT_URL=https://<your-deployment-base> \
TRANSCRIPT_MODAL_API_KEY=your-api-key \
uv run -m pytest -m gpu_modal --no-cov server/tests/test_gpu_modal_transcript.py
```
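For orientation, the shape checks such a conformance test performs look roughly like this (a hypothetical sketch, not the actual test code):

```python
# Hypothetical sketch of a conformance check on the single-file response shape.
def assert_single_file_shape(result: dict) -> None:
    assert isinstance(result["text"], str)
    for word in result["words"]:
        assert {"word", "start", "end"} <= set(word)
        assert isinstance(word["word"], str)
        assert word["start"] <= word["end"]
```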
@@ -1,41 +1,78 @@
 import os
-import tempfile
+import sys
 import threading
+import uuid
+from typing import Generator, Mapping, NamedTuple, NewType, TypedDict
+from urllib.parse import urlparse

 import modal
-from pydantic import BaseModel

-MODELS_DIR = "/models"

 MODEL_NAME = "large-v2"
 MODEL_COMPUTE_TYPE: str = "float16"
 MODEL_NUM_WORKERS: int = 1

 MINUTES = 60  # seconds
+SAMPLERATE = 16000
+UPLOADS_PATH = "/uploads"
+CACHE_PATH = "/models"
+SUPPORTED_FILE_EXTENSIONS = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]
+VAD_CONFIG = {
+    "batch_max_duration": 30.0,
+    "silence_padding": 0.5,
+    "window_size": 512,
+}

-volume = modal.Volume.from_name("models", create_if_missing=True)
+WhisperUniqFilename = NewType("WhisperUniqFilename", str)
+AudioFileExtension = NewType("AudioFileExtension", str)

 app = modal.App("reflector-transcriber")

+model_cache = modal.Volume.from_name("models", create_if_missing=True)
+upload_volume = modal.Volume.from_name("whisper-uploads", create_if_missing=True)
+
+
+class TimeSegment(NamedTuple):
+    """Represents a time segment with start and end times."""
+
+    start: float
+    end: float
+
+
+class AudioSegment(NamedTuple):
+    """Represents an audio segment with timing and audio data."""
+
+    start: float
+    end: float
+    audio: any
+
+
+class TranscriptResult(NamedTuple):
+    """Represents a transcription result with text and word timings."""
+
+    text: str
+    words: list["WordTiming"]
+
+
+class WordTiming(TypedDict):
+    """Represents a word with its timing information."""
+
+    word: str
+    start: float
+    end: float


 def download_model():
     from faster_whisper import download_model

-    volume.reload()
+    model_cache.reload()
-    download_model(MODEL_NAME, cache_dir=MODELS_DIR)
+    download_model(MODEL_NAME, cache_dir=CACHE_PATH)
-    volume.commit()
+    model_cache.commit()


 image = (
     modal.Image.debian_slim(python_version="3.12")
-    .pip_install(
-        "huggingface_hub==0.27.1",
-        "hf-transfer==0.1.9",
-        "torch==2.5.1",
-        "faster-whisper==1.1.1",
-    )
     .env(
         {
             "HF_HUB_ENABLE_HF_TRANSFER": "1",
@@ -45,19 +82,98 @@ image = (
             ),
         }
     )
-    .run_function(download_model, volumes={MODELS_DIR: volume})
+    .apt_install("ffmpeg")
+    .pip_install(
+        "huggingface_hub==0.27.1",
+        "hf-transfer==0.1.9",
+        "torch==2.5.1",
+        "faster-whisper==1.1.1",
+        "fastapi==0.115.12",
+        "requests",
+        "librosa==0.10.1",
+        "numpy<2",
+        "silero-vad==5.1.0",
+    )
+    .run_function(download_model, volumes={CACHE_PATH: model_cache})
 )
+
+
+def detect_audio_format(url: str, headers: Mapping[str, str]) -> AudioFileExtension:
+    parsed_url = urlparse(url)
+    url_path = parsed_url.path
+
+    for ext in SUPPORTED_FILE_EXTENSIONS:
+        if url_path.lower().endswith(f".{ext}"):
+            return AudioFileExtension(ext)
+
+    content_type = headers.get("content-type", "").lower()
+    if "audio/mpeg" in content_type or "audio/mp3" in content_type:
+        return AudioFileExtension("mp3")
+    if "audio/wav" in content_type:
+        return AudioFileExtension("wav")
+    if "audio/mp4" in content_type:
+        return AudioFileExtension("mp4")
+
+    raise ValueError(
+        f"Unsupported audio format for URL: {url}. "
+        f"Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
+    )
+
+
+def download_audio_to_volume(
+    audio_file_url: str,
+) -> tuple[WhisperUniqFilename, AudioFileExtension]:
+    import requests
+    from fastapi import HTTPException
+
+    response = requests.head(audio_file_url, allow_redirects=True)
+    if response.status_code == 404:
+        raise HTTPException(status_code=404, detail="Audio file not found")
+
+    response = requests.get(audio_file_url, allow_redirects=True)
+    response.raise_for_status()
+
+    audio_suffix = detect_audio_format(audio_file_url, response.headers)
+    unique_filename = WhisperUniqFilename(f"{uuid.uuid4()}.{audio_suffix}")
+    file_path = f"{UPLOADS_PATH}/{unique_filename}"
+
+    with open(file_path, "wb") as f:
+        f.write(response.content)
+
+    upload_volume.commit()
+    return unique_filename, audio_suffix
+
+
+def pad_audio(audio_array, sample_rate: int = SAMPLERATE):
+    """Add 0.5s of silence if audio is shorter than the silence_padding window.
+
+    Whisper does not require this strictly, but aligning behavior with Parakeet
+    avoids edge-case crashes on extremely short inputs and makes comparisons easier.
+    """
+    import numpy as np
+
+    audio_duration = len(audio_array) / sample_rate
+    if audio_duration < VAD_CONFIG["silence_padding"]:
+        silence_samples = int(sample_rate * VAD_CONFIG["silence_padding"])
+        silence = np.zeros(silence_samples, dtype=np.float32)
+        return np.concatenate([audio_array, silence])
+    return audio_array


 @app.cls(
     gpu="A10G",
     timeout=5 * MINUTES,
     scaledown_window=5 * MINUTES,
-    allow_concurrent_inputs=6,
     image=image,
-    volumes={MODELS_DIR: volume},
+    volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
 )
-class Transcriber:
+@modal.concurrent(max_inputs=10)
+class TranscriberWhisperLive:
+    """Live transcriber class for small audio segments (A10G).
+
+    Mirrors the Parakeet live class API but uses Faster-Whisper under the hood.
+    """
+
     @modal.enter()
     def enter(self):
         import faster_whisper
@@ -71,23 +187,200 @@ class Transcriber:
             device=self.device,
             compute_type=MODEL_COMPUTE_TYPE,
             num_workers=MODEL_NUM_WORKERS,
-            download_root=MODELS_DIR,
+            download_root=CACHE_PATH,
             local_files_only=True,
         )
+        print(f"Model is on device: {self.device}")

     @modal.method()
     def transcribe_segment(
         self,
-        audio_data: str,
-        audio_suffix: str,
-        language: str,
+        filename: str,
+        language: str = "en",
     ):
-        with tempfile.NamedTemporaryFile("wb+", suffix=f".{audio_suffix}") as fp:
-            fp.write(audio_data)
+        """Transcribe a single uploaded audio file by filename."""
+        upload_volume.reload()
+
+        file_path = f"{UPLOADS_PATH}/{filename}"
+        if not os.path.exists(file_path):
+            raise FileNotFoundError(f"File not found: {file_path}")
+
+        with self.lock:
+            with NoStdStreams():
+                segments, _ = self.model.transcribe(
+                    file_path,
+                    language=language,
+                    beam_size=5,
+                    word_timestamps=True,
+                    vad_filter=True,
+                    vad_parameters={"min_silence_duration_ms": 500},
+                )
+
+        segments = list(segments)
+        text = "".join(segment.text for segment in segments).strip()
+        words = [
+            {
+                "word": word.word,
+                "start": round(float(word.start), 2),
+                "end": round(float(word.end), 2),
+            }
+            for segment in segments
+            for word in segment.words
+        ]
+
+        return {"text": text, "words": words}
+
+    @modal.method()
+    def transcribe_batch(
+        self,
+        filenames: list[str],
+        language: str = "en",
+    ):
+        """Transcribe multiple uploaded audio files and return per-file results."""
+        upload_volume.reload()
+
+        results = []
+        for filename in filenames:
+            file_path = f"{UPLOADS_PATH}/{filename}"
+            if not os.path.exists(file_path):
+                raise FileNotFoundError(f"Batch file not found: {file_path}")
+
+            with self.lock:
+                with NoStdStreams():
+                    segments, _ = self.model.transcribe(
+                        file_path,
+                        language=language,
+                        beam_size=5,
+                        word_timestamps=True,
+                        vad_filter=True,
+                        vad_parameters={"min_silence_duration_ms": 500},
+                    )
+
+            segments = list(segments)
+            text = "".join(seg.text for seg in segments).strip()
+            words = [
+                {
+                    "word": w.word,
+                    "start": round(float(w.start), 2),
+                    "end": round(float(w.end), 2),
+                }
+                for seg in segments
+                for w in seg.words
+            ]
+
+            results.append(
+                {
+                    "filename": filename,
+                    "text": text,
+                    "words": words,
+                }
+            )
+
+        return results
+
+
+@app.cls(
+    gpu="L40S",
+    timeout=15 * MINUTES,
+    image=image,
+    volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
+)
+class TranscriberWhisperFile:
+    """File transcriber for larger/longer audio, using VAD-driven batching (L40S)."""
+
+    @modal.enter()
+    def enter(self):
+        import faster_whisper
+        import torch
+        from silero_vad import load_silero_vad
+
+        self.lock = threading.Lock()
+        self.use_gpu = torch.cuda.is_available()
+        self.device = "cuda" if self.use_gpu else "cpu"
+        self.model = faster_whisper.WhisperModel(
+            MODEL_NAME,
+            device=self.device,
+            compute_type=MODEL_COMPUTE_TYPE,
+            num_workers=MODEL_NUM_WORKERS,
+            download_root=CACHE_PATH,
+            local_files_only=True,
+        )
+        self.vad_model = load_silero_vad(onnx=False)
+
+    @modal.method()
+    def transcribe_segment(
+        self, filename: str, timestamp_offset: float = 0.0, language: str = "en"
+    ):
+        import librosa
+        import numpy as np
+        from silero_vad import VADIterator
+
+        def vad_segments(
+            audio_array,
+            sample_rate: int = SAMPLERATE,
+            window_size: int = VAD_CONFIG["window_size"],
+        ) -> Generator[TimeSegment, None, None]:
+            """Generate speech segments as TimeSegment using Silero VAD."""
+            iterator = VADIterator(self.vad_model, sampling_rate=sample_rate)
+            start = None
+            for i in range(0, len(audio_array), window_size):
+                chunk = audio_array[i : i + window_size]
+                if len(chunk) < window_size:
+                    chunk = np.pad(
+                        chunk, (0, window_size - len(chunk)), mode="constant"
+                    )
+                speech = iterator(chunk)
+                if not speech:
+                    continue
+                if "start" in speech:
+                    start = speech["start"]
+                    continue
+                if "end" in speech and start is not None:
+                    end = speech["end"]
+                    yield TimeSegment(
+                        start / float(SAMPLERATE), end / float(SAMPLERATE)
+                    )
+                    start = None
+            iterator.reset_states()
+
+        upload_volume.reload()
+        file_path = f"{UPLOADS_PATH}/{filename}"
+        if not os.path.exists(file_path):
+            raise FileNotFoundError(f"File not found: {file_path}")
+
+        audio_array, _sr = librosa.load(file_path, sr=SAMPLERATE, mono=True)
+
+        # Batch segments up to ~30s windows by merging contiguous VAD segments
+        merged_batches: list[TimeSegment] = []
+        batch_start = None
+        batch_end = None
+        max_duration = VAD_CONFIG["batch_max_duration"]
+        for segment in vad_segments(audio_array):
+            seg_start, seg_end = segment.start, segment.end
+            if batch_start is None:
+                batch_start, batch_end = seg_start, seg_end
+                continue
+            if seg_end - batch_start <= max_duration:
+                batch_end = seg_end
+            else:
+                merged_batches.append(TimeSegment(batch_start, batch_end))
+                batch_start, batch_end = seg_start, seg_end
+        if batch_start is not None and batch_end is not None:
+            merged_batches.append(TimeSegment(batch_start, batch_end))
+
+        all_text = []
+        all_words = []
+
+        for segment in merged_batches:
+            start_time, end_time = segment.start, segment.end
+            s_idx = int(start_time * SAMPLERATE)
+            e_idx = int(end_time * SAMPLERATE)
+            segment = audio_array[s_idx:e_idx]
+            segment = pad_audio(segment, SAMPLERATE)

             with self.lock:
                 segments, _ = self.model.transcribe(
-                    fp.name,
+                    segment,
                     language=language,
                     beam_size=5,
                     word_timestamps=True,
@@ -96,66 +389,220 @@ class Transcriber:
                 )

             segments = list(segments)
-            text = "".join(segment.text for segment in segments)
+            text = "".join(seg.text for seg in segments).strip()
             words = [
-                {"word": word.word, "start": word.start, "end": word.end}
-                for segment in segments
-                for word in segment.words
+                {
+                    "word": w.word,
+                    "start": round(float(w.start) + start_time + timestamp_offset, 2),
+                    "end": round(float(w.end) + start_time + timestamp_offset, 2),
+                }
+                for seg in segments
+                for w in seg.words
             ]
+            if text:
+                all_text.append(text)
+                all_words.extend(words)

-        return {"text": text, "words": words}
+        return {"text": " ".join(all_text), "words": all_words}
+
+
+def detect_audio_format(url: str, headers: dict) -> str:
+    from urllib.parse import urlparse
+
+    from fastapi import HTTPException
+
+    url_path = urlparse(url).path
+    for ext in SUPPORTED_FILE_EXTENSIONS:
+        if url_path.lower().endswith(f".{ext}"):
+            return ext
+
+    content_type = headers.get("content-type", "").lower()
+    if "audio/mpeg" in content_type or "audio/mp3" in content_type:
+        return "mp3"
+    if "audio/wav" in content_type:
+        return "wav"
+    if "audio/mp4" in content_type:
+        return "mp4"
+
+    raise HTTPException(
+        status_code=400,
+        detail=(
+            f"Unsupported audio format for URL. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
+        ),
+    )
+
+
+def download_audio_to_volume(audio_file_url: str) -> tuple[str, str]:
+    import requests
+    from fastapi import HTTPException
+
+    response = requests.head(audio_file_url, allow_redirects=True)
+    if response.status_code == 404:
+        raise HTTPException(status_code=404, detail="Audio file not found")
+
+    response = requests.get(audio_file_url, allow_redirects=True)
+    response.raise_for_status()
+
+    audio_suffix = detect_audio_format(audio_file_url, response.headers)
+    unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
+    file_path = f"{UPLOADS_PATH}/{unique_filename}"
+
+    with open(file_path, "wb") as f:
+        f.write(response.content)
+
+    upload_volume.commit()
+    return unique_filename, audio_suffix


 @app.function(
     scaledown_window=60,
-    timeout=60,
+    timeout=600,
-    allow_concurrent_inputs=40,
     secrets=[
         modal.Secret.from_name("reflector-gpu"),
     ],
-    volumes={MODELS_DIR: volume},
+    volumes={CACHE_PATH: model_cache, UPLOADS_PATH: upload_volume},
+    image=image,
 )
+@modal.concurrent(max_inputs=40)
 @modal.asgi_app()
 def web():
-    from fastapi import Body, Depends, FastAPI, HTTPException, UploadFile, status
+    from fastapi import (
+        Body,
+        Depends,
+        FastAPI,
+        Form,
+        HTTPException,
+        UploadFile,
+        status,
+    )
     from fastapi.security import OAuth2PasswordBearer
-    from typing_extensions import Annotated

-    transcriber = Transcriber()
+    transcriber_live = TranscriberWhisperLive()
+    transcriber_file = TranscriberWhisperFile()

     app = FastAPI()

     oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

-    supported_file_types = ["mp3", "mp4", "mpeg", "mpga", "m4a", "wav", "webm"]

     def apikey_auth(apikey: str = Depends(oauth2_scheme)):
-        if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
-            raise HTTPException(
-                status_code=status.HTTP_401_UNAUTHORIZED,
-                detail="Invalid API key",
-                headers={"WWW-Authenticate": "Bearer"},
-            )
+        if apikey == os.environ["REFLECTOR_GPU_APIKEY"]:
+            return
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail="Invalid API key",
+            headers={"WWW-Authenticate": "Bearer"},
+        )

-    class TranscriptResponse(BaseModel):
-        result: dict
+    class TranscriptResponse(dict):
+        pass

     @app.post("/v1/audio/transcriptions", dependencies=[Depends(apikey_auth)])
     def transcribe(
-        file: UploadFile,
-        model: str = "whisper-1",
-        language: Annotated[str, Body(...)] = "en",
-    ) -> TranscriptResponse:
-        audio_data = file.file.read()
-        audio_suffix = file.filename.split(".")[-1]
-        assert audio_suffix in supported_file_types
+        file: UploadFile = None,
+        files: list[UploadFile] | None = None,
+        model: str = Form(MODEL_NAME),
+        language: str = Form("en"),
+        batch: bool = Form(False),
+    ):
+        if not file and not files:
+            raise HTTPException(
+                status_code=400, detail="Either 'file' or 'files' parameter is required"
+            )
+        if batch and not files:
+            raise HTTPException(
+                status_code=400, detail="Batch transcription requires 'files'"
+            )

-        func = transcriber.transcribe_segment.spawn(
-            audio_data=audio_data,
-            audio_suffix=audio_suffix,
-            language=language,
-        )
-        result = func.get()
-        return result
+        upload_files = [file] if file else files
+
+        uploaded_filenames: list[str] = []
+        for upload_file in upload_files:
+            audio_suffix = upload_file.filename.split(".")[-1]
+            if audio_suffix not in SUPPORTED_FILE_EXTENSIONS:
+                raise HTTPException(
+                    status_code=400,
+                    detail=(
+                        f"Unsupported audio format. Supported extensions: {', '.join(SUPPORTED_FILE_EXTENSIONS)}"
+                    ),
+                )
+
+            unique_filename = f"{uuid.uuid4()}.{audio_suffix}"
+            file_path = f"{UPLOADS_PATH}/{unique_filename}"
+            with open(file_path, "wb") as f:
+                content = upload_file.file.read()
+                f.write(content)
+            uploaded_filenames.append(unique_filename)
+
+        upload_volume.commit()
+
+        try:
+            if batch and len(upload_files) > 1:
+                func = transcriber_live.transcribe_batch.spawn(
+                    filenames=uploaded_filenames,
+                    language=language,
+                )
+                results = func.get()
+                return {"results": results}
+
+            results = []
+            for filename in uploaded_filenames:
+                func = transcriber_live.transcribe_segment.spawn(
+                    filename=filename,
+                    language=language,
+                )
+                result = func.get()
+                result["filename"] = filename
+                results.append(result)
+
+            return {"results": results} if len(results) > 1 else results[0]
+        finally:
+            for filename in uploaded_filenames:
+                try:
+                    file_path = f"{UPLOADS_PATH}/{filename}"
+                    os.remove(file_path)
+                except Exception:
+                    pass
+            upload_volume.commit()
+
+    @app.post("/v1/audio/transcriptions-from-url", dependencies=[Depends(apikey_auth)])
+    def transcribe_from_url(
+        audio_file_url: str = Body(
+            ..., description="URL of the audio file to transcribe"
+        ),
+        model: str = Body(MODEL_NAME),
+        language: str = Body("en"),
+        timestamp_offset: float = Body(0.0),
+    ):
+        unique_filename, _audio_suffix = download_audio_to_volume(audio_file_url)
+        try:
+            func = transcriber_file.transcribe_segment.spawn(
+                filename=unique_filename,
+                timestamp_offset=timestamp_offset,
+                language=language,
+            )
+            result = func.get()
+            return result
+        finally:
+            try:
+                file_path = f"{UPLOADS_PATH}/{unique_filename}"
+                os.remove(file_path)
+                upload_volume.commit()
+            except Exception:
+                pass

     return app
+
+
+class NoStdStreams:
+    def __init__(self):
+        self.devnull = open(os.devnull, "w")
+
+    def __enter__(self):
+        self._stdout, self._stderr = sys.stdout, sys.stderr
+        self._stdout.flush()
+        self._stderr.flush()
+        sys.stdout, sys.stderr = self.devnull, self.devnull
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        sys.stdout, sys.stderr = self._stdout, self._stderr
+        self.devnull.close()
@@ -272,6 +272,9 @@ class TestGPUModalTranscript:
         for f in temp_files:
             Path(f).unlink(missing_ok=True)

+    @pytest.mark.skipif(
+        not "parakeet" in get_model_name(), reason="Parakeet only supports English"
+    )
     def test_transcriptions_error_handling(self):
         """Test error handling for invalid requests."""
         url = get_modal_transcript_url()
|
|||||||
@@ -2,6 +2,7 @@
|
|||||||
|
|
||||||
import { Flex, Spinner } from "@chakra-ui/react";
|
import { Flex, Spinner } from "@chakra-ui/react";
|
||||||
import { useAuth } from "../lib/AuthProvider";
|
import { useAuth } from "../lib/AuthProvider";
|
||||||
|
import { useLoginRequiredPages } from "../lib/useLoginRequiredPages";
|
||||||
|
|
||||||
export default function AuthWrapper({
|
export default function AuthWrapper({
|
||||||
children,
|
children,
|
||||||
@@ -9,8 +10,10 @@ export default function AuthWrapper({
|
|||||||
children: React.ReactNode;
|
children: React.ReactNode;
|
||||||
}) {
|
}) {
|
||||||
const auth = useAuth();
|
const auth = useAuth();
|
||||||
|
const redirectPath = useLoginRequiredPages();
|
||||||
|
const redirectHappens = !!redirectPath;
|
||||||
|
|
||||||
if (auth.status === "loading") {
|
if (auth.status === "loading" || redirectHappens) {
|
||||||
return (
|
return (
|
||||||
<Flex
|
<Flex
|
||||||
flexDir="column"
|
flexDir="column"
|
||||||
|
@@ -1,17 +1,18 @@
 "use client";

-import { createContext, useContext, useEffect } from "react";
+import { createContext, useContext } from "react";
 import { useSession as useNextAuthSession } from "next-auth/react";
 import { signOut, signIn } from "next-auth/react";
-import { configureApiAuth, configureApiAuthRefresh } from "./apiClient";
+import { configureApiAuth } from "./apiClient";
 import { assertCustomSession, CustomSession } from "./types";
 import { Session } from "next-auth";
 import { SessionAutoRefresh } from "./SessionAutoRefresh";
 import { REFRESH_ACCESS_TOKEN_ERROR } from "./auth";
+import { assertExists } from "./utils";

 type AuthContextType = (
   | { status: "loading" }
-  | { status: "refreshing" }
+  | { status: "refreshing"; user: CustomSession["user"] }
   | { status: "unauthenticated"; error?: string }
   | {
       status: "authenticated";
@@ -41,7 +42,10 @@ export function AuthProvider({ children }: { children: React.ReactNode }) {
         return { status };
       }
       case true: {
-        return { status: "refreshing" as const };
+        return {
+          status: "refreshing" as const,
+          user: assertExists(customSession).user,
+        };
       }
       default: {
         const _: never = sessionIsHere;
@@ -88,12 +92,6 @@ export function AuthProvider({ children }: { children: React.ReactNode }) {
     contextValue.status === "authenticated" ? contextValue.accessToken : null,
   );

-  useEffect(() => {
-    configureApiAuthRefresh(
-      contextValue.status === "authenticated" ? contextValue.update : null,
-    );
-  }, [contextValue.status === "authenticated" && contextValue.update]);
-
   return (
     <AuthContext.Provider value={contextValue}>
       <SessionAutoRefresh>{children}</SessionAutoRefresh>
@@ -15,6 +15,7 @@ const REFRESH_BEFORE = REFRESH_ACCESS_TOKEN_BEFORE;

 export function SessionAutoRefresh({ children }) {
   const auth = useAuth();
+
   const accessTokenExpires =
     auth.status === "authenticated" ? auth.accessTokenExpires : null;

@@ -23,18 +24,16 @@ export function SessionAutoRefresh({ children }) {
     // and not too slow (debuggable)
     const INTERVAL_REFRESH_MS = 5000;
     const interval = setInterval(() => {
-      if (accessTokenExpires !== null) {
+      if (accessTokenExpires === null) return;
       const timeLeft = accessTokenExpires - Date.now();
-      console.log("time left", timeLeft);
-      // if (timeLeft < REFRESH_BEFORE) {
-      //   auth
-      //     .update()
-      //     .then(() => {})
-      //     .catch((e) => {
-      //       // note: 401 won't be considered error here
-      //       console.error("error refreshing auth token", e);
-      //     });
-      // }
+      if (timeLeft < REFRESH_BEFORE) {
+        auth
+          .update()
+          .then(() => {})
+          .catch((e) => {
+            // note: 401 won't be considered error here
+            console.error("error refreshing auth token", e);
+          });
       }
     }, INTERVAL_REFRESH_MS);
@@ -11,9 +11,6 @@ import {
 import createFetchClient from "openapi-react-query";
 import { assertExistsAndNonEmptyString } from "./utils";
 import { isBuildPhase } from "./next";
-import { Session } from "next-auth";
-import { assertCustomSession } from "./types";
-import { HttpMethod, PathsWithMethod } from "openapi-typescript-helpers";

 const API_URL = !isBuildPhase
   ? assertExistsAndNonEmptyString(process.env.NEXT_PUBLIC_API_URL)
@@ -28,20 +25,12 @@ export const client = createClient<paths>({
 export const $api = createFetchClient<paths>(client);

 let currentAuthToken: string | null | undefined = null;
-let refreshAuthCallback: (() => Promise<Session | null>) | null = null;
-
-const injectAuth = (request: Request, accessToken: string | null) => {
-  if (accessToken) {
-    request.headers.set("Authorization", `Bearer ${currentAuthToken}`);
-  } else {
-    request.headers.delete("Authorization");
-  }
-  return request;
-};

 client.use({
   onRequest({ request }) {
-    request = injectAuth(request, currentAuthToken || null);
+    if (currentAuthToken) {
+      request.headers.set("Authorization", `Bearer ${currentAuthToken}`);
+    }
     // XXX Only set Content-Type if not already set (FormData will set its own boundary)
     // This is a work around for uploading file, we're passing a formdata
     // but the content type was still application/json
@@ -55,46 +44,7 @@ client.use({
   },
 });

-client.use({
-  async onResponse({ response, request, params, schemaPath }) {
-    if (response.status === 401) {
-      console.log(
-        "response.status is 401!",
-        refreshAuthCallback,
-        request,
-        schemaPath,
-      );
-    }
-    if (response.status === 401 && refreshAuthCallback) {
-      try {
-        const session = await refreshAuthCallback();
-        if (!session) {
-          console.warn("Token refresh failed, no session returned");
-          return response;
-        }
-        const customSession = assertCustomSession(session);
-        currentAuthToken = customSession.accessToken;
-        const r = await client.request(
-          request.method as HttpMethod,
-          schemaPath as PathsWithMethod<paths, HttpMethod>,
-          ...params,
-        );
-        return r.response;
-      } catch (error) {
-        console.error("Token refresh failed during 401 retry:", error);
-      }
-    }
-    return response;
-  },
-});
-
 // the function contract: lightweight, idempotent
 export const configureApiAuth = (token: string | null | undefined) => {
   currentAuthToken = token;
 };
-
-export const configureApiAuthRefresh = (
-  callback: (() => Promise<Session | null>) | null,
-) => {
-  refreshAuthCallback = callback;
-};
@@ -1,3 +1,13 @@
 export const REFRESH_ACCESS_TOKEN_ERROR = "RefreshAccessTokenError" as const;
 // 4 min is 1 min less than the default Authentik value. Here we assume that Authentik won't be set to access tokens < 4 min
 export const REFRESH_ACCESS_TOKEN_BEFORE = 4 * 60 * 1000;
+
+export const LOGIN_REQUIRED_PAGES = [
+  "/transcripts/[!new]",
+  "/browse(.*)",
+  "/rooms(.*)",
+];
+
+export const PROTECTED_PAGES = new RegExp(
+  LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
+);
@@ -2,7 +2,11 @@ import { AuthOptions } from "next-auth";
 import AuthentikProvider from "next-auth/providers/authentik";
 import type { JWT } from "next-auth/jwt";
 import { JWTWithAccessToken, CustomSession } from "./types";
-import { assertExists, assertExistsAndNonEmptyString } from "./utils";
+import {
+  assertExists,
+  assertExistsAndNonEmptyString,
+  assertNotExists,
+} from "./utils";
 import {
   REFRESH_ACCESS_TOKEN_BEFORE,
   REFRESH_ACCESS_TOKEN_ERROR,
@@ -12,14 +16,10 @@ import {
   setTokenCache,
   deleteTokenCache,
 } from "./redisTokenCache";
-import { tokenCacheRedis } from "./redisClient";
+import { tokenCacheRedis, redlock } from "./redisClient";
 import { isBuildPhase } from "./next";

-// REFRESH_ACCESS_TOKEN_BEFORE because refresh is based on access token expiration (imagine we cache it 30 days)
 const TOKEN_CACHE_TTL = REFRESH_ACCESS_TOKEN_BEFORE;

-const refreshLocks = new Map<string, Promise<JWTWithAccessToken>>();
-
 const CLIENT_ID = !isBuildPhase
   ? assertExistsAndNonEmptyString(process.env.AUTHENTIK_CLIENT_ID)
   : "noop";
@@ -45,38 +45,47 @@ export const authOptions: AuthOptions = {
   },
   callbacks: {
     async jwt({ token, account, user }) {
-      console.log("token.sub jwt callback", token.sub);
-      const KEY = `token:${token.sub}`;
+      if (account && !account.access_token) {
+        await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
+      }
+
       if (account && user) {
         // called only on first login
         // XXX account.expires_in used in example is not defined for authentik backend, but expires_at is
-        const expiresAtS = assertExists(account.expires_at);
-        const expiresAtMs = expiresAtS * 1000;
-        if (!account.access_token) {
-          await deleteTokenCache(tokenCacheRedis, KEY);
-        } else {
+        if (account.access_token) {
+          const expiresAtS = assertExists(account.expires_at);
+          const expiresAtMs = expiresAtS * 1000;
           const jwtToken: JWTWithAccessToken = {
             ...token,
             accessToken: account.access_token,
             accessTokenExpires: expiresAtMs,
             refreshToken: account.refresh_token,
           };
-          await setTokenCache(tokenCacheRedis, KEY, {
-            token: jwtToken,
-            timestamp: Date.now(),
-          });
-          return jwtToken;
+          if (jwtToken.error) {
+            await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
+          } else {
+            assertNotExists(
+              jwtToken.error,
+              `panic! trying to cache token with error in jwt: ${jwtToken.error}`,
+            );
+            await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
+              token: jwtToken,
+              timestamp: Date.now(),
+            });
+            return jwtToken;
+          }
         }
       }

-      const currentToken = await getTokenCache(tokenCacheRedis, KEY);
-      console.log(
-        "currentToken.token.accessTokenExpires",
-        currentToken?.token?.accessTokenExpires,
-        currentToken?.token?.accessTokenExpires
-          ? Date.now() < currentToken?.token?.accessTokenExpires
-          : "?",
+      const currentToken = await getTokenCache(
+        tokenCacheRedis,
+        `token:${token.sub}`,
+      );
+      console.debug(
+        "currentToken from cache",
+        JSON.stringify(currentToken, null, 2),
+        "will be returned?",
+        currentToken && Date.now() < currentToken.token.accessTokenExpires,
       );
       if (currentToken && Date.now() < currentToken.token.accessTokenExpires) {
         return currentToken.token;
@@ -105,20 +114,22 @@ export const authOptions: AuthOptions = {
 async function lockedRefreshAccessToken(
   token: JWT,
 ): Promise<JWTWithAccessToken> {
-  const lockKey = `${token.sub}-refresh`;
+  const lockKey = `${token.sub}-lock`;

-  const existingRefresh = refreshLocks.get(lockKey);
-  if (existingRefresh) {
-    return await existingRefresh;
-  }
-
-  const refreshPromise = (async () => {
-    try {
+  return redlock
+    .using([lockKey], 10000, async () => {
       const cached = await getTokenCache(tokenCacheRedis, `token:${token.sub}`);
+      if (cached)
+        console.debug(
+          "received cached token. to delete?",
+          Date.now() - cached.timestamp > TOKEN_CACHE_TTL,
+        );
+      else console.debug("no cached token received");
       if (cached) {
         if (Date.now() - cached.timestamp > TOKEN_CACHE_TTL) {
           await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
         } else if (Date.now() < cached.token.accessTokenExpires) {
+          console.debug("returning cached token", cached.token);
           return cached.token;
         }
       }
@@ -126,19 +137,35 @@ async function lockedRefreshAccessToken(
       const currentToken = cached?.token || (token as JWTWithAccessToken);
       const newToken = await refreshAccessToken(currentToken);

+      console.debug("current token during refresh", currentToken);
+      console.debug("new token during refresh", newToken);
+
+      if (newToken.error) {
+        await deleteTokenCache(tokenCacheRedis, `token:${token.sub}`);
+        return newToken;
+      }
+
+      assertNotExists(
+        newToken.error,
+        `panic! trying to cache token with error during refresh: ${newToken.error}`,
+      );
       await setTokenCache(tokenCacheRedis, `token:${token.sub}`, {
         token: newToken,
         timestamp: Date.now(),
       });

       return newToken;
-    } finally {
-      setTimeout(() => refreshLocks.delete(lockKey), 100);
-    }
-  })();
-
-  refreshLocks.set(lockKey, refreshPromise);
-  return refreshPromise;
+    })
+    .catch((e) => {
+      console.error("error refreshing token", e);
+      deleteTokenCache(tokenCacheRedis, `token:${token.sub}`).catch((e) => {
+        console.error("error deleting errored token", e);
+      });
+      return {
+        ...token,
+        error: REFRESH_ACCESS_TOKEN_ERROR,
+      } as JWTWithAccessToken;
+    });
 }

 async function refreshAccessToken(token: JWT): Promise<JWTWithAccessToken> {
@@ -1,30 +1,41 @@
 import Redis from "ioredis";
 import { isBuildPhase } from "./next";
+import Redlock, { ResourceLockedError } from "redlock";

 export type RedisClient = Pick<Redis, "get" | "setex" | "del">;
+export type RedlockClient = {
+  using: <T>(
+    keys: string | string[],
+    ttl: number,
+    cb: () => Promise<T>,
+  ) => Promise<T>;
+};
+const KV_USE_TLS = process.env.KV_USE_TLS
+  ? process.env.KV_USE_TLS === "true"
+  : undefined;
+
+let redisClient: Redis | null = null;

 const getRedisClient = (): RedisClient => {
+  if (redisClient) return redisClient;
   const redisUrl = process.env.KV_URL;
   if (!redisUrl) {
     throw new Error("KV_URL environment variable is required");
   }
-  const redis = new Redis(redisUrl, {
+  redisClient = new Redis(redisUrl, {
     maxRetriesPerRequest: 3,
-    lazyConnect: true,
+    ...(KV_USE_TLS === true
+      ? {
+          tls: {},
+        }
+      : {}),
   });

-  redis.on("error", (error) => {
+  redisClient.on("error", (error) => {
     console.error("Redis error:", error);
   });

-  // not necessary but will indicate redis config errors by failfast at startup
-  // happens only once; after that connection is allowed to die and the lib is assumed to be able to restore it eventually
-  redis.connect().catch((e) => {
-    console.error("Failed to connect to Redis:", e);
-    process.exit(1);
-  });
-
-  return redis;
+  return redisClient;
 };

 // next.js buildtime usage - we want to isolate next.js "build" time concepts here
@@ -43,4 +54,25 @@ const noopClient: RedisClient = (() => {
   del: noopDel,
 };
 })();
+
+const noopRedlock: RedlockClient = {
+  using: <T>(resource: string | string[], ttl: number, cb: () => Promise<T>) =>
+    cb(),
+};
+
+export const redlock: RedlockClient = isBuildPhase
+  ? noopRedlock
+  : (() => {
+      const r = new Redlock([getRedisClient()], {});
+      r.on("error", (error) => {
+        if (error instanceof ResourceLockedError) {
+          return;
+        }
+
+        // Log all other errors.
+        console.error(error);
+      });
+      return r;
+    })();

 export const tokenCacheRedis = isBuildPhase ? noopClient : getRedisClient();
|||||||
@@ -9,7 +9,6 @@ const TokenCacheEntrySchema = z.object({
     accessToken: z.string(),
     accessTokenExpires: z.number(),
     refreshToken: z.string().optional(),
-    error: z.string().optional(),
   }),
   timestamp: z.number(),
 });
@@ -46,14 +45,15 @@ export async function getTokenCache(
   }
 }

+const TTL_SECONDS = 30 * 24 * 60 * 60;
+
 export async function setTokenCache(
   redis: KV,
   key: string,
   value: TokenCacheEntry,
 ): Promise<void> {
   const encodedValue = TokenCacheEntryCodec.encode(value);
-  const ttlSeconds = Math.floor(REFRESH_ACCESS_TOKEN_BEFORE / 1000);
-  await redis.setex(key, ttlSeconds, encodedValue);
+  await redis.setex(key, TTL_SECONDS, encodedValue);
 }

 export async function deleteTokenCache(redis: KV, key: string): Promise<void> {
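The cache TTL is now a fixed 30 days (30 × 24 × 60 × 60 = 2,592,000 seconds) instead of being derived from `REFRESH_ACCESS_TOKEN_BEFORE` (apparently a millisecond constant, given the old divide-by-1000), so a cached entry and its refresh token outlive the short access-token window. A quick check of the arithmetic and the `SETEX` units:

```ts
const TTL_SECONDS = 30 * 24 * 60 * 60;
console.assert(TTL_SECONDS === 2_592_000);
// ioredis setex takes the TTL in whole seconds:
//   await redis.setex(key, TTL_SECONDS, encodedValue);
```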
26 www/app/lib/useLoginRequiredPages.ts Normal file
@@ -0,0 +1,26 @@
+// for paths that are not supposed to be public
+import { PROTECTED_PAGES } from "./auth";
+import { usePathname } from "next/navigation";
+import { useAuth } from "./AuthProvider";
+import { useEffect } from "react";
+
+const HOME = "/" as const;
+
+export const useLoginRequiredPages = () => {
+  const pathname = usePathname();
+  const isProtected = PROTECTED_PAGES.test(pathname);
+  const auth = useAuth();
+  const isNotLoggedIn = auth.status === "unauthenticated";
+  // safety
+  const isLastDestination = pathname === HOME;
+  const shouldRedirect = isNotLoggedIn && isProtected && !isLastDestination;
+  useEffect(() => {
+    if (!shouldRedirect) return;
+    // on the backend, the redirect goes straight to the auth provider, but we don't have it because it's hidden inside next-auth middleware
+    // so we just "softly" lead the user to the main page
+    // warning: if HOME redirects somewhere else, we won't be protected by isLastDestination
+    window.location.href = HOME;
+  }, [shouldRedirect]);
+  // optionally save from blink, since window.location.href takes a bit of time
+  return shouldRedirect ? HOME : null;
+};
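The hook returns the redirect target (or `null`) so a consumer can suppress protected UI while `window.location.href` is still navigating. A sketch of a consumer component; `LoginGuard` is invented here for illustration and is not part of the diff:

```tsx
"use client";
// Hypothetical consumer of useLoginRequiredPages.
import type { ReactNode } from "react";
import { useLoginRequiredPages } from "./lib/useLoginRequiredPages";

export function LoginGuard({ children }: { children: ReactNode }) {
  const redirectTarget = useLoginRequiredPages();
  // Non-null means a redirect to HOME is in flight: render nothing to
  // avoid the "blink" the hook's closing comment warns about.
  if (redirectTarget) return null;
  return <>{children}</>;
}
```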
@@ -2,6 +2,7 @@ import { useAuth } from "./AuthProvider";

 export const useUserName = (): string | null | undefined => {
   const auth = useAuth();
-  if (auth.status !== "authenticated") return undefined;
+  if (auth.status !== "authenticated" && auth.status !== "refreshing")
+    return undefined;
   return auth.user?.name || null;
 };
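The widened guard keeps the user's name rendered during background token refreshes, where the old check briefly returned `undefined` (status unknown). This presumes `auth.status` is a string union with a transient `"refreshing"` state; the exact union is not shown in this diff:

```ts
// Assumed shape, for illustration only.
type AuthStatus = "unauthenticated" | "refreshing" | "authenticated";

// Old guard: status === "refreshing" -> undefined -> name-dependent UI unmounts.
// New guard: "refreshing" still exposes auth.user?.name, so the UI stays stable.
```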
@@ -158,6 +158,17 @@ export const assertExists = <T>(
   return value;
 };

+export const assertNotExists = <T>(
+  value: T | null | undefined,
+  err?: string,
+): void => {
+  if (value !== null && value !== undefined) {
+    throw new Error(
+      `Assertion failed: ${err ?? "value is not null or undefined"}`,
+    );
+  }
+};
+
 export const assertExistsAndNonEmptyString = (
   value: string | null | undefined,
 ): NonEmptyString =>
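`assertNotExists` is the inverse of `assertExists`: it throws when a value is present. Usage, per the implementation above (module path assumed):

```ts
import { assertNotExists } from "./utils"; // path assumed

assertNotExists(null); // passes
assertNotExists(undefined, "no session expected"); // passes
assertNotExists("token"); // throws: Assertion failed: value is not null or undefined
assertNotExists(0, "counter must be unset"); // throws: 0 is neither null nor undefined
```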
@@ -1,16 +1,7 @@
 import { withAuth } from "next-auth/middleware";
 import { getConfig } from "./app/lib/edgeConfig";
 import { NextResponse } from "next/server";
+import { PROTECTED_PAGES } from "./app/lib/auth";
-const LOGIN_REQUIRED_PAGES = [
-  "/transcripts/[!new]",
-  "/browse(.*)",
-  "/rooms(.*)",
-];
-
-const PROTECTED_PAGES = new RegExp(
-  LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
-);
-
 export const config = {
   matcher: [
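Centralizing `PROTECTED_PAGES` in `./app/lib/auth` lets the middleware and the new `useLoginRequiredPages` hook test the same pattern. Assuming the definition moved verbatim from the removed lines, it behaves like this (note the anchored alternation):

```ts
// Reconstructed from the lines removed above; assumes an unchanged move.
const LOGIN_REQUIRED_PAGES = ["/transcripts/[!new]", "/browse(.*)", "/rooms(.*)"];
const PROTECTED_PAGES = new RegExp(
  LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
);

console.assert(PROTECTED_PAGES.test("/browse") === true); // (.*) matches empty
console.assert(PROTECTED_PAGES.test("/rooms/standup") === true);
console.assert(PROTECTED_PAGES.test("/") === false);
```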
@@ -45,6 +45,7 @@
     "react-markdown": "^9.0.0",
     "react-qr-code": "^2.0.12",
     "react-select-search": "^4.1.7",
+    "redlock": "5.0.0-beta.2",
     "sass": "^1.63.6",
     "simple-peer": "^9.11.1",
     "tailwindcss": "^3.3.2",
22 www/pnpm-lock.yaml generated
@@ -106,6 +106,9 @@ importers:
       react-select-search:
         specifier: ^4.1.7
         version: 4.1.8(prop-types@15.8.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
+      redlock:
+        specifier: 5.0.0-beta.2
+        version: 5.0.0-beta.2
       sass:
         specifier: ^1.63.6
         version: 1.90.0
@@ -6566,6 +6569,12 @@ packages:
     sass:
       optional: true

+  node-abort-controller@3.1.1:
+    resolution:
+      {
+        integrity: sha512-AGK2yQKIjRuqnc6VkX2Xj5d+QW8xZ87pa1UK6yA6ouUyuxfHuMP6umE5QK7UmTeOAymo+Zx1Fxiuw9rVx8taHQ==,
+      }
+
   node-addon-api@7.1.1:
     resolution:
       {
@@ -7433,6 +7442,13 @@ packages:
       }
     engines: { node: ">=4" }

+  redlock@5.0.0-beta.2:
+    resolution:
+      {
+        integrity: sha512-2RDWXg5jgRptDrB1w9O/JgSZC0j7y4SlaXnor93H/UJm/QyDiFgBKNtrh0TI6oCXqYSaSoXxFh6Sd3VtYfhRXw==,
+      }
+    engines: { node: ">=12" }
+
   redux-thunk@3.1.0:
     resolution:
       {
@@ -13812,6 +13828,8 @@ snapshots:
       - "@babel/core"
       - babel-plugin-macros

+  node-abort-controller@3.1.1: {}
+
   node-addon-api@7.1.1:
     optional: true

@@ -14290,6 +14308,10 @@ snapshots:
     dependencies:
       redis-errors: 1.2.0

+  redlock@5.0.0-beta.2:
+    dependencies:
+      node-abort-controller: 3.1.1
+
   redux-thunk@3.1.0(redux@5.0.1):
     dependencies:
       redux: 5.0.1