Compare commits

...

45 Commits

Author SHA1 Message Date
Igor Loskutov
341884085d migration sync 2025-12-20 11:12:29 -05:00
Igor Loskutov
c9325ea4a4 Merge branch 'main' into feat/durable 2025-12-20 11:07:04 -05:00
Igor Loskutov
f163111b4a durable_started return 2025-12-19 21:07:12 -05:00
f0ee7b531a fix: logout redirect (#802) 2025-12-19 17:19:09 +01:00
37a454f283 chore(main): release 0.24.0 (#793) 2025-12-19 15:00:43 +01:00
964cd78bb6 feat: identify action items (#790)
* Identify action items

* Add action items to mock summary

* Add action items validator

* Remove final prefix from action items

* Make on action items callback required

* Don't mutation action items response

* Assign action items to none on error

* Use timeout constant

* Exclude action items from transcript list
2025-12-18 21:13:47 +01:00
5f458aa4a7 fix: automatically reprocess daily recordings (#797)
* Automatically reprocess recordings

* Restore the comments

* Remove redundant check

* Fix indent

* Add comment about cyclic import
2025-12-18 21:10:04 +01:00
5f7dfadabd fix: retry on workflow timeout (#798) 2025-12-18 20:49:06 +01:00
0bc971ba96 fix: main menu login (#800) 2025-12-18 20:48:39 +01:00
Igor Loskutov
84c1a57c83 better log webhook events 2025-12-18 14:22:03 -05:00
Igor Loskutov
af425e6dfd pr autoreviewer fixes 2025-12-18 13:53:39 -05:00
Igor Loskutov
28007e846f add forgotten file 2025-12-18 13:46:35 -05:00
Igor Loskutov
17a93b7393 can_replay cancelled 2025-12-18 13:44:57 -05:00
Igor Loskutov
0ce38dfeb3 self-review round 2025-12-18 13:39:04 -05:00
Igor Loskutov
8272c79856 self-review round 2025-12-18 13:15:18 -05:00
Igor Loskutov
acad80df50 self-review round 2025-12-18 12:46:05 -05:00
Igor Loskutov
61e2b3211e self-review wip 2025-12-18 11:42:32 -05:00
Igor Loskutov
bf90bd076b Merge branch 'feat/durable' of github-monadical:Monadical-SAS/reflector into feat/durable 2025-12-17 15:47:07 -05:00
Igor Loskutov
557073850e more NES instead of str 2025-12-17 15:46:59 -05:00
Igor Monadical
ce6b185bf7 Merge branch 'main' into feat/durable 2025-12-17 15:42:03 -05:00
Igor Loskutov
cb41e9e779 self-review round 2025-12-17 15:25:29 -05:00
Igor Loskutov
f7f2957fc9 dry hatched with celery - 2 2025-12-17 15:11:33 -05:00
Igor Loskutov
d683a83906 dry hatchet with celery 2025-12-17 14:48:23 -05:00
Igor Loskutov
e77f38a12a self-review round 2025-12-17 13:51:50 -05:00
Igor Loskutov
6ae621eadd self-review round 2025-12-17 13:29:17 -05:00
Igor Loskutov
6ae8f1d870 self-review round 2025-12-17 13:05:08 -05:00
Igor Loskutov
7a29c742c5 hatchet: restore zullip report 2025-12-17 11:06:27 -05:00
Igor Monadical
c62e3c0753 incorporate daily api undocumented feature (#796)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-17 09:51:55 -05:00
Igor Loskutov
298abe8656 self-review (no-mistakes) 2025-12-16 22:59:56 -05:00
Igor Loskutov
67420d2ec4 self-review (no-mistakes) 2025-12-16 22:47:09 -05:00
Igor Loskutov
4b00dda0ca hatchet init db 2025-12-16 17:24:16 -05:00
Igor Loskutov
7591387e52 cleanup 2025-12-16 16:49:42 -05:00
Igor Loskutov
447bf97854 . 2025-12-16 16:44:15 -05:00
Igor Loskutov
c280e8dc1d and add hatchet processor setting to room 2025-12-16 16:40:18 -05:00
Igor Loskutov
9b8f76929e remove shadow mode for hatchet 2025-12-16 16:39:52 -05:00
Igor Loskutov
409c257889 hatched logs 2025-12-16 16:31:29 -05:00
Igor Loskutov
fce0945564 self-review (no-mistakes) 2025-12-16 16:04:52 -05:00
Igor Loskutov
e81e0cb5c3 remove conductor and add hatchet tests (no-mistakes) 2025-12-16 13:24:05 -05:00
Igor Loskutov
1f49deb5b5 hatchet no-mistake, better logging 2025-12-16 12:26:59 -05:00
Igor Loskutov
0f266eabdf hatchet no-mistake 2025-12-16 12:09:02 -05:00
Igor Loskutov
c5498d26bf hatchet no-mistake 2025-12-16 00:48:58 -05:00
Igor Monadical
16284e1ac3 fix: daily video optimisation (#789)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 15:00:53 -05:00
Igor Monadical
443982617d coolify pull policy (#792)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 14:54:05 -05:00
Igor Monadical
23023b3cdb update nextjs (#791)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 13:58:34 -05:00
Igor Loskutov
243ff2177c durable (no-mistakes) 2025-12-15 12:18:47 -05:00
59 changed files with 5684 additions and 2276 deletions

View File

@@ -1,5 +1,20 @@
# Changelog
## [0.24.0](https://github.com/Monadical-SAS/reflector/compare/v0.23.2...v0.24.0) (2025-12-18)
### Features
* identify action items ([#790](https://github.com/Monadical-SAS/reflector/issues/790)) ([964cd78](https://github.com/Monadical-SAS/reflector/commit/964cd78bb699d83d012ae4b8c96565df25b90a5d))
### Bug Fixes
* automatically reprocess daily recordings ([#797](https://github.com/Monadical-SAS/reflector/issues/797)) ([5f458aa](https://github.com/Monadical-SAS/reflector/commit/5f458aa4a7ec3d00ca5ec49d62fcc8ad232b138e))
* daily video optimisation ([#789](https://github.com/Monadical-SAS/reflector/issues/789)) ([16284e1](https://github.com/Monadical-SAS/reflector/commit/16284e1ac3faede2b74f0d91b50c0b5612af2c35))
* main menu login ([#800](https://github.com/Monadical-SAS/reflector/issues/800)) ([0bc971b](https://github.com/Monadical-SAS/reflector/commit/0bc971ba966a52d719c8c240b47dc7b3bdea4391))
* retry on workflow timeout ([#798](https://github.com/Monadical-SAS/reflector/issues/798)) ([5f7dfad](https://github.com/Monadical-SAS/reflector/commit/5f7dfadabd3e8017406ad3720ba495a59963ee34))
## [0.23.2](https://github.com/Monadical-SAS/reflector/compare/v0.23.1...v0.23.2) (2025-12-11)

View File

@@ -4,6 +4,7 @@
services:
web:
image: monadicalsas/reflector-frontend:latest
pull_policy: always
environment:
- KV_URL=${KV_URL:-redis://redis:6379}
- SITE_URL=${SITE_URL}

View File

@@ -34,6 +34,20 @@ services:
environment:
ENTRYPOINT: beat
hatchet-worker:
build:
context: server
volumes:
- ./server/:/app/
- /app/.venv
env_file:
- ./server/.env
environment:
ENTRYPOINT: hatchet-worker
depends_on:
hatchet:
condition: service_healthy
redis:
image: redis:7.2
ports:
@@ -55,6 +69,7 @@ services:
postgres:
image: postgres:17
command: postgres -c 'max_connections=200'
ports:
- 5432:5432
environment:
@@ -63,6 +78,42 @@ services:
POSTGRES_DB: reflector
volumes:
- ./data/postgres:/var/lib/postgresql/data
- ./server/docker/init-hatchet-db.sql:/docker-entrypoint-initdb.d/init-hatchet-db.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -d reflector -U reflector"]
interval: 10s
timeout: 10s
retries: 5
start_period: 10s
hatchet:
image: ghcr.io/hatchet-dev/hatchet/hatchet-lite:latest
ports:
- "8889:8888"
- "7078:7077"
depends_on:
postgres:
condition: service_healthy
environment:
DATABASE_URL: "postgresql://reflector:reflector@postgres:5432/hatchet?sslmode=disable"
SERVER_AUTH_COOKIE_DOMAIN: localhost
SERVER_AUTH_COOKIE_INSECURE: "t"
SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
SERVER_GRPC_INSECURE: "t"
SERVER_GRPC_BROADCAST_ADDRESS: hatchet:7077
SERVER_GRPC_PORT: "7077"
SERVER_URL: http://localhost:8889
SERVER_AUTH_SET_EMAIL_VERIFIED: "t"
# SERVER_DEFAULT_ENGINE_VERSION: "V1" # default
SERVER_INTERNAL_CLIENT_INTERNAL_GRPC_BROADCAST_ADDRESS: hatchet:7077
volumes:
- ./data/hatchet-config:/config
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8888/api/live"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
default:

View File

@@ -53,6 +53,36 @@ response = sqs.receive_message(QueueUrl=queue_url, ...)
uv run /app/requeue_uploaded_file.py TRANSCRIPT_ID
```
## Hatchet Setup (Fresh DB)
After resetting the Hatchet database:
### Option A: Automatic (CLI)
```bash
# Get default tenant ID and create token in one command
TENANT_ID=$(docker compose exec -T postgres psql -U reflector -d hatchet -t -c \
"SELECT id FROM \"Tenant\" WHERE slug = 'default';" | tr -d ' \n') && \
TOKEN=$(docker compose exec -T hatchet /hatchet-admin token create \
--config /config --tenant-id "$TENANT_ID" 2>/dev/null | tr -d '\n') && \
echo "HATCHET_CLIENT_TOKEN=$TOKEN"
```
Copy the output to `server/.env`.
### Option B: Manual (UI)
1. Create API token at http://localhost:8889 → Settings → API Tokens
2. Update `server/.env`: `HATCHET_CLIENT_TOKEN=<new-token>`
### Then restart workers
```bash
docker compose restart server hatchet-worker
```
Workflows register automatically when hatchet-worker starts.
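The result can be sanity-checked with a short script before restarting the workers (a minimal sketch; `has_hatchet_token` and the default path are illustrative, not part of the codebase):

```python
from pathlib import Path

def has_hatchet_token(env_path: str = "server/.env") -> bool:
    """Return True if the env file contains a non-empty HATCHET_CLIENT_TOKEN line."""
    path = Path(env_path)
    if not path.exists():
        return False
    for line in path.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "HATCHET_CLIENT_TOKEN" and value.strip():
            return True
    return False
```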
## Pipeline Management
### Continuing a stuck pipeline from the final summaries (identify_participants) step

View File

@@ -0,0 +1,2 @@
-- Create hatchet database for Hatchet workflow engine
CREATE DATABASE hatchet;

View File

@@ -0,0 +1,26 @@
"""add_action_items
Revision ID: 05f8688d6895
Revises: bbafedfa510c
Create Date: 2025-12-12 11:57:50.209658
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "05f8688d6895"
down_revision: Union[str, None] = "bbafedfa510c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column("transcript", sa.Column("action_items", sa.JSON(), nullable=True))
def downgrade() -> None:
op.drop_column("transcript", "action_items")

View File

@@ -0,0 +1,28 @@
"""add workflow_run_id to transcript
Revision ID: 0f943fede0e0
Revises: 05f8688d6895
Create Date: 2025-12-16 01:54:13.855106
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "0f943fede0e0"
down_revision: Union[str, None] = "05f8688d6895"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
with op.batch_alter_table("transcript", schema=None) as batch_op:
batch_op.add_column(sa.Column("workflow_run_id", sa.String(), nullable=True))
def downgrade() -> None:
with op.batch_alter_table("transcript", schema=None) as batch_op:
batch_op.drop_column("workflow_run_id")

View File

@@ -0,0 +1,35 @@
"""add use_hatchet to room
Revision ID: bd3a729bb379
Revises: 0f943fede0e0
Create Date: 2025-12-16 16:34:03.594231
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision: str = "bd3a729bb379"
down_revision: Union[str, None] = "0f943fede0e0"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.add_column(
sa.Column(
"use_hatchet",
sa.Boolean(),
server_default=sa.text("false"),
nullable=False,
)
)
def downgrade() -> None:
with op.batch_alter_table("room", schema=None) as batch_op:
batch_op.drop_column("use_hatchet")

View File

@@ -39,6 +39,7 @@ dependencies = [
"pytest-env>=1.1.5",
"webvtt-py>=0.5.0",
"icalendar>=6.0.0",
"hatchet-sdk>=0.47.0",
]
[dependency-groups]

View File

@@ -18,6 +18,7 @@ from .requests import (
# Response models
from .responses import (
FinishedRecordingResponse,
MeetingParticipant,
MeetingParticipantsResponse,
MeetingResponse,
@@ -79,6 +80,7 @@ __all__ = [
"MeetingParticipant",
"MeetingResponse",
"RecordingResponse",
"FinishedRecordingResponse",
"RecordingS3Info",
"MeetingTokenResponse",
"WebhookResponse",

View File

@@ -47,7 +47,7 @@ class DailyApiError(Exception):
)
super().__init__(
f"Daily.co API error: {operation} failed with status {self.status_code}"
f"Daily.co API error: {operation} failed with status {self.status_code}: {response.text}"
)

View File

@@ -121,7 +121,10 @@ class RecordingS3Info(BaseModel):
class RecordingResponse(BaseModel):
"""
Response from recording retrieval endpoint.
Response from recording retrieval endpoint (network layer).
Duration may be None for recordings still being processed by Daily.
Use FinishedRecordingResponse for recordings ready for processing.
Reference: https://docs.daily.co/reference/rest-api/recordings
"""
@@ -135,7 +138,9 @@ class RecordingResponse(BaseModel):
max_participants: int | None = Field(
None, description="Maximum participants during recording (may be missing)"
)
duration: int = Field(description="Recording duration in seconds")
duration: int | None = Field(
None, description="Recording duration in seconds (None if still processing)"
)
share_token: NonEmptyString | None = Field(
None, description="Token for sharing recording"
)
@@ -149,6 +154,25 @@ class RecordingResponse(BaseModel):
None, description="Meeting session identifier (may be missing)"
)
def to_finished(self) -> "FinishedRecordingResponse | None":
"""Convert to FinishedRecordingResponse if duration is available and status is finished."""
if self.duration is None or self.status != "finished":
return None
return FinishedRecordingResponse(**self.model_dump())
class FinishedRecordingResponse(RecordingResponse):
"""
Recording with confirmed duration - ready for processing.
This model guarantees duration is present and status is finished.
"""
status: Literal["finished"] = Field(
description="Recording status (always 'finished')"
)
duration: int = Field(description="Recording duration in seconds")
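The narrowing above follows the "parse, don't validate" pattern; a dependency-free sketch of the same idea (hypothetical `Recording`/`FinishedRecording` dataclasses standing in for the pydantic models):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recording:
    status: str
    duration: Optional[int]  # None while the provider is still processing

@dataclass
class FinishedRecording:
    status: str    # always "finished" once narrowed
    duration: int  # guaranteed present once narrowed

def to_finished(rec: Recording) -> Optional[FinishedRecording]:
    # Same guard as to_finished() in the diff: refuse to narrow unless
    # both invariants hold, so downstream code never re-checks them.
    if rec.duration is None or rec.status != "finished":
        return None
    return FinishedRecording(status=rec.status, duration=rec.duration)
```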
class MeetingTokenResponse(BaseModel):
"""

View File

@@ -3,6 +3,7 @@ from typing import Literal
import sqlalchemy as sa
from pydantic import BaseModel, Field
from sqlalchemy import or_
from reflector.db import get_database, metadata
from reflector.utils import generate_uuid4
@@ -79,5 +80,35 @@ class RecordingController:
results = await get_database().fetch_all(query)
return [Recording(**row) for row in results]
async def get_multitrack_needing_reprocessing(
self, bucket_name: str
) -> list[Recording]:
"""
Get multitrack recordings that need reprocessing:
- Have track_keys (multitrack)
- Either have no transcript OR transcript has error status
This is more efficient than fetching all recordings and filtering in Python.
"""
from reflector.db.transcripts import (
transcripts, # noqa: PLC0415 cyclic import
)
query = (
recordings.select()
.outerjoin(transcripts, recordings.c.id == transcripts.c.recording_id)
.where(
recordings.c.bucket_name == bucket_name,
recordings.c.track_keys.isnot(None),
or_(
transcripts.c.id.is_(None),
transcripts.c.status == "error",
),
)
)
results = await get_database().fetch_all(query)
recordings_list = [Recording(**row) for row in results]
return [r for r in recordings_list if r.is_multitrack]
recordings_controller = RecordingController()
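The outer-join filter above can be illustrated with plain SQL against an in-memory SQLite database (a hypothetical minimal schema, not the real tables):

```python
import sqlite3

# Keep recordings that have tracks and whose transcript is missing or errored.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recordings (id TEXT PRIMARY KEY, track_keys TEXT);
CREATE TABLE transcripts (id TEXT PRIMARY KEY, recording_id TEXT, status TEXT);
INSERT INTO recordings VALUES ('r1', 'a,b'), ('r2', 'c,d'), ('r3', 'e,f');
INSERT INTO transcripts VALUES ('t1', 'r1', 'ended'), ('t2', 'r2', 'error');
""")

def needing_reprocessing(conn):
    # LEFT OUTER JOIN so recordings without any transcript survive the join;
    # t.id IS NULL selects those, t.status = 'error' selects failed ones.
    rows = conn.execute("""
        SELECT r.id FROM recordings r
        LEFT OUTER JOIN transcripts t ON r.id = t.recording_id
        WHERE r.track_keys IS NOT NULL
          AND (t.id IS NULL OR t.status = 'error')
        ORDER BY r.id
    """).fetchall()
    return [row[0] for row in rows]
```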

View File

@@ -57,6 +57,12 @@ rooms = sqlalchemy.Table(
sqlalchemy.String,
nullable=False,
),
sqlalchemy.Column(
"use_hatchet",
sqlalchemy.Boolean,
nullable=False,
server_default=false(),
),
sqlalchemy.Index("idx_room_is_shared", "is_shared"),
sqlalchemy.Index("idx_room_ics_enabled", "ics_enabled"),
)
@@ -85,6 +91,7 @@ class Room(BaseModel):
ics_last_sync: datetime | None = None
ics_last_etag: str | None = None
platform: Platform = Field(default_factory=lambda: settings.DEFAULT_VIDEO_PLATFORM)
use_hatchet: bool = False
class RoomController:

View File

@@ -44,6 +44,7 @@ transcripts = sqlalchemy.Table(
sqlalchemy.Column("title", sqlalchemy.String),
sqlalchemy.Column("short_summary", sqlalchemy.String),
sqlalchemy.Column("long_summary", sqlalchemy.String),
sqlalchemy.Column("action_items", sqlalchemy.JSON),
sqlalchemy.Column("topics", sqlalchemy.JSON),
sqlalchemy.Column("events", sqlalchemy.JSON),
sqlalchemy.Column("participants", sqlalchemy.JSON),
@@ -83,6 +84,8 @@ transcripts = sqlalchemy.Table(
sqlalchemy.Column("audio_deleted", sqlalchemy.Boolean),
sqlalchemy.Column("room_id", sqlalchemy.String),
sqlalchemy.Column("webvtt", sqlalchemy.Text),
# Hatchet workflow run ID for resumption of failed workflows
sqlalchemy.Column("workflow_run_id", sqlalchemy.String),
sqlalchemy.Index("idx_transcript_recording_id", "recording_id"),
sqlalchemy.Index("idx_transcript_user_id", "user_id"),
sqlalchemy.Index("idx_transcript_created_at", "created_at"),
@@ -164,6 +167,10 @@ class TranscriptFinalLongSummary(BaseModel):
long_summary: str
class TranscriptActionItems(BaseModel):
action_items: dict
class TranscriptFinalTitle(BaseModel):
title: str
@@ -204,6 +211,7 @@ class Transcript(BaseModel):
locked: bool = False
short_summary: str | None = None
long_summary: str | None = None
action_items: dict | None = None
topics: list[TranscriptTopic] = []
events: list[TranscriptEvent] = []
participants: list[TranscriptParticipant] | None = []
@@ -217,6 +225,7 @@ class Transcript(BaseModel):
zulip_message_id: int | None = None
audio_deleted: bool | None = None
webvtt: str | None = None
workflow_run_id: str | None = None # Hatchet workflow run ID for resumption
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
@@ -368,7 +377,12 @@ class TranscriptController:
room_id: str | None = None,
search_term: str | None = None,
return_query: bool = False,
exclude_columns: list[str] = ["topics", "events", "participants"],
exclude_columns: list[str] = [
"topics",
"events",
"participants",
"action_items",
],
) -> list[Transcript]:
"""
Get all transcripts

View File

@@ -0,0 +1,5 @@
"""Hatchet workflow orchestration for Reflector."""
from reflector.hatchet.client import HatchetClientManager
__all__ = ["HatchetClientManager"]

View File

@@ -0,0 +1,98 @@
"""WebSocket broadcasting helpers for Hatchet workflows.
DUPLICATION NOTE: currently duplicates the Celery logic; this copy is the one to keep once Celery is deprecated.
Provides WebSocket broadcasting for Hatchet that matches Celery's @broadcast_to_sockets
decorator behavior. Events are broadcast to transcript rooms and user rooms.
"""
from typing import Any
import structlog
from reflector.db.transcripts import Transcript, TranscriptEvent, transcripts_controller
from reflector.utils.string import NonEmptyString
from reflector.ws_manager import get_ws_manager
# Events that should also be sent to user room (matches Celery behavior)
USER_ROOM_EVENTS = {"STATUS", "FINAL_TITLE", "DURATION"}
async def broadcast_event(
transcript_id: NonEmptyString,
event: TranscriptEvent,
logger: structlog.BoundLogger,
) -> None:
"""Broadcast a TranscriptEvent to WebSocket subscribers.
Fire-and-forget: errors are logged but don't interrupt workflow execution.
"""
logger.info(
"Broadcasting event",
transcript_id=transcript_id,
event_type=event.event,
)
try:
ws_manager = get_ws_manager()
await ws_manager.send_json(
room_id=f"ts:{transcript_id}",
message=event.model_dump(mode="json"),
)
logger.info(
"Event sent to transcript room",
transcript_id=transcript_id,
event_type=event.event,
)
if event.event in USER_ROOM_EVENTS:
transcript = await transcripts_controller.get_by_id(transcript_id)
if transcript and transcript.user_id:
await ws_manager.send_json(
room_id=f"user:{transcript.user_id}",
message={
"event": f"TRANSCRIPT_{event.event}",
"data": {"id": transcript_id, **event.data},
},
)
except Exception as e:
logger.warning(
"Failed to broadcast event",
error=str(e),
transcript_id=transcript_id,
event_type=event.event,
)
async def set_status_and_broadcast(
transcript_id: NonEmptyString,
status: str,
logger: structlog.BoundLogger,
) -> None:
"""Set transcript status and broadcast to WebSocket.
Wrapper around transcripts_controller.set_status that adds WebSocket broadcasting.
"""
event = await transcripts_controller.set_status(transcript_id, status)
if event:
await broadcast_event(transcript_id, event, logger=logger)
async def append_event_and_broadcast(
transcript_id: NonEmptyString,
transcript: Transcript,
event_name: str,
data: Any,
logger: structlog.BoundLogger,
) -> TranscriptEvent:
"""Append event to transcript and broadcast to WebSocket.
Wrapper around transcripts_controller.append_event that adds WebSocket broadcasting.
"""
event = await transcripts_controller.append_event(
transcript=transcript,
event=event_name,
data=data,
)
await broadcast_event(transcript_id, event, logger=logger)
return event
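The fire-and-forget contract these helpers follow can be sketched generically (illustrative names, not the module's API): a failed broadcast is logged and swallowed so it never fails the workflow step.

```python
import asyncio
import logging

logger = logging.getLogger("broadcast")

async def fire_and_forget(coro_func, *args):
    """Await a coroutine, logging failures instead of propagating them,
    so a dead WebSocket connection never interrupts workflow execution."""
    try:
        return await coro_func(*args)
    except Exception as e:
        logger.warning("broadcast failed: %s", e)
        return None
```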

View File

@@ -0,0 +1,111 @@
"""Hatchet Python client wrapper.
Uses singleton pattern because:
1. Hatchet client maintains persistent gRPC connections for workflow registration
2. Creating multiple clients would cause registration conflicts and resource leaks
3. The SDK is designed for a single client instance per process
4. Tests use `HatchetClientManager.reset()` to isolate state between tests
"""
import logging
import threading
from hatchet_sdk import ClientConfig, Hatchet
from hatchet_sdk.clients.rest.models import V1TaskStatus
from reflector.logger import logger
from reflector.settings import settings
class HatchetClientManager:
"""Singleton manager for Hatchet client connections.
See module docstring for rationale. For test isolation, use `reset()`.
"""
_instance: Hatchet | None = None
_lock = threading.Lock()
@classmethod
def get_client(cls) -> Hatchet:
"""Get or create the Hatchet client (thread-safe singleton)."""
if cls._instance is None:
with cls._lock:
if cls._instance is None:
if not settings.HATCHET_CLIENT_TOKEN:
raise ValueError("HATCHET_CLIENT_TOKEN must be set")
# Pass root logger to Hatchet so workflow logs appear in dashboard
root_logger = logging.getLogger()
cls._instance = Hatchet(
debug=settings.HATCHET_DEBUG,
config=ClientConfig(logger=root_logger),
)
return cls._instance
@classmethod
async def start_workflow(
cls,
workflow_name: str,
input_data: dict,
additional_metadata: dict | None = None,
) -> str:
"""Start a workflow and return the workflow run ID.
Args:
workflow_name: Name of the workflow to trigger.
input_data: Input data for the workflow run.
additional_metadata: Optional metadata for filtering in dashboard
(e.g., transcript_id, recording_id).
"""
client = cls.get_client()
result = await client.runs.aio_create(
workflow_name,
input_data,
additional_metadata=additional_metadata,
)
return result.run.metadata.id
@classmethod
async def get_workflow_run_status(cls, workflow_run_id: str) -> V1TaskStatus:
client = cls.get_client()
return await client.runs.aio_get_status(workflow_run_id)
@classmethod
async def cancel_workflow(cls, workflow_run_id: str) -> None:
client = cls.get_client()
await client.runs.aio_cancel(workflow_run_id)
logger.info("[Hatchet] Cancelled workflow", workflow_run_id=workflow_run_id)
@classmethod
async def replay_workflow(cls, workflow_run_id: str) -> None:
client = cls.get_client()
await client.runs.aio_replay(workflow_run_id)
logger.info("[Hatchet] Replaying workflow", workflow_run_id=workflow_run_id)
@classmethod
async def can_replay(cls, workflow_run_id: str) -> bool:
"""Check if workflow can be replayed (is FAILED)."""
try:
status = await cls.get_workflow_run_status(workflow_run_id)
return status == V1TaskStatus.FAILED or status == V1TaskStatus.CANCELLED
except Exception as e:
logger.warning(
"[Hatchet] Failed to check replay status",
workflow_run_id=workflow_run_id,
error=str(e),
)
return False
@classmethod
async def get_workflow_status(cls, workflow_run_id: str) -> dict:
"""Get the full workflow run details as dict."""
client = cls.get_client()
run = await client.runs.aio_get(workflow_run_id)
return run.to_dict()
@classmethod
def reset(cls) -> None:
"""Reset the client instance (for testing)."""
with cls._lock:
cls._instance = None
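The double-checked locking used by `get_client()` can be shown in isolation (a minimal sketch; the managed object here is just a dict, not a Hatchet client):

```python
import threading

class SingletonManager:
    """Minimal double-checked-locking singleton, same shape as
    HatchetClientManager: a lock-free fast path, creation serialized
    under the lock, and a re-check to close the race window."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls):
        if cls._instance is None:          # fast path, no lock taken
            with cls._lock:                # slow path, serialize creation
                if cls._instance is None:  # re-check under the lock
                    cls._instance = {"created_by": threading.get_ident()}
        return cls._instance

    @classmethod
    def reset(cls):
        """Drop the instance (test isolation, like reset() above)."""
        with cls._lock:
            cls._instance = None
```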

View File

@@ -0,0 +1,63 @@
"""
Run Hatchet workers for the diarization pipeline.
Runs as a separate process, just like Celery workers.
Usage:
uv run -m reflector.hatchet.run_workers
# Or via docker:
docker compose exec server uv run -m reflector.hatchet.run_workers
"""
import signal
import sys
from reflector.logger import logger
from reflector.settings import settings
def main() -> None:
"""Start Hatchet worker polling."""
if not settings.HATCHET_ENABLED:
logger.error("HATCHET_ENABLED is False, not starting workers")
sys.exit(1)
if not settings.HATCHET_CLIENT_TOKEN:
logger.error("HATCHET_CLIENT_TOKEN is not set")
sys.exit(1)
logger.info(
"Starting Hatchet workers",
debug=settings.HATCHET_DEBUG,
)
# Import here (not top-level) - workflow modules call HatchetClientManager.get_client()
# at module level because Hatchet SDK decorators (@workflow.task) bind at import time.
# Can't use lazy init: decorators need the client object when function is defined.
from reflector.hatchet.client import HatchetClientManager # noqa: PLC0415
from reflector.hatchet.workflows import ( # noqa: PLC0415
diarization_pipeline,
track_workflow,
)
hatchet = HatchetClientManager.get_client()
worker = hatchet.worker(
"reflector-diarization-worker",
workflows=[diarization_pipeline, track_workflow],
)
def shutdown_handler(signum: int, frame) -> None:
logger.info("Received shutdown signal, stopping workers...")
# Worker cleanup happens automatically on exit
sys.exit(0)
signal.signal(signal.SIGINT, shutdown_handler)
signal.signal(signal.SIGTERM, shutdown_handler)
logger.info("Starting Hatchet worker polling...")
worker.start()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,14 @@
"""Hatchet workflow definitions."""
from reflector.hatchet.workflows.diarization_pipeline import (
PipelineInput,
diarization_pipeline,
)
from reflector.hatchet.workflows.track_processing import TrackInput, track_workflow
__all__ = [
"diarization_pipeline",
"track_workflow",
"PipelineInput",
"TrackInput",
]

View File

@@ -0,0 +1,938 @@
"""
Hatchet main workflow: DiarizationPipeline
Multitrack diarization pipeline for Daily.co recordings.
Orchestrates the full processing flow from recording metadata to final transcript.
Note: This file uses deferred imports (inside functions/tasks) intentionally.
Hatchet workers run in forked processes; fresh imports per task ensure DB connections
are not shared across forks, avoiding connection pooling issues.
"""
import asyncio
import functools
import tempfile
from contextlib import asynccontextmanager
from datetime import timedelta
from pathlib import Path
from typing import Callable
import httpx
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.dailyco_api.client import DailyApiClient
from reflector.hatchet.broadcast import (
append_event_and_broadcast,
set_status_and_broadcast,
)
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.models import (
ConsentResult,
FinalizeResult,
MixdownResult,
PaddedTrackInfo,
ParticipantsResult,
ProcessTracksResult,
RecordingResult,
SummaryResult,
TitleResult,
TopicsResult,
WaveformResult,
WebhookResult,
ZulipResult,
)
from reflector.hatchet.workflows.track_processing import TrackInput, track_workflow
from reflector.logger import logger
from reflector.pipelines import topic_processing
from reflector.processors import AudioFileWriterProcessor
from reflector.processors.types import (
TitleSummary,
TitleSummaryWithId,
Word,
)
from reflector.processors.types import (
Transcript as TranscriptType,
)
from reflector.settings import settings
from reflector.storage.storage_aws import AwsStorage
from reflector.utils.audio_constants import (
PRESIGNED_URL_EXPIRATION_SECONDS,
WAVEFORM_SEGMENTS,
)
from reflector.utils.audio_mixdown import (
detect_sample_rate_from_tracks,
mixdown_tracks_pyav,
)
from reflector.utils.audio_waveform import get_audio_waveform
from reflector.utils.daily import (
filter_cam_audio_tracks,
parse_daily_recording_filename,
)
from reflector.utils.string import NonEmptyString, assert_non_none_and_non_empty
from reflector.zulip import post_transcript_notification
class PipelineInput(BaseModel):
"""Input to trigger the diarization pipeline."""
recording_id: NonEmptyString
tracks: list[dict] # List of {"s3_key": str}
bucket_name: NonEmptyString
transcript_id: NonEmptyString
room_id: NonEmptyString | None = None
hatchet = HatchetClientManager.get_client()
diarization_pipeline = hatchet.workflow(
name="DiarizationPipeline", input_validator=PipelineInput
)
@asynccontextmanager
async def fresh_db_connection():
"""Context manager for database connections in Hatchet workers.
TECH DEBT: Made to make connection fork-aware without changing db code too much.
The real fix would be making the db module fork-aware instead of bypassing it.
Current pattern is acceptable given Hatchet's process model.
"""
import databases # noqa: PLC0415
from reflector.db import _database_context # noqa: PLC0415
_database_context.set(None)
db = databases.Database(settings.DATABASE_URL)
_database_context.set(db)
await db.connect()
try:
yield db
finally:
await db.disconnect()
_database_context.set(None)
async def set_workflow_error_status(transcript_id: NonEmptyString) -> bool:
"""Set transcript status to 'error' on workflow failure.
Returns:
True if status was set successfully, False if failed.
Failure is logged as CRITICAL since it means transcript may be stuck.
"""
try:
async with fresh_db_connection():
await set_status_and_broadcast(transcript_id, "error", logger=logger)
return True
except Exception as e:
logger.critical(
"[Hatchet] CRITICAL: Failed to set error status - transcript may be stuck in 'processing'",
transcript_id=transcript_id,
error=str(e),
exc_info=True,
)
return False
def _spawn_storage():
"""Create fresh storage instance."""
return AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
def with_error_handling(step_name: str, set_error_status: bool = True) -> Callable:
"""Decorator that handles task failures uniformly.
Args:
step_name: Name of the step for logging and progress tracking.
set_error_status: Whether to set transcript status to 'error' on failure.
"""
def decorator(func: Callable) -> Callable:
@functools.wraps(func)
async def wrapper(input: PipelineInput, ctx: Context):
try:
return await func(input, ctx)
except Exception as e:
logger.error(
f"[Hatchet] {step_name} failed",
transcript_id=input.transcript_id,
error=str(e),
exc_info=True,
)
if set_error_status:
await set_workflow_error_status(input.transcript_id)
raise
return wrapper
return decorator
@diarization_pipeline.task(execution_timeout=timedelta(seconds=60), retries=3)
@with_error_handling("get_recording")
async def get_recording(input: PipelineInput, ctx: Context) -> RecordingResult:
"""Fetch recording metadata from Daily.co API."""
ctx.log(f"get_recording: recording_id={input.recording_id}")
# Set transcript status to "processing" at workflow start (broadcasts to WebSocket)
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
await set_status_and_broadcast(
input.transcript_id, "processing", logger=logger
)
ctx.log(f"Set transcript status to processing: {input.transcript_id}")
if not settings.DAILY_API_KEY:
raise ValueError("DAILY_API_KEY not configured")
async with DailyApiClient(api_key=settings.DAILY_API_KEY) as client:
recording = await client.get_recording(input.recording_id)
ctx.log(
f"get_recording complete: room={recording.room_name}, duration={recording.duration}s"
)
return RecordingResult(
id=recording.id,
mtg_session_id=recording.mtgSessionId,
duration=recording.duration,
)
@diarization_pipeline.task(
parents=[get_recording], execution_timeout=timedelta(seconds=60), retries=3
)
@with_error_handling("get_participants")
async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsResult:
"""Fetch participant list from Daily.co API and update transcript in database."""
ctx.log(f"get_participants: transcript_id={input.transcript_id}")
recording = ctx.task_output(get_recording)
mtg_session_id = recording.mtg_session_id
async with fresh_db_connection():
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptParticipant,
transcripts_controller,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
# Note: title NOT cleared - preserves existing titles
await transcripts_controller.update(
transcript,
{
"events": [],
"topics": [],
"participants": [],
},
)
mtg_session_id = assert_non_none_and_non_empty(
mtg_session_id, "mtg_session_id is required"
)
daily_api_key = assert_non_none_and_non_empty(
settings.DAILY_API_KEY, "DAILY_API_KEY is required"
)
async with DailyApiClient(api_key=daily_api_key) as client:
participants = await client.get_meeting_participants(mtg_session_id)
id_to_name = {}
id_to_user_id = {}
for p in participants.data:
if p.user_name:
id_to_name[p.participant_id] = p.user_name
if p.user_id:
id_to_user_id[p.participant_id] = p.user_id
track_keys = [t["s3_key"] for t in input.tracks]
cam_audio_keys = filter_cam_audio_tracks(track_keys)
participants_list = []
for idx, key in enumerate(cam_audio_keys):
try:
parsed = parse_daily_recording_filename(key)
participant_id = parsed.participant_id
except ValueError as e:
logger.error(
"Failed to parse Daily recording filename",
error=str(e),
key=key,
)
continue
default_name = f"Speaker {idx}"
name = id_to_name.get(participant_id, default_name)
user_id = id_to_user_id.get(participant_id)
participant = TranscriptParticipant(
id=participant_id, speaker=idx, name=name, user_id=user_id
)
await transcripts_controller.upsert_participant(transcript, participant)
participants_list.append(
{
"participant_id": participant_id,
"user_name": name,
"speaker": idx,
}
)
ctx.log(f"get_participants complete: {len(participants_list)} participants")
return ParticipantsResult(
participants=participants_list,
num_tracks=len(input.tracks),
source_language=transcript.source_language if transcript else "en",
target_language=transcript.target_language if transcript else "en",
)
@diarization_pipeline.task(
parents=[get_participants], execution_timeout=timedelta(seconds=600), retries=3
)
@with_error_handling("process_tracks")
async def process_tracks(input: PipelineInput, ctx: Context) -> ProcessTracksResult:
"""Spawn child workflows for each track (dynamic fan-out)."""
ctx.log(f"process_tracks: spawning {len(input.tracks)} track workflows")
participants_result = ctx.task_output(get_participants)
source_language = participants_result.source_language
child_coroutines = [
track_workflow.aio_run(
TrackInput(
track_index=i,
s3_key=track["s3_key"],
bucket_name=input.bucket_name,
transcript_id=input.transcript_id,
language=source_language,
)
)
for i, track in enumerate(input.tracks)
]
results = await asyncio.gather(*child_coroutines)
target_language = participants_result.target_language
track_words = []
padded_tracks = []
created_padded_files = set()
for result in results:
transcribe_result = result.get("transcribe_track", {})
track_words.append(transcribe_result.get("words", []))
pad_result = result.get("pad_track", {})
padded_key = pad_result.get("padded_key")
bucket_name = pad_result.get("bucket_name")
# Store S3 key info (not presigned URL) - consumer tasks presign on demand
if padded_key:
padded_tracks.append(
PaddedTrackInfo(key=padded_key, bucket_name=bucket_name)
)
track_index = pad_result.get("track_index")
if pad_result.get("size", 0) > 0 and track_index is not None:
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{track_index}.webm"
created_padded_files.add(storage_path)
all_words = [word for words in track_words for word in words]
all_words.sort(key=lambda w: w.get("start", 0))
ctx.log(
f"process_tracks complete: {len(all_words)} words from {len(input.tracks)} tracks"
)
return ProcessTracksResult(
all_words=all_words,
padded_tracks=padded_tracks,
word_count=len(all_words),
num_tracks=len(input.tracks),
target_language=target_language,
created_padded_files=list(created_padded_files),
)
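The fan-out/fan-in shape above — spawn one child per track, gather the results, then flatten and sort the word lists by start time — can be sketched with plain asyncio. The track payloads below are stand-ins, not the real `track_workflow` outputs:

```python
import asyncio

async def fake_track_workflow(track_index: int) -> dict:
    # Stand-in for track_workflow.aio_run(): each child returns its words,
    # already tagged with the speaker/track index.
    words = [
        {"text": f"w{track_index}", "start": 2.0 - track_index, "speaker": track_index}
    ]
    return {"transcribe_track": {"words": words}}

async def fan_out_and_merge(num_tracks: int) -> list[dict]:
    # Dynamic fan-out: one coroutine per track, gathered concurrently.
    results = await asyncio.gather(
        *[fake_track_workflow(i) for i in range(num_tracks)]
    )
    # Fan-in: flatten per-track word lists and interleave speakers by time.
    all_words = [
        w for r in results for w in r["transcribe_track"].get("words", [])
    ]
    all_words.sort(key=lambda w: w.get("start", 0))
    return all_words

merged = asyncio.run(fan_out_and_merge(3))
```

The sort by `start` is what turns per-speaker transcripts into a single chronological transcript.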
@diarization_pipeline.task(
parents=[process_tracks], execution_timeout=timedelta(seconds=300), retries=3
)
@with_error_handling("mixdown_tracks")
async def mixdown_tracks(input: PipelineInput, ctx: Context) -> MixdownResult:
"""Mix all padded tracks into single audio file using PyAV (same as Celery)."""
ctx.log("mixdown_tracks: mixing padded tracks into single audio file")
track_result = ctx.task_output(process_tracks)
padded_tracks = track_result.padded_tracks
# TODO: consider a NonEmpty collection type to avoid these checks, e.g. sized.NonEmpty from https://github.com/antonagestam/phantom-types/
if not padded_tracks:
raise ValueError("No padded tracks to mixdown")
storage = _spawn_storage()
# Presign URLs on demand (avoids stale URLs on workflow replay)
padded_urls = []
for track_info in padded_tracks:
if track_info.key:
url = await storage.get_file_url(
track_info.key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=track_info.bucket_name,
)
padded_urls.append(url)
valid_urls = [url for url in padded_urls if url]
if not valid_urls:
raise ValueError("No valid padded tracks to mixdown")
target_sample_rate = detect_sample_rate_from_tracks(valid_urls, logger=logger)
if not target_sample_rate:
logger.error("Mixdown failed - no decodable audio frames found")
raise ValueError("No decodable audio frames in any track")
with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as output_file:
output_path = output_file.name  # mktemp() is race-prone; mirror the NamedTemporaryFile pattern used elsewhere
duration_ms_callback_capture_container = [0.0]
async def capture_duration(d):
duration_ms_callback_capture_container[0] = d
writer = AudioFileWriterProcessor(path=output_path, on_duration=capture_duration)
await mixdown_tracks_pyav(
valid_urls,
writer,
target_sample_rate,
offsets_seconds=None,
logger=logger,
)
await writer.flush()
file_size = Path(output_path).stat().st_size
storage_path = f"{input.transcript_id}/audio.mp3"
with open(output_path, "rb") as mixed_file:
await storage.put_file(storage_path, mixed_file)
Path(output_path).unlink(missing_ok=True)
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
await transcripts_controller.update(
transcript, {"audio_location": "storage"}
)
ctx.log(f"mixdown_tracks complete: uploaded {file_size} bytes to {storage_path}")
return MixdownResult(
audio_key=storage_path,
duration=duration_ms_callback_capture_container[0],
tracks_mixed=len(valid_urls),
)
@diarization_pipeline.task(
parents=[mixdown_tracks], execution_timeout=timedelta(seconds=120), retries=3
)
@with_error_handling("generate_waveform")
async def generate_waveform(input: PipelineInput, ctx: Context) -> WaveformResult:
"""Generate audio waveform visualization using AudioWaveformProcessor (matches Celery)."""
ctx.log(f"generate_waveform: transcript_id={input.transcript_id}")
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptWaveform,
transcripts_controller,
)
mixdown_result = ctx.task_output(mixdown_tracks)
audio_key = mixdown_result.audio_key
storage = _spawn_storage()
audio_url = await storage.get_file_url(
audio_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
# Download MP3 to temp file (AudioWaveformProcessor needs local file)
with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as temp_file:
temp_path = temp_file.name
try:
async with httpx.AsyncClient() as client:
response = await client.get(audio_url, timeout=120)
response.raise_for_status()
with open(temp_path, "wb") as f:
f.write(response.content)
waveform = get_audio_waveform(
path=Path(temp_path), segments_count=WAVEFORM_SEGMENTS
)
async with fresh_db_connection():
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
waveform_data = TranscriptWaveform(waveform=waveform)
await append_event_and_broadcast(
input.transcript_id,
transcript,
"WAVEFORM",
waveform_data,
logger=logger,
)
finally:
Path(temp_path).unlink(missing_ok=True)
ctx.log("generate_waveform complete")
return WaveformResult(waveform_generated=True)
@diarization_pipeline.task(
parents=[mixdown_tracks], execution_timeout=timedelta(seconds=300), retries=3
)
@with_error_handling("detect_topics")
async def detect_topics(input: PipelineInput, ctx: Context) -> TopicsResult:
"""Detect topics using LLM and save to database (matches Celery on_topic callback)."""
ctx.log("detect_topics: analyzing transcript for topics")
track_result = ctx.task_output(process_tracks)
words = track_result.all_words
target_language = track_result.target_language
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptTopic,
transcripts_controller,
)
word_objects = [Word(**w) for w in words]
transcript_type = TranscriptType(words=word_objects)
empty_pipeline = topic_processing.EmptyPipeline(logger=logger)
async with fresh_db_connection():
transcript = await transcripts_controller.get_by_id(input.transcript_id)
async def on_topic_callback(data):
topic = TranscriptTopic(
title=data.title,
summary=data.summary,
timestamp=data.timestamp,
transcript=data.transcript.text,
words=data.transcript.words,
)
if isinstance(
data, TitleSummaryWithId
): # Celery parity: main_live_pipeline.py
topic.id = data.id
await transcripts_controller.upsert_topic(transcript, topic)
await append_event_and_broadcast(
input.transcript_id, transcript, "TOPIC", topic, logger=logger
)
topics = await topic_processing.detect_topics(
transcript_type,
target_language,
on_topic_callback=on_topic_callback,
empty_pipeline=empty_pipeline,
)
topics_list = [t.model_dump() for t in topics]
ctx.log(f"detect_topics complete: found {len(topics_list)} topics")
return TopicsResult(topics=topics_list)
@diarization_pipeline.task(
parents=[detect_topics], execution_timeout=timedelta(seconds=120), retries=3
)
@with_error_handling("generate_title")
async def generate_title(input: PipelineInput, ctx: Context) -> TitleResult:
"""Generate meeting title using LLM and save to database (matches Celery on_title callback)."""
ctx.log("generate_title: generating title from topics")
topics_result = ctx.task_output(detect_topics)
topics = topics_result.topics
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptFinalTitle,
transcripts_controller,
)
topic_objects = [TitleSummary(**t) for t in topics]
empty_pipeline = topic_processing.EmptyPipeline(logger=logger)
title_result = None
async with fresh_db_connection():
transcript = await transcripts_controller.get_by_id(input.transcript_id)
async def on_title_callback(data):
nonlocal title_result
title_result = data.title
final_title = TranscriptFinalTitle(title=data.title)
if not transcript.title:
await transcripts_controller.update(
transcript,
{"title": final_title.title},
)
await append_event_and_broadcast(
input.transcript_id,
transcript,
"FINAL_TITLE",
final_title,
logger=logger,
)
await topic_processing.generate_title(
topic_objects,
on_title_callback=on_title_callback,
empty_pipeline=empty_pipeline,
logger=logger,
)
ctx.log(f"generate_title complete: '{title_result}'")
return TitleResult(title=title_result)
@diarization_pipeline.task(
parents=[detect_topics], execution_timeout=timedelta(seconds=300), retries=3
)
@with_error_handling("generate_summary")
async def generate_summary(input: PipelineInput, ctx: Context) -> SummaryResult:
"""Generate meeting summary using LLM and save to database (matches Celery callbacks)."""
ctx.log("generate_summary: generating long and short summaries")
topics_result = ctx.task_output(detect_topics)
topics = topics_result.topics
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
transcripts_controller,
)
topic_objects = [TitleSummary(**t) for t in topics]
empty_pipeline = topic_processing.EmptyPipeline(logger=logger)
summary_result = None
short_summary_result = None
async with fresh_db_connection():
transcript = await transcripts_controller.get_by_id(input.transcript_id)
async def on_long_summary_callback(data):
nonlocal summary_result
summary_result = data.long_summary
final_long_summary = TranscriptFinalLongSummary(
long_summary=data.long_summary
)
await transcripts_controller.update(
transcript,
{"long_summary": final_long_summary.long_summary},
)
await append_event_and_broadcast(
input.transcript_id,
transcript,
"FINAL_LONG_SUMMARY",
final_long_summary,
logger=logger,
)
async def on_short_summary_callback(data):
nonlocal short_summary_result
short_summary_result = data.short_summary
final_short_summary = TranscriptFinalShortSummary(
short_summary=data.short_summary
)
await transcripts_controller.update(
transcript,
{"short_summary": final_short_summary.short_summary},
)
await append_event_and_broadcast(
input.transcript_id,
transcript,
"FINAL_SHORT_SUMMARY",
final_short_summary,
logger=logger,
)
await topic_processing.generate_summaries(
topic_objects,
transcript, # DB transcript for context
on_long_summary_callback=on_long_summary_callback,
on_short_summary_callback=on_short_summary_callback,
empty_pipeline=empty_pipeline,
logger=logger,
)
ctx.log("generate_summary complete")
return SummaryResult(summary=summary_result, short_summary=short_summary_result)
@diarization_pipeline.task(
parents=[generate_waveform, generate_title, generate_summary],
execution_timeout=timedelta(seconds=60),
retries=3,
)
@with_error_handling("finalize")
async def finalize(input: PipelineInput, ctx: Context) -> FinalizeResult:
"""Finalize transcript: save words, emit TRANSCRIPT event, set status to 'ended'.
Matches Celery's on_transcript + set_status behavior.
Note: Title and summaries are already saved by their respective task callbacks.
"""
ctx.log("finalize: saving transcript and setting status to 'ended'")
mixdown_result = ctx.task_output(mixdown_tracks)
track_result = ctx.task_output(process_tracks)
duration = mixdown_result.duration
all_words = track_result.all_words
# Cleanup temporary padded S3 files (deferred until finalize for semantic parity with Celery)
created_padded_files = track_result.created_padded_files
if created_padded_files:
ctx.log(f"Cleaning up {len(created_padded_files)} temporary S3 files")
storage = _spawn_storage()
cleanup_results = await asyncio.gather(
*[storage.delete_file(path) for path in created_padded_files],
return_exceptions=True,
)
for storage_path, result in zip(created_padded_files, cleanup_results):
if isinstance(result, Exception):
logger.warning(
"[Hatchet] Failed to cleanup temporary padded track",
storage_path=storage_path,
error=str(result),
)
async with fresh_db_connection():
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptDuration,
TranscriptText,
transcripts_controller,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript is None:
raise ValueError(f"Transcript {input.transcript_id} not found in database")
word_objects = [Word(**w) for w in all_words]
merged_transcript = TranscriptType(words=word_objects, translation=None)
await append_event_and_broadcast(
input.transcript_id,
transcript,
"TRANSCRIPT",
TranscriptText(
text=merged_transcript.text,
translation=merged_transcript.translation,
),
logger=logger,
)
# Save duration and clear workflow_run_id (workflow completed successfully)
# Note: title/long_summary/short_summary already saved by their callbacks
await transcripts_controller.update(
transcript,
{
"duration": duration,
"workflow_run_id": None, # Clear on success - no need to resume
},
)
duration_data = TranscriptDuration(duration=duration)
await append_event_and_broadcast(
input.transcript_id, transcript, "DURATION", duration_data, logger=logger
)
await set_status_and_broadcast(input.transcript_id, "ended", logger=logger)
ctx.log(
f"finalize complete: transcript {input.transcript_id} status set to 'ended'"
)
return FinalizeResult(status="COMPLETED")
@diarization_pipeline.task(
parents=[finalize], execution_timeout=timedelta(seconds=60), retries=3
)
@with_error_handling("cleanup_consent", set_error_status=False)
async def cleanup_consent(input: PipelineInput, ctx: Context) -> ConsentResult:
"""Check consent and delete audio files if any participant denied."""
ctx.log(f"cleanup_consent: transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.db.meetings import ( # noqa: PLC0415
meeting_consent_controller,
meetings_controller,
)
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.storage import get_transcripts_storage # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
ctx.log("cleanup_consent: transcript not found")
return ConsentResult()
consent_denied = False
if transcript.meeting_id:
meeting = await meetings_controller.get_by_id(transcript.meeting_id)
if meeting:
consent_denied = await meeting_consent_controller.has_any_denial(
meeting.id
)
if not consent_denied:
ctx.log("cleanup_consent: consent approved, keeping all files")
return ConsentResult()
ctx.log("cleanup_consent: consent denied, deleting audio files")
input_track_keys = {t["s3_key"] for t in input.tracks}
# Detect if recording.track_keys was manually modified after workflow started
if transcript.recording_id:
recording = await recordings_controller.get_by_id(transcript.recording_id)
if recording and recording.track_keys:
db_track_keys = set(filter_cam_audio_tracks(recording.track_keys))
if input_track_keys != db_track_keys:
added = db_track_keys - input_track_keys
removed = input_track_keys - db_track_keys
logger.warning(
"[Hatchet] Track keys mismatch: DB changed since workflow start",
transcript_id=input.transcript_id,
recording_id=transcript.recording_id,
input_count=len(input_track_keys),
db_count=len(db_track_keys),
added_in_db=list(added) if added else None,
removed_from_db=list(removed) if removed else None,
)
ctx.log(
f"WARNING: track_keys mismatch - "
f"input has {len(input_track_keys)}, DB has {len(db_track_keys)}. "
f"Using input tracks for deletion."
)
deletion_errors = []
if input_track_keys and input.bucket_name:
master_storage = get_transcripts_storage()
for key in input_track_keys:
try:
await master_storage.delete_file(key, bucket=input.bucket_name)
ctx.log(f"Deleted recording file: {input.bucket_name}/{key}")
except Exception as e:
error_msg = f"Failed to delete {key}: {e}"
logger.error(error_msg, exc_info=True)
deletion_errors.append(error_msg)
if transcript.audio_location == "storage":
storage = get_transcripts_storage()
try:
await storage.delete_file(transcript.storage_audio_path)
ctx.log(f"Deleted processed audio: {transcript.storage_audio_path}")
except Exception as e:
error_msg = f"Failed to delete processed audio: {e}"
logger.error(error_msg, exc_info=True)
deletion_errors.append(error_msg)
if deletion_errors:
logger.warning(
"[Hatchet] cleanup_consent completed with errors",
transcript_id=input.transcript_id,
error_count=len(deletion_errors),
errors=deletion_errors,
)
ctx.log(f"cleanup_consent completed with {len(deletion_errors)} errors")
else:
await transcripts_controller.update(transcript, {"audio_deleted": True})
ctx.log("cleanup_consent: all audio deleted successfully")
return ConsentResult()
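The mismatch detection above is plain set arithmetic; a minimal sketch (with made-up keys) of how the added/removed sets are derived:

```python
def diff_track_keys(
    input_keys: set[str], db_keys: set[str]
) -> tuple[set[str], set[str]]:
    # Keys present in the DB but not in the workflow input (added after
    # the workflow started), and keys the workflow saw that are no longer
    # in the DB (removed since start).
    added = db_keys - input_keys
    removed = input_keys - db_keys
    return added, removed

added, removed = diff_track_keys({"a.webm", "b.webm"}, {"b.webm", "c.webm"})
```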
@diarization_pipeline.task(
parents=[cleanup_consent], execution_timeout=timedelta(seconds=60), retries=5
)
@with_error_handling("post_zulip", set_error_status=False)
async def post_zulip(input: PipelineInput, ctx: Context) -> ZulipResult:
"""Post notification to Zulip."""
ctx.log(f"post_zulip: transcript_id={input.transcript_id}")
if not settings.ZULIP_REALM:
ctx.log("post_zulip skipped (Zulip not configured)")
return ZulipResult(zulip_message_id=None, skipped=True)
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
message_id = await post_transcript_notification(transcript)
ctx.log(f"post_zulip complete: zulip_message_id={message_id}")
else:
message_id = None
return ZulipResult(zulip_message_id=message_id)
@diarization_pipeline.task(
parents=[post_zulip], execution_timeout=timedelta(seconds=120), retries=30
)
@with_error_handling("send_webhook", set_error_status=False)
async def send_webhook(input: PipelineInput, ctx: Context) -> WebhookResult:
"""Send completion webhook to external service."""
ctx.log(f"send_webhook: transcript_id={input.transcript_id}")
if not input.room_id:
ctx.log("send_webhook skipped (no room_id)")
return WebhookResult(webhook_sent=False, skipped=True)
async with fresh_db_connection():
from reflector.db.rooms import rooms_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
room = await rooms_controller.get_by_id(input.room_id)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if room and room.webhook_url and transcript:
webhook_payload = {
"event": "transcript.completed",
"transcript_id": input.transcript_id,
"title": transcript.title,
"duration": transcript.duration,
}
async with httpx.AsyncClient() as client:
response = await client.post(
room.webhook_url, json=webhook_payload, timeout=30
)
response.raise_for_status()
ctx.log(f"send_webhook complete: status_code={response.status_code}")
return WebhookResult(webhook_sent=True, response_code=response.status_code)
return WebhookResult(webhook_sent=False, skipped=True)

View File

@@ -0,0 +1,123 @@
"""
Pydantic models for Hatchet workflow task return types.
Provides static typing for all task outputs, enabling type checking
and better IDE support.
"""
from typing import Any
from pydantic import BaseModel
from reflector.utils.string import NonEmptyString
class PadTrackResult(BaseModel):
"""Result from pad_track task."""
padded_key: NonEmptyString # S3 key (not presigned URL) - presign on demand to avoid stale URLs on replay
bucket_name: (
NonEmptyString | None
) # None means use default transcript storage bucket
size: int
track_index: int
class TranscribeTrackResult(BaseModel):
"""Result from transcribe_track task."""
words: list[dict[str, Any]]
track_index: int
class RecordingResult(BaseModel):
"""Result from get_recording task."""
id: NonEmptyString | None
mtg_session_id: NonEmptyString | None
duration: float
class ParticipantsResult(BaseModel):
"""Result from get_participants task."""
participants: list[dict[str, Any]]
num_tracks: int
source_language: NonEmptyString
target_language: NonEmptyString
class PaddedTrackInfo(BaseModel):
"""Info for a padded track - S3 key + bucket for on-demand presigning."""
key: NonEmptyString
bucket_name: NonEmptyString | None # None = use default storage bucket
class ProcessTracksResult(BaseModel):
"""Result from process_tracks task."""
all_words: list[dict[str, Any]]
padded_tracks: list[PaddedTrackInfo] # S3 keys, not presigned URLs
word_count: int
num_tracks: int
target_language: NonEmptyString
created_padded_files: list[NonEmptyString]
class MixdownResult(BaseModel):
"""Result from mixdown_tracks task."""
audio_key: NonEmptyString
duration: float
tracks_mixed: int
class WaveformResult(BaseModel):
"""Result from generate_waveform task."""
waveform_generated: bool
class TopicsResult(BaseModel):
"""Result from detect_topics task."""
topics: list[dict[str, Any]]
class TitleResult(BaseModel):
"""Result from generate_title task."""
title: str | None
class SummaryResult(BaseModel):
"""Result from generate_summary task."""
summary: str | None
short_summary: str | None
class FinalizeResult(BaseModel):
"""Result from finalize task."""
status: NonEmptyString
class ConsentResult(BaseModel):
"""Result from cleanup_consent task."""
class ZulipResult(BaseModel):
"""Result from post_zulip task."""
zulip_message_id: int | None = None
skipped: bool = False
class WebhookResult(BaseModel):
"""Result from send_webhook task."""
webhook_sent: bool
skipped: bool = False
response_code: int | None = None

View File

@@ -0,0 +1,222 @@
"""
Hatchet child workflow: TrackProcessing
Handles individual audio track processing: padding and transcription.
Spawned dynamically by the main diarization pipeline for each track.
Architecture note: This is a separate workflow (not inline tasks in DiarizationPipeline)
because Hatchet workflow DAGs are defined statically, but the number of tracks varies
at runtime. Child workflow spawning via `aio_run()` + `asyncio.gather()` is the
standard pattern for dynamic fan-out. See `process_tracks` in diarization_pipeline.py.
Note: This file uses deferred imports (inside tasks) intentionally.
Hatchet workers run in forked processes; fresh imports per task ensure
storage/DB connections are not shared across forks.
"""
import tempfile
from datetime import timedelta
from pathlib import Path
import av
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.models import PadTrackResult, TranscribeTrackResult
from reflector.logger import logger
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_padding import (
apply_audio_padding_to_file,
extract_stream_start_time_from_container,
)
class TrackInput(BaseModel):
"""Input for individual track processing."""
track_index: int
s3_key: str
bucket_name: str
transcript_id: str
language: str = "en"
hatchet = HatchetClientManager.get_client()
track_workflow = hatchet.workflow(name="TrackProcessing", input_validator=TrackInput)
@track_workflow.task(execution_timeout=timedelta(seconds=300), retries=3)
async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
"""Pad single audio track with silence for alignment.
Extracts stream.start_time from WebM container metadata and applies
silence padding using PyAV filter graph (adelay).
"""
ctx.log(f"pad_track: track {input.track_index}, s3_key={input.s3_key}")
logger.info(
"[Hatchet] pad_track",
track_index=input.track_index,
s3_key=input.s3_key,
transcript_id=input.transcript_id,
)
try:
# Create fresh storage instance to avoid aioboto3 fork issues
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
source_url = await storage.get_file_url(
input.s3_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=input.bucket_name,
)
with av.open(source_url) as in_container:
start_time_seconds = extract_stream_start_time_from_container(
in_container, input.track_index, logger=logger
)
# If no padding needed, return original S3 key
if start_time_seconds <= 0:
logger.info(
f"Track {input.track_index} requires no padding",
track_index=input.track_index,
)
return PadTrackResult(
padded_key=input.s3_key,
bucket_name=input.bucket_name,
size=0,
track_index=input.track_index,
)
with tempfile.NamedTemporaryFile(suffix=".webm", delete=False) as temp_file:
temp_path = temp_file.name
try:
apply_audio_padding_to_file(
in_container,
temp_path,
start_time_seconds,
input.track_index,
logger=logger,
)
file_size = Path(temp_path).stat().st_size
storage_path = f"file_pipeline_hatchet/{input.transcript_id}/tracks/padded_{input.track_index}.webm"
logger.info(
"About to upload padded track",
key=storage_path,
size=file_size,
)
with open(temp_path, "rb") as padded_file:
await storage.put_file(storage_path, padded_file)
logger.info(
"Uploaded padded track to S3",
key=storage_path,
size=file_size,
)
finally:
Path(temp_path).unlink(missing_ok=True)
ctx.log(f"pad_track complete: track {input.track_index} -> {storage_path}")
logger.info(
"[Hatchet] pad_track complete",
track_index=input.track_index,
padded_key=storage_path,
)
# Return S3 key (not presigned URL) - consumer tasks presign on demand
# This avoids stale URLs when workflow is replayed
return PadTrackResult(
padded_key=storage_path,
bucket_name=None, # None = use default transcript storage bucket
size=file_size,
track_index=input.track_index,
)
except Exception as e:
logger.error("[Hatchet] pad_track failed", error=str(e), exc_info=True)
raise
@track_workflow.task(
parents=[pad_track], execution_timeout=timedelta(seconds=600), retries=3
)
async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackResult:
"""Transcribe audio track using GPU (Modal.com) or local Whisper."""
ctx.log(f"transcribe_track: track {input.track_index}, language={input.language}")
logger.info(
"[Hatchet] transcribe_track",
track_index=input.track_index,
language=input.language,
)
try:
pad_result = ctx.task_output(pad_track)
padded_key = pad_result.padded_key
bucket_name = pad_result.bucket_name
if not padded_key:
raise ValueError("Missing padded_key from pad_track")
# Presign URL on demand (avoids stale URLs on workflow replay)
from reflector.settings import settings # noqa: PLC0415
from reflector.storage.storage_aws import AwsStorage # noqa: PLC0415
storage = AwsStorage(
aws_bucket_name=settings.TRANSCRIPT_STORAGE_AWS_BUCKET_NAME,
aws_region=settings.TRANSCRIPT_STORAGE_AWS_REGION,
aws_access_key_id=settings.TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID,
aws_secret_access_key=settings.TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY,
)
audio_url = await storage.get_file_url(
padded_key,
operation="get_object",
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
bucket=bucket_name,
)
from reflector.pipelines.transcription_helpers import ( # noqa: PLC0415
transcribe_file_with_processor,
)
transcript = await transcribe_file_with_processor(audio_url, input.language)
# Tag all words with speaker index
words = []
for word in transcript.words:
word_dict = word.model_dump()
word_dict["speaker"] = input.track_index
words.append(word_dict)
ctx.log(
f"transcribe_track complete: track {input.track_index}, {len(words)} words"
)
logger.info(
"[Hatchet] transcribe_track complete",
track_index=input.track_index,
word_count=len(words),
)
return TranscribeTrackResult(
words=words,
track_index=input.track_index,
)
except Exception as e:
logger.error("[Hatchet] transcribe_track failed", error=str(e), exc_info=True)
raise
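The key decision in both tasks above — return the durable S3 key and presign only when the audio is consumed — can be illustrated with a stub presigner (the names below are illustrative, not the real storage API):

```python
presign_calls = 0

def presign(key: str) -> str:
    # Stub: a real presigner would sign with current credentials and a
    # fresh expiry window each time it is called.
    global presign_calls
    presign_calls += 1
    return f"https://example-bucket.s3/{key}?sig={presign_calls}"

# Producer stores only the durable key; each consumer (including a workflow
# replay days later) presigns at use time, so it never sees a URL whose
# expiry elapsed while the result sat in the workflow state.
stored = {"padded_key": "tracks/padded_0.webm"}
first_url = presign(stored["padded_key"])
replay_url = presign(stored["padded_key"])
```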

View File

@@ -16,6 +16,9 @@ from llama_index.core.workflow import (
)
from llama_index.llms.openai_like import OpenAILike
from pydantic import BaseModel, ValidationError
from workflows.errors import WorkflowTimeoutError
from reflector.utils.retry import retry
T = TypeVar("T", bound=BaseModel)
OutputT = TypeVar("OutputT", bound=BaseModel)
@@ -229,26 +232,38 @@ class LLM:
texts: list[str],
output_cls: Type[T],
tone_name: str | None = None,
timeout: int | None = None,
) -> T:
"""Get structured output from LLM with validation retry via Workflow."""
if timeout is None:
    timeout = self.settings_obj.LLM_STRUCTURED_RESPONSE_TIMEOUT

async def run_workflow():
    workflow = StructuredOutputWorkflow(
        output_cls=output_cls,
        max_retries=self.settings_obj.LLM_PARSE_MAX_RETRIES + 1,
        timeout=timeout,
    )
    result = await workflow.run(
        prompt=prompt,
        texts=texts,
        tone_name=tone_name,
    )
    if "error" in result:
        error_msg = result["error"] or "Max retries exceeded"
        raise LLMParseError(
            output_cls=output_cls,
            error_msg=error_msg,
            attempts=result.get("attempts", 0),
        )
    return result["success"]

return await retry(run_workflow)(
    retry_attempts=3,
    retry_backoff_interval=1.0,
    retry_backoff_max=30.0,
    retry_ignore_exc_types=(WorkflowTimeoutError,),
)
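The `retry(run_workflow)(...)` call above layers outer retries over the workflow's own parse retries. A minimal sketch of that decorator shape, assuming exponential backoff and a tuple of exception types that are swallowed and retried (one plausible reading of `retry_ignore_exc_types`; the real `reflector.utils.retry` may behave differently):

```python
import asyncio

def retry(fn):
    async def runner(*, retry_attempts: int, retry_backoff_interval: float,
                     retry_backoff_max: float, retry_ignore_exc_types: tuple = ()):
        delay = retry_backoff_interval
        for attempt in range(retry_attempts):
            try:
                return await fn()
            except retry_ignore_exc_types:
                # Listed exceptions (e.g. a workflow timeout) are swallowed
                # and the call retried with backoff; anything else propagates.
                if attempt == retry_attempts - 1:
                    raise
                await asyncio.sleep(delay)
                delay = min(delay * 2, retry_backoff_max)
    return runner

calls = 0

async def flaky():
    global calls
    calls += 1
    if calls < 3:
        raise TimeoutError("transient")
    return "ok"

result = asyncio.run(retry(flaky)(
    retry_attempts=3, retry_backoff_interval=0.01,
    retry_backoff_max=0.02, retry_ignore_exc_types=(TimeoutError,),
))
```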

View File

@@ -97,13 +97,8 @@ class PipelineMainFile(PipelineMainBase):
},
)
# Extract audio and write to transcript location
audio_path = await self.extract_and_write_audio(file_path, transcript)
# Upload for processing
audio_url = await self.upload_audio(audio_path, transcript)
# Run parallel processing
await self.run_parallel_processing(
audio_path,
audio_url,
@@ -197,7 +192,6 @@ class PipelineMainFile(PipelineMainBase):
transcript_result = results[0]
diarization_result = results[1]
# Handle errors - raise any exception that occurred
self._handle_gather_exceptions(results, "parallel processing")
for result in results:
if isinstance(result, Exception):
@@ -212,7 +206,6 @@ class PipelineMainFile(PipelineMainBase):
transcript=transcript_result, diarization=diarization_result or []
)
# Store result for retrieval
diarized_transcript: Transcript | None = None
async def capture_result(transcript):
@@ -309,6 +302,7 @@ class PipelineMainFile(PipelineMainBase):
transcript,
on_long_summary_callback=self.on_long_summary,
on_short_summary_callback=self.on_short_summary,
on_action_items_callback=self.on_action_items,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)
@@ -348,7 +342,6 @@ async def task_pipeline_file_process(*, transcript_id: str):
try:
await pipeline.set_status(transcript_id, "processing")
# Find the file to process
audio_file = next(transcript.data_path.glob("upload.*"), None)
if not audio_file:
audio_file = next(transcript.data_path.glob("audio.*"), None)

View File

@@ -27,6 +27,7 @@ from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms_controller
from reflector.db.transcripts import (
Transcript,
TranscriptActionItems,
TranscriptDuration,
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
@@ -306,6 +307,23 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
data=final_short_summary,
)
@broadcast_to_sockets
async def on_action_items(self, data):
action_items = TranscriptActionItems(action_items=data.action_items)
async with self.transaction():
transcript = await self.get_transcript()
await transcripts_controller.update(
transcript,
{
"action_items": action_items.action_items,
},
)
return await transcripts_controller.append_event(
transcript=transcript,
event="ACTION_ITEMS",
data=action_items,
)
@broadcast_to_sockets
async def on_duration(self, data):
async with self.transaction():
@@ -465,6 +483,7 @@ class PipelineMainFinalSummaries(PipelineMainFromTopics):
transcript=self._transcript,
callback=self.on_long_summary,
on_short_summary=self.on_short_summary,
on_action_items=self.on_action_items,
),
]

View File

@@ -1,11 +1,8 @@
import asyncio
import math
import tempfile
from fractions import Fraction
from pathlib import Path
import av
from av.audio.resampler import AudioResampler
from celery import chain, shared_task
from reflector.asynctask import asynctask
@@ -32,6 +29,15 @@ from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.processors.types import TitleSummary
from reflector.processors.types import Transcript as TranscriptType
from reflector.storage import Storage, get_transcripts_storage
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_mixdown import (
detect_sample_rate_from_tracks,
mixdown_tracks_pyav,
)
from reflector.utils.audio_padding import (
apply_audio_padding_to_file,
extract_stream_start_time_from_container,
)
from reflector.utils.daily import (
filter_cam_audio_tracks,
parse_daily_recording_filename,
@@ -39,13 +45,6 @@ from reflector.utils.daily import (
from reflector.utils.string import NonEmptyString
from reflector.video_platforms.factory import create_platform_client
# Audio encoding constants
OPUS_STANDARD_SAMPLE_RATE = 48000
OPUS_DEFAULT_BIT_RATE = 128000
# Storage operation constants
PRESIGNED_URL_EXPIRATION_SECONDS = 7200 # 2 hours
class PipelineMainMultitrack(PipelineMainBase):
def __init__(self, transcript_id: str):
@@ -125,8 +124,8 @@ class PipelineMainMultitrack(PipelineMainBase):
try:
# PyAV streams input from S3 URL efficiently (2-5MB fixed overhead for codec/filters)
with av.open(track_url) as in_container:
start_time_seconds = self._extract_stream_start_time_from_container(
in_container, track_idx
start_time_seconds = extract_stream_start_time_from_container(
in_container, track_idx, logger=self.logger
)
if start_time_seconds <= 0:
@@ -144,8 +143,12 @@ class PipelineMainMultitrack(PipelineMainBase):
temp_path = temp_file.name
try:
self._apply_audio_padding_to_file(
in_container, temp_path, start_time_seconds, track_idx
apply_audio_padding_to_file(
in_container,
temp_path,
start_time_seconds,
track_idx,
logger=self.logger,
)
storage_path = (
@@ -156,7 +159,6 @@ class PipelineMainMultitrack(PipelineMainBase):
with open(temp_path, "rb") as padded_file:
await storage.put_file(storage_path, padded_file)
finally:
# Clean up temp file
Path(temp_path).unlink(missing_ok=True)
padded_url = await storage.get_file_url(
@@ -186,317 +188,28 @@ class PipelineMainMultitrack(PipelineMainBase):
f"Track {track_idx} padding failed - transcript would have incorrect timestamps"
) from e
def _extract_stream_start_time_from_container(
self, container, track_idx: int
) -> float:
"""
Extract meeting-relative start time from WebM stream metadata.
Uses PyAV to read stream.start_time from WebM container.
More accurate than filename timestamps by ~209ms due to network/encoding delays.
"""
start_time_seconds = 0.0
try:
audio_streams = [s for s in container.streams if s.type == "audio"]
stream = audio_streams[0] if audio_streams else container.streams[0]
# 1) Try stream-level start_time (most reliable for Daily.co tracks)
if stream.start_time is not None and stream.time_base is not None:
start_time_seconds = float(stream.start_time * stream.time_base)
# 2) Fallback to container-level start_time (in av.time_base units)
if (start_time_seconds <= 0) and (container.start_time is not None):
start_time_seconds = float(container.start_time * av.time_base)
# 3) Fallback to first packet DTS in stream.time_base
if start_time_seconds <= 0:
for packet in container.demux(stream):
if packet.dts is not None:
start_time_seconds = float(packet.dts * stream.time_base)
break
except Exception as e:
self.logger.warning(
"PyAV metadata read failed; assuming 0 start_time",
track_idx=track_idx,
error=str(e),
)
start_time_seconds = 0.0
self.logger.info(
f"Track {track_idx} stream metadata: start_time={start_time_seconds:.3f}s",
track_idx=track_idx,
)
return start_time_seconds
def _apply_audio_padding_to_file(
self,
in_container,
output_path: str,
start_time_seconds: float,
track_idx: int,
) -> None:
"""Apply silence padding to audio track using PyAV filter graph, writing to file"""
delay_ms = math.floor(start_time_seconds * 1000)
self.logger.info(
f"Padding track {track_idx} with {delay_ms}ms delay using PyAV",
track_idx=track_idx,
delay_ms=delay_ms,
)
try:
with av.open(output_path, "w", format="webm") as out_container:
in_stream = next(
(s for s in in_container.streams if s.type == "audio"), None
)
if in_stream is None:
raise Exception("No audio stream in input")
out_stream = out_container.add_stream(
"libopus", rate=OPUS_STANDARD_SAMPLE_RATE
)
out_stream.bit_rate = OPUS_DEFAULT_BIT_RATE
graph = av.filter.Graph()
abuf_args = (
f"time_base=1/{OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_rate={OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_fmt=s16:"
f"channel_layout=stereo"
)
src = graph.add("abuffer", args=abuf_args, name="src")
aresample_f = graph.add("aresample", args="async=1", name="ares")
# adelay requires one delay value per channel separated by '|'
delays_arg = f"{delay_ms}|{delay_ms}"
adelay_f = graph.add(
"adelay", args=f"delays={delays_arg}:all=1", name="delay"
)
sink = graph.add("abuffersink", name="sink")
src.link_to(aresample_f)
aresample_f.link_to(adelay_f)
adelay_f.link_to(sink)
graph.configure()
resampler = AudioResampler(
format="s16", layout="stereo", rate=OPUS_STANDARD_SAMPLE_RATE
)
# Decode -> resample -> push through graph -> encode Opus
for frame in in_container.decode(in_stream):
out_frames = resampler.resample(frame) or []
for rframe in out_frames:
rframe.sample_rate = OPUS_STANDARD_SAMPLE_RATE
rframe.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
src.push(rframe)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
src.push(None)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
for packet in out_stream.encode(None):
out_container.mux(packet)
except Exception as e:
self.logger.error(
"PyAV padding failed for track",
track_idx=track_idx,
delay_ms=delay_ms,
error=str(e),
exc_info=True,
)
raise
async def mixdown_tracks(
self,
track_urls: list[str],
writer: AudioFileWriterProcessor,
offsets_seconds: list[float] | None = None,
) -> None:
"""Multi-track mixdown using PyAV filter graph (amix), reading from S3 presigned URLs"""
target_sample_rate: int | None = None
for url in track_urls:
if not url:
continue
container = None
try:
container = av.open(url)
for frame in container.decode(audio=0):
target_sample_rate = frame.sample_rate
break
except Exception:
continue
finally:
if container is not None:
container.close()
if target_sample_rate:
break
"""Multi-track mixdown using PyAV filter graph (amix), reading from S3 presigned URLs."""
target_sample_rate = detect_sample_rate_from_tracks(
track_urls, logger=self.logger
)
if not target_sample_rate:
self.logger.error("Mixdown failed - no decodable audio frames found")
raise Exception("Mixdown failed: No decodable audio frames in any track")
# Build PyAV filter graph:
# N abuffer (s32/stereo)
# -> optional adelay per input (for alignment)
# -> amix (s32)
# -> aformat(s16)
# -> sink
graph = av.filter.Graph()
inputs = []
valid_track_urls = [url for url in track_urls if url]
input_offsets_seconds = None
if offsets_seconds is not None:
input_offsets_seconds = [
offsets_seconds[i] for i, url in enumerate(track_urls) if url
]
for idx, url in enumerate(valid_track_urls):
args = (
f"time_base=1/{target_sample_rate}:"
f"sample_rate={target_sample_rate}:"
f"sample_fmt=s32:"
f"channel_layout=stereo"
)
in_ctx = graph.add("abuffer", args=args, name=f"in{idx}")
inputs.append(in_ctx)
if not inputs:
self.logger.error("Mixdown failed - no valid inputs for graph")
raise Exception("Mixdown failed: No valid inputs for filter graph")
mixer = graph.add("amix", args=f"inputs={len(inputs)}:normalize=0", name="mix")
fmt = graph.add(
"aformat",
args=(
f"sample_fmts=s32:channel_layouts=stereo:sample_rates={target_sample_rate}"
),
name="fmt",
await mixdown_tracks_pyav(
track_urls,
writer,
target_sample_rate,
offsets_seconds=offsets_seconds,
logger=self.logger,
)
sink = graph.add("abuffersink", name="out")
# Optional per-input delay before mixing
delays_ms: list[int] = []
if input_offsets_seconds is not None:
base = min(input_offsets_seconds) if input_offsets_seconds else 0.0
delays_ms = [
max(0, int(round((o - base) * 1000))) for o in input_offsets_seconds
]
else:
delays_ms = [0 for _ in inputs]
for idx, in_ctx in enumerate(inputs):
delay_ms = delays_ms[idx] if idx < len(delays_ms) else 0
if delay_ms > 0:
# adelay requires one value per channel; use same for stereo
adelay = graph.add(
"adelay",
args=f"delays={delay_ms}|{delay_ms}:all=1",
name=f"delay{idx}",
)
in_ctx.link_to(adelay)
adelay.link_to(mixer, 0, idx)
else:
in_ctx.link_to(mixer, 0, idx)
mixer.link_to(fmt)
fmt.link_to(sink)
graph.configure()
containers = []
try:
# Open all containers with cleanup guaranteed
for i, url in enumerate(valid_track_urls):
try:
c = av.open(
url,
options={
# it's trying to stream from s3 by default
"reconnect": "1",
"reconnect_streamed": "1",
"reconnect_delay_max": "5",
},
)
containers.append(c)
except Exception as e:
self.logger.warning(
"Mixdown: failed to open container from URL",
input=i,
url=url,
error=str(e),
)
if not containers:
self.logger.error("Mixdown failed - no valid containers opened")
raise Exception("Mixdown failed: Could not open any track containers")
decoders = [c.decode(audio=0) for c in containers]
active = [True] * len(decoders)
resamplers = [
AudioResampler(format="s32", layout="stereo", rate=target_sample_rate)
for _ in decoders
]
while any(active):
for i, (dec, is_active) in enumerate(zip(decoders, active)):
if not is_active:
continue
try:
frame = next(dec)
except StopIteration:
active[i] = False
# causes stream to move on / unclogs memory
inputs[i].push(None)
continue
if frame.sample_rate != target_sample_rate:
continue
out_frames = resamplers[i].resample(frame) or []
for rf in out_frames:
rf.sample_rate = target_sample_rate
rf.time_base = Fraction(1, target_sample_rate)
inputs[i].push(rf)
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
finally:
# Cleanup all containers, even if processing failed
for c in containers:
if c is not None:
try:
c.close()
except Exception:
pass # Best effort cleanup
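The per-input alignment above normalizes each track's start offset against the earliest track, so at least one input gets zero delay and relative timing is preserved. The same arithmetic in isolation:

```python
def offsets_to_delays_ms(offsets_seconds: list[float]) -> list[int]:
    """Convert per-track start offsets (seconds) into adelay values (ms),
    relative to the earliest-starting track."""
    if not offsets_seconds:
        return []
    base = min(offsets_seconds)
    # never negative; round to whole milliseconds as adelay expects
    return [max(0, int(round((o - base) * 1000))) for o in offsets_seconds]


print(offsets_to_delays_ms([12.500, 12.500, 13.012]))  # → [0, 0, 512]
```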
@broadcast_to_sockets
async def set_status(self, transcript_id: str, status: TranscriptStatus):
async with self.lock_transaction():
@@ -772,6 +485,7 @@ class PipelineMainMultitrack(PipelineMainBase):
transcript,
on_long_summary_callback=self.on_long_summary,
on_short_summary_callback=self.on_short_summary,
on_action_items_callback=self.on_action_items,
empty_pipeline=self.empty_pipeline,
logger=self.logger,
)

View File

@@ -89,6 +89,7 @@ async def generate_summaries(
*,
on_long_summary_callback: Callable,
on_short_summary_callback: Callable,
on_action_items_callback: Callable,
empty_pipeline: EmptyPipeline,
logger: structlog.BoundLogger,
):
@@ -96,11 +97,14 @@ async def generate_summaries(
logger.warning("No topics for summary generation")
return
processor = TranscriptFinalSummaryProcessor(
transcript=transcript,
callback=on_long_summary_callback,
on_short_summary=on_short_summary_callback,
)
processor_kwargs = {
"transcript": transcript,
"callback": on_long_summary_callback,
"on_short_summary": on_short_summary_callback,
"on_action_items": on_action_items_callback,
}
processor = TranscriptFinalSummaryProcessor(**processor_kwargs)
processor.set_pipeline(empty_pipeline)
for topic in topics:

View File

@@ -96,6 +96,36 @@ RECAP_PROMPT = dedent(
"""
).strip()
ACTION_ITEMS_PROMPT = dedent(
"""
Identify action items from this meeting transcript. Your goal is to identify what was decided and what needs to happen next.
Look for:
1. **Decisions Made**: Any decisions, choices, or conclusions reached during the meeting. For each decision:
- What was decided? (be specific)
- Who made the decision or was involved? (use actual participant names)
- Why was this decision made? (key factors, reasoning, or rationale)
2. **Next Steps / Action Items**: Any tasks, follow-ups, or actions that were mentioned or assigned. For each action item:
- What specific task needs to be done? (be concrete and actionable)
- Who is responsible? (use actual participant names if mentioned, or "team" if unclear)
- When is it due? (any deadlines, timeframes, or "by next meeting" type commitments)
- What context is needed? (any additional details that help understand the task)
Guidelines:
- Be thorough and identify all action items, even if they seem minor
- Include items that were agreed upon, assigned, or committed to
- Include decisions even if they seem obvious or implicit
- If someone says "I'll do X" or "We should do Y", that's an action item
- If someone says "Let's go with option A", that's a decision
- Use the exact participant names from the transcript
- If no participant name is mentioned, you can leave assigned_to/decided_by as null
Only return empty lists if the transcript contains NO decisions and NO action items whatsoever.
"""
).strip()
STRUCTURED_RESPONSE_PROMPT_TEMPLATE = dedent(
"""
Based on the following analysis, provide the information in the requested JSON format:
@@ -155,6 +185,53 @@ class SubjectsResponse(BaseModel):
)
class ActionItem(BaseModel):
"""A single action item from the meeting"""
task: str = Field(description="The task or action item to be completed")
assigned_to: str | None = Field(
default=None, description="Person or team assigned to this task (name)"
)
assigned_to_participant_id: str | None = Field(
default=None, description="Participant ID if assigned_to matches a participant"
)
deadline: str | None = Field(
default=None, description="Deadline or timeframe mentioned for this task"
)
context: str | None = Field(
default=None, description="Additional context or notes about this task"
)
class Decision(BaseModel):
"""A decision made during the meeting"""
decision: str = Field(description="What was decided")
rationale: str | None = Field(
default=None,
description="Reasoning or key factors that influenced this decision",
)
decided_by: str | None = Field(
default=None, description="Person or group who made the decision (name)"
)
decided_by_participant_id: str | None = Field(
default=None, description="Participant ID if decided_by matches a participant"
)
class ActionItemsResponse(BaseModel):
"""Pydantic model for identified action items"""
decisions: list[Decision] = Field(
default_factory=list,
description="List of decisions made during the meeting",
)
next_steps: list[ActionItem] = Field(
default_factory=list,
description="List of action items and next steps to be taken",
)
class SummaryBuilder:
def __init__(self, llm: LLM, filename: str | None = None, logger=None) -> None:
self.transcript: str | None = None
@@ -166,6 +243,8 @@ class SummaryBuilder:
self.model_name: str = llm.model_name
self.logger = logger or structlog.get_logger()
self.participant_instructions: str | None = None
self.action_items: ActionItemsResponse | None = None
self.participant_name_to_id: dict[str, str] = {}
if filename:
self.read_transcript_from_file(filename)
@@ -189,13 +268,20 @@ class SummaryBuilder:
self.llm = llm
async def _get_structured_response(
self, prompt: str, output_cls: Type[T], tone_name: str | None = None
self,
prompt: str,
output_cls: Type[T],
tone_name: str | None = None,
timeout: int | None = None,
) -> T:
"""Generic function to get structured output from LLM for non-function-calling models."""
# Add participant instructions to the prompt if available
enhanced_prompt = self._enhance_prompt_with_participants(prompt)
return await self.llm.get_structured_response(
enhanced_prompt, [self.transcript], output_cls, tone_name=tone_name
enhanced_prompt,
[self.transcript],
output_cls,
tone_name=tone_name,
timeout=timeout,
)
async def _get_response(
@@ -216,11 +302,19 @@ class SummaryBuilder:
# Participants
# ----------------------------------------------------------------------------
def set_known_participants(self, participants: list[str]) -> None:
def set_known_participants(
self,
participants: list[str],
participant_name_to_id: dict[str, str] | None = None,
) -> None:
"""
Set known participants directly without LLM identification.
This is used when participants are already identified and stored.
They are appended at the end of the transcript, providing more context for the assistant.
Args:
participants: List of participant names
participant_name_to_id: Optional mapping of participant names to their IDs
"""
if not participants:
self.logger.warning("No participants provided")
@@ -231,10 +325,12 @@ class SummaryBuilder:
participants=participants,
)
if participant_name_to_id:
self.participant_name_to_id = participant_name_to_id
participants_md = self.format_list_md(participants)
self.transcript += f"\n\n# Participants\n\n{participants_md}"
# Set instructions that will be automatically added to all prompts
participants_list = ", ".join(participants)
self.participant_instructions = dedent(
f"""
@@ -413,6 +509,92 @@ class SummaryBuilder:
self.recap = str(recap_response)
self.logger.info(f"Quick recap: {self.recap}")
def _map_participant_names_to_ids(
self, response: ActionItemsResponse
) -> ActionItemsResponse:
"""Map participant names in action items to participant IDs."""
if not self.participant_name_to_id:
return response
decisions = []
for decision in response.decisions:
new_decision = decision.model_copy()
if (
decision.decided_by
and decision.decided_by in self.participant_name_to_id
):
new_decision.decided_by_participant_id = self.participant_name_to_id[
decision.decided_by
]
decisions.append(new_decision)
next_steps = []
for item in response.next_steps:
new_item = item.model_copy()
if item.assigned_to and item.assigned_to in self.participant_name_to_id:
new_item.assigned_to_participant_id = self.participant_name_to_id[
item.assigned_to
]
next_steps.append(new_item)
return ActionItemsResponse(decisions=decisions, next_steps=next_steps)
async def identify_action_items(self) -> ActionItemsResponse | None:
"""Identify action items (decisions and next steps) from the transcript."""
self.logger.info("--- identify action items using TreeSummarize")
if not self.transcript:
self.logger.warning(
"No transcript available for action items identification"
)
self.action_items = None
return None
action_items_prompt = ACTION_ITEMS_PROMPT
try:
response = await self._get_structured_response(
action_items_prompt,
ActionItemsResponse,
tone_name="Action item identifier",
timeout=settings.LLM_STRUCTURED_RESPONSE_TIMEOUT,
)
response = self._map_participant_names_to_ids(response)
self.action_items = response
self.logger.info(
f"Identified {len(response.decisions)} decisions and {len(response.next_steps)} action items",
decisions_count=len(response.decisions),
next_steps_count=len(response.next_steps),
)
if response.decisions:
self.logger.debug(
"Decisions identified",
decisions=[d.decision for d in response.decisions],
)
if response.next_steps:
self.logger.debug(
"Action items identified",
tasks=[item.task for item in response.next_steps],
)
if not response.decisions and not response.next_steps:
self.logger.warning(
"No action items identified from transcript",
transcript_length=len(self.transcript),
)
return response
except Exception as e:
self.logger.error(
f"Error identifying action items: {e}",
exc_info=True,
)
self.action_items = None
return None
async def generate_summary(self, only_subjects: bool = False) -> None:
"""
Generate summary by extracting subjects, creating summaries for each, and generating a recap.
@@ -424,6 +606,7 @@ class SummaryBuilder:
await self.generate_subject_summaries()
await self.generate_recap()
await self.identify_action_items()
# ----------------------------------------------------------------------------
# Markdown
@@ -526,8 +709,6 @@ if __name__ == "__main__":
if args.summary:
await sm.generate_summary()
# Note: action items generation has been removed
print("")
print("-" * 80)
print("")

View File

@@ -1,7 +1,12 @@
from reflector.llm import LLM
from reflector.processors.base import Processor
from reflector.processors.summary.summary_builder import SummaryBuilder
from reflector.processors.types import FinalLongSummary, FinalShortSummary, TitleSummary
from reflector.processors.types import (
ActionItems,
FinalLongSummary,
FinalShortSummary,
TitleSummary,
)
from reflector.settings import settings
@@ -27,15 +32,20 @@ class TranscriptFinalSummaryProcessor(Processor):
builder = SummaryBuilder(self.llm, logger=self.logger)
builder.set_transcript(text)
# Use known participants if available, otherwise identify them
if self.transcript and self.transcript.participants:
# Extract participant names from the stored participants
participant_names = [p.name for p in self.transcript.participants if p.name]
if participant_names:
self.logger.info(
f"Using {len(participant_names)} known participants from transcript"
)
builder.set_known_participants(participant_names)
participant_name_to_id = {
p.name: p.id
for p in self.transcript.participants
if p.name and p.id
}
builder.set_known_participants(
participant_names, participant_name_to_id=participant_name_to_id
)
else:
self.logger.info(
"Participants field exists but is empty, identifying participants"
@@ -63,7 +73,6 @@ class TranscriptFinalSummaryProcessor(Processor):
self.logger.warning("No summary to output")
return
# build the speakermap from the transcript
speakermap = {}
if self.transcript:
speakermap = {
@@ -76,8 +85,6 @@ class TranscriptFinalSummaryProcessor(Processor):
speakermap=speakermap,
)
# build the transcript as a single string
# Replace speaker IDs with actual participant names if available
text_transcript = []
unique_speakers = set()
for topic in self.chunks:
@@ -111,4 +118,9 @@ class TranscriptFinalSummaryProcessor(Processor):
)
await self.emit(final_short_summary, name="short_summary")
if self.builder and self.builder.action_items:
action_items = self.builder.action_items.model_dump()
action_items = ActionItems(action_items=action_items)
await self.emit(action_items, name="action_items")
await self.emit(final_long_summary)

View File

@@ -78,7 +78,11 @@ class TranscriptTopicDetectorProcessor(Processor):
"""
prompt = TOPIC_PROMPT.format(text=text)
response = await self.llm.get_structured_response(
prompt, [text], TopicResponse, tone_name="Topic analyzer"
prompt,
[text],
TopicResponse,
tone_name="Topic analyzer",
timeout=settings.LLM_STRUCTURED_RESPONSE_TIMEOUT,
)
return response

View File

@@ -264,6 +264,10 @@ class FinalShortSummary(BaseModel):
duration: float
class ActionItems(BaseModel):
action_items: dict # JSON-serializable dict from ActionItemsResponse
class FinalTitle(BaseModel):
title: str

View File

@@ -11,13 +11,19 @@ from typing import Literal, Union, assert_never
import celery
from celery.result import AsyncResult
from hatchet_sdk.clients.rest.exceptions import ApiException
from hatchet_sdk.clients.rest.models import V1TaskStatus
from reflector.db.recordings import recordings_controller
from reflector.db.transcripts import Transcript
from reflector.db.rooms import rooms_controller
from reflector.db.transcripts import Transcript, transcripts_controller
from reflector.hatchet.client import HatchetClientManager
from reflector.logger import logger
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
from reflector.pipelines.main_multitrack_pipeline import (
task_pipeline_multitrack_process,
)
from reflector.settings import settings
from reflector.utils.string import NonEmptyString
@@ -37,6 +43,8 @@ class MultitrackProcessingConfig:
transcript_id: NonEmptyString
bucket_name: NonEmptyString
track_keys: list[str]
recording_id: NonEmptyString | None = None
room_id: NonEmptyString | None = None
mode: Literal["multitrack"] = "multitrack"
@@ -49,6 +57,7 @@ class ValidationOk:
# transcript currently doesn't always have recording_id
recording_id: NonEmptyString | None
transcript_id: NonEmptyString
room_id: NonEmptyString | None = None
@dataclass
@@ -96,6 +105,7 @@ async def validate_transcript_for_processing(
if transcript.status == "idle":
return ValidationNotReady(detail="Recording is not ready for processing")
# Check Celery tasks
if task_is_scheduled_or_active(
"reflector.pipelines.main_file_pipeline.task_pipeline_file_process",
transcript_id=transcript.id,
@@ -105,8 +115,25 @@ async def validate_transcript_for_processing(
):
return ValidationAlreadyScheduled(detail="already running")
# Check Hatchet workflows (if enabled)
if settings.HATCHET_ENABLED and transcript.workflow_run_id:
try:
status = await HatchetClientManager.get_workflow_run_status(
transcript.workflow_run_id
)
# If workflow is running or queued, don't allow new processing
if status in (V1TaskStatus.RUNNING, V1TaskStatus.QUEUED):
return ValidationAlreadyScheduled(
detail="Hatchet workflow already running"
)
except ApiException:
# Workflow might be gone (404) or API issue - allow processing
pass
return ValidationOk(
recording_id=transcript.recording_id, transcript_id=transcript.id
recording_id=transcript.recording_id,
transcript_id=transcript.id,
room_id=transcript.room_id,
)
@@ -116,6 +143,7 @@ async def prepare_transcript_processing(validation: ValidationOk) -> PrepareResu
"""
bucket_name: str | None = None
track_keys: list[str] | None = None
recording_id: str | None = validation.recording_id
if validation.recording_id:
recording = await recordings_controller.get_by_id(validation.recording_id)
@@ -137,6 +165,8 @@ async def prepare_transcript_processing(validation: ValidationOk) -> PrepareResu
bucket_name=bucket_name, # type: ignore (validated above)
track_keys=track_keys,
transcript_id=validation.transcript_id,
recording_id=recording_id,
room_id=validation.room_id,
)
return FileProcessingConfig(
@@ -144,8 +174,104 @@ async def prepare_transcript_processing(validation: ValidationOk) -> PrepareResu
)
def dispatch_transcript_processing(config: ProcessingConfig) -> AsyncResult:
async def dispatch_transcript_processing(
config: ProcessingConfig, force: bool = False
) -> AsyncResult | None:
"""Dispatch transcript processing to appropriate backend (Hatchet or Celery).
Returns AsyncResult for Celery tasks, None for Hatchet workflows.
"""
if isinstance(config, MultitrackProcessingConfig):
# Check if room has use_hatchet=True (overrides env vars)
room_forces_hatchet = False
if config.room_id:
room = await rooms_controller.get_by_id(config.room_id)
room_forces_hatchet = room.use_hatchet if room else False
# Start durable workflow if enabled (Hatchet)
# or if room has use_hatchet=True
use_hatchet = settings.HATCHET_ENABLED or room_forces_hatchet
if room_forces_hatchet:
logger.info(
"Room forces Hatchet workflow",
room_id=config.room_id,
transcript_id=config.transcript_id,
)
if use_hatchet:
# First check if we can replay (outside transaction since it's read-only)
transcript = await transcripts_controller.get_by_id(config.transcript_id)
if transcript and transcript.workflow_run_id and not force:
can_replay = await HatchetClientManager.can_replay(
transcript.workflow_run_id
)
if can_replay:
await HatchetClientManager.replay_workflow(
transcript.workflow_run_id
)
logger.info(
"Replaying Hatchet workflow",
workflow_id=transcript.workflow_run_id,
)
return None
# Force: cancel old workflow if exists
if force and transcript and transcript.workflow_run_id:
await HatchetClientManager.cancel_workflow(transcript.workflow_run_id)
logger.info(
"Cancelled old workflow (--force)",
workflow_id=transcript.workflow_run_id,
)
await transcripts_controller.update(
transcript, {"workflow_run_id": None}
)
# Re-fetch and check for concurrent dispatch (optimistic approach).
# No database lock - worst case is duplicate dispatch, but Hatchet
# workflows are idempotent so this is acceptable.
transcript = await transcripts_controller.get_by_id(config.transcript_id)
if transcript and transcript.workflow_run_id:
# Another process started a workflow between validation and now
try:
status = await HatchetClientManager.get_workflow_run_status(
transcript.workflow_run_id
)
if status in (V1TaskStatus.RUNNING, V1TaskStatus.QUEUED):
logger.info(
"Concurrent workflow detected, skipping dispatch",
workflow_id=transcript.workflow_run_id,
)
return None
except ApiException:
# Workflow might be gone (404) or API issue - proceed with new workflow
pass
workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": config.recording_id,
"tracks": [{"s3_key": k} for k in config.track_keys],
"bucket_name": config.bucket_name,
"transcript_id": config.transcript_id,
"room_id": config.room_id,
},
additional_metadata={
"transcript_id": config.transcript_id,
"recording_id": config.recording_id,
"daily_recording_id": config.recording_id,
},
)
if transcript:
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)
logger.info("Hatchet workflow dispatched", workflow_id=workflow_id)
return None
# Celery pipeline (durable workflows disabled)
return task_pipeline_multitrack_process.delay(
transcript_id=config.transcript_id,
bucket_name=config.bucket_name,

View File

@@ -77,6 +77,9 @@ class Settings(BaseSettings):
LLM_PARSE_MAX_RETRIES: int = (
3 # Max retries for JSON/validation errors (total attempts = retries + 1)
)
LLM_STRUCTURED_RESPONSE_TIMEOUT: int = (
300 # Timeout in seconds for structured responses (5 minutes)
)
# Diarization
DIARIZATION_ENABLED: bool = True
@@ -150,5 +153,19 @@ class Settings(BaseSettings):
ZULIP_API_KEY: str | None = None
ZULIP_BOT_EMAIL: str | None = None
# Durable workflow orchestration
# Provider: "hatchet" (or "none" to disable)
DURABLE_WORKFLOW_PROVIDER: str = "none"
# Hatchet workflow orchestration
HATCHET_CLIENT_TOKEN: str | None = None
HATCHET_CLIENT_TLS_STRATEGY: str = "none" # none, tls, mtls
HATCHET_DEBUG: bool = False
@property
def HATCHET_ENABLED(self) -> bool:
"""True if Hatchet is the active provider."""
return self.DURABLE_WORKFLOW_PROVIDER == "hatchet"
settings = Settings()
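The `DURABLE_WORKFLOW_PROVIDER` gate above can be sketched standalone. This is an illustrative mock using a plain class (the real code subclasses pydantic's `BaseSettings`); `MockSettings` is a hypothetical name:

```python
# Illustrative mock of the DURABLE_WORKFLOW_PROVIDER gate above
# (plain class instead of pydantic BaseSettings).
class MockSettings:
    DURABLE_WORKFLOW_PROVIDER: str = "none"

    @property
    def HATCHET_ENABLED(self) -> bool:
        # True only when Hatchet is the active durable-workflow provider
        return self.DURABLE_WORKFLOW_PROVIDER == "hatchet"

s = MockSettings()
assert s.HATCHET_ENABLED is False  # default "none" disables Hatchet
s.DURABLE_WORKFLOW_PROVIDER = "hatchet"
assert s.HATCHET_ENABLED is True
```

The property pattern keeps call sites on one boolean (`settings.HATCHET_ENABLED`) while the env-configurable value stays a free-form string.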


@@ -15,8 +15,11 @@ import time
from typing import Callable
from celery.result import AsyncResult
from hatchet_sdk.clients.rest.models import V1TaskStatus
from reflector.db import get_database
from reflector.db.transcripts import Transcript, transcripts_controller
from reflector.hatchet.client import HatchetClientManager
from reflector.services.transcript_process import (
FileProcessingConfig,
MultitrackProcessingConfig,
@@ -34,24 +37,26 @@ async def process_transcript_inner(
transcript: Transcript,
on_validation: Callable[[ValidationResult], None],
on_preprocess: Callable[[PrepareResult], None],
force: bool = False,
) -> AsyncResult | None:
validation = await validate_transcript_for_processing(transcript)
on_validation(validation)
config = await prepare_transcript_processing(validation)
on_preprocess(config)
return await dispatch_transcript_processing(config, force=force)
async def process_transcript(
transcript_id: str, sync: bool = False, force: bool = False
) -> None:
"""
Process a transcript by ID, auto-detecting multitrack vs file pipeline.
Args:
transcript_id: The transcript UUID
sync: If True, wait for task completion. If False, dispatch and exit.
force: If True, cancel old workflow and start new (latest code). If False, replay failed workflow.
"""
from reflector.db import get_database
database = get_database()
await database.connect()
@@ -82,10 +87,42 @@ async def process_transcript(transcript_id: str, sync: bool = False) -> None:
print("Dispatching file pipeline", file=sys.stderr)
result = await process_transcript_inner(
transcript,
on_validation=on_validation,
on_preprocess=on_preprocess,
force=force,
)
if result is None:
# Hatchet workflow dispatched
if sync:
# Re-fetch transcript to get workflow_run_id
transcript = await transcripts_controller.get_by_id(transcript_id)
if not transcript or not transcript.workflow_run_id:
print("Error: workflow_run_id not found", file=sys.stderr)
sys.exit(1)
print("Waiting for Hatchet workflow...", file=sys.stderr)
while True:
status = await HatchetClientManager.get_workflow_run_status(
transcript.workflow_run_id
)
print(f" Status: {status.value}", file=sys.stderr)
if status == V1TaskStatus.COMPLETED:
print("Workflow completed successfully", file=sys.stderr)
break
elif status in (V1TaskStatus.FAILED, V1TaskStatus.CANCELLED):
print(f"Workflow failed: {status}", file=sys.stderr)
sys.exit(1)
await asyncio.sleep(5)
else:
print(
"Task dispatched (use --sync to wait for completion)",
file=sys.stderr,
)
elif sync:
print("Waiting for task completion...", file=sys.stderr)
while not result.ready():
print(f" Status: {result.state}", file=sys.stderr)
@@ -118,9 +155,16 @@ def main():
action="store_true",
help="Wait for task completion instead of just dispatching",
)
parser.add_argument(
"--force",
action="store_true",
help="Cancel old workflow and start new (uses latest code instead of replaying)",
)
args = parser.parse_args()
asyncio.run(
process_transcript(args.transcript_id, sync=args.sync, force=args.force)
)
if __name__ == "__main__":


@@ -0,0 +1,15 @@
"""
Shared audio processing constants.
Used by both Hatchet workflows and Celery pipelines for consistent audio encoding.
"""
# Opus codec settings
OPUS_STANDARD_SAMPLE_RATE = 48000
OPUS_DEFAULT_BIT_RATE = 128000 # 128kbps for good speech quality
# S3 presigned URL expiration
PRESIGNED_URL_EXPIRATION_SECONDS = 7200 # 2 hours
# Waveform visualization
WAVEFORM_SEGMENTS = 255
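As a sanity check on the 128 kbps choice, a back-of-envelope storage estimate (the 57.6 MB/h figure is derived here, not stated in the source):

```python
OPUS_DEFAULT_BIT_RATE = 128000  # bits per second, as defined above

# Rough audio payload for one hour of recording at this bitrate
bytes_per_second = OPUS_DEFAULT_BIT_RATE / 8
megabytes_per_hour = bytes_per_second * 3600 / 1_000_000
print(megabytes_per_hour)  # 57.6
```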


@@ -0,0 +1,227 @@
"""
Audio track mixdown utilities.
Shared PyAV-based functions for mixing multiple audio tracks into a single output.
Used by both Hatchet workflows and Celery pipelines.
"""
from fractions import Fraction
import av
from av.audio.resampler import AudioResampler
def detect_sample_rate_from_tracks(track_urls: list[str], logger=None) -> int | None:
"""Detect sample rate from first decodable audio frame.
Args:
track_urls: List of URLs to audio files (S3 presigned or local)
logger: Optional logger instance
Returns:
Sample rate in Hz, or None if no decodable frames found
"""
for url in track_urls:
if not url:
continue
container = None
try:
container = av.open(url)
for frame in container.decode(audio=0):
return frame.sample_rate
except Exception:
continue
finally:
if container is not None:
container.close()
return None
async def mixdown_tracks_pyav(
track_urls: list[str],
writer,
target_sample_rate: int,
offsets_seconds: list[float] | None = None,
logger=None,
) -> None:
"""Multi-track mixdown using PyAV filter graph (amix).
Builds a filter graph: N abuffer -> optional adelay -> amix -> aformat -> sink
Reads from S3 presigned URLs or local files, pushes mixed frames to writer.
Args:
track_urls: List of URLs to audio tracks (S3 presigned or local)
writer: AudioFileWriterProcessor instance with async push() method
target_sample_rate: Sample rate for output (Hz)
offsets_seconds: Optional per-track delays in seconds for alignment.
If provided, must have same length as track_urls. Delays are relative
to the minimum offset (earliest track has delay=0).
logger: Optional logger instance
Raises:
ValueError: If offsets_seconds length doesn't match track_urls,
no valid tracks provided, or no containers can be opened
"""
if offsets_seconds is not None and len(offsets_seconds) != len(track_urls):
raise ValueError(
f"offsets_seconds length ({len(offsets_seconds)}) must match track_urls ({len(track_urls)})"
)
valid_track_urls = [url for url in track_urls if url]
if not valid_track_urls:
if logger:
logger.error("Mixdown failed - no valid track URLs provided")
raise ValueError("Mixdown failed: No valid track URLs")
# Calculate per-input delays if offsets provided
input_offsets_seconds = None
if offsets_seconds is not None:
input_offsets_seconds = [
offsets_seconds[i] for i, url in enumerate(track_urls) if url
]
# Build PyAV filter graph:
# N abuffer (s32/stereo)
# -> optional adelay per input (for alignment)
# -> amix (s32)
# -> aformat(s16)
# -> sink
graph = av.filter.Graph()
inputs = []
for idx, url in enumerate(valid_track_urls):
args = (
f"time_base=1/{target_sample_rate}:"
f"sample_rate={target_sample_rate}:"
f"sample_fmt=s32:"
f"channel_layout=stereo"
)
in_ctx = graph.add("abuffer", args=args, name=f"in{idx}")
inputs.append(in_ctx)
if not inputs:
if logger:
logger.error("Mixdown failed - no valid inputs for graph")
raise ValueError("Mixdown failed: No valid inputs for filter graph")
mixer = graph.add("amix", args=f"inputs={len(inputs)}:normalize=0", name="mix")
fmt = graph.add(
"aformat",
args=f"sample_fmts=s32:channel_layouts=stereo:sample_rates={target_sample_rate}",
name="fmt",
)
sink = graph.add("abuffersink", name="out")
# Optional per-input delay before mixing
delays_ms: list[int] = []
if input_offsets_seconds is not None:
base = min(input_offsets_seconds) if input_offsets_seconds else 0.0
delays_ms = [
max(0, int(round((o - base) * 1000))) for o in input_offsets_seconds
]
else:
delays_ms = [0 for _ in inputs]
for idx, in_ctx in enumerate(inputs):
delay_ms = delays_ms[idx] if idx < len(delays_ms) else 0
if delay_ms > 0:
# adelay requires one value per channel; use same for stereo
adelay = graph.add(
"adelay",
args=f"delays={delay_ms}|{delay_ms}:all=1",
name=f"delay{idx}",
)
in_ctx.link_to(adelay)
adelay.link_to(mixer, 0, idx)
else:
in_ctx.link_to(mixer, 0, idx)
mixer.link_to(fmt)
fmt.link_to(sink)
graph.configure()
containers = []
try:
# Open all containers with cleanup guaranteed
for i, url in enumerate(valid_track_urls):
try:
c = av.open(
url,
options={
# S3 streaming options
"reconnect": "1",
"reconnect_streamed": "1",
"reconnect_delay_max": "5",
},
)
containers.append(c)
except Exception as e:
if logger:
logger.warning(
"Mixdown: failed to open container from URL",
input=i,
url=url,
error=str(e),
)
if not containers:
if logger:
logger.error("Mixdown failed - no valid containers opened")
raise ValueError("Mixdown failed: Could not open any track containers")
decoders = [c.decode(audio=0) for c in containers]
active = [True] * len(decoders)
resamplers = [
AudioResampler(format="s32", layout="stereo", rate=target_sample_rate)
for _ in decoders
]
while any(active):
for i, (dec, is_active) in enumerate(zip(decoders, active)):
if not is_active:
continue
try:
frame = next(dec)
except StopIteration:
active[i] = False
# Signal end of stream to filter graph
inputs[i].push(None)
continue
if frame.sample_rate != target_sample_rate:
continue
out_frames = resamplers[i].resample(frame) or []
for rf in out_frames:
rf.sample_rate = target_sample_rate
rf.time_base = Fraction(1, target_sample_rate)
inputs[i].push(rf)
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
# Flush remaining frames from filter graph
while True:
try:
mixed = sink.pull()
except Exception:
break
mixed.sample_rate = target_sample_rate
mixed.time_base = Fraction(1, target_sample_rate)
await writer.push(mixed)
finally:
# Cleanup all containers, even if processing failed
for c in containers:
if c is not None:
try:
c.close()
except Exception:
pass # Best effort cleanup
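The adelay offset math used by `mixdown_tracks_pyav` can be checked in isolation. The offsets below are made-up example values; the computation mirrors the `delays_ms` list comprehension above (delays are relative to the earliest track, clamped at zero):

```python
# Standalone check of the per-track delay computation used by the
# mixdown filter graph: the earliest track gets delay 0, later tracks
# are delayed by their offset from it, in whole milliseconds.
offsets_seconds = [2.0, 0.5, 1.0]  # hypothetical per-track start offsets
base = min(offsets_seconds)
delays_ms = [max(0, int(round((o - base) * 1000))) for o in offsets_seconds]
print(delays_ms)  # [1500, 0, 500]
```

Only tracks with a positive relative delay get an `adelay` node in the graph; the earliest track links straight into `amix`.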


@@ -0,0 +1,186 @@
"""
Audio track padding utilities.
Shared PyAV-based functions for extracting stream metadata and applying
silence padding to audio tracks. Used by both Hatchet workflows and Celery pipelines.
"""
import math
from fractions import Fraction
import av
from av.audio.resampler import AudioResampler
from reflector.utils.audio_constants import (
OPUS_DEFAULT_BIT_RATE,
OPUS_STANDARD_SAMPLE_RATE,
)
def extract_stream_start_time_from_container(
container,
track_idx: int,
logger=None,
) -> float:
"""Extract meeting-relative start time from WebM stream metadata.
Uses PyAV to read stream.start_time from WebM container.
More accurate than filename timestamps by ~209ms due to network/encoding delays.
Args:
container: PyAV container opened from audio file/URL
track_idx: Track index for logging context
logger: Optional logger instance (structlog or stdlib compatible)
Returns:
Start time in seconds (0.0 if extraction fails)
"""
start_time_seconds = 0.0
try:
audio_streams = [s for s in container.streams if s.type == "audio"]
stream = audio_streams[0] if audio_streams else container.streams[0]
# 1) Try stream-level start_time (most reliable for Daily.co tracks)
if stream.start_time is not None and stream.time_base is not None:
start_time_seconds = float(stream.start_time * stream.time_base)
# 2) Fallback to container-level start_time (in av.time_base units)
if (start_time_seconds <= 0) and (container.start_time is not None):
start_time_seconds = float(container.start_time * av.time_base)
# 3) Fallback to first packet DTS in stream.time_base
if start_time_seconds <= 0:
for packet in container.demux(stream):
if packet.dts is not None:
start_time_seconds = float(packet.dts * stream.time_base)
break
except Exception as e:
if logger:
logger.warning(
"PyAV metadata read failed; assuming 0 start_time",
track_idx=track_idx,
error=str(e),
)
start_time_seconds = 0.0
if logger:
logger.info(
f"Track {track_idx} stream metadata: start_time={start_time_seconds:.3f}s",
track_idx=track_idx,
)
return start_time_seconds
def apply_audio_padding_to_file(
in_container,
output_path: str,
start_time_seconds: float,
track_idx: int,
logger=None,
) -> None:
"""Apply silence padding to audio track using PyAV filter graph.
Uses adelay filter to prepend silence, aligning track to meeting start time.
Output is WebM/Opus format.
Args:
in_container: PyAV container opened from source audio
output_path: Path for output WebM file
start_time_seconds: Amount of silence to prepend (in seconds)
track_idx: Track index for logging context
logger: Optional logger instance (structlog or stdlib compatible)
Raises:
Exception: If no audio stream found or PyAV processing fails
"""
delay_ms = math.floor(start_time_seconds * 1000)
if logger:
logger.info(
f"Padding track {track_idx} with {delay_ms}ms delay using PyAV",
track_idx=track_idx,
delay_ms=delay_ms,
)
try:
with av.open(output_path, "w", format="webm") as out_container:
in_stream = next(
(s for s in in_container.streams if s.type == "audio"), None
)
if in_stream is None:
raise Exception("No audio stream in input")
out_stream = out_container.add_stream(
"libopus", rate=OPUS_STANDARD_SAMPLE_RATE
)
out_stream.bit_rate = OPUS_DEFAULT_BIT_RATE
graph = av.filter.Graph()
abuf_args = (
f"time_base=1/{OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_rate={OPUS_STANDARD_SAMPLE_RATE}:"
f"sample_fmt=s16:"
f"channel_layout=stereo"
)
src = graph.add("abuffer", args=abuf_args, name="src")
aresample_f = graph.add("aresample", args="async=1", name="ares")
# adelay requires one delay value per channel separated by '|'
delays_arg = f"{delay_ms}|{delay_ms}"
adelay_f = graph.add(
"adelay", args=f"delays={delays_arg}:all=1", name="delay"
)
sink = graph.add("abuffersink", name="sink")
src.link_to(aresample_f)
aresample_f.link_to(adelay_f)
adelay_f.link_to(sink)
graph.configure()
resampler = AudioResampler(
format="s16", layout="stereo", rate=OPUS_STANDARD_SAMPLE_RATE
)
# Decode -> resample -> push through graph -> encode Opus
for frame in in_container.decode(in_stream):
out_frames = resampler.resample(frame) or []
for rframe in out_frames:
rframe.sample_rate = OPUS_STANDARD_SAMPLE_RATE
rframe.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
src.push(rframe)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
# Flush remaining frames from filter graph
src.push(None)
while True:
try:
f_out = sink.pull()
except Exception:
break
f_out.sample_rate = OPUS_STANDARD_SAMPLE_RATE
f_out.time_base = Fraction(1, OPUS_STANDARD_SAMPLE_RATE)
for packet in out_stream.encode(f_out):
out_container.mux(packet)
# Flush encoder
for packet in out_stream.encode(None):
out_container.mux(packet)
except Exception as e:
if logger:
logger.error(
"PyAV padding failed for track",
track_idx=track_idx,
delay_ms=delay_ms,
error=str(e),
exc_info=True,
)
raise
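The delay computation in `apply_audio_padding_to_file` is worth seeing in isolation: seconds are floored to whole milliseconds, and ffmpeg's `adelay` filter wants one value per channel separated by `|`. The start time below is a hypothetical example value:

```python
import math

# Mirror of the delay computation above: seconds -> whole milliseconds,
# duplicated per channel for adelay's stereo argument.
start_time_seconds = 1.2345  # hypothetical stream start offset
delay_ms = math.floor(start_time_seconds * 1000)
delays_arg = f"delays={delay_ms}|{delay_ms}:all=1"
print(delays_arg)  # delays=1234|1234:all=1
```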


@@ -0,0 +1,4 @@
def assert_not_none[T](value: T | None, message: str = "Value is None") -> T:
if value is None:
raise ValueError(message)
return value


@@ -2,6 +2,17 @@ from typing import Annotated, TypeVar
from pydantic import Field, TypeAdapter, constr
T_NotNone = TypeVar("T_NotNone")
def assert_not_none(
value: T_NotNone | None, message: str = "Value is None"
) -> T_NotNone:
if value is None:
raise ValueError(message)
return value
NonEmptyStringBase = constr(min_length=1, strip_whitespace=False)
NonEmptyString = Annotated[
NonEmptyStringBase,
@@ -23,10 +34,18 @@ def try_parse_non_empty_string(s: str) -> NonEmptyString | None:
return parse_non_empty_string(s)
T_Str = TypeVar("T_Str", bound=str)
def assert_equal(s1: T_Str, s2: T_Str) -> T_Str:
if s1 != s2:
raise ValueError(f"assert_equal: {s1} != {s2}")
return s1
def assert_non_none_and_non_empty(
value: str | None, error: str | None = None
) -> NonEmptyString:
return parse_non_empty_string(
assert_not_none(value, error or "Value is None"), error
)


@@ -501,6 +501,7 @@ async def transcript_get(
"title": transcript.title,
"short_summary": transcript.short_summary,
"long_summary": transcript.long_summary,
"action_items": transcript.action_items,
"created_at": transcript.created_at,
"share_mode": transcript.share_mode,
"source_language": transcript.source_language,


@@ -50,5 +50,5 @@ async def transcript_process(
if isinstance(config, ProcessError):
raise HTTPException(status_code=500, detail=config.detail)
else:
await dispatch_transcript_processing(config)
return ProcessStatus(status="ok")


@@ -38,6 +38,10 @@ else:
"task": "reflector.worker.process.reprocess_failed_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
},
"reprocess_failed_daily_recordings": {
"task": "reflector.worker.process.reprocess_failed_daily_recordings",
"schedule": crontab(hour=5, minute=0), # Midnight EST
},
"poll_daily_recordings": {
"task": "reflector.worker.process.poll_daily_recordings",
"schedule": 180.0, # Every 3 minutes (configurable lookback window)


@@ -12,7 +12,7 @@ from celery import shared_task
from celery.utils.log import get_task_logger
from pydantic import ValidationError
from reflector.dailyco_api import FinishedRecordingResponse, RecordingResponse
from reflector.db.daily_participant_sessions import (
DailyParticipantSession,
daily_participant_sessions_controller,
@@ -24,6 +24,7 @@ from reflector.db.transcripts import (
SourceKind,
transcripts_controller,
)
from reflector.hatchet.client import HatchetClientManager
from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
from reflector.pipelines.main_live_pipeline import asynctask
from reflector.pipelines.main_multitrack_pipeline import (
@@ -286,6 +287,45 @@ async def _process_multitrack_recording_inner(
room_id=room.id,
)
# Start durable workflow if enabled (Hatchet) or room overrides it
durable_started = False
use_hatchet = settings.HATCHET_ENABLED or (room and room.use_hatchet)
if room and room.use_hatchet and not settings.HATCHET_ENABLED:
logger.info(
"Room forces Hatchet workflow",
room_id=room.id,
transcript_id=transcript.id,
)
if use_hatchet:
workflow_id = await HatchetClientManager.start_workflow(
workflow_name="DiarizationPipeline",
input_data={
"recording_id": recording_id,
"tracks": [{"s3_key": k} for k in filter_cam_audio_tracks(track_keys)],
"bucket_name": bucket_name,
"transcript_id": transcript.id,
"room_id": room.id,
},
additional_metadata={
"transcript_id": transcript.id,
"recording_id": recording_id,
"daily_recording_id": recording_id,
},
)
logger.info(
"Started Hatchet workflow",
workflow_id=workflow_id,
transcript_id=transcript.id,
)
await transcripts_controller.update(
transcript, {"workflow_run_id": workflow_id}
)
return
# Celery pipeline (runs when durable workflows disabled)
task_pipeline_multitrack_process.delay(
transcript_id=transcript.id,
bucket_name=bucket_name,
@@ -322,16 +362,38 @@ async def poll_daily_recordings():
)
return
finished_recordings: List[FinishedRecordingResponse] = []
for rec in api_recordings:
finished = rec.to_finished()
if finished is None:
logger.debug(
"Skipping unfinished recording",
recording_id=rec.id,
room_name=rec.room_name,
status=rec.status,
)
continue
finished_recordings.append(finished)
if not finished_recordings:
logger.debug(
"No finished recordings found from Daily.co API",
total_api_count=len(api_recordings),
)
return
recording_ids = [rec.id for rec in finished_recordings]
existing_recordings = await recordings_controller.get_by_ids(recording_ids)
existing_ids = {rec.id for rec in existing_recordings}
missing_recordings = [
rec for rec in finished_recordings if rec.id not in existing_ids
]
if not missing_recordings:
logger.debug(
"All recordings already in DB",
api_count=len(finished_recordings),
existing_count=len(existing_recordings),
)
return
@@ -339,7 +401,7 @@ async def poll_daily_recordings():
logger.info(
"Found recordings missing from DB",
missing_count=len(missing_recordings),
total_api_count=len(finished_recordings),
existing_count=len(existing_recordings),
)
@@ -649,7 +711,7 @@ async def reprocess_failed_recordings():
Find recordings in Whereby S3 bucket and check if they have proper transcriptions.
If not, requeue them for processing.
Note: Daily.co multitrack recordings are handled by reprocess_failed_daily_recordings.
"""
logger.info("Checking Whereby recordings that need processing or reprocessing")
@@ -702,6 +764,103 @@ async def reprocess_failed_recordings():
return reprocessed_count
@shared_task
@asynctask
async def reprocess_failed_daily_recordings():
"""
Find Daily.co multitrack recordings in the database and check if they have proper transcriptions.
If not, requeue them for processing.
"""
logger.info(
"Checking Daily.co multitrack recordings that need processing or reprocessing"
)
if not settings.DAILYCO_STORAGE_AWS_BUCKET_NAME:
logger.debug(
"DAILYCO_STORAGE_AWS_BUCKET_NAME not configured; skipping Daily recording reprocessing"
)
return 0
bucket_name = settings.DAILYCO_STORAGE_AWS_BUCKET_NAME
reprocessed_count = 0
try:
multitrack_recordings = (
await recordings_controller.get_multitrack_needing_reprocessing(bucket_name)
)
logger.info(
"Found multitrack recordings needing reprocessing",
count=len(multitrack_recordings),
bucket=bucket_name,
)
for recording in multitrack_recordings:
if not recording.meeting_id:
logger.debug(
"Skipping recording without meeting_id",
recording_id=recording.id,
)
continue
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting:
logger.warning(
"Meeting not found for recording",
recording_id=recording.id,
meeting_id=recording.meeting_id,
)
continue
transcript = None
try:
transcript = await transcripts_controller.get_by_recording_id(
recording.id
)
except ValidationError:
await transcripts_controller.remove_by_recording_id(recording.id)
logger.warning(
"Removed invalid transcript for recording",
recording_id=recording.id,
)
if not recording.track_keys:
logger.warning(
"Recording has no track_keys, cannot reprocess",
recording_id=recording.id,
)
continue
logger.info(
"Queueing Daily recording for reprocessing",
recording_id=recording.id,
room_name=meeting.room_name,
track_count=len(recording.track_keys),
transcript_status=transcript.status if transcript else None,
)
process_multitrack_recording.delay(
bucket_name=bucket_name,
daily_room_name=meeting.room_name,
recording_id=recording.id,
track_keys=recording.track_keys,
)
reprocessed_count += 1
except Exception as e:
logger.error(
"Error checking Daily multitrack recordings",
error=str(e),
exc_info=True,
)
logger.info(
"Daily reprocessing complete",
requeued_count=reprocessed_count,
)
return reprocessed_count
@shared_task
@asynctask
async def trigger_daily_reconciliation() -> None:

View File

@@ -123,6 +123,7 @@ async def send_transcript_webhook(
"target_language": transcript.target_language,
"status": transcript.status,
"frontend_url": frontend_url,
"action_items": transcript.action_items,
},
"room": {
"id": room.id,


@@ -16,6 +16,7 @@ import threading
import redis.asyncio as redis
from fastapi import WebSocket
from reflector.events import subscribers_shutdown
from reflector.settings import settings
@@ -109,29 +110,30 @@ class WebsocketManager:
await socket.send_json(data)
_ws_manager_instance: WebsocketManager | None = None
_ws_manager_lock = threading.Lock()
def get_ws_manager() -> WebsocketManager:
"""Returns the WebsocketManager singleton instance."""
global _ws_manager_instance
if _ws_manager_instance is None:
with _ws_manager_lock:
if _ws_manager_instance is None:
pubsub_client = RedisPubSubManager(
host=settings.REDIS_HOST,
port=settings.REDIS_PORT,
)
_ws_manager_instance = WebsocketManager(pubsub_client=pubsub_client)
return _ws_manager_instance
async def cleanup_ws_manager(_app=None) -> None:
"""Cleanup WebsocketManager singleton on shutdown."""
global _ws_manager_instance
if _ws_manager_instance is not None:
await _ws_manager_instance.pubsub_client.disconnect()
_ws_manager_instance = None
subscribers_shutdown.append(cleanup_ws_manager)


@@ -3,7 +3,8 @@ from urllib.parse import urlparse
import httpx
from reflector.db.transcripts import Transcript
from reflector.db.rooms import rooms_controller
from reflector.db.transcripts import Transcript, transcripts_controller
from reflector.settings import settings
@@ -113,6 +114,49 @@ def get_zulip_message(transcript: Transcript, include_topics: bool):
return message
async def post_transcript_notification(transcript: Transcript) -> int | None:
"""Post or update transcript notification in Zulip.
Uses transcript.room_id directly (Hatchet flow).
Celery's pipeline_post_to_zulip uses recording→meeting→room path instead.
DUPLICATION NOTE: this function stays once Celery is retired; the Celery-path counterpart will be removed.
"""
if not transcript.room_id:
return None
room = await rooms_controller.get_by_id(transcript.room_id)
if not room or not room.zulip_stream or not room.zulip_auto_post:
return None
message = get_zulip_message(transcript=transcript, include_topics=True)
message_updated = False
if transcript.zulip_message_id:
try:
await update_zulip_message(
transcript.zulip_message_id,
room.zulip_stream,
room.zulip_topic,
message,
)
message_updated = True
except Exception:
pass
if not message_updated:
response = await send_message_to_zulip(
room.zulip_stream, room.zulip_topic, message
)
message_id = response.get("id")
if message_id:
await transcripts_controller.update(
transcript, {"zulip_message_id": message_id}
)
return message_id
return transcript.zulip_message_id
def extract_domain(url: str) -> str:
return urlparse(url).netloc


@@ -7,6 +7,8 @@ elif [ "${ENTRYPOINT}" = "worker" ]; then
uv run celery -A reflector.worker.app worker --loglevel=info
elif [ "${ENTRYPOINT}" = "beat" ]; then
uv run celery -A reflector.worker.app beat --loglevel=info
elif [ "${ENTRYPOINT}" = "hatchet-worker" ]; then
uv run python -m reflector.hatchet.run_workers
else
echo "Unknown command"
fi


@@ -527,6 +527,22 @@ def fake_mp3_upload():
yield
@pytest.fixture(autouse=True)
def reset_hatchet_client():
"""Reset HatchetClientManager singleton before and after each test.
This ensures test isolation - each test starts with a fresh client state.
The fixture is autouse=True so it applies to all tests automatically.
"""
from reflector.hatchet.client import HatchetClientManager
# Reset before test
HatchetClientManager.reset()
yield
# Reset after test to clean up
HatchetClientManager.reset()
@pytest.fixture
async def fake_transcript_with_topics(tmpdir, client):
import shutil


@@ -0,0 +1,54 @@
"""
Tests for HatchetClientManager error handling and validation.
Only tests that catch real bugs - not mock verification tests.
Note: The `reset_hatchet_client` fixture (autouse=True in conftest.py)
automatically resets the singleton before and after each test.
"""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
@pytest.mark.asyncio
async def test_hatchet_client_can_replay_handles_exception():
"""Test can_replay returns False when status check fails.
Useful: Ensures network/API errors don't crash the system and
gracefully allow reprocessing when workflow state is unknown.
"""
from reflector.hatchet.client import HatchetClientManager
with patch("reflector.hatchet.client.settings") as mock_settings:
mock_settings.HATCHET_CLIENT_TOKEN = "test-token"
mock_settings.HATCHET_DEBUG = False
with patch("reflector.hatchet.client.Hatchet") as mock_hatchet_class:
mock_client = MagicMock()
mock_hatchet_class.return_value = mock_client
mock_client.runs.aio_get_status = AsyncMock(
side_effect=Exception("Network error")
)
can_replay = await HatchetClientManager.can_replay("workflow-123")
# Should return False on error (workflow might be gone)
assert can_replay is False
def test_hatchet_client_raises_without_token():
"""Test that get_client raises ValueError without token.
Useful: Catches if someone removes the token validation,
which would cause cryptic errors later.
"""
from reflector.hatchet.client import HatchetClientManager
with patch("reflector.hatchet.client.settings") as mock_settings:
mock_settings.HATCHET_CLIENT_TOKEN = None
with pytest.raises(ValueError, match="HATCHET_CLIENT_TOKEN must be set"):
HatchetClientManager.get_client()


@@ -0,0 +1,398 @@
"""
Tests for Hatchet workflow dispatch and routing logic.
These tests verify:
1. Routing to Hatchet when HATCHET_ENABLED=True
2. Replay logic for failed workflows
3. Force flag to cancel and restart
4. Validation prevents concurrent workflows
"""
from unittest.mock import AsyncMock, patch
import pytest
from hatchet_sdk.clients.rest.exceptions import ApiException
from hatchet_sdk.clients.rest.models import V1TaskStatus
from reflector.db.transcripts import Transcript
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_blocks_running_workflow():
"""Test that validation blocks reprocessing when workflow is running."""
from reflector.services.transcript_process import (
ValidationAlreadyScheduled,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="processing",
source_kind="room",
workflow_run_id="running-workflow-123",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
mock_hatchet.get_workflow_run_status = AsyncMock(
return_value=V1TaskStatus.RUNNING
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationAlreadyScheduled)
assert "running" in result.detail.lower()
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_blocks_queued_workflow():
"""Test that validation blocks reprocessing when workflow is queued."""
from reflector.services.transcript_process import (
ValidationAlreadyScheduled,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="processing",
source_kind="room",
workflow_run_id="queued-workflow-123",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
mock_hatchet.get_workflow_run_status = AsyncMock(
return_value=V1TaskStatus.QUEUED
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationAlreadyScheduled)
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_allows_failed_workflow():
"""Test that validation allows reprocessing when workflow has failed."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="error",
source_kind="room",
workflow_run_id="failed-workflow-123",
recording_id="test-recording-id",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
mock_hatchet.get_workflow_run_status = AsyncMock(
return_value=V1TaskStatus.FAILED
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationOk)
assert result.transcript_id == "test-transcript-id"
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_allows_completed_workflow():
"""Test that validation allows reprocessing when workflow has completed."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="ended",
source_kind="room",
workflow_run_id="completed-workflow-123",
recording_id="test-recording-id",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
mock_hatchet.get_workflow_run_status = AsyncMock(
return_value=V1TaskStatus.COMPLETED
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationOk)
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_allows_when_status_check_fails():
"""Test that validation allows reprocessing when status check fails (workflow might be gone)."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="error",
source_kind="room",
workflow_run_id="old-workflow-123",
recording_id="test-recording-id",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
# Status check fails (workflow might be deleted)
mock_hatchet.get_workflow_run_status = AsyncMock(
side_effect=ApiException("Workflow not found")
)
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
# Should allow processing when we can't get status
assert isinstance(result, ValidationOk)
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_skipped_when_no_workflow_id():
"""Test that Hatchet validation is skipped when transcript has no workflow_run_id."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="uploaded",
source_kind="room",
workflow_run_id=None, # No workflow yet
recording_id="test-recording-id",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = True
with patch(
"reflector.services.transcript_process.HatchetClientManager"
) as mock_hatchet:
# Should not be called
mock_hatchet.get_workflow_run_status = AsyncMock()
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
# Should not check Hatchet status
mock_hatchet.get_workflow_run_status.assert_not_called()
assert isinstance(result, ValidationOk)
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_hatchet_validation_skipped_when_disabled():
"""Test that Hatchet validation is skipped when HATCHET_ENABLED is False."""
from reflector.services.transcript_process import (
ValidationOk,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="uploaded",
source_kind="room",
workflow_run_id="some-workflow-123",
recording_id="test-recording-id",
)
with patch("reflector.services.transcript_process.settings") as mock_settings:
mock_settings.HATCHET_ENABLED = False # Hatchet disabled
with patch(
"reflector.services.transcript_process.task_is_scheduled_or_active"
) as mock_celery_check:
mock_celery_check.return_value = False
result = await validate_transcript_for_processing(mock_transcript)
# Should not check Hatchet at all
assert isinstance(result, ValidationOk)
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_validation_locked_transcript():
"""Test that validation rejects locked transcripts."""
from reflector.services.transcript_process import (
ValidationLocked,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="ended",
source_kind="room",
locked=True,
)
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationLocked)
assert "locked" in result.detail.lower()
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_validation_idle_transcript():
"""Test that validation rejects idle transcripts (not ready)."""
from reflector.services.transcript_process import (
ValidationNotReady,
validate_transcript_for_processing,
)
mock_transcript = Transcript(
id="test-transcript-id",
name="Test",
status="idle",
source_kind="room",
)
result = await validate_transcript_for_processing(mock_transcript)
assert isinstance(result, ValidationNotReady)
assert "not ready" in result.detail.lower()
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_prepare_multitrack_config():
"""Test preparing multitrack processing config."""
from reflector.db.recordings import Recording
from reflector.services.transcript_process import (
MultitrackProcessingConfig,
ValidationOk,
prepare_transcript_processing,
)
validation = ValidationOk(
recording_id="test-recording-id",
transcript_id="test-transcript-id",
)
mock_recording = Recording(
id="test-recording-id",
bucket_name="test-bucket",
object_key="recordings/test",
recorded_at="2024-01-01T00:00:00Z",
track_keys=["track1.webm", "track2.webm"],
)
with patch(
"reflector.services.transcript_process.recordings_controller"
) as mock_rc:
mock_rc.get_by_id = AsyncMock(return_value=mock_recording)
result = await prepare_transcript_processing(validation)
assert isinstance(result, MultitrackProcessingConfig)
assert result.bucket_name == "test-bucket"
assert result.track_keys == ["track1.webm", "track2.webm"]
assert result.transcript_id == "test-transcript-id"
assert result.room_id is None # ValidationOk didn't specify room_id
@pytest.mark.usefixtures("setup_database")
@pytest.mark.asyncio
async def test_prepare_file_config():
"""Test preparing file processing config (no track keys)."""
from reflector.db.recordings import Recording
from reflector.services.transcript_process import (
FileProcessingConfig,
ValidationOk,
prepare_transcript_processing,
)
validation = ValidationOk(
recording_id="test-recording-id",
transcript_id="test-transcript-id",
)
mock_recording = Recording(
id="test-recording-id",
bucket_name="test-bucket",
object_key="recordings/test.mp4",
recorded_at="2024-01-01T00:00:00Z",
track_keys=None, # No track keys = file pipeline
)
with patch(
"reflector.services.transcript_process.recordings_controller"
) as mock_rc:
mock_rc.get_by_id = AsyncMock(return_value=mock_recording)
result = await prepare_transcript_processing(validation)
assert isinstance(result, FileProcessingConfig)
assert result.transcript_id == "test-transcript-id"
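The two `prepare_*` tests above distinguish pipelines by a single field: a recording with `track_keys` yields a multitrack config, one without yields a file config. A minimal sketch of that dispatch, with illustrative dataclasses standing in for the real config types:

```python
from dataclasses import dataclass


# Hypothetical stand-ins for MultitrackProcessingConfig / FileProcessingConfig;
# only the dispatch on track_keys mirrors the behavior the tests pin down.
@dataclass
class MultitrackConfig:
    transcript_id: str
    bucket_name: str
    track_keys: list


@dataclass
class FileConfig:
    transcript_id: str
    bucket_name: str
    object_key: str


def build_config(transcript_id, bucket_name, object_key, track_keys):
    # None (or an empty list) means there are no per-speaker tracks,
    # so the single-file pipeline handles the recording.
    if track_keys:
        return MultitrackConfig(transcript_id, bucket_name, list(track_keys))
    return FileConfig(transcript_id, bucket_name, object_key)
```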


@@ -1,12 +1,14 @@
"""Tests for LLM parse error recovery using llama-index Workflow"""
from time import monotonic
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from pydantic import BaseModel, Field
from workflows.errors import WorkflowRuntimeError
from workflows.errors import WorkflowRuntimeError, WorkflowTimeoutError
from reflector.llm import LLM, LLMParseError, StructuredOutputWorkflow
from reflector.utils.retry import RetryException
class TestResponse(BaseModel):
@@ -355,3 +357,132 @@ class TestNetworkErrorRetries:
# Only called once - Workflow doesn't retry network errors
assert mock_settings.llm.acomplete.call_count == 1
class TestWorkflowTimeoutRetry:
"""Test timeout retry mechanism in get_structured_response"""
@pytest.mark.asyncio
async def test_timeout_retry_succeeds_on_retry(self, test_settings):
"""Test that WorkflowTimeoutError triggers retry and succeeds"""
llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
call_count = {"count": 0}
async def workflow_run_side_effect(*args, **kwargs):
call_count["count"] += 1
if call_count["count"] == 1:
raise WorkflowTimeoutError("Operation timed out after 120 seconds")
return {
"success": TestResponse(
title="Test", summary="Summary", confidence=0.95
)
}
with (
patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
patch("reflector.llm.TreeSummarize") as mock_summarize,
patch("reflector.llm.Settings") as mock_settings,
):
mock_workflow = MagicMock()
mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
mock_workflow_class.return_value = mock_workflow
mock_summarizer = MagicMock()
mock_summarize.return_value = mock_summarizer
mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
mock_settings.llm.acomplete = AsyncMock(
return_value=make_completion_response(
'{"title": "Test", "summary": "Summary", "confidence": 0.95}'
)
)
result = await llm.get_structured_response(
prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
)
assert result.title == "Test"
assert result.summary == "Summary"
assert call_count["count"] == 2
@pytest.mark.asyncio
async def test_timeout_retry_exhausts_after_max_attempts(self, test_settings):
"""Test that timeout retry stops after max attempts"""
llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
call_count = {"count": 0}
async def workflow_run_side_effect(*args, **kwargs):
call_count["count"] += 1
raise WorkflowTimeoutError("Operation timed out after 120 seconds")
with (
patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
patch("reflector.llm.TreeSummarize") as mock_summarize,
patch("reflector.llm.Settings") as mock_settings,
):
mock_workflow = MagicMock()
mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
mock_workflow_class.return_value = mock_workflow
mock_summarizer = MagicMock()
mock_summarize.return_value = mock_summarizer
mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
mock_settings.llm.acomplete = AsyncMock(
return_value=make_completion_response(
'{"title": "Test", "summary": "Summary", "confidence": 0.95}'
)
)
with pytest.raises(RetryException, match="Retry attempts exceeded"):
await llm.get_structured_response(
prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
)
assert call_count["count"] == 3
@pytest.mark.asyncio
async def test_timeout_retry_with_backoff(self, test_settings):
"""Test that exponential backoff is applied between retries"""
llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
call_times = []
async def workflow_run_side_effect(*args, **kwargs):
call_times.append(monotonic())
if len(call_times) < 3:
raise WorkflowTimeoutError("Operation timed out after 120 seconds")
return {
"success": TestResponse(
title="Test", summary="Summary", confidence=0.95
)
}
with (
patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
patch("reflector.llm.TreeSummarize") as mock_summarize,
patch("reflector.llm.Settings") as mock_settings,
):
mock_workflow = MagicMock()
mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
mock_workflow_class.return_value = mock_workflow
mock_summarizer = MagicMock()
mock_summarize.return_value = mock_summarizer
mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
mock_settings.llm.acomplete = AsyncMock(
return_value=make_completion_response(
'{"title": "Test", "summary": "Summary", "confidence": 0.95}'
)
)
result = await llm.get_structured_response(
prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
)
assert result.title == "Test"
if len(call_times) >= 2:
time_between_calls = call_times[1] - call_times[0]
assert (
time_between_calls >= 1.5
), f"Expected ~2s backoff, got {time_between_calls}s"


@@ -266,7 +266,11 @@ async def mock_summary_processor():
# When flush is called, simulate summary generation by calling the callbacks
async def flush_with_callback():
mock_summary.flush_called = True
from reflector.processors.types import FinalLongSummary, FinalShortSummary
from reflector.processors.types import (
ActionItems,
FinalLongSummary,
FinalShortSummary,
)
if hasattr(mock_summary, "_callback"):
await mock_summary._callback(
@@ -276,12 +280,19 @@ async def mock_summary_processor():
await mock_summary._on_short_summary(
FinalShortSummary(short_summary="Test short summary", duration=10.0)
)
if hasattr(mock_summary, "_on_action_items"):
await mock_summary._on_action_items(
ActionItems(action_items={"test": "action item"})
)
mock_summary.flush = flush_with_callback
def init_with_callback(transcript=None, callback=None, on_short_summary=None):
def init_with_callback(
transcript=None, callback=None, on_short_summary=None, on_action_items=None
):
mock_summary._callback = callback
mock_summary._on_short_summary = on_short_summary
mock_summary._on_action_items = on_action_items
return mock_summary
mock_summary_class.side_effect = init_with_callback

server/uv.lock generated

File diff suppressed because it is too large


@@ -2,20 +2,29 @@
import { Spinner, Link } from "@chakra-ui/react";
import { useAuth } from "../lib/AuthProvider";
import { usePathname } from "next/navigation";
import { getLogoutRedirectUrl } from "../lib/auth";
export default function UserInfo() {
const auth = useAuth();
const pathname = usePathname();
const status = auth.status;
const isLoading = status === "loading";
const isAuthenticated = status === "authenticated";
const isRefreshing = status === "refreshing";
const callbackUrl = getLogoutRedirectUrl(pathname);
return isLoading ? (
<Spinner size="xs" className="mx-3" />
) : !isAuthenticated && !isRefreshing ? (
<Link
href="/"
href="#"
className="font-light px-2"
onClick={() => auth.signIn("authentik")}
onClick={(e) => {
e.preventDefault();
auth.signIn("authentik");
}}
>
Log in
</Link>
@@ -23,7 +32,7 @@ export default function UserInfo() {
<Link
href="#"
className="font-light px-2"
onClick={() => auth.signOut({ callbackUrl: "/" })}
onClick={() => auth.signOut({ callbackUrl })}
>
Log out
</Link>


@@ -105,7 +105,19 @@ export default function DailyRoom({ meeting }: DailyRoomProps) {
}
});
await frame.join({ url: roomUrl });
await frame.join({
url: roomUrl,
sendSettings: {
video: {
// Optimize bandwidth for camera video
// allowAdaptiveLayers automatically adjusts quality based on network conditions
allowAdaptiveLayers: true,
// Use bandwidth-optimized preset as fallback for browsers without adaptive support
maxQuality: "medium",
},
// Note: screenVideo intentionally not configured to preserve full quality for screen shares
},
});
} catch (error) {
console.error("Error creating Daily frame:", error);
}


@@ -18,3 +18,8 @@ export const LOGIN_REQUIRED_PAGES = [
export const PROTECTED_PAGES = new RegExp(
LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
);
export function getLogoutRedirectUrl(pathname: string): string {
const transcriptPagePattern = /^\/transcripts\/[^/]+$/;
return transcriptPagePattern.test(pathname) ? pathname : "/";
}


@@ -31,7 +31,7 @@
"ioredis": "^5.7.0",
"jest-worker": "^29.6.2",
"lucide-react": "^0.525.0",
"next": "^15.5.7",
"next": "^15.5.9",
"next-auth": "^4.24.7",
"next-themes": "^0.4.6",
"nuqs": "^2.4.3",

www/pnpm-lock.yaml generated

@@ -27,7 +27,7 @@ importers:
version: 0.2.3(@fortawesome/fontawesome-svg-core@6.7.2)(react@18.3.1)
"@sentry/nextjs":
specifier: ^10.11.0
version: 10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)
version: 10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)
"@tanstack/react-query":
specifier: ^5.85.9
version: 5.85.9(react@18.3.1)
@@ -62,17 +62,17 @@ importers:
specifier: ^0.525.0
version: 0.525.0(react@18.3.1)
next:
specifier: ^15.5.7
version: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
specifier: ^15.5.9
version: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
next-auth:
specifier: ^4.24.7
version: 4.24.11(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
version: 4.24.11(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
next-themes:
specifier: ^0.4.6
version: 0.4.6(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
nuqs:
specifier: ^2.4.3
version: 2.4.3(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)
version: 2.4.3(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)
openapi-fetch:
specifier: ^0.14.0
version: 0.14.0
@@ -509,6 +509,12 @@ packages:
integrity: sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg==,
}
"@emnapi/runtime@1.7.1":
resolution:
{
integrity: sha512-PVtJr5CmLwYAU9PZDMITZoR5iAOShYREoR45EyyLrbntV50mdePTgUn4AmOw90Ifcj+x2kRjdzr1HP3RrNiHGA==,
}
"@emnapi/wasi-threads@1.0.4":
resolution:
{
@@ -758,189 +764,213 @@ packages:
}
engines: { node: ">=18.18" }
"@img/sharp-darwin-arm64@0.34.3":
"@img/colour@1.0.0":
resolution:
{
integrity: sha512-ryFMfvxxpQRsgZJqBd4wsttYQbCxsJksrv9Lw/v798JcQ8+w84mBWuXwl+TT0WJ/WrYOLaYpwQXi3sA9nTIaIg==,
integrity: sha512-A5P/LfWGFSl6nsckYtjw9da+19jB8hkJ6ACTGcDfEJ0aE+l2n2El7dsVM7UVHZQ9s2lmYMWlrS21YLy2IR1LUw==,
}
engines: { node: ">=18" }
"@img/sharp-darwin-arm64@0.34.5":
resolution:
{
integrity: sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [arm64]
os: [darwin]
"@img/sharp-darwin-x64@0.34.3":
"@img/sharp-darwin-x64@0.34.5":
resolution:
{
integrity: sha512-yHpJYynROAj12TA6qil58hmPmAwxKKC7reUqtGLzsOHfP7/rniNGTL8tjWX6L3CTV4+5P4ypcS7Pp+7OB+8ihA==,
integrity: sha512-YNEFAF/4KQ/PeW0N+r+aVVsoIY0/qxxikF2SWdp+NRkmMB7y9LBZAVqQ4yhGCm/H3H270OSykqmQMKLBhBJDEw==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [x64]
os: [darwin]
"@img/sharp-libvips-darwin-arm64@1.2.0":
"@img/sharp-libvips-darwin-arm64@1.2.4":
resolution:
{
integrity: sha512-sBZmpwmxqwlqG9ueWFXtockhsxefaV6O84BMOrhtg/YqbTaRdqDE7hxraVE3y6gVM4eExmfzW4a8el9ArLeEiQ==,
integrity: sha512-zqjjo7RatFfFoP0MkQ51jfuFZBnVE2pRiaydKJ1G/rHZvnsrHAOcQALIi9sA5co5xenQdTugCvtb1cuf78Vf4g==,
}
cpu: [arm64]
os: [darwin]
"@img/sharp-libvips-darwin-x64@1.2.0":
"@img/sharp-libvips-darwin-x64@1.2.4":
resolution:
{
integrity: sha512-M64XVuL94OgiNHa5/m2YvEQI5q2cl9d/wk0qFTDVXcYzi43lxuiFTftMR1tOnFQovVXNZJ5TURSDK2pNe9Yzqg==,
integrity: sha512-1IOd5xfVhlGwX+zXv2N93k0yMONvUlANylbJw1eTah8K/Jtpi15KC+WSiaX/nBmbm2HxRM1gZ0nSdjSsrZbGKg==,
}
cpu: [x64]
os: [darwin]
"@img/sharp-libvips-linux-arm64@1.2.0":
"@img/sharp-libvips-linux-arm64@1.2.4":
resolution:
{
integrity: sha512-RXwd0CgG+uPRX5YYrkzKyalt2OJYRiJQ8ED/fi1tq9WQW2jsQIn0tqrlR5l5dr/rjqq6AHAxURhj2DVjyQWSOA==,
integrity: sha512-excjX8DfsIcJ10x1Kzr4RcWe1edC9PquDRRPx3YVCvQv+U5p7Yin2s32ftzikXojb1PIFc/9Mt28/y+iRklkrw==,
}
cpu: [arm64]
os: [linux]
"@img/sharp-libvips-linux-arm@1.2.0":
"@img/sharp-libvips-linux-arm@1.2.4":
resolution:
{
integrity: sha512-mWd2uWvDtL/nvIzThLq3fr2nnGfyr/XMXlq8ZJ9WMR6PXijHlC3ksp0IpuhK6bougvQrchUAfzRLnbsen0Cqvw==,
integrity: sha512-bFI7xcKFELdiNCVov8e44Ia4u2byA+l3XtsAj+Q8tfCwO6BQ8iDojYdvoPMqsKDkuoOo+X6HZA0s0q11ANMQ8A==,
}
cpu: [arm]
os: [linux]
"@img/sharp-libvips-linux-ppc64@1.2.0":
"@img/sharp-libvips-linux-ppc64@1.2.4":
resolution:
{
integrity: sha512-Xod/7KaDDHkYu2phxxfeEPXfVXFKx70EAFZ0qyUdOjCcxbjqyJOEUpDe6RIyaunGxT34Anf9ue/wuWOqBW2WcQ==,
integrity: sha512-FMuvGijLDYG6lW+b/UvyilUWu5Ayu+3r2d1S8notiGCIyYU/76eig1UfMmkZ7vwgOrzKzlQbFSuQfgm7GYUPpA==,
}
cpu: [ppc64]
os: [linux]
"@img/sharp-libvips-linux-s390x@1.2.0":
"@img/sharp-libvips-linux-riscv64@1.2.4":
resolution:
{
integrity: sha512-eMKfzDxLGT8mnmPJTNMcjfO33fLiTDsrMlUVcp6b96ETbnJmd4uvZxVJSKPQfS+odwfVaGifhsB07J1LynFehw==,
integrity: sha512-oVDbcR4zUC0ce82teubSm+x6ETixtKZBh/qbREIOcI3cULzDyb18Sr/Wcyx7NRQeQzOiHTNbZFF1UwPS2scyGA==,
}
cpu: [riscv64]
os: [linux]
"@img/sharp-libvips-linux-s390x@1.2.4":
resolution:
{
integrity: sha512-qmp9VrzgPgMoGZyPvrQHqk02uyjA0/QrTO26Tqk6l4ZV0MPWIW6LTkqOIov+J1yEu7MbFQaDpwdwJKhbJvuRxQ==,
}
cpu: [s390x]
os: [linux]
"@img/sharp-libvips-linux-x64@1.2.0":
"@img/sharp-libvips-linux-x64@1.2.4":
resolution:
{
integrity: sha512-ZW3FPWIc7K1sH9E3nxIGB3y3dZkpJlMnkk7z5tu1nSkBoCgw2nSRTFHI5pB/3CQaJM0pdzMF3paf9ckKMSE9Tg==,
integrity: sha512-tJxiiLsmHc9Ax1bz3oaOYBURTXGIRDODBqhveVHonrHJ9/+k89qbLl0bcJns+e4t4rvaNBxaEZsFtSfAdquPrw==,
}
cpu: [x64]
os: [linux]
"@img/sharp-libvips-linuxmusl-arm64@1.2.0":
"@img/sharp-libvips-linuxmusl-arm64@1.2.4":
resolution:
{
integrity: sha512-UG+LqQJbf5VJ8NWJ5Z3tdIe/HXjuIdo4JeVNADXBFuG7z9zjoegpzzGIyV5zQKi4zaJjnAd2+g2nna8TZvuW9Q==,
integrity: sha512-FVQHuwx1IIuNow9QAbYUzJ+En8KcVm9Lk5+uGUQJHaZmMECZmOlix9HnH7n1TRkXMS0pGxIJokIVB9SuqZGGXw==,
}
cpu: [arm64]
os: [linux]
"@img/sharp-libvips-linuxmusl-x64@1.2.0":
"@img/sharp-libvips-linuxmusl-x64@1.2.4":
resolution:
{
integrity: sha512-SRYOLR7CXPgNze8akZwjoGBoN1ThNZoqpOgfnOxmWsklTGVfJiGJoC/Lod7aNMGA1jSsKWM1+HRX43OP6p9+6Q==,
integrity: sha512-+LpyBk7L44ZIXwz/VYfglaX/okxezESc6UxDSoyo2Ks6Jxc4Y7sGjpgU9s4PMgqgjj1gZCylTieNamqA1MF7Dg==,
}
cpu: [x64]
os: [linux]
"@img/sharp-linux-arm64@0.34.3":
"@img/sharp-linux-arm64@0.34.5":
resolution:
{
integrity: sha512-QdrKe3EvQrqwkDrtuTIjI0bu6YEJHTgEeqdzI3uWJOH6G1O8Nl1iEeVYRGdj1h5I21CqxSvQp1Yv7xeU3ZewbA==,
integrity: sha512-bKQzaJRY/bkPOXyKx5EVup7qkaojECG6NLYswgktOZjaXecSAeCWiZwwiFf3/Y+O1HrauiE3FVsGxFg8c24rZg==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [arm64]
os: [linux]
"@img/sharp-linux-arm@0.34.3":
"@img/sharp-linux-arm@0.34.5":
resolution:
{
integrity: sha512-oBK9l+h6KBN0i3dC8rYntLiVfW8D8wH+NPNT3O/WBHeW0OQWCjfWksLUaPidsrDKpJgXp3G3/hkmhptAW0I3+A==,
integrity: sha512-9dLqsvwtg1uuXBGZKsxem9595+ujv0sJ6Vi8wcTANSFpwV/GONat5eCkzQo/1O6zRIkh0m/8+5BjrRr7jDUSZw==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [arm]
os: [linux]
"@img/sharp-linux-ppc64@0.34.3":
"@img/sharp-linux-ppc64@0.34.5":
resolution:
{
integrity: sha512-GLtbLQMCNC5nxuImPR2+RgrviwKwVql28FWZIW1zWruy6zLgA5/x2ZXk3mxj58X/tszVF69KK0Is83V8YgWhLA==,
integrity: sha512-7zznwNaqW6YtsfrGGDA6BRkISKAAE1Jo0QdpNYXNMHu2+0dTrPflTLNkpc8l7MUP5M16ZJcUvysVWWrMefZquA==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [ppc64]
os: [linux]
"@img/sharp-linux-s390x@0.34.3":
"@img/sharp-linux-riscv64@0.34.5":
resolution:
{
integrity: sha512-3gahT+A6c4cdc2edhsLHmIOXMb17ltffJlxR0aC2VPZfwKoTGZec6u5GrFgdR7ciJSsHT27BD3TIuGcuRT0KmQ==,
integrity: sha512-51gJuLPTKa7piYPaVs8GmByo7/U7/7TZOq+cnXJIHZKavIRHAP77e3N2HEl3dgiqdD/w0yUfiJnII77PuDDFdw==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [riscv64]
os: [linux]
"@img/sharp-linux-s390x@0.34.5":
resolution:
{
integrity: sha512-nQtCk0PdKfho3eC5MrbQoigJ2gd1CgddUMkabUj+rBevs8tZ2cULOx46E7oyX+04WGfABgIwmMC0VqieTiR4jg==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [s390x]
os: [linux]
"@img/sharp-linux-x64@0.34.3":
"@img/sharp-linux-x64@0.34.5":
resolution:
{
integrity: sha512-8kYso8d806ypnSq3/Ly0QEw90V5ZoHh10yH0HnrzOCr6DKAPI6QVHvwleqMkVQ0m+fc7EH8ah0BB0QPuWY6zJQ==,
integrity: sha512-MEzd8HPKxVxVenwAa+JRPwEC7QFjoPWuS5NZnBt6B3pu7EG2Ge0id1oLHZpPJdn3OQK+BQDiw9zStiHBTJQQQQ==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [x64]
os: [linux]
"@img/sharp-linuxmusl-arm64@0.34.3":
"@img/sharp-linuxmusl-arm64@0.34.5":
resolution:
{
integrity: sha512-vAjbHDlr4izEiXM1OTggpCcPg9tn4YriK5vAjowJsHwdBIdx0fYRsURkxLG2RLm9gyBq66gwtWI8Gx0/ov+JKQ==,
integrity: sha512-fprJR6GtRsMt6Kyfq44IsChVZeGN97gTD331weR1ex1c1rypDEABN6Tm2xa1wE6lYb5DdEnk03NZPqA7Id21yg==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [arm64]
os: [linux]
"@img/sharp-linuxmusl-x64@0.34.3":
"@img/sharp-linuxmusl-x64@0.34.5":
resolution:
{
integrity: sha512-gCWUn9547K5bwvOn9l5XGAEjVTTRji4aPTqLzGXHvIr6bIDZKNTA34seMPgM0WmSf+RYBH411VavCejp3PkOeQ==,
integrity: sha512-Jg8wNT1MUzIvhBFxViqrEhWDGzqymo3sV7z7ZsaWbZNDLXRJZoRGrjulp60YYtV4wfY8VIKcWidjojlLcWrd8Q==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [x64]
os: [linux]
"@img/sharp-wasm32@0.34.3":
"@img/sharp-wasm32@0.34.5":
resolution:
{
integrity: sha512-+CyRcpagHMGteySaWos8IbnXcHgfDn7pO2fiC2slJxvNq9gDipYBN42/RagzctVRKgxATmfqOSulgZv5e1RdMg==,
integrity: sha512-OdWTEiVkY2PHwqkbBI8frFxQQFekHaSSkUIJkwzclWZe64O1X4UlUjqqqLaPbUpMOQk6FBu/HtlGXNblIs0huw==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [wasm32]
"@img/sharp-win32-arm64@0.34.3":
"@img/sharp-win32-arm64@0.34.5":
resolution:
{
integrity: sha512-MjnHPnbqMXNC2UgeLJtX4XqoVHHlZNd+nPt1kRPmj63wURegwBhZlApELdtxM2OIZDRv/DFtLcNhVbd1z8GYXQ==,
integrity: sha512-WQ3AgWCWYSb2yt+IG8mnC6Jdk9Whs7O0gxphblsLvdhSpSTtmu69ZG1Gkb6NuvxsNACwiPV6cNSZNzt0KPsw7g==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [arm64]
os: [win32]
"@img/sharp-win32-ia32@0.34.3":
"@img/sharp-win32-ia32@0.34.5":
resolution:
{
integrity: sha512-xuCdhH44WxuXgOM714hn4amodJMZl3OEvf0GVTm0BEyMeA2to+8HEdRPShH0SLYptJY1uBw+SCFP9WVQi1Q/cw==,
integrity: sha512-FV9m/7NmeCmSHDD5j4+4pNI8Cp3aW+JvLoXcTUo0IqyjSfAZJ8dIUmijx1qaJsIiU+Hosw6xM5KijAWRJCSgNg==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [ia32]
os: [win32]
"@img/sharp-win32-x64@0.34.3":
"@img/sharp-win32-x64@0.34.5":
resolution:
{
integrity: sha512-OWwz05d++TxzLEv4VnsTz5CmZ6mI6S05sfQGEMrNrQcOEERbX46332IvE7pO/EUiw7jUrrS40z/M7kPyjfl04g==,
integrity: sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
cpu: [x64]
@@ -1184,10 +1214,10 @@ packages:
integrity: sha512-ZVWUcfwY4E/yPitQJl481FjFo3K22D6qF0DuFH6Y/nbnE11GY5uguDxZMGXPQ8WQ0128MXQD7TnfHyK4oWoIJQ==,
}
"@next/env@15.5.7":
"@next/env@15.5.9":
resolution:
{
integrity: sha512-4h6Y2NyEkIEN7Z8YxkA27pq6zTkS09bUSYC0xjd0NpwFxjnIKeZEeH591o5WECSmjpUhLn3H2QLJcDye3Uzcvg==,
integrity: sha512-4GlTZ+EJM7WaW2HEZcyU317tIQDjkQIyENDLxYJfSWlfqguN+dHkZgyQTV/7ykvobU7yEH5gKvreNrH4B6QgIg==,
}
"@next/eslint-plugin-next@15.5.3":
@@ -2610,10 +2640,10 @@ packages:
peerDependencies:
react: ^18 || ^19
"@tsconfig/node10@1.0.11":
"@tsconfig/node10@1.0.12":
resolution:
{
integrity: sha512-DcRjDCujK/kCk/cUe8Xz8ZSpm8mS3mNNpta+jGCA6USEDfktlNvm1+IuZ9eTcDbNk41BHwpHHeW+N1lKCz4zOw==,
integrity: sha512-UCYBaeFvM11aU2y3YPZ//O5Rhj+xKyzy7mvcIoAjASbigy8mHMryP5cK7dgjlz2hWxh1g5pLw084E0a/wlUSFQ==,
}
"@tsconfig/node12@1.0.11":
@@ -2785,10 +2815,10 @@ packages:
integrity: sha512-DRh5K+ka5eJic8CjH7td8QpYEV6Zo10gfRkjHCO3weqZHWDtAaSTFtl4+VMqOJ4N5jcuhZ9/l+yy8rVgw7BQeQ==,
}
"@types/node@24.3.1":
"@types/node@25.0.2":
resolution:
{
integrity: sha512-3vXmQDXy+woz+gnrTvuvNrPzekOi+Ds0ReMxw0LzBiK3a+1k0kQn9f2NWk+lgD4rJehFUmYy2gMhJ2ZI+7YP9g==,
integrity: sha512-gWEkeiyYE4vqjON/+Obqcoeffmk0NF15WSBwSs7zwVA2bAbTaE0SJ7P0WNGoJn8uE7fiaV5a7dKYIJriEqOrmA==,
}
"@types/parse-json@4.0.2":
@@ -4202,6 +4232,12 @@ packages:
integrity: sha512-uhE1Ye5vgqju6OI71HTQqcBCZrvHugk0MjLak7Q+HfoBgoq5Bi+5YnwjP4fjDgrtYr/l8MVRBvzz9dPD4KyK0A==,
}
caniuse-lite@1.0.30001760:
resolution:
{
integrity: sha512-7AAMPcueWELt1p3mi13HR/LHH0TJLT11cnwDJEs3xA4+CK/PLKeO9Kl1oru24htkyUKtkGCvAx4ohB0Ttry8Dw==,
}
ccount@2.0.1:
resolution:
{
@@ -4371,19 +4407,6 @@ packages:
integrity: sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==,
}
color-string@1.9.1:
resolution:
{
integrity: sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==,
}
color@4.2.3:
resolution:
{
integrity: sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A==,
}
engines: { node: ">=12.5.0" }
colorette@1.4.0:
resolution:
{
@@ -4622,10 +4645,10 @@ packages:
engines: { node: ">=0.10" }
hasBin: true
detect-libc@2.0.4:
detect-libc@2.1.2:
resolution:
{
integrity: sha512-3UDv+G9CsCKO1WKMGw9fwq/SWJYbI0c5Y7LU1AXYoDdbhE2AHQ6N6Nb34sG8Fj7T5APy8qXDCKuuIHd1BR0tVA==,
integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==,
}
engines: { node: ">=8" }
@@ -4750,10 +4773,10 @@ packages:
}
engines: { node: ">=10.0.0" }
enhanced-resolve@5.18.3:
enhanced-resolve@5.18.4:
resolution:
{
integrity: sha512-d4lC8xfavMeBjzGr2vECC3fsGXziXZQyJxD868h2M/mBI3PwAuODxAkLkq5HYuvrPYcUtiLzsTo8U3PgX3Ocww==,
integrity: sha512-LgQMM4WXU3QI+SYgEc2liRgznaD5ojbmY3sb8LxyguVkIg5FxdpTkvk72te2R38/TGKxH634oLxXRGY6d7AP+Q==,
}
engines: { node: ">=10.13.0" }
@@ -5711,12 +5734,6 @@ packages:
integrity: sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==,
}
is-arrayish@0.3.2:
resolution:
{
integrity: sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==,
}
is-async-function@2.1.1:
resolution:
{
@@ -6392,10 +6409,10 @@ packages:
integrity: sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==,
}
loader-runner@4.3.0:
loader-runner@4.3.1:
resolution:
{
integrity: sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==,
integrity: sha512-IWqP2SCPhyVFTBtRcgMHdzlf9ul25NwaFx4wCEH/KjAXuuHY4yNjvPXsBokp8jCB936PyWRaPKUNh8NvylLp2Q==,
}
engines: { node: ">=6.11.5" }
@@ -6863,10 +6880,10 @@ packages:
react: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc
react-dom: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc
next@15.5.7:
next@15.5.9:
resolution:
{
integrity: sha512-+t2/0jIJ48kUpGKkdlhgkv+zPTEOoXyr60qXe68eB/pl3CMJaLeIGjzp5D6Oqt25hCBiBTt8wEeeAzfJvUKnPQ==,
integrity: sha512-agNLK89seZEtC5zUHwtut0+tNrc0Xw4FT/Dg+B/VLEo9pAcS9rtTKpek3V6kVcVwsB2YlqMaHdfZL4eLEVYuCg==,
}
engines: { node: ^18.18.0 || ^19.8.0 || >= 20.0.0 }
hasBin: true
@@ -7870,10 +7887,10 @@ packages:
integrity: sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==,
}
schema-utils@4.3.2:
schema-utils@4.3.3:
resolution:
{
integrity: sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==,
integrity: sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA==,
}
engines: { node: ">= 10.13.0" }
@@ -7905,6 +7922,14 @@ packages:
engines: { node: ">=10" }
hasBin: true
semver@7.7.3:
resolution:
{
integrity: sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==,
}
engines: { node: ">=10" }
hasBin: true
serialize-javascript@6.0.2:
resolution:
{
@@ -7932,10 +7957,10 @@ packages:
}
engines: { node: ">= 0.4" }
sharp@0.34.3:
sharp@0.34.5:
resolution:
{
integrity: sha512-eX2IQ6nFohW4DbvHIOLRB3MHFpYqaqvXd3Tp5e/T/dSH83fxaNJQRvDMhASmkNTsNTVF2/OOopzRCt7xokgPfg==,
integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==,
}
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
@@ -8006,12 +8031,6 @@ packages:
integrity: sha512-D1SaWpOW8afq1CZGWB8xTfrT3FekjQmPValrqncJMX7QFl8YwhrPTZvMCANLtgBwwdS+7zURyqxDDEmY558tTw==,
}
simple-swizzle@0.2.2:
resolution:
{
integrity: sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==,
}
slash@3.0.0:
resolution:
{
@@ -8325,17 +8344,17 @@ packages:
engines: { node: ">=14.0.0" }
hasBin: true
tapable@2.2.3:
tapable@2.3.0:
resolution:
{
integrity: sha512-ZL6DDuAlRlLGghwcfmSn9sK3Hr6ArtyudlSAiCqQ6IfE+b+HHbydbYDIG15IfS5do+7XQQBdBiubF/cV2dnDzg==,
integrity: sha512-g9ljZiwki/LfxmQADO3dEY1CbpmXT5Hm2fJ+QaGKwSXUylMybePR7/67YW7jOrrvjEgL1Fmz5kzyAjWVWLlucg==,
}
engines: { node: ">=6" }
terser-webpack-plugin@5.3.14:
terser-webpack-plugin@5.3.16:
resolution:
{
integrity: sha512-vkZjpUjb6OMS7dhV+tILUW6BhpDR7P2L/aQSAv+Uwk+m8KATX9EccViHTJR2qDtACKPIYndLGCyl3FMo+r2LMw==,
integrity: sha512-h9oBFCWrq78NyWWVcSwZarJkZ01c2AyGrzs1crmHZO3QUg9D61Wu4NPjBy69n7JqylFF5y+CsUZYmYEIZ3mR+Q==,
}
engines: { node: ">= 10.13.0" }
peerDependencies:
@@ -8351,10 +8370,10 @@ packages:
uglify-js:
optional: true
terser@5.44.0:
terser@5.44.1:
resolution:
{
integrity: sha512-nIVck8DK+GM/0Frwd+nIhZ84pR/BX7rmXMfYwyg+Sri5oGVE99/E3KvXqpC2xHFxyqXyGHTKBSioxxplrO4I4w==,
integrity: sha512-t/R3R/n0MSwnnazuPpPNVO60LX0SKL45pyl9YlvxIdkH0Of7D5qM2EVe+yASRIlY5pZ73nclYJfNANGWPwFDZw==,
}
engines: { node: ">=10" }
hasBin: true
@@ -8633,6 +8652,12 @@ packages:
integrity: sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag==,
}
undici-types@7.16.0:
resolution:
{
integrity: sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==,
}
unified@11.0.5:
resolution:
{
@@ -9365,6 +9390,11 @@ snapshots:
tslib: 2.8.1
optional: true
"@emnapi/runtime@1.7.1":
dependencies:
tslib: 2.8.1
optional: true
"@emnapi/wasi-threads@1.0.4":
dependencies:
tslib: 2.8.1
@@ -9533,90 +9563,101 @@ snapshots:
"@humanwhocodes/retry@0.4.3": {}
"@img/sharp-darwin-arm64@0.34.3":
"@img/colour@1.0.0":
optional: true
"@img/sharp-darwin-arm64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-darwin-arm64": 1.2.0
"@img/sharp-libvips-darwin-arm64": 1.2.4
optional: true
"@img/sharp-darwin-x64@0.34.3":
"@img/sharp-darwin-x64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-darwin-x64": 1.2.0
"@img/sharp-libvips-darwin-x64": 1.2.4
optional: true
"@img/sharp-libvips-darwin-arm64@1.2.0":
"@img/sharp-libvips-darwin-arm64@1.2.4":
optional: true
"@img/sharp-libvips-darwin-x64@1.2.0":
"@img/sharp-libvips-darwin-x64@1.2.4":
optional: true
"@img/sharp-libvips-linux-arm64@1.2.0":
"@img/sharp-libvips-linux-arm64@1.2.4":
optional: true
"@img/sharp-libvips-linux-arm@1.2.0":
"@img/sharp-libvips-linux-arm@1.2.4":
optional: true
"@img/sharp-libvips-linux-ppc64@1.2.0":
"@img/sharp-libvips-linux-ppc64@1.2.4":
optional: true
"@img/sharp-libvips-linux-s390x@1.2.0":
"@img/sharp-libvips-linux-riscv64@1.2.4":
optional: true
"@img/sharp-libvips-linux-x64@1.2.0":
"@img/sharp-libvips-linux-s390x@1.2.4":
optional: true
"@img/sharp-libvips-linuxmusl-arm64@1.2.0":
"@img/sharp-libvips-linux-x64@1.2.4":
optional: true
"@img/sharp-libvips-linuxmusl-x64@1.2.0":
"@img/sharp-libvips-linuxmusl-arm64@1.2.4":
optional: true
"@img/sharp-linux-arm64@0.34.3":
"@img/sharp-libvips-linuxmusl-x64@1.2.4":
optional: true
"@img/sharp-linux-arm64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linux-arm64": 1.2.0
"@img/sharp-libvips-linux-arm64": 1.2.4
optional: true
"@img/sharp-linux-arm@0.34.3":
"@img/sharp-linux-arm@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linux-arm": 1.2.0
"@img/sharp-libvips-linux-arm": 1.2.4
optional: true
"@img/sharp-linux-ppc64@0.34.3":
"@img/sharp-linux-ppc64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linux-ppc64": 1.2.0
"@img/sharp-libvips-linux-ppc64": 1.2.4
optional: true
"@img/sharp-linux-s390x@0.34.3":
"@img/sharp-linux-riscv64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linux-s390x": 1.2.0
"@img/sharp-libvips-linux-riscv64": 1.2.4
optional: true
"@img/sharp-linux-x64@0.34.3":
"@img/sharp-linux-s390x@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linux-x64": 1.2.0
"@img/sharp-libvips-linux-s390x": 1.2.4
optional: true
"@img/sharp-linuxmusl-arm64@0.34.3":
"@img/sharp-linux-x64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linuxmusl-arm64": 1.2.0
"@img/sharp-libvips-linux-x64": 1.2.4
optional: true
"@img/sharp-linuxmusl-x64@0.34.3":
"@img/sharp-linuxmusl-arm64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linuxmusl-x64": 1.2.0
"@img/sharp-libvips-linuxmusl-arm64": 1.2.4
optional: true
"@img/sharp-wasm32@0.34.3":
"@img/sharp-linuxmusl-x64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linuxmusl-x64": 1.2.4
optional: true
"@img/sharp-wasm32@0.34.5":
dependencies:
"@emnapi/runtime": 1.4.5
"@emnapi/runtime": 1.7.1
optional: true
"@img/sharp-win32-arm64@0.34.3":
"@img/sharp-win32-arm64@0.34.5":
optional: true
"@img/sharp-win32-ia32@0.34.3":
"@img/sharp-win32-ia32@0.34.5":
optional: true
"@img/sharp-win32-x64@0.34.3":
"@img/sharp-win32-x64@0.34.5":
optional: true
"@internationalized/date@3.8.2":
@@ -9877,7 +9918,7 @@ snapshots:
"@tybys/wasm-util": 0.10.0
optional: true
"@next/env@15.5.7": {}
"@next/env@15.5.9": {}
"@next/eslint-plugin-next@15.5.3":
dependencies:
@@ -10684,7 +10725,7 @@ snapshots:
"@sentry/core@8.55.0": {}
"@sentry/nextjs@10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)":
"@sentry/nextjs@10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)":
dependencies:
"@opentelemetry/api": 1.9.0
"@opentelemetry/semantic-conventions": 1.37.0
@@ -10698,7 +10739,7 @@ snapshots:
"@sentry/vercel-edge": 10.11.0
"@sentry/webpack-plugin": 4.3.0(webpack@5.101.3)
chalk: 3.0.0
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
resolve: 1.22.8
rollup: 4.50.1
stacktrace-parser: 0.1.11
@@ -10829,7 +10870,7 @@ snapshots:
"@tanstack/query-core": 5.85.9
react: 18.3.1
"@tsconfig/node10@1.0.11":
"@tsconfig/node10@1.0.12":
optional: true
"@tsconfig/node12@1.0.11":
@@ -10941,9 +10982,9 @@ snapshots:
dependencies:
undici-types: 7.10.0
"@types/node@24.3.1":
"@types/node@25.0.2":
dependencies:
undici-types: 7.10.0
undici-types: 7.16.0
"@types/parse-json@4.0.2": {}
@@ -12127,6 +12168,8 @@ snapshots:
caniuse-lite@1.0.30001734: {}
caniuse-lite@1.0.30001760: {}
ccount@2.0.1: {}
chalk@3.0.0:
@@ -12205,18 +12248,6 @@ snapshots:
color-name@1.1.4: {}
color-string@1.9.1:
dependencies:
color-name: 1.1.4
simple-swizzle: 0.2.2
optional: true
color@4.2.3:
dependencies:
color-convert: 2.0.1
color-string: 1.9.1
optional: true
colorette@1.4.0: {}
combined-stream@1.0.8:
@@ -12335,7 +12366,7 @@ snapshots:
detect-libc@1.0.3:
optional: true
detect-libc@2.0.4:
detect-libc@2.1.2:
optional: true
detect-newline@3.1.0: {}
@@ -12405,10 +12436,10 @@ snapshots:
engine.io-parser@5.2.3: {}
enhanced-resolve@5.18.3:
enhanced-resolve@5.18.4:
dependencies:
graceful-fs: 4.2.11
tapable: 2.2.3
tapable: 2.3.0
err-code@3.0.1: {}
@@ -13147,9 +13178,6 @@ snapshots:
is-arrayish@0.2.1: {}
is-arrayish@0.3.2:
optional: true
is-async-function@2.1.1:
dependencies:
async-function: 1.0.0
@@ -13629,7 +13657,7 @@ snapshots:
jest-worker@27.5.1:
dependencies:
"@types/node": 24.3.1
"@types/node": 25.0.2
merge-stream: 2.0.0
supports-color: 8.1.1
@@ -13738,7 +13766,7 @@ snapshots:
lines-and-columns@1.2.4: {}
loader-runner@4.3.0: {}
loader-runner@4.3.1: {}
locate-path@5.0.0:
dependencies:
@@ -14093,13 +14121,13 @@ snapshots:
neo-async@2.6.2: {}
next-auth@4.24.11(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
next-auth@4.24.11(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
dependencies:
"@babel/runtime": 7.28.2
"@panva/hkdf": 1.2.1
cookie: 0.7.2
jose: 4.15.9
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
oauth: 0.9.15
openid-client: 5.7.1
preact: 10.27.0
@@ -14113,11 +14141,11 @@ snapshots:
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0):
next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0):
dependencies:
"@next/env": 15.5.7
"@next/env": 15.5.9
"@swc/helpers": 0.5.15
caniuse-lite: 1.0.30001734
caniuse-lite: 1.0.30001760
postcss: 8.4.31
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
@@ -14133,7 +14161,7 @@ snapshots:
"@next/swc-win32-x64-msvc": 15.5.7
"@opentelemetry/api": 1.9.0
sass: 1.90.0
sharp: 0.34.3
sharp: 0.34.5
transitivePeerDependencies:
- "@babel/core"
- babel-plugin-macros
@@ -14159,12 +14187,12 @@ snapshots:
dependencies:
path-key: 3.1.1
nuqs@2.4.3(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1):
nuqs@2.4.3(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1):
dependencies:
mitt: 3.0.1
react: 18.3.1
optionalDependencies:
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
oauth@0.9.15: {}
@@ -14734,7 +14762,7 @@ snapshots:
dependencies:
loose-envify: 1.4.0
schema-utils@4.3.2:
schema-utils@4.3.3:
dependencies:
"@types/json-schema": 7.0.15
ajv: 8.17.1
@@ -14749,6 +14777,9 @@ snapshots:
semver@7.7.2: {}
semver@7.7.3:
optional: true
serialize-javascript@6.0.2:
dependencies:
randombytes: 2.1.0
@@ -14775,34 +14806,36 @@ snapshots:
es-errors: 1.3.0
es-object-atoms: 1.1.1
sharp@0.34.3:
sharp@0.34.5:
dependencies:
color: 4.2.3
detect-libc: 2.0.4
semver: 7.7.2
"@img/colour": 1.0.0
detect-libc: 2.1.2
semver: 7.7.3
optionalDependencies:
"@img/sharp-darwin-arm64": 0.34.3
"@img/sharp-darwin-x64": 0.34.3
"@img/sharp-libvips-darwin-arm64": 1.2.0
"@img/sharp-libvips-darwin-x64": 1.2.0
"@img/sharp-libvips-linux-arm": 1.2.0
"@img/sharp-libvips-linux-arm64": 1.2.0
"@img/sharp-libvips-linux-ppc64": 1.2.0
"@img/sharp-libvips-linux-s390x": 1.2.0
"@img/sharp-libvips-linux-x64": 1.2.0
"@img/sharp-libvips-linuxmusl-arm64": 1.2.0
"@img/sharp-libvips-linuxmusl-x64": 1.2.0
"@img/sharp-linux-arm": 0.34.3
"@img/sharp-linux-arm64": 0.34.3
"@img/sharp-linux-ppc64": 0.34.3
"@img/sharp-linux-s390x": 0.34.3
"@img/sharp-linux-x64": 0.34.3
"@img/sharp-linuxmusl-arm64": 0.34.3
"@img/sharp-linuxmusl-x64": 0.34.3
"@img/sharp-wasm32": 0.34.3
"@img/sharp-win32-arm64": 0.34.3
"@img/sharp-win32-ia32": 0.34.3
"@img/sharp-win32-x64": 0.34.3
"@img/sharp-darwin-arm64": 0.34.5
"@img/sharp-darwin-x64": 0.34.5
"@img/sharp-libvips-darwin-arm64": 1.2.4
"@img/sharp-libvips-darwin-x64": 1.2.4
"@img/sharp-libvips-linux-arm": 1.2.4
"@img/sharp-libvips-linux-arm64": 1.2.4
"@img/sharp-libvips-linux-ppc64": 1.2.4
"@img/sharp-libvips-linux-riscv64": 1.2.4
"@img/sharp-libvips-linux-s390x": 1.2.4
"@img/sharp-libvips-linux-x64": 1.2.4
"@img/sharp-libvips-linuxmusl-arm64": 1.2.4
"@img/sharp-libvips-linuxmusl-x64": 1.2.4
"@img/sharp-linux-arm": 0.34.5
"@img/sharp-linux-arm64": 0.34.5
"@img/sharp-linux-ppc64": 0.34.5
"@img/sharp-linux-riscv64": 0.34.5
"@img/sharp-linux-s390x": 0.34.5
"@img/sharp-linux-x64": 0.34.5
"@img/sharp-linuxmusl-arm64": 0.34.5
"@img/sharp-linuxmusl-x64": 0.34.5
"@img/sharp-wasm32": 0.34.5
"@img/sharp-win32-arm64": 0.34.5
"@img/sharp-win32-ia32": 0.34.5
"@img/sharp-win32-x64": 0.34.5
optional: true
shebang-command@2.0.0:
@@ -14857,11 +14890,6 @@ snapshots:
transitivePeerDependencies:
- supports-color
simple-swizzle@0.2.2:
dependencies:
is-arrayish: 0.3.2
optional: true
slash@3.0.0: {}
socket.io-client@4.7.2:
@@ -15086,18 +15114,18 @@ snapshots:
transitivePeerDependencies:
- ts-node
tapable@2.2.3: {}
tapable@2.3.0: {}
terser-webpack-plugin@5.3.14(webpack@5.101.3):
terser-webpack-plugin@5.3.16(webpack@5.101.3):
dependencies:
"@jridgewell/trace-mapping": 0.3.31
jest-worker: 27.5.1
schema-utils: 4.3.2
schema-utils: 4.3.3
serialize-javascript: 6.0.2
terser: 5.44.0
terser: 5.44.1
webpack: 5.101.3
terser@5.44.0:
terser@5.44.1:
dependencies:
"@jridgewell/source-map": 0.3.11
acorn: 8.15.0
@@ -15164,7 +15192,7 @@ snapshots:
ts-node@10.9.1(@types/node@24.2.1)(typescript@5.9.2):
dependencies:
"@cspotcode/source-map-support": 0.8.1
"@tsconfig/node10": 1.0.11
"@tsconfig/node10": 1.0.12
"@tsconfig/node12": 1.0.11
"@tsconfig/node14": 1.0.3
"@tsconfig/node16": 1.0.4
@@ -15274,6 +15302,8 @@ snapshots:
undici-types@7.10.0: {}
undici-types@7.16.0: {}
unified@11.0.5:
dependencies:
"@types/unist": 3.0.3
@@ -15427,19 +15457,19 @@ snapshots:
acorn-import-phases: 1.0.4(acorn@8.15.0)
browserslist: 4.25.2
chrome-trace-event: 1.0.4
enhanced-resolve: 5.18.3
enhanced-resolve: 5.18.4
es-module-lexer: 1.7.0
eslint-scope: 5.1.1
events: 3.3.0
glob-to-regexp: 0.4.1
graceful-fs: 4.2.11
json-parse-even-better-errors: 2.3.1
loader-runner: 4.3.0
loader-runner: 4.3.1
mime-types: 2.1.35
neo-async: 2.6.2
schema-utils: 4.3.2
tapable: 2.2.3
terser-webpack-plugin: 5.3.14(webpack@5.101.3)
schema-utils: 4.3.3
tapable: 2.3.0
terser-webpack-plugin: 5.3.16(webpack@5.101.3)
watchpack: 2.4.4
webpack-sources: 3.3.3
transitivePeerDependencies: