mirror of
https://github.com/Monadical-SAS/reflector.git
synced 2026-03-26 00:46:46 +00:00
Compare commits — 11 commits:

- ea89fa5261
- 1f98790e7b
- 7b8d190c52
- f19113a3cf
- e2ba502697
- 74b9b97453
- 9e37d60b3f
- 55222ecc47
- 41e7b3e84f
- e5712a4168
- a76f114378
CHANGELOG.md (17 lines changed)
```diff
@@ -1,5 +1,22 @@
 # Changelog
 
+## [0.41.0](https://github.com/GreyhavenHQ/reflector/compare/v0.40.0...v0.41.0) (2026-03-25)
+
+
+### Features
+
+* add auto-generated captions, speaker-colored progress bar with sync controls, and speaker tooltip to cloud video player ([#926](https://github.com/GreyhavenHQ/reflector/issues/926)) ([f19113a](https://github.com/GreyhavenHQ/reflector/commit/f19113a3cfa27797a70b9496bfcf1baff9d89f0d))
+* send email in share transcript and add email sending in room ([#924](https://github.com/GreyhavenHQ/reflector/issues/924)) ([e2ba502](https://github.com/GreyhavenHQ/reflector/commit/e2ba502697ce331c4d87fb019648fcbe4e7cca73))
+* zulip dag monitor for failed runs ([#928](https://github.com/GreyhavenHQ/reflector/issues/928)) ([1f98790](https://github.com/GreyhavenHQ/reflector/commit/1f98790e7bc58013690ec81aefa051da5e36e93e))
+
+## [0.40.0](https://github.com/GreyhavenHQ/reflector/compare/v0.39.0...v0.40.0) (2026-03-20)
+
+
+### Features
+
+* allow participants to ask for email transcript ([#923](https://github.com/GreyhavenHQ/reflector/issues/923)) ([55222ec](https://github.com/GreyhavenHQ/reflector/commit/55222ecc4736f99ad461f03a006c8d97b5876142))
+* download files, show cloud video, soft deletion with no reprocessing ([#920](https://github.com/GreyhavenHQ/reflector/issues/920)) ([a76f114](https://github.com/GreyhavenHQ/reflector/commit/a76f1143783d3cf137a8847a851b72302e04445b))
+
 ## [0.39.0](https://github.com/GreyhavenHQ/reflector/compare/v0.38.2...v0.39.0) (2026-03-18)
```
CLAUDE.md (13 lines changed)
````diff
@@ -41,14 +41,14 @@ uv run celery -A reflector.worker.app beat
 
 **Testing:**
 ```bash
-# Run all tests with coverage
-uv run pytest
+# Run all tests with coverage (requires Redis on localhost)
+REDIS_HOST=localhost REDIS_PORT=6379 uv run pytest
 
 # Run specific test file
-uv run pytest tests/test_transcripts.py
+REDIS_HOST=localhost REDIS_PORT=6379 uv run pytest tests/test_transcripts.py
 
 # Run tests with verbose output
-uv run pytest -v
+REDIS_HOST=localhost REDIS_PORT=6379 uv run pytest -v
 ```
 
 **Process Audio Files:**
````
```diff
@@ -192,3 +192,8 @@ Modal.com integration for scalable ML processing:
 ## Pipeline/worker related info
 
 If you need to do any worker/pipeline related work, search for "Pipeline" classes and their "create" or "build" methods to find the main processor sequence. Look for task orchestration patterns (like "chord", "group", or "chain") to identify the post-processing flow with parallel execution chains. This will give you an abstract view of how the processing pipeline is organized.
+
+## Code Style
+
+- Always put imports at the top of the file. Let ruff/pre-commit handle sorting and formatting of imports.
+- Exception: In Hatchet pipeline task functions, DB controller imports (e.g., `transcripts_controller`, `meetings_controller`) stay as deferred/inline imports inside `fresh_db_connection()` blocks — this is intentional to avoid sharing DB connections across forked processes. Non-DB imports (utilities, services) should still go at the top of the file.
```
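The orchestration patterns named in the Pipeline/worker note above (chain, group, chord) can be illustrated with a small, self-contained sketch. This is a plain-Python stand-in for Celery/Hatchet-style primitives, not code from this repository; every name below is invented for illustration:

```python
# Minimal stand-in for Celery-style orchestration primitives:
# "group" runs tasks over the same input (in real workers, in parallel;
# here sequentially for simplicity), "chord" feeds the group's results
# into a callback, and "chain" pipes each step's output into the next.

def group(*tasks):
    # Collect each task's result for a shared input.
    return lambda arg: [task(arg) for task in tasks]

def chord(header, callback):
    # Run the header group, then pass all its results to the callback.
    return lambda arg: callback(group(*header)(arg))

def chain(*steps):
    # Feed each step's output into the next step.
    def run(arg):
        for step in steps:
            arg = step(arg)
        return arg
    return run

# Hypothetical post-processing flow: fan out two transcriptions, then merge.
transcribe_a = lambda audio: f"{audio}:a"
transcribe_b = lambda audio: f"{audio}:b"
summarize = lambda results: " + ".join(results)

pipeline = chain(chord([transcribe_a, transcribe_b], summarize), str.upper)
print(pipeline("rec"))  # REC:A + REC:B
```

The real post-processing flow in this repo uses the same shape: a fan-out of per-track work whose collected results feed a merge/summarize step.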
```diff
@@ -36,7 +36,7 @@ services:
     restart: unless-stopped
     ports:
       - "127.0.0.1:1250:1250"
-      - "51000-51100:51000-51100/udp"
+      - "40000-40100:40000-40100/udp"
     env_file:
       - ./server/.env
     environment:
@@ -50,7 +50,7 @@ services:
       # HF_TOKEN needed for in-process pyannote diarization (--cpu mode)
       HF_TOKEN: ${HF_TOKEN:-}
       # WebRTC: fixed UDP port range for ICE candidates (mapped above)
-      WEBRTC_PORT_RANGE: "51000-51100"
+      WEBRTC_PORT_RANGE: "40000-40100"
       # Hatchet workflow engine (always-on for processing pipelines)
       HATCHET_CLIENT_SERVER_URL: ${HATCHET_CLIENT_SERVER_URL:-http://hatchet:8888}
       HATCHET_CLIENT_HOST_PORT: ${HATCHET_CLIENT_HOST_PORT:-hatchet:7077}
```
```diff
@@ -308,6 +308,24 @@ services:
       - web
       - server
 
+  # ===========================================================
+  # Mailpit — local SMTP sink for testing email transcript notifications
+  # Start with: --profile mailpit
+  # Web UI at http://localhost:8025
+  # ===========================================================
+
+  mailpit:
+    image: axllent/mailpit:latest
+    profiles: [mailpit]
+    restart: unless-stopped
+    ports:
+      - "127.0.0.1:8025:8025" # Web UI
+    healthcheck:
+      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8025/api/v1/messages"]
+      interval: 10s
+      timeout: 3s
+      retries: 5
+
   # ===========================================================
   # Hatchet workflow engine + workers
   # Required for all processing pipelines (file, live, Daily.co multitrack).
```
```diff
@@ -199,6 +199,11 @@ Without `--caddy` or `--domain`, no ports are exposed. Point your own reverse pr
 | `DAILY_SUBDOMAIN` | Daily.co subdomain | *(unset)* |
 | `DAILYCO_STORAGE_AWS_ACCESS_KEY_ID` | AWS access key for reading Daily's recording bucket | *(unset)* |
 | `DAILYCO_STORAGE_AWS_SECRET_ACCESS_KEY` | AWS secret key for reading Daily's recording bucket | *(unset)* |
+| `ZULIP_REALM` | Zulip server hostname (e.g. `zulip.example.com`) | *(unset)* |
+| `ZULIP_API_KEY` | Zulip bot API key | *(unset)* |
+| `ZULIP_BOT_EMAIL` | Zulip bot email address | *(unset)* |
+| `ZULIP_DAG_STREAM` | Zulip stream for pipeline failure alerts | *(unset)* |
+| `ZULIP_DAG_TOPIC` | Zulip topic for pipeline failure alerts | *(unset)* |
 | `HATCHET_CLIENT_TOKEN` | Hatchet API token (auto-generated) | *(unset)* |
 | `HATCHET_CLIENT_SERVER_URL` | Hatchet server URL | Auto-set when Daily.co configured |
 | `HATCHET_CLIENT_HOST_PORT` | Hatchet gRPC address | Auto-set when Daily.co configured |
```
```diff
@@ -13,14 +13,25 @@
 # Optional:
 #   LLM_MODEL — Model name (default: qwen2.5:14b)
 #
+# Flags:
+#   --build — Rebuild backend Docker images (server, workers, test-runner)
+#
 # Usage:
 #   export LLM_URL="https://api.openai.com/v1"
 #   export LLM_API_KEY="sk-..."
 #   export HF_TOKEN="hf_..."
 #   ./scripts/run-integration-tests.sh
+#   ./scripts/run-integration-tests.sh --build   # rebuild backend images
 #
 set -euo pipefail
 
+BUILD_FLAG=""
+for arg in "$@"; do
+  case "$arg" in
+    --build) BUILD_FLAG="--build" ;;
+  esac
+done
+
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
 COMPOSE_DIR="$REPO_ROOT/server/tests"
@@ -66,7 +77,7 @@ trap cleanup EXIT
 
 # ── Step 1: Build and start infrastructure ──────────────────────────────────
 info "Building and starting infrastructure services..."
-$COMPOSE up -d --build postgres redis garage hatchet mock-daily
+$COMPOSE up -d --build postgres redis garage hatchet mock-daily mailpit
 
 # ── Step 2: Set up Garage (S3 bucket + keys) ───────────────────────────────
 wait_for "Garage" "$COMPOSE exec -T garage /garage stats" 60
@@ -116,7 +127,7 @@ ok "Hatchet token generated"
 
 # ── Step 4: Start backend services ──────────────────────────────────────────
 info "Starting backend services..."
-$COMPOSE up -d server worker hatchet-worker-cpu hatchet-worker-llm test-runner
+$COMPOSE up -d $BUILD_FLAG server worker hatchet-worker-cpu hatchet-worker-llm test-runner
 
 # ── Step 5: Wait for server + run migrations ────────────────────────────────
 wait_for "Server" "$COMPOSE exec -T test-runner curl -sf http://server:1250/health" 60
```
```diff
@@ -419,3 +419,18 @@ User-room broadcasts to `user:{user_id}`:
 - `TRANSCRIPT_STATUS`
 - `TRANSCRIPT_FINAL_TITLE`
 - `TRANSCRIPT_DURATION`
+
+## Failed Runs Monitor (Hatchet Cron)
+
+A `FailedRunsMonitor` Hatchet cron workflow runs hourly (`0 * * * *`) and checks for failed pipeline runs
+(DiarizationPipeline, FilePipeline, LivePostProcessingPipeline) in the last hour. For each failed run,
+it renders a DAG status overview and posts it to Zulip.
+
+**Required env vars** (all must be set to enable):
+- `ZULIP_REALM` — Zulip server hostname
+- `ZULIP_API_KEY` — Zulip bot API key
+- `ZULIP_BOT_EMAIL` — Zulip bot email
+- `ZULIP_DAG_STREAM` — Zulip stream for alerts
+- `ZULIP_DAG_TOPIC` — Zulip topic for alerts
+
+If any of these are unset, the monitor workflow is not registered with the Hatchet worker.
```
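The monitor's core query reduces to a time-window filter over run records. A minimal sketch of that logic; the record shape and function name here are assumptions for illustration, not the actual `FailedRunsMonitor` code:

```python
from datetime import datetime, timedelta, timezone

# Pipelines the monitor cares about, per the docs above.
MONITORED = {"DiarizationPipeline", "FilePipeline", "LivePostProcessingPipeline"}

def failed_runs_in_last_hour(runs, now=None):
    """Filter run records to failed, monitored pipelines from the past hour.

    `runs` is assumed to be dicts with 'workflow', 'status', 'finished_at'.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=1)
    return [
        r for r in runs
        if r["workflow"] in MONITORED
        and r["status"] == "FAILED"
        and r["finished_at"] >= cutoff
    ]

now = datetime(2026, 3, 25, 12, 0, tzinfo=timezone.utc)
runs = [
    {"workflow": "FilePipeline", "status": "FAILED",
     "finished_at": now - timedelta(minutes=30)},
    {"workflow": "FilePipeline", "status": "FAILED",
     "finished_at": now - timedelta(hours=2)},    # outside the window
    {"workflow": "OtherPipeline", "status": "FAILED",
     "finished_at": now - timedelta(minutes=5)},  # not a monitored pipeline
]
print(len(failed_runs_in_last_hour(runs, now)))  # 1
```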
@@ -0,0 +1,47 @@ (new file)

```python
"""add soft delete fields to transcript and recording

Revision ID: 501c73a6b0d5
Revises: e1f093f7f124
Create Date: 2026-03-19 00:00:00.000000

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op

revision: str = "501c73a6b0d5"
down_revision: Union[str, None] = "e1f093f7f124"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.add_column(
        "transcript",
        sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
    )
    op.add_column(
        "recording",
        sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
    )
    op.create_index(
        "idx_transcript_not_deleted",
        "transcript",
        ["id"],
        postgresql_where=sa.text("deleted_at IS NULL"),
    )
    op.create_index(
        "idx_recording_not_deleted",
        "recording",
        ["id"],
        postgresql_where=sa.text("deleted_at IS NULL"),
    )


def downgrade() -> None:
    op.drop_index("idx_recording_not_deleted", table_name="recording")
    op.drop_index("idx_transcript_not_deleted", table_name="transcript")
    op.drop_column("recording", "deleted_at")
    op.drop_column("transcript", "deleted_at")
```
@@ -0,0 +1,29 @@ (new file)

```python
"""add email_recipients to meeting

Revision ID: a2b3c4d5e6f7
Revises: 501c73a6b0d5
Create Date: 2026-03-20 00:00:00.000000

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects.postgresql import JSONB

revision: str = "a2b3c4d5e6f7"
down_revision: Union[str, None] = "501c73a6b0d5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.add_column(
        "meeting",
        sa.Column("email_recipients", JSONB, nullable=True),
    )


def downgrade() -> None:
    op.drop_column("meeting", "email_recipients")
```
@@ -0,0 +1,28 @@ (new file)

```python
"""add email_transcript_to to room

Revision ID: b4c7e8f9a012
Revises: a2b3c4d5e6f7
Create Date: 2026-03-24 00:00:00.000000

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op

revision: str = "b4c7e8f9a012"
down_revision: Union[str, None] = "a2b3c4d5e6f7"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.add_column(
        "room",
        sa.Column("email_transcript_to", sa.String(), nullable=True),
    )


def downgrade() -> None:
    op.drop_column("room", "email_transcript_to")
```
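The three migrations above form a linear Alembic chain: `e1f093f7f124` → `501c73a6b0d5` → `a2b3c4d5e6f7` → `b4c7e8f9a012`. Given only the `(revision, down_revision)` pairs, the chain can be reconstructed and sanity-checked; the helper below is illustrative, not an Alembic API:

```python
def order_revisions(pairs):
    """Order Alembic (revision, down_revision) pairs into a linear chain.

    Raises ValueError if the history is branched or broken.
    """
    by_parent = {down: rev for rev, down in pairs}
    if len(by_parent) != len(pairs):
        raise ValueError("branched history: two revisions share a parent")
    revisions = {rev for rev, _ in pairs}
    parents = {down for _, down in pairs}
    roots = parents - revisions  # the one parent not defined in this set
    if len(roots) != 1:
        raise ValueError("chain is broken or has multiple roots")
    chain, cur = [], roots.pop()
    while cur in by_parent:
        cur = by_parent[cur]
        chain.append(cur)
    return chain

# The pairs from the three migration files above.
pairs = [
    ("501c73a6b0d5", "e1f093f7f124"),
    ("a2b3c4d5e6f7", "501c73a6b0d5"),
    ("b4c7e8f9a012", "a2b3c4d5e6f7"),
]
print(order_revisions(pairs))
# ['501c73a6b0d5', 'a2b3c4d5e6f7', 'b4c7e8f9a012']
```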
```diff
@@ -40,6 +40,8 @@ dependencies = [
     "icalendar>=6.0.0",
     "hatchet-sdk==1.22.16",
     "pydantic>=2.12.5",
+    "aiosmtplib>=3.0.0",
+    "email-validator>=2.0.0",
 ]
 
 [dependency-groups]
```
```diff
@@ -13,18 +13,21 @@ from reflector.events import subscribers_shutdown, subscribers_startup
 from reflector.logger import logger
 from reflector.metrics import metrics_init
 from reflector.settings import settings
 from reflector.views.config import router as config_router
+from reflector.views.daily import router as daily_router
 from reflector.views.meetings import router as meetings_router
 from reflector.views.rooms import router as rooms_router
 from reflector.views.rtc_offer import router as rtc_offer_router
 from reflector.views.transcripts import router as transcripts_router
 from reflector.views.transcripts_audio import router as transcripts_audio_router
+from reflector.views.transcripts_download import router as transcripts_download_router
 from reflector.views.transcripts_participants import (
     router as transcripts_participants_router,
 )
 from reflector.views.transcripts_process import router as transcripts_process_router
 from reflector.views.transcripts_speaker import router as transcripts_speaker_router
 from reflector.views.transcripts_upload import router as transcripts_upload_router
+from reflector.views.transcripts_video import router as transcripts_video_router
 from reflector.views.transcripts_webrtc import router as transcripts_webrtc_router
 from reflector.views.transcripts_websocket import router as transcripts_websocket_router
 from reflector.views.user import router as user_router
@@ -97,12 +100,15 @@ app.include_router(transcripts_audio_router, prefix="/v1")
 app.include_router(transcripts_participants_router, prefix="/v1")
 app.include_router(transcripts_speaker_router, prefix="/v1")
 app.include_router(transcripts_upload_router, prefix="/v1")
+app.include_router(transcripts_download_router, prefix="/v1")
+app.include_router(transcripts_video_router, prefix="/v1")
 app.include_router(transcripts_websocket_router, prefix="/v1")
 app.include_router(transcripts_webrtc_router, prefix="/v1")
 app.include_router(transcripts_process_router, prefix="/v1")
 app.include_router(user_router, prefix="/v1")
 app.include_router(user_api_keys_router, prefix="/v1")
 app.include_router(user_ws_router, prefix="/v1")
 app.include_router(config_router, prefix="/v1")
 app.include_router(zulip_router, prefix="/v1")
 app.include_router(whereby_router, prefix="/v1")
+app.include_router(daily_router, prefix="/v1/daily")
```
```diff
@@ -1,3 +1,4 @@
+from contextlib import asynccontextmanager
 from datetime import datetime, timedelta
 from typing import Any, Literal
 
@@ -66,6 +67,8 @@ meetings = sa.Table(
     # Daily.co composed video (Brady Bunch grid layout) - Daily.co only, not Whereby
     sa.Column("daily_composed_video_s3_key", sa.String, nullable=True),
     sa.Column("daily_composed_video_duration", sa.Integer, nullable=True),
+    # Email recipients for transcript notification
+    sa.Column("email_recipients", JSONB, nullable=True),
     sa.Index("idx_meeting_room_id", "room_id"),
     sa.Index("idx_meeting_calendar_event", "calendar_event_id"),
 )
@@ -116,6 +119,8 @@ class Meeting(BaseModel):
     # Daily.co composed video (Brady Bunch grid) - Daily.co only
     daily_composed_video_s3_key: str | None = None
     daily_composed_video_duration: int | None = None
+    # Email recipients for transcript notification
+    email_recipients: list[str] | None = None
 
 
 class MeetingController:
@@ -388,6 +393,24 @@ class MeetingController:
         # If was_null=False, the WHERE clause prevented the update
         return was_null
 
+    @asynccontextmanager
+    async def transaction(self):
+        """A context manager for database transaction."""
+        async with get_database().transaction(isolation="serializable"):
+            yield
+
+    async def add_email_recipient(self, meeting_id: str, email: str) -> list[str]:
+        """Add an email to the meeting's email_recipients list (no duplicates)."""
+        async with self.transaction():
+            meeting = await self.get_by_id(meeting_id)
+            if not meeting:
+                raise ValueError(f"Meeting {meeting_id} not found")
+            current = meeting.email_recipients or []
+            if email not in current:
+                current.append(email)
+                await self.update_meeting(meeting_id, email_recipients=current)
+            return current
+
     async def increment_num_clients(self, meeting_id: str) -> None:
         """Atomically increment participant count."""
         query = (
```
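The `add_email_recipient` read-modify-write above is wrapped in a serializable transaction for a reason: without one, two concurrent appends can both read the same snapshot, and one write silently overwrites the other. A deterministic simulation of that lost-update hazard, using a toy dict store with invented names:

```python
# Simulated table row: a JSON list column, as in meeting.email_recipients.
db = {"email_recipients": []}

def read():
    # Each "transaction" reads its own copy of the current value.
    return list(db["email_recipients"])

def write(value):
    db["email_recipients"] = value

# Two unguarded read-modify-write sequences, interleaved: both read [] first.
a = read()
b = read()
a.append("alice@example.com")
write(a)
b.append("bob@example.com")
write(b)

print(db["email_recipients"])  # ['bob@example.com']  (alice's write was lost)
```

A serializable transaction forces the second writer to see the first one's result (or retry), which is what makes the dedup-append in the diff safe under concurrency.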
```diff
@@ -1,4 +1,4 @@
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Literal
 
 import sqlalchemy as sa
@@ -24,6 +24,7 @@ recordings = sa.Table(
     ),
     sa.Column("meeting_id", sa.String),
     sa.Column("track_keys", sa.JSON, nullable=True),
+    sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
     sa.Index("idx_recording_meeting_id", "meeting_id"),
 )
 
@@ -40,6 +41,7 @@ class Recording(BaseModel):
     # track_keys can be empty list [] if recording finished but no audio was captured (silence/muted)
     # None means not a multitrack recording, [] means multitrack with no tracks
     track_keys: list[str] | None = None
+    deleted_at: datetime | None = None
 
     @property
     def is_multitrack(self) -> bool:
@@ -69,7 +71,11 @@ class RecordingController:
         return Recording(**result) if result else None
 
     async def remove_by_id(self, id: str) -> None:
-        query = recordings.delete().where(recordings.c.id == id)
+        query = (
+            recordings.update()
+            .where(recordings.c.id == id)
+            .values(deleted_at=datetime.now(timezone.utc))
+        )
         await get_database().execute(query)
 
     async def set_meeting_id(
@@ -114,6 +120,7 @@ class RecordingController:
         .where(
             recordings.c.bucket_name == bucket_name,
             recordings.c.track_keys.isnot(None),
+            recordings.c.deleted_at.is_(None),
             or_(
                 transcripts.c.id.is_(None),
                 transcripts.c.status == "error",
```
```diff
@@ -63,6 +63,7 @@ rooms = sqlalchemy.Table(
         nullable=False,
         server_default=sqlalchemy.sql.false(),
     ),
+    sqlalchemy.Column("email_transcript_to", sqlalchemy.String, nullable=True),
     sqlalchemy.Index("idx_room_is_shared", "is_shared"),
     sqlalchemy.Index("idx_room_ics_enabled", "ics_enabled"),
 )
@@ -92,6 +93,7 @@ class Room(BaseModel):
     ics_last_etag: str | None = None
     platform: Platform = Field(default_factory=lambda: settings.DEFAULT_VIDEO_PLATFORM)
     skip_consent: bool = False
+    email_transcript_to: str | None = None
 
 
 class RoomController:
@@ -147,6 +149,7 @@ class RoomController:
         ics_enabled: bool = False,
         platform: Platform = settings.DEFAULT_VIDEO_PLATFORM,
         skip_consent: bool = False,
+        email_transcript_to: str | None = None,
     ):
         """
         Add a new room
@@ -172,6 +175,7 @@ class RoomController:
             "ics_enabled": ics_enabled,
             "platform": platform,
             "skip_consent": skip_consent,
+            "email_transcript_to": email_transcript_to,
         }
 
         room = Room(**room_data)
```
```diff
@@ -387,6 +387,8 @@ class SearchController:
             transcripts.join(rooms, transcripts.c.room_id == rooms.c.id, isouter=True)
         )
 
+        base_query = base_query.where(transcripts.c.deleted_at.is_(None))
+
         if params.query_text is not None:
             # because already initialized based on params.query_text presence above
             assert search_query is not None
```
```diff
@@ -91,6 +91,7 @@ transcripts = sqlalchemy.Table(
     sqlalchemy.Column("webvtt", sqlalchemy.Text),
     # Hatchet workflow run ID for resumption of failed workflows
     sqlalchemy.Column("workflow_run_id", sqlalchemy.String),
+    sqlalchemy.Column("deleted_at", sqlalchemy.DateTime(timezone=True), nullable=True),
     sqlalchemy.Column(
         "change_seq",
         sqlalchemy.BigInteger,
@@ -238,6 +239,7 @@ class Transcript(BaseModel):
     webvtt: str | None = None
     workflow_run_id: str | None = None  # Hatchet workflow run ID for resumption
     change_seq: int | None = None
+    deleted_at: datetime | None = None
 
     @field_serializer("created_at", when_used="json")
     def serialize_datetime(self, dt: datetime) -> str:
@@ -418,6 +420,8 @@ class TranscriptController:
             rooms, transcripts.c.room_id == rooms.c.id, isouter=True
         )
 
+        query = query.where(transcripts.c.deleted_at.is_(None))
+
         if user_id:
             query = query.where(
                 or_(transcripts.c.user_id == user_id, rooms.c.is_shared)
@@ -500,7 +504,10 @@ class TranscriptController:
         """
         Get transcripts by room_id (direct access without joins)
         """
-        query = transcripts.select().where(transcripts.c.room_id == room_id)
+        query = transcripts.select().where(
+            transcripts.c.room_id == room_id,
+            transcripts.c.deleted_at.is_(None),
+        )
         if "user_id" in kwargs:
             query = query.where(transcripts.c.user_id == kwargs["user_id"])
         if "order_by" in kwargs:
@@ -531,8 +538,11 @@ class TranscriptController:
         if not result:
             raise HTTPException(status_code=404, detail="Transcript not found")
 
-        # if the transcript is anonymous, share mode is not checked
         transcript = Transcript(**result)
+        if transcript.deleted_at is not None:
+            raise HTTPException(status_code=404, detail="Transcript not found")
+
+        # if the transcript is anonymous, share mode is not checked
         if transcript.user_id is None:
             return transcript
 
@@ -632,56 +642,49 @@ class TranscriptController:
         user_id: str | None = None,
     ) -> None:
         """
-        Remove a transcript by id
+        Soft-delete a transcript by id.
+
+        Sets deleted_at on the transcript and its associated recording.
+        All files (S3 and local) are preserved for later retrieval.
         """
         transcript = await self.get_by_id(transcript_id)
         if not transcript:
             return
         if user_id is not None and transcript.user_id != user_id:
             return
-        if transcript.audio_location == "storage" and not transcript.audio_deleted:
-            try:
-                await get_transcripts_storage().delete_file(
-                    transcript.storage_audio_path
-                )
-            except Exception as e:
-                logger.warning(
-                    "Failed to delete transcript audio from storage",
-                    exc_info=e,
-                    transcript_id=transcript.id,
-                )
-        transcript.unlink()
+        if transcript.deleted_at is not None:
+            return
+
+        now = datetime.now(timezone.utc)
+
+        # Soft-delete the associated recording (keeps S3 files intact)
         if transcript.recording_id:
             try:
-                recording = await recordings_controller.get_by_id(
-                    transcript.recording_id
-                )
-                if recording:
-                    try:
-                        await get_transcripts_storage().delete_file(
-                            recording.object_key, bucket=recording.bucket_name
-                        )
-                    except Exception as e:
-                        logger.warning(
-                            "Failed to delete recording object from S3",
-                            exc_info=e,
-                            recording_id=transcript.recording_id,
-                        )
-                    await recordings_controller.remove_by_id(transcript.recording_id)
+                await recordings_controller.remove_by_id(transcript.recording_id)
             except Exception as e:
                 logger.warning(
-                    "Failed to delete recording row",
+                    "Failed to soft-delete recording",
                     exc_info=e,
                     recording_id=transcript.recording_id,
                 )
-        query = transcripts.delete().where(transcripts.c.id == transcript_id)
+
+        # Soft-delete the transcript (keeps all files intact)
+        query = (
+            transcripts.update()
+            .where(transcripts.c.id == transcript_id)
+            .values(deleted_at=now)
+        )
         await get_database().execute(query)
 
     async def remove_by_recording_id(self, recording_id: str):
         """
-        Remove a transcript by recording_id
+        Soft-delete a transcript by recording_id
         """
-        query = transcripts.delete().where(transcripts.c.recording_id == recording_id)
+        query = (
+            transcripts.update()
+            .where(transcripts.c.recording_id == recording_id)
+            .values(deleted_at=datetime.now(timezone.utc))
+        )
         await get_database().execute(query)
 
     @staticmethod
```
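The pattern running through these transcript changes is uniform: `DELETE` statements become `UPDATE ... SET deleted_at`, and every read path gains a `deleted_at IS NULL` guard, so soft-deleted rows look like 404s while their files stay retrievable. A toy in-memory version of the same contract (illustrative only, not the SQLAlchemy code above):

```python
from datetime import datetime, timezone

class SoftDeleteStore:
    """Toy store mirroring the soft-delete contract in the diff: remove()
    stamps deleted_at instead of dropping the row, and reads filter on
    deleted_at being unset."""

    def __init__(self):
        self.rows = {}

    def add(self, id, **fields):
        self.rows[id] = {"deleted_at": None, **fields}

    def remove(self, id):
        row = self.rows.get(id)
        # Idempotent, like the early-return on deleted_at in the diff.
        if row and row["deleted_at"] is None:
            row["deleted_at"] = datetime.now(timezone.utc)

    def get(self, id):
        row = self.rows.get(id)
        # Reads treat soft-deleted rows as missing (a 404 at the API layer).
        return row if row and row["deleted_at"] is None else None

store = SoftDeleteStore()
store.add("t1", title="Standup")
store.remove("t1")
print(store.get("t1"))     # None (hidden from reads)
print("t1" in store.rows)  # True (data preserved for later retrieval)
```

The partial indexes added by the migration (`WHERE deleted_at IS NULL`) exist precisely because every hot read path now carries that filter.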
server/reflector/email.py (new file, 84 lines)

@@ -0,0 +1,84 @@

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import aiosmtplib
import structlog

from reflector.db.transcripts import Transcript
from reflector.settings import settings

logger = structlog.get_logger(__name__)


def is_email_configured() -> bool:
    return bool(settings.SMTP_HOST and settings.SMTP_FROM_EMAIL)


def get_transcript_url(transcript: Transcript) -> str:
    return f"{settings.UI_BASE_URL}/transcripts/{transcript.id}"


def _build_plain_text(transcript: Transcript, url: str) -> str:
    title = transcript.title or "Unnamed recording"
    lines = [
        f"Your transcript is ready: {title}",
        "",
        f"View it here: {url}",
    ]
    if transcript.short_summary:
        lines.extend(["", "Summary:", transcript.short_summary])
    return "\n".join(lines)


def _build_html(transcript: Transcript, url: str) -> str:
    title = transcript.title or "Unnamed recording"
    summary_html = ""
    if transcript.short_summary:
        summary_html = f"<p style='color:#555;'>{transcript.short_summary}</p>"

    return f"""\
<div style="font-family:sans-serif;max-width:600px;margin:0 auto;">
  <h2>Your transcript is ready</h2>
  <p><strong>{title}</strong></p>
  {summary_html}
  <p><a href="{url}" style="display:inline-block;padding:10px 20px;background:#4A90D9;color:#fff;text-decoration:none;border-radius:4px;">View Transcript</a></p>
  <p style="color:#999;font-size:12px;">This email was sent because you requested to receive the transcript from a meeting.</p>
</div>"""


async def send_transcript_email(to_emails: list[str], transcript: Transcript) -> int:
    """Send transcript notification to all emails. Returns count sent."""
    if not is_email_configured() or not to_emails:
        return 0

    url = get_transcript_url(transcript)
    title = transcript.title or "Unnamed recording"
    sent = 0

    for email_addr in to_emails:
        msg = MIMEMultipart("alternative")
        msg["Subject"] = f"Transcript Ready: {title}"
        msg["From"] = settings.SMTP_FROM_EMAIL
        msg["To"] = email_addr

        msg.attach(MIMEText(_build_plain_text(transcript, url), "plain"))
        msg.attach(MIMEText(_build_html(transcript, url), "html"))

        try:
            await aiosmtplib.send(
                msg,
                hostname=settings.SMTP_HOST,
                port=settings.SMTP_PORT,
                username=settings.SMTP_USERNAME,
                password=settings.SMTP_PASSWORD,
                start_tls=settings.SMTP_USE_TLS,
            )
            sent += 1
        except Exception:
            logger.exception(
                "Failed to send transcript email",
                to=email_addr,
                transcript_id=transcript.id,
            )

    return sent
```
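The `multipart/alternative` construction in `send_transcript_email` relies on a MIME convention: alternatives are listed from least to most preferred, so the plain-text part is attached before the HTML part and capable clients render the HTML. A standalone stdlib illustration (the addresses are invented examples):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Transcript Ready: Standup"
msg["From"] = "reflector@example.com"
msg["To"] = "alice@example.com"

# Plain text first, HTML last: clients render the last part they support.
msg.attach(MIMEText("Your transcript is ready", "plain"))
msg.attach(MIMEText("<p>Your transcript is ready</p>", "html"))

parts = [p.get_content_type() for p in msg.get_payload()]
print(parts)  # ['text/plain', 'text/html']
```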
```diff
@@ -21,6 +21,7 @@ class TaskName(StrEnum):
     CLEANUP_CONSENT = "cleanup_consent"
     POST_ZULIP = "post_zulip"
     SEND_WEBHOOK = "send_webhook"
+    SEND_EMAIL = "send_email"
     PAD_TRACK = "pad_track"
     TRANSCRIBE_TRACK = "transcribe_track"
     DETECT_CHUNK_TOPIC = "detect_chunk_topic"
@@ -59,7 +60,7 @@ TIMEOUT_AUDIO = 720  # Audio processing: padding, mixdown (Hatchet execution_tim
 TIMEOUT_AUDIO_HTTP = (
     660  # httpx timeout for pad_track — below 720 so Hatchet doesn't race
 )
-TIMEOUT_HEAVY = 600  # Transcription, fan-out LLM tasks (Hatchet execution_timeout)
+TIMEOUT_HEAVY = 1200  # Transcription, fan-out LLM tasks (Hatchet execution_timeout)
 TIMEOUT_HEAVY_HTTP = (
-    540  # httpx timeout for transcribe_track — below 600 so Hatchet doesn't race
+    1150  # httpx timeout for transcribe_track — below 1200 so Hatchet doesn't race
 )
```
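The invariant behind these timeout pairs is that the httpx client timeout must stay below Hatchet's `execution_timeout`, so the HTTP call fails (and the task can report the error) before the engine kills the task. A quick check of the values in this diff:

```python
# Timeout pairs from the diff: (Hatchet execution_timeout, httpx timeout), in seconds.
# The HTTP timeout must be strictly below the engine timeout so the client
# gives up first instead of racing Hatchet's kill.
PAIRS = {
    "audio": (720, 660),    # pad_track / mixdown
    "heavy": (1200, 1150),  # transcribe_track and fan-out LLM tasks (post-diff)
}

for name, (engine, http) in PAIRS.items():
    assert http < engine, f"{name}: HTTP timeout would race Hatchet"
    print(f"{name}: {engine - http}s margin")
```

Note the diff also narrows the heavy margin from 60s (600/540) to 50s (1200/1150) while doubling the overall budget.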
```diff
@@ -16,6 +16,7 @@ from reflector.hatchet.workflows.subject_processing import subject_workflow
 from reflector.hatchet.workflows.topic_chunk_processing import topic_chunk_workflow
 from reflector.hatchet.workflows.track_processing import track_workflow
 from reflector.logger import logger
+from reflector.settings import settings
 
 SLOTS = 10
 WORKER_NAME = "llm-worker-pool"
@@ -34,6 +35,38 @@ def main():
             error=str(e),
         )
 
+    workflows = [
+        daily_multitrack_pipeline,
+        file_pipeline,
+        live_post_pipeline,
+        topic_chunk_workflow,
+        subject_workflow,
+        track_workflow,
+    ]
+
+    _zulip_dag_enabled = all(
+        [
+            settings.ZULIP_REALM,
+            settings.ZULIP_API_KEY,
+            settings.ZULIP_BOT_EMAIL,
+            settings.ZULIP_DAG_STREAM,
+            settings.ZULIP_DAG_TOPIC,
+        ]
+    )
+    if _zulip_dag_enabled:
+        from reflector.hatchet.workflows.failed_runs_monitor import (  # noqa: PLC0415
+            failed_runs_monitor,
+        )
+
+        workflows.append(failed_runs_monitor)
+        logger.info(
+            "FailedRunsMonitor cron enabled",
+            stream=settings.ZULIP_DAG_STREAM,
+            topic=settings.ZULIP_DAG_TOPIC,
+        )
+    else:
+        logger.info("FailedRunsMonitor cron disabled (Zulip DAG not configured)")
+
     logger.info(
         "Starting Hatchet LLM worker pool (all tasks except mixdown)",
         worker_name=WORKER_NAME,
@@ -47,14 +80,7 @@ def main():
         labels={
             "pool": POOL,
         },
-        workflows=[
-            daily_multitrack_pipeline,
-            file_pipeline,
-            live_post_pipeline,
-            topic_chunk_workflow,
-            subject_workflow,
-            track_workflow,
-        ],
+        workflows=workflows,
     )
 
     try:
```
@@ -33,6 +33,7 @@ from hatchet_sdk.labels import DesiredWorkerLabel
from pydantic import BaseModel

from reflector.dailyco_api.client import DailyApiClient
from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.broadcast import (
    append_event_and_broadcast,
    set_status_and_broadcast,
@@ -51,6 +52,7 @@ from reflector.hatchet.error_classification import is_non_retryable
from reflector.hatchet.workflows.models import (
    ActionItemsResult,
    ConsentResult,
    EmailResult,
    FinalizeResult,
    MixdownResult,
    PaddedTrackInfo,
@@ -1465,6 +1467,69 @@ async def send_webhook(input: PipelineInput, ctx: Context) -> WebhookResult:
        return WebhookResult(webhook_sent=False)

@daily_multitrack_pipeline.task(
    parents=[cleanup_consent],
    execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
    retries=5,
    backoff_factor=2.0,
    backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: PipelineInput, ctx: Context) -> EmailResult:
    """Send transcript email to collected recipients."""
    ctx.log(f"send_email: transcript_id={input.transcript_id}")

    if not is_email_configured():
        ctx.log("send_email skipped (SMTP not configured)")
        return EmailResult(skipped=True)

    async with fresh_db_connection():
        from reflector.db.meetings import meetings_controller  # noqa: PLC0415
        from reflector.db.recordings import recordings_controller  # noqa: PLC0415
        from reflector.db.transcripts import transcripts_controller  # noqa: PLC0415

        transcript = await transcripts_controller.get_by_id(input.transcript_id)
        if not transcript:
            ctx.log("send_email skipped (transcript not found)")
            return EmailResult(skipped=True)

        meeting = None
        if transcript.meeting_id:
            meeting = await meetings_controller.get_by_id(transcript.meeting_id)
        if not meeting and transcript.recording_id:
            recording = await recordings_controller.get_by_id(transcript.recording_id)
            if recording and recording.meeting_id:
                meeting = await meetings_controller.get_by_id(recording.meeting_id)

        recipients = (
            list(meeting.email_recipients)
            if meeting and meeting.email_recipients
            else []
        )

        # Also check room-level email
        from reflector.db.rooms import rooms_controller  # noqa: PLC0415

        if transcript.room_id:
            room = await rooms_controller.get_by_id(transcript.room_id)
            if room and room.email_transcript_to:
                if room.email_transcript_to not in recipients:
                    recipients.append(room.email_transcript_to)

        if not recipients:
            ctx.log("send_email skipped (no email recipients)")
            return EmailResult(skipped=True)

        # For room-level emails, do NOT change share_mode (only set public if meeting had recipients)
        if meeting and meeting.email_recipients:
            await transcripts_controller.update(transcript, {"share_mode": "public"})

        count = await send_transcript_email(recipients, transcript)
        ctx.log(f"send_email complete: sent {count} emails")

        return EmailResult(emails_sent=count)


async def on_workflow_failure(input: PipelineInput, ctx: Context) -> None:
    """Run when the workflow is truly dead (all retries exhausted).

server/reflector/hatchet/workflows/failed_runs_monitor.py (new file, 109 lines)
@@ -0,0 +1,109 @@
"""
|
||||
Hatchet cron workflow: FailedRunsMonitor
|
||||
|
||||
Runs hourly, queries Hatchet for failed pipeline runs in the last hour,
|
||||
and posts details to Zulip for visibility.
|
||||
|
||||
Only registered with the worker when Zulip DAG settings are configured.
|
||||
"""
|
||||
|
||||
from datetime import datetime, timedelta, timezone
|
||||
|
||||
from hatchet_sdk import Context
|
||||
from hatchet_sdk.clients.rest.models import V1TaskStatus
|
||||
|
||||
from reflector.hatchet.client import HatchetClientManager
|
||||
from reflector.logger import logger
|
||||
from reflector.settings import settings
|
||||
from reflector.tools.render_hatchet_run import render_run_detail
|
||||
from reflector.zulip import send_message_to_zulip
|
||||
|
||||
MONITORED_PIPELINES = {
|
||||
"DiarizationPipeline",
|
||||
"FilePipeline",
|
||||
"LivePostProcessingPipeline",
|
||||
}
|
||||
|
||||
LOOKBACK_HOURS = 1
|
||||
|
||||
hatchet = HatchetClientManager.get_client()
|
||||
|
||||
failed_runs_monitor = hatchet.workflow(
|
||||
name="FailedRunsMonitor",
|
||||
on_crons=["0 * * * *"],
|
||||
)
|
||||
|
||||
|
||||
async def _check_failed_runs() -> dict:
|
||||
"""Core logic: query for failed pipeline runs and post each to Zulip.
|
||||
|
||||
Extracted from the Hatchet task for testability.
|
||||
"""
|
||||
now = datetime.now(tz=timezone.utc)
|
||||
since = now - timedelta(hours=LOOKBACK_HOURS)
|
||||
|
||||
client = HatchetClientManager.get_client()
|
||||
|
||||
try:
|
||||
result = await client.runs.aio_list(
|
||||
statuses=[V1TaskStatus.FAILED],
|
||||
since=since,
|
||||
until=now,
|
||||
limit=200,
|
||||
)
|
||||
except Exception:
|
||||
logger.exception("[FailedRunsMonitor] Failed to list runs from Hatchet")
|
||||
return {"checked": 0, "reported": 0, "error": "failed to list runs"}
|
||||
|
||||
rows = result.rows or []
|
||||
|
||||
# Filter to main pipelines only (skip child workflows like TrackProcessing, etc.)
|
||||
failed_main_runs = [run for run in rows if run.workflow_name in MONITORED_PIPELINES]
|
||||
|
||||
if not failed_main_runs:
|
||||
logger.info(
|
||||
"[FailedRunsMonitor] No failed pipeline runs in the last hour",
|
||||
total_failed=len(rows),
|
||||
since=since.isoformat(),
|
||||
)
|
||||
return {"checked": len(rows), "reported": 0}
|
||||
|
||||
logger.info(
|
||||
"[FailedRunsMonitor] Found failed pipeline runs",
|
||||
count=len(failed_main_runs),
|
||||
since=since.isoformat(),
|
||||
)
|
||||
|
||||
reported = 0
|
||||
for run in failed_main_runs:
|
||||
try:
|
||||
details = await client.runs.aio_get(run.workflow_run_external_id)
|
||||
content = render_run_detail(details)
|
||||
await send_message_to_zulip(
|
||||
settings.ZULIP_DAG_STREAM,
|
||||
settings.ZULIP_DAG_TOPIC,
|
||||
content,
|
||||
)
|
||||
reported += 1
|
||||
except Exception:
|
||||
logger.exception(
|
||||
"[FailedRunsMonitor] Failed to report run",
|
||||
workflow_run_id=run.workflow_run_external_id,
|
||||
workflow_name=run.workflow_name,
|
||||
)
|
||||
|
||||
logger.info(
|
||||
"[FailedRunsMonitor] Finished reporting",
|
||||
reported=reported,
|
||||
total_failed_main=len(failed_main_runs),
|
||||
)
|
||||
return {"checked": len(rows), "reported": reported}
|
||||
|
||||
|
||||
@failed_runs_monitor.task(
|
||||
execution_timeout=timedelta(seconds=120),
|
||||
retries=1,
|
||||
)
|
||||
async def check_failed_runs(input, ctx: Context) -> dict:
|
||||
"""Hatchet task entry point — delegates to _check_failed_runs."""
|
||||
return await _check_failed_runs()
|
||||
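The selection step in `_check_failed_runs` keeps only monitored parent pipelines whose failure falls inside the lookback window. The same logic can be illustrated with plain tuples standing in for Hatchet run objects; this is a sketch of the filter, not the SDK call:

```python
from datetime import datetime, timedelta, timezone

MONITORED = {"DiarizationPipeline", "FilePipeline", "LivePostProcessingPipeline"}


def select_failed_main_runs(
    runs: list[tuple[str, datetime]],  # (workflow_name, failed_at)
    now: datetime,
    lookback_hours: int = 1,
) -> list[tuple[str, datetime]]:
    """Keep only monitored pipelines that failed inside the window."""
    since = now - timedelta(hours=lookback_hours)
    return [
        (name, at)
        for name, at in runs
        if name in MONITORED and since <= at <= now
    ]
```

Child workflows (e.g. TrackProcessing) fall through the name filter, matching the comment in the monitor above.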
@@ -18,6 +18,7 @@ from pathlib import Path
from hatchet_sdk import Context
from pydantic import BaseModel

from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.broadcast import (
    append_event_and_broadcast,
    set_status_and_broadcast,
@@ -37,6 +38,7 @@ from reflector.hatchet.workflows.daily_multitrack_pipeline import (
)
from reflector.hatchet.workflows.models import (
    ConsentResult,
    EmailResult,
    TitleResult,
    TopicsResult,
    WaveformResult,
@@ -859,6 +861,70 @@ async def send_webhook(input: FilePipelineInput, ctx: Context) -> WebhookResult:
        return WebhookResult(webhook_sent=False)

@file_pipeline.task(
    parents=[cleanup_consent],
    execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
    retries=5,
    backoff_factor=2.0,
    backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: FilePipelineInput, ctx: Context) -> EmailResult:
    """Send transcript email to collected recipients."""
    ctx.log(f"send_email: transcript_id={input.transcript_id}")

    if not is_email_configured():
        ctx.log("send_email skipped (SMTP not configured)")
        return EmailResult(skipped=True)

    async with fresh_db_connection():
        from reflector.db.meetings import meetings_controller  # noqa: PLC0415
        from reflector.db.recordings import recordings_controller  # noqa: PLC0415
        from reflector.db.transcripts import transcripts_controller  # noqa: PLC0415

        transcript = await transcripts_controller.get_by_id(input.transcript_id)
        if not transcript:
            ctx.log("send_email skipped (transcript not found)")
            return EmailResult(skipped=True)

        # Try transcript.meeting_id first, then fall back to recording.meeting_id
        meeting = None
        if transcript.meeting_id:
            meeting = await meetings_controller.get_by_id(transcript.meeting_id)
        if not meeting and transcript.recording_id:
            recording = await recordings_controller.get_by_id(transcript.recording_id)
            if recording and recording.meeting_id:
                meeting = await meetings_controller.get_by_id(recording.meeting_id)

        recipients = (
            list(meeting.email_recipients)
            if meeting and meeting.email_recipients
            else []
        )

        # Also check room-level email
        from reflector.db.rooms import rooms_controller  # noqa: PLC0415

        if transcript.room_id:
            room = await rooms_controller.get_by_id(transcript.room_id)
            if room and room.email_transcript_to:
                if room.email_transcript_to not in recipients:
                    recipients.append(room.email_transcript_to)

        if not recipients:
            ctx.log("send_email skipped (no email recipients)")
            return EmailResult(skipped=True)

        # For room-level emails, do NOT change share_mode (only set public if meeting had recipients)
        if meeting and meeting.email_recipients:
            await transcripts_controller.update(transcript, {"share_mode": "public"})

        count = await send_transcript_email(recipients, transcript)
        ctx.log(f"send_email complete: sent {count} emails")

        return EmailResult(emails_sent=count)


# --- On failure handler ---

@@ -17,6 +17,7 @@ from datetime import timedelta
from hatchet_sdk import Context
from pydantic import BaseModel

from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import (
    TIMEOUT_HEAVY,
@@ -32,6 +33,7 @@ from reflector.hatchet.workflows.daily_multitrack_pipeline import (
)
from reflector.hatchet.workflows.models import (
    ConsentResult,
    EmailResult,
    TitleResult,
    WaveformResult,
    WebhookResult,
@@ -361,6 +363,69 @@ async def send_webhook(input: LivePostPipelineInput, ctx: Context) -> WebhookRes
        return WebhookResult(webhook_sent=False)

@live_post_pipeline.task(
    parents=[final_summaries],
    execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
    retries=5,
    backoff_factor=2.0,
    backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: LivePostPipelineInput, ctx: Context) -> EmailResult:
    """Send transcript email to collected recipients."""
    ctx.log(f"send_email: transcript_id={input.transcript_id}")

    if not is_email_configured():
        ctx.log("send_email skipped (SMTP not configured)")
        return EmailResult(skipped=True)

    async with fresh_db_connection():
        from reflector.db.meetings import meetings_controller  # noqa: PLC0415
        from reflector.db.recordings import recordings_controller  # noqa: PLC0415
        from reflector.db.transcripts import transcripts_controller  # noqa: PLC0415

        transcript = await transcripts_controller.get_by_id(input.transcript_id)
        if not transcript:
            ctx.log("send_email skipped (transcript not found)")
            return EmailResult(skipped=True)

        meeting = None
        if transcript.meeting_id:
            meeting = await meetings_controller.get_by_id(transcript.meeting_id)
        if not meeting and transcript.recording_id:
            recording = await recordings_controller.get_by_id(transcript.recording_id)
            if recording and recording.meeting_id:
                meeting = await meetings_controller.get_by_id(recording.meeting_id)

        recipients = (
            list(meeting.email_recipients)
            if meeting and meeting.email_recipients
            else []
        )

        # Also check room-level email
        from reflector.db.rooms import rooms_controller  # noqa: PLC0415

        if transcript.room_id:
            room = await rooms_controller.get_by_id(transcript.room_id)
            if room and room.email_transcript_to:
                if room.email_transcript_to not in recipients:
                    recipients.append(room.email_transcript_to)

        if not recipients:
            ctx.log("send_email skipped (no email recipients)")
            return EmailResult(skipped=True)

        # For room-level emails, do NOT change share_mode (only set public if meeting had recipients)
        if meeting and meeting.email_recipients:
            await transcripts_controller.update(transcript, {"share_mode": "public"})

        count = await send_transcript_email(recipients, transcript)
        ctx.log(f"send_email complete: sent {count} emails")

        return EmailResult(emails_sent=count)


# --- On failure handler ---

@@ -170,3 +170,10 @@ class WebhookResult(BaseModel):
    webhook_sent: bool
    skipped: bool = False
    response_code: int | None = None


class EmailResult(BaseModel):
    """Result from send_email task."""

    emails_sent: int = 0
    skipped: bool = False
@@ -194,6 +194,16 @@ class Settings(BaseSettings):
    ZULIP_REALM: str | None = None
    ZULIP_API_KEY: str | None = None
    ZULIP_BOT_EMAIL: str | None = None
    ZULIP_DAG_STREAM: str | None = None
    ZULIP_DAG_TOPIC: str | None = None

    # Email / SMTP integration (for transcript email notifications)
    SMTP_HOST: str | None = None
    SMTP_PORT: int = 587
    SMTP_USERNAME: str | None = None
    SMTP_PASSWORD: str | None = None
    SMTP_FROM_EMAIL: str | None = None
    SMTP_USE_TLS: bool = True

    # Hatchet workflow orchestration (always enabled for multitrack processing)
    HATCHET_CLIENT_TOKEN: str | None = None
@@ -116,9 +116,12 @@ class Storage:
        expires_in: int = 3600,
        *,
        bucket: str | None = None,
        extra_params: dict | None = None,
    ) -> str:
        """Generate presigned URL. bucket: override instance default if provided."""
-       return await self._get_file_url(filename, operation, expires_in, bucket=bucket)
+       return await self._get_file_url(
+           filename, operation, expires_in, bucket=bucket, extra_params=extra_params
+       )

    async def _get_file_url(
        self,
@@ -127,6 +130,7 @@ class Storage:
        expires_in: int = 3600,
        *,
        bucket: str | None = None,
        extra_params: dict | None = None,
    ) -> str:
        raise NotImplementedError


@@ -170,16 +170,23 @@ class AwsStorage(Storage):
        expires_in: int = 3600,
        *,
        bucket: str | None = None,
        extra_params: dict | None = None,
    ) -> str:
        actual_bucket = bucket or self._bucket_name
        folder = self.aws_folder
        s3filename = f"{folder}/{filename}" if folder else filename
        params = {}
        if extra_params:
            params.update(extra_params)
        # Always set Bucket/Key after extra_params to prevent overrides
        params["Bucket"] = actual_bucket
        params["Key"] = s3filename
        async with self.session.client(
            "s3", config=self.boto_config, endpoint_url=self._endpoint_url
        ) as client:
            presigned_url = await client.generate_presigned_url(
                operation,
-               Params={"Bucket": actual_bucket, "Key": s3filename},
+               Params=params,
                ExpiresIn=expires_in,
            )

server/reflector/tools/deleted_transcripts.py (new file, 257 lines)
@@ -0,0 +1,257 @@
#!/usr/bin/env python
"""
CLI tool for managing soft-deleted transcripts.

Usage:
    uv run python -m reflector.tools.deleted_transcripts list
    uv run python -m reflector.tools.deleted_transcripts files <transcript_id>
    uv run python -m reflector.tools.deleted_transcripts download <transcript_id> [--output-dir ./]
"""

import argparse
import asyncio
import json
import os

import structlog

from reflector.db import get_database
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.transcripts import Transcript, transcripts
from reflector.storage import get_source_storage, get_transcripts_storage

logger = structlog.get_logger(__name__)


async def list_deleted():
    """List all soft-deleted transcripts."""
    database = get_database()
    await database.connect()
    try:
        query = (
            transcripts.select()
            .where(transcripts.c.deleted_at.isnot(None))
            .order_by(transcripts.c.deleted_at.desc())
        )
        results = await database.fetch_all(query)

        if not results:
            print("No deleted transcripts found.")
            return

        print(
            f"{'ID':<40} {'Title':<40} {'Deleted At':<28} {'Recording ID':<40} {'Meeting ID'}"
        )
        print("-" * 180)
        for row in results:
            t = Transcript(**row)
            title = (t.title or "")[:38]
            deleted = t.deleted_at.isoformat() if t.deleted_at else ""
            print(
                f"{t.id:<40} {title:<40} {deleted:<28} {t.recording_id or '':<40} {t.meeting_id or ''}"
            )

        print(f"\nTotal: {len(results)} deleted transcript(s)")
    finally:
        await database.disconnect()


async def list_files(transcript_id: str):
    """List all S3 keys associated with a deleted transcript."""
    database = get_database()
    await database.connect()
    try:
        query = transcripts.select().where(transcripts.c.id == transcript_id)
        result = await database.fetch_one(query)
        if not result:
            print(f"Transcript {transcript_id} not found.")
            return

        t = Transcript(**result)
        if t.deleted_at is None:
            print(f"Transcript {transcript_id} is not deleted.")
            return

        print(f"Transcript: {t.id}")
        print(f"Title: {t.title}")
        print(f"Deleted at: {t.deleted_at}")
        print()

        files = []

        # Transcript audio
        if t.audio_location == "storage" and not t.audio_deleted:
            files.append(("Transcript audio", t.storage_audio_path, None))

        # Recording files
        if t.recording_id:
            recording = await recordings_controller.get_by_id(t.recording_id)
            if recording:
                if recording.object_key:
                    files.append(
                        (
                            "Recording object_key",
                            recording.object_key,
                            recording.bucket_name,
                        )
                    )
                if recording.track_keys:
                    for i, key in enumerate(recording.track_keys):
                        files.append((f"Track {i}", key, recording.bucket_name))

        # Cloud video
        if t.meeting_id:
            meeting = await meetings_controller.get_by_id(t.meeting_id)
            if meeting and meeting.daily_composed_video_s3_key:
                files.append(("Cloud video", meeting.daily_composed_video_s3_key, None))

        if not files:
            print("No associated files found.")
            return

        print(f"{'Type':<25} {'Bucket':<30} {'S3 Key'}")
        print("-" * 120)
        for label, key, bucket in files:
            print(f"{label:<25} {bucket or '(default)':<30} {key}")

        # Generate presigned URLs
        print("\nPresigned URLs (valid for 1 hour):")
        print("-" * 120)
        storage = get_transcripts_storage()
        for label, key, bucket in files:
            try:
                url = await storage.get_file_url(key, bucket=bucket, expires_in=3600)
                print(f"{label}: {url}")
            except Exception as e:
                print(f"{label}: ERROR - {e}")
    finally:
        await database.disconnect()


async def download_files(transcript_id: str, output_dir: str):
    """Download all files associated with a deleted transcript."""
    database = get_database()
    await database.connect()
    try:
        query = transcripts.select().where(transcripts.c.id == transcript_id)
        result = await database.fetch_one(query)
        if not result:
            print(f"Transcript {transcript_id} not found.")
            return

        t = Transcript(**result)
        if t.deleted_at is None:
            print(f"Transcript {transcript_id} is not deleted.")
            return

        dest = os.path.join(output_dir, t.id)
        os.makedirs(dest, exist_ok=True)

        storage = get_transcripts_storage()

        # Download transcript audio
        if t.audio_location == "storage" and not t.audio_deleted:
            try:
                data = await storage.get_file(t.storage_audio_path)
                path = os.path.join(dest, "audio.mp3")
                with open(path, "wb") as f:
                    f.write(data)
                print(f"Downloaded: {path}")
            except Exception as e:
                print(f"Failed to download audio: {e}")

        # Download recording files
        if t.recording_id:
            recording = await recordings_controller.get_by_id(t.recording_id)
            if recording and recording.track_keys:
                tracks_dir = os.path.join(dest, "tracks")
                os.makedirs(tracks_dir, exist_ok=True)
                for i, key in enumerate(recording.track_keys):
                    try:
                        data = await storage.get_file(key, bucket=recording.bucket_name)
                        filename = os.path.basename(key) or f"track_{i}"
                        path = os.path.join(tracks_dir, filename)
                        with open(path, "wb") as f:
                            f.write(data)
                        print(f"Downloaded: {path}")
                    except Exception as e:
                        print(f"Failed to download track {i}: {e}")

        # Download cloud video
        if t.meeting_id:
            meeting = await meetings_controller.get_by_id(t.meeting_id)
            if meeting and meeting.daily_composed_video_s3_key:
                try:
                    source_storage = get_source_storage("daily")
                    data = await source_storage.get_file(
                        meeting.daily_composed_video_s3_key
                    )
                    path = os.path.join(dest, "cloud_video.mp4")
                    with open(path, "wb") as f:
                        f.write(data)
                    print(f"Downloaded: {path}")
                except Exception as e:
                    print(f"Failed to download cloud video: {e}")

        # Write metadata
        metadata = {
            "id": t.id,
            "title": t.title,
            "created_at": t.created_at.isoformat() if t.created_at else None,
            "deleted_at": t.deleted_at.isoformat() if t.deleted_at else None,
            "duration": t.duration,
            "source_language": t.source_language,
            "target_language": t.target_language,
            "short_summary": t.short_summary,
            "long_summary": t.long_summary,
            "topics": [topic.model_dump() for topic in t.topics] if t.topics else [],
            "participants": [p.model_dump() for p in t.participants]
            if t.participants
            else [],
            "action_items": t.action_items,
            "webvtt": t.webvtt,
            "recording_id": t.recording_id,
            "meeting_id": t.meeting_id,
        }
        path = os.path.join(dest, "metadata.json")
        with open(path, "w") as f:
            json.dump(metadata, f, indent=2, default=str)
        print(f"Downloaded: {path}")

        print(f"\nAll files saved to: {dest}")
    finally:
        await database.disconnect()


def main():
    parser = argparse.ArgumentParser(description="Manage soft-deleted transcripts")
    subparsers = parser.add_subparsers(dest="command", required=True)

    subparsers.add_parser("list", help="List all deleted transcripts")

    files_parser = subparsers.add_parser(
        "files", help="List S3 keys for a deleted transcript"
    )
    files_parser.add_argument("transcript_id", help="Transcript ID")

    download_parser = subparsers.add_parser(
        "download", help="Download files for a deleted transcript"
    )
    download_parser.add_argument("transcript_id", help="Transcript ID")
    download_parser.add_argument(
        "--output-dir", default=".", help="Output directory (default: .)"
    )

    args = parser.parse_args()

    if args.command == "list":
        asyncio.run(list_deleted())
    elif args.command == "files":
        asyncio.run(list_files(args.transcript_id))
    elif args.command == "download":
        asyncio.run(download_files(args.transcript_id, args.output_dir))


if __name__ == "__main__":
    main()
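The `list` command above keys off `transcripts.c.deleted_at.isnot(None)`: the soft-delete convention where a NULL `deleted_at` means live and a timestamp means deleted, so nothing is physically removed. The same filter in plain SQL, using an in-memory SQLite table as a stand-in for the transcripts schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transcripts (id TEXT, title TEXT, deleted_at TEXT)")
conn.executemany(
    "INSERT INTO transcripts VALUES (?, ?, ?)",
    [
        ("t1", "Standup", None),                       # live
        ("t2", "Retro", "2026-03-20T10:00:00+00:00"),  # soft-deleted
    ],
)
# Equivalent of transcripts.c.deleted_at.isnot(None), newest deletion first
rows = conn.execute(
    "SELECT id FROM transcripts WHERE deleted_at IS NOT NULL ORDER BY deleted_at DESC"
).fetchall()
```

Only the soft-deleted row (`t2`) is returned; live transcripts never appear in this tool.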
server/reflector/tools/render_hatchet_run.py (new file, 412 lines)
@@ -0,0 +1,412 @@
"""
|
||||
Render Hatchet workflow runs as text DAG.
|
||||
|
||||
Usage:
|
||||
# Show latest 5 runs (summary table)
|
||||
uv run -m reflector.tools.render_hatchet_run
|
||||
|
||||
# Show specific run with full DAG + task details
|
||||
uv run -m reflector.tools.render_hatchet_run <workflow_run_id>
|
||||
|
||||
# Drill into Nth run from the list (1-indexed)
|
||||
uv run -m reflector.tools.render_hatchet_run --show 1
|
||||
|
||||
# Show latest N runs
|
||||
uv run -m reflector.tools.render_hatchet_run --last 10
|
||||
|
||||
# Filter by status
|
||||
uv run -m reflector.tools.render_hatchet_run --status FAILED
|
||||
uv run -m reflector.tools.render_hatchet_run --status RUNNING
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import asyncio
|
||||
from collections import defaultdict
|
||||
from datetime import datetime, timedelta, timezone
|
||||
|
||||
from hatchet_sdk.clients.rest.models import (
|
||||
V1TaskEvent,
|
||||
V1TaskStatus,
|
||||
V1TaskSummary,
|
||||
V1WorkflowRunDetails,
|
||||
WorkflowRunShapeItemForWorkflowRunDetails,
|
||||
)
|
||||
|
||||
from reflector.hatchet.client import HatchetClientManager
|
||||
|
||||
STATUS_ICON = {
|
||||
V1TaskStatus.COMPLETED: "\u2705",
|
||||
V1TaskStatus.RUNNING: "\u23f3",
|
||||
V1TaskStatus.FAILED: "\u274c",
|
||||
V1TaskStatus.QUEUED: "\u23f8\ufe0f",
|
||||
V1TaskStatus.CANCELLED: "\u26a0\ufe0f",
|
||||
}
|
||||
|
||||
STATUS_LABEL = {
|
||||
V1TaskStatus.COMPLETED: "Complete",
|
||||
V1TaskStatus.RUNNING: "Running",
|
||||
V1TaskStatus.FAILED: "FAILED",
|
||||
V1TaskStatus.QUEUED: "Queued",
|
||||
V1TaskStatus.CANCELLED: "Cancelled",
|
||||
}
|
||||
|
||||
|
||||
def _fmt_time(dt: datetime | None) -> str:
|
||||
if dt is None:
|
||||
return "-"
|
||||
return dt.strftime("%H:%M:%S")
|
||||
|
||||
|
||||
def _fmt_duration(ms: int | None) -> str:
|
||||
if ms is None:
|
||||
return "-"
|
||||
secs = ms / 1000
|
||||
if secs < 60:
|
||||
return f"{secs:.1f}s"
|
||||
mins = secs / 60
|
||||
return f"{mins:.1f}m"
|
||||
|
||||
|
||||
def _fmt_status_line(task: V1TaskSummary) -> str:
    """Format a status line like: Complete (finished 20:31:44)"""
    label = STATUS_LABEL.get(task.status, task.status.value)
    icon = STATUS_ICON.get(task.status, "?")

    if task.status == V1TaskStatus.COMPLETED and task.finished_at:
        return f"{icon} {label} (finished {_fmt_time(task.finished_at)})"
    elif task.status == V1TaskStatus.RUNNING and task.started_at:
        parts = [f"started {_fmt_time(task.started_at)}"]
        if task.duration:
            parts.append(f"{_fmt_duration(task.duration)} elapsed")
        return f"{icon} {label} ({', '.join(parts)})"
    elif task.status == V1TaskStatus.FAILED and task.finished_at:
        return f"{icon} {label} (failed {_fmt_time(task.finished_at)})"
    elif task.status == V1TaskStatus.CANCELLED:
        return f"{icon} {label}"
    elif task.status == V1TaskStatus.QUEUED:
        return f"{icon} {label}"
    return f"{icon} {label}"


def _topo_sort(
    shape: list[WorkflowRunShapeItemForWorkflowRunDetails],
) -> list[str]:
    """Topological sort of step_ids from shape DAG."""
    step_ids = {s.step_id for s in shape}
    children_map: dict[str, list[str]] = {}
    in_degree: dict[str, int] = {sid: 0 for sid in step_ids}

    for s in shape:
        children = [c for c in (s.children_step_ids or []) if c in step_ids]
        children_map[s.step_id] = children
        for c in children:
            in_degree[c] += 1

    queue = sorted(sid for sid, deg in in_degree.items() if deg == 0)
    result: list[str] = []
    while queue:
        node = queue.pop(0)
        result.append(node)
        for c in children_map.get(node, []):
            in_degree[c] -= 1
            if in_degree[c] == 0:
                queue.append(c)
        queue.sort()

    return result
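`_topo_sort` above is Kahn's algorithm with a lexicographic tie-break (the queue is re-sorted after every step, so independent nodes come out in name order). The same procedure on plain `{node: children}` dicts, without the SDK shape objects:

```python
def topo_sort(edges: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: repeatedly emit the smallest zero-in-degree node."""
    in_degree = {n: 0 for n in edges}
    for children in edges.values():
        for c in children:
            in_degree[c] += 1
    queue = sorted(n for n, d in in_degree.items() if d == 0)
    out: list[str] = []
    while queue:
        node = queue.pop(0)
        out.append(node)
        for c in edges[node]:
            in_degree[c] -= 1
            if in_degree[c] == 0:
                queue.append(c)
        queue.sort()
    return out
```

For a diamond DAG where `a` and `b` both feed `c`, the tie-break emits `a` before `b` deterministically, which keeps the rendered table stable across runs.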
def render_run_detail(details: V1WorkflowRunDetails) -> str:
    """Render a single workflow run as markdown DAG with task details."""
    shape = details.shape or []
    tasks = details.tasks or []
    events = details.task_events or []
    run = details.run

    if not shape:
        return f"Run {run.metadata.id}: {run.status.value} (no shape data)"

    # Build lookups
    step_to_shape: dict[str, WorkflowRunShapeItemForWorkflowRunDetails] = {
        s.step_id: s for s in shape
    }
    step_to_name: dict[str, str] = {s.step_id: s.task_name for s in shape}

    # Reverse edges (parents)
    parents: dict[str, list[str]] = {s.step_id: [] for s in shape}
    for s in shape:
        for child_id in s.children_step_ids or []:
            if child_id in parents:
                parents[child_id].append(s.step_id)

    # Join tasks by step_id
    task_by_step: dict[str, V1TaskSummary] = {}
    for t in tasks:
        if t.step_id and t.step_id in step_to_name:
            task_by_step[t.step_id] = t

    # Events indexed by task_external_id
    events_by_task: dict[str, list[V1TaskEvent]] = defaultdict(list)
    for ev in events:
        events_by_task[ev.task_id].append(ev)

    ordered = _topo_sort(shape)

    lines: list[str] = []

    # Run header
    run_icon = STATUS_ICON.get(run.status, "?")
    run_name = run.display_name or run.workflow_id
    dur = _fmt_duration(run.duration)
    lines.append(f"**{run_name}** {run_icon} {dur}")
    lines.append(f"ID: `{run.metadata.id}`")
    if run.additional_metadata:
        meta_parts = [f"{k}=`{v}`" for k, v in run.additional_metadata.items()]
        lines.append(f"Meta: {', '.join(meta_parts)}")
    if run.error_message:
        # Take first line of error only for header
        first_line = run.error_message.split("\n")[0]
        lines.append(f"Error: {first_line}")
    lines.append("")

    # DAG Status Overview table (collapsible)
    lines.append("```spoiler DAG Status Overview")
    lines.append("| Node | Status | Duration | Dependencies |")
    lines.append("|------|--------|----------|--------------|")

    for step_id in ordered:
        s = step_to_shape[step_id]
        t = task_by_step.get(step_id)
        name = step_to_name[step_id]
        icon = STATUS_ICON.get(t.status, "?") if t else "?"
        dur = _fmt_duration(t.duration) if t else "-"

        parent_names = [step_to_name[p] for p in parents[step_id]]
        child_names = [
            step_to_name[c] for c in (s.children_step_ids or []) if c in step_to_name
        ]
        deps_left = ", ".join(parent_names) if parent_names else ""
        deps_right = ", ".join(child_names) if child_names else ""
        if deps_left and deps_right:
            deps = f"{deps_left} \u2192 {deps_right}"
        elif deps_right:
            deps = f"\u2192 {deps_right}"
        elif deps_left:
            deps = f"{deps_left} \u2192"
        else:
            deps = "-"

        lines.append(f"| {name} | {icon} | {dur} | {deps} |")

    lines.append("```")
    lines.append("")

    # Node details (collapsible)
    lines.append("```spoiler Node Details")
    for step_id in ordered:
        t = task_by_step.get(step_id)
        name = step_to_name[step_id]

        if not t:
            lines.append(f"**\U0001f4e6 {name}**")
            lines.append("Status: no task data")
            lines.append("")
            continue

        lines.append(f"**\U0001f4e6 {name}**")
        lines.append(f"Status: {_fmt_status_line(t)}")

        if t.duration:
            lines.append(f"Duration: {_fmt_duration(t.duration)}")
        if t.retry_count and t.retry_count > 0:
            lines.append(f"Retries: {t.retry_count}")

        # Fan-out children
        if t.num_spawned_children and t.num_spawned_children > 0:
            children = t.children or []
            completed = sum(1 for c in children if c.status == V1TaskStatus.COMPLETED)
            failed = sum(1 for c in children if c.status == V1TaskStatus.FAILED)
            running = sum(1 for c in children if c.status == V1TaskStatus.RUNNING)
            lines.append(
                f"Spawned children: {completed}/{t.num_spawned_children} done"
                f"{f', {running} running' if running else ''}"
                f"{f', {failed} failed' if failed else ''}"
            )

        # Error message (first meaningful line only, full trace in events)
        if t.error_message:
            err_lines = t.error_message.strip().split("\n")
            # Find first non-empty, non-traceback line
            err_summary = err_lines[0]
            for line in err_lines:
stripped = line.strip()
|
||||
if stripped and not stripped.startswith(
|
||||
("Traceback", "File ", "{", ")")
|
||||
):
|
||||
err_summary = stripped
|
||||
break
|
||||
lines.append(f"Error: `{err_summary}`")
|
||||
|
||||
# Events log
|
||||
task_events = sorted(
|
||||
events_by_task.get(t.task_external_id, []),
|
||||
key=lambda e: e.timestamp,
|
||||
)
|
||||
if task_events:
|
||||
lines.append("Events:")
|
||||
for ev in task_events:
|
||||
ts = ev.timestamp.strftime("%H:%M:%S")
|
||||
ev_icon = ""
|
||||
if ev.event_type.value == "FINISHED":
|
||||
ev_icon = "\u2705 "
|
||||
elif ev.event_type.value in ("FAILED", "TIMED_OUT"):
|
||||
ev_icon = "\u274c "
|
||||
elif ev.event_type.value == "STARTED":
|
||||
ev_icon = "\u25b6\ufe0f "
|
||||
elif ev.event_type.value == "RETRYING":
|
||||
ev_icon = "\U0001f504 "
|
||||
elif ev.event_type.value == "CANCELLED":
|
||||
ev_icon = "\u26a0\ufe0f "
|
||||
|
||||
msg = ev.message.strip()
|
||||
if ev.error_message:
|
||||
# Just first line of error in event log
|
||||
err_first = ev.error_message.strip().split("\n")[0]
|
||||
if msg:
|
||||
msg += f" | {err_first}"
|
||||
else:
|
||||
msg = err_first
|
||||
|
||||
if msg:
|
||||
lines.append(f" `{ts}` {ev_icon}{ev.event_type.value}: {msg}")
|
||||
else:
|
||||
lines.append(f" `{ts}` {ev_icon}{ev.event_type.value}")
|
||||
|
||||
lines.append("")
|
||||
|
||||
lines.append("```")
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
def render_run_summary(idx: int, run: V1TaskSummary) -> str:
|
||||
"""One-line summary for a run in the list view."""
|
||||
icon = STATUS_ICON.get(run.status, "?")
|
||||
name = run.display_name or run.workflow_name or "?"
|
||||
run_id = run.workflow_run_external_id or "?"
|
||||
dur = _fmt_duration(run.duration)
|
||||
started = _fmt_time(run.started_at)
|
||||
meta = ""
|
||||
if run.additional_metadata:
|
||||
meta_parts = [f"{k}=`{v}`" for k, v in run.additional_metadata.items()]
|
||||
meta = f" ({', '.join(meta_parts)})"
|
||||
return (
|
||||
f" {idx}. {icon} **{name}** started={started} dur={dur}{meta}\n"
|
||||
f" `{run_id}`"
|
||||
)
|
||||
|
||||
|
||||
async def _fetch_run_list(
|
||||
count: int = 5,
|
||||
statuses: list[V1TaskStatus] | None = None,
|
||||
) -> list[V1TaskSummary]:
|
||||
client = HatchetClientManager.get_client()
|
||||
since = datetime.now(timezone.utc) - timedelta(days=7)
|
||||
runs = await client.runs.aio_list(
|
||||
since=since,
|
||||
statuses=statuses,
|
||||
limit=count,
|
||||
)
|
||||
return runs.rows or []
|
||||
|
||||
|
||||
async def list_recent_runs(
|
||||
count: int = 5,
|
||||
statuses: list[V1TaskStatus] | None = None,
|
||||
) -> str:
|
||||
"""List recent workflow runs as text."""
|
||||
rows = await _fetch_run_list(count, statuses)
|
||||
|
||||
if not rows:
|
||||
return "No runs found in the last 7 days."
|
||||
|
||||
lines = [f"Recent runs ({len(rows)}):", ""]
|
||||
for i, run in enumerate(rows, 1):
|
||||
lines.append(render_run_summary(i, run))
|
||||
|
||||
lines.append("")
|
||||
lines.append("Use `--show N` to see full DAG for run N")
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
async def show_run(workflow_run_id: str) -> str:
|
||||
"""Fetch and render a single run."""
|
||||
client = HatchetClientManager.get_client()
|
||||
details = await client.runs.aio_get(workflow_run_id)
|
||||
return render_run_detail(details)
|
||||
|
||||
|
||||
async def show_nth_run(
|
||||
n: int,
|
||||
count: int = 5,
|
||||
statuses: list[V1TaskStatus] | None = None,
|
||||
) -> str:
|
||||
"""Fetch list, then drill into Nth run."""
|
||||
rows = await _fetch_run_list(count, statuses)
|
||||
|
||||
if not rows:
|
||||
return "No runs found in the last 7 days."
|
||||
if n < 1 or n > len(rows):
|
||||
return f"Invalid index {n}. Have {len(rows)} runs (1-{len(rows)})."
|
||||
|
||||
run = rows[n - 1]
|
||||
return await show_run(run.workflow_run_external_id)
|
||||
|
||||
|
||||
async def main_async(args: argparse.Namespace) -> None:
|
||||
statuses = [V1TaskStatus(args.status)] if args.status else None
|
||||
|
||||
if args.run_id:
|
||||
output = await show_run(args.run_id)
|
||||
elif args.show is not None:
|
||||
output = await show_nth_run(args.show, count=args.last, statuses=statuses)
|
||||
else:
|
||||
output = await list_recent_runs(count=args.last, statuses=statuses)
|
||||
|
||||
print(output)
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Render Hatchet workflow runs as text DAG"
|
||||
)
|
||||
parser.add_argument(
|
||||
"run_id",
|
||||
nargs="?",
|
||||
default=None,
|
||||
help="Workflow run ID to show in detail. If omitted, lists recent runs.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--show",
|
||||
type=int,
|
||||
default=None,
|
||||
metavar="N",
|
||||
help="Show full DAG for the Nth run in the list (1-indexed)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--last",
|
||||
type=int,
|
||||
default=5,
|
||||
help="Number of recent runs to list (default: 5)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--status",
|
||||
choices=["QUEUED", "RUNNING", "COMPLETED", "FAILED", "CANCELLED"],
|
||||
help="Filter by status",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
asyncio.run(main_async(args))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
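The dependency column in the DAG table above comes from inverting the workflow shape's child edges into a parent map, then walking the steps in topological order. A minimal standalone sketch of that pattern, with a hypothetical `Step` dataclass and Kahn's algorithm standing in for `_topo_sort` (whose definition is not shown in this excerpt):

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Step:
    # Hypothetical stand-in for the workflow shape entries used above
    step_id: str
    children_step_ids: list[str] = field(default_factory=list)


def build_parents(shape: list[Step]) -> dict[str, list[str]]:
    # Invert child edges into a parent adjacency map, as render_run_detail does
    parents: dict[str, list[str]] = {s.step_id: [] for s in shape}
    for s in shape:
        for child_id in s.children_step_ids:
            if child_id in parents:
                parents[child_id].append(s.step_id)
    return parents


def topo_sort(shape: list[Step]) -> list[str]:
    # Kahn's algorithm: emit steps once all of their parents have been emitted
    parents = build_parents(shape)
    indegree = {sid: len(ps) for sid, ps in parents.items()}
    by_id = {s.step_id: s for s in shape}
    queue = deque(sid for sid, d in indegree.items() if d == 0)
    ordered: list[str] = []
    while queue:
        sid = queue.popleft()
        ordered.append(sid)
        for child in by_id[sid].children_step_ids:
            if child in indegree:
                indegree[child] -= 1
                if indegree[child] == 0:
                    queue.append(child)
    return ordered


shape = [
    Step("download", ["transcribe"]),
    Step("transcribe", ["summarize"]),
    Step("summarize"),
]
print(topo_sort(shape))  # ['download', 'transcribe', 'summarize']
print(build_parents(shape)["summarize"])  # ['transcribe']
```

This keeps each table row's "parents → children" rendering O(1) once the maps are built.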
20
server/reflector/views/config.py
Normal file
@@ -0,0 +1,20 @@
from fastapi import APIRouter
from pydantic import BaseModel

from reflector.email import is_email_configured
from reflector.settings import settings

router = APIRouter()


class ConfigResponse(BaseModel):
    zulip_enabled: bool
    email_enabled: bool


@router.get("/config", response_model=ConfigResponse)
async def get_config():
    return ConfigResponse(
        zulip_enabled=bool(settings.ZULIP_REALM),
        email_enabled=is_email_configured(),
    )
@@ -4,7 +4,7 @@ from typing import Annotated, Any, Optional
from uuid import UUID

from fastapi import APIRouter, Depends, HTTPException, Request
from pydantic import BaseModel
from pydantic import BaseModel, EmailStr

import reflector.auth as auth
from reflector.dailyco_api import RecordingType
@@ -151,3 +151,25 @@ async def start_recording(
        raise HTTPException(
            status_code=500, detail=f"Failed to start recording: {str(e)}"
        )


class AddEmailRecipientRequest(BaseModel):
    email: EmailStr


@router.post("/meetings/{meeting_id}/email-recipient")
async def add_email_recipient(
    meeting_id: str,
    request: AddEmailRecipientRequest,
    user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
):
    """Add an email address to receive the transcript link when processing completes."""
    meeting = await meetings_controller.get_by_id(meeting_id)
    if not meeting:
        raise HTTPException(status_code=404, detail="Meeting not found")

    recipients = await meetings_controller.add_email_recipient(
        meeting_id, request.email
    )

    return {"status": "success", "email_recipients": recipients}
@@ -44,6 +44,7 @@ class Room(BaseModel):
    ics_last_etag: Optional[str] = None
    platform: Platform
    skip_consent: bool = False
    email_transcript_to: str | None = None


class RoomDetails(Room):
@@ -93,6 +94,7 @@ class CreateRoom(BaseModel):
    ics_enabled: bool = False
    platform: Platform
    skip_consent: bool = False
    email_transcript_to: str | None = None


class UpdateRoom(BaseModel):
@@ -112,6 +114,7 @@ class UpdateRoom(BaseModel):
    ics_enabled: Optional[bool] = None
    platform: Optional[Platform] = None
    skip_consent: Optional[bool] = None
    email_transcript_to: Optional[str] = None


class CreateRoomMeeting(BaseModel):
@@ -253,6 +256,7 @@ async def rooms_create(
        ics_enabled=room.ics_enabled,
        platform=room.platform,
        skip_consent=room.skip_consent,
        email_transcript_to=room.email_transcript_to,
    )
@@ -16,6 +16,7 @@ from pydantic import (

import reflector.auth as auth
from reflector.db import get_database
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms_controller
from reflector.db.search import (
@@ -39,6 +40,7 @@ from reflector.db.transcripts import (
    transcripts_controller,
)
from reflector.db.users import user_controller
from reflector.email import is_email_configured, send_transcript_email
from reflector.processors.types import Transcript as ProcessorTranscript
from reflector.processors.types import Word
from reflector.schemas.transcript_formats import TranscriptFormat, TranscriptSegment
@@ -112,6 +114,8 @@ class GetTranscriptMinimal(BaseModel):
    room_name: str | None = None
    audio_deleted: bool | None = None
    change_seq: int | None = None
    has_cloud_video: bool = False
    cloud_video_duration: int | None = None


class TranscriptParticipantWithEmail(TranscriptParticipant):
@@ -501,6 +505,14 @@ async def transcript_get(
            )
        )

    has_cloud_video = False
    cloud_video_duration = None
    if transcript.meeting_id:
        meeting = await meetings_controller.get_by_id(transcript.meeting_id)
        if meeting and meeting.daily_composed_video_s3_key:
            has_cloud_video = True
            cloud_video_duration = meeting.daily_composed_video_duration

    base_data = {
        "id": transcript.id,
        "user_id": transcript.user_id,
@@ -524,6 +536,8 @@ async def transcript_get(
        "audio_deleted": transcript.audio_deleted,
        "change_seq": transcript.change_seq,
        "participants": participants,
        "has_cloud_video": has_cloud_video,
        "cloud_video_duration": cloud_video_duration,
    }

    if transcript_format == "text":
@@ -705,3 +719,31 @@ async def transcript_post_to_zulip(
    await transcripts_controller.update(
        transcript, {"zulip_message_id": response["id"]}
    )


class SendEmailRequest(BaseModel):
    email: str


class SendEmailResponse(BaseModel):
    sent: int


@router.post("/transcripts/{transcript_id}/email", response_model=SendEmailResponse)
async def transcript_send_email(
    transcript_id: str,
    request: SendEmailRequest,
    user: Annotated[auth.UserInfo, Depends(auth.current_user)],
):
    if not is_email_configured():
        raise HTTPException(status_code=400, detail="Email not configured")
    user_id = user["sub"]
    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
    if not transcript:
        raise HTTPException(status_code=404, detail="Transcript not found")
    if not transcripts_controller.user_can_mutate(transcript, user_id):
        raise HTTPException(status_code=403, detail="Not authorized")
    sent = await send_transcript_email([request.email], transcript)
    return SendEmailResponse(sent=sent)
@@ -53,9 +53,22 @@ async def transcript_get_audio_mp3(
    else:
        user_id = token_user["sub"]

    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
    if not user_id and not token:
        # No authentication provided at all. Only anonymous transcripts
        # (user_id=None) are accessible without auth, to preserve
        # pipeline access via _generate_local_audio_link().
        transcript = await transcripts_controller.get_by_id(transcript_id)
        if not transcript or transcript.deleted_at is not None:
            raise HTTPException(status_code=404, detail="Transcript not found")
        if transcript.user_id is not None:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Authentication required",
            )
    else:
        transcript = await transcripts_controller.get_by_id_for_http(
            transcript_id, user_id=user_id
        )

    if transcript.audio_location == "storage":
        # proxy S3 file, to prevent issue with CORS
@@ -94,16 +107,16 @@ async def transcript_get_audio_mp3(
        request,
        transcript.audio_mp3_filename,
        content_type="audio/mpeg",
        content_disposition=f"attachment; filename=(unknown)",
        content_disposition=f"inline; filename=(unknown)",
    )


@router.get("/transcripts/{transcript_id}/audio/waveform")
async def transcript_get_audio_waveform(
    transcript_id: str,
    user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
    user: Annotated[auth.UserInfo, Depends(auth.current_user)],
) -> AudioWaveform:
    user_id = user["sub"] if user else None
    user_id = user["sub"]
    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
169
server/reflector/views/transcripts_download.py
Normal file
@@ -0,0 +1,169 @@
"""
Transcript download endpoint — generates a zip archive with all transcript files.
"""

import json
import os
import tempfile
import zipfile
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException
from fastapi.responses import StreamingResponse

import reflector.auth as auth
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.transcripts import transcripts_controller
from reflector.logger import logger
from reflector.storage import get_source_storage, get_transcripts_storage

router = APIRouter()


@router.get(
    "/transcripts/{transcript_id}/download/zip",
    operation_id="transcript_download_zip",
)
async def transcript_download_zip(
    transcript_id: str,
    user: Annotated[auth.UserInfo, Depends(auth.current_user)],
):
    user_id = user["sub"]
    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
    if not transcripts_controller.user_can_mutate(transcript, user_id):
        raise HTTPException(status_code=403, detail="Not authorized")

    recording = None
    if transcript.recording_id:
        recording = await recordings_controller.get_by_id(transcript.recording_id)

    meeting = None
    if transcript.meeting_id:
        meeting = await meetings_controller.get_by_id(transcript.meeting_id)

    truncated_id = str(transcript.id).split("-")[0]

    with tempfile.TemporaryDirectory() as tmpdir:
        zip_path = os.path.join(tmpdir, f"transcript_{truncated_id}.zip")

        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            # Transcript audio
            if transcript.audio_location == "storage" and not transcript.audio_deleted:
                try:
                    storage = get_transcripts_storage()
                    data = await storage.get_file(transcript.storage_audio_path)
                    audio_path = os.path.join(tmpdir, "audio.mp3")
                    with open(audio_path, "wb") as f:
                        f.write(data)
                    zf.write(audio_path, "audio.mp3")
                except Exception as e:
                    logger.warning(
                        "Failed to download transcript audio for zip",
                        exc_info=e,
                        transcript_id=transcript.id,
                    )
            elif (
                not transcript.audio_deleted
                and hasattr(transcript, "audio_mp3_filename")
                and transcript.audio_mp3_filename
                and transcript.audio_mp3_filename.exists()
            ):
                zf.write(str(transcript.audio_mp3_filename), "audio.mp3")

            # Recording tracks (multitrack)
            if recording and recording.track_keys:
                try:
                    source_storage = get_source_storage(
                        "daily" if recording.track_keys else None
                    )
                except Exception:
                    source_storage = get_transcripts_storage()

                for i, key in enumerate(recording.track_keys):
                    try:
                        data = await source_storage.get_file(
                            key, bucket=recording.bucket_name
                        )
                        filename = os.path.basename(key) or f"track_{i}"
                        track_path = os.path.join(tmpdir, f"track_{i}")
                        with open(track_path, "wb") as f:
                            f.write(data)
                        zf.write(track_path, f"tracks/(unknown)")
                    except Exception as e:
                        logger.warning(
                            "Failed to download track for zip",
                            exc_info=e,
                            track_key=key,
                        )

            # Cloud video
            if meeting and meeting.daily_composed_video_s3_key:
                try:
                    source_storage = get_source_storage("daily")
                    data = await source_storage.get_file(
                        meeting.daily_composed_video_s3_key
                    )
                    video_path = os.path.join(tmpdir, "cloud_video.mp4")
                    with open(video_path, "wb") as f:
                        f.write(data)
                    zf.write(video_path, "cloud_video.mp4")
                except Exception as e:
                    logger.warning(
                        "Failed to download cloud video for zip",
                        exc_info=e,
                        s3_key=meeting.daily_composed_video_s3_key,
                    )

            # Metadata JSON
            metadata = {
                "id": transcript.id,
                "title": transcript.title,
                "created_at": (
                    transcript.created_at.isoformat() if transcript.created_at else None
                ),
                "duration": transcript.duration,
                "source_language": transcript.source_language,
                "target_language": transcript.target_language,
                "short_summary": transcript.short_summary,
                "long_summary": transcript.long_summary,
                "topics": (
                    [t.model_dump() for t in transcript.topics]
                    if transcript.topics
                    else []
                ),
                "participants": (
                    [p.model_dump() for p in transcript.participants]
                    if transcript.participants
                    else []
                ),
                "action_items": transcript.action_items,
                "webvtt": transcript.webvtt,
                "recording_id": transcript.recording_id,
                "meeting_id": transcript.meeting_id,
            }
            meta_path = os.path.join(tmpdir, "metadata.json")
            with open(meta_path, "w") as f:
                json.dump(metadata, f, indent=2, default=str)
            zf.write(meta_path, "metadata.json")

        # Read zip into memory before tmpdir is cleaned up
        with open(zip_path, "rb") as f:
            zip_bytes = f.read()

    def iter_zip():
        offset = 0
        chunk_size = 64 * 1024
        while offset < len(zip_bytes):
            yield zip_bytes[offset : offset + chunk_size]
            offset += chunk_size

    return StreamingResponse(
        iter_zip(),
        media_type="application/zip",
        headers={
            "Content-Disposition": f"attachment; filename=transcript_{truncated_id}.zip"
        },
    )
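The download endpoint above buffers the finished zip in memory and streams it in 64 KiB slices. The chunking generator is just a sliced walk over a `bytes` object; a standalone sketch of the same pattern, independent of FastAPI, showing that the chunks reassemble into a valid archive:

```python
import io
import zipfile


def iter_chunks(data: bytes, chunk_size: int = 64 * 1024):
    # Same pattern as iter_zip above: yield successive slices of the buffer
    offset = 0
    while offset < len(data):
        yield data[offset : offset + chunk_size]
        offset += chunk_size


# Build a small zip fully in memory (the endpoint uses a temp directory instead)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("metadata.json", '{"id": "demo"}')
zip_bytes = buf.getvalue()

reassembled = b"".join(iter_chunks(zip_bytes, chunk_size=16))
with zipfile.ZipFile(io.BytesIO(reassembled)) as zf:
    print(zf.namelist())  # ['metadata.json']
```

Since the whole archive is already in memory, the generator only bounds the size of each response write, not peak memory; very large archives would need a spooled or streaming zip writer instead.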
60
server/reflector/views/transcripts_video.py
Normal file
@@ -0,0 +1,60 @@
"""
Transcript cloud video endpoint — returns a presigned URL for streaming playback.
"""

from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

import reflector.auth as auth
from reflector.db.meetings import meetings_controller
from reflector.db.transcripts import transcripts_controller
from reflector.storage import get_source_storage

router = APIRouter()


class VideoUrlResponse(BaseModel):
    url: str
    duration: int | None = None
    content_type: str = "video/mp4"


@router.get(
    "/transcripts/{transcript_id}/video/url",
    operation_id="transcript_get_video_url",
    response_model=VideoUrlResponse,
)
async def transcript_get_video_url(
    transcript_id: str,
    user: Annotated[auth.UserInfo, Depends(auth.current_user)],
):
    user_id = user["sub"]

    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )

    if not transcript.meeting_id:
        raise HTTPException(status_code=404, detail="No video available")

    meeting = await meetings_controller.get_by_id(transcript.meeting_id)
    if not meeting or not meeting.daily_composed_video_s3_key:
        raise HTTPException(status_code=404, detail="No video available")

    source_storage = get_source_storage("daily")
    url = await source_storage.get_file_url(
        meeting.daily_composed_video_s3_key,
        operation="get_object",
        expires_in=900,
        extra_params={
            "ResponseContentDisposition": "inline",
            "ResponseContentType": "video/mp4",
        },
    )

    return VideoUrlResponse(
        url=url,
        duration=meeting.daily_composed_video_duration,
    )
@@ -90,7 +90,9 @@ async def cleanup_old_transcripts(
):
    """Delete old anonymous transcripts and their associated recordings/meetings."""
    query = transcripts.select().where(
        (transcripts.c.created_at < cutoff_date) & (transcripts.c.user_id.is_(None))
        (transcripts.c.created_at < cutoff_date)
        & (transcripts.c.user_id.is_(None))
        & (transcripts.c.deleted_at.is_(None))
    )
    old_transcripts = await db.fetch_all(query)
@@ -104,6 +104,12 @@ async def process_recording(bucket_name: str, object_key: str):
    room = await rooms_controller.get_by_id(meeting.room_id)

    recording = await recordings_controller.get_by_object_key(bucket_name, object_key)
    if recording and recording.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted recording",
            recording_id=recording.id,
        )
        return
    if not recording:
        recording = await recordings_controller.create(
            Recording(
@@ -115,6 +121,13 @@ async def process_recording(bucket_name: str, object_key: str):
    )

    transcript = await transcripts_controller.get_by_recording_id(recording.id)
    if transcript and transcript.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted transcript for recording",
            recording_id=recording.id,
            transcript_id=transcript.id,
        )
        return
    if transcript:
        await transcripts_controller.update(
            transcript,
@@ -262,6 +275,13 @@ async def _process_multitrack_recording_inner(
    # Check if recording already exists (reprocessing path)
    recording = await recordings_controller.get_by_id(recording_id)

    if recording and recording.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted recording",
            recording_id=recording_id,
        )
        return

    if recording and recording.meeting_id:
        # Reprocessing: recording exists with meeting already linked
        meeting = await meetings_controller.get_by_id(recording.meeting_id)
@@ -341,6 +361,13 @@ async def _process_multitrack_recording_inner(
    )

    transcript = await transcripts_controller.get_by_recording_id(recording.id)
    if transcript and transcript.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted transcript for recording",
            recording_id=recording.id,
            transcript_id=transcript.id,
        )
        return
    if not transcript:
        transcript = await transcripts_controller.add(
            "",
@@ -40,6 +40,11 @@ x-backend-env: &backend-env
  # Garage S3 credentials — hardcoded test keys, containers are ephemeral
  TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: GK0123456789abcdef01234567 # gitleaks:allow
  TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" # gitleaks:allow
  # Email / SMTP — Mailpit captures emails without sending
  SMTP_HOST: mailpit
  SMTP_PORT: "1025"
  SMTP_FROM_EMAIL: test@reflector.local
  SMTP_USE_TLS: "false"
  # NOTE: DAILYCO_STORAGE_AWS_* intentionally NOT set — forces fallback to
  # get_transcripts_storage() which has ENDPOINT_URL pointing at Garage.
  # Setting them would bypass the endpoint and generate presigned URLs for AWS.
@@ -101,6 +106,14 @@ services:
      retries: 10
      start_period: 5s

  mailpit:
    image: axllent/mailpit:latest
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8025/api/v1/messages"]
      interval: 5s
      timeout: 3s
      retries: 5

  mock-daily:
    build:
      context: .
@@ -131,6 +144,8 @@ services:
        condition: service_healthy
      mock-daily:
        condition: service_healthy
      mailpit:
        condition: service_healthy
    volumes:
      - server_data:/app/data

@@ -194,6 +209,7 @@ services:
      DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
      SERVER_URL: http://server:1250
      GARAGE_ENDPOINT: http://garage:3900
      MAILPIT_URL: http://mailpit:8025
    depends_on:
      server:
        condition: service_started
@@ -17,6 +17,7 @@ from sqlalchemy.ext.asyncio import create_async_engine

SERVER_URL = os.environ.get("SERVER_URL", "http://server:1250")
GARAGE_ENDPOINT = os.environ.get("GARAGE_ENDPOINT", "http://garage:3900")
MAILPIT_URL = os.environ.get("MAILPIT_URL", "http://mailpit:8025")
DATABASE_URL = os.environ.get(
    "DATABASE_URL_ASYNC",
    os.environ.get(
@@ -114,3 +115,44 @@ async def _poll_transcript_status(
def poll_transcript_status():
    """Returns the poll_transcript_status async helper function."""
    return _poll_transcript_status


@pytest_asyncio.fixture
async def mailpit_client():
    """HTTP client for Mailpit API — query captured emails."""
    async with httpx.AsyncClient(
        base_url=MAILPIT_URL,
        timeout=httpx.Timeout(10.0),
    ) as client:
        # Clear inbox before each test
        await client.delete("/api/v1/messages")
        yield client


async def _poll_mailpit_messages(
    mailpit: httpx.AsyncClient,
    to_email: str,
    max_wait: int = 30,
    interval: int = 2,
) -> list[dict]:
    """
    Poll Mailpit API until at least one message is delivered to the given address.
    Returns the list of matching messages.
    """
    elapsed = 0
    while elapsed < max_wait:
        resp = await mailpit.get("/api/v1/messages", params={"query": f"to:{to_email}"})
        resp.raise_for_status()
        data = resp.json()
        messages = data.get("messages", [])
        if messages:
            return messages
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"No email delivered to {to_email} within {max_wait}s")


@pytest_asyncio.fixture
def poll_mailpit_messages():
    """Returns the poll_mailpit_messages async helper function."""
    return _poll_mailpit_messages
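`_poll_mailpit_messages` above is a plain poll-until-timeout loop. The same pattern in a minimal synchronous form, with a hypothetical `fetch` callable standing in for the Mailpit API call:

```python
import time


def poll_until(fetch, max_wait: float = 30.0, interval: float = 2.0):
    # Call fetch() until it returns a truthy result, like _poll_mailpit_messages
    elapsed = 0.0
    while elapsed < max_wait:
        result = fetch()
        if result:
            return result
        time.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"no result within {max_wait}s")


# Simulated source that becomes ready on the third call
calls = {"n": 0}


def fetch():
    calls["n"] += 1
    return ["message"] if calls["n"] >= 3 else []


print(poll_until(fetch, max_wait=5.0, interval=0.01))  # ['message']
```

Counting sleeps rather than wall-clock time (as the fixture does) slightly overstates the budget when `fetch` itself is slow; using a `time.monotonic()` deadline would make the timeout exact.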
@@ -4,10 +4,12 @@ Integration test: Multitrack → DailyMultitrackPipeline → full processing.
Exercises: S3 upload → DB recording setup → process endpoint →
Hatchet DiarizationPipeline → mock Daily API → whisper per-track transcription →
diarization → mixdown → LLM summarization/topics → status "ended".
Also tests email transcript notification via Mailpit SMTP sink.
"""

import json
from datetime import datetime, timezone
import uuid
from datetime import datetime, timedelta, timezone

import pytest
from sqlalchemy import text
@@ -22,6 +24,9 @@ TRACK_KEYS = [
]


TEST_EMAIL = "integration-test@reflector.local"


@pytest.mark.asyncio
async def test_multitrack_pipeline_end_to_end(
    api_client,
@@ -30,6 +35,8 @@ async def test_multitrack_pipeline_end_to_end(
    test_records_dir,
    bucket_name,
    poll_transcript_status,
    mailpit_client,
    poll_mailpit_messages,
):
    """Set up multitrack recording in S3/DB and verify the full pipeline completes."""
    # 1. Upload test audio as two separate tracks to Garage S3
@@ -52,16 +59,41 @@ async def test_multitrack_pipeline_end_to_end(
    transcript = resp.json()
    transcript_id = transcript["id"]

    # 3. Insert Recording row and link to transcript via direct DB access
    # 3. Insert Meeting, Recording, and link to transcript via direct DB access
    recording_id = f"rec-integration-{transcript_id[:8]}"
    meeting_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc)

    async with db_engine.begin() as conn:
        # Insert recording with track_keys
        # Insert meeting with email_recipients for email notification test
        await conn.execute(
            text("""
                INSERT INTO recording (id, bucket_name, object_key, recorded_at, status, track_keys)
                VALUES (:id, :bucket_name, :object_key, :recorded_at, :status, CAST(:track_keys AS json))
                INSERT INTO meeting (
                    id, room_name, room_url, host_room_url,
                    start_date, end_date, platform, email_recipients
                )
                VALUES (
                    :id, :room_name, :room_url, :host_room_url,
                    :start_date, :end_date, :platform, CAST(:email_recipients AS json)
                )
            """),
            {
                "id": meeting_id,
                "room_name": "integration-test-room",
                "room_url": "https://test.daily.co/integration-test-room",
                "host_room_url": "https://test.daily.co/integration-test-room",
                "start_date": now,
                "end_date": now + timedelta(hours=1),
                "platform": "daily",
                "email_recipients": json.dumps([TEST_EMAIL]),
            },
        )

        # Insert recording with track_keys, linked to meeting
        await conn.execute(
            text("""
                INSERT INTO recording (id, bucket_name, object_key, recorded_at, status, track_keys, meeting_id)
                VALUES (:id, :bucket_name, :object_key, :recorded_at, :status, CAST(:track_keys AS json), :meeting_id)
            """),
            {
                "id": recording_id,
@@ -70,6 +102,7 @@ async def test_multitrack_pipeline_end_to_end(
                "recorded_at": now,
                "status": "completed",
                "track_keys": json.dumps(TRACK_KEYS),
                "meeting_id": meeting_id,
            },
        )

@@ -127,3 +160,22 @@ async def test_multitrack_pipeline_end_to_end(
    assert (
        len(participants) >= 2
    ), f"Expected at least 2 speakers for multitrack, got {len(participants)}"

    # 7. Verify email transcript notification
    # The send_email pipeline task should have:
    #   a) Set the transcript to public share_mode
    #   b) Sent an email to TEST_EMAIL via Mailpit
    transcript_resp = await api_client.get(f"/transcripts/{transcript_id}")
    transcript_resp.raise_for_status()
    transcript_data = transcript_resp.json()
    assert (
        transcript_data.get("share_mode") == "public"
    ), "Transcript should be set to public when email recipients exist"

    # Poll Mailpit for the delivered email (send_email task runs async after finalize)
    messages = await poll_mailpit_messages(mailpit_client, TEST_EMAIL, max_wait=30)
|
||||
assert len(messages) >= 1, "Should have received at least 1 email"
|
||||
email_msg = messages[0]
|
||||
assert (
|
||||
"Transcript Ready" in email_msg.get("Subject", "")
|
||||
), f"Email subject should contain 'Transcript Ready', got: {email_msg.get('Subject')}"
|
||||
|
||||
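The Mailpit check above has to poll, since the `send_email` task runs asynchronously after finalize. A minimal sketch of the polling pattern behind a fixture like `poll_mailpit_messages`, written generically with an injected fetch callable (the real fixture presumably queries Mailpit's HTTP API instead of the fake inbox used here):

```python
import asyncio
import time


async def poll_until(fetch, predicate, max_wait=30.0, interval=1.0):
    """Call `fetch` repeatedly until `predicate(result)` holds or `max_wait` elapses."""
    deadline = time.monotonic() + max_wait
    while True:
        result = await fetch()
        if predicate(result):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {max_wait}s")
        await asyncio.sleep(interval)


class FakeInbox:
    """Stands in for the mail API: empty until the third poll."""

    def __init__(self):
        self.calls = 0

    async def fetch(self):
        self.calls += 1
        return [{"Subject": "Transcript Ready"}] if self.calls >= 3 else []


inbox = FakeInbox()
messages = asyncio.run(
    poll_until(inbox.fetch, lambda msgs: len(msgs) >= 1, max_wait=5, interval=0.01)
)
```

The same helper shape works for `poll_transcript_status` style fixtures as well: only the fetch callable and predicate change.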
@@ -76,8 +76,10 @@ async def test_cleanup_old_public_data_deletes_old_anonymous_transcripts():
     assert result["transcripts_deleted"] == 1
     assert result["errors"] == []

-    # Verify old anonymous transcript was deleted
-    assert await transcripts_controller.get_by_id(old_transcript.id) is None
+    # Verify old anonymous transcript was soft-deleted
+    old = await transcripts_controller.get_by_id(old_transcript.id)
+    assert old is not None
+    assert old.deleted_at is not None

     # Verify new anonymous transcript still exists
     assert await transcripts_controller.get_by_id(new_transcript.id) is not None
@@ -150,15 +152,17 @@ async def test_cleanup_deletes_associated_meeting_and_recording():
     assert result["recordings_deleted"] == 1
     assert result["errors"] == []

-    # Verify transcript was deleted
-    assert await transcripts_controller.get_by_id(old_transcript.id) is None
+    # Verify transcript was soft-deleted
+    old = await transcripts_controller.get_by_id(old_transcript.id)
+    assert old is not None
+    assert old.deleted_at is not None

-    # Verify meeting was deleted
+    # Verify meeting was hard-deleted (cleanup deletes meetings directly)
     query = meetings.select().where(meetings.c.id == meeting_id)
     meeting_result = await get_database().fetch_one(query)
     assert meeting_result is None

-    # Verify recording was deleted
+    # Verify recording was hard-deleted (cleanup deletes recordings directly)
     assert await recordings_controller.get_by_id(recording.id) is None
290 server/tests/test_failed_runs_monitor.py Normal file
@@ -0,0 +1,290 @@
"""
Tests for FailedRunsMonitor Hatchet cron workflow.

Tests cover:
- No Zulip message sent when no failures found
- Messages sent for failed main pipeline runs
- Child workflow failures filtered out
- Errors in the monitor itself are caught and logged
"""

from datetime import timezone
from unittest.mock import AsyncMock, MagicMock, patch

import pytest
from hatchet_sdk.clients.rest.models import V1TaskStatus


def _make_task_summary(
    workflow_name: str,
    workflow_run_external_id: str = "run-123",
    status: V1TaskStatus = V1TaskStatus.FAILED,
):
    """Create a mock V1TaskSummary."""
    mock = MagicMock()
    mock.workflow_name = workflow_name
    mock.workflow_run_external_id = workflow_run_external_id
    mock.status = status
    return mock


@pytest.mark.asyncio
class TestCheckFailedRuns:
    async def test_no_failures_sends_no_message(self):
        mock_result = MagicMock()
        mock_result.rows = []

        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)

        with (
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
                return_value=mock_client,
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.send_message_to_zulip",
                new_callable=AsyncMock,
            ) as mock_send,
        ):
            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            assert result["checked"] == 0
            assert result["reported"] == 0
            mock_send.assert_not_called()

    async def test_reports_failed_main_pipeline_runs(self):
        failed_runs = [
            _make_task_summary("DiarizationPipeline", "run-1"),
            _make_task_summary("FilePipeline", "run-2"),
        ]
        mock_result = MagicMock()
        mock_result.rows = failed_runs

        mock_details = MagicMock()
        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)
        mock_client.runs.aio_get = AsyncMock(return_value=mock_details)

        with (
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
                return_value=mock_client,
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.render_run_detail",
                return_value="**rendered DAG**",
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.send_message_to_zulip",
                new_callable=AsyncMock,
                return_value={"id": 1},
            ) as mock_send,
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.settings"
            ) as mock_settings,
        ):
            mock_settings.ZULIP_DAG_STREAM = "dag-stream"
            mock_settings.ZULIP_DAG_TOPIC = "dag-topic"

            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            assert result["checked"] == 2
            assert result["reported"] == 2
            assert mock_send.call_count == 2
            mock_send.assert_any_call("dag-stream", "dag-topic", "**rendered DAG**")

    async def test_filters_out_child_workflows(self):
        runs = [
            _make_task_summary("DiarizationPipeline", "run-1"),
            _make_task_summary("TrackProcessing", "run-2"),
            _make_task_summary("TopicChunkProcessing", "run-3"),
            _make_task_summary("SubjectProcessing", "run-4"),
        ]
        mock_result = MagicMock()
        mock_result.rows = runs

        mock_details = MagicMock()
        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)
        mock_client.runs.aio_get = AsyncMock(return_value=mock_details)

        with (
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
                return_value=mock_client,
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.render_run_detail",
                return_value="**rendered**",
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.send_message_to_zulip",
                new_callable=AsyncMock,
                return_value={"id": 1},
            ) as mock_send,
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.settings"
            ) as mock_settings,
        ):
            mock_settings.ZULIP_DAG_STREAM = "dag-stream"
            mock_settings.ZULIP_DAG_TOPIC = "dag-topic"

            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            # Only DiarizationPipeline should be reported
            assert result["checked"] == 4
            assert result["reported"] == 1
            assert mock_send.call_count == 1

    async def test_all_three_pipelines_reported(self):
        runs = [
            _make_task_summary("DiarizationPipeline", "run-1"),
            _make_task_summary("FilePipeline", "run-2"),
            _make_task_summary("LivePostProcessingPipeline", "run-3"),
        ]
        mock_result = MagicMock()
        mock_result.rows = runs

        mock_details = MagicMock()
        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)
        mock_client.runs.aio_get = AsyncMock(return_value=mock_details)

        with (
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
                return_value=mock_client,
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.render_run_detail",
                return_value="**rendered**",
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.send_message_to_zulip",
                new_callable=AsyncMock,
                return_value={"id": 1},
            ) as mock_send,
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.settings"
            ) as mock_settings,
        ):
            mock_settings.ZULIP_DAG_STREAM = "dag-stream"
            mock_settings.ZULIP_DAG_TOPIC = "dag-topic"

            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            assert result["reported"] == 3
            assert mock_send.call_count == 3

    async def test_continues_on_individual_run_failure(self):
        """If one run fails to report, the others should still be reported."""
        runs = [
            _make_task_summary("DiarizationPipeline", "run-1"),
            _make_task_summary("FilePipeline", "run-2"),
        ]
        mock_result = MagicMock()
        mock_result.rows = runs

        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)
        # First call raises, second succeeds
        mock_client.runs.aio_get = AsyncMock(
            side_effect=[Exception("Hatchet API error"), MagicMock()]
        )

        with (
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
                return_value=mock_client,
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.render_run_detail",
                return_value="**rendered**",
            ),
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.send_message_to_zulip",
                new_callable=AsyncMock,
                return_value={"id": 1},
            ) as mock_send,
            patch(
                "reflector.hatchet.workflows.failed_runs_monitor.settings"
            ) as mock_settings,
        ):
            mock_settings.ZULIP_DAG_STREAM = "dag-stream"
            mock_settings.ZULIP_DAG_TOPIC = "dag-topic"

            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            # First run failed to report, second succeeded
            assert result["reported"] == 1
            assert mock_send.call_count == 1

    async def test_handles_list_api_failure(self):
        """If aio_list fails, should return error and not crash."""
        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(
            side_effect=Exception("Connection refused")
        )

        with patch(
            "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
            return_value=mock_client,
        ):
            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            result = await _check_failed_runs()

            assert result["checked"] == 0
            assert result["reported"] == 0
            assert "error" in result

    async def test_uses_correct_time_window(self):
        """Verify the correct since/until parameters are passed to aio_list."""
        mock_result = MagicMock()
        mock_result.rows = []

        mock_client = MagicMock()
        mock_client.runs.aio_list = AsyncMock(return_value=mock_result)

        with patch(
            "reflector.hatchet.workflows.failed_runs_monitor.HatchetClientManager.get_client",
            return_value=mock_client,
        ):
            from reflector.hatchet.workflows.failed_runs_monitor import (
                _check_failed_runs,
            )

            await _check_failed_runs()

            call_kwargs = mock_client.runs.aio_list.call_args
            assert call_kwargs.kwargs["statuses"] == [V1TaskStatus.FAILED]
            since = call_kwargs.kwargs["since"]
            until = call_kwargs.kwargs["until"]
            assert since.tzinfo == timezone.utc
            assert until.tzinfo == timezone.utc
            # Window should be ~1 hour
            delta = until - since
            assert 3590 < delta.total_seconds() < 3610
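The child-workflow filtering that these tests pin down can be sketched as a pure function. The set of main pipelines is taken from the tests above; the function name is hypothetical, not the module's actual API:

```python
# Top-level pipelines worth alerting on; child workflows such as
# TrackProcessing, TopicChunkProcessing, and SubjectProcessing are skipped
# because their failure already surfaces through the parent run.
MAIN_PIPELINES = {"DiarizationPipeline", "FilePipeline", "LivePostProcessingPipeline"}


def select_reportable_failures(failed_runs):
    """Keep only failed runs belonging to a main pipeline workflow."""
    return [run for run in failed_runs if run["workflow_name"] in MAIN_PIPELINES]


runs = [
    {"workflow_name": "DiarizationPipeline", "id": "run-1"},
    {"workflow_name": "TrackProcessing", "id": "run-2"},
    {"workflow_name": "FilePipeline", "id": "run-3"},
]
reportable = select_reportable_failures(runs)
# reportable keeps run-1 and run-3; the child workflow run-2 is filtered out
```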
@@ -137,6 +137,7 @@ async def mock_storage():
         operation: str = "get_object",
         expires_in: int = 3600,
         bucket=None,
+        extra_params=None,
     ):
         return f"http://test-storage/{path}"

@@ -373,9 +373,9 @@ async def test_audio_mp3_requires_token_for_owned_transcript(
     tr.audio_mp3_filename.parent.mkdir(parents=True, exist_ok=True)
     shutil.copy(audio_path, tr.audio_mp3_filename)

-    # Anonymous GET without token should be 403 or 404 depending on access; we call mp3
+    # Anonymous GET without token should be 401 (auth required)
     resp = await client.get(f"/transcripts/{t.id}/audio/mp3")
-    assert resp.status_code == 403
+    assert resp.status_code == 401

     # With token should succeed
     token = create_access_token(
@@ -898,7 +898,7 @@ async def test_anonymous_transcript_in_list_when_public_mode(client, monkeypatch
 @pytest.mark.asyncio
 async def test_anonymous_transcript_audio_accessible(client, monkeypatch, tmpdir):
     """Anonymous transcript audio (mp3) is accessible without authentication
-    because user_id=None bypasses share_mode checks."""
+    because user_id=None bypasses the auth requirement (pipeline access)."""
     monkeypatch.setattr(settings, "PUBLIC_MODE", True)
     monkeypatch.setattr(settings, "DATA_DIR", Path(tmpdir).as_posix())

@@ -920,7 +920,7 @@ async def test_anonymous_transcript_audio_accessible(client, monkeypatch, tmpdir
     resp = await client.get(f"/transcripts/{t.id}/audio/mp3")
     assert (
         resp.status_code == 200
-    ), f"Anonymous transcript audio should be accessible: {resp.text}"
+    ), f"Anonymous transcript audio should be accessible for pipeline: {resp.text}"


 @pytest.mark.asyncio
@@ -1,7 +1,8 @@
 import pytest

 from reflector.db.recordings import Recording, recordings_controller
 from reflector.db.rooms import rooms_controller
-from reflector.db.transcripts import transcripts_controller
+from reflector.db.transcripts import SourceKind, transcripts_controller


 @pytest.mark.asyncio
@@ -192,9 +193,93 @@ async def test_transcript_delete(authenticated_client, client):
     assert response.status_code == 200
     assert response.json()["status"] == "ok"

+    # API returns 404 for soft-deleted transcripts
     response = await client.get(f"/transcripts/{tid}")
     assert response.status_code == 404
+
+    # But the transcript still exists in DB with deleted_at set
+    transcript = await transcripts_controller.get_by_id(tid)
+    assert transcript is not None
+    assert transcript.deleted_at is not None
+
+
+@pytest.mark.asyncio
+async def test_deleted_transcript_not_in_list(authenticated_client, client):
+    """Soft-deleted transcripts should not appear in the list endpoint."""
+    response = await client.post("/transcripts", json={"name": "testdel_list"})
+    assert response.status_code == 200
+    tid = response.json()["id"]
+
+    # Verify it appears in the list
+    response = await client.get("/transcripts")
+    assert response.status_code == 200
+    ids = [t["id"] for t in response.json()["items"]]
+    assert tid in ids
+
+    # Delete it
+    response = await client.delete(f"/transcripts/{tid}")
+    assert response.status_code == 200
+
+    # Verify it no longer appears in the list
+    response = await client.get("/transcripts")
+    assert response.status_code == 200
+    ids = [t["id"] for t in response.json()["items"]]
+    assert tid not in ids
+
+
+@pytest.mark.asyncio
+async def test_delete_already_deleted_is_idempotent(authenticated_client, client):
+    """Deleting an already-deleted transcript is idempotent (returns 200)."""
+    response = await client.post("/transcripts", json={"name": "testdel_idem"})
+    assert response.status_code == 200
+    tid = response.json()["id"]
+
+    # First delete
+    response = await client.delete(f"/transcripts/{tid}")
+    assert response.status_code == 200
+
+    # Second delete — idempotent, still returns ok
+    response = await client.delete(f"/transcripts/{tid}")
+    assert response.status_code == 200
+
+    # But deleted_at was only set once (not updated)
+    transcript = await transcripts_controller.get_by_id(tid)
+    assert transcript is not None
+    assert transcript.deleted_at is not None
+
+
+@pytest.mark.asyncio
+async def test_deleted_transcript_recording_soft_deleted(authenticated_client, client):
+    """Soft-deleting a transcript also soft-deletes its recording."""
+    from datetime import datetime, timezone
+
+    recording = await recordings_controller.create(
+        Recording(
+            bucket_name="test-bucket",
+            object_key="test.mp4",
+            recorded_at=datetime.now(timezone.utc),
+        )
+    )
+    transcript = await transcripts_controller.add(
+        name="with-recording",
+        source_kind=SourceKind.ROOM,
+        recording_id=recording.id,
+        user_id="randomuserid",
+    )
+
+    response = await client.delete(f"/transcripts/{transcript.id}")
+    assert response.status_code == 200
+
+    # Recording still in DB with deleted_at set
+    rec = await recordings_controller.get_by_id(recording.id)
+    assert rec is not None
+    assert rec.deleted_at is not None
+
+    # Transcript still in DB with deleted_at set
+    tr = await transcripts_controller.get_by_id(transcript.id)
+    assert tr is not None
+    assert tr.deleted_at is not None
+
+
 @pytest.mark.asyncio
 async def test_transcript_mark_reviewed(authenticated_client, client):
@@ -40,7 +40,7 @@ async def fake_transcript(tmpdir, client, monkeypatch):
     ],
 )
 async def test_transcript_audio_download(
-    fake_transcript, url_suffix, content_type, client
+    authenticated_client, fake_transcript, url_suffix, content_type, client
 ):
     response = await client.get(f"/transcripts/{fake_transcript.id}/audio{url_suffix}")
     assert response.status_code == 200
@@ -61,7 +61,7 @@ async def test_transcript_audio_download(
     ],
 )
 async def test_transcript_audio_download_head(
-    fake_transcript, url_suffix, content_type, client
+    authenticated_client, fake_transcript, url_suffix, content_type, client
 ):
     response = await client.head(f"/transcripts/{fake_transcript.id}/audio{url_suffix}")
     assert response.status_code == 200
@@ -82,7 +82,7 @@ async def test_transcript_audio_download_head(
     ],
 )
 async def test_transcript_audio_download_range(
-    fake_transcript, url_suffix, content_type, client
+    authenticated_client, fake_transcript, url_suffix, content_type, client
 ):
     response = await client.get(
         f"/transcripts/{fake_transcript.id}/audio{url_suffix}",
@@ -102,7 +102,7 @@ async def test_transcript_audio_download_range(
     ],
 )
 async def test_transcript_audio_download_range_with_seek(
-    fake_transcript, url_suffix, content_type, client
+    authenticated_client, fake_transcript, url_suffix, content_type, client
 ):
     response = await client.get(
         f"/transcripts/{fake_transcript.id}/audio{url_suffix}",
@@ -98,10 +98,10 @@ async def private_transcript(tmpdir):


 @pytest.mark.asyncio
-async def test_audio_mp3_private_no_auth_returns_403(private_transcript, client):
-    """Without auth, accessing a private transcript's audio returns 403."""
+async def test_audio_mp3_private_no_auth_returns_401(private_transcript, client):
+    """Without auth, accessing a private transcript's audio returns 401."""
     response = await client.get(f"/transcripts/{private_transcript.id}/audio/mp3")
-    assert response.status_code == 403
+    assert response.status_code == 401


 @pytest.mark.asyncio
@@ -125,8 +125,8 @@ async def test_audio_mp3_with_bearer_header(private_transcript, client):


 @pytest.mark.asyncio
-async def test_audio_mp3_public_transcript_no_auth_ok(tmpdir, client):
-    """Public transcripts are accessible without any auth."""
+async def test_audio_mp3_public_transcript_no_auth_returns_401(tmpdir, client):
+    """Public transcripts require authentication for audio access."""
     from reflector.db.transcripts import SourceKind, transcripts_controller
     from reflector.settings import settings

@@ -146,8 +146,7 @@ async def test_audio_mp3_public_transcript_no_auth_ok(tmpdir, client):
     shutil.copy(mp3_source, audio_filename)

     response = await client.get(f"/transcripts/{transcript.id}/audio/mp3")
-    assert response.status_code == 200
-    assert response.headers["content-type"] == "audio/mpeg"
+    assert response.status_code == 401


 # ---------------------------------------------------------------------------
@@ -299,11 +298,9 @@ async def test_local_audio_link_token_works_with_authentik_backend(
     """_generate_local_audio_link creates an HS256 token via create_access_token.

     When the Authentik (RS256) auth backend is active, verify_raw_token uses
-    JWTAuth which expects RS256 + public key. The HS256 token created by
-    _generate_local_audio_link will fail verification, returning 401.
-
-    This test documents the bug: the internal audio URL generated for the
-    diarization pipeline is unusable under the JWT auth backend.
+    JWTAuth which expects RS256 + public key. The HS256 token fails RS256
+    verification, but the audio endpoint's HS256 fallback (jwt.decode with
+    SECRET_KEY) correctly handles it, so the request succeeds with 200.
     """
     from urllib.parse import parse_qs, urlparse

@@ -322,6 +319,55 @@ async def test_local_audio_link_token_works_with_authentik_backend(
         f"/transcripts/{private_transcript.id}/audio/mp3?token={token}"
     )

-    # BUG: this should be 200 (the token was created by our own server),
-    # but the Authentik backend rejects it because it's HS256, not RS256.
+    # The HS256 fallback in the audio endpoint handles this correctly.
     assert response.status_code == 200
+
+
+# ---------------------------------------------------------------------------
+# Waveform endpoint auth tests
+# ---------------------------------------------------------------------------
+
+
+@pytest.mark.asyncio
+async def test_waveform_requires_authentication(client):
+    """Waveform endpoint returns 401 for unauthenticated requests."""
+    response = await client.get("/transcripts/any-id/audio/waveform")
+    assert response.status_code == 401
+
+
+@pytest.mark.asyncio
+async def test_audio_mp3_authenticated_user_accesses_anonymous_transcript(
+    tmpdir, client
+):
+    """Authenticated user can access audio for an anonymous (user_id=None) transcript."""
+    from reflector.app import app
+    from reflector.auth import current_user, current_user_optional
+    from reflector.db.transcripts import SourceKind, transcripts_controller
+    from reflector.settings import settings
+
+    settings.DATA_DIR = Path(tmpdir)
+
+    transcript = await transcripts_controller.add(
+        "Anonymous audio test",
+        source_kind=SourceKind.FILE,
+        user_id=None,
+        share_mode="private",
+    )
+    await transcripts_controller.update(transcript, {"status": "ended"})
+
+    audio_filename = transcript.audio_mp3_filename
+    mp3_source = Path(__file__).parent / "records" / "test_mathieu_hello.mp3"
+    audio_filename.parent.mkdir(parents=True, exist_ok=True)
+    shutil.copy(mp3_source, audio_filename)
+
+    _user = lambda: {"sub": "some-authenticated-user", "email": "user@example.com"}
+    app.dependency_overrides[current_user] = _user
+    app.dependency_overrides[current_user_optional] = _user
+    try:
+        response = await client.get(f"/transcripts/{transcript.id}/audio/mp3")
+    finally:
+        del app.dependency_overrides[current_user]
+        del app.dependency_overrides[current_user_optional]
+
+    assert response.status_code == 200
+    assert response.headers["content-type"] == "audio/mpeg"
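The HS256 fallback behavior documented above can be illustrated with a dependency-free HS256 encode/verify round trip. This is a sketch of the JWT mechanism only; Reflector's actual helpers use `create_access_token` and a `jwt.decode` fallback with `SECRET_KEY`:

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def hs256_encode(payload: dict, secret: str) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"


def hs256_decode(token: str, secret: str) -> dict:
    """Verify the HMAC signature and return the claims; this is the kind of
    symmetric-key check an HS256 fallback performs when RS256 verification fails."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


token = hs256_encode({"sub": "transcript-audio"}, "server-secret")
claims = hs256_decode(token, "server-secret")
```

An RS256-only verifier rejects such a token because the algorithms differ; a server that also tries symmetric verification with its own secret, as the audio endpoint does, accepts it.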
36 server/tests/test_transcripts_download.py Normal file
@@ -0,0 +1,36 @@
import io
import zipfile

import pytest


@pytest.mark.asyncio
async def test_download_zip_returns_valid_zip(
    authenticated_client, client, fake_transcript_with_topics
):
    """Test that the zip download endpoint returns a valid zip file."""
    transcript = fake_transcript_with_topics
    response = await client.get(f"/transcripts/{transcript.id}/download/zip")
    assert response.status_code == 200
    assert response.headers["content-type"] == "application/zip"

    # Verify it's a valid zip
    zip_buffer = io.BytesIO(response.content)
    with zipfile.ZipFile(zip_buffer) as zf:
        names = zf.namelist()
        assert "metadata.json" in names
        assert "audio.mp3" in names


@pytest.mark.asyncio
async def test_download_zip_requires_auth(client):
    """Test that zip download requires authentication."""
    response = await client.get("/transcripts/nonexistent/download/zip")
    assert response.status_code in (401, 403, 422)


@pytest.mark.asyncio
async def test_download_zip_not_found(authenticated_client, client):
    """Test 404 for non-existent transcript."""
    response = await client.get("/transcripts/nonexistent-id/download/zip")
    assert response.status_code == 404
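On the server side, an archive like the one these tests unpack can be assembled in memory with the stdlib. This is a sketch under the member names the tests assert on; the real endpoint's archive may carry additional files:

```python
import io
import json
import zipfile


def build_transcript_zip(metadata: dict, audio_bytes: bytes) -> bytes:
    """Pack transcript metadata and audio into a zip held entirely in memory."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("metadata.json", json.dumps(metadata))
        zf.writestr("audio.mp3", audio_bytes)
    return buffer.getvalue()


# Round-trip: build the archive, then list its members like the test does
payload = build_transcript_zip({"name": "demo"}, b"\xff\xfb")  # placeholder mp3 bytes
with zipfile.ZipFile(io.BytesIO(payload)) as zf:
    names = zf.namelist()
```

Because the whole archive lives in a `BytesIO`, the endpoint can hand it to the response layer without touching disk.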
@@ -1,5 +1,4 @@
 from datetime import datetime, timezone
-from unittest.mock import AsyncMock, patch

 import pytest

@@ -9,6 +8,7 @@ from reflector.db.transcripts import SourceKind, transcripts_controller

 @pytest.mark.asyncio
 async def test_recording_deleted_with_transcript():
+    """Soft-delete: recording and transcript remain in DB with deleted_at set, no files deleted."""
     recording = await recordings_controller.create(
         Recording(
             bucket_name="test-bucket",
@@ -22,16 +22,13 @@ async def test_recording_deleted_with_transcript():
         recording_id=recording.id,
     )

-    with patch("reflector.db.transcripts.get_transcripts_storage") as mock_get_storage:
-        storage_instance = mock_get_storage.return_value
-        storage_instance.delete_file = AsyncMock()
-        await transcripts_controller.remove_by_id(transcript.id)
+    await transcripts_controller.remove_by_id(transcript.id)

-    # Should be called with bucket override
-    storage_instance.delete_file.assert_awaited_once_with(
-        recording.object_key, bucket=recording.bucket_name
-    )
-
-    assert await recordings_controller.get_by_id(recording.id) is None
-    assert await transcripts_controller.get_by_id(transcript.id) is None
+    # Both should still exist in DB but with deleted_at set
+    rec = await recordings_controller.get_by_id(recording.id)
+    assert rec is not None
+    assert rec.deleted_at is not None
+
+    tr = await transcripts_controller.get_by_id(transcript.id)
+    assert tr is not None
+    assert tr.deleted_at is not None
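The soft-delete semantics asserted throughout these diffs (rows kept, `deleted_at` stamped once, hidden from listings) can be sketched against a throwaway SQLite table. The schema here is illustrative, not Reflector's actual tables:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transcript (id TEXT PRIMARY KEY, name TEXT, deleted_at TEXT)")
conn.execute("INSERT INTO transcript (id, name) VALUES ('t1', 'demo')")


def soft_delete(conn, tid):
    # Idempotent: only stamp deleted_at when it is not already set
    conn.execute(
        "UPDATE transcript SET deleted_at = ? WHERE id = ? AND deleted_at IS NULL",
        (datetime.now(timezone.utc).isoformat(), tid),
    )


def list_transcripts(conn):
    # Listings exclude soft-deleted rows
    return [row[0] for row in conn.execute(
        "SELECT id FROM transcript WHERE deleted_at IS NULL"
    )]


soft_delete(conn, "t1")
first_stamp = conn.execute(
    "SELECT deleted_at FROM transcript WHERE id = 't1'"
).fetchone()[0]
soft_delete(conn, "t1")  # second delete is a no-op
second_stamp = conn.execute(
    "SELECT deleted_at FROM transcript WHERE id = 't1'"
).fetchone()[0]
visible = list_transcripts(conn)
```

The `AND deleted_at IS NULL` guard is what makes repeated deletes idempotent, matching the test that expects the timestamp to be set only once.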
264 server/tests/test_transcripts_video.py Normal file
@@ -0,0 +1,264 @@
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch

import pytest

from reflector.db.transcripts import SourceKind, transcripts_controller


@pytest.mark.asyncio
async def test_video_url_returns_404_when_no_meeting(authenticated_client, client):
    """Test that video URL returns 404 when transcript has no meeting."""
    response = await client.post("/transcripts", json={"name": "no-meeting"})
    assert response.status_code == 200
    tid = response.json()["id"]

    response = await client.get(f"/transcripts/{tid}/video/url")
    assert response.status_code == 404


@pytest.mark.asyncio
async def test_video_url_returns_404_when_no_cloud_video(authenticated_client, client):
    """Test that video URL returns 404 when meeting has no cloud video."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    meeting_id = "test-meeting-no-video"
    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="No Video Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
        )
    )

    transcript = await transcripts_controller.add(
        name="with-meeting",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="randomuserid",
    )

    response = await client.get(f"/transcripts/{transcript.id}/video/url")
    assert response.status_code == 404


@pytest.mark.asyncio
async def test_video_url_returns_presigned_url(authenticated_client, client):
    """Test that video URL returns a presigned URL when cloud video exists."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    meeting_id = "test-meeting-with-video"
    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="Video Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
            daily_composed_video_s3_key="recordings/video.mp4",
            daily_composed_video_duration=120,
        )
    )

    transcript = await transcripts_controller.add(
        name="with-video",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="randomuserid",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/presigned-url"
        )
        mock_storage.return_value = mock_instance

        response = await client.get(f"/transcripts/{transcript.id}/video/url")

    assert response.status_code == 200
    data = response.json()
    assert data["url"] == "https://s3.example.com/presigned-url"
    assert data["duration"] == 120
    assert data["content_type"] == "video/mp4"


@pytest.mark.asyncio
async def test_transcript_get_includes_video_fields(authenticated_client, client):
    """Test that transcript GET response includes has_cloud_video field."""
    response = await client.post("/transcripts", json={"name": "video-fields"})
    assert response.status_code == 200
    tid = response.json()["id"]

    response = await client.get(f"/transcripts/{tid}")
    assert response.status_code == 200
    data = response.json()
    assert data["has_cloud_video"] is False
    assert data["cloud_video_duration"] is None


@pytest.mark.asyncio
async def test_video_url_requires_authentication(client):
    """Test that video URL endpoint returns 401 for unauthenticated requests."""
    response = await client.get("/transcripts/any-id/video/url")
    assert response.status_code == 401


@pytest.mark.asyncio
async def test_video_url_presigned_params(authenticated_client, client):
    """Test that presigned URL is generated with short expiry and inline disposition."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    meeting_id = "test-meeting-presigned-params"
    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="Presigned Params Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
            daily_composed_video_s3_key="recordings/video.mp4",
            daily_composed_video_duration=60,
        )
    )

    transcript = await transcripts_controller.add(
        name="presigned-params",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="randomuserid",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/presigned-url"
        )
        mock_storage.return_value = mock_instance

        await client.get(f"/transcripts/{transcript.id}/video/url")

    mock_instance.get_file_url.assert_called_once_with(
        "recordings/video.mp4",
        operation="get_object",
        expires_in=900,
        extra_params={
            "ResponseContentDisposition": "inline",
            "ResponseContentType": "video/mp4",
        },
    )


async def _create_meeting_with_video(meeting_id):
    """Helper to create a meeting with cloud video."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="Video Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
            daily_composed_video_s3_key="recordings/video.mp4",
            daily_composed_video_duration=60,
        )
    )


@pytest.mark.asyncio
async def test_video_url_private_transcript_denies_non_owner(
    authenticated_client, client
):
    """Authenticated non-owner cannot access video for a private transcript."""
    meeting_id = "test-meeting-private-deny"
    await _create_meeting_with_video(meeting_id)

    transcript = await transcripts_controller.add(
        name="private-video",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="other-owner",
        share_mode="private",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/url"
        )
        mock_storage.return_value = mock_instance

        response = await client.get(f"/transcripts/{transcript.id}/video/url")

    assert response.status_code == 403


@pytest.mark.asyncio
async def test_video_url_public_transcript_allows_authenticated_non_owner(
    authenticated_client, client
):
    """Authenticated non-owner can access video for a public transcript."""
    meeting_id = "test-meeting-public-allow"
    await _create_meeting_with_video(meeting_id)

    transcript = await transcripts_controller.add(
        name="public-video",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="other-owner",
        share_mode="public",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/url"
        )
        mock_storage.return_value = mock_instance

        response = await client.get(f"/transcripts/{transcript.id}/video/url")

    assert response.status_code == 200


@pytest.mark.asyncio
async def test_video_url_semi_private_allows_authenticated_non_owner(
    authenticated_client, client
):
    """Authenticated non-owner can access video for a semi-private transcript."""
    meeting_id = "test-meeting-semi-private-allow"
    await _create_meeting_with_video(meeting_id)

    transcript = await transcripts_controller.add(
        name="semi-private-video",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="other-owner",
        share_mode="semi-private",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/url"
        )
        mock_storage.return_value = mock_instance

        response = await client.get(f"/transcripts/{transcript.id}/video/url")

    assert response.status_code == 200
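The presigned-URL contract pinned down by `test_video_url_presigned_params` (15-minute expiry, inline disposition, `video/mp4` content type) can be sketched as a small pure-Python helper. This is an illustrative sketch only: the constant and function names below are assumptions, not taken from the actual `reflector.views.transcripts_video` view code.

```python
# Hypothetical sketch of the presign parameters the tests assert on;
# names here are illustrative, not the real view implementation.
VIDEO_URL_EXPIRY_SECONDS = 900  # 15 minutes: short-lived, playback-only URLs


def video_presign_kwargs() -> dict:
    """Keyword arguments for storage.get_file_url() as the tests expect them.

    The S3 object key (e.g. "recordings/video.mp4") is passed positionally
    in the actual call, so it is not part of this dict.
    """
    return {
        "operation": "get_object",
        "expires_in": VIDEO_URL_EXPIRY_SECONDS,
        "extra_params": {
            # Play in the browser tab instead of forcing a download.
            "ResponseContentDisposition": "inline",
            "ResponseContentType": "video/mp4",
        },
    }
```

The short expiry limits how long a leaked URL stays useful, while the response-override params make S3 serve the object with headers suitable for an inline `<video>` player.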
25
server/uv.lock
generated
@@ -188,6 +188,15 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/fb/76/641ae371508676492379f16e2fa48f4e2c11741bd63c48be4b12a6b09cba/aiosignal-1.4.0-py3-none-any.whl", hash = "sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e", size = 7490, upload-time = "2025-07-03T22:54:42.156Z" },
]

[[package]]
name = "aiosmtplib"
version = "5.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e7/ad/240a7ce4e50713b111dff8b781a898d8d4770e5d6ad4899103f84c86005c/aiosmtplib-5.1.0.tar.gz", hash = "sha256:2504a23b2b63c9de6bc4ea719559a38996dba68f73f6af4eb97be20ee4c5e6c4", size = 66176, upload-time = "2026-01-25T01:51:11.408Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/37/82/70f2c452acd7ed18c558c8ace9a8cf4fdcc70eae9a41749b5bdc53eb6f45/aiosmtplib-5.1.0-py3-none-any.whl", hash = "sha256:368029440645b486b69db7029208a7a78c6691b90d24a5332ddba35d9109d55b", size = 27778, upload-time = "2026-01-25T01:51:10.026Z" },
]

[[package]]
name = "aiosqlite"
version = "0.21.0"
@@ -2255,7 +2264,7 @@ wheels = [

[[package]]
name = "nltk"
version = "3.9.3"
version = "3.9.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "click" },
@@ -2263,9 +2272,9 @@ dependencies = [
    { name = "regex" },
    { name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e1/8f/915e1c12df07c70ed779d18ab83d065718a926e70d3ea33eb0cd66ffb7c0/nltk-3.9.3.tar.gz", hash = "sha256:cb5945d6424a98d694c2b9a0264519fab4363711065a46aa0ae7a2195b92e71f", size = 2923673, upload-time = "2026-02-24T12:05:53.833Z" }
sdist = { url = "https://files.pythonhosted.org/packages/74/a1/b3b4adf15585a5bc4c357adde150c01ebeeb642173ded4d871e89468767c/nltk-3.9.4.tar.gz", hash = "sha256:ed03bc098a40481310320808b2db712d95d13ca65b27372f8a403949c8b523d0", size = 2946864, upload-time = "2026-03-24T06:13:40.641Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/c2/7e/9af5a710a1236e4772de8dfcc6af942a561327bb9f42b5b4a24d0cf100fd/nltk-3.9.3-py3-none-any.whl", hash = "sha256:60b3db6e9995b3dd976b1f0fa7dec22069b2677e759c28eb69b62ddd44870522", size = 1525385, upload-time = "2026-02-24T12:05:46.54Z" },
    { url = "https://files.pythonhosted.org/packages/9d/91/04e965f8e717ba0ab4bdca5c112deeab11c9e750d94c4d4602f050295d39/nltk-3.9.4-py3-none-any.whl", hash = "sha256:f2fa301c3a12718ce4a0e9305c5675299da5ad9e26068218b69d692fda84828f", size = 1552087, upload-time = "2026-03-24T06:13:38.47Z" },
]

[[package]]
@@ -2976,11 +2985,11 @@ wheels = [

[[package]]
name = "pypdf"
version = "6.8.0"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b4/a3/e705b0805212b663a4c27b861c8a603dba0f8b4bb281f96f8e746576a50d/pypdf-6.8.0.tar.gz", hash = "sha256:cb7eaeaa4133ce76f762184069a854e03f4d9a08568f0e0623f7ea810407833b", size = 5307831, upload-time = "2026-03-09T13:37:40.591Z" }
sdist = { url = "https://files.pythonhosted.org/packages/31/83/691bdb309306232362503083cb15777491045dd54f45393a317dc7d8082f/pypdf-6.9.2.tar.gz", hash = "sha256:7f850faf2b0d4ab936582c05da32c52214c2b089d61a316627b5bfb5b0dab46c", size = 5311837, upload-time = "2026-03-23T14:53:27.983Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/8c/ec/4ccf3bb86b1afe5d7176e1c8abcdbf22b53dd682ec2eda50e1caadcf6846/pypdf-6.8.0-py3-none-any.whl", hash = "sha256:2a025080a8dd73f48123c89c57174a5ff3806c71763ee4e49572dc90454943c7", size = 332177, upload-time = "2026-03-09T13:37:38.774Z" },
    { url = "https://files.pythonhosted.org/packages/a5/7e/c85f41243086a8fe5d1baeba527cb26a1918158a565932b41e0f7c0b32e9/pypdf-6.9.2-py3-none-any.whl", hash = "sha256:662cf29bcb419a36a1365232449624ab40b7c2d0cfc28e54f42eeecd1fd7e844", size = 333744, upload-time = "2026-03-23T14:53:26.573Z" },
]

[[package]]
@@ -3343,10 +3352,12 @@ dependencies = [
    { name = "aiohttp" },
    { name = "aiohttp-cors" },
    { name = "aiortc" },
    { name = "aiosmtplib" },
    { name = "alembic" },
    { name = "av" },
    { name = "celery" },
    { name = "databases", extra = ["aiosqlite", "asyncpg"] },
    { name = "email-validator" },
    { name = "fastapi", extra = ["standard"] },
    { name = "fastapi-pagination" },
    { name = "hatchet-sdk" },
@@ -3422,10 +3433,12 @@ requires-dist = [
    { name = "aiohttp", specifier = ">=3.9.0" },
    { name = "aiohttp-cors", specifier = ">=0.7.0" },
    { name = "aiortc", specifier = ">=1.5.0" },
    { name = "aiosmtplib", specifier = ">=3.0.0" },
    { name = "alembic", specifier = ">=1.11.3" },
    { name = "av", specifier = ">=15.0.0" },
    { name = "celery", specifier = ">=5.3.4" },
    { name = "databases", extras = ["aiosqlite", "asyncpg"], specifier = ">=0.7.0" },
    { name = "email-validator", specifier = ">=2.0.0" },
    { name = "fastapi", extras = ["standard"], specifier = ">=0.100.1" },
    { name = "fastapi-pagination", specifier = ">=0.14.2" },
    { name = "hatchet-sdk", specifier = "==1.22.16" },

@@ -31,6 +31,7 @@ import {
  useZulipTopics,
  useRoomGet,
  useRoomTestWebhook,
  useConfig,
} from "../../lib/apiHooks";
import { RoomList } from "./_components/RoomList";
import { PaginationPage } from "../browse/_components/Pagination";
@@ -92,6 +93,7 @@ const roomInitialState = {
  icsFetchInterval: 5,
  platform: "whereby",
  skipConsent: false,
  emailTranscriptTo: "",
};

export default function RoomsList() {
@@ -133,11 +135,15 @@ export default function RoomsList() {
    null,
  );
  const [showWebhookSecret, setShowWebhookSecret] = useState(false);
  const [emailTranscriptEnabled, setEmailTranscriptEnabled] = useState(false);

  const createRoomMutation = useRoomCreate();
  const updateRoomMutation = useRoomUpdate();
  const deleteRoomMutation = useRoomDelete();
  const { data: streams = [] } = useZulipStreams();
  const { data: config } = useConfig();
  const zulipEnabled = config?.zulip_enabled ?? false;
  const emailEnabled = config?.email_enabled ?? false;
  const { data: streams = [] } = useZulipStreams(zulipEnabled);
  const { data: topics = [] } = useZulipTopics(selectedStreamId);

  const {
@@ -177,6 +183,7 @@ export default function RoomsList() {
          icsFetchInterval: detailedEditedRoom.ics_fetch_interval || 5,
          platform: detailedEditedRoom.platform,
          skipConsent: detailedEditedRoom.skip_consent || false,
          emailTranscriptTo: detailedEditedRoom.email_transcript_to || "",
        }
      : null,
    [detailedEditedRoom],
@@ -329,6 +336,7 @@ export default function RoomsList() {
      ics_fetch_interval: room.icsFetchInterval,
      platform,
      skip_consent: room.skipConsent,
      email_transcript_to: room.emailTranscriptTo || null,
    };

    if (isEditing) {
@@ -369,6 +377,7 @@ export default function RoomsList() {
    // Reset states
    setShowWebhookSecret(false);
    setWebhookTestResult(null);
    setEmailTranscriptEnabled(!!roomData.email_transcript_to);

    setRoomInput({
      name: roomData.name,
@@ -392,6 +401,7 @@ export default function RoomsList() {
      icsFetchInterval: roomData.ics_fetch_interval || 5,
      platform: roomData.platform,
      skipConsent: roomData.skip_consent || false,
      emailTranscriptTo: roomData.email_transcript_to || "",
    });
    setEditRoomId(roomId);
    setIsEditing(true);
@@ -469,6 +479,7 @@ export default function RoomsList() {
            setNameError("");
            setShowWebhookSecret(false);
            setWebhookTestResult(null);
            setEmailTranscriptEnabled(false);
            onOpen();
          }}
        >
@@ -504,7 +515,9 @@ export default function RoomsList() {
              <Tabs.List>
                <Tabs.Trigger value="general">General</Tabs.Trigger>
                <Tabs.Trigger value="calendar">Calendar</Tabs.Trigger>
                <Tabs.Trigger value="share">Share</Tabs.Trigger>
                {(zulipEnabled || emailEnabled) && (
                  <Tabs.Trigger value="share">Share</Tabs.Trigger>
                )}
                <Tabs.Trigger value="webhook">WebHook</Tabs.Trigger>
              </Tabs.List>

@@ -831,96 +844,144 @@ export default function RoomsList() {
              </Tabs.Content>

              <Tabs.Content value="share" pt={6}>
                <Field.Root>
                  <Checkbox.Root
                    name="zulipAutoPost"
                    checked={room.zulipAutoPost}
                    onCheckedChange={(e) => {
                      const syntheticEvent = {
                        target: {
                          name: "zulipAutoPost",
                          type: "checkbox",
                          checked: e.checked,
                        },
                      };
                      handleRoomChange(syntheticEvent);
                    }}
                  >
                    <Checkbox.HiddenInput />
                    <Checkbox.Control>
                      <Checkbox.Indicator />
                    </Checkbox.Control>
                    <Checkbox.Label>
                      Automatically post transcription to Zulip
                    </Checkbox.Label>
                  </Checkbox.Root>
                </Field.Root>
                <Field.Root mt={4}>
                  <Field.Label>Zulip stream</Field.Label>
                  <Select.Root
                    value={room.zulipStream ? [room.zulipStream] : []}
                    onValueChange={(e) =>
                      setRoomInput({
                        ...room,
                        zulipStream: e.value[0],
                        zulipTopic: "",
                      })
                    }
                    collection={streamCollection}
                    disabled={!room.zulipAutoPost}
                  >
                    <Select.HiddenSelect />
                    <Select.Control>
                      <Select.Trigger>
                        <Select.ValueText placeholder="Select stream" />
                      </Select.Trigger>
                      <Select.IndicatorGroup>
                        <Select.Indicator />
                      </Select.IndicatorGroup>
                    </Select.Control>
                    <Select.Positioner>
                      <Select.Content>
                        {streamOptions.map((option) => (
                          <Select.Item key={option.value} item={option}>
                            {option.label}
                            <Select.ItemIndicator />
                          </Select.Item>
                        ))}
                      </Select.Content>
                    </Select.Positioner>
                  </Select.Root>
                </Field.Root>
                <Field.Root mt={4}>
                  <Field.Label>Zulip topic</Field.Label>
                  <Select.Root
                    value={room.zulipTopic ? [room.zulipTopic] : []}
                    onValueChange={(e) =>
                      setRoomInput({ ...room, zulipTopic: e.value[0] })
                    }
                    collection={topicCollection}
                    disabled={!room.zulipAutoPost}
                  >
                    <Select.HiddenSelect />
                    <Select.Control>
                      <Select.Trigger>
                        <Select.ValueText placeholder="Select topic" />
                      </Select.Trigger>
                      <Select.IndicatorGroup>
                        <Select.Indicator />
                      </Select.IndicatorGroup>
                    </Select.Control>
                    <Select.Positioner>
                      <Select.Content>
                        {topicOptions.map((option) => (
                          <Select.Item key={option.value} item={option}>
                            {option.label}
                            <Select.ItemIndicator />
                          </Select.Item>
                        ))}
                      </Select.Content>
                    </Select.Positioner>
                  </Select.Root>
                </Field.Root>
                {emailEnabled && (
                  <>
                    <Field.Root>
                      <Checkbox.Root
                        checked={emailTranscriptEnabled}
                        onCheckedChange={(e) => {
                          setEmailTranscriptEnabled(!!e.checked);
                          if (!e.checked) {
                            setRoomInput({
                              ...room,
                              emailTranscriptTo: "",
                            });
                          }
                        }}
                      >
                        <Checkbox.HiddenInput />
                        <Checkbox.Control>
                          <Checkbox.Indicator />
                        </Checkbox.Control>
                        <Checkbox.Label>
                          Email me transcript when processed
                        </Checkbox.Label>
                      </Checkbox.Root>
                    </Field.Root>
                    {emailTranscriptEnabled && (
                      <Field.Root mt={2}>
                        <Input
                          name="emailTranscriptTo"
                          type="email"
                          placeholder="your@email.com"
                          value={room.emailTranscriptTo}
                          onChange={handleRoomChange}
                        />
                        <Field.HelperText>
                          Transcript will be emailed to this address after
                          processing
                        </Field.HelperText>
                      </Field.Root>
                    )}
                  </>
                )}
                {zulipEnabled && (
                  <>
                    <Field.Root mt={emailEnabled ? 4 : 0}>
                      <Checkbox.Root
                        name="zulipAutoPost"
                        checked={room.zulipAutoPost}
                        onCheckedChange={(e) => {
                          const syntheticEvent = {
                            target: {
                              name: "zulipAutoPost",
                              type: "checkbox",
                              checked: e.checked,
                            },
                          };
                          handleRoomChange(syntheticEvent);
                        }}
                      >
                        <Checkbox.HiddenInput />
                        <Checkbox.Control>
                          <Checkbox.Indicator />
                        </Checkbox.Control>
                        <Checkbox.Label>
                          Automatically post transcription to Zulip
                        </Checkbox.Label>
                      </Checkbox.Root>
                    </Field.Root>
                    <Field.Root mt={4}>
                      <Field.Label>Zulip stream</Field.Label>
                      <Select.Root
                        value={room.zulipStream ? [room.zulipStream] : []}
                        onValueChange={(e) =>
                          setRoomInput({
                            ...room,
                            zulipStream: e.value[0],
                            zulipTopic: "",
                          })
                        }
                        collection={streamCollection}
                        disabled={!room.zulipAutoPost}
                      >
                        <Select.HiddenSelect />
                        <Select.Control>
                          <Select.Trigger>
                            <Select.ValueText placeholder="Select stream" />
                          </Select.Trigger>
                          <Select.IndicatorGroup>
                            <Select.Indicator />
                          </Select.IndicatorGroup>
                        </Select.Control>
                        <Select.Positioner>
                          <Select.Content>
                            {streamOptions.map((option) => (
                              <Select.Item key={option.value} item={option}>
                                {option.label}
                                <Select.ItemIndicator />
                              </Select.Item>
                            ))}
                          </Select.Content>
                        </Select.Positioner>
                      </Select.Root>
                    </Field.Root>
                    <Field.Root mt={4}>
                      <Field.Label>Zulip topic</Field.Label>
                      <Select.Root
                        value={room.zulipTopic ? [room.zulipTopic] : []}
                        onValueChange={(e) =>
                          setRoomInput({
                            ...room,
                            zulipTopic: e.value[0],
                          })
                        }
                        collection={topicCollection}
                        disabled={!room.zulipAutoPost}
                      >
                        <Select.HiddenSelect />
                        <Select.Control>
                          <Select.Trigger>
                            <Select.ValueText placeholder="Select topic" />
                          </Select.Trigger>
                          <Select.IndicatorGroup>
                            <Select.Indicator />
                          </Select.IndicatorGroup>
                        </Select.Control>
                        <Select.Positioner>
                          <Select.Content>
                            {topicOptions.map((option) => (
                              <Select.Item key={option.value} item={option}>
                                {option.label}
                                <Select.ItemIndicator />
                              </Select.Item>
                            ))}
                          </Select.Content>
                        </Select.Positioner>
                      </Select.Root>
                    </Field.Root>
                  </>
                )}
              </Tabs.Content>

              <Tabs.Content value="webhook" pt={6}>

@@ -5,10 +5,11 @@ import useWaveform from "../useWaveform";
import useMp3 from "../useMp3";
import { TopicList } from "./_components/TopicList";
import { Topic } from "../webSocketTypes";
import React, { useEffect, useState, use } from "react";
import React, { useEffect, useState, useCallback, use } from "react";
import FinalSummary from "./finalSummary";
import TranscriptTitle from "../transcriptTitle";
import Player from "../player";
import VideoPlayer from "../videoPlayer";
import { useWebSockets } from "../useWebSockets";
import { useRouter } from "next/navigation";
import { parseNonEmptyString } from "../../../lib/utils";
@@ -23,6 +24,8 @@ import {
} from "@chakra-ui/react";
import { useTranscriptGet } from "../../../lib/apiHooks";
import { TranscriptStatus } from "../../../lib/transcript";
import { useAuth } from "../../../lib/AuthProvider";
import { featureEnabled } from "../../../lib/features";

type TranscriptDetails = {
  params: Promise<{
@@ -56,6 +59,24 @@ export default function TranscriptDetails(details: TranscriptDetails) {
  const [finalSummaryElement, setFinalSummaryElement] =
    useState<HTMLDivElement | null>(null);

  const auth = useAuth();
  const isAuthenticated =
    auth.status === "authenticated" || !featureEnabled("requireLogin");
  const hasCloudVideo = !!transcript.data?.has_cloud_video && isAuthenticated;
  const [videoExpanded, setVideoExpanded] = useState(false);
  const [videoNewBadge, setVideoNewBadge] = useState(() => {
    if (typeof window === "undefined") return true;
    return !localStorage.getItem(`video-seen-${transcriptId}`);
  });

  const handleVideoToggle = useCallback(() => {
    setVideoExpanded((prev) => !prev);
    if (videoNewBadge) {
      setVideoNewBadge(false);
      localStorage.setItem(`video-seen-${transcriptId}`, "1");
    }
  }, [videoNewBadge, transcriptId]);

  useEffect(() => {
    if (!waiting || !transcript.data) return;

@@ -129,7 +150,7 @@ export default function TranscriptDetails(details: TranscriptDetails) {
        mt={4}
        mb={4}
      >
        {!mp3.audioDeleted && (
        {isAuthenticated && !mp3.audioDeleted && (
          <>
            {waveform.waveform && mp3.media && topics.topics ? (
              <Player
@@ -156,8 +177,14 @@ export default function TranscriptDetails(details: TranscriptDetails) {
      <Grid
        templateColumns={{ base: "minmax(0, 1fr)", md: "repeat(2, 1fr)" }}
        templateRows={{
          base: "auto minmax(0, 1fr) minmax(0, 1fr)",
          md: "auto minmax(0, 1fr)",
          base:
            hasCloudVideo && videoExpanded
              ? "auto auto minmax(0, 1fr) minmax(0, 1fr)"
              : "auto minmax(0, 1fr) minmax(0, 1fr)",
          md:
            hasCloudVideo && videoExpanded
              ? "auto auto minmax(0, 1fr)"
              : "auto minmax(0, 1fr)",
        }}
        gap={4}
        gridRowGap={2}
@@ -180,6 +207,10 @@ export default function TranscriptDetails(details: TranscriptDetails) {
              transcript={transcript.data || null}
              topics={topics.topics}
              finalSummaryElement={finalSummaryElement}
              hasCloudVideo={hasCloudVideo}
              videoExpanded={videoExpanded}
              onVideoToggle={handleVideoToggle}
              videoNewBadge={videoNewBadge}
            />
          </Flex>
          {mp3.audioDeleted && (
@@ -190,6 +221,18 @@ export default function TranscriptDetails(details: TranscriptDetails) {
            )}
          </Flex>
        </GridItem>
        {hasCloudVideo && videoExpanded && (
          <GridItem colSpan={{ base: 1, md: 2 }}>
            <VideoPlayer
              transcriptId={transcriptId}
              duration={transcript.data?.cloud_video_duration ?? null}
              expanded={videoExpanded}
              onClose={() => setVideoExpanded(false)}
              sourceLanguage={transcript.data?.source_language ?? null}
              participants={transcript.data?.participants ?? null}
            />
          </GridItem>
        )}
        <TopicList
          topics={topics.topics || []}
          useActiveTopic={useActiveTopic}

@@ -21,6 +21,10 @@ import { useAuth } from "../../../lib/AuthProvider";
import { featureEnabled } from "../../../lib/features";
import { SearchableLanguageSelect } from "../../../components/SearchableLanguageSelect";

const sourceLanguages = supportedLanguages.filter(
  (l) => l.value && l.value !== "NOTRANSLATION",
);

const TranscriptCreate = () => {
  const router = useRouter();
  const auth = useAuth();
@@ -33,8 +37,13 @@ const TranscriptCreate = () => {
  const nameChange = (event: React.ChangeEvent<HTMLInputElement>) => {
    setName(event.target.value);
  };
  const [sourceLanguage, setSourceLanguage] = useState<string>("");
  const [targetLanguage, setTargetLanguage] = useState<string>("NOTRANSLATION");

  const onSourceLanguageChange = (newval) => {
    (!newval || typeof newval === "string") &&
      setSourceLanguage(newval || "en");
  };
  const onLanguageChange = (newval) => {
    (!newval || typeof newval === "string") && setTargetLanguage(newval);
  };
@@ -55,7 +64,7 @@ const TranscriptCreate = () => {
    const targetLang = getTargetLanguage();
    createTranscript.create({
      name,
      source_language: "en",
      source_language: sourceLanguage || "en",
      target_language: targetLang || "en",
      source_kind: "live",
    });
@@ -67,7 +76,7 @@ const TranscriptCreate = () => {
    const targetLang = getTargetLanguage();
    createTranscript.create({
      name,
      source_language: "en",
      source_language: sourceLanguage || "en",
      target_language: targetLang || "en",
      source_kind: "file",
    });
@@ -160,6 +169,15 @@ const TranscriptCreate = () => {
            placeholder="Optional"
          />
        </Box>
        <Box mb={4}>
          <Text mb={1}>Audio language</Text>
          <SearchableLanguageSelect
            options={sourceLanguages}
            value={sourceLanguage}
            onChange={onSourceLanguageChange}
            placeholder="Select language"
          />
        </Box>
        <Box mb={4}>
          <Text mb={1}>Do you want to enable live translation?</Text>
          <SearchableLanguageSelect

@@ -18,10 +18,11 @@ import {
  createListCollection,
} from "@chakra-ui/react";
import { LuShare2 } from "react-icons/lu";
import { useTranscriptUpdate } from "../../lib/apiHooks";
import { useTranscriptUpdate, useConfig } from "../../lib/apiHooks";
import ShareLink from "./shareLink";
import ShareCopy from "./shareCopy";
import ShareZulip from "./shareZulip";
import ShareEmail from "./shareEmail";
import { useAuth } from "../../lib/AuthProvider";

import { featureEnabled } from "../../lib/features";
@@ -55,6 +56,9 @@ export default function ShareAndPrivacy(props: ShareAndPrivacyProps) {
  const [shareLoading, setShareLoading] = useState(false);
  const requireLogin = featureEnabled("requireLogin");
  const updateTranscriptMutation = useTranscriptUpdate();
  const { data: config } = useConfig();
  const zulipEnabled = config?.zulip_enabled ?? false;
  const emailEnabled = config?.email_enabled ?? false;

  const updateShareMode = async (selectedValue: string) => {
    const selectedOption = shareOptionsData.find(
@@ -169,14 +173,20 @@ export default function ShareAndPrivacy(props: ShareAndPrivacyProps) {
          <Text fontSize="sm" mb="2" fontWeight={"bold"}>
            Share options
          </Text>
          <Flex gap={2} mb={2}>
            {requireLogin && (
          <Flex gap={2} mb={2} flexWrap="wrap">
            {requireLogin && zulipEnabled && (
              <ShareZulip
                transcript={props.transcript}
                topics={props.topics}
                disabled={toShareMode(shareMode.value) === "private"}
              />
            )}
            {emailEnabled && (
              <ShareEmail
                transcript={props.transcript}
                disabled={toShareMode(shareMode.value) === "private"}
              />
            )}
            <ShareCopy
              finalSummaryElement={props.finalSummaryElement}
              transcript={props.transcript}

110
www/app/(app)/transcripts/shareEmail.tsx
Normal file
@@ -0,0 +1,110 @@
|
||||
import { useState } from "react";
import type { components } from "../../reflector-api";

type GetTranscriptWithParticipants =
  components["schemas"]["GetTranscriptWithParticipants"];
import {
  Button,
  Dialog,
  CloseButton,
  Input,
  Box,
  Text,
} from "@chakra-ui/react";
import { LuMail } from "react-icons/lu";
import { useTranscriptSendEmail } from "../../lib/apiHooks";

type ShareEmailProps = {
  transcript: GetTranscriptWithParticipants;
  disabled: boolean;
};

export default function ShareEmail(props: ShareEmailProps) {
  const [showModal, setShowModal] = useState(false);
  const [email, setEmail] = useState("");
  const [sent, setSent] = useState(false);
  const sendEmailMutation = useTranscriptSendEmail();

  const handleSend = async () => {
    if (!email) return;
    try {
      await sendEmailMutation.mutateAsync({
        params: {
          path: { transcript_id: props.transcript.id },
        },
        body: { email },
      });
      setSent(true);
      setTimeout(() => {
        setSent(false);
        setShowModal(false);
        setEmail("");
      }, 2000);
    } catch (error) {
      console.error("Error sending email:", error);
    }
  };

  return (
    <>
      <Button disabled={props.disabled} onClick={() => setShowModal(true)}>
        <LuMail /> Send Email
      </Button>

      <Dialog.Root
        open={showModal}
        onOpenChange={(e) => {
          setShowModal(e.open);
          if (!e.open) {
            setSent(false);
            setEmail("");
          }
        }}
        size="md"
      >
        <Dialog.Backdrop />
        <Dialog.Positioner>
          <Dialog.Content>
            <Dialog.Header>
              <Dialog.Title>Send Transcript via Email</Dialog.Title>
              <Dialog.CloseTrigger asChild>
                <CloseButton />
              </Dialog.CloseTrigger>
            </Dialog.Header>
            <Dialog.Body>
              {sent ? (
                <Text color="green.500">Email sent successfully!</Text>
              ) : (
                <Box>
                  <Text mb={2}>
                    Enter the email address to send this transcript to:
                  </Text>
                  <Input
                    type="email"
                    placeholder="recipient@example.com"
                    value={email}
                    onChange={(e) => setEmail(e.target.value)}
                    onKeyDown={(e) => e.key === "Enter" && handleSend()}
                  />
                </Box>
              )}
            </Dialog.Body>
            <Dialog.Footer>
              <Button variant="ghost" onClick={() => setShowModal(false)}>
                Close
              </Button>
              {!sent && (
                <Button
                  disabled={!email || sendEmailMutation.isPending}
                  onClick={handleSend}
                >
                  {sendEmailMutation.isPending ? "Sending..." : "Send"}
                </Button>
              )}
            </Dialog.Footer>
          </Dialog.Content>
        </Dialog.Positioner>
      </Dialog.Root>
    </>
  );
}
@@ -10,11 +10,22 @@ import {
   useTranscriptUpdate,
   useTranscriptParticipants,
 } from "../../lib/apiHooks";
-import { Heading, IconButton, Input, Flex, Spacer } from "@chakra-ui/react";
-import { LuPen, LuCopy, LuCheck } from "react-icons/lu";
+import {
+  Heading,
+  IconButton,
+  Input,
+  Flex,
+  Spacer,
+  Spinner,
+  Box,
+  Text,
+} from "@chakra-ui/react";
+import { LuPen, LuCopy, LuCheck, LuDownload, LuVideo } from "react-icons/lu";
 import ShareAndPrivacy from "./shareAndPrivacy";
 import { buildTranscriptWithTopics } from "./buildTranscriptWithTopics";
 import { toaster } from "../../components/ui/toaster";
+import { useAuth } from "../../lib/AuthProvider";
+import { API_URL } from "../../lib/apiClient";

 type TranscriptTitle = {
   title: string;
@@ -25,13 +36,51 @@ type TranscriptTitle = {
   transcript: GetTranscriptWithParticipants | null;
   topics: GetTranscriptTopic[] | null;
   finalSummaryElement: HTMLDivElement | null;
+
+  // video props
+  hasCloudVideo?: boolean;
+  videoExpanded?: boolean;
+  onVideoToggle?: () => void;
+  videoNewBadge?: boolean;
 };

 const TranscriptTitle = (props: TranscriptTitle) => {
   const [displayedTitle, setDisplayedTitle] = useState(props.title);
   const [preEditTitle, setPreEditTitle] = useState(props.title);
   const [isEditing, setIsEditing] = useState(false);
+  const [downloading, setDownloading] = useState(false);
   const updateTranscriptMutation = useTranscriptUpdate();
+  const auth = useAuth();
+  const accessToken = auth.status === "authenticated" ? auth.accessToken : null;
+  const userId = auth.status === "authenticated" ? auth.user?.id : null;
+  const isOwner = !!(userId && userId === props.transcript?.user_id);
+
+  const handleDownloadZip = async () => {
+    if (!props.transcriptId || downloading) return;
+    setDownloading(true);
+    try {
+      const headers: Record<string, string> = {};
+      if (accessToken) {
+        headers["Authorization"] = `Bearer ${accessToken}`;
+      }
+      const resp = await fetch(
+        `${API_URL}/v1/transcripts/${props.transcriptId}/download/zip`,
+        { headers },
+      );
+      if (!resp.ok) throw new Error("Download failed");
+      const blob = await resp.blob();
+      const url = URL.createObjectURL(blob);
+      const a = document.createElement("a");
+      a.href = url;
+      a.download = `transcript_${props.transcriptId.split("-")[0]}.zip`;
+      a.click();
+      URL.revokeObjectURL(url);
+    } catch (err) {
+      console.error("Failed to download zip:", err);
+    } finally {
+      setDownloading(false);
+    }
+  };
   const participantsQuery = useTranscriptParticipants(
     props.transcript?.id ? parseMaybeNonEmptyString(props.transcript.id) : null,
   );
@@ -173,6 +222,51 @@ const TranscriptTitle = (props: TranscriptTitle) => {
           >
             <LuCopy />
           </IconButton>
+          {isOwner && (
+            <IconButton
+              aria-label="Download Transcript Zip"
+              size="sm"
+              variant="subtle"
+              onClick={handleDownloadZip}
+              disabled={downloading}
+            >
+              {downloading ? <Spinner size="sm" /> : <LuDownload />}
+            </IconButton>
+          )}
+          {props.hasCloudVideo && props.onVideoToggle && (
+            <Box position="relative" display="inline-flex">
+              <IconButton
+                aria-label={
+                  props.videoExpanded
+                    ? "Hide cloud recording"
+                    : "Show cloud recording"
+                }
+                size="sm"
+                variant={props.videoExpanded ? "solid" : "subtle"}
+                colorPalette={props.videoExpanded ? "blue" : undefined}
+                onClick={props.onVideoToggle}
+              >
+                <LuVideo />
+              </IconButton>
+              {props.videoNewBadge && (
+                <Text
+                  position="absolute"
+                  top="-1"
+                  right="-1"
+                  fontSize="2xs"
+                  fontWeight="bold"
+                  color="white"
+                  bg="red.500"
+                  px={1}
+                  borderRadius="sm"
+                  lineHeight="tall"
+                  pointerEvents="none"
+                >
+                  new
+                </Text>
+              )}
+            </Box>
+          )}
           <ShareAndPrivacy
             finalSummaryElement={props.finalSummaryElement}
             transcript={props.transcript}
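As an aside, the `a.download` filename in `handleDownloadZip` above keeps only the first dash-separated segment of the transcript id. A pure sketch of that scheme, assuming a UUID-shaped id (`zipFilename` is a helper name introduced here for illustration; the component builds the string inline):

```typescript
// Illustration only: mirrors the inline template string in handleDownloadZip.
// `zipFilename` is a hypothetical helper, not part of the diff above.
function zipFilename(transcriptId: string): string {
  // Keep just the first UUID segment to get a short, stable filename.
  return `transcript_${transcriptId.split("-")[0]}.zip`;
}
```

For an id with no dashes, `split("-")[0]` returns the whole string, so the scheme degrades gracefully.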
@@ -56,7 +56,7 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
   }, [navigator.serviceWorker, !serviceWorker, accessTokenInfo]);

   useEffect(() => {
-    if (!transcriptId || later || !transcript) return;
+    if (!transcriptId || later || !transcript || !accessTokenInfo) return;

     let stopped = false;
     let audioElement: HTMLAudioElement | null = null;
@@ -113,7 +113,7 @@ const useMp3 = (transcriptId: string, waiting?: boolean): Mp3Response => {
       if (handleError) audioElement.removeEventListener("error", handleError);
     }
   };
-  }, [transcriptId, transcript, later]);
+  }, [transcriptId, transcript, later, accessTokenInfo]);

   const getNow = () => {
     setLater(false);
www/app/(app)/transcripts/videoPlayer.tsx (new file, 508 additions)
@@ -0,0 +1,508 @@
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { Box, Flex, Skeleton, Text } from "@chakra-ui/react";
import { LuMinus, LuPlus, LuVideo, LuX } from "react-icons/lu";
import { useAuth } from "../../lib/AuthProvider";
import { API_URL } from "../../lib/apiClient";
import { generateHighContrastColor } from "../../lib/utils";

type SpeakerInfo = { speaker: number | null; name: string };

type VideoPlayerProps = {
  transcriptId: string;
  duration: number | null;
  expanded: boolean;
  onClose: () => void;
  sourceLanguage?: string | null;
  participants?: SpeakerInfo[] | null;
};

function formatDuration(seconds: number): string {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = seconds % 60;
  if (h > 0)
    return `${h}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}`;
  return `${m}:${String(s).padStart(2, "0")}`;
}

const VTT_TIMESTAMP_RE =
  /(\d{2}:\d{2}:\d{2}\.\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2}\.\d{3})/g;

function parseVttTimestamp(ts: string): number {
  const [h, m, rest] = ts.split(":");
  const [s, ms] = rest.split(".");
  return Number(h) * 3600 + Number(m) * 60 + Number(s) + Number(ms) / 1000;
}

function formatVttTimestamp(totalSeconds: number): string {
  const clamped = Math.max(0, totalSeconds);
  const h = Math.floor(clamped / 3600);
  const m = Math.floor((clamped % 3600) / 60);
  const s = Math.floor(clamped % 60);
  const ms = Math.round((clamped % 1) * 1000);
  return `${String(h).padStart(2, "0")}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}.${String(ms).padStart(3, "0")}`;
}

function shiftVttTimestamps(vttContent: string, offsetSeconds: number): string {
  if (offsetSeconds === 0) return vttContent;
  return vttContent.replace(
    VTT_TIMESTAMP_RE,
    (_match, start: string, end: string) => {
      const newStart = formatVttTimestamp(
        parseVttTimestamp(start) + offsetSeconds,
      );
      const newEnd = formatVttTimestamp(parseVttTimestamp(end) + offsetSeconds);
      return `${newStart} --> ${newEnd}`;
    },
  );
}

type VttSegment = { start: number; end: number; speaker: string };

const VTT_CUE_RE =
  /(\d{2}:\d{2}:\d{2}\.\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2}\.\d{3})\n<v ([^>]+)>/g;

function parseVttSegments(vttContent: string): VttSegment[] {
  const segments: VttSegment[] = [];
  let match;
  while ((match = VTT_CUE_RE.exec(vttContent)) !== null) {
    segments.push({
      start: parseVttTimestamp(match[1]),
      end: parseVttTimestamp(match[2]),
      speaker: match[3],
    });
  }
  return segments;
}
// Same background as TopicSegment so speaker colors match the transcript UI
const SPEAKER_COLOR_BG: [number, number, number] = [96, 165, 250];

function SpeakerProgressBar({
  segments,
  videoDuration,
  currentTime,
  captionOffset,
  onSeek,
  participants,
}: {
  segments: VttSegment[];
  videoDuration: number;
  currentTime: number;
  captionOffset: number;
  onSeek: (time: number) => void;
  participants?: SpeakerInfo[] | null;
}) {
  const barRef = useRef<HTMLDivElement>(null);

  // Build a name→"Speaker N" reverse lookup so colors match TopicSegment
  const speakerColors = useMemo(() => {
    const nameToColorKey: Record<string, string> = {};
    if (participants) {
      for (const p of participants) {
        if (p.speaker != null) {
          nameToColorKey[p.name] = `Speaker ${p.speaker}`;
        }
      }
    }
    const map: Record<string, string | undefined> = {};
    for (const seg of segments) {
      if (!map[seg.speaker]) {
        const colorKey = nameToColorKey[seg.speaker] ?? seg.speaker;
        map[seg.speaker] = generateHighContrastColor(
          colorKey,
          SPEAKER_COLOR_BG,
        );
      }
    }
    return map;
  }, [segments, participants]);

  const activeSpeaker = useMemo(() => {
    for (const seg of segments) {
      const adjStart = seg.start + captionOffset;
      const adjEnd = seg.end + captionOffset;
      if (currentTime >= adjStart && currentTime < adjEnd) {
        return seg.speaker;
      }
    }
    return null;
  }, [segments, currentTime, captionOffset]);

  const handleClick = (e: React.MouseEvent<HTMLDivElement>) => {
    if (!barRef.current || !videoDuration) return;
    const rect = barRef.current.getBoundingClientRect();
    const fraction = Math.max(
      0,
      Math.min(1, (e.clientX - rect.left) / rect.width),
    );
    onSeek(fraction * videoDuration);
  };

  const progressPct =
    videoDuration > 0 ? (currentTime / videoDuration) * 100 : 0;

  return (
    <Box position="relative" mb={4}>
      <Box
        ref={barRef}
        position="relative"
        h="8px"
        bg="gray.700"
        cursor="pointer"
        onClick={handleClick}
        borderBottomRadius="md"
        overflow="hidden"
      >
        {segments.map((seg, i) => {
          const adjStart = Math.max(0, seg.start + captionOffset);
          const adjEnd = Math.max(0, seg.end + captionOffset);
          if (adjEnd <= 0 || adjStart >= videoDuration) return null;
          const leftPct = (adjStart / videoDuration) * 100;
          const widthPct = ((adjEnd - adjStart) / videoDuration) * 100;
          return (
            <Box
              key={i}
              position="absolute"
              top={0}
              bottom={0}
              left={`${leftPct}%`}
              width={`${widthPct}%`}
              bg={speakerColors[seg.speaker]}
            />
          );
        })}
        {/* Playhead */}
        <Box
          position="absolute"
          top={0}
          bottom={0}
          left={`${progressPct}%`}
          w="2px"
          bg="white"
          zIndex={1}
          pointerEvents="none"
        />
      </Box>
      {/* Speaker tooltip below the bar */}
      {activeSpeaker && (
        <Text
          position="absolute"
          top="10px"
          left={`${progressPct}%`}
          transform="translateX(-50%)"
          fontSize="2xs"
          color={speakerColors[activeSpeaker]}
          fontWeight="semibold"
          whiteSpace="nowrap"
          pointerEvents="none"
        >
          {activeSpeaker}
        </Text>
      )}
    </Box>
  );
}
export default function VideoPlayer({
  transcriptId,
  duration,
  expanded,
  onClose,
  sourceLanguage,
  participants,
}: VideoPlayerProps) {
  const [videoUrl, setVideoUrl] = useState<string | null>(null);
  const [rawVtt, setRawVtt] = useState<string | null>(null);
  const [captionsUrl, setCaptionsUrl] = useState<string | null>(null);
  const [captionOffset, setCaptionOffset] = useState(0);
  const [currentTime, setCurrentTime] = useState(0);
  const [videoDuration, setVideoDuration] = useState(0);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const prevBlobUrl = useRef<string | null>(null);
  const videoRef = useRef<HTMLVideoElement>(null);
  const auth = useAuth();
  const accessToken = auth.status === "authenticated" ? auth.accessToken : null;

  useEffect(() => {
    if (!expanded || !transcriptId || videoUrl) return;

    const fetchVideoUrl = async () => {
      setLoading(true);
      setError(null);
      try {
        const url = `${API_URL}/v1/transcripts/${transcriptId}/video/url`;
        const headers: Record<string, string> = {};
        if (accessToken) {
          headers["Authorization"] = `Bearer ${accessToken}`;
        }
        const resp = await fetch(url, { headers });
        if (!resp.ok) {
          if (resp.status === 401) {
            throw new Error("Sign in to view the video recording");
          }
          throw new Error("Failed to load video");
        }
        const data = await resp.json();
        setVideoUrl(data.url);
      } catch (err) {
        setError(err instanceof Error ? err.message : "Failed to load video");
      } finally {
        setLoading(false);
      }
    };

    fetchVideoUrl();
  }, [expanded, transcriptId, accessToken, videoUrl]);

  useEffect(() => {
    if (!videoUrl || !transcriptId) return;

    let cancelled = false;

    const fetchCaptions = async () => {
      try {
        const url = `${API_URL}/v1/transcripts/${transcriptId}?transcript_format=webvtt-named`;
        const headers: Record<string, string> = {};
        if (accessToken) {
          headers["Authorization"] = `Bearer ${accessToken}`;
        }
        const resp = await fetch(url, { headers });
        if (!resp.ok) return;
        const data = await resp.json();
        const vttContent = data?.transcript;
        if (!vttContent || cancelled) return;
        setRawVtt(vttContent);
      } catch {
        // Captions are non-critical — fail silently
      }
    };

    fetchCaptions();

    return () => {
      cancelled = true;
    };
  }, [videoUrl, transcriptId, accessToken]);

  // Rebuild blob URL whenever rawVtt or captionOffset changes
  useEffect(() => {
    if (!rawVtt) return;

    const shifted = shiftVttTimestamps(rawVtt, captionOffset);
    const blob = new Blob([shifted], { type: "text/vtt" });
    const blobUrl = URL.createObjectURL(blob);

    if (prevBlobUrl.current) {
      URL.revokeObjectURL(prevBlobUrl.current);
    }
    prevBlobUrl.current = blobUrl;
    setCaptionsUrl(blobUrl);

    return () => {
      URL.revokeObjectURL(blobUrl);
      prevBlobUrl.current = null;
    };
  }, [rawVtt, captionOffset]);

  const adjustOffset = useCallback((delta: number) => {
    setCaptionOffset((prev) => Math.round((prev + delta) * 10) / 10);
  }, []);

  const formattedOffset = useMemo(() => {
    const sign = captionOffset >= 0 ? "+" : "";
    return `${sign}${captionOffset.toFixed(1)}s`;
  }, [captionOffset]);

  const segments = useMemo(
    () => (rawVtt ? parseVttSegments(rawVtt) : []),
    [rawVtt],
  );

  // Track video currentTime and duration
  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;

    const onTimeUpdate = () => setCurrentTime(video.currentTime);
    const onDurationChange = () => {
      if (video.duration && isFinite(video.duration)) {
        setVideoDuration(video.duration);
      }
    };

    video.addEventListener("timeupdate", onTimeUpdate);
    video.addEventListener("loadedmetadata", onDurationChange);
    video.addEventListener("durationchange", onDurationChange);

    return () => {
      video.removeEventListener("timeupdate", onTimeUpdate);
      video.removeEventListener("loadedmetadata", onDurationChange);
      video.removeEventListener("durationchange", onDurationChange);
    };
  }, [videoUrl]);

  const handleSeek = useCallback((time: number) => {
    if (videoRef.current) {
      videoRef.current.currentTime = time;
    }
  }, []);

  if (!expanded) return null;

  if (loading) {
    return (
      <Box
        borderRadius="md"
        overflow="hidden"
        bg="gray.900"
        w="fit-content"
        maxW="100%"
      >
        <Skeleton h="200px" w="400px" maxW="100%" />
      </Box>
    );
  }

  if (error || !videoUrl) {
    return (
      <Box
        p={3}
        bg="red.100"
        borderRadius="md"
        role="alert"
        w="fit-content"
        maxW="100%"
      >
        <Text fontSize="sm">{error || "Failed to load video recording"}</Text>
      </Box>
    );
  }

  return (
    <Box borderRadius="md" bg="black" w="fit-content" maxW="100%" mx="auto">
      {/* Header bar with title and close button */}
      <Flex
        align="center"
        justify="space-between"
        px={3}
        py={1.5}
        bg="gray.800"
        borderTopRadius="md"
        gap={4}
      >
        <Flex align="center" gap={2}>
          <LuVideo size={14} color="white" />
          <Text fontSize="xs" fontWeight="medium" color="white">
            Cloud recording
          </Text>
          {duration != null && (
            <Text fontSize="xs" color="gray.400">
              {formatDuration(duration)}
            </Text>
          )}
        </Flex>
        <Flex align="center" gap={3}>
          {rawVtt && (
            <Flex align="center" gap={1}>
              <Text fontSize="2xs" color="gray.400">
                CC sync
              </Text>
              <Flex
                align="center"
                justify="center"
                borderRadius="sm"
                p={0.5}
                cursor="pointer"
                onClick={() => adjustOffset(-0.5)}
                _hover={{ bg: "whiteAlpha.300" }}
                transition="background 0.15s"
              >
                <LuMinus size={12} color="white" />
              </Flex>
              <Text
                fontSize="2xs"
                color="gray.300"
                fontFamily="mono"
                minW="3.5em"
                textAlign="center"
              >
                {formattedOffset}
              </Text>
              <Flex
                align="center"
                justify="center"
                borderRadius="sm"
                p={0.5}
                cursor="pointer"
                onClick={() => adjustOffset(0.5)}
                _hover={{ bg: "whiteAlpha.300" }}
                transition="background 0.15s"
              >
                <LuPlus size={12} color="white" />
              </Flex>
            </Flex>
          )}
          <Flex
            align="center"
            justify="center"
            borderRadius="full"
            p={1}
            cursor="pointer"
            onClick={onClose}
            _hover={{ bg: "whiteAlpha.300" }}
            transition="background 0.15s"
          >
            <LuX size={14} color="white" />
          </Flex>
        </Flex>
      </Flex>
      {/* Video element with visible controls */}
      <video
        ref={videoRef}
        src={videoUrl}
        controls
        autoPlay
        controlsList="nodownload"
        disablePictureInPicture
        onContextMenu={(e) => e.preventDefault()}
        style={{
          display: "block",
          width: "100%",
          maxWidth: "640px",
          maxHeight: "45vh",
          minHeight: "180px",
          objectFit: "contain",
          background: "black",
          ...(segments.length === 0
            ? {
                borderBottomLeftRadius: "0.375rem",
                borderBottomRightRadius: "0.375rem",
              }
            : {}),
        }}
      >
        {captionsUrl && (
          <track
            kind="captions"
            src={captionsUrl}
            srcLang={sourceLanguage || "en"}
            label="Auto-generated captions"
            default
          />
        )}
      </video>
      {segments.length > 0 && videoDuration > 0 && (
        <SpeakerProgressBar
          segments={segments}
          videoDuration={videoDuration}
          currentTime={currentTime}
          captionOffset={captionOffset}
          onSeek={handleSeek}
          participants={participants}
        />
      )}
    </Box>
  );
}
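The caption-sync control in videoPlayer.tsx above rests on three pure helpers: parseVttTimestamp, formatVttTimestamp, and shiftVttTimestamps. A standalone sketch of the same logic, extracted here so it can be exercised outside React:

```typescript
// Standalone copies of the pure VTT helpers from videoPlayer.tsx above.
const VTT_TIMESTAMP_RE =
  /(\d{2}:\d{2}:\d{2}\.\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2}\.\d{3})/g;

// "HH:MM:SS.mmm" -> seconds
function parseVttTimestamp(ts: string): number {
  const [h, m, rest] = ts.split(":");
  const [s, ms] = rest.split(".");
  return Number(h) * 3600 + Number(m) * 60 + Number(s) + Number(ms) / 1000;
}

// seconds -> "HH:MM:SS.mmm", clamping negative values to zero
function formatVttTimestamp(totalSeconds: number): string {
  const clamped = Math.max(0, totalSeconds);
  const h = Math.floor(clamped / 3600);
  const m = Math.floor((clamped % 3600) / 60);
  const s = Math.floor(clamped % 60);
  const ms = Math.round((clamped % 1) * 1000);
  return `${String(h).padStart(2, "0")}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}.${String(ms).padStart(3, "0")}`;
}

// Shift every "start --> end" cue timing in a WebVTT document by offsetSeconds
function shiftVttTimestamps(vttContent: string, offsetSeconds: number): string {
  if (offsetSeconds === 0) return vttContent;
  return vttContent.replace(
    VTT_TIMESTAMP_RE,
    (_match, start: string, end: string) => {
      const newStart = formatVttTimestamp(parseVttTimestamp(start) + offsetSeconds);
      const newEnd = formatVttTimestamp(parseVttTimestamp(end) + offsetSeconds);
      return `${newStart} --> ${newEnd}`;
    },
  );
}

// Example: shifting a cue line forward by 1.5s
// "00:00:01.000 --> 00:00:02.500" becomes "00:00:02.500 --> 00:00:04.000"
```

Note the clamp in formatVttTimestamp: a cue shifted before zero collapses to 00:00:00.000 rather than being dropped, which matches how the component handles negative CC-sync offsets.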
@@ -22,6 +22,8 @@ import DailyIframe, {
 import type { components } from "../../reflector-api";
 import { useAuth } from "../../lib/AuthProvider";
 import { useConsentDialog } from "../../lib/consent";
+import { useEmailTranscriptDialog } from "../../lib/emailTranscript";
+import { featureEnabled } from "../../lib/features";
 import {
   useRoomJoinMeeting,
   useMeetingStartRecording,
@@ -37,6 +39,7 @@ import { useUuidV5 } from "react-uuid-hook";

 const CONSENT_BUTTON_ID = "recording-consent";
 const RECORDING_INDICATOR_ID = "recording-indicator";
+const EMAIL_TRANSCRIPT_BUTTON_ID = "email-transcript";

 // Namespace UUID for UUIDv5 generation of raw-tracks instanceIds
 // DO NOT CHANGE: Breaks instanceId determinism across deployments
@@ -209,6 +212,12 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
   const showConsentModalRef = useRef(showConsentModal);
   showConsentModalRef.current = showConsentModal;

+  const { showEmailModal } = useEmailTranscriptDialog({
+    meetingId: assertMeetingId(meeting.id),
+  });
+  const showEmailModalRef = useRef(showEmailModal);
+  showEmailModalRef.current = showEmailModal;
+
   useEffect(() => {
     if (authLastUserId === undefined || !meeting?.id || !roomName) return;

@@ -242,6 +251,9 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
       if (ev.button_id === CONSENT_BUTTON_ID) {
         showConsentModalRef.current();
       }
+      if (ev.button_id === EMAIL_TRANSCRIPT_BUTTON_ID) {
+        showEmailModalRef.current();
+      }
     },
     [
       /*keep static; iframe recreation depends on it*/
@@ -319,6 +331,10 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
     () => new URL("/recording-icon.svg", window.location.origin),
     [],
   );
+  const emailIconUrl = useMemo(
+    () => new URL("/email-icon.svg", window.location.origin),
+    [],
+  );

   const [frame, { setCustomTrayButton }] = useFrame(container, {
     onLeftMeeting: handleLeave,
@@ -371,6 +387,20 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
     );
   }, [showConsentButton, recordingIconUrl, setCustomTrayButton]);

+  useEffect(() => {
+    const show = featureEnabled("emailTranscript");
+    setCustomTrayButton(
+      EMAIL_TRANSCRIPT_BUTTON_ID,
+      show
+        ? {
+            iconPath: emailIconUrl.href,
+            label: "Email Transcript",
+            tooltip: "Get transcript emailed to you",
+          }
+        : null,
+    );
+  }, [emailIconUrl, setCustomTrayButton]);
+
   if (authLastUserId === undefined) {
     return (
       <Center width="100vw" height="100vh">
@@ -67,7 +67,7 @@ export function SearchableLanguageSelect({

   const collection = useMemo(() => createListCollection({ items }), [items]);

-  const selectedValues = value && value !== "NOTRANSLATION" ? [value] : [];
+  const selectedValues = value ? [value] : [];

   return (
     <Combobox.Root
@@ -228,7 +228,11 @@ export function useRoomDelete() {
   });
 }

-export function useZulipStreams() {
+export function useConfig() {
+  return $api.useQuery("get", "/v1/config", {});
+}
+
+export function useZulipStreams(enabled: boolean = true) {
   const { isAuthenticated } = useAuthReady();

   return $api.useQuery(
@@ -236,7 +240,7 @@
     "/v1/zulip/streams",
     {},
     {
-      enabled: isAuthenticated,
+      enabled: enabled && isAuthenticated,
     },
   );
 }
@@ -291,6 +295,16 @@ export function useTranscriptPostToZulip() {
   });
 }

+export function useTranscriptSendEmail() {
+  const { setError } = useError();
+
+  return $api.useMutation("post", "/v1/transcripts/{transcript_id}/email", {
+    onError: (error) => {
+      setError(error as Error, "There was an error sending the email");
+    },
+  });
+}
+
 export function useTranscriptUploadAudio() {
   const { setError } = useError();
   const queryClient = useQueryClient();
@@ -643,6 +657,16 @@ export function useMeetingAudioConsent() {
   });
 }

+export function useMeetingAddEmailRecipient() {
+  const { setError } = useError();
+
+  return $api.useMutation("post", "/v1/meetings/{meeting_id}/email-recipient", {
+    onError: (error) => {
+      setError(error as Error, "There was an error adding the email");
+    },
+  });
+}
+
 export function useMeetingDeactivate() {
   const { setError } = useError();
   const queryClient = useQueryClient();
@@ -13,6 +13,8 @@ export const FEATURE_PRIVACY_ENV_NAME = "FEATURE_PRIVACY" as const;
 export const FEATURE_BROWSE_ENV_NAME = "FEATURE_BROWSE" as const;
 export const FEATURE_SEND_TO_ZULIP_ENV_NAME = "FEATURE_SEND_TO_ZULIP" as const;
 export const FEATURE_ROOMS_ENV_NAME = "FEATURE_ROOMS" as const;
+export const FEATURE_EMAIL_TRANSCRIPT_ENV_NAME =
+  "FEATURE_EMAIL_TRANSCRIPT" as const;

 const FEATURE_ENV_NAMES = [
   FEATURE_REQUIRE_LOGIN_ENV_NAME,
@@ -20,6 +22,7 @@ const FEATURE_ENV_NAMES = [
   FEATURE_BROWSE_ENV_NAME,
   FEATURE_SEND_TO_ZULIP_ENV_NAME,
   FEATURE_ROOMS_ENV_NAME,
+  FEATURE_EMAIL_TRANSCRIPT_ENV_NAME,
 ] as const;

 export type FeatureEnvName = (typeof FEATURE_ENV_NAMES)[number];
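The features.ts change above extends a readonly `as const` tuple and derives the FeatureEnvName union from it with an indexed access type. A minimal sketch of that pattern using the same six names (the `isFeatureEnvName` guard is added here purely for illustration and is not part of the diff):

```typescript
// Sketch of the features.ts pattern: `as const` freezes the array into a
// readonly tuple of string literals, and the indexed access type below
// turns it into a union, so the list and the type can never drift apart.
const FEATURE_ENV_NAMES = [
  "FEATURE_REQUIRE_LOGIN",
  "FEATURE_PRIVACY",
  "FEATURE_BROWSE",
  "FEATURE_SEND_TO_ZULIP",
  "FEATURE_ROOMS",
  "FEATURE_EMAIL_TRANSCRIPT",
] as const;

// (typeof FEATURE_ENV_NAMES)[number] = union of the six literal strings
type FeatureEnvName = (typeof FEATURE_ENV_NAMES)[number];

// Hypothetical runtime guard that narrows an arbitrary string to the union
function isFeatureEnvName(name: string): name is FeatureEnvName {
  return (FEATURE_ENV_NAMES as readonly string[]).includes(name);
}
```

Adding a new flag is then a one-line change to the tuple; both the FeatureEnvName type and any runtime check pick it up automatically.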
www/app/lib/emailTranscript/EmailTranscriptDialog.tsx (new file, 70 additions)
@@ -0,0 +1,70 @@
"use client";

import { useState, useEffect } from "react";
import { Box, Button, Input, Text, VStack, HStack } from "@chakra-ui/react";

interface EmailTranscriptDialogProps {
  onSubmit: (email: string) => void;
  onDismiss: () => void;
}

export function EmailTranscriptDialog({
  onSubmit,
  onDismiss,
}: EmailTranscriptDialogProps) {
  const [email, setEmail] = useState("");
  const [inputEl, setInputEl] = useState<HTMLInputElement | null>(null);

  useEffect(() => {
    inputEl?.focus();
  }, [inputEl]);

  const handleSubmit = () => {
    const trimmed = email.trim();
    if (trimmed) {
      onSubmit(trimmed);
    }
  };

  return (
    <Box
      p={6}
      bg="rgba(255, 255, 255, 0.7)"
      borderRadius="lg"
      boxShadow="lg"
      maxW="md"
      mx="auto"
    >
      <VStack gap={4} alignItems="center">
        <Text fontSize="md" textAlign="center" fontWeight="medium">
          Enter your email to receive the transcript when it's ready
        </Text>
        <Input
          ref={setInputEl}
          type="email"
          placeholder="your@email.com"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
          onKeyDown={(e) => {
            if (e.key === "Enter") handleSubmit();
          }}
          size="sm"
          bg="white"
        />
        <HStack gap={4} justifyContent="center">
          <Button variant="ghost" size="sm" onClick={onDismiss}>
            Cancel
          </Button>
          <Button
            colorPalette="primary"
            size="sm"
            onClick={handleSubmit}
            disabled={!email.trim()}
          >
            Send
          </Button>
        </HStack>
      </VStack>
    </Box>
  );
}
www/app/lib/emailTranscript/index.ts (new file, 1 addition)
@@ -0,0 +1 @@
export { useEmailTranscriptDialog } from "./useEmailTranscriptDialog";
www/app/lib/emailTranscript/useEmailTranscriptDialog.tsx (new file, 128 additions)
@@ -0,0 +1,128 @@
"use client";

import { useCallback, useState, useEffect, useRef } from "react";
import { Box, Text } from "@chakra-ui/react";
import { toaster } from "../../components/ui/toaster";
import { useMeetingAddEmailRecipient } from "../apiHooks";
import { EmailTranscriptDialog } from "./EmailTranscriptDialog";
import type { MeetingId } from "../types";

const TOAST_CHECK_INTERVAL_MS = 100;

type UseEmailTranscriptDialogParams = {
  meetingId: MeetingId;
};

export function useEmailTranscriptDialog({
  meetingId,
}: UseEmailTranscriptDialogParams) {
  const [modalOpen, setModalOpen] = useState(false);
  const addEmailMutation = useMeetingAddEmailRecipient();
  const intervalRef = useRef<NodeJS.Timeout | null>(null);
  const keydownHandlerRef = useRef<((event: KeyboardEvent) => void) | null>(
    null,
  );

  useEffect(() => {
    return () => {
      if (intervalRef.current) {
        clearInterval(intervalRef.current);
        intervalRef.current = null;
      }
      if (keydownHandlerRef.current) {
        document.removeEventListener("keydown", keydownHandlerRef.current);
        keydownHandlerRef.current = null;
      }
    };
  }, []);

  const handleSubmitEmail = useCallback(
    async (email: string) => {
      try {
        await addEmailMutation.mutateAsync({
          params: {
            path: { meeting_id: meetingId },
          },
          body: {
            email,
          },
        });

        toaster.create({
          duration: 4000,
          render: () => (
            <Box
              p={4}
              bg="green.100"
              borderRadius="md"
              boxShadow="md"
              textAlign="center"
            >
              <Text fontWeight="medium">Email registered</Text>
              <Text fontSize="sm" color="gray.600">
                You will receive the transcript link when processing is
                complete.
              </Text>
            </Box>
          ),
        });
      } catch (error) {
        console.error("Error adding email recipient:", error);
      }
    },
    [addEmailMutation, meetingId],
  );

  const showEmailModal = useCallback(() => {
    if (modalOpen) return;

    setModalOpen(true);

    const toastId = toaster.create({
      placement: "top",
      duration: null,
      render: ({ dismiss }) => (
        <EmailTranscriptDialog
          onSubmit={(email) => {
            handleSubmitEmail(email);
            dismiss();
          }}
          onDismiss={() => {
            dismiss();
          }}
        />
      ),
    });

    const handleKeyDown = (event: KeyboardEvent) => {
      if (event.key === "Escape") {
        toastId.then((id) => toaster.dismiss(id));
      }
    };

    keydownHandlerRef.current = handleKeyDown;
    document.addEventListener("keydown", handleKeyDown);

    toastId.then((id) => {
|
||||
intervalRef.current = setInterval(() => {
|
||||
if (!toaster.isActive(id)) {
|
||||
setModalOpen(false);
|
||||
|
||||
if (intervalRef.current) {
|
||||
clearInterval(intervalRef.current);
|
||||
intervalRef.current = null;
|
||||
}
|
||||
|
||||
if (keydownHandlerRef.current) {
|
||||
document.removeEventListener("keydown", keydownHandlerRef.current);
|
||||
keydownHandlerRef.current = null;
|
||||
}
|
||||
}
|
||||
}, TOAST_CHECK_INTERVAL_MS);
|
||||
});
|
||||
}, [handleSubmitEmail, modalOpen]);
|
||||
|
||||
return {
|
||||
showEmailModal,
|
||||
};
|
||||
}
|
||||
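The hook above polls `toaster.isActive(id)` every `TOAST_CHECK_INTERVAL_MS` and, once the toast disappears, clears the interval and removes the keydown listener. The same polling pattern, extracted as a standalone sketch with an injectable scheduler so it can be exercised synchronously (all names below are hypothetical, not from the diff):

```typescript
// Sketch of the "poll until inactive, then clean up" pattern used by
// useEmailTranscriptDialog, with the timer made injectable for testing.
type Scheduler = {
  setInterval: (fn: () => void, ms: number) => number;
  clearInterval: (id: number) => void;
};

export function watchUntilInactive(
  isActive: () => boolean,
  onInactive: () => void,
  intervalMs: number,
  scheduler: Scheduler,
): void {
  const id = scheduler.setInterval(() => {
    if (!isActive()) {
      scheduler.clearInterval(id); // stop polling, mirroring the intervalRef cleanup
      onInactive();
    }
  }, intervalMs);
}
```

In the real hook the scheduler is just the browser's `setInterval`/`clearInterval`, and `onInactive` corresponds to `setModalOpen(false)` plus listener removal.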
@@ -1,5 +1,6 @@
import {
  FEATURE_BROWSE_ENV_NAME,
  FEATURE_EMAIL_TRANSCRIPT_ENV_NAME,
  FEATURE_PRIVACY_ENV_NAME,
  FEATURE_REQUIRE_LOGIN_ENV_NAME,
  FEATURE_ROOMS_ENV_NAME,
@@ -14,6 +15,7 @@ export const FEATURES = [
  "browse",
  "sendToZulip",
  "rooms",
  "emailTranscript",
] as const;

export type FeatureName = (typeof FEATURES)[number];
@@ -26,6 +28,7 @@ export const DEFAULT_FEATURES: Features = {
  browse: true,
  sendToZulip: true,
  rooms: true,
  emailTranscript: false,
} as const;

export const ENV_TO_FEATURE: {
@@ -36,6 +39,7 @@ export const ENV_TO_FEATURE: {
  FEATURE_BROWSE: "browse",
  FEATURE_SEND_TO_ZULIP: "sendToZulip",
  FEATURE_ROOMS: "rooms",
  FEATURE_EMAIL_TRANSCRIPT: "emailTranscript",
} as const;

export const FEATURE_TO_ENV: {
@@ -46,6 +50,7 @@ export const FEATURE_TO_ENV: {
  browse: "FEATURE_BROWSE",
  sendToZulip: "FEATURE_SEND_TO_ZULIP",
  rooms: "FEATURE_ROOMS",
  emailTranscript: "FEATURE_EMAIL_TRANSCRIPT",
};

const features = getClientEnv();
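The feature-flag tables above wire env var names to flag names, with the new `emailTranscript` flag defaulting to off. A sketch of how such a table could resolve against an environment follows; the real parsing lives in `getClientEnv()`, which this diff does not show, so the env-reading details below are assumptions:

```typescript
// Assumed sketch: apply an ENV_TO_FEATURE-style table over process.env,
// falling back to the DEFAULT_FEATURES-style defaults when a var is unset.
const ENV_TO_FEATURE_TABLE = {
  FEATURE_BROWSE: "browse",
  FEATURE_SEND_TO_ZULIP: "sendToZulip",
  FEATURE_ROOMS: "rooms",
  FEATURE_EMAIL_TRANSCRIPT: "emailTranscript",
} as const;

const DEFAULTS: Record<string, boolean> = {
  browse: true,
  sendToZulip: true,
  rooms: true,
  emailTranscript: false, // new flag ships off, matching DEFAULT_FEATURES
};

export function resolveFeatures(
  env: Record<string, string | undefined>,
): Record<string, boolean> {
  const features = { ...DEFAULTS };
  for (const [envName, feature] of Object.entries(ENV_TO_FEATURE_TABLE)) {
    const raw = env[envName];
    if (raw !== undefined) features[feature] = raw === "true";
  }
  return features;
}
```

With this shape, deployments opt into the email-transcript UI by setting `FEATURE_EMAIL_TRANSCRIPT=true`; everything else keeps its default.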
425 www/app/reflector-api.d.ts vendored
@@ -90,8 +90,6 @@ export interface paths {
     *
     * Both cloud and raw-tracks are started via REST API to bypass enable_recording limitation of allowing only 1 recording at a time.
     * Uses different instanceIds for cloud vs raw-tracks (same won't work)
     *
     * Note: No authentication required - anonymous users supported. TODO this is a DOS vector
     */
    post: operations["v1_start_recording"];
    delete?: never;
@@ -100,6 +98,26 @@ export interface paths {
    patch?: never;
    trace?: never;
  };
  "/v1/meetings/{meeting_id}/email-recipient": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    get?: never;
    put?: never;
    /**
     * Add Email Recipient
     * @description Add an email address to receive the transcript link when processing completes.
     */
    post: operations["v1_add_email_recipient"];
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
  "/v1/rooms": {
    parameters: {
      query?: never;
@@ -438,6 +456,23 @@ export interface paths {
    patch?: never;
    trace?: never;
  };
  "/v1/transcripts/{transcript_id}/email": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    get?: never;
    put?: never;
    /** Transcript Send Email */
    post: operations["v1_transcript_send_email"];
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
  "/v1/transcripts/{transcript_id}/audio/mp3": {
    parameters: {
      query?: never;
@@ -561,6 +596,40 @@ export interface paths {
    patch?: never;
    trace?: never;
  };
  "/v1/transcripts/{transcript_id}/download/zip": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    /** Transcript Download Zip */
    get: operations["v1_transcript_download_zip"];
    put?: never;
    post?: never;
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
  "/v1/transcripts/{transcript_id}/video/url": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    /** Transcript Get Video Url */
    get: operations["v1_transcript_get_video_url"];
    put?: never;
    post?: never;
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
  "/v1/transcripts/{transcript_id}/events": {
    parameters: {
      query?: never;
@@ -687,6 +756,23 @@ export interface paths {
    patch?: never;
    trace?: never;
  };
  "/v1/config": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    /** Get Config */
    get: operations["v1_get_config"];
    put?: never;
    post?: never;
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
  "/v1/zulip/streams": {
    parameters: {
      query?: never;
@@ -785,10 +871,35 @@ export interface paths {
    patch?: never;
    trace?: never;
  };
  "/v1/auth/login": {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    get?: never;
    put?: never;
    /** Login */
    post: operations["v1_login"];
    delete?: never;
    options?: never;
    head?: never;
    patch?: never;
    trace?: never;
  };
}
export type webhooks = Record<string, never>;
export interface components {
  schemas: {
    /** AddEmailRecipientRequest */
    AddEmailRecipientRequest: {
      /**
       * Email
       * Format: email
       */
      email: string;
    };
    /** ApiKeyResponse */
    ApiKeyResponse: {
      /**
@@ -816,10 +927,7 @@ export interface components {
    };
    /** Body_transcript_record_upload_v1_transcripts__transcript_id__record_upload_post */
    Body_transcript_record_upload_v1_transcripts__transcript_id__record_upload_post: {
      /**
       * Chunk
       * Format: binary
       */
      /** Chunk */
      chunk: string;
    };
    /** CalendarEventResponse */
@@ -868,6 +976,13 @@ export interface components {
       */
      updated_at: string;
    };
    /** ConfigResponse */
    ConfigResponse: {
      /** Zulip Enabled */
      zulip_enabled: boolean;
      /** Email Enabled */
      email_enabled: boolean;
    };
    /** CreateApiKeyRequest */
    CreateApiKeyRequest: {
      /** Name */
@@ -951,6 +1066,8 @@ export interface components {
       * @default false
       */
      skip_consent: boolean;
      /** Email Transcript To */
      email_transcript_to?: string | null;
    };
    /** CreateRoomMeeting */
    CreateRoomMeeting: {
@@ -1034,6 +1151,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
    };
    /** GetTranscriptSegmentTopic */
    GetTranscriptSegmentTopic: {
@@ -1182,6 +1306,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
      /** Participants */
      participants:
        | components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1247,6 +1378,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
      /** Participants */
      participants:
        | components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1313,6 +1451,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
      /** Participants */
      participants:
        | components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1386,6 +1531,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
      /** Participants */
      participants:
        | components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1461,6 +1613,13 @@ export interface components {
      audio_deleted?: boolean | null;
      /** Change Seq */
      change_seq?: number | null;
      /**
       * Has Cloud Video
       * @default false
       */
      has_cloud_video: boolean;
      /** Cloud Video Duration */
      cloud_video_duration?: number | null;
      /** Participants */
      participants:
        | components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1532,6 +1691,25 @@ export interface components {
      /** Reason */
      reason?: string | null;
    };
    /** LoginRequest */
    LoginRequest: {
      /** Email */
      email: string;
      /** Password */
      password: string;
    };
    /** LoginResponse */
    LoginResponse: {
      /** Access Token */
      access_token: string;
      /**
       * Token Type
       * @default bearer
       */
      token_type: string;
      /** Expires In */
      expires_in: number;
    };
    /** Meeting */
    Meeting: {
      /** Id */
@@ -1619,26 +1797,26 @@ export interface components {
      /** Items */
      items: components["schemas"]["GetTranscriptMinimal"][];
      /** Total */
      total?: number | null;
      total: number;
      /** Page */
      page: number | null;
      page: number;
      /** Size */
      size: number | null;
      size: number;
      /** Pages */
      pages?: number | null;
      pages: number;
    };
    /** Page[RoomDetails] */
    Page_RoomDetails_: {
      /** Items */
      items: components["schemas"]["RoomDetails"][];
      /** Total */
      total?: number | null;
      total: number;
      /** Page */
      page: number | null;
      page: number;
      /** Size */
      size: number | null;
      size: number;
      /** Pages */
      pages?: number | null;
      pages: number;
    };
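The `Page[...]` schemas above tighten `total`, `page`, `size`, and `pages` from nullable to required numbers, so clients can drop their null guards. The relationship the server upholds between these fields (likely via fastapi-pagination, inferred from the `Page[...]` naming) reduces to simple arithmetic:

```typescript
// Sketch: `pages` is derived from `total` and `size`; with the fields now
// required, this computation never has to handle null.
export function pageCount(total: number, size: number): number {
  return size > 0 ? Math.ceil(total / size) : 0;
}
```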
    /** Participant */
    Participant: {
@@ -1709,6 +1887,8 @@ export interface components {
       * @default false
       */
      skip_consent: boolean;
      /** Email Transcript To */
      email_transcript_to?: string | null;
    };
    /** RoomDetails */
    RoomDetails: {
@@ -1765,6 +1945,8 @@ export interface components {
       * @default false
       */
      skip_consent: boolean;
      /** Email Transcript To */
      email_transcript_to?: string | null;
      /** Webhook Url */
      webhook_url: string | null;
      /** Webhook Secret */
@@ -1849,6 +2031,16 @@ export interface components {
      /** Change Seq */
      change_seq?: number | null;
    };
    /** SendEmailRequest */
    SendEmailRequest: {
      /** Email */
      email: string;
    };
    /** SendEmailResponse */
    SendEmailResponse: {
      /** Sent */
      sent: number;
    };
    /**
     * SourceKind
     * @enum {string}
@@ -2129,6 +2321,8 @@ export interface components {
      platform?: ("whereby" | "daily") | null;
      /** Skip Consent */
      skip_consent?: boolean | null;
      /** Email Transcript To */
      email_transcript_to?: string | null;
    };
    /** UpdateTranscript */
    UpdateTranscript: {
@@ -2269,6 +2463,22 @@ export interface components {
      msg: string;
      /** Error Type */
      type: string;
      /** Input */
      input?: unknown;
      /** Context */
      ctx?: Record<string, never>;
    };
    /** VideoUrlResponse */
    VideoUrlResponse: {
      /** Url */
      url: string;
      /** Duration */
      duration?: number | null;
      /**
       * Content Type
       * @default video/mp4
       */
      content_type: string;
    };
    /** WebhookTestResult */
    WebhookTestResult: {
@@ -2479,6 +2689,41 @@ export interface operations {
      };
    };
  };
  v1_add_email_recipient: {
    parameters: {
      query?: never;
      header?: never;
      path: {
        meeting_id: string;
      };
      cookie?: never;
    };
    requestBody: {
      content: {
        "application/json": components["schemas"]["AddEmailRecipientRequest"];
      };
    };
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": unknown;
        };
      };
      /** @description Validation Error */
      422: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["HTTPValidationError"];
        };
      };
    };
  };
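Putting the new `v1_add_email_recipient` operation on the wire: a plain-fetch sketch of the request shape implied by the `/v1/meetings/{meeting_id}/email-recipient` path and `AddEmailRecipientRequest` schema above (no client library assumed; the helper name and base URL are hypothetical):

```typescript
// Hypothetical helper: builds the POST implied by the generated types above.
type BuiltRequest = {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
};

export function buildAddEmailRecipientRequest(
  baseUrl: string,
  meetingId: string,
  email: string,
): BuiltRequest {
  return {
    url: `${baseUrl}/v1/meetings/${encodeURIComponent(meetingId)}/email-recipient`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // AddEmailRecipientRequest: { email: string } with Format: email
    body: JSON.stringify({ email }),
  };
}
```

In the app itself this call goes through `useMeetingAddEmailRecipient().mutateAsync`, which supplies the same path parameter and body.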
  v1_rooms_list: {
    parameters: {
      query?: {
@@ -3311,6 +3556,41 @@ export interface operations {
      };
    };
  };
  v1_transcript_send_email: {
    parameters: {
      query?: never;
      header?: never;
      path: {
        transcript_id: string;
      };
      cookie?: never;
    };
    requestBody: {
      content: {
        "application/json": components["schemas"]["SendEmailRequest"];
      };
    };
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["SendEmailResponse"];
        };
      };
      /** @description Validation Error */
      422: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["HTTPValidationError"];
        };
      };
    };
  };
  v1_transcript_get_audio_mp3: {
    parameters: {
      query?: {
@@ -3682,6 +3962,70 @@ export interface operations {
      };
    };
  };
  v1_transcript_download_zip: {
    parameters: {
      query?: never;
      header?: never;
      path: {
        transcript_id: string;
      };
      cookie?: never;
    };
    requestBody?: never;
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": unknown;
        };
      };
      /** @description Validation Error */
      422: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["HTTPValidationError"];
        };
      };
    };
  };
  v1_transcript_get_video_url: {
    parameters: {
      query?: {
        token?: string | null;
      };
      header?: never;
      path: {
        transcript_id: string;
      };
      cookie?: never;
    };
    requestBody?: never;
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["VideoUrlResponse"];
        };
      };
      /** @description Validation Error */
      422: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["HTTPValidationError"];
        };
      };
    };
  };
  v1_transcript_get_websocket_events: {
    parameters: {
      query?: never;
@@ -3917,6 +4261,26 @@ export interface operations {
      };
    };
  };
  v1_get_config: {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    requestBody?: never;
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["ConfigResponse"];
        };
      };
    };
  };
  v1_zulip_get_streams: {
    parameters: {
      query?: never;
@@ -4021,4 +4385,37 @@ export interface operations {
      };
    };
  };
  v1_login: {
    parameters: {
      query?: never;
      header?: never;
      path?: never;
      cookie?: never;
    };
    requestBody: {
      content: {
        "application/json": components["schemas"]["LoginRequest"];
      };
    };
    responses: {
      /** @description Successful Response */
      200: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["LoginResponse"];
        };
      };
      /** @description Validation Error */
      422: {
        headers: {
          [name: string]: unknown;
        };
        content: {
          "application/json": components["schemas"]["HTTPValidationError"];
        };
      };
    };
  };
}
32 www/pnpm-lock.yaml generated
@@ -2993,8 +2993,8 @@ packages:
    resolution: {integrity: sha512-phv3E1Xl4tQOShqSte26C7Fl84EwUdZsyOuSSk9qtAGyyQs2s3jJzComh+Abf4g187lUUAvH+H26omrqia2aGg==}
    engines: {node: '>=10.13.0'}

  enhanced-resolve@5.20.0:
    resolution: {integrity: sha512-/ce7+jQ1PQ6rVXwe+jKEg5hW5ciicHwIQUagZkp6IufBoY3YDgdTTY1azVs0qoRgVmvsNB+rbjLJxDAeHHtwsQ==}
  enhanced-resolve@5.20.1:
    resolution: {integrity: sha512-Qohcme7V1inbAfvjItgw0EaxVX5q2rdVEZHRBrEQdRZTssLDGsL8Lwrznl8oQ/6kuTJONLaDcGjkNP247XEhcA==}
    engines: {node: '>=10.13.0'}

  err-code@3.0.1:
@@ -3257,8 +3257,8 @@ packages:
    resolution: {integrity: sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==}
    engines: {node: '>=16'}

  flatted@3.4.1:
    resolution: {integrity: sha512-IxfVbRFVlV8V/yRaGzk0UVIcsKKHMSfYw66T/u4nTwlWteQePsxe//LjudR1AMX4tZW3WFCh3Zqa/sjlqpbURQ==}
  flatted@3.4.2:
    resolution: {integrity: sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==}

  follow-redirects@1.15.11:
    resolution: {integrity: sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==}
@@ -4853,8 +4853,8 @@ packages:
    resolution: {integrity: sha512-vtA0uD4ibrYD793SOIAwlo8cj6haOeMHrGvwPxJsxH7CeIksqJ+3Zc06RvWTIFgiSqx4A3sOnTXpfAEE2Zyz6w==}
    engines: {node: '>=10.0.0'}

  socket.io-parser@4.2.5:
    resolution: {integrity: sha512-bPMmpy/5WWKHea5Y/jYAP6k74A+hvmRCQaJuJB6I/ML5JZq/KfNieUVo/3Mh7SAqn7TyFdIo6wqYHInG1MU1bQ==}
  socket.io-parser@4.2.6:
    resolution: {integrity: sha512-asJqbVBDsBCJx0pTqw3WfesSY0iRX+2xzWEWzrpcH7L6fLzrhyF8WPI8UaeM4YCuDfpwA/cgsdugMsmtz8EJeg==}
    engines: {node: '>=10.0.0'}

  source-map-js@1.2.1:
@@ -5029,8 +5029,8 @@ packages:
      uglify-js:
        optional: true

  terser@5.46.0:
    resolution: {integrity: sha512-jTwoImyr/QbOWFFso3YoU3ik0jBBDJ6JTOQiy/J2YxVJdZCc+5u7skhNwiOR3FQIygFqVUPHl7qbbxtjW2K3Qg==}
  terser@5.46.1:
    resolution: {integrity: sha512-vzCjQO/rgUuK9sf8VJZvjqiqiHFaZLnOiimmUuOKODxWL8mm/xua7viT7aqX7dgPY60otQjUotzFMmCB4VdmqQ==}
    engines: {node: '>=10'}
    hasBin: true

@@ -8757,7 +8757,7 @@ snapshots:
      graceful-fs: 4.2.11
      tapable: 2.3.0

  enhanced-resolve@5.20.0:
  enhanced-resolve@5.20.1:
    dependencies:
      graceful-fs: 4.2.11
      tapable: 2.3.0
@@ -9170,10 +9170,10 @@ snapshots:

  flat-cache@4.0.1:
    dependencies:
      flatted: 3.4.1
      flatted: 3.4.2
      keyv: 4.5.4

  flatted@3.4.1: {}
  flatted@3.4.2: {}

  follow-redirects@1.15.11: {}

@@ -11166,13 +11166,13 @@ snapshots:
      '@socket.io/component-emitter': 3.1.2
      debug: 4.3.7
      engine.io-client: 6.5.4
      socket.io-parser: 4.2.5
      socket.io-parser: 4.2.6
    transitivePeerDependencies:
      - bufferutil
      - supports-color
      - utf-8-validate

  socket.io-parser@4.2.5:
  socket.io-parser@4.2.6:
    dependencies:
      '@socket.io/component-emitter': 3.1.2
      debug: 4.4.3(supports-color@10.2.2)
@@ -11351,10 +11351,10 @@ snapshots:
      '@jridgewell/trace-mapping': 0.3.31
      jest-worker: 27.5.1
      schema-utils: 4.3.3
      terser: 5.46.0
      terser: 5.46.1
      webpack: 5.105.3

  terser@5.46.0:
  terser@5.46.1:
    dependencies:
      '@jridgewell/source-map': 0.3.11
      acorn: 8.16.0
@@ -11642,7 +11642,7 @@ snapshots:
      acorn-import-phases: 1.0.4(acorn@8.16.0)
      browserslist: 4.28.1
      chrome-trace-event: 1.0.4
      enhanced-resolve: 5.20.0
      enhanced-resolve: 5.20.1
      es-module-lexer: 2.0.0
      eslint-scope: 5.1.1
      events: 3.3.0
4 www/public/email-icon.svg Normal file
@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
  <rect x="2" y="4" width="20" height="16" rx="2"/>
  <path d="m22 7-8.97 5.7a1.94 1.94 0 0 1-2.06 0L2 7"/>
</svg>