Compare commits

..

6 Commits

Author SHA1 Message Date
Juan Diego García
74b9b97453 chore(main): release 0.40.0 (#921) 2026-03-20 15:57:59 -05:00
dependabot[bot]
9e37d60b3f build(deps): bump flatted (#922)
Bumps the npm_and_yarn group with 1 update in the /www directory: [flatted](https://github.com/WebReflection/flatted).


Updates `flatted` from 3.4.1 to 3.4.2
- [Commits](https://github.com/WebReflection/flatted/compare/v3.4.1...v3.4.2)

---
updated-dependencies:
- dependency-name: flatted
  dependency-version: 3.4.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 15:44:14 -05:00
Juan Diego García
55222ecc47 feat: allow participants to ask for email transcript (#923)
* feat: allow participants to ask for email transcript

* fix: set email update in a transaction
2026-03-20 15:43:58 -05:00
dependabot[bot]
41e7b3e84f build(deps): bump socket.io-parser (#918)
Bumps the npm_and_yarn group with 1 update in the /www directory: [socket.io-parser](https://github.com/socketio/socket.io).


Updates `socket.io-parser` from 4.2.5 to 4.2.6
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/socket.io-parser@4.2.5...socket.io-parser@4.2.6)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-version: 4.2.6
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 11:33:14 -05:00
dependabot[bot]
e5712a4168 build(deps): bump pypdf in /server in the uv group across 1 directory (#917)
Bumps the uv group with 1 update in the /server directory: [pypdf](https://github.com/py-pdf/pypdf).


Updates `pypdf` from 6.8.0 to 6.9.1
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.8.0...6.9.1)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 6.9.1
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 11:33:00 -05:00
Juan Diego García
a76f114378 feat: download files, show cloud video, soft deletion with no reprocessing (#920)
* fix: move UDP ports out of the macOS internal range

* feat: download files, show cloud video, soft deletion with no reprocessing
2026-03-20 11:04:53 -05:00
48 changed files with 2243 additions and 106 deletions

View File

@@ -1,5 +1,13 @@
# Changelog
## [0.40.0](https://github.com/GreyhavenHQ/reflector/compare/v0.39.0...v0.40.0) (2026-03-20)
### Features
* allow participants to ask for email transcript ([#923](https://github.com/GreyhavenHQ/reflector/issues/923)) ([55222ec](https://github.com/GreyhavenHQ/reflector/commit/55222ecc4736f99ad461f03a006c8d97b5876142))
* download files, show cloud video, soft deletion with no reprocessing ([#920](https://github.com/GreyhavenHQ/reflector/issues/920)) ([a76f114](https://github.com/GreyhavenHQ/reflector/commit/a76f1143783d3cf137a8847a851b72302e04445b))
## [0.39.0](https://github.com/GreyhavenHQ/reflector/compare/v0.38.2...v0.39.0) (2026-03-18)

View File

@@ -192,3 +192,8 @@ Modal.com integration for scalable ML processing:
## Pipeline/worker related info
If you need to do any worker/pipeline related work, search for "Pipeline" classes and their "create" or "build" methods to find the main processor sequence. Look for task orchestration patterns (like "chord", "group", or "chain") to identify the post-processing flow with parallel execution chains. This will give you an abstract view of how the processing pipeline is organized.
## Code Style
- Always put imports at the top of the file. Let ruff/pre-commit handle sorting and formatting of imports.
- Exception: In Hatchet pipeline task functions, DB controller imports (e.g., `transcripts_controller`, `meetings_controller`) stay as deferred/inline imports inside `fresh_db_connection()` blocks — this is intentional to avoid sharing DB connections across forked processes. Non-DB imports (utilities, services) should still go at the top of the file.
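The deferred-import exception above can be sketched with a self-contained stand-in (the real `fresh_db_connection` and controller modules live in this repo; everything below is illustrative):

```python
import asyncio
from contextlib import asynccontextmanager


# Stand-in for the repo's fresh_db_connection(): each forked worker process
# builds its own connection state instead of inheriting the parent's.
@asynccontextmanager
async def fresh_db_connection():
    connection = {"open": True}  # placeholder for a real DB connection
    try:
        yield connection
    finally:
        connection["open"] = False


async def pipeline_task() -> str:
    # The deferred import lives inside the connection block (PLC0415 is
    # suppressed in the real code); non-DB imports stay at the top of the file.
    async with fresh_db_connection() as conn:
        import json  # stand-in for a controller import like transcripts_controller

        return json.dumps({"connected": conn["open"]})
```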

View File

@@ -36,7 +36,7 @@ services:
restart: unless-stopped
ports:
- "127.0.0.1:1250:1250"
- "51000-51100:51000-51100/udp"
- "40000-40100:40000-40100/udp"
env_file:
- ./server/.env
environment:
@@ -50,7 +50,7 @@ services:
# HF_TOKEN needed for in-process pyannote diarization (--cpu mode)
HF_TOKEN: ${HF_TOKEN:-}
# WebRTC: fixed UDP port range for ICE candidates (mapped above)
WEBRTC_PORT_RANGE: "51000-51100"
WEBRTC_PORT_RANGE: "40000-40100"
# Hatchet workflow engine (always-on for processing pipelines)
HATCHET_CLIENT_SERVER_URL: ${HATCHET_CLIENT_SERVER_URL:-http://hatchet:8888}
HATCHET_CLIENT_HOST_PORT: ${HATCHET_CLIENT_HOST_PORT:-hatchet:7077}
@@ -308,6 +308,24 @@ services:
- web
- server
# ===========================================================
# Mailpit — local SMTP sink for testing email transcript notifications
# Start with: --profile mailpit
# Web UI at http://localhost:8025
# ===========================================================
mailpit:
image: axllent/mailpit:latest
profiles: [mailpit]
restart: unless-stopped
ports:
- "127.0.0.1:8025:8025" # Web UI
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:8025/api/v1/messages"]
interval: 10s
timeout: 3s
retries: 5
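The healthcheck above polls Mailpit's `/api/v1/messages` endpoint. A test could reuse that same endpoint to assert a transcript email arrived; a minimal sketch of parsing its payload (response shape assumed from the endpoint name — a JSON object with a `messages` list whose entries carry a `Subject` field):

```python
import json


def message_subjects(payload: str) -> list[str]:
    # Extract the Subject of each captured message from a Mailpit-style
    # /api/v1/messages response body (shape assumed here).
    data = json.loads(payload)
    return [m.get("Subject", "") for m in data.get("messages", [])]


sample = '{"total": 1, "messages": [{"Subject": "Transcript Ready: Standup"}]}'
```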
# ===========================================================
# Hatchet workflow engine + workers
# Required for all processing pipelines (file, live, Daily.co multitrack).

View File

@@ -13,14 +13,25 @@
# Optional:
# LLM_MODEL — Model name (default: qwen2.5:14b)
#
# Flags:
# --build — Rebuild backend Docker images (server, workers, test-runner)
#
# Usage:
# export LLM_URL="https://api.openai.com/v1"
# export LLM_API_KEY="sk-..."
# export HF_TOKEN="hf_..."
# ./scripts/run-integration-tests.sh
# ./scripts/run-integration-tests.sh --build # rebuild backend images
#
set -euo pipefail
BUILD_FLAG=""
for arg in "$@"; do
case "$arg" in
--build) BUILD_FLAG="--build" ;;
esac
done
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
COMPOSE_DIR="$REPO_ROOT/server/tests"
@@ -66,7 +77,7 @@ trap cleanup EXIT
# ── Step 1: Build and start infrastructure ──────────────────────────────────
info "Building and starting infrastructure services..."
$COMPOSE up -d --build postgres redis garage hatchet mock-daily
$COMPOSE up -d --build postgres redis garage hatchet mock-daily mailpit
# ── Step 2: Set up Garage (S3 bucket + keys) ───────────────────────────────
wait_for "Garage" "$COMPOSE exec -T garage /garage stats" 60
@@ -116,7 +127,7 @@ ok "Hatchet token generated"
# ── Step 4: Start backend services ──────────────────────────────────────────
info "Starting backend services..."
$COMPOSE up -d server worker hatchet-worker-cpu hatchet-worker-llm test-runner
$COMPOSE up -d $BUILD_FLAG server worker hatchet-worker-cpu hatchet-worker-llm test-runner
# ── Step 5: Wait for server + run migrations ────────────────────────────────
wait_for "Server" "$COMPOSE exec -T test-runner curl -sf http://server:1250/health" 60

View File

@@ -0,0 +1,47 @@
"""add soft delete fields to transcript and recording
Revision ID: 501c73a6b0d5
Revises: e1f093f7f124
Create Date: 2026-03-19 00:00:00.000000
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
revision: str = "501c73a6b0d5"
down_revision: Union[str, None] = "e1f093f7f124"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column(
"transcript",
sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
)
op.add_column(
"recording",
sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
)
op.create_index(
"idx_transcript_not_deleted",
"transcript",
["id"],
postgresql_where=sa.text("deleted_at IS NULL"),
)
op.create_index(
"idx_recording_not_deleted",
"recording",
["id"],
postgresql_where=sa.text("deleted_at IS NULL"),
)
def downgrade() -> None:
op.drop_index("idx_recording_not_deleted", table_name="recording")
op.drop_index("idx_transcript_not_deleted", table_name="transcript")
op.drop_column("recording", "deleted_at")
op.drop_column("transcript", "deleted_at")

View File

@@ -0,0 +1,29 @@
"""add email_recipients to meeting
Revision ID: a2b3c4d5e6f7
Revises: 501c73a6b0d5
Create Date: 2026-03-20 00:00:00.000000
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects.postgresql import JSONB
revision: str = "a2b3c4d5e6f7"
down_revision: Union[str, None] = "501c73a6b0d5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column(
"meeting",
sa.Column("email_recipients", JSONB, nullable=True),
)
def downgrade() -> None:
op.drop_column("meeting", "email_recipients")

View File

@@ -40,6 +40,8 @@ dependencies = [
"icalendar>=6.0.0",
"hatchet-sdk==1.22.16",
"pydantic>=2.12.5",
"aiosmtplib>=3.0.0",
"email-validator>=2.0.0",
]
[dependency-groups]

View File

@@ -19,12 +19,14 @@ from reflector.views.rooms import router as rooms_router
from reflector.views.rtc_offer import router as rtc_offer_router
from reflector.views.transcripts import router as transcripts_router
from reflector.views.transcripts_audio import router as transcripts_audio_router
from reflector.views.transcripts_download import router as transcripts_download_router
from reflector.views.transcripts_participants import (
router as transcripts_participants_router,
)
from reflector.views.transcripts_process import router as transcripts_process_router
from reflector.views.transcripts_speaker import router as transcripts_speaker_router
from reflector.views.transcripts_upload import router as transcripts_upload_router
from reflector.views.transcripts_video import router as transcripts_video_router
from reflector.views.transcripts_webrtc import router as transcripts_webrtc_router
from reflector.views.transcripts_websocket import router as transcripts_websocket_router
from reflector.views.user import router as user_router
@@ -97,6 +99,8 @@ app.include_router(transcripts_audio_router, prefix="/v1")
app.include_router(transcripts_participants_router, prefix="/v1")
app.include_router(transcripts_speaker_router, prefix="/v1")
app.include_router(transcripts_upload_router, prefix="/v1")
app.include_router(transcripts_download_router, prefix="/v1")
app.include_router(transcripts_video_router, prefix="/v1")
app.include_router(transcripts_websocket_router, prefix="/v1")
app.include_router(transcripts_webrtc_router, prefix="/v1")
app.include_router(transcripts_process_router, prefix="/v1")

View File

@@ -1,3 +1,4 @@
from contextlib import asynccontextmanager
from datetime import datetime, timedelta
from typing import Any, Literal
@@ -66,6 +67,8 @@ meetings = sa.Table(
# Daily.co composed video (Brady Bunch grid layout) - Daily.co only, not Whereby
sa.Column("daily_composed_video_s3_key", sa.String, nullable=True),
sa.Column("daily_composed_video_duration", sa.Integer, nullable=True),
# Email recipients for transcript notification
sa.Column("email_recipients", JSONB, nullable=True),
sa.Index("idx_meeting_room_id", "room_id"),
sa.Index("idx_meeting_calendar_event", "calendar_event_id"),
)
@@ -116,6 +119,8 @@ class Meeting(BaseModel):
# Daily.co composed video (Brady Bunch grid) - Daily.co only
daily_composed_video_s3_key: str | None = None
daily_composed_video_duration: int | None = None
# Email recipients for transcript notification
email_recipients: list[str] | None = None
class MeetingController:
@@ -388,6 +393,24 @@ class MeetingController:
# If was_null=False, the WHERE clause prevented the update
return was_null
@asynccontextmanager
async def transaction(self):
"""A context manager for database transaction."""
async with get_database().transaction(isolation="serializable"):
yield
async def add_email_recipient(self, meeting_id: str, email: str) -> list[str]:
"""Add an email to the meeting's email_recipients list (no duplicates)."""
async with self.transaction():
meeting = await self.get_by_id(meeting_id)
if not meeting:
raise ValueError(f"Meeting {meeting_id} not found")
current = meeting.email_recipients or []
if email not in current:
current.append(email)
await self.update_meeting(meeting_id, email_recipients=current)
return current
async def increment_num_clients(self, meeting_id: str) -> None:
"""Atomically increment participant count."""
query = (

View File

@@ -1,4 +1,4 @@
from datetime import datetime
from datetime import datetime, timezone
from typing import Literal
import sqlalchemy as sa
@@ -24,6 +24,7 @@ recordings = sa.Table(
),
sa.Column("meeting_id", sa.String),
sa.Column("track_keys", sa.JSON, nullable=True),
sa.Column("deleted_at", sa.DateTime(timezone=True), nullable=True),
sa.Index("idx_recording_meeting_id", "meeting_id"),
)
@@ -40,6 +41,7 @@ class Recording(BaseModel):
# track_keys can be empty list [] if recording finished but no audio was captured (silence/muted)
# None means not a multitrack recording, [] means multitrack with no tracks
track_keys: list[str] | None = None
deleted_at: datetime | None = None
@property
def is_multitrack(self) -> bool:
@@ -69,7 +71,11 @@ class RecordingController:
return Recording(**result) if result else None
async def remove_by_id(self, id: str) -> None:
query = recordings.delete().where(recordings.c.id == id)
query = (
recordings.update()
.where(recordings.c.id == id)
.values(deleted_at=datetime.now(timezone.utc))
)
await get_database().execute(query)
async def set_meeting_id(
@@ -114,6 +120,7 @@ class RecordingController:
.where(
recordings.c.bucket_name == bucket_name,
recordings.c.track_keys.isnot(None),
recordings.c.deleted_at.is_(None),
or_(
transcripts.c.id.is_(None),
transcripts.c.status == "error",

View File

@@ -387,6 +387,8 @@ class SearchController:
transcripts.join(rooms, transcripts.c.room_id == rooms.c.id, isouter=True)
)
base_query = base_query.where(transcripts.c.deleted_at.is_(None))
if params.query_text is not None:
# because already initialized based on params.query_text presence above
assert search_query is not None

View File

@@ -91,6 +91,7 @@ transcripts = sqlalchemy.Table(
sqlalchemy.Column("webvtt", sqlalchemy.Text),
# Hatchet workflow run ID for resumption of failed workflows
sqlalchemy.Column("workflow_run_id", sqlalchemy.String),
sqlalchemy.Column("deleted_at", sqlalchemy.DateTime(timezone=True), nullable=True),
sqlalchemy.Column(
"change_seq",
sqlalchemy.BigInteger,
@@ -238,6 +239,7 @@ class Transcript(BaseModel):
webvtt: str | None = None
workflow_run_id: str | None = None # Hatchet workflow run ID for resumption
change_seq: int | None = None
deleted_at: datetime | None = None
@field_serializer("created_at", when_used="json")
def serialize_datetime(self, dt: datetime) -> str:
@@ -418,6 +420,8 @@ class TranscriptController:
rooms, transcripts.c.room_id == rooms.c.id, isouter=True
)
query = query.where(transcripts.c.deleted_at.is_(None))
if user_id:
query = query.where(
or_(transcripts.c.user_id == user_id, rooms.c.is_shared)
@@ -500,7 +504,10 @@ class TranscriptController:
"""
Get transcripts by room_id (direct access without joins)
"""
query = transcripts.select().where(transcripts.c.room_id == room_id)
query = transcripts.select().where(
transcripts.c.room_id == room_id,
transcripts.c.deleted_at.is_(None),
)
if "user_id" in kwargs:
query = query.where(transcripts.c.user_id == kwargs["user_id"])
if "order_by" in kwargs:
@@ -531,8 +538,11 @@ class TranscriptController:
if not result:
raise HTTPException(status_code=404, detail="Transcript not found")
# if the transcript is anonymous, share mode is not checked
transcript = Transcript(**result)
if transcript.deleted_at is not None:
raise HTTPException(status_code=404, detail="Transcript not found")
# if the transcript is anonymous, share mode is not checked
if transcript.user_id is None:
return transcript
@@ -632,56 +642,49 @@ class TranscriptController:
user_id: str | None = None,
) -> None:
"""
Remove a transcript by id
Soft-delete a transcript by id.
Sets deleted_at on the transcript and its associated recording.
All files (S3 and local) are preserved for later retrieval.
"""
transcript = await self.get_by_id(transcript_id)
if not transcript:
return
if user_id is not None and transcript.user_id != user_id:
return
if transcript.audio_location == "storage" and not transcript.audio_deleted:
try:
await get_transcripts_storage().delete_file(
transcript.storage_audio_path
)
except Exception as e:
logger.warning(
"Failed to delete transcript audio from storage",
exc_info=e,
transcript_id=transcript.id,
)
transcript.unlink()
if transcript.deleted_at is not None:
return
now = datetime.now(timezone.utc)
# Soft-delete the associated recording (keeps S3 files intact)
if transcript.recording_id:
try:
recording = await recordings_controller.get_by_id(
transcript.recording_id
)
if recording:
try:
await get_transcripts_storage().delete_file(
recording.object_key, bucket=recording.bucket_name
)
except Exception as e:
logger.warning(
"Failed to delete recording object from S3",
exc_info=e,
recording_id=transcript.recording_id,
)
await recordings_controller.remove_by_id(transcript.recording_id)
await recordings_controller.remove_by_id(transcript.recording_id)
except Exception as e:
logger.warning(
"Failed to delete recording row",
"Failed to soft-delete recording",
exc_info=e,
recording_id=transcript.recording_id,
)
query = transcripts.delete().where(transcripts.c.id == transcript_id)
# Soft-delete the transcript (keeps all files intact)
query = (
transcripts.update()
.where(transcripts.c.id == transcript_id)
.values(deleted_at=now)
)
await get_database().execute(query)
async def remove_by_recording_id(self, recording_id: str):
"""
Remove a transcript by recording_id
Soft-delete a transcript by recording_id
"""
query = transcripts.delete().where(transcripts.c.recording_id == recording_id)
query = (
transcripts.update()
.where(transcripts.c.recording_id == recording_id)
.values(deleted_at=datetime.now(timezone.utc))
)
await get_database().execute(query)
@staticmethod

server/reflector/email.py (new file, 84 lines)
View File

@@ -0,0 +1,84 @@
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import aiosmtplib
import structlog
from reflector.db.transcripts import Transcript
from reflector.settings import settings
logger = structlog.get_logger(__name__)
def is_email_configured() -> bool:
return bool(settings.SMTP_HOST and settings.SMTP_FROM_EMAIL)
def get_transcript_url(transcript: Transcript) -> str:
return f"{settings.UI_BASE_URL}/transcripts/{transcript.id}"
def _build_plain_text(transcript: Transcript, url: str) -> str:
title = transcript.title or "Unnamed recording"
lines = [
f"Your transcript is ready: {title}",
"",
f"View it here: {url}",
]
if transcript.short_summary:
lines.extend(["", "Summary:", transcript.short_summary])
return "\n".join(lines)
def _build_html(transcript: Transcript, url: str) -> str:
title = transcript.title or "Unnamed recording"
summary_html = ""
if transcript.short_summary:
summary_html = f"<p style='color:#555;'>{transcript.short_summary}</p>"
return f"""\
<div style="font-family:sans-serif;max-width:600px;margin:0 auto;">
<h2>Your transcript is ready</h2>
<p><strong>{title}</strong></p>
{summary_html}
<p><a href="{url}" style="display:inline-block;padding:10px 20px;background:#4A90D9;color:#fff;text-decoration:none;border-radius:4px;">View Transcript</a></p>
<p style="color:#999;font-size:12px;">This email was sent because you requested to receive the transcript from a meeting.</p>
</div>"""
async def send_transcript_email(to_emails: list[str], transcript: Transcript) -> int:
"""Send transcript notification to all emails. Returns count sent."""
if not is_email_configured() or not to_emails:
return 0
url = get_transcript_url(transcript)
title = transcript.title or "Unnamed recording"
sent = 0
for email_addr in to_emails:
msg = MIMEMultipart("alternative")
msg["Subject"] = f"Transcript Ready: {title}"
msg["From"] = settings.SMTP_FROM_EMAIL
msg["To"] = email_addr
msg.attach(MIMEText(_build_plain_text(transcript, url), "plain"))
msg.attach(MIMEText(_build_html(transcript, url), "html"))
try:
await aiosmtplib.send(
msg,
hostname=settings.SMTP_HOST,
port=settings.SMTP_PORT,
username=settings.SMTP_USERNAME,
password=settings.SMTP_PASSWORD,
start_tls=settings.SMTP_USE_TLS,
)
sent += 1
except Exception:
logger.exception(
"Failed to send transcript email",
to=email_addr,
transcript_id=transcript.id,
)
return sent

View File

@@ -21,6 +21,7 @@ class TaskName(StrEnum):
CLEANUP_CONSENT = "cleanup_consent"
POST_ZULIP = "post_zulip"
SEND_WEBHOOK = "send_webhook"
SEND_EMAIL = "send_email"
PAD_TRACK = "pad_track"
TRANSCRIBE_TRACK = "transcribe_track"
DETECT_CHUNK_TOPIC = "detect_chunk_topic"
@@ -59,7 +60,7 @@ TIMEOUT_AUDIO = 720 # Audio processing: padding, mixdown (Hatchet execution_tim
TIMEOUT_AUDIO_HTTP = (
660 # httpx timeout for pad_track — below 720 so Hatchet doesn't race
)
TIMEOUT_HEAVY = 600 # Transcription, fan-out LLM tasks (Hatchet execution_timeout)
TIMEOUT_HEAVY = 1200 # Transcription, fan-out LLM tasks (Hatchet execution_timeout)
TIMEOUT_HEAVY_HTTP = (
540 # httpx timeout for transcribe_track — below 600 so Hatchet doesn't race
1150 # httpx timeout for transcribe_track — below 1200 so Hatchet doesn't race
)

View File

@@ -33,6 +33,7 @@ from hatchet_sdk.labels import DesiredWorkerLabel
from pydantic import BaseModel
from reflector.dailyco_api.client import DailyApiClient
from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.broadcast import (
append_event_and_broadcast,
set_status_and_broadcast,
@@ -51,6 +52,7 @@ from reflector.hatchet.error_classification import is_non_retryable
from reflector.hatchet.workflows.models import (
ActionItemsResult,
ConsentResult,
EmailResult,
FinalizeResult,
MixdownResult,
PaddedTrackInfo,
@@ -1465,6 +1467,52 @@ async def send_webhook(input: PipelineInput, ctx: Context) -> WebhookResult:
return WebhookResult(webhook_sent=False)
@daily_multitrack_pipeline.task(
parents=[cleanup_consent],
execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
retries=5,
backoff_factor=2.0,
backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: PipelineInput, ctx: Context) -> EmailResult:
"""Send transcript email to collected recipients."""
ctx.log(f"send_email: transcript_id={input.transcript_id}")
if not is_email_configured():
ctx.log("send_email skipped (SMTP not configured)")
return EmailResult(skipped=True)
async with fresh_db_connection():
from reflector.db.meetings import meetings_controller # noqa: PLC0415
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
ctx.log("send_email skipped (transcript not found)")
return EmailResult(skipped=True)
meeting = None
if transcript.meeting_id:
meeting = await meetings_controller.get_by_id(transcript.meeting_id)
if not meeting and transcript.recording_id:
recording = await recordings_controller.get_by_id(transcript.recording_id)
if recording and recording.meeting_id:
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting or not meeting.email_recipients:
ctx.log("send_email skipped (no email recipients)")
return EmailResult(skipped=True)
await transcripts_controller.update(transcript, {"share_mode": "public"})
count = await send_transcript_email(meeting.email_recipients, transcript)
ctx.log(f"send_email complete: sent {count} emails")
return EmailResult(emails_sent=count)
async def on_workflow_failure(input: PipelineInput, ctx: Context) -> None:
"""Run when the workflow is truly dead (all retries exhausted).

View File

@@ -18,6 +18,7 @@ from pathlib import Path
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.broadcast import (
append_event_and_broadcast,
set_status_and_broadcast,
@@ -37,6 +38,7 @@ from reflector.hatchet.workflows.daily_multitrack_pipeline import (
)
from reflector.hatchet.workflows.models import (
ConsentResult,
EmailResult,
TitleResult,
TopicsResult,
WaveformResult,
@@ -859,6 +861,54 @@ async def send_webhook(input: FilePipelineInput, ctx: Context) -> WebhookResult:
return WebhookResult(webhook_sent=False)
@file_pipeline.task(
parents=[cleanup_consent],
execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
retries=5,
backoff_factor=2.0,
backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: FilePipelineInput, ctx: Context) -> EmailResult:
"""Send transcript email to collected recipients."""
ctx.log(f"send_email: transcript_id={input.transcript_id}")
if not is_email_configured():
ctx.log("send_email skipped (SMTP not configured)")
return EmailResult(skipped=True)
async with fresh_db_connection():
from reflector.db.meetings import meetings_controller # noqa: PLC0415
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
ctx.log("send_email skipped (transcript not found)")
return EmailResult(skipped=True)
# Try transcript.meeting_id first, then fall back to recording.meeting_id
meeting = None
if transcript.meeting_id:
meeting = await meetings_controller.get_by_id(transcript.meeting_id)
if not meeting and transcript.recording_id:
recording = await recordings_controller.get_by_id(transcript.recording_id)
if recording and recording.meeting_id:
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting or not meeting.email_recipients:
ctx.log("send_email skipped (no email recipients)")
return EmailResult(skipped=True)
# Set transcript to public so the link works for anyone
await transcripts_controller.update(transcript, {"share_mode": "public"})
count = await send_transcript_email(meeting.email_recipients, transcript)
ctx.log(f"send_email complete: sent {count} emails")
return EmailResult(emails_sent=count)
# --- On failure handler ---

View File

@@ -17,6 +17,7 @@ from datetime import timedelta
from hatchet_sdk import Context
from pydantic import BaseModel
from reflector.email import is_email_configured, send_transcript_email
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import (
TIMEOUT_HEAVY,
@@ -32,6 +33,7 @@ from reflector.hatchet.workflows.daily_multitrack_pipeline import (
)
from reflector.hatchet.workflows.models import (
ConsentResult,
EmailResult,
TitleResult,
WaveformResult,
WebhookResult,
@@ -361,6 +363,52 @@ async def send_webhook(input: LivePostPipelineInput, ctx: Context) -> WebhookRes
return WebhookResult(webhook_sent=False)
@live_post_pipeline.task(
parents=[final_summaries],
execution_timeout=timedelta(seconds=TIMEOUT_SHORT),
retries=5,
backoff_factor=2.0,
backoff_max_seconds=15,
)
@with_error_handling(TaskName.SEND_EMAIL, set_error_status=False)
async def send_email(input: LivePostPipelineInput, ctx: Context) -> EmailResult:
"""Send transcript email to collected recipients."""
ctx.log(f"send_email: transcript_id={input.transcript_id}")
if not is_email_configured():
ctx.log("send_email skipped (SMTP not configured)")
return EmailResult(skipped=True)
async with fresh_db_connection():
from reflector.db.meetings import meetings_controller # noqa: PLC0415
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
ctx.log("send_email skipped (transcript not found)")
return EmailResult(skipped=True)
meeting = None
if transcript.meeting_id:
meeting = await meetings_controller.get_by_id(transcript.meeting_id)
if not meeting and transcript.recording_id:
recording = await recordings_controller.get_by_id(transcript.recording_id)
if recording and recording.meeting_id:
meeting = await meetings_controller.get_by_id(recording.meeting_id)
if not meeting or not meeting.email_recipients:
ctx.log("send_email skipped (no email recipients)")
return EmailResult(skipped=True)
await transcripts_controller.update(transcript, {"share_mode": "public"})
count = await send_transcript_email(meeting.email_recipients, transcript)
ctx.log(f"send_email complete: sent {count} emails")
return EmailResult(emails_sent=count)
# --- On failure handler ---

View File

@@ -170,3 +170,10 @@ class WebhookResult(BaseModel):
webhook_sent: bool
skipped: bool = False
response_code: int | None = None
class EmailResult(BaseModel):
"""Result from send_email task."""
emails_sent: int = 0
skipped: bool = False

View File

@@ -195,6 +195,14 @@ class Settings(BaseSettings):
ZULIP_API_KEY: str | None = None
ZULIP_BOT_EMAIL: str | None = None
# Email / SMTP integration (for transcript email notifications)
SMTP_HOST: str | None = None
SMTP_PORT: int = 587
SMTP_USERNAME: str | None = None
SMTP_PASSWORD: str | None = None
SMTP_FROM_EMAIL: str | None = None
SMTP_USE_TLS: bool = True
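For local testing against the Mailpit service added above, a hypothetical `.env` fragment might look like this (values are illustrative; Mailpit's SMTP listener defaults to port 1025 with no TLS, which would also need a port mapping in docker-compose — the diff only maps the 8025 web UI):

```shell
SMTP_HOST=localhost
SMTP_PORT=1025
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_EMAIL=reflector@example.test
SMTP_USE_TLS=false
```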
# Hatchet workflow orchestration (always enabled for multitrack processing)
HATCHET_CLIENT_TOKEN: str | None = None
HATCHET_CLIENT_TLS_STRATEGY: str = "none" # none, tls, mtls

View File

@@ -0,0 +1,257 @@
#!/usr/bin/env python
"""
CLI tool for managing soft-deleted transcripts.
Usage:
uv run python -m reflector.tools.deleted_transcripts list
uv run python -m reflector.tools.deleted_transcripts files <transcript_id>
uv run python -m reflector.tools.deleted_transcripts download <transcript_id> [--output-dir ./]
"""
import argparse
import asyncio
import json
import os

import structlog

from reflector.db import get_database
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.transcripts import Transcript, transcripts
from reflector.storage import get_source_storage, get_transcripts_storage

logger = structlog.get_logger(__name__)


async def list_deleted():
    """List all soft-deleted transcripts."""
    database = get_database()
    await database.connect()
    try:
        query = (
            transcripts.select()
            .where(transcripts.c.deleted_at.isnot(None))
            .order_by(transcripts.c.deleted_at.desc())
        )
        results = await database.fetch_all(query)
        if not results:
            print("No deleted transcripts found.")
            return
        print(
            f"{'ID':<40} {'Title':<40} {'Deleted At':<28} {'Recording ID':<40} {'Meeting ID'}"
        )
        print("-" * 180)
        for row in results:
            t = Transcript(**row)
            title = (t.title or "")[:38]
            deleted = t.deleted_at.isoformat() if t.deleted_at else ""
            print(
                f"{t.id:<40} {title:<40} {deleted:<28} {t.recording_id or '':<40} {t.meeting_id or ''}"
            )
        print(f"\nTotal: {len(results)} deleted transcript(s)")
    finally:
        await database.disconnect()


async def list_files(transcript_id: str):
    """List all S3 keys associated with a deleted transcript."""
    database = get_database()
    await database.connect()
    try:
        query = transcripts.select().where(transcripts.c.id == transcript_id)
        result = await database.fetch_one(query)
        if not result:
            print(f"Transcript {transcript_id} not found.")
            return
        t = Transcript(**result)
        if t.deleted_at is None:
            print(f"Transcript {transcript_id} is not deleted.")
            return
        print(f"Transcript: {t.id}")
        print(f"Title: {t.title}")
        print(f"Deleted at: {t.deleted_at}")
        print()
        files = []
        # Transcript audio
        if t.audio_location == "storage" and not t.audio_deleted:
            files.append(("Transcript audio", t.storage_audio_path, None))
        # Recording files
        if t.recording_id:
            recording = await recordings_controller.get_by_id(t.recording_id)
            if recording:
                if recording.object_key:
                    files.append(
                        (
                            "Recording object_key",
                            recording.object_key,
                            recording.bucket_name,
                        )
                    )
                if recording.track_keys:
                    for i, key in enumerate(recording.track_keys):
                        files.append((f"Track {i}", key, recording.bucket_name))
        # Cloud video
        if t.meeting_id:
            meeting = await meetings_controller.get_by_id(t.meeting_id)
            if meeting and meeting.daily_composed_video_s3_key:
                files.append(("Cloud video", meeting.daily_composed_video_s3_key, None))
        if not files:
            print("No associated files found.")
            return
        print(f"{'Type':<25} {'Bucket':<30} {'S3 Key'}")
        print("-" * 120)
        for label, key, bucket in files:
            print(f"{label:<25} {bucket or '(default)':<30} {key}")
        # Generate presigned URLs
        print("\nPresigned URLs (valid for 1 hour):")
        print("-" * 120)
        storage = get_transcripts_storage()
        for label, key, bucket in files:
            try:
                url = await storage.get_file_url(key, bucket=bucket, expires_in=3600)
                print(f"{label}: {url}")
            except Exception as e:
                print(f"{label}: ERROR - {e}")
    finally:
        await database.disconnect()


async def download_files(transcript_id: str, output_dir: str):
    """Download all files associated with a deleted transcript."""
    database = get_database()
    await database.connect()
    try:
        query = transcripts.select().where(transcripts.c.id == transcript_id)
        result = await database.fetch_one(query)
        if not result:
            print(f"Transcript {transcript_id} not found.")
            return
        t = Transcript(**result)
        if t.deleted_at is None:
            print(f"Transcript {transcript_id} is not deleted.")
            return
        dest = os.path.join(output_dir, t.id)
        os.makedirs(dest, exist_ok=True)
        storage = get_transcripts_storage()
        # Download transcript audio
        if t.audio_location == "storage" and not t.audio_deleted:
            try:
                data = await storage.get_file(t.storage_audio_path)
                path = os.path.join(dest, "audio.mp3")
                with open(path, "wb") as f:
                    f.write(data)
                print(f"Downloaded: {path}")
            except Exception as e:
                print(f"Failed to download audio: {e}")
        # Download recording files
        if t.recording_id:
            recording = await recordings_controller.get_by_id(t.recording_id)
            if recording and recording.track_keys:
                tracks_dir = os.path.join(dest, "tracks")
                os.makedirs(tracks_dir, exist_ok=True)
                for i, key in enumerate(recording.track_keys):
                    try:
                        data = await storage.get_file(key, bucket=recording.bucket_name)
                        filename = os.path.basename(key) or f"track_{i}"
                        path = os.path.join(tracks_dir, filename)
                        with open(path, "wb") as f:
                            f.write(data)
                        print(f"Downloaded: {path}")
                    except Exception as e:
                        print(f"Failed to download track {i}: {e}")
        # Download cloud video
        if t.meeting_id:
            meeting = await meetings_controller.get_by_id(t.meeting_id)
            if meeting and meeting.daily_composed_video_s3_key:
                try:
                    source_storage = get_source_storage("daily")
                    data = await source_storage.get_file(
                        meeting.daily_composed_video_s3_key
                    )
                    path = os.path.join(dest, "cloud_video.mp4")
                    with open(path, "wb") as f:
                        f.write(data)
                    print(f"Downloaded: {path}")
                except Exception as e:
                    print(f"Failed to download cloud video: {e}")
        # Write metadata
        metadata = {
            "id": t.id,
            "title": t.title,
            "created_at": t.created_at.isoformat() if t.created_at else None,
            "deleted_at": t.deleted_at.isoformat() if t.deleted_at else None,
            "duration": t.duration,
            "source_language": t.source_language,
            "target_language": t.target_language,
            "short_summary": t.short_summary,
            "long_summary": t.long_summary,
            "topics": [topic.model_dump() for topic in t.topics] if t.topics else [],
            "participants": [p.model_dump() for p in t.participants]
            if t.participants
            else [],
            "action_items": t.action_items,
            "webvtt": t.webvtt,
            "recording_id": t.recording_id,
            "meeting_id": t.meeting_id,
        }
        path = os.path.join(dest, "metadata.json")
        with open(path, "w") as f:
            json.dump(metadata, f, indent=2, default=str)
        print(f"Downloaded: {path}")
        print(f"\nAll files saved to: {dest}")
    finally:
        await database.disconnect()


def main():
    parser = argparse.ArgumentParser(description="Manage soft-deleted transcripts")
    subparsers = parser.add_subparsers(dest="command", required=True)
    subparsers.add_parser("list", help="List all deleted transcripts")
    files_parser = subparsers.add_parser(
        "files", help="List S3 keys for a deleted transcript"
    )
    files_parser.add_argument("transcript_id", help="Transcript ID")
    download_parser = subparsers.add_parser(
        "download", help="Download files for a deleted transcript"
    )
    download_parser.add_argument("transcript_id", help="Transcript ID")
    download_parser.add_argument(
        "--output-dir", default=".", help="Output directory (default: .)"
    )
    args = parser.parse_args()
    if args.command == "list":
        asyncio.run(list_deleted())
    elif args.command == "files":
        asyncio.run(list_files(args.transcript_id))
    elif args.command == "download":
        asyncio.run(download_files(args.transcript_id, args.output_dir))


if __name__ == "__main__":
    main()
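The metadata dump above relies on `json.dump(..., default=str)` so that datetimes, which are not natively JSON-serializable, fall back to their `str()` form. A minimal standalone sketch of that behavior (the values here are made up, not from the tool):

```python
import json
from datetime import datetime, timezone

# default=str is invoked for any value json cannot encode natively,
# so datetime objects are written out as their str() representation.
metadata = {
    "id": "abc123",
    "created_at": datetime(2026, 3, 20, 15, 0, tzinfo=timezone.utc),
}
encoded = json.dumps(metadata, indent=2, default=str)
decoded = json.loads(encoded)
print(decoded["created_at"])  # 2026-03-20 15:00:00+00:00
```

Note the round trip is lossy: the value comes back as a plain string, not a datetime, which is acceptable for an archival `metadata.json`.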

View File

@@ -4,7 +4,7 @@ from typing import Annotated, Any, Optional
from uuid import UUID
from fastapi import APIRouter, Depends, HTTPException, Request
-from pydantic import BaseModel
+from pydantic import BaseModel, EmailStr
import reflector.auth as auth
from reflector.dailyco_api import RecordingType
@@ -151,3 +151,25 @@ async def start_recording(
        raise HTTPException(
            status_code=500, detail=f"Failed to start recording: {str(e)}"
        )


class AddEmailRecipientRequest(BaseModel):
    email: EmailStr


@router.post("/meetings/{meeting_id}/email-recipient")
async def add_email_recipient(
    meeting_id: str,
    request: AddEmailRecipientRequest,
    user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
):
    """Add an email address to receive the transcript link when processing completes."""
    meeting = await meetings_controller.get_by_id(meeting_id)
    if not meeting:
        raise HTTPException(status_code=404, detail="Meeting not found")
    recipients = await meetings_controller.add_email_recipient(
        meeting_id, request.email
    )
    return {"status": "success", "email_recipients": recipients}

View File

@@ -16,6 +16,7 @@ from pydantic import (
import reflector.auth as auth
from reflector.db import get_database
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.rooms import rooms_controller
from reflector.db.search import (
@@ -112,6 +113,8 @@ class GetTranscriptMinimal(BaseModel):
    room_name: str | None = None
    audio_deleted: bool | None = None
    change_seq: int | None = None
    has_cloud_video: bool = False
    cloud_video_duration: int | None = None


class TranscriptParticipantWithEmail(TranscriptParticipant):
@@ -501,6 +504,14 @@ async def transcript_get(
        )
    )

    has_cloud_video = False
    cloud_video_duration = None
    if transcript.meeting_id:
        meeting = await meetings_controller.get_by_id(transcript.meeting_id)
        if meeting and meeting.daily_composed_video_s3_key:
            has_cloud_video = True
            cloud_video_duration = meeting.daily_composed_video_duration

    base_data = {
        "id": transcript.id,
        "user_id": transcript.user_id,
@@ -524,6 +535,8 @@ async def transcript_get(
        "audio_deleted": transcript.audio_deleted,
        "change_seq": transcript.change_seq,
        "participants": participants,
        "has_cloud_video": has_cloud_video,
        "cloud_video_duration": cloud_video_duration,
    }

    if transcript_format == "text":

View File

@@ -0,0 +1,169 @@
"""
Transcript download endpoint — generates a zip archive with all transcript files.
"""
import json
import os
import tempfile
import zipfile
from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException
from fastapi.responses import StreamingResponse

import reflector.auth as auth
from reflector.db.meetings import meetings_controller
from reflector.db.recordings import recordings_controller
from reflector.db.transcripts import transcripts_controller
from reflector.logger import logger
from reflector.storage import get_source_storage, get_transcripts_storage

router = APIRouter()


@router.get(
    "/transcripts/{transcript_id}/download/zip",
    operation_id="transcript_download_zip",
)
async def transcript_download_zip(
    transcript_id: str,
    user: Annotated[auth.UserInfo, Depends(auth.current_user)],
):
    user_id = user["sub"]
    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
    if not transcripts_controller.user_can_mutate(transcript, user_id):
        raise HTTPException(status_code=403, detail="Not authorized")

    recording = None
    if transcript.recording_id:
        recording = await recordings_controller.get_by_id(transcript.recording_id)
    meeting = None
    if transcript.meeting_id:
        meeting = await meetings_controller.get_by_id(transcript.meeting_id)

    truncated_id = str(transcript.id).split("-")[0]
    with tempfile.TemporaryDirectory() as tmpdir:
        zip_path = os.path.join(tmpdir, f"transcript_{truncated_id}.zip")
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            # Transcript audio
            if transcript.audio_location == "storage" and not transcript.audio_deleted:
                try:
                    storage = get_transcripts_storage()
                    data = await storage.get_file(transcript.storage_audio_path)
                    audio_path = os.path.join(tmpdir, "audio.mp3")
                    with open(audio_path, "wb") as f:
                        f.write(data)
                    zf.write(audio_path, "audio.mp3")
                except Exception as e:
                    logger.warning(
                        "Failed to download transcript audio for zip",
                        exc_info=e,
                        transcript_id=transcript.id,
                    )
            elif (
                not transcript.audio_deleted
                and hasattr(transcript, "audio_mp3_filename")
                and transcript.audio_mp3_filename
                and transcript.audio_mp3_filename.exists()
            ):
                zf.write(str(transcript.audio_mp3_filename), "audio.mp3")

            # Recording tracks (multitrack)
            if recording and recording.track_keys:
                try:
                    source_storage = get_source_storage(
                        "daily" if recording.track_keys else None
                    )
                except Exception:
                    source_storage = get_transcripts_storage()
                for i, key in enumerate(recording.track_keys):
                    try:
                        data = await source_storage.get_file(
                            key, bucket=recording.bucket_name
                        )
                        filename = os.path.basename(key) or f"track_{i}"
                        track_path = os.path.join(tmpdir, f"track_{i}")
                        with open(track_path, "wb") as f:
                            f.write(data)
                        zf.write(track_path, f"tracks/{filename}")
                    except Exception as e:
                        logger.warning(
                            "Failed to download track for zip",
                            exc_info=e,
                            track_key=key,
                        )

            # Cloud video
            if meeting and meeting.daily_composed_video_s3_key:
                try:
                    source_storage = get_source_storage("daily")
                    data = await source_storage.get_file(
                        meeting.daily_composed_video_s3_key
                    )
                    video_path = os.path.join(tmpdir, "cloud_video.mp4")
                    with open(video_path, "wb") as f:
                        f.write(data)
                    zf.write(video_path, "cloud_video.mp4")
                except Exception as e:
                    logger.warning(
                        "Failed to download cloud video for zip",
                        exc_info=e,
                        s3_key=meeting.daily_composed_video_s3_key,
                    )

            # Metadata JSON
            metadata = {
                "id": transcript.id,
                "title": transcript.title,
                "created_at": (
                    transcript.created_at.isoformat() if transcript.created_at else None
                ),
                "duration": transcript.duration,
                "source_language": transcript.source_language,
                "target_language": transcript.target_language,
                "short_summary": transcript.short_summary,
                "long_summary": transcript.long_summary,
                "topics": (
                    [t.model_dump() for t in transcript.topics]
                    if transcript.topics
                    else []
                ),
                "participants": (
                    [p.model_dump() for p in transcript.participants]
                    if transcript.participants
                    else []
                ),
                "action_items": transcript.action_items,
                "webvtt": transcript.webvtt,
                "recording_id": transcript.recording_id,
                "meeting_id": transcript.meeting_id,
            }
            meta_path = os.path.join(tmpdir, "metadata.json")
            with open(meta_path, "w") as f:
                json.dump(metadata, f, indent=2, default=str)
            zf.write(meta_path, "metadata.json")

        # Read zip into memory before tmpdir is cleaned up
        with open(zip_path, "rb") as f:
            zip_bytes = f.read()

    def iter_zip():
        offset = 0
        chunk_size = 64 * 1024
        while offset < len(zip_bytes):
            yield zip_bytes[offset : offset + chunk_size]
            offset += chunk_size

    return StreamingResponse(
        iter_zip(),
        media_type="application/zip",
        headers={
            "Content-Disposition": f"attachment; filename=transcript_{truncated_id}.zip"
        },
    )
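The endpoint above buffers the finished archive in memory and streams it in 64 KiB slices. A self-contained sketch of that chunking pattern, with a tiny synthetic zip standing in for the real archive:

```python
import io
import zipfile

# Build a small zip in memory, standing in for the archive the endpoint
# assembles in its temporary directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("metadata.json", '{"id": "abc123"}')
zip_bytes = buf.getvalue()


def iter_zip(data: bytes, chunk_size: int = 64 * 1024):
    # Yield successive fixed-size slices until the payload is exhausted.
    offset = 0
    while offset < len(data):
        yield data[offset : offset + chunk_size]
        offset += chunk_size


# Concatenating the chunks reproduces the original, still-valid zip.
reassembled = b"".join(iter_zip(zip_bytes))
names = zipfile.ZipFile(io.BytesIO(reassembled)).namelist()
print(names)  # ['metadata.json']
```

Since the whole zip already sits in memory, the generator does not reduce peak memory use; it mainly keeps response writes bounded in size.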

View File

@@ -0,0 +1,75 @@
"""
Transcript cloud video endpoint — returns a presigned URL for streaming playback.
"""
from typing import Annotated, Optional

import jwt
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel

import reflector.auth as auth
from reflector.db.meetings import meetings_controller
from reflector.db.transcripts import transcripts_controller
from reflector.settings import settings
from reflector.storage import get_source_storage

router = APIRouter()


class VideoUrlResponse(BaseModel):
    url: str
    duration: int | None = None
    content_type: str = "video/mp4"


@router.get(
    "/transcripts/{transcript_id}/video/url",
    operation_id="transcript_get_video_url",
    response_model=VideoUrlResponse,
)
async def transcript_get_video_url(
    transcript_id: str,
    user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
    token: str | None = None,
):
    user_id = user["sub"] if user else None
    if not user_id and token:
        try:
            token_user = await auth.verify_raw_token(token)
        except Exception:
            token_user = None
        if not token_user:
            try:
                payload = jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"])
                user_id = payload.get("sub")
            except jwt.PyJWTError:
                raise HTTPException(
                    status_code=status.HTTP_401_UNAUTHORIZED,
                    detail="Invalid or expired token",
                )
        else:
            user_id = token_user["sub"]

    transcript = await transcripts_controller.get_by_id_for_http(
        transcript_id, user_id=user_id
    )
    if not transcript.meeting_id:
        raise HTTPException(status_code=404, detail="No video available")
    meeting = await meetings_controller.get_by_id(transcript.meeting_id)
    if not meeting or not meeting.daily_composed_video_s3_key:
        raise HTTPException(status_code=404, detail="No video available")

    source_storage = get_source_storage("daily")
    url = await source_storage.get_file_url(
        meeting.daily_composed_video_s3_key,
        operation="get_object",
        expires_in=3600,
    )
    return VideoUrlResponse(
        url=url,
        duration=meeting.daily_composed_video_duration,
    )

View File

@@ -90,7 +90,9 @@ async def cleanup_old_transcripts(
 ):
     """Delete old anonymous transcripts and their associated recordings/meetings."""
     query = transcripts.select().where(
-        (transcripts.c.created_at < cutoff_date) & (transcripts.c.user_id.is_(None))
+        (transcripts.c.created_at < cutoff_date)
+        & (transcripts.c.user_id.is_(None))
+        & (transcripts.c.deleted_at.is_(None))
     )
     old_transcripts = await db.fetch_all(query)

View File

@@ -104,6 +104,12 @@ async def process_recording(bucket_name: str, object_key: str):
    room = await rooms_controller.get_by_id(meeting.room_id)
    recording = await recordings_controller.get_by_object_key(bucket_name, object_key)
    if recording and recording.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted recording",
            recording_id=recording.id,
        )
        return
    if not recording:
        recording = await recordings_controller.create(
            Recording(
@@ -115,6 +121,13 @@ async def process_recording(bucket_name: str, object_key: str):
        )
    transcript = await transcripts_controller.get_by_recording_id(recording.id)
    if transcript and transcript.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted transcript for recording",
            recording_id=recording.id,
            transcript_id=transcript.id,
        )
        return
    if transcript:
        await transcripts_controller.update(
            transcript,
@@ -262,6 +275,13 @@ async def _process_multitrack_recording_inner(
    # Check if recording already exists (reprocessing path)
    recording = await recordings_controller.get_by_id(recording_id)
    if recording and recording.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted recording",
            recording_id=recording_id,
        )
        return
    if recording and recording.meeting_id:
        # Reprocessing: recording exists with meeting already linked
        meeting = await meetings_controller.get_by_id(recording.meeting_id)
@@ -341,6 +361,13 @@ async def _process_multitrack_recording_inner(
    )
    transcript = await transcripts_controller.get_by_recording_id(recording.id)
    if transcript and transcript.deleted_at is not None:
        logger.info(
            "Skipping soft-deleted transcript for recording",
            recording_id=recording.id,
            transcript_id=transcript.id,
        )
        return
    if not transcript:
        transcript = await transcripts_controller.add(
            "",

View File

@@ -40,6 +40,11 @@ x-backend-env: &backend-env
  # Garage S3 credentials — hardcoded test keys, containers are ephemeral
  TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: GK0123456789abcdef01234567 # gitleaks:allow
  TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" # gitleaks:allow
  # Email / SMTP — Mailpit captures emails without sending
  SMTP_HOST: mailpit
  SMTP_PORT: "1025"
  SMTP_FROM_EMAIL: test@reflector.local
  SMTP_USE_TLS: "false"
  # NOTE: DAILYCO_STORAGE_AWS_* intentionally NOT set — forces fallback to
  # get_transcripts_storage() which has ENDPOINT_URL pointing at Garage.
  # Setting them would bypass the endpoint and generate presigned URLs for AWS.
@@ -101,6 +106,14 @@ services:
      retries: 10
      start_period: 5s

  mailpit:
    image: axllent/mailpit:latest
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8025/api/v1/messages"]
      interval: 5s
      timeout: 3s
      retries: 5

  mock-daily:
    build:
      context: .
@@ -131,6 +144,8 @@ services:
        condition: service_healthy
      mock-daily:
        condition: service_healthy
      mailpit:
        condition: service_healthy
    volumes:
      - server_data:/app/data
@@ -194,6 +209,7 @@ services:
      DATABASE_URL: postgresql+asyncpg://reflector:reflector@postgres:5432/reflector
      SERVER_URL: http://server:1250
      GARAGE_ENDPOINT: http://garage:3900
      MAILPIT_URL: http://mailpit:8025
    depends_on:
      server:
        condition: service_started

View File

@@ -17,6 +17,7 @@ from sqlalchemy.ext.asyncio import create_async_engine
SERVER_URL = os.environ.get("SERVER_URL", "http://server:1250")
GARAGE_ENDPOINT = os.environ.get("GARAGE_ENDPOINT", "http://garage:3900")
MAILPIT_URL = os.environ.get("MAILPIT_URL", "http://mailpit:8025")

DATABASE_URL = os.environ.get(
    "DATABASE_URL_ASYNC",
    os.environ.get(
@@ -114,3 +115,44 @@ async def _poll_transcript_status(
def poll_transcript_status():
    """Returns the poll_transcript_status async helper function."""
    return _poll_transcript_status


@pytest_asyncio.fixture
async def mailpit_client():
    """HTTP client for Mailpit API — query captured emails."""
    async with httpx.AsyncClient(
        base_url=MAILPIT_URL,
        timeout=httpx.Timeout(10.0),
    ) as client:
        # Clear inbox before each test
        await client.delete("/api/v1/messages")
        yield client


async def _poll_mailpit_messages(
    mailpit: httpx.AsyncClient,
    to_email: str,
    max_wait: int = 30,
    interval: int = 2,
) -> list[dict]:
    """
    Poll Mailpit API until at least one message is delivered to the given address.

    Returns the list of matching messages.
    """
    elapsed = 0
    while elapsed < max_wait:
        resp = await mailpit.get("/api/v1/messages", params={"query": f"to:{to_email}"})
        resp.raise_for_status()
        data = resp.json()
        messages = data.get("messages", [])
        if messages:
            return messages
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"No email delivered to {to_email} within {max_wait}s")


@pytest_asyncio.fixture
def poll_mailpit_messages():
    """Returns the poll_mailpit_messages async helper function."""
    return _poll_mailpit_messages
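The poll helper above is a bounded retry loop. The same pattern, sketched with a stub in place of the Mailpit API so it runs standalone (the stub and its behavior are invented for illustration):

```python
import asyncio


async def poll_until(fetch, max_wait: int = 30, interval: int = 2) -> list[dict]:
    """Generic form of the Mailpit poll: retry `fetch` until it returns
    results or max_wait seconds have elapsed."""
    elapsed = 0
    while elapsed < max_wait:
        messages = await fetch()
        if messages:
            return messages
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"No messages within {max_wait}s")


# Stub inbox that becomes non-empty on the third poll, standing in for
# GET /api/v1/messages.
calls = {"n": 0}


async def fake_fetch() -> list[dict]:
    calls["n"] += 1
    return [{"Subject": "Transcript Ready"}] if calls["n"] >= 3 else []


messages = asyncio.run(poll_until(fake_fetch, max_wait=10, interval=0))
print(messages[0]["Subject"])  # Transcript Ready
```

Raising on timeout (rather than returning an empty list) makes a missing email fail the test with a clear message instead of a confusing downstream assertion.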

View File

@@ -4,10 +4,12 @@ Integration test: Multitrack → DailyMultitrackPipeline → full processing.
 Exercises: S3 upload → DB recording setup → process endpoint →
 Hatchet DiarizationPipeline → mock Daily API → whisper per-track transcription →
 diarization → mixdown → LLM summarization/topics → status "ended".
+Also tests email transcript notification via Mailpit SMTP sink.
 """

 import json
-from datetime import datetime, timezone
+import uuid
+from datetime import datetime, timedelta, timezone

 import pytest
 from sqlalchemy import text
@@ -22,6 +24,9 @@ TRACK_KEYS = [
 ]

+TEST_EMAIL = "integration-test@reflector.local"

 @pytest.mark.asyncio
 async def test_multitrack_pipeline_end_to_end(
     api_client,
@@ -30,6 +35,8 @@ async def test_multitrack_pipeline_end_to_end(
     test_records_dir,
     bucket_name,
     poll_transcript_status,
+    mailpit_client,
+    poll_mailpit_messages,
 ):
     """Set up multitrack recording in S3/DB and verify the full pipeline completes."""
     # 1. Upload test audio as two separate tracks to Garage S3
@@ -52,16 +59,41 @@ async def test_multitrack_pipeline_end_to_end(
     transcript = resp.json()
     transcript_id = transcript["id"]

-    # 3. Insert Recording row and link to transcript via direct DB access
+    # 3. Insert Meeting, Recording, and link to transcript via direct DB access
     recording_id = f"rec-integration-{transcript_id[:8]}"
+    meeting_id = str(uuid.uuid4())
     now = datetime.now(timezone.utc)

     async with db_engine.begin() as conn:
-        # Insert recording with track_keys
+        # Insert meeting with email_recipients for email notification test
         await conn.execute(
             text("""
-                INSERT INTO recording (id, bucket_name, object_key, recorded_at, status, track_keys)
-                VALUES (:id, :bucket_name, :object_key, :recorded_at, :status, CAST(:track_keys AS json))
+                INSERT INTO meeting (
+                    id, room_name, room_url, host_room_url,
+                    start_date, end_date, platform, email_recipients
+                )
+                VALUES (
+                    :id, :room_name, :room_url, :host_room_url,
+                    :start_date, :end_date, :platform, CAST(:email_recipients AS json)
+                )
             """),
             {
+                "id": meeting_id,
+                "room_name": "integration-test-room",
+                "room_url": "https://test.daily.co/integration-test-room",
+                "host_room_url": "https://test.daily.co/integration-test-room",
+                "start_date": now,
+                "end_date": now + timedelta(hours=1),
+                "platform": "daily",
+                "email_recipients": json.dumps([TEST_EMAIL]),
+            },
+        )
+        # Insert recording with track_keys, linked to meeting
+        await conn.execute(
+            text("""
+                INSERT INTO recording (id, bucket_name, object_key, recorded_at, status, track_keys, meeting_id)
+                VALUES (:id, :bucket_name, :object_key, :recorded_at, :status, CAST(:track_keys AS json), :meeting_id)
+            """),
+            {
                 "id": recording_id,
@@ -70,6 +102,7 @@ async def test_multitrack_pipeline_end_to_end(
                 "recorded_at": now,
                 "status": "completed",
                 "track_keys": json.dumps(TRACK_KEYS),
+                "meeting_id": meeting_id,
             },
         )
@@ -127,3 +160,22 @@ async def test_multitrack_pipeline_end_to_end(
     assert (
         len(participants) >= 2
     ), f"Expected at least 2 speakers for multitrack, got {len(participants)}"

+    # 7. Verify email transcript notification
+    # The send_email pipeline task should have:
+    #   a) Set the transcript to public share_mode
+    #   b) Sent an email to TEST_EMAIL via Mailpit
+    transcript_resp = await api_client.get(f"/transcripts/{transcript_id}")
+    transcript_resp.raise_for_status()
+    transcript_data = transcript_resp.json()
+    assert (
+        transcript_data.get("share_mode") == "public"
+    ), "Transcript should be set to public when email recipients exist"

+    # Poll Mailpit for the delivered email (send_email task runs async after finalize)
+    messages = await poll_mailpit_messages(mailpit_client, TEST_EMAIL, max_wait=30)
+    assert len(messages) >= 1, "Should have received at least 1 email"
+    email_msg = messages[0]
+    assert (
+        "Transcript Ready" in email_msg.get("Subject", "")
+    ), f"Email subject should contain 'Transcript Ready', got: {email_msg.get('Subject')}"

View File

@@ -76,8 +76,10 @@ async def test_cleanup_old_public_data_deletes_old_anonymous_transcripts():
     assert result["transcripts_deleted"] == 1
     assert result["errors"] == []

-    # Verify old anonymous transcript was deleted
-    assert await transcripts_controller.get_by_id(old_transcript.id) is None
+    # Verify old anonymous transcript was soft-deleted
+    old = await transcripts_controller.get_by_id(old_transcript.id)
+    assert old is not None
+    assert old.deleted_at is not None

     # Verify new anonymous transcript still exists
     assert await transcripts_controller.get_by_id(new_transcript.id) is not None
@@ -150,15 +152,17 @@ async def test_cleanup_deletes_associated_meeting_and_recording():
     assert result["recordings_deleted"] == 1
     assert result["errors"] == []

-    # Verify transcript was deleted
-    assert await transcripts_controller.get_by_id(old_transcript.id) is None
+    # Verify transcript was soft-deleted
+    old = await transcripts_controller.get_by_id(old_transcript.id)
+    assert old is not None
+    assert old.deleted_at is not None

-    # Verify meeting was deleted
+    # Verify meeting was hard-deleted (cleanup deletes meetings directly)
     query = meetings.select().where(meetings.c.id == meeting_id)
     meeting_result = await get_database().fetch_one(query)
     assert meeting_result is None

-    # Verify recording was deleted
+    # Verify recording was hard-deleted (cleanup deletes recordings directly)
     assert await recordings_controller.get_by_id(recording.id) is None

View File

@@ -1,7 +1,8 @@
import pytest

from reflector.db.recordings import Recording, recordings_controller
from reflector.db.rooms import rooms_controller
-from reflector.db.transcripts import transcripts_controller
+from reflector.db.transcripts import SourceKind, transcripts_controller

@pytest.mark.asyncio
@@ -192,9 +193,93 @@ async def test_transcript_delete(authenticated_client, client):
    assert response.status_code == 200
    assert response.json()["status"] == "ok"

    # API returns 404 for soft-deleted transcripts
    response = await client.get(f"/transcripts/{tid}")
    assert response.status_code == 404

    # But the transcript still exists in DB with deleted_at set
    transcript = await transcripts_controller.get_by_id(tid)
    assert transcript is not None
    assert transcript.deleted_at is not None


@pytest.mark.asyncio
async def test_deleted_transcript_not_in_list(authenticated_client, client):
    """Soft-deleted transcripts should not appear in the list endpoint."""
    response = await client.post("/transcripts", json={"name": "testdel_list"})
    assert response.status_code == 200
    tid = response.json()["id"]

    # Verify it appears in the list
    response = await client.get("/transcripts")
    assert response.status_code == 200
    ids = [t["id"] for t in response.json()["items"]]
    assert tid in ids

    # Delete it
    response = await client.delete(f"/transcripts/{tid}")
    assert response.status_code == 200

    # Verify it no longer appears in the list
    response = await client.get("/transcripts")
    assert response.status_code == 200
    ids = [t["id"] for t in response.json()["items"]]
    assert tid not in ids


@pytest.mark.asyncio
async def test_delete_already_deleted_is_idempotent(authenticated_client, client):
    """Deleting an already-deleted transcript is idempotent (returns 200)."""
    response = await client.post("/transcripts", json={"name": "testdel_idem"})
    assert response.status_code == 200
    tid = response.json()["id"]

    # First delete
    response = await client.delete(f"/transcripts/{tid}")
    assert response.status_code == 200

    # Second delete — idempotent, still returns ok
    response = await client.delete(f"/transcripts/{tid}")
    assert response.status_code == 200

    # But deleted_at was only set once (not updated)
    transcript = await transcripts_controller.get_by_id(tid)
    assert transcript is not None
    assert transcript.deleted_at is not None


@pytest.mark.asyncio
async def test_deleted_transcript_recording_soft_deleted(authenticated_client, client):
    """Soft-deleting a transcript also soft-deletes its recording."""
    from datetime import datetime, timezone

    recording = await recordings_controller.create(
        Recording(
            bucket_name="test-bucket",
            object_key="test.mp4",
            recorded_at=datetime.now(timezone.utc),
        )
    )
    transcript = await transcripts_controller.add(
        name="with-recording",
        source_kind=SourceKind.ROOM,
        recording_id=recording.id,
        user_id="randomuserid",
    )

    response = await client.delete(f"/transcripts/{transcript.id}")
    assert response.status_code == 200

    # Recording still in DB with deleted_at set
    rec = await recordings_controller.get_by_id(recording.id)
    assert rec is not None
    assert rec.deleted_at is not None

    # Transcript still in DB with deleted_at set
    tr = await transcripts_controller.get_by_id(transcript.id)
    assert tr is not None
    assert tr.deleted_at is not None


@pytest.mark.asyncio
async def test_transcript_mark_reviewed(authenticated_client, client):

View File

@@ -0,0 +1,36 @@
import io
import zipfile

import pytest


@pytest.mark.asyncio
async def test_download_zip_returns_valid_zip(
    authenticated_client, client, fake_transcript_with_topics
):
    """Test that the zip download endpoint returns a valid zip file."""
    transcript = fake_transcript_with_topics
    response = await client.get(f"/transcripts/{transcript.id}/download/zip")
    assert response.status_code == 200
    assert response.headers["content-type"] == "application/zip"

    # Verify it's a valid zip
    zip_buffer = io.BytesIO(response.content)
    with zipfile.ZipFile(zip_buffer) as zf:
        names = zf.namelist()
        assert "metadata.json" in names
        assert "audio.mp3" in names


@pytest.mark.asyncio
async def test_download_zip_requires_auth(client):
    """Test that zip download requires authentication."""
    response = await client.get("/transcripts/nonexistent/download/zip")
    assert response.status_code in (401, 403, 422)


@pytest.mark.asyncio
async def test_download_zip_not_found(authenticated_client, client):
    """Test 404 for non-existent transcript."""
    response = await client.get("/transcripts/nonexistent-id/download/zip")
    assert response.status_code == 404

View File

@@ -1,5 +1,4 @@
 from datetime import datetime, timezone
-from unittest.mock import AsyncMock, patch

 import pytest
@@ -9,6 +8,7 @@ from reflector.db.transcripts import SourceKind, transcripts_controller
 @pytest.mark.asyncio
 async def test_recording_deleted_with_transcript():
+    """Soft-delete: recording and transcript remain in DB with deleted_at set, no files deleted."""
     recording = await recordings_controller.create(
         Recording(
             bucket_name="test-bucket",
@@ -22,16 +22,13 @@ async def test_recording_deleted_with_transcript():
             recording_id=recording.id,
         )

-    with patch("reflector.db.transcripts.get_transcripts_storage") as mock_get_storage:
-        storage_instance = mock_get_storage.return_value
-        storage_instance.delete_file = AsyncMock()
-        await transcripts_controller.remove_by_id(transcript.id)
+    await transcripts_controller.remove_by_id(transcript.id)

+    # Both should still exist in DB but with deleted_at set
+    rec = await recordings_controller.get_by_id(recording.id)
+    assert rec is not None
+    assert rec.deleted_at is not None
-        # Should be called with bucket override
-        storage_instance.delete_file.assert_awaited_once_with(
-            recording.object_key, bucket=recording.bucket_name
-        )
-    assert await recordings_controller.get_by_id(recording.id) is None
-    assert await transcripts_controller.get_by_id(transcript.id) is None
+    tr = await transcripts_controller.get_by_id(transcript.id)
+    assert tr is not None
+    assert tr.deleted_at is not None

View File

@@ -0,0 +1,105 @@
from datetime import datetime, timedelta, timezone
from unittest.mock import AsyncMock, patch

import pytest

from reflector.db.transcripts import SourceKind, transcripts_controller


@pytest.mark.asyncio
async def test_video_url_returns_404_when_no_meeting(authenticated_client, client):
    """Test that video URL returns 404 when transcript has no meeting."""
    response = await client.post("/transcripts", json={"name": "no-meeting"})
    assert response.status_code == 200
    tid = response.json()["id"]

    response = await client.get(f"/transcripts/{tid}/video/url")
    assert response.status_code == 404


@pytest.mark.asyncio
async def test_video_url_returns_404_when_no_cloud_video(authenticated_client, client):
    """Test that video URL returns 404 when meeting has no cloud video."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    meeting_id = "test-meeting-no-video"
    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="No Video Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
        )
    )
    transcript = await transcripts_controller.add(
        name="with-meeting",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="randomuserid",
    )

    response = await client.get(f"/transcripts/{transcript.id}/video/url")
    assert response.status_code == 404


@pytest.mark.asyncio
async def test_video_url_returns_presigned_url(authenticated_client, client):
    """Test that video URL returns a presigned URL when cloud video exists."""
    from reflector.db import get_database
    from reflector.db.meetings import meetings

    meeting_id = "test-meeting-with-video"
    await get_database().execute(
        meetings.insert().values(
            id=meeting_id,
            room_name="Video Meeting",
            room_url="https://example.com",
            host_room_url="https://example.com/host",
            start_date=datetime.now(timezone.utc),
            end_date=datetime.now(timezone.utc) + timedelta(hours=1),
            room_id=None,
            daily_composed_video_s3_key="recordings/video.mp4",
            daily_composed_video_duration=120,
        )
    )
    transcript = await transcripts_controller.add(
        name="with-video",
        source_kind=SourceKind.ROOM,
        meeting_id=meeting_id,
        user_id="randomuserid",
    )

    with patch("reflector.views.transcripts_video.get_source_storage") as mock_storage:
        mock_instance = AsyncMock()
        mock_instance.get_file_url = AsyncMock(
            return_value="https://s3.example.com/presigned-url"
        )
        mock_storage.return_value = mock_instance

        response = await client.get(f"/transcripts/{transcript.id}/video/url")

    assert response.status_code == 200
    data = response.json()
    assert data["url"] == "https://s3.example.com/presigned-url"
    assert data["duration"] == 120
    assert data["content_type"] == "video/mp4"


@pytest.mark.asyncio
async def test_transcript_get_includes_video_fields(authenticated_client, client):
    """Test that transcript GET response includes has_cloud_video field."""
    response = await client.post("/transcripts", json={"name": "video-fields"})
    assert response.status_code == 200
    tid = response.json()["id"]

    response = await client.get(f"/transcripts/{tid}")
    assert response.status_code == 200
    data = response.json()
    assert data["has_cloud_video"] is False
    assert data["cloud_video_duration"] is None

server/uv.lock generated

@@ -188,6 +188,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fb/76/641ae371508676492379f16e2fa48f4e2c11741bd63c48be4b12a6b09cba/aiosignal-1.4.0-py3-none-any.whl", hash = "sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e", size = 7490, upload-time = "2025-07-03T22:54:42.156Z" },
]
[[package]]
name = "aiosmtplib"
version = "5.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e7/ad/240a7ce4e50713b111dff8b781a898d8d4770e5d6ad4899103f84c86005c/aiosmtplib-5.1.0.tar.gz", hash = "sha256:2504a23b2b63c9de6bc4ea719559a38996dba68f73f6af4eb97be20ee4c5e6c4", size = 66176, upload-time = "2026-01-25T01:51:11.408Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/37/82/70f2c452acd7ed18c558c8ace9a8cf4fdcc70eae9a41749b5bdc53eb6f45/aiosmtplib-5.1.0-py3-none-any.whl", hash = "sha256:368029440645b486b69db7029208a7a78c6691b90d24a5332ddba35d9109d55b", size = 27778, upload-time = "2026-01-25T01:51:10.026Z" },
]
[[package]]
name = "aiosqlite"
version = "0.21.0"
@@ -2976,11 +2985,11 @@ wheels = [
[[package]]
name = "pypdf"
version = "6.8.0"
version = "6.9.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b4/a3/e705b0805212b663a4c27b861c8a603dba0f8b4bb281f96f8e746576a50d/pypdf-6.8.0.tar.gz", hash = "sha256:cb7eaeaa4133ce76f762184069a854e03f4d9a08568f0e0623f7ea810407833b", size = 5307831, upload-time = "2026-03-09T13:37:40.591Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/fb/dc2e8cb006e80b0020ed20d8649106fe4274e82d8e756ad3e24ade19c0df/pypdf-6.9.1.tar.gz", hash = "sha256:ae052407d33d34de0c86c5c729be6d51010bf36e03035a8f23ab449bca52377d", size = 5311551, upload-time = "2026-03-17T10:46:07.876Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8c/ec/4ccf3bb86b1afe5d7176e1c8abcdbf22b53dd682ec2eda50e1caadcf6846/pypdf-6.8.0-py3-none-any.whl", hash = "sha256:2a025080a8dd73f48123c89c57174a5ff3806c71763ee4e49572dc90454943c7", size = 332177, upload-time = "2026-03-09T13:37:38.774Z" },
{ url = "https://files.pythonhosted.org/packages/f9/f4/75543fa802b86e72f87e9395440fe1a89a6d149887e3e55745715c3352ac/pypdf-6.9.1-py3-none-any.whl", hash = "sha256:f35a6a022348fae47e092a908339a8f3dc993510c026bb39a96718fc7185e89f", size = 333661, upload-time = "2026-03-17T10:46:06.286Z" },
]
[[package]]
@@ -3343,10 +3352,12 @@ dependencies = [
{ name = "aiohttp" },
{ name = "aiohttp-cors" },
{ name = "aiortc" },
{ name = "aiosmtplib" },
{ name = "alembic" },
{ name = "av" },
{ name = "celery" },
{ name = "databases", extra = ["aiosqlite", "asyncpg"] },
{ name = "email-validator" },
{ name = "fastapi", extra = ["standard"] },
{ name = "fastapi-pagination" },
{ name = "hatchet-sdk" },
@@ -3422,10 +3433,12 @@ requires-dist = [
{ name = "aiohttp", specifier = ">=3.9.0" },
{ name = "aiohttp-cors", specifier = ">=0.7.0" },
{ name = "aiortc", specifier = ">=1.5.0" },
{ name = "aiosmtplib", specifier = ">=3.0.0" },
{ name = "alembic", specifier = ">=1.11.3" },
{ name = "av", specifier = ">=15.0.0" },
{ name = "celery", specifier = ">=5.3.4" },
{ name = "databases", extras = ["aiosqlite", "asyncpg"], specifier = ">=0.7.0" },
{ name = "email-validator", specifier = ">=2.0.0" },
{ name = "fastapi", extras = ["standard"], specifier = ">=0.100.1" },
{ name = "fastapi-pagination", specifier = ">=0.14.2" },
{ name = "hatchet-sdk", specifier = "==1.22.16" },


@@ -5,10 +5,11 @@ import useWaveform from "../useWaveform";
import useMp3 from "../useMp3";
import { TopicList } from "./_components/TopicList";
import { Topic } from "../webSocketTypes";
import React, { useEffect, useState, use } from "react";
import React, { useEffect, useState, useCallback, use } from "react";
import FinalSummary from "./finalSummary";
import TranscriptTitle from "../transcriptTitle";
import Player from "../player";
import VideoPlayer from "../videoPlayer";
import { useWebSockets } from "../useWebSockets";
import { useRouter } from "next/navigation";
import { parseNonEmptyString } from "../../../lib/utils";
@@ -56,6 +57,21 @@ export default function TranscriptDetails(details: TranscriptDetails) {
const [finalSummaryElement, setFinalSummaryElement] =
useState<HTMLDivElement | null>(null);
const hasCloudVideo = !!transcript.data?.has_cloud_video;
const [videoExpanded, setVideoExpanded] = useState(false);
const [videoNewBadge, setVideoNewBadge] = useState(() => {
if (typeof window === "undefined") return true;
return !localStorage.getItem(`video-seen-${transcriptId}`);
});
const handleVideoToggle = useCallback(() => {
setVideoExpanded((prev) => !prev);
if (videoNewBadge) {
setVideoNewBadge(false);
localStorage.setItem(`video-seen-${transcriptId}`, "1");
}
}, [videoNewBadge, transcriptId]);
useEffect(() => {
if (!waiting || !transcript.data) return;
@@ -156,8 +172,14 @@ export default function TranscriptDetails(details: TranscriptDetails) {
<Grid
templateColumns={{ base: "minmax(0, 1fr)", md: "repeat(2, 1fr)" }}
templateRows={{
base: "auto minmax(0, 1fr) minmax(0, 1fr)",
md: "auto minmax(0, 1fr)",
base:
hasCloudVideo && videoExpanded
? "auto auto minmax(0, 1fr) minmax(0, 1fr)"
: "auto minmax(0, 1fr) minmax(0, 1fr)",
md:
hasCloudVideo && videoExpanded
? "auto auto minmax(0, 1fr)"
: "auto minmax(0, 1fr)",
}}
gap={4}
gridRowGap={2}
@@ -180,6 +202,10 @@ export default function TranscriptDetails(details: TranscriptDetails) {
transcript={transcript.data || null}
topics={topics.topics}
finalSummaryElement={finalSummaryElement}
hasCloudVideo={hasCloudVideo}
videoExpanded={videoExpanded}
onVideoToggle={handleVideoToggle}
videoNewBadge={videoNewBadge}
/>
</Flex>
{mp3.audioDeleted && (
@@ -190,6 +216,16 @@ export default function TranscriptDetails(details: TranscriptDetails) {
)}
</Flex>
</GridItem>
{hasCloudVideo && videoExpanded && (
<GridItem colSpan={{ base: 1, md: 2 }}>
<VideoPlayer
transcriptId={transcriptId}
duration={transcript.data?.cloud_video_duration ?? null}
expanded={videoExpanded}
onClose={() => setVideoExpanded(false)}
/>
</GridItem>
)}
<TopicList
topics={topics.topics || []}
useActiveTopic={useActiveTopic}


@@ -10,11 +10,22 @@ import {
useTranscriptUpdate,
useTranscriptParticipants,
} from "../../lib/apiHooks";
import { Heading, IconButton, Input, Flex, Spacer } from "@chakra-ui/react";
import { LuPen, LuCopy, LuCheck } from "react-icons/lu";
import {
Heading,
IconButton,
Input,
Flex,
Spacer,
Spinner,
Box,
Text,
} from "@chakra-ui/react";
import { LuPen, LuCopy, LuCheck, LuDownload, LuVideo } from "react-icons/lu";
import ShareAndPrivacy from "./shareAndPrivacy";
import { buildTranscriptWithTopics } from "./buildTranscriptWithTopics";
import { toaster } from "../../components/ui/toaster";
import { useAuth } from "../../lib/AuthProvider";
import { API_URL } from "../../lib/apiClient";
type TranscriptTitle = {
title: string;
@@ -25,13 +36,51 @@ type TranscriptTitle = {
transcript: GetTranscriptWithParticipants | null;
topics: GetTranscriptTopic[] | null;
finalSummaryElement: HTMLDivElement | null;
// video props
hasCloudVideo?: boolean;
videoExpanded?: boolean;
onVideoToggle?: () => void;
videoNewBadge?: boolean;
};
const TranscriptTitle = (props: TranscriptTitle) => {
const [displayedTitle, setDisplayedTitle] = useState(props.title);
const [preEditTitle, setPreEditTitle] = useState(props.title);
const [isEditing, setIsEditing] = useState(false);
const [downloading, setDownloading] = useState(false);
const updateTranscriptMutation = useTranscriptUpdate();
const auth = useAuth();
const accessToken = auth.status === "authenticated" ? auth.accessToken : null;
const userId = auth.status === "authenticated" ? auth.user?.id : null;
const isOwner = !!(userId && userId === props.transcript?.user_id);
const handleDownloadZip = async () => {
if (!props.transcriptId || downloading) return;
setDownloading(true);
try {
const headers: Record<string, string> = {};
if (accessToken) {
headers["Authorization"] = `Bearer ${accessToken}`;
}
const resp = await fetch(
`${API_URL}/v1/transcripts/${props.transcriptId}/download/zip`,
{ headers },
);
if (!resp.ok) throw new Error("Download failed");
const blob = await resp.blob();
const url = URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = `transcript_${props.transcriptId.split("-")[0]}.zip`;
a.click();
URL.revokeObjectURL(url);
} catch (err) {
console.error("Failed to download zip:", err);
} finally {
setDownloading(false);
}
};
const participantsQuery = useTranscriptParticipants(
props.transcript?.id ? parseMaybeNonEmptyString(props.transcript.id) : null,
);
@@ -173,6 +222,51 @@ const TranscriptTitle = (props: TranscriptTitle) => {
>
<LuCopy />
</IconButton>
{isOwner && (
<IconButton
aria-label="Download Transcript Zip"
size="sm"
variant="subtle"
onClick={handleDownloadZip}
disabled={downloading}
>
{downloading ? <Spinner size="sm" /> : <LuDownload />}
</IconButton>
)}
{props.hasCloudVideo && props.onVideoToggle && (
<Box position="relative" display="inline-flex">
<IconButton
aria-label={
props.videoExpanded
? "Hide cloud recording"
: "Show cloud recording"
}
size="sm"
variant={props.videoExpanded ? "solid" : "subtle"}
colorPalette={props.videoExpanded ? "blue" : undefined}
onClick={props.onVideoToggle}
>
<LuVideo />
</IconButton>
{props.videoNewBadge && (
<Text
position="absolute"
top="-1"
right="-1"
fontSize="2xs"
fontWeight="bold"
color="white"
bg="red.500"
px={1}
borderRadius="sm"
lineHeight="tall"
pointerEvents="none"
>
new
</Text>
)}
</Box>
)}
<ShareAndPrivacy
finalSummaryElement={props.finalSummaryElement}
transcript={props.transcript}

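The `handleDownloadZip` handler above names the archive after the first segment of the transcript UUID. That naming rule as a standalone sketch (the helper name is illustrative; the component inlines the expression):

```typescript
// Illustrative helper (not an export of the component) mirroring the filename
// logic in handleDownloadZip: take everything before the first hyphen of the
// transcript id and wrap it in transcript_<segment>.zip.
function zipFilename(transcriptId: string): string {
  return `transcript_${transcriptId.split("-")[0]}.zip`;
}

console.log(zipFilename("a1b2c3d4-e5f6-7890-abcd-ef1234567890"));
// "transcript_a1b2c3d4.zip"
```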

@@ -0,0 +1,153 @@
import { useEffect, useState } from "react";
import { Box, Flex, Skeleton, Text } from "@chakra-ui/react";
import { LuVideo, LuX } from "react-icons/lu";
import { useAuth } from "../../lib/AuthProvider";
import { API_URL } from "../../lib/apiClient";
type VideoPlayerProps = {
transcriptId: string;
duration: number | null;
expanded: boolean;
onClose: () => void;
};
function formatDuration(seconds: number): string {
const h = Math.floor(seconds / 3600);
const m = Math.floor((seconds % 3600) / 60);
const s = seconds % 60;
if (h > 0)
return `${h}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}`;
return `${m}:${String(s).padStart(2, "0")}`;
}
export default function VideoPlayer({
transcriptId,
duration,
expanded,
onClose,
}: VideoPlayerProps) {
const [videoUrl, setVideoUrl] = useState<string | null>(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const auth = useAuth();
const accessToken = auth.status === "authenticated" ? auth.accessToken : null;
useEffect(() => {
if (!expanded || !transcriptId || videoUrl) return;
const fetchVideoUrl = async () => {
setLoading(true);
setError(null);
try {
const params = new URLSearchParams();
if (accessToken) {
params.set("token", accessToken);
}
const url = `${API_URL}/v1/transcripts/${transcriptId}/video/url?${params}`;
const headers: Record<string, string> = {};
if (accessToken) {
headers["Authorization"] = `Bearer ${accessToken}`;
}
const resp = await fetch(url, { headers });
if (!resp.ok) {
throw new Error("Failed to load video");
}
const data = await resp.json();
setVideoUrl(data.url);
} catch (err) {
setError(err instanceof Error ? err.message : "Failed to load video");
} finally {
setLoading(false);
}
};
fetchVideoUrl();
}, [expanded, transcriptId, accessToken, videoUrl]);
if (!expanded) return null;
if (loading) {
return (
<Box
borderRadius="md"
overflow="hidden"
bg="gray.900"
w="fit-content"
maxW="100%"
>
<Skeleton h="200px" w="400px" maxW="100%" />
</Box>
);
}
if (error || !videoUrl) {
return (
<Box
p={3}
bg="red.100"
borderRadius="md"
role="alert"
w="fit-content"
maxW="100%"
>
<Text fontSize="sm">Failed to load video recording</Text>
</Box>
);
}
return (
<Box borderRadius="md" bg="black" w="fit-content" maxW="100%" mx="auto">
{/* Header bar with title and close button */}
<Flex
align="center"
justify="space-between"
px={3}
py={1.5}
bg="gray.800"
borderTopRadius="md"
gap={4}
>
<Flex align="center" gap={2}>
<LuVideo size={14} color="white" />
<Text fontSize="xs" fontWeight="medium" color="white">
Cloud recording
</Text>
{duration != null && (
<Text fontSize="xs" color="gray.400">
{formatDuration(duration)}
</Text>
)}
</Flex>
<Flex
align="center"
justify="center"
borderRadius="full"
p={1}
cursor="pointer"
onClick={onClose}
_hover={{ bg: "whiteAlpha.300" }}
transition="background 0.15s"
>
<LuX size={14} color="white" />
</Flex>
</Flex>
{/* Video element with visible controls */}
<video
src={videoUrl}
controls
autoPlay
style={{
display: "block",
width: "100%",
maxWidth: "640px",
maxHeight: "45vh",
minHeight: "180px",
objectFit: "contain",
background: "black",
borderBottomLeftRadius: "0.375rem",
borderBottomRightRadius: "0.375rem",
}}
/>
</Box>
);
}
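The `formatDuration` helper in the player omits hour padding for clips under an hour and zero-pads minutes and seconds otherwise. A quick standalone check, duplicating the function since the component does not export it:

```typescript
// Standalone copy of formatDuration for illustration only; the VideoPlayer
// component above keeps it module-private.
function formatDuration(seconds: number): string {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = seconds % 60;
  if (h > 0)
    return `${h}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}`;
  return `${m}:${String(s).padStart(2, "0")}`;
}

console.log(formatDuration(120));  // "2:00"
console.log(formatDuration(3725)); // "1:02:05"
console.log(formatDuration(59));   // "0:59"
```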


@@ -22,6 +22,8 @@ import DailyIframe, {
import type { components } from "../../reflector-api";
import { useAuth } from "../../lib/AuthProvider";
import { useConsentDialog } from "../../lib/consent";
import { useEmailTranscriptDialog } from "../../lib/emailTranscript";
import { featureEnabled } from "../../lib/features";
import {
useRoomJoinMeeting,
useMeetingStartRecording,
@@ -37,6 +39,7 @@ import { useUuidV5 } from "react-uuid-hook";
const CONSENT_BUTTON_ID = "recording-consent";
const RECORDING_INDICATOR_ID = "recording-indicator";
const EMAIL_TRANSCRIPT_BUTTON_ID = "email-transcript";
// Namespace UUID for UUIDv5 generation of raw-tracks instanceIds
// DO NOT CHANGE: Breaks instanceId determinism across deployments
@@ -209,6 +212,12 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
const showConsentModalRef = useRef(showConsentModal);
showConsentModalRef.current = showConsentModal;
const { showEmailModal } = useEmailTranscriptDialog({
meetingId: assertMeetingId(meeting.id),
});
const showEmailModalRef = useRef(showEmailModal);
showEmailModalRef.current = showEmailModal;
useEffect(() => {
if (authLastUserId === undefined || !meeting?.id || !roomName) return;
@@ -242,6 +251,9 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
if (ev.button_id === CONSENT_BUTTON_ID) {
showConsentModalRef.current();
}
if (ev.button_id === EMAIL_TRANSCRIPT_BUTTON_ID) {
showEmailModalRef.current();
}
},
[
/*keep static; iframe recreation depends on it*/
@@ -319,6 +331,10 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
() => new URL("/recording-icon.svg", window.location.origin),
[],
);
const emailIconUrl = useMemo(
() => new URL("/email-icon.svg", window.location.origin),
[],
);
const [frame, { setCustomTrayButton }] = useFrame(container, {
onLeftMeeting: handleLeave,
@@ -371,6 +387,20 @@ export default function DailyRoom({ meeting, room }: DailyRoomProps) {
);
}, [showConsentButton, recordingIconUrl, setCustomTrayButton]);
useEffect(() => {
const show = featureEnabled("emailTranscript");
setCustomTrayButton(
EMAIL_TRANSCRIPT_BUTTON_ID,
show
? {
iconPath: emailIconUrl.href,
label: "Email Transcript",
tooltip: "Get transcript emailed to you",
}
: null,
);
}, [emailIconUrl, setCustomTrayButton]);
if (authLastUserId === undefined) {
return (
<Center width="100vw" height="100vh">


@@ -643,6 +643,16 @@ export function useMeetingAudioConsent() {
});
}
export function useMeetingAddEmailRecipient() {
const { setError } = useError();
return $api.useMutation("post", "/v1/meetings/{meeting_id}/email-recipient", {
onError: (error) => {
setError(error as Error, "There was an error adding the email");
},
});
}
export function useMeetingDeactivate() {
const { setError } = useError();
const queryClient = useQueryClient();


@@ -13,6 +13,8 @@ export const FEATURE_PRIVACY_ENV_NAME = "FEATURE_PRIVACY" as const;
export const FEATURE_BROWSE_ENV_NAME = "FEATURE_BROWSE" as const;
export const FEATURE_SEND_TO_ZULIP_ENV_NAME = "FEATURE_SEND_TO_ZULIP" as const;
export const FEATURE_ROOMS_ENV_NAME = "FEATURE_ROOMS" as const;
export const FEATURE_EMAIL_TRANSCRIPT_ENV_NAME =
"FEATURE_EMAIL_TRANSCRIPT" as const;
const FEATURE_ENV_NAMES = [
FEATURE_REQUIRE_LOGIN_ENV_NAME,
@@ -20,6 +22,7 @@ const FEATURE_ENV_NAMES = [
FEATURE_BROWSE_ENV_NAME,
FEATURE_SEND_TO_ZULIP_ENV_NAME,
FEATURE_ROOMS_ENV_NAME,
FEATURE_EMAIL_TRANSCRIPT_ENV_NAME,
] as const;
export type FeatureEnvName = (typeof FEATURE_ENV_NAMES)[number];


@@ -0,0 +1,70 @@
"use client";
import { useState, useEffect } from "react";
import { Box, Button, Input, Text, VStack, HStack } from "@chakra-ui/react";
interface EmailTranscriptDialogProps {
onSubmit: (email: string) => void;
onDismiss: () => void;
}
export function EmailTranscriptDialog({
onSubmit,
onDismiss,
}: EmailTranscriptDialogProps) {
const [email, setEmail] = useState("");
const [inputEl, setInputEl] = useState<HTMLInputElement | null>(null);
useEffect(() => {
inputEl?.focus();
}, [inputEl]);
const handleSubmit = () => {
const trimmed = email.trim();
if (trimmed) {
onSubmit(trimmed);
}
};
return (
<Box
p={6}
bg="rgba(255, 255, 255, 0.7)"
borderRadius="lg"
boxShadow="lg"
maxW="md"
mx="auto"
>
<VStack gap={4} alignItems="center">
<Text fontSize="md" textAlign="center" fontWeight="medium">
Enter your email to receive the transcript when it&apos;s ready
</Text>
<Input
ref={setInputEl}
type="email"
placeholder="your@email.com"
value={email}
onChange={(e) => setEmail(e.target.value)}
onKeyDown={(e) => {
if (e.key === "Enter") handleSubmit();
}}
size="sm"
bg="white"
/>
<HStack gap={4} justifyContent="center">
<Button variant="ghost" size="sm" onClick={onDismiss}>
Cancel
</Button>
<Button
colorPalette="primary"
size="sm"
onClick={handleSubmit}
disabled={!email.trim()}
>
Send
</Button>
</HStack>
</VStack>
</Box>
);
}
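The dialog's submit guard trims the input and only fires `onSubmit` when something remains. The same rule as a standalone helper (the name is illustrative; the component inlines the check in `handleSubmit`):

```typescript
// Mirrors the submit guard in EmailTranscriptDialog: trim the raw input and
// return null when nothing remains, so empty submissions are ignored.
function normalizeEmailInput(raw: string): string | null {
  const trimmed = raw.trim();
  return trimmed.length > 0 ? trimmed : null;
}

console.log(normalizeEmailInput("  a@b.co ")); // "a@b.co"
console.log(normalizeEmailInput("   "));       // null
```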


@@ -0,0 +1 @@
export { useEmailTranscriptDialog } from "./useEmailTranscriptDialog";


@@ -0,0 +1,128 @@
"use client";
import { useCallback, useState, useEffect, useRef } from "react";
import { Box, Text } from "@chakra-ui/react";
import { toaster } from "../../components/ui/toaster";
import { useMeetingAddEmailRecipient } from "../apiHooks";
import { EmailTranscriptDialog } from "./EmailTranscriptDialog";
import type { MeetingId } from "../types";
const TOAST_CHECK_INTERVAL_MS = 100;
type UseEmailTranscriptDialogParams = {
meetingId: MeetingId;
};
export function useEmailTranscriptDialog({
meetingId,
}: UseEmailTranscriptDialogParams) {
const [modalOpen, setModalOpen] = useState(false);
const addEmailMutation = useMeetingAddEmailRecipient();
const intervalRef = useRef<NodeJS.Timeout | null>(null);
const keydownHandlerRef = useRef<((event: KeyboardEvent) => void) | null>(
null,
);
useEffect(() => {
return () => {
if (intervalRef.current) {
clearInterval(intervalRef.current);
intervalRef.current = null;
}
if (keydownHandlerRef.current) {
document.removeEventListener("keydown", keydownHandlerRef.current);
keydownHandlerRef.current = null;
}
};
}, []);
const handleSubmitEmail = useCallback(
async (email: string) => {
try {
await addEmailMutation.mutateAsync({
params: {
path: { meeting_id: meetingId },
},
body: {
email,
},
});
toaster.create({
duration: 4000,
render: () => (
<Box
p={4}
bg="green.100"
borderRadius="md"
boxShadow="md"
textAlign="center"
>
<Text fontWeight="medium">Email registered</Text>
<Text fontSize="sm" color="gray.600">
You will receive the transcript link when processing is
complete.
</Text>
</Box>
),
});
} catch (error) {
console.error("Error adding email recipient:", error);
}
},
[addEmailMutation, meetingId],
);
const showEmailModal = useCallback(() => {
if (modalOpen) return;
setModalOpen(true);
const toastId = toaster.create({
placement: "top",
duration: null,
render: ({ dismiss }) => (
<EmailTranscriptDialog
onSubmit={(email) => {
handleSubmitEmail(email);
dismiss();
}}
onDismiss={() => {
dismiss();
}}
/>
),
});
const handleKeyDown = (event: KeyboardEvent) => {
if (event.key === "Escape") {
toastId.then((id) => toaster.dismiss(id));
}
};
keydownHandlerRef.current = handleKeyDown;
document.addEventListener("keydown", handleKeyDown);
toastId.then((id) => {
intervalRef.current = setInterval(() => {
if (!toaster.isActive(id)) {
setModalOpen(false);
if (intervalRef.current) {
clearInterval(intervalRef.current);
intervalRef.current = null;
}
if (keydownHandlerRef.current) {
document.removeEventListener("keydown", keydownHandlerRef.current);
keydownHandlerRef.current = null;
}
}
}, TOAST_CHECK_INTERVAL_MS);
});
}, [handleSubmitEmail, modalOpen]);
return {
showEmailModal,
};
}
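Because Chakra's toaster owns the dialog's lifecycle, the hook above detects dismissal by polling `toaster.isActive` every 100 ms, then tears down its interval and keydown listener exactly once. A minimal sketch of that poll-then-cleanup pattern with the timer factored out so it runs synchronously; the names here are illustrative, not part of the hook's API:

```typescript
// Sketch of the dismissal watcher: each tick checks whether the toast is
// still active; the first inactive tick runs cleanup and the watcher stops.
function makeDismissWatcher(
  isActive: () => boolean,
  onDismissed: () => void,
): { tick: () => void; stopped: () => boolean } {
  let stopped = false;
  return {
    tick: () => {
      if (stopped) return;
      if (!isActive()) {
        stopped = true; // mirrors clearInterval + removeEventListener
        onDismissed();
      }
    },
    stopped: () => stopped,
  };
}

// Drive the poll manually instead of with setInterval.
let active = true;
let dismissed = false;
const watcher = makeDismissWatcher(
  () => active,
  () => {
    dismissed = true;
  },
);
watcher.tick(); // toast still active: no-op
active = false;
watcher.tick(); // toast gone: cleanup runs once
console.log(dismissed, watcher.stopped()); // true true
```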


@@ -1,5 +1,6 @@
import {
FEATURE_BROWSE_ENV_NAME,
FEATURE_EMAIL_TRANSCRIPT_ENV_NAME,
FEATURE_PRIVACY_ENV_NAME,
FEATURE_REQUIRE_LOGIN_ENV_NAME,
FEATURE_ROOMS_ENV_NAME,
@@ -14,6 +15,7 @@ export const FEATURES = [
"browse",
"sendToZulip",
"rooms",
"emailTranscript",
] as const;
export type FeatureName = (typeof FEATURES)[number];
@@ -26,6 +28,7 @@ export const DEFAULT_FEATURES: Features = {
browse: true,
sendToZulip: true,
rooms: true,
emailTranscript: false,
} as const;
export const ENV_TO_FEATURE: {
@@ -36,6 +39,7 @@ export const ENV_TO_FEATURE: {
FEATURE_BROWSE: "browse",
FEATURE_SEND_TO_ZULIP: "sendToZulip",
FEATURE_ROOMS: "rooms",
FEATURE_EMAIL_TRANSCRIPT: "emailTranscript",
} as const;
export const FEATURE_TO_ENV: {
@@ -46,6 +50,7 @@ export const FEATURE_TO_ENV: {
browse: "FEATURE_BROWSE",
sendToZulip: "FEATURE_SEND_TO_ZULIP",
rooms: "FEATURE_ROOMS",
emailTranscript: "FEATURE_EMAIL_TRANSCRIPT",
};
const features = getClientEnv();
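The maps above tie an env var name to a feature key, with `DEFAULT_FEATURES` supplying the fallback (`emailTranscript` is opt-in). Assuming env values arrive as the strings `"true"`/`"false"` (the actual parsing lives in `getClientEnv` and may differ), resolution can be sketched as:

```typescript
// Simplified copy of the defaults visible in the diff above (other entries
// omitted); not the module's actual exports.
const DEFAULT_FEATURES: Record<string, boolean> = {
  browse: true,
  sendToZulip: true,
  rooms: true,
  emailTranscript: false,
};

// Hypothetical resolver: an explicit "true"/"false" env value wins,
// otherwise the default applies, and unknown features are off.
function resolveFeature(name: string, envValue: string | undefined): boolean {
  if (envValue === "true") return true;
  if (envValue === "false") return false;
  return DEFAULT_FEATURES[name] ?? false;
}

console.log(resolveFeature("emailTranscript", undefined)); // false (opt-in)
console.log(resolveFeature("emailTranscript", "true"));    // true
console.log(resolveFeature("rooms", undefined));           // true
```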


@@ -90,8 +90,6 @@ export interface paths {
*
* Both cloud and raw-tracks are started via REST API to bypass enable_recording limitation of allowing only 1 recording at a time.
* Uses different instanceIds for cloud vs raw-tracks (same won't work)
*
* Note: No authentication required - anonymous users supported. TODO this is a DOS vector
*/
post: operations["v1_start_recording"];
delete?: never;
@@ -100,6 +98,26 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/meetings/{meeting_id}/email-recipient": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/**
* Add Email Recipient
* @description Add an email address to receive the transcript link when processing completes.
*/
post: operations["v1_add_email_recipient"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/rooms": {
parameters: {
query?: never;
@@ -561,6 +579,40 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/transcripts/{transcript_id}/download/zip": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/** Transcript Download Zip */
get: operations["v1_transcript_download_zip"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/transcripts/{transcript_id}/video/url": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
/** Transcript Get Video Url */
get: operations["v1_transcript_get_video_url"];
put?: never;
post?: never;
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
"/v1/transcripts/{transcript_id}/events": {
parameters: {
query?: never;
@@ -785,10 +837,35 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/auth/login": {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
get?: never;
put?: never;
/** Login */
post: operations["v1_login"];
delete?: never;
options?: never;
head?: never;
patch?: never;
trace?: never;
};
}
export type webhooks = Record<string, never>;
export interface components {
schemas: {
/** AddEmailRecipientRequest */
AddEmailRecipientRequest: {
/**
* Email
* Format: email
*/
email: string;
};
/** ApiKeyResponse */
ApiKeyResponse: {
/**
@@ -816,10 +893,7 @@ export interface components {
};
/** Body_transcript_record_upload_v1_transcripts__transcript_id__record_upload_post */
Body_transcript_record_upload_v1_transcripts__transcript_id__record_upload_post: {
/**
* Chunk
* Format: binary
*/
/** Chunk */
chunk: string;
};
/** CalendarEventResponse */
@@ -1034,6 +1108,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
};
/** GetTranscriptSegmentTopic */
GetTranscriptSegmentTopic: {
@@ -1182,6 +1263,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1247,6 +1335,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1313,6 +1408,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1386,6 +1488,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1461,6 +1570,13 @@ export interface components {
audio_deleted?: boolean | null;
/** Change Seq */
change_seq?: number | null;
/**
* Has Cloud Video
* @default false
*/
has_cloud_video: boolean;
/** Cloud Video Duration */
cloud_video_duration?: number | null;
/** Participants */
participants:
| components["schemas"]["TranscriptParticipantWithEmail"][]
@@ -1532,6 +1648,25 @@ export interface components {
/** Reason */
reason?: string | null;
};
/** LoginRequest */
LoginRequest: {
/** Email */
email: string;
/** Password */
password: string;
};
/** LoginResponse */
LoginResponse: {
/** Access Token */
access_token: string;
/**
* Token Type
* @default bearer
*/
token_type: string;
/** Expires In */
expires_in: number;
};
/** Meeting */
Meeting: {
/** Id */
@@ -1619,26 +1754,26 @@ export interface components {
/** Items */
items: components["schemas"]["GetTranscriptMinimal"][];
/** Total */
total?: number | null;
total: number;
/** Page */
page: number | null;
page: number;
/** Size */
size: number | null;
size: number;
/** Pages */
pages?: number | null;
pages: number;
};
/** Page[RoomDetails] */
Page_RoomDetails_: {
/** Items */
items: components["schemas"]["RoomDetails"][];
/** Total */
total?: number | null;
total: number;
/** Page */
page: number | null;
page: number;
/** Size */
size: number | null;
size: number;
/** Pages */
pages?: number | null;
pages: number;
};
/** Participant */
Participant: {
@@ -2269,6 +2404,22 @@ export interface components {
msg: string;
/** Error Type */
type: string;
/** Input */
input?: unknown;
/** Context */
ctx?: Record<string, never>;
};
/** VideoUrlResponse */
VideoUrlResponse: {
/** Url */
url: string;
/** Duration */
duration?: number | null;
/**
* Content Type
* @default video/mp4
*/
content_type: string;
};
/** WebhookTestResult */
WebhookTestResult: {
@@ -2479,6 +2630,41 @@ export interface operations {
};
};
};
v1_add_email_recipient: {
parameters: {
query?: never;
header?: never;
path: {
meeting_id: string;
};
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["AddEmailRecipientRequest"];
};
};
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": unknown;
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_rooms_list: {
parameters: {
query?: {
@@ -3682,6 +3868,70 @@ export interface operations {
};
};
};
v1_transcript_download_zip: {
parameters: {
query?: never;
header?: never;
path: {
transcript_id: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": unknown;
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_transcript_get_video_url: {
parameters: {
query?: {
token?: string | null;
};
header?: never;
path: {
transcript_id: string;
};
cookie?: never;
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["VideoUrlResponse"];
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
v1_transcript_get_websocket_events: {
parameters: {
query?: never;
@@ -4021,4 +4271,37 @@ export interface operations {
};
};
};
v1_login: {
parameters: {
query?: never;
header?: never;
path?: never;
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["LoginRequest"];
};
};
responses: {
/** @description Successful Response */
200: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["LoginResponse"];
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
}

www/pnpm-lock.yaml generated

@@ -2993,8 +2993,8 @@ packages:
resolution: {integrity: sha512-phv3E1Xl4tQOShqSte26C7Fl84EwUdZsyOuSSk9qtAGyyQs2s3jJzComh+Abf4g187lUUAvH+H26omrqia2aGg==}
engines: {node: '>=10.13.0'}
enhanced-resolve@5.20.0:
resolution: {integrity: sha512-/ce7+jQ1PQ6rVXwe+jKEg5hW5ciicHwIQUagZkp6IufBoY3YDgdTTY1azVs0qoRgVmvsNB+rbjLJxDAeHHtwsQ==}
enhanced-resolve@5.20.1:
resolution: {integrity: sha512-Qohcme7V1inbAfvjItgw0EaxVX5q2rdVEZHRBrEQdRZTssLDGsL8Lwrznl8oQ/6kuTJONLaDcGjkNP247XEhcA==}
engines: {node: '>=10.13.0'}
err-code@3.0.1:
@@ -3257,8 +3257,8 @@ packages:
resolution: {integrity: sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==}
engines: {node: '>=16'}
flatted@3.4.1:
resolution: {integrity: sha512-IxfVbRFVlV8V/yRaGzk0UVIcsKKHMSfYw66T/u4nTwlWteQePsxe//LjudR1AMX4tZW3WFCh3Zqa/sjlqpbURQ==}
flatted@3.4.2:
resolution: {integrity: sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==}
follow-redirects@1.15.11:
resolution: {integrity: sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==}
@@ -4853,8 +4853,8 @@ packages:
resolution: {integrity: sha512-vtA0uD4ibrYD793SOIAwlo8cj6haOeMHrGvwPxJsxH7CeIksqJ+3Zc06RvWTIFgiSqx4A3sOnTXpfAEE2Zyz6w==}
engines: {node: '>=10.0.0'}
socket.io-parser@4.2.5:
resolution: {integrity: sha512-bPMmpy/5WWKHea5Y/jYAP6k74A+hvmRCQaJuJB6I/ML5JZq/KfNieUVo/3Mh7SAqn7TyFdIo6wqYHInG1MU1bQ==}
socket.io-parser@4.2.6:
resolution: {integrity: sha512-asJqbVBDsBCJx0pTqw3WfesSY0iRX+2xzWEWzrpcH7L6fLzrhyF8WPI8UaeM4YCuDfpwA/cgsdugMsmtz8EJeg==}
engines: {node: '>=10.0.0'}
source-map-js@1.2.1:
@@ -5029,8 +5029,8 @@ packages:
uglify-js:
optional: true
terser@5.46.0:
resolution: {integrity: sha512-jTwoImyr/QbOWFFso3YoU3ik0jBBDJ6JTOQiy/J2YxVJdZCc+5u7skhNwiOR3FQIygFqVUPHl7qbbxtjW2K3Qg==}
terser@5.46.1:
resolution: {integrity: sha512-vzCjQO/rgUuK9sf8VJZvjqiqiHFaZLnOiimmUuOKODxWL8mm/xua7viT7aqX7dgPY60otQjUotzFMmCB4VdmqQ==}
engines: {node: '>=10'}
hasBin: true
@@ -8757,7 +8757,7 @@ snapshots:
graceful-fs: 4.2.11
tapable: 2.3.0
enhanced-resolve@5.20.0:
enhanced-resolve@5.20.1:
dependencies:
graceful-fs: 4.2.11
tapable: 2.3.0
@@ -9170,10 +9170,10 @@ snapshots:
flat-cache@4.0.1:
dependencies:
flatted: 3.4.1
flatted: 3.4.2
keyv: 4.5.4
flatted@3.4.1: {}
flatted@3.4.2: {}
follow-redirects@1.15.11: {}
@@ -11166,13 +11166,13 @@ snapshots:
'@socket.io/component-emitter': 3.1.2
debug: 4.3.7
engine.io-client: 6.5.4
socket.io-parser: 4.2.5
socket.io-parser: 4.2.6
transitivePeerDependencies:
- bufferutil
- supports-color
- utf-8-validate
socket.io-parser@4.2.5:
socket.io-parser@4.2.6:
dependencies:
'@socket.io/component-emitter': 3.1.2
debug: 4.4.3(supports-color@10.2.2)
@@ -11351,10 +11351,10 @@ snapshots:
'@jridgewell/trace-mapping': 0.3.31
jest-worker: 27.5.1
schema-utils: 4.3.3
terser: 5.46.0
terser: 5.46.1
webpack: 5.105.3
terser@5.46.0:
terser@5.46.1:
dependencies:
'@jridgewell/source-map': 0.3.11
acorn: 8.16.0
@@ -11642,7 +11642,7 @@ snapshots:
acorn-import-phases: 1.0.4(acorn@8.16.0)
browserslist: 4.28.1
chrome-trace-event: 1.0.4
enhanced-resolve: 5.20.0
enhanced-resolve: 5.20.1
es-module-lexer: 2.0.0
eslint-scope: 5.1.1
events: 3.3.0


@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<rect x="2" y="4" width="20" height="16" rx="2"/>
<path d="m22 7-8.97 5.7a1.94 1.94 0 0 1-2.06 0L2 7"/>
</svg>
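The new asset is a stroke-based 24×24 mail glyph (an envelope rectangle plus a flap path). A sketch of inlining it with a configurable render size — the `mailIcon` helper name is illustrative and not part of the repo:

```typescript
// Hypothetical helper: returns the mail glyph above as an inline SVG string.
// Only width/height are parameterized; the viewBox stays 24x24 so the
// stroke geometry from the committed asset is unchanged.
function mailIcon(size = 24): string {
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}" ` +
    `viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" ` +
    `stroke-linecap="round" stroke-linejoin="round">` +
    `<rect x="2" y="4" width="20" height="16" rx="2"/>` +
    `<path d="m22 7-8.97 5.7a1.94 1.94 0 0 1-2.06 0L2 7"/>` +
    `</svg>`
  );
}
```

Using `stroke="currentColor"` means the icon inherits the surrounding text color, which is why the committed SVG omits any hard-coded fill.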

Size: 274 B