Compare commits

...

6 Commits

Author SHA1 Message Date
692895c859 chore(main): release 0.22.0 (#748) 2025-11-26 16:53:27 -05:00
Igor Monadical
d63040e2fd feat: Multitrack segmentation (#747)
* segmentation multitrack (no-mistakes)

* segmentation multitrack (no-mistakes)

* self review

* self review

* recording poll daily doc

* filter cam_audio tracks to remove screensharing from daily processing

* pr review

---------

Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-26 16:21:32 -05:00
8d696aa775 chore(main): release 0.21.0 (#746) 2025-11-26 19:12:02 +01:00
f6ca07505f feat: add transcript format parameter to GET endpoint (#709)
* feat: add transcript format parameter to GET endpoint

Add transcript_format query parameter to /v1/transcripts/{id} endpoint
with support for multiple output formats using discriminated unions.

Formats supported:
- text: Plain speaker dialogue (default)
- text-timestamped: Dialogue with [MM:SS] timestamps
- webvtt-named: WebVTT subtitles with participant names
- json: Structured segments with full metadata

Response models use Pydantic discriminated unions with transcript_format
as discriminator field. POST/PATCH endpoints return GetTranscriptWithParticipants
for minimal responses. GET endpoint returns format-specific models.

* Copy transcript format

* Regenerate types

* Fix transcript formats

* Don't throw inside try

* Remove any type

* Toast share copy errors

* transcript_format exhaustiveness and python idiomatic assert_never

* format_timestamp_mmss clear type definition

* Rename seconds_to_timestamp

* Test transcript format with overlapping speakers

* exact match for vtt multispeaker test

---------

Co-authored-by: Sergey Mankovsky <sergey@monadical.com>
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-26 18:51:14 +01:00
Igor Monadical
3aef926203 room creation hotfix (#744)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 22:42:09 -05:00
Igor Monadical
0b2c82227d is_owner pass for dailyco (#745)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-11-25 22:41:54 -05:00
20 changed files with 2077 additions and 170 deletions


@@ -1,5 +1,19 @@
# Changelog
## [0.22.0](https://github.com/Monadical-SAS/reflector/compare/v0.21.0...v0.22.0) (2025-11-26)
### Features
* Multitrack segmentation ([#747](https://github.com/Monadical-SAS/reflector/issues/747)) ([d63040e](https://github.com/Monadical-SAS/reflector/commit/d63040e2fdc07e7b272e85a39eb2411cd6a14798))
## [0.21.0](https://github.com/Monadical-SAS/reflector/compare/v0.20.0...v0.21.0) (2025-11-26)
### Features
* add transcript format parameter to GET endpoint ([#709](https://github.com/Monadical-SAS/reflector/issues/709)) ([f6ca075](https://github.com/Monadical-SAS/reflector/commit/f6ca07505f34483b02270a2ef3bd809e9d2e1045))
## [0.20.0](https://github.com/Monadical-SAS/reflector/compare/v0.19.0...v0.20.0) (2025-11-25)

docs/transcript.md (new file, 241 lines added)

@@ -0,0 +1,241 @@
# Transcript Formats
The Reflector API provides multiple output formats for transcript data through the `transcript_format` query parameter on the GET `/v1/transcripts/{id}` endpoint.
## Overview
When retrieving a transcript, you can specify the desired format using the `transcript_format` query parameter. The API supports four formats optimized for different use cases:
- **text** - Plain text with speaker names (default)
- **text-timestamped** - Timestamped text with speaker names
- **webvtt-named** - WebVTT subtitle format with participant names
- **json** - Structured JSON segments with full metadata
All formats include participant information when available, resolving speaker IDs to actual names.
## Query Parameter Usage
```
GET /v1/transcripts/{id}?transcript_format={format}
```
### Parameters
- `transcript_format` (optional): The desired output format
- Type: `"text" | "text-timestamped" | "webvtt-named" | "json"`
- Default: `"text"`
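For programmatic use, the query string can be assembled like this (a minimal Python sketch; the base URL is a placeholder for your deployment):

```python
from urllib.parse import urlencode

# Placeholder base URL; substitute your Reflector deployment.
BASE_URL = "https://reflector.example.com"

ALLOWED_FORMATS = {"text", "text-timestamped", "webvtt-named", "json"}

def transcript_url(transcript_id: str, transcript_format: str = "text") -> str:
    """Build the GET URL for a transcript in the requested format."""
    if transcript_format not in ALLOWED_FORMATS:
        raise ValueError(f"unknown transcript_format: {transcript_format}")
    return f"{BASE_URL}/v1/transcripts/{transcript_id}?" + urlencode(
        {"transcript_format": transcript_format}
    )
```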
## Format Descriptions
### Text Format (`text`)
**Use case:** Simple, human-readable transcript for display or export.
**Format:** Speaker names followed by their dialogue, one line per segment.
**Example:**
```
John Smith: Hello everyone
Jane Doe: Hi there
John Smith: How are you today?
```
**Request:**
```http
GET /v1/transcripts/{id}?transcript_format=text
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "text",
"transcript": "John Smith: Hello everyone\nJane Doe: Hi there\nJohn Smith: How are you today?",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
### Text Timestamped Format (`text-timestamped`)
**Use case:** Transcript with timing information for navigation or reference.
**Format:** `[MM:SS]` timestamp prefix before each speaker and dialogue.
**Example:**
```
[00:00] John Smith: Hello everyone
[00:05] Jane Doe: Hi there
[00:12] John Smith: How are you today?
```
**Request:**
```http
GET /v1/transcripts/{id}?transcript_format=text-timestamped
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "text-timestamped",
"transcript": "[00:00] John Smith: Hello everyone\n[00:05] Jane Doe: Hi there\n[00:12] John Smith: How are you today?",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
### WebVTT Named Format (`webvtt-named`)
**Use case:** Subtitle files for video players, accessibility tools, or video editing.
**Format:** Standard WebVTT subtitle format with voice tags using participant names.
**Example:**
```
WEBVTT
00:00:00.000 --> 00:00:05.000
<v John Smith>Hello everyone
00:00:05.000 --> 00:00:12.000
<v Jane Doe>Hi there
00:00:12.000 --> 00:00:18.000
<v John Smith>How are you today?
```
**Request:**
```http
GET /v1/transcripts/{id}?transcript_format=webvtt-named
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "webvtt-named",
"transcript": "WEBVTT\n\n00:00:00.000 --> 00:00:05.000\n<v John Smith>Hello everyone\n\n...",
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
### JSON Format (`json`)
**Use case:** Programmatic access with full timing and speaker metadata.
**Format:** Array of segment objects with speaker information, text content, and precise timing.
**Example:**
```json
[
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "Hello everyone",
"start": 0.0,
"end": 5.0
},
{
"speaker": 1,
"speaker_name": "Jane Doe",
"text": "Hi there",
"start": 5.0,
"end": 12.0
},
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "How are you today?",
"start": 12.0,
"end": 18.0
}
]
```
**Request:**
```http
GET /v1/transcripts/{id}?transcript_format=json
```
**Response:**
```json
{
"id": "transcript_123",
"name": "Meeting Recording",
"transcript_format": "json",
"transcript": [
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "Hello everyone",
"start": 0.0,
"end": 5.0
},
{
"speaker": 1,
"speaker_name": "Jane Doe",
"text": "Hi there",
"start": 5.0,
"end": 12.0
}
],
"participants": [
{"id": "p1", "speaker": 0, "name": "John Smith"},
{"id": "p2", "speaker": 1, "name": "Jane Doe"}
],
...
}
```
## Response Structure
All formats return the same base transcript metadata, plus the `transcript_format` discriminator field and a format-specific `transcript` field:
### Common Fields
- `id`: Transcript identifier
- `user_id`: Owner user ID (if authenticated)
- `name`: Transcript name
- `status`: Processing status
- `locked`: Whether transcript is locked for editing
- `duration`: Total duration in seconds
- `title`: Auto-generated or custom title
- `short_summary`: Brief summary
- `long_summary`: Detailed summary
- `created_at`: Creation timestamp
- `share_mode`: Access control setting
- `source_language`: Original audio language
- `target_language`: Translation target language
- `reviewed`: Whether transcript has been reviewed
- `meeting_id`: Associated meeting ID (if applicable)
- `source_kind`: Source type (live, file, room)
- `room_id`: Associated room ID (if applicable)
- `audio_deleted`: Whether audio has been deleted
- `participants`: Array of participant objects with speaker mappings
### Format-Specific Fields
- `transcript_format`: The format identifier (discriminator field)
- `transcript`: The formatted transcript content (string for text/webvtt formats, array for json format)
## Speaker Name Resolution
All formats resolve speaker IDs to participant names when available:
- If a participant exists for the speaker ID, their name is used
- If no participant exists, a default name like "Speaker 0" is generated
- Speaker IDs are integers (0, 1, 2, etc.) assigned during diarization


@@ -35,8 +35,15 @@ class Recording(BaseModel):
status: Literal["pending", "processing", "completed", "failed"] = "pending"
meeting_id: str | None = None
# for multitrack reprocessing
# track_keys can be empty list [] if recording finished but no audio was captured (silence/muted)
# None means not a multitrack recording, [] means multitrack with no tracks
track_keys: list[str] | None = None
@property
def is_multitrack(self) -> bool:
"""True if recording has separate audio tracks (1+ tracks counts as multitrack)."""
return self.track_keys is not None and len(self.track_keys) > 0
class RecordingController:
async def create(self, recording: Recording):


@@ -1,6 +1,7 @@
import io
import re
import tempfile
from collections import defaultdict
from pathlib import Path
from typing import Annotated, TypedDict
@@ -16,6 +17,17 @@ class DiarizationSegment(TypedDict):
PUNC_RE = re.compile(r"[.;:?!…]")
SENTENCE_END_RE = re.compile(r"[.?!…]$")
# Max segment length for words_to_segments() - breaks on any punctuation (. ; : ? ! …)
# when segment exceeds this limit. Used for non-multitrack recordings.
MAX_SEGMENT_CHARS = 120
# Max segment length for words_to_segments_by_sentence() - only breaks on sentence-ending
# punctuation (. ? ! …) when segment exceeds this limit. Higher threshold allows complete
# sentences in multitrack recordings where speakers overlap.
# similar number to server/reflector/processors/transcript_liner.py
MAX_SENTENCE_SEGMENT_CHARS = 1000
class AudioFile(BaseModel):
@@ -76,7 +88,6 @@ def words_to_segments(words: list[Word]) -> list[TranscriptSegment]:
# but separate if the speaker changes, or if the punctuation is a . , ; : ? !
segments = []
current_segment = None
- MAX_SEGMENT_LENGTH = 120
for word in words:
if current_segment is None:
@@ -106,7 +117,7 @@ def words_to_segments(words: list[Word]) -> list[TranscriptSegment]:
current_segment.end = word.end
have_punc = PUNC_RE.search(word.text)
- if have_punc and (len(current_segment.text) > MAX_SEGMENT_LENGTH):
+ if have_punc and (len(current_segment.text) > MAX_SEGMENT_CHARS):
segments.append(current_segment)
current_segment = None
@@ -116,6 +127,70 @@ def words_to_segments(words: list[Word]) -> list[TranscriptSegment]:
return segments
def words_to_segments_by_sentence(words: list[Word]) -> list[TranscriptSegment]:
"""Group words by speaker, then split into sentences.
For multitrack recordings where words from different speakers are interleaved
by timestamp, this function first groups all words by speaker, then creates
segments based on sentence boundaries within each speaker's words.
This produces cleaner output than words_to_segments() which breaks on every
speaker change, resulting in many tiny segments when speakers overlap.
"""
if not words:
return []
# Group words by speaker, preserving order within each speaker
by_speaker: dict[int, list[Word]] = defaultdict(list)
for w in words:
by_speaker[w.speaker].append(w)
segments: list[TranscriptSegment] = []
for speaker, speaker_words in by_speaker.items():
current_text = ""
current_start: float | None = None
current_end: float = 0.0
for word in speaker_words:
if current_start is None:
current_start = word.start
current_text += word.text
current_end = word.end
# Check for sentence end or max length
is_sentence_end = SENTENCE_END_RE.search(word.text.strip())
is_too_long = len(current_text) >= MAX_SENTENCE_SEGMENT_CHARS
if is_sentence_end or is_too_long:
segments.append(
TranscriptSegment(
text=current_text,
start=current_start,
end=current_end,
speaker=speaker,
)
)
current_text = ""
current_start = None
# Flush remaining words for this speaker
if current_text and current_start is not None:
segments.append(
TranscriptSegment(
text=current_text,
start=current_start,
end=current_end,
speaker=speaker,
)
)
# Sort segments by start time
segments.sort(key=lambda s: s.start)
return segments
class Transcript(BaseModel):
translation: str | None = None
words: list[Word] = []
@@ -154,7 +229,9 @@ class Transcript(BaseModel):
word.start += offset
word.end += offset
def as_segments(self) -> list[TranscriptSegment]:
def as_segments(self, is_multitrack: bool = False) -> list[TranscriptSegment]:
if is_multitrack:
return words_to_segments_by_sentence(self.words)
return words_to_segments(self.words)


@@ -0,0 +1,17 @@
"""Schema definitions for transcript format types and segments."""
from typing import Literal
from pydantic import BaseModel
TranscriptFormat = Literal["text", "text-timestamped", "webvtt-named", "json"]
class TranscriptSegment(BaseModel):
"""A single transcript segment with speaker and timing information."""
speaker: int
speaker_name: str
text: str
start: float
end: float


@@ -7,7 +7,7 @@ This module provides result-based error handling that works in both contexts:
"""
from dataclasses import dataclass
- from typing import Literal, Union
+ from typing import Literal, Union, assert_never
import celery
from celery.result import AsyncResult
@@ -18,7 +18,6 @@ from reflector.pipelines.main_file_pipeline import task_pipeline_file_process
from reflector.pipelines.main_multitrack_pipeline import (
task_pipeline_multitrack_process,
)
- from reflector.utils.match import absurd
from reflector.utils.string import NonEmptyString
@@ -155,7 +154,7 @@ def dispatch_transcript_processing(config: ProcessingConfig) -> AsyncResult:
elif isinstance(config, FileProcessingConfig):
return task_pipeline_file_process.delay(transcript_id=config.transcript_id)
else:
- absurd(config)
+ assert_never(config)
def task_is_scheduled_or_active(task_name: str, **kwargs):


@@ -64,6 +64,11 @@ def recording_lock_key(recording_id: NonEmptyString) -> NonEmptyString:
return f"recording:{recording_id}"
def filter_cam_audio_tracks(track_keys: list[str]) -> list[str]:
"""Filter track keys to cam-audio tracks only (skip screen-audio, etc.)."""
return [k for k in track_keys if "cam-audio" in k]
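Example behavior of the filter (track key names below are hypothetical; real Daily.co keys follow their own naming scheme):

```python
def filter_cam_audio_tracks(track_keys: list[str]) -> list[str]:
    """Keep microphone (cam-audio) tracks; drop screen-share audio and anything else."""
    return [k for k in track_keys if "cam-audio" in k]

# Hypothetical key names for illustration only.
keys = [
    "rec1/participantA/cam-audio.webm",
    "rec1/participantA/screen-audio.webm",
    "rec1/participantB/cam-audio.webm",
]
```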
def extract_base_room_name(daily_room_name: DailyRoomName) -> NonEmptyString:
"""
Extract base room name from Daily.co timestamped room name.


@@ -1,10 +0,0 @@
from typing import NoReturn
def assert_exhaustiveness(x: NoReturn) -> NoReturn:
"""Provide an assertion at type-check time that this function is never called."""
raise AssertionError(f"Invalid value: {x!r}")
def absurd(x: NoReturn) -> NoReturn:
return assert_exhaustiveness(x)


@@ -0,0 +1,133 @@
"""Utilities for converting transcript data to various output formats."""
import webvtt
from reflector.db.transcripts import TranscriptParticipant, TranscriptTopic
from reflector.processors.types import (
Transcript as ProcessorTranscript,
)
from reflector.schemas.transcript_formats import TranscriptSegment
from reflector.utils.webvtt import seconds_to_timestamp
def get_speaker_name(
speaker: int, participants: list[TranscriptParticipant] | None
) -> str:
"""Get participant name for speaker or default to 'Speaker N'."""
if participants:
for participant in participants:
if participant.speaker == speaker:
return participant.name
return f"Speaker {speaker}"
def format_timestamp_mmss(seconds: float | int) -> str:
"""Format seconds as MM:SS timestamp."""
minutes = int(seconds // 60)
secs = int(seconds % 60)
return f"{minutes:02d}:{secs:02d}"
def transcript_to_text(
topics: list[TranscriptTopic],
participants: list[TranscriptParticipant] | None,
is_multitrack: bool = False,
) -> str:
"""Convert transcript topics to plain text with speaker names."""
lines = []
for topic in topics:
if not topic.words:
continue
transcript = ProcessorTranscript(words=topic.words)
segments = transcript.as_segments(is_multitrack)
for segment in segments:
speaker_name = get_speaker_name(segment.speaker, participants)
text = segment.text.strip()
lines.append(f"{speaker_name}: {text}")
return "\n".join(lines)
def transcript_to_text_timestamped(
topics: list[TranscriptTopic],
participants: list[TranscriptParticipant] | None,
is_multitrack: bool = False,
) -> str:
"""Convert transcript topics to timestamped text with speaker names."""
lines = []
for topic in topics:
if not topic.words:
continue
transcript = ProcessorTranscript(words=topic.words)
segments = transcript.as_segments(is_multitrack)
for segment in segments:
speaker_name = get_speaker_name(segment.speaker, participants)
timestamp = format_timestamp_mmss(segment.start)
text = segment.text.strip()
lines.append(f"[{timestamp}] {speaker_name}: {text}")
return "\n".join(lines)
def topics_to_webvtt_named(
topics: list[TranscriptTopic],
participants: list[TranscriptParticipant] | None,
is_multitrack: bool = False,
) -> str:
"""Convert transcript topics to WebVTT format with participant names."""
vtt = webvtt.WebVTT()
for topic in topics:
if not topic.words:
continue
transcript = ProcessorTranscript(words=topic.words)
segments = transcript.as_segments(is_multitrack)
for segment in segments:
speaker_name = get_speaker_name(segment.speaker, participants)
text = segment.text.strip()
text = f"<v {speaker_name}>{text}"
caption = webvtt.Caption(
start=seconds_to_timestamp(segment.start),
end=seconds_to_timestamp(segment.end),
text=text,
)
vtt.captions.append(caption)
return vtt.content
def transcript_to_json_segments(
topics: list[TranscriptTopic],
participants: list[TranscriptParticipant] | None,
is_multitrack: bool = False,
) -> list[TranscriptSegment]:
"""Convert transcript topics to a flat list of JSON segments."""
result = []
for topic in topics:
if not topic.words:
continue
transcript = ProcessorTranscript(words=topic.words)
segments = transcript.as_segments(is_multitrack)
for segment in segments:
speaker_name = get_speaker_name(segment.speaker, participants)
result.append(
TranscriptSegment(
speaker=segment.speaker,
speaker_name=speaker_name,
text=segment.text.strip(),
start=segment.start,
end=segment.end,
)
)
return result


@@ -13,7 +13,7 @@ VttTimestamp = Annotated[str, "vtt_timestamp"]
WebVTTStr = Annotated[str, "webvtt_str"]
- def _seconds_to_timestamp(seconds: Seconds) -> VttTimestamp:
+ def seconds_to_timestamp(seconds: Seconds) -> VttTimestamp:
# lib doesn't do that
hours = int(seconds // 3600)
minutes = int((seconds % 3600) // 60)
@@ -37,8 +37,8 @@ def words_to_webvtt(words: list[Word]) -> WebVTTStr:
text = f"<v Speaker{segment.speaker}>{text}"
caption = webvtt.Caption(
- start=_seconds_to_timestamp(segment.start),
- end=_seconds_to_timestamp(segment.end),
+ start=seconds_to_timestamp(segment.start),
+ end=seconds_to_timestamp(segment.end),
text=text,
)
vtt.captions.append(caption)
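The renamed `seconds_to_timestamp` body is truncated in this hunk; a plausible completion of the HH:MM:SS.mmm formatter (an assumption; the actual handling of milliseconds may differ):

```python
def seconds_to_timestamp(seconds: float) -> str:
    """Render seconds as a WebVTT HH:MM:SS.mmm timestamp."""
    hours = int(seconds // 3600)
    minutes = int((seconds % 3600) // 60)
    secs = int(seconds % 60)
    # Truncate the fractional part to whole milliseconds.
    millis = int((seconds - int(seconds)) * 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{millis:03d}"
```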


@@ -173,15 +173,16 @@ class DailyClient(VideoPlatformClient):
self,
room_name: DailyRoomName,
enable_recording: bool,
- user_id: str | None = None,
- ) -> str:
+ user_id: NonEmptyString | None = None,
+ is_owner: bool = False,
+ ) -> NonEmptyString:
properties = MeetingTokenProperties(
room_name=room_name,
user_id=user_id,
start_cloud_recording=enable_recording,
- enable_recording_ui=not enable_recording,
+ enable_recording_ui=False,
+ is_owner=is_owner,
)
request = CreateMeetingTokenRequest(properties=properties)
result = await self._api_client.create_meeting_token(request)
return result.token


@@ -248,7 +248,7 @@ async def rooms_create(
ics_url=room.ics_url,
ics_fetch_interval=room.ics_fetch_interval,
ics_enabled=room.ics_enabled,
- platform=room.platform,
+ platform=room.platform or settings.DEFAULT_VIDEO_PLATFORM,
)
@@ -556,6 +556,7 @@ async def rooms_join_meeting(
meeting.room_name,
enable_recording=enable_recording,
user_id=user_id,
is_owner=user_id == room.user_id,
)
meeting = meeting.model_copy()
meeting.room_url = add_query_param(meeting.room_url, "t", token)


@@ -1,14 +1,22 @@
from datetime import datetime, timedelta, timezone
- from typing import Annotated, Literal, Optional
+ from typing import Annotated, Literal, Optional, assert_never
from fastapi import APIRouter, Depends, HTTPException, Query
from fastapi_pagination import Page
from fastapi_pagination.ext.databases import apaginate
from jose import jwt
- from pydantic import AwareDatetime, BaseModel, Field, constr, field_serializer
+ from pydantic import (
+     AwareDatetime,
+     BaseModel,
+     Discriminator,
+     Field,
+     constr,
+     field_serializer,
+ )
import reflector.auth as auth
from reflector.db import get_database
from reflector.db.recordings import recordings_controller
from reflector.db.search import (
DEFAULT_SEARCH_LIMIT,
SearchLimit,
@@ -31,7 +39,14 @@ from reflector.db.transcripts import (
)
from reflector.processors.types import Transcript as ProcessorTranscript
from reflector.processors.types import Word
from reflector.schemas.transcript_formats import TranscriptFormat, TranscriptSegment
from reflector.settings import settings
from reflector.utils.transcript_formats import (
topics_to_webvtt_named,
transcript_to_json_segments,
transcript_to_text,
transcript_to_text_timestamped,
)
from reflector.ws_manager import get_ws_manager
from reflector.zulip import (
InvalidMessageError,
@@ -46,6 +61,14 @@ ALGORITHM = "HS256"
DOWNLOAD_EXPIRE_MINUTES = 60
async def _get_is_multitrack(transcript) -> bool:
"""Detect if transcript is from multitrack recording."""
if not transcript.recording_id:
return False
recording = await recordings_controller.get_by_id(transcript.recording_id)
return recording is not None and recording.is_multitrack
def create_access_token(data: dict, expires_delta: timedelta):
to_encode = data.copy()
expire = datetime.now(timezone.utc) + expires_delta
@@ -88,10 +111,84 @@ class GetTranscriptMinimal(BaseModel):
audio_deleted: bool | None = None
- class GetTranscript(GetTranscriptMinimal):
+ class GetTranscriptWithParticipants(GetTranscriptMinimal):
participants: list[TranscriptParticipant] | None
class GetTranscriptWithText(GetTranscriptWithParticipants):
"""
Transcript response with plain text format.
Format: Speaker names followed by their dialogue, one line per segment.
Example:
John Smith: Hello everyone
Jane Doe: Hi there
"""
transcript_format: Literal["text"] = "text"
transcript: str
class GetTranscriptWithTextTimestamped(GetTranscriptWithParticipants):
"""
Transcript response with timestamped text format.
Format: [MM:SS] timestamp prefix before each speaker and dialogue.
Example:
[00:00] John Smith: Hello everyone
[00:05] Jane Doe: Hi there
"""
transcript_format: Literal["text-timestamped"] = "text-timestamped"
transcript: str
class GetTranscriptWithWebVTTNamed(GetTranscriptWithParticipants):
"""
Transcript response in WebVTT subtitle format with participant names.
Format: Standard WebVTT with voice tags using participant names.
Example:
WEBVTT
00:00:00.000 --> 00:00:05.000
<v John Smith>Hello everyone
"""
transcript_format: Literal["webvtt-named"] = "webvtt-named"
transcript: str
class GetTranscriptWithJSON(GetTranscriptWithParticipants):
"""
Transcript response as structured JSON segments.
Format: Array of segment objects with speaker info, text, and timing.
Example:
[
{
"speaker": 0,
"speaker_name": "John Smith",
"text": "Hello everyone",
"start": 0.0,
"end": 5.0
}
]
"""
transcript_format: Literal["json"] = "json"
transcript: list[TranscriptSegment]
GetTranscript = Annotated[
GetTranscriptWithText
| GetTranscriptWithTextTimestamped
| GetTranscriptWithWebVTTNamed
| GetTranscriptWithJSON,
Discriminator("transcript_format"),
]
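A trimmed-down sketch of how such a Pydantic discriminated union dispatches on `transcript_format` (pydantic v2; model fields abbreviated relative to the real response models):

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Discriminator, TypeAdapter

class TextResponse(BaseModel):
    transcript_format: Literal["text"] = "text"
    transcript: str

class JsonSegment(BaseModel):
    speaker: int
    text: str

class JsonResponse(BaseModel):
    transcript_format: Literal["json"] = "json"
    transcript: list[JsonSegment]

# Pydantic reads the discriminator field to pick the right model at validation time.
Response = Annotated[
    Union[TextResponse, JsonResponse], Discriminator("transcript_format")
]
adapter = TypeAdapter(Response)
```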
class CreateTranscript(BaseModel):
name: str
source_language: str = Field("en")
@@ -228,7 +325,7 @@ async def transcripts_search(
)
- @router.post("/transcripts", response_model=GetTranscript)
+ @router.post("/transcripts", response_model=GetTranscriptWithParticipants)
async def transcripts_create(
info: CreateTranscript,
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
@@ -272,7 +369,7 @@ class GetTranscriptTopic(BaseModel):
segments: list[GetTranscriptSegmentTopic] = []
@classmethod
- def from_transcript_topic(cls, topic: TranscriptTopic):
+ def from_transcript_topic(cls, topic: TranscriptTopic, is_multitrack: bool = False):
if not topic.words:
# In previous version, words were missing
# Just output a segment with speaker 0
@@ -296,7 +393,7 @@ class GetTranscriptTopic(BaseModel):
start=segment.start,
speaker=segment.speaker,
)
- for segment in transcript.as_segments()
+ for segment in transcript.as_segments(is_multitrack)
]
return cls(
id=topic.id,
@@ -313,8 +410,8 @@ class GetTranscriptTopicWithWords(GetTranscriptTopic):
words: list[Word] = []
@classmethod
- def from_transcript_topic(cls, topic: TranscriptTopic):
- instance = super().from_transcript_topic(topic)
+ def from_transcript_topic(cls, topic: TranscriptTopic, is_multitrack: bool = False):
+ instance = super().from_transcript_topic(topic, is_multitrack)
if topic.words:
instance.words = topic.words
return instance
@@ -329,8 +426,8 @@ class GetTranscriptTopicWithWordsPerSpeaker(GetTranscriptTopic):
words_per_speaker: list[SpeakerWords] = []
@classmethod
- def from_transcript_topic(cls, topic: TranscriptTopic):
- instance = super().from_transcript_topic(topic)
+ def from_transcript_topic(cls, topic: TranscriptTopic, is_multitrack: bool = False):
+ instance = super().from_transcript_topic(topic, is_multitrack)
if topic.words:
words_per_speakers = []
# group words by speaker
@@ -362,14 +459,76 @@ class GetTranscriptTopicWithWordsPerSpeaker(GetTranscriptTopic):
async def transcript_get(
transcript_id: str,
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
transcript_format: TranscriptFormat = "text",
):
user_id = user["sub"] if user else None
- return await transcripts_controller.get_by_id_for_http(
+ transcript = await transcripts_controller.get_by_id_for_http(
transcript_id, user_id=user_id
)
is_multitrack = await _get_is_multitrack(transcript)
- @router.patch("/transcripts/{transcript_id}", response_model=GetTranscript)
base_data = {
"id": transcript.id,
"user_id": transcript.user_id,
"name": transcript.name,
"status": transcript.status,
"locked": transcript.locked,
"duration": transcript.duration,
"title": transcript.title,
"short_summary": transcript.short_summary,
"long_summary": transcript.long_summary,
"created_at": transcript.created_at,
"share_mode": transcript.share_mode,
"source_language": transcript.source_language,
"target_language": transcript.target_language,
"reviewed": transcript.reviewed,
"meeting_id": transcript.meeting_id,
"source_kind": transcript.source_kind,
"room_id": transcript.room_id,
"audio_deleted": transcript.audio_deleted,
"participants": transcript.participants,
}
if transcript_format == "text":
return GetTranscriptWithText(
**base_data,
transcript_format="text",
transcript=transcript_to_text(
transcript.topics, transcript.participants, is_multitrack
),
)
elif transcript_format == "text-timestamped":
return GetTranscriptWithTextTimestamped(
**base_data,
transcript_format="text-timestamped",
transcript=transcript_to_text_timestamped(
transcript.topics, transcript.participants, is_multitrack
),
)
elif transcript_format == "webvtt-named":
return GetTranscriptWithWebVTTNamed(
**base_data,
transcript_format="webvtt-named",
transcript=topics_to_webvtt_named(
transcript.topics, transcript.participants, is_multitrack
),
)
elif transcript_format == "json":
return GetTranscriptWithJSON(
**base_data,
transcript_format="json",
transcript=transcript_to_json_segments(
transcript.topics, transcript.participants, is_multitrack
),
)
else:
assert_never(transcript_format)
@router.patch(
"/transcripts/{transcript_id}", response_model=GetTranscriptWithParticipants
)
async def transcript_update(
transcript_id: str,
info: UpdateTranscript,
@@ -419,9 +578,12 @@ async def transcript_get_topics(
transcript_id, user_id=user_id
)
is_multitrack = await _get_is_multitrack(transcript)
# convert to GetTranscriptTopic
return [
- GetTranscriptTopic.from_transcript_topic(topic) for topic in transcript.topics
+ GetTranscriptTopic.from_transcript_topic(topic, is_multitrack)
+ for topic in transcript.topics
]
@@ -438,9 +600,11 @@ async def transcript_get_topics_with_words(
transcript_id, user_id=user_id
)
is_multitrack = await _get_is_multitrack(transcript)
# convert to GetTranscriptTopicWithWords
return [
- GetTranscriptTopicWithWords.from_transcript_topic(topic)
+ GetTranscriptTopicWithWords.from_transcript_topic(topic, is_multitrack)
for topic in transcript.topics
]
@@ -459,13 +623,17 @@ async def transcript_get_topics_with_words_per_speaker(
transcript_id, user_id=user_id
)
is_multitrack = await _get_is_multitrack(transcript)
# get the topic from the transcript
topic = next((t for t in transcript.topics if t.id == topic_id), None)
if not topic:
raise HTTPException(status_code=404, detail="Topic not found")
# convert to GetTranscriptTopicWithWordsPerSpeaker
- return GetTranscriptTopicWithWordsPerSpeaker.from_transcript_topic(topic)
+ return GetTranscriptTopicWithWordsPerSpeaker.from_transcript_topic(
+     topic, is_multitrack
+ )
@router.post("/transcripts/{transcript_id}/zulip")


@@ -1,4 +1,4 @@
- from typing import Annotated, Optional
+ from typing import Annotated, Optional, assert_never
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
@@ -15,7 +15,6 @@ from reflector.services.transcript_process import (
prepare_transcript_processing,
validate_transcript_for_processing,
)
- from reflector.utils.match import absurd
router = APIRouter()
@@ -44,7 +43,7 @@ async def transcript_process(
elif isinstance(validation, ValidationOk):
pass
else:
- absurd(validation)
+ assert_never(validation)
config = await prepare_transcript_processing(validation)


@@ -2,6 +2,7 @@ import json
import os
import re
from datetime import datetime, timezone
from typing import List
from urllib.parse import unquote
import av
@@ -11,7 +12,7 @@ from celery import shared_task
from celery.utils.log import get_task_logger
from pydantic import ValidationError
- from reflector.dailyco_api import MeetingParticipantsResponse
+ from reflector.dailyco_api import MeetingParticipantsResponse, RecordingResponse
from reflector.db.daily_participant_sessions import (
DailyParticipantSession,
daily_participant_sessions_controller,
@@ -38,6 +39,7 @@ from reflector.storage import get_transcripts_storage
from reflector.utils.daily import (
DailyRoomName,
extract_base_room_name,
filter_cam_audio_tracks,
parse_daily_recording_filename,
recording_lock_key,
)
@@ -338,7 +340,9 @@ async def _process_multitrack_recording_inner(
exc_info=True,
)
- for idx, key in enumerate(track_keys):
+ cam_audio_keys = filter_cam_audio_tracks(track_keys)
+ for idx, key in enumerate(cam_audio_keys):
try:
parsed = parse_daily_recording_filename(key)
participant_id = parsed.participant_id
@@ -366,7 +370,7 @@ async def _process_multitrack_recording_inner(
task_pipeline_multitrack_process.delay(
transcript_id=transcript.id,
bucket_name=bucket_name,
- track_keys=track_keys,
+ track_keys=filter_cam_audio_tracks(track_keys),
)
@@ -391,7 +395,7 @@ async def poll_daily_recordings():
async with create_platform_client("daily") as daily_client:
# latest 100. TODO cursor-based state
- api_recordings = await daily_client.list_recordings()
+ api_recordings: List[RecordingResponse] = await daily_client.list_recordings()
if not api_recordings:
logger.debug(
@@ -422,11 +426,13 @@ async def poll_daily_recordings():
for recording in missing_recordings:
if not recording.tracks:
assert recording.status != "finished", (
f"Recording {recording.id} has status='finished' but no tracks. "
f"Daily.co API guarantees finished recordings have tracks available. "
f"room_name={recording.room_name}"
)
if recording.status == "finished":
logger.warning(
"Finished recording has no tracks (no audio captured)",
recording_id=recording.id,
room_name=recording.room_name,
)
else:
logger.debug(
"No tracks in recording yet",
recording_id=recording.id,

View File

@@ -159,3 +159,78 @@ def test_processor_transcript_segment():
assert segments[3].start == 30.72
assert segments[4].start == 31.56
assert segments[5].start == 32.38
def test_processor_transcript_segment_multitrack_interleaved():
"""Test as_segments(is_multitrack=True) with interleaved speakers.
Multitrack recordings have words from different speakers sorted by start time,
causing frequent speaker alternation. The multitrack mode should group by
speaker first, then split into sentences.
"""
from reflector.processors.types import Transcript, Word
# Simulate real multitrack data: words sorted by start time, speakers interleave
# Speaker 0 says: "Hello there."
# Speaker 1 says: "I'm good."
# When sorted by time, words interleave
transcript = Transcript(
words=[
Word(text="Hello ", start=0.0, end=0.5, speaker=0),
Word(text="I'm ", start=0.5, end=0.8, speaker=1),
Word(text="there.", start=0.5, end=1.0, speaker=0),
Word(text="good.", start=1.0, end=1.5, speaker=1),
]
)
# Default behavior (is_multitrack=False): breaks on every speaker change = 4 segments
segments_default = transcript.as_segments(is_multitrack=False)
assert len(segments_default) == 4
# Multitrack behavior: groups by speaker, then sentences = 2 segments
segments_multitrack = transcript.as_segments(is_multitrack=True)
assert len(segments_multitrack) == 2
# Check content - sorted by start time
assert segments_multitrack[0].speaker == 0
assert segments_multitrack[0].text == "Hello there."
assert segments_multitrack[0].start == 0.0
assert segments_multitrack[0].end == 1.0
assert segments_multitrack[1].speaker == 1
assert segments_multitrack[1].text == "I'm good."
assert segments_multitrack[1].start == 0.5
assert segments_multitrack[1].end == 1.5
def test_processor_transcript_segment_multitrack_overlapping_timestamps():
"""Test multitrack with exactly overlapping timestamps (real Daily.co data pattern)."""
from reflector.processors.types import Transcript, Word
# Real pattern from transcript 38d84d57: words with identical timestamps
transcript = Transcript(
words=[
Word(text="speaking ", start=6.71, end=7.11, speaker=0),
Word(text="Speaking ", start=6.71, end=7.11, speaker=1),
Word(text="at ", start=7.11, end=7.27, speaker=0),
Word(text="at ", start=7.11, end=7.27, speaker=1),
Word(text="the ", start=7.27, end=7.43, speaker=0),
Word(text="the ", start=7.27, end=7.43, speaker=1),
Word(text="same ", start=7.43, end=7.59, speaker=0),
Word(text="same ", start=7.43, end=7.59, speaker=1),
Word(text="time.", start=7.59, end=8.0, speaker=0),
Word(text="time.", start=7.59, end=8.0, speaker=1),
]
)
# Default: 10 segments (one per speaker change)
segments_default = transcript.as_segments(is_multitrack=False)
assert len(segments_default) == 10
# Multitrack: 2 segments (one per speaker sentence)
segments_multitrack = transcript.as_segments(is_multitrack=True)
assert len(segments_multitrack) == 2
# Both should have complete sentences
assert "speaking at the same time." in segments_multitrack[0].text
assert "Speaking at the same time." in segments_multitrack[1].text
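The interleaved-speaker tests above can be reproduced against a minimal sketch of the `is_multitrack=True` strategy: group words by speaker first, split each speaker's stream on sentence-ending punctuation, then order the resulting segments by start time. This is an illustrative standalone function, not the actual `Transcript.as_segments` implementation.

```python
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    start: float
    end: float
    speaker: int


def multitrack_segments(words):
    """Sketch of multitrack segmentation: group by speaker, then split
    into sentences, instead of breaking on every speaker change."""
    by_speaker = {}
    for w in words:  # stable iteration preserves each speaker's time order
        by_speaker.setdefault(w.speaker, []).append(w)

    segments = []
    for speaker, ws in by_speaker.items():
        current = []
        for w in ws:
            current.append(w)
            if w.text.rstrip().endswith((".", "?", "!")):
                text = "".join(x.text for x in current).strip()
                segments.append((speaker, text, current[0].start, current[-1].end))
                current = []
        if current:  # trailing words without sentence-ending punctuation
            text = "".join(x.text for x in current).strip()
            segments.append((speaker, text, current[0].start, current[-1].end))

    return sorted(segments, key=lambda s: s[2])  # order segments by start time


words = [
    Word("Hello ", 0.0, 0.5, 0),
    Word("I'm ", 0.5, 0.8, 1),
    Word("there.", 0.5, 1.0, 0),
    Word("good.", 1.0, 1.5, 1),
]
print(multitrack_segments(words))
# [(0, 'Hello there.', 0.0, 1.0), (1, "I'm good.", 0.5, 1.5)]
```

With the four interleaved words from the test, this yields two segments rather than four, matching the `as_segments(is_multitrack=True)` assertions.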

View File

@@ -0,0 +1,779 @@
"""Tests for transcript format conversion functionality."""
import pytest
from reflector.db.transcripts import TranscriptParticipant, TranscriptTopic
from reflector.processors.types import Word
from reflector.utils.transcript_formats import (
format_timestamp_mmss,
get_speaker_name,
topics_to_webvtt_named,
transcript_to_json_segments,
transcript_to_text,
transcript_to_text_timestamped,
)
@pytest.mark.asyncio
async def test_get_speaker_name_with_participants():
"""Test speaker name resolution with participants list."""
participants = [
TranscriptParticipant(id="1", speaker=0, name="John Smith"),
TranscriptParticipant(id="2", speaker=1, name="Jane Doe"),
]
assert get_speaker_name(0, participants) == "John Smith"
assert get_speaker_name(1, participants) == "Jane Doe"
assert get_speaker_name(2, participants) == "Speaker 2"
@pytest.mark.asyncio
async def test_get_speaker_name_without_participants():
"""Test speaker name resolution without participants list."""
assert get_speaker_name(0, None) == "Speaker 0"
assert get_speaker_name(1, None) == "Speaker 1"
assert get_speaker_name(5, []) == "Speaker 5"
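The resolution rule these tests pin down can be sketched as follows. The real `get_speaker_name` takes `TranscriptParticipant` models; this standalone version uses plain dicts for illustration.

```python
def get_speaker_name(speaker, participants):
    """Resolve a speaker index to a display name, falling back to 'Speaker N'
    when the index has no matching participant (or the list is None/empty)."""
    for p in participants or []:
        if p["speaker"] == speaker:
            return p["name"]
    return f"Speaker {speaker}"


participants = [
    {"speaker": 0, "name": "John Smith"},
    {"speaker": 1, "name": "Jane Doe"},
]
print(get_speaker_name(0, participants))  # John Smith
print(get_speaker_name(2, participants))  # Speaker 2
```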
@pytest.mark.asyncio
async def test_format_timestamp_mmss():
"""Test timestamp formatting to MM:SS."""
assert format_timestamp_mmss(0) == "00:00"
assert format_timestamp_mmss(5) == "00:05"
assert format_timestamp_mmss(65) == "01:05"
assert format_timestamp_mmss(125.7) == "02:05"
assert format_timestamp_mmss(3661) == "61:01"
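The expectations above (including `3661 -> "61:01"`, i.e. minutes are not wrapped at 60) can be satisfied by a one-liner helper; this is a sketch, not the implementation in `reflector.utils.transcript_formats`.

```python
def format_timestamp_mmss(seconds: float) -> str:
    """Format seconds as MM:SS, truncating fractional seconds.
    Minutes are not capped at 59 (e.g. 3661 -> '61:01')."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"


print(format_timestamp_mmss(125.7))  # 02:05
```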
@pytest.mark.asyncio
async def test_transcript_to_text():
"""Test plain text format conversion."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello", start=0.0, end=1.0, speaker=0),
Word(text=" world.", start=1.0, end=2.0, speaker=0),
],
),
TranscriptTopic(
id="2",
title="Topic 2",
summary="Summary 2",
timestamp=2.0,
words=[
Word(text="How", start=2.0, end=3.0, speaker=1),
Word(text=" are", start=3.0, end=4.0, speaker=1),
Word(text=" you?", start=4.0, end=5.0, speaker=1),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="John Smith"),
TranscriptParticipant(id="2", speaker=1, name="Jane Doe"),
]
result = transcript_to_text(topics, participants)
lines = result.split("\n")
assert len(lines) == 2
assert lines[0] == "John Smith: Hello world."
assert lines[1] == "Jane Doe: How are you?"
@pytest.mark.asyncio
async def test_transcript_to_text_timestamped():
"""Test timestamped text format conversion."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello", start=0.0, end=1.0, speaker=0),
Word(text=" world.", start=1.0, end=2.0, speaker=0),
],
),
TranscriptTopic(
id="2",
title="Topic 2",
summary="Summary 2",
timestamp=65.0,
words=[
Word(text="How", start=65.0, end=66.0, speaker=1),
Word(text=" are", start=66.0, end=67.0, speaker=1),
Word(text=" you?", start=67.0, end=68.0, speaker=1),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="John Smith"),
TranscriptParticipant(id="2", speaker=1, name="Jane Doe"),
]
result = transcript_to_text_timestamped(topics, participants)
lines = result.split("\n")
assert len(lines) == 2
assert lines[0] == "[00:00] John Smith: Hello world."
assert lines[1] == "[01:05] Jane Doe: How are you?"
@pytest.mark.asyncio
async def test_topics_to_webvtt_named():
"""Test WebVTT format conversion with participant names."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello", start=0.0, end=1.0, speaker=0),
Word(text=" world.", start=1.0, end=2.0, speaker=0),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="John Smith"),
]
result = topics_to_webvtt_named(topics, participants)
assert result.startswith("WEBVTT")
assert "<v John Smith>" in result
assert "00:00:00.000 --> 00:00:02.000" in result
assert "Hello world." in result
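The WebVTT assertion above checks cue timings like `00:00:00.000 --> 00:00:02.000`. A minimal sketch of the HH:MM:SS.mmm timestamp formatting WebVTT requires (not the actual helper used by `topics_to_webvtt_named`):

```python
def webvtt_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT cue timestamp: HH:MM:SS.mmm."""
    ms = int(round(seconds * 1000))
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, millis = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{millis:03d}"


print(f"{webvtt_timestamp(0)} --> {webvtt_timestamp(2.0)}")
# 00:00:00.000 --> 00:00:02.000
```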
@pytest.mark.asyncio
async def test_transcript_to_json_segments():
"""Test JSON segments format conversion."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello", start=0.0, end=1.0, speaker=0),
Word(text=" world.", start=1.0, end=2.0, speaker=0),
],
),
TranscriptTopic(
id="2",
title="Topic 2",
summary="Summary 2",
timestamp=2.0,
words=[
Word(text="How", start=2.0, end=3.0, speaker=1),
Word(text=" are", start=3.0, end=4.0, speaker=1),
Word(text=" you?", start=4.0, end=5.0, speaker=1),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="John Smith"),
TranscriptParticipant(id="2", speaker=1, name="Jane Doe"),
]
result = transcript_to_json_segments(topics, participants)
assert len(result) == 2
assert result[0].speaker == 0
assert result[0].speaker_name == "John Smith"
assert result[0].text == "Hello world."
assert result[0].start == 0.0
assert result[0].end == 2.0
assert result[1].speaker == 1
assert result[1].speaker_name == "Jane Doe"
assert result[1].text == "How are you?"
assert result[1].start == 2.0
assert result[1].end == 5.0
@pytest.mark.asyncio
async def test_transcript_formats_with_empty_topics():
"""Test format conversion with empty topics list."""
topics = []
participants = []
assert transcript_to_text(topics, participants) == ""
assert transcript_to_text_timestamped(topics, participants) == ""
assert "WEBVTT" in topics_to_webvtt_named(topics, participants)
assert transcript_to_json_segments(topics, participants) == []
@pytest.mark.asyncio
async def test_transcript_formats_with_empty_words():
"""Test format conversion with topics containing no words."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[],
),
]
participants = []
assert transcript_to_text(topics, participants) == ""
assert transcript_to_text_timestamped(topics, participants) == ""
assert "WEBVTT" in topics_to_webvtt_named(topics, participants)
assert transcript_to_json_segments(topics, participants) == []
@pytest.mark.asyncio
async def test_transcript_formats_with_multiple_speakers():
"""Test format conversion with multiple speaker changes."""
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello", start=0.0, end=1.0, speaker=0),
Word(text=" there.", start=1.0, end=2.0, speaker=0),
Word(text="Hi", start=2.0, end=3.0, speaker=1),
Word(text=" back.", start=3.0, end=4.0, speaker=1),
Word(text="Good", start=4.0, end=5.0, speaker=0),
Word(text=" morning.", start=5.0, end=6.0, speaker=0),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="Alice"),
TranscriptParticipant(id="2", speaker=1, name="Bob"),
]
text_result = transcript_to_text(topics, participants)
lines = text_result.split("\n")
assert len(lines) == 3
assert "Alice: Hello there." in lines[0]
assert "Bob: Hi back." in lines[1]
assert "Alice: Good morning." in lines[2]
json_result = transcript_to_json_segments(topics, participants)
assert len(json_result) == 3
assert json_result[0].speaker_name == "Alice"
assert json_result[1].speaker_name == "Bob"
assert json_result[2].speaker_name == "Alice"
@pytest.mark.asyncio
async def test_transcript_formats_with_overlapping_speakers_multitrack():
"""Test format conversion for multitrack recordings with truly interleaved words.
Multitrack recordings have words from different speakers sorted by start time,
causing frequent speaker alternation. This tests the sentence-based segmentation
that groups each speaker's words into complete sentences.
"""
# Real multitrack data: words sorted by start time, speakers interleave
# Alice says: "Hello there." (0.0-1.0)
# Bob says: "I'm good." (0.5-1.5)
# When sorted by time, words interleave: Hello, I'm, there., good.
topics = [
TranscriptTopic(
id="1",
title="Topic 1",
summary="Summary 1",
timestamp=0.0,
words=[
Word(text="Hello ", start=0.0, end=0.5, speaker=0),
Word(text="I'm ", start=0.5, end=0.8, speaker=1),
Word(text="there.", start=0.5, end=1.0, speaker=0),
Word(text="good.", start=1.0, end=1.5, speaker=1),
],
),
]
participants = [
TranscriptParticipant(id="1", speaker=0, name="Alice"),
TranscriptParticipant(id="2", speaker=1, name="Bob"),
]
# With is_multitrack=True, should produce 2 segments (one per speaker sentence)
# not 4 segments (one per speaker change)
webvtt_result = topics_to_webvtt_named(topics, participants, is_multitrack=True)
expected_webvtt = """WEBVTT
00:00:00.000 --> 00:00:01.000
<v Alice>Hello there.
00:00:00.500 --> 00:00:01.500
<v Bob>I'm good.
"""
assert webvtt_result == expected_webvtt
text_result = transcript_to_text(topics, participants, is_multitrack=True)
lines = text_result.split("\n")
assert len(lines) == 2
assert "Alice: Hello there." in lines[0]
assert "Bob: I'm good." in lines[1]
timestamped_result = transcript_to_text_timestamped(
topics, participants, is_multitrack=True
)
timestamped_lines = timestamped_result.split("\n")
assert len(timestamped_lines) == 2
assert "[00:00] Alice: Hello there." in timestamped_lines[0]
assert "[00:00] Bob: I'm good." in timestamped_lines[1]
segments = transcript_to_json_segments(topics, participants, is_multitrack=True)
assert len(segments) == 2
assert segments[0].speaker_name == "Alice"
assert segments[0].text == "Hello there."
assert segments[1].speaker_name == "Bob"
assert segments[1].text == "I'm good."
@pytest.mark.asyncio
async def test_api_transcript_format_text(client):
"""Test GET /transcripts/{id} with transcript_format=text."""
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(
id="1", speaker=0, name="John Smith"
).model_dump(),
TranscriptParticipant(id="2", speaker=1, name="Jane Doe").model_dump(),
]
},
)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello", start=0, end=1, speaker=0),
Word(text=" world.", start=1, end=2, speaker=0),
],
),
)
response = await client.get(f"/transcripts/{tid}?transcript_format=text")
assert response.status_code == 200
data = response.json()
assert data["transcript_format"] == "text"
assert "transcript" in data
assert "John Smith: Hello world." in data["transcript"]
@pytest.mark.asyncio
async def test_api_transcript_format_text_timestamped(client):
"""Test GET /transcripts/{id} with transcript_format=text-timestamped."""
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(
id="1", speaker=0, name="John Smith"
).model_dump(),
]
},
)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello", start=65, end=66, speaker=0),
Word(text=" world.", start=66, end=67, speaker=0),
],
),
)
response = await client.get(
f"/transcripts/{tid}?transcript_format=text-timestamped"
)
assert response.status_code == 200
data = response.json()
assert data["transcript_format"] == "text-timestamped"
assert "transcript" in data
assert "[01:05] John Smith: Hello world." in data["transcript"]
@pytest.mark.asyncio
async def test_api_transcript_format_webvtt_named(client):
"""Test GET /transcripts/{id} with transcript_format=webvtt-named."""
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(
id="1", speaker=0, name="John Smith"
).model_dump(),
]
},
)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello", start=0, end=1, speaker=0),
Word(text=" world.", start=1, end=2, speaker=0),
],
),
)
response = await client.get(f"/transcripts/{tid}?transcript_format=webvtt-named")
assert response.status_code == 200
data = response.json()
assert data["transcript_format"] == "webvtt-named"
assert "transcript" in data
assert "WEBVTT" in data["transcript"]
assert "<v John Smith>" in data["transcript"]
@pytest.mark.asyncio
async def test_api_transcript_format_json(client):
"""Test GET /transcripts/{id} with transcript_format=json."""
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(
id="1", speaker=0, name="John Smith"
).model_dump(),
]
},
)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello", start=0, end=1, speaker=0),
Word(text=" world.", start=1, end=2, speaker=0),
],
),
)
response = await client.get(f"/transcripts/{tid}?transcript_format=json")
assert response.status_code == 200
data = response.json()
assert data["transcript_format"] == "json"
assert "transcript" in data
assert isinstance(data["transcript"], list)
assert len(data["transcript"]) == 1
assert data["transcript"][0]["speaker"] == 0
assert data["transcript"][0]["speaker_name"] == "John Smith"
assert data["transcript"][0]["text"] == "Hello world."
@pytest.mark.asyncio
async def test_api_transcript_format_default_is_text(client):
"""Test GET /transcripts/{id} defaults to text format."""
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
from reflector.db.transcripts import TranscriptTopic, transcripts_controller
from reflector.processors.types import Word
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello", start=0, end=1, speaker=0),
],
),
)
response = await client.get(f"/transcripts/{tid}")
assert response.status_code == 200
data = response.json()
assert data["transcript_format"] == "text"
assert "transcript" in data
@pytest.mark.asyncio
async def test_api_topics_endpoint_multitrack_segmentation(client):
"""Test GET /transcripts/{id}/topics uses sentence-based segmentation for multitrack.
This tests the fix tracked in TASKS2.md, ensuring the /topics endpoints correctly
detect multitrack recordings and use sentence-based segmentation instead of
fragmenting on every speaker change.
"""
from datetime import datetime, timezone
from reflector.db.recordings import Recording, recordings_controller
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
# Create a multitrack recording (has track_keys)
recording = Recording(
bucket_name="test-bucket",
object_key="test-key",
recorded_at=datetime.now(timezone.utc),
track_keys=["track1.webm", "track2.webm"], # This makes it multitrack
)
await recordings_controller.create(recording)
# Create transcript linked to the recording
transcript = await transcripts_controller.add(
name="Multitrack Test",
source_kind="file",
recording_id=recording.id,
)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(id="1", speaker=0, name="Alice").model_dump(),
TranscriptParticipant(id="2", speaker=1, name="Bob").model_dump(),
]
},
)
# Add interleaved words (as they appear in real multitrack data)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello ", start=0.0, end=0.5, speaker=0),
Word(text="I'm ", start=0.5, end=0.8, speaker=1),
Word(text="there.", start=0.5, end=1.0, speaker=0),
Word(text="good.", start=1.0, end=1.5, speaker=1),
],
),
)
# Test /topics endpoint
response = await client.get(f"/transcripts/{transcript.id}/topics")
assert response.status_code == 200
data = response.json()
assert len(data) == 1
topic = data[0]
# Key assertion: multitrack should produce 2 segments (one per speaker sentence)
# Not 4 segments (one per speaker change)
assert len(topic["segments"]) == 2
# Check content
segment_texts = [s["text"] for s in topic["segments"]]
assert "Hello there." in segment_texts
assert "I'm good." in segment_texts
@pytest.mark.asyncio
async def test_api_topics_endpoint_non_multitrack_segmentation(client):
"""Test GET /transcripts/{id}/topics uses default segmentation for non-multitrack.
Ensures backward compatibility - transcripts without multitrack recordings
should continue using the default speaker-change-based segmentation.
"""
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
# Create a transcript WITHOUT a recording (treated as non-multitrack by default). TODO: better heuristic
response = await client.post("/transcripts", json={"name": "Test transcript"})
assert response.status_code == 200
tid = response.json()["id"]
transcript = await transcripts_controller.get_by_id(tid)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(id="1", speaker=0, name="Alice").model_dump(),
TranscriptParticipant(id="2", speaker=1, name="Bob").model_dump(),
]
},
)
# Add interleaved words
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello ", start=0.0, end=0.5, speaker=0),
Word(text="I'm ", start=0.5, end=0.8, speaker=1),
Word(text="there.", start=0.5, end=1.0, speaker=0),
Word(text="good.", start=1.0, end=1.5, speaker=1),
],
),
)
# Test /topics endpoint
response = await client.get(f"/transcripts/{tid}/topics")
assert response.status_code == 200
data = response.json()
assert len(data) == 1
topic = data[0]
# Non-multitrack: should produce 4 segments (one per speaker change)
assert len(topic["segments"]) == 4
@pytest.mark.asyncio
async def test_api_topics_with_words_endpoint_multitrack(client):
"""Test GET /transcripts/{id}/topics/with-words uses multitrack segmentation."""
from datetime import datetime, timezone
from reflector.db.recordings import Recording, recordings_controller
from reflector.db.transcripts import (
TranscriptParticipant,
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.types import Word
# Create multitrack recording
recording = Recording(
bucket_name="test-bucket",
object_key="test-key-2",
recorded_at=datetime.now(timezone.utc),
track_keys=["track1.webm", "track2.webm"],
)
await recordings_controller.create(recording)
transcript = await transcripts_controller.add(
name="Multitrack Test 2",
source_kind="file",
recording_id=recording.id,
)
await transcripts_controller.update(
transcript,
{
"participants": [
TranscriptParticipant(id="1", speaker=0, name="Alice").model_dump(),
TranscriptParticipant(id="2", speaker=1, name="Bob").model_dump(),
]
},
)
await transcripts_controller.upsert_topic(
transcript,
TranscriptTopic(
title="Topic 1",
summary="Summary 1",
timestamp=0,
words=[
Word(text="Hello ", start=0.0, end=0.5, speaker=0),
Word(text="I'm ", start=0.5, end=0.8, speaker=1),
Word(text="there.", start=0.5, end=1.0, speaker=0),
Word(text="good.", start=1.0, end=1.5, speaker=1),
],
),
)
response = await client.get(f"/transcripts/{transcript.id}/topics/with-words")
assert response.status_code == 200
data = response.json()
assert len(data) == 1
topic = data[0]
# Should have 2 segments (multitrack sentence-based)
assert len(topic["segments"]) == 2
# Should also have words field
assert "words" in topic
assert len(topic["words"]) == 4

View File

@@ -1,14 +1,16 @@
import { useState } from "react";
import type { components } from "../../reflector-api";
type GetTranscript = components["schemas"]["GetTranscript"];
import type { components, operations } from "../../reflector-api";
type GetTranscriptWithParticipants =
components["schemas"]["GetTranscriptWithParticipants"];
type GetTranscriptTopic = components["schemas"]["GetTranscriptTopic"];
import { Button, BoxProps, Box } from "@chakra-ui/react";
import { buildTranscriptWithTopics } from "./buildTranscriptWithTopics";
import { useTranscriptParticipants } from "../../lib/apiHooks";
import { Button, BoxProps, Box, Menu, Text } from "@chakra-ui/react";
import { LuChevronDown } from "react-icons/lu";
import { client } from "../../lib/apiClient";
import { toaster } from "../../components/ui/toaster";
type ShareCopyProps = {
finalSummaryElement: HTMLDivElement | null;
transcript: GetTranscript;
transcript: GetTranscriptWithParticipants;
topics: GetTranscriptTopic[];
};
@@ -20,11 +22,33 @@ export default function ShareCopy({
}: ShareCopyProps & BoxProps) {
const [isCopiedSummary, setIsCopiedSummary] = useState(false);
const [isCopiedTranscript, setIsCopiedTranscript] = useState(false);
const participantsQuery = useTranscriptParticipants(transcript?.id || null);
const [isCopying, setIsCopying] = useState(false);
type ApiTranscriptFormat = NonNullable<
operations["v1_transcript_get"]["parameters"]["query"]
>["transcript_format"];
const TRANSCRIPT_FORMATS = [
"text",
"text-timestamped",
"webvtt-named",
"json",
] as const satisfies ApiTranscriptFormat[];
type TranscriptFormat = (typeof TRANSCRIPT_FORMATS)[number];
const TRANSCRIPT_FORMAT_LABELS: { [k in TranscriptFormat]: string } = {
text: "Plain text",
"text-timestamped": "Text + timestamps",
"webvtt-named": "WebVTT (named)",
json: "JSON",
};
const formatOptions = TRANSCRIPT_FORMATS.map((f) => ({
value: f,
label: TRANSCRIPT_FORMAT_LABELS[f],
}));
const onCopySummaryClick = () => {
const text_to_copy = finalSummaryElement?.innerText;
if (text_to_copy) {
navigator.clipboard.writeText(text_to_copy).then(() => {
setIsCopiedSummary(true);
@@ -34,27 +58,91 @@ export default function ShareCopy({
}
};
const onCopyTranscriptClick = () => {
const text_to_copy =
buildTranscriptWithTopics(
topics || [],
participantsQuery?.data || null,
transcript?.title || null,
) || "";
text_to_copy &&
navigator.clipboard.writeText(text_to_copy).then(() => {
setIsCopiedTranscript(true);
// Reset the copied state after 2 seconds
setTimeout(() => setIsCopiedTranscript(false), 2000);
const onCopyTranscriptFormatClick = async (format: TranscriptFormat) => {
try {
setIsCopying(true);
const { data, error } = await client.GET(
"/v1/transcripts/{transcript_id}",
{
params: {
path: { transcript_id: transcript.id },
query: { transcript_format: format },
},
},
);
if (error) {
console.error("Failed to copy transcript:", error);
toaster.create({
duration: 3000,
render: () => (
<Box bg="red.500" color="white" px={4} py={3} borderRadius="md">
<Text fontWeight="bold">Error</Text>
<Text fontSize="sm">Failed to fetch transcript</Text>
</Box>
),
});
return;
}
const copiedText =
format === "json"
? JSON.stringify(data?.transcript ?? {}, null, 2)
: String(data?.transcript ?? "");
if (copiedText) {
await navigator.clipboard.writeText(copiedText);
setIsCopiedTranscript(true);
setTimeout(() => setIsCopiedTranscript(false), 2000);
}
} catch (e) {
console.error("Failed to copy transcript:", e);
toaster.create({
duration: 3000,
render: () => (
<Box bg="red.500" color="white" px={4} py={3} borderRadius="md">
<Text fontWeight="bold">Error</Text>
<Text fontSize="sm">Failed to copy transcript</Text>
</Box>
),
});
} finally {
setIsCopying(false);
}
};
return (
<Box {...boxProps}>
<Button onClick={onCopyTranscriptClick} mr={2} variant="subtle">
<Menu.Root
closeOnSelect={true}
lazyMount={true}
positioning={{ gutter: 4 }}
>
<Menu.Trigger asChild>
<Button
mr={2}
variant="subtle"
loading={isCopying}
loadingText="Copying..."
>
{isCopiedTranscript ? "Copied!" : "Copy Transcript"}
<LuChevronDown style={{ marginLeft: 6 }} />
</Button>
</Menu.Trigger>
<Menu.Positioner>
<Menu.Content>
{formatOptions.map((opt) => (
<Menu.Item
key={opt.value}
value={opt.value}
_hover={{ backgroundColor: "gray.100" }}
onClick={() => onCopyTranscriptFormatClick(opt.value)}
>
{opt.label}
</Menu.Item>
))}
</Menu.Content>
</Menu.Positioner>
</Menu.Root>
<Button onClick={onCopySummaryClick} variant="subtle">
{isCopiedSummary ? "Copied!" : "Copy Summary"}
</Button>

View File

@@ -32,6 +32,11 @@ async function getUserId(accessToken: string): Promise<string | null> {
});
if (!response.ok) {
try {
console.error(await response.text());
} catch (e) {
console.error("Failed to parse error response", e);
}
return null;
}

View File

@@ -696,7 +696,7 @@ export interface paths {
patch?: never;
trace?: never;
};
"/v1/webhook": {
"/v1/daily/webhook": {
parameters: {
query?: never;
header?: never;
@@ -708,6 +708,27 @@ export interface paths {
/**
* Webhook
* @description Handle Daily webhook events.
*
* Example webhook payload:
* {
* "version": "1.0.0",
* "type": "recording.ready-to-download",
* "id": "rec-rtd-c3df927c-f738-4471-a2b7-066fa7e95a6b-1692124192",
* "payload": {
* "recording_id": "08fa0b24-9220-44c5-846c-3f116cf8e738",
* "room_name": "Xcm97xRZ08b2dePKb78g",
* "start_ts": 1692124183,
* "status": "finished",
* "max_participants": 1,
* "duration": 9,
* "share_token": "ntDCL5k98Ulq", #gitleaks:allow
* "s3_key": "api-test-1j8fizhzd30c/Xcm97xRZ08b2dePKb78g/1692124183028"
* },
* "event_ts": 1692124192
* }
*
* Daily.co circuit-breaker: After 3+ failed responses (4xx/5xx), webhook
* state→FAILED, stops sending events. Reset: scripts/recreate_daily_webhook.py
*/
post: operations["v1_webhook"];
delete?: never;
@@ -899,81 +920,11 @@ export interface components {
target_language: string;
source_kind?: components["schemas"]["SourceKind"] | null;
};
/**
* DailyWebhookEvent
* @description Daily webhook event structure.
*/
DailyWebhookEvent: {
/** Type */
type: string;
/** Id */
id: string;
/** Ts */
ts: number;
/** Data */
data: {
[key: string]: unknown;
};
};
/** DeletionStatus */
DeletionStatus: {
/** Status */
status: string;
};
/** GetTranscript */
GetTranscript: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
};
/** GetTranscriptMinimal */
GetTranscriptMinimal: {
/** Id */
@@ -1105,6 +1056,345 @@ export interface components {
*/
words_per_speaker: components["schemas"]["SpeakerWords"][];
};
/**
* GetTranscriptWithJSON
* @description Transcript response as structured JSON segments.
*
* Format: Array of segment objects with speaker info, text, and timing.
* Example:
* [
* {
* "speaker": 0,
* "speaker_name": "John Smith",
* "text": "Hello everyone",
* "start": 0.0,
* "end": 5.0
* }
* ]
*/
GetTranscriptWithJSON: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
transcript_format: "json";
/** Transcript */
transcript: components["schemas"]["TranscriptSegment"][];
};
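Reviewer note: since `transcript_format` is the discriminator enum added by openapi-typescript, the GET response union narrows on it. A minimal sketch of the pattern, using hypothetical structural stand-ins (`WithJSON`, `WithText`, `firstLine`) rather than the generated `components["schemas"]` types:

```typescript
// Hypothetical stand-ins mirroring the generated schemas, for illustration only.
type Segment = { speaker: number; speaker_name: string; text: string; start: number; end: number };
type WithJSON = { transcript_format: "json"; transcript: Segment[] };
type WithText = { transcript_format: "text"; transcript: string };
type TranscriptResponse = WithJSON | WithText;

// Switching on the discriminator narrows `transcript` to the right type per arm.
function firstLine(res: TranscriptResponse): string {
  switch (res.transcript_format) {
    case "json":
      return res.transcript[0]?.text ?? "";
    case "text":
      return res.transcript.split("\n")[0] ?? "";
  }
}
```

The same narrowing extends to the `"text-timestamped"` and `"webvtt-named"` arms of the full union.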
/** GetTranscriptWithParticipants */
GetTranscriptWithParticipants: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
};
/**
* GetTranscriptWithText
* @description Transcript response with plain text format.
*
* Format: Speaker names followed by their dialogue, one line per segment.
* Example:
* John Smith: Hello everyone
* Jane Doe: Hi there
*/
GetTranscriptWithText: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
transcript_format: "text";
/** Transcript */
transcript: string;
};
/**
* GetTranscriptWithTextTimestamped
* @description Transcript response with timestamped text format.
*
* Format: [MM:SS] timestamp prefix before each speaker and dialogue.
* Example:
* [00:00] John Smith: Hello everyone
* [00:05] Jane Doe: Hi there
*/
GetTranscriptWithTextTimestamped: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
transcript_format: "text-timestamped";
/** Transcript */
transcript: string;
};
/**
* GetTranscriptWithWebVTTNamed
* @description Transcript response in WebVTT subtitle format with participant names.
*
* Format: Standard WebVTT with voice tags using participant names.
* Example:
* WEBVTT
*
* 00:00:00.000 --> 00:00:05.000
* <v John Smith>Hello everyone
*/
GetTranscriptWithWebVTTNamed: {
/** Id */
id: string;
/** User Id */
user_id: string | null;
/** Name */
name: string;
/**
* Status
* @enum {string}
*/
status:
| "idle"
| "uploaded"
| "recording"
| "processing"
| "error"
| "ended";
/** Locked */
locked: boolean;
/** Duration */
duration: number;
/** Title */
title: string | null;
/** Short Summary */
short_summary: string | null;
/** Long Summary */
long_summary: string | null;
/** Created At */
created_at: string;
/**
* Share Mode
* @default private
*/
share_mode: string;
/** Source Language */
source_language: string | null;
/** Target Language */
target_language: string | null;
/** Reviewed */
reviewed: boolean;
/** Meeting Id */
meeting_id: string | null;
source_kind: components["schemas"]["SourceKind"];
/** Room Id */
room_id?: string | null;
/** Room Name */
room_name?: string | null;
/** Audio Deleted */
audio_deleted?: boolean | null;
/** Participants */
participants: components["schemas"]["TranscriptParticipant"][] | null;
/**
* @description discriminator enum property added by openapi-typescript
* @enum {string}
*/
transcript_format: "webvtt-named";
/** Transcript */
transcript: string;
};
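Reviewer note: the WebVTT docblock example uses `HH:MM:SS.mmm` cue timestamps. A sketch of that conversion, assuming a hypothetical helper not present in this codebase:

```typescript
// Convert seconds to a WebVTT cue timestamp (HH:MM:SS.mmm).
// Hypothetical helper for illustration; not part of the generated client.
function toVttTimestamp(seconds: number): string {
  const ms = Math.round(seconds * 1000);
  const h = Math.floor(ms / 3_600_000);
  const m = Math.floor((ms % 3_600_000) / 60_000);
  const s = Math.floor((ms % 60_000) / 1000);
  const frac = ms % 1000;
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)}.${pad(frac, 3)}`;
}
```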
/** HTTPValidationError */
HTTPValidationError: {
/** Detail */
@@ -1233,7 +1523,6 @@ export interface components {
} | null;
/**
* Platform
* @default whereby
* @enum {string}
*/
platform: "whereby" | "daily";
@@ -1325,7 +1614,6 @@ export interface components {
ics_last_etag?: string | null;
/**
* Platform
* @default whereby
* @enum {string}
*/
platform: "whereby" | "daily";
@@ -1377,7 +1665,6 @@ export interface components {
ics_last_etag?: string | null;
/**
* Platform
* @default whereby
* @enum {string}
*/
platform: "whereby" | "daily";
@@ -1523,6 +1810,24 @@ export interface components {
speaker: number | null;
/** Name */
name: string;
/** User Id */
user_id?: string | null;
};
/**
* TranscriptSegment
* @description A single transcript segment with speaker and timing information.
*/
TranscriptSegment: {
/** Speaker */
speaker: number;
/** Speaker Name */
speaker_name: string;
/** Text */
text: string;
/** Start */
start: number;
/** End */
end: number;
};
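Reviewer note: `TranscriptSegment` maps directly onto the `[MM:SS] Name: text` lines shown in the text-timestamped docblock. A sketch, with a hypothetical `toTimestampedLine` helper for illustration:

```typescript
// Shape mirrors components["schemas"]["TranscriptSegment"].
type TranscriptSegment = {
  speaker: number;
  speaker_name: string;
  text: string;
  start: number;
  end: number;
};

// Render one segment as a "[MM:SS] Name: text" line, as in the
// text-timestamped format example. Hypothetical helper, not from this repo.
function toTimestampedLine(seg: TranscriptSegment): string {
  const total = Math.floor(seg.start);
  const mm = String(Math.floor(total / 60)).padStart(2, "0");
  const ss = String(total % 60).padStart(2, "0");
  return `[${mm}:${ss}] ${seg.speaker_name}: ${seg.text}`;
}
```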
/** UpdateParticipant */
UpdateParticipant: {
@@ -2311,7 +2616,7 @@ export interface operations {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["GetTranscript"];
"application/json": components["schemas"]["GetTranscriptWithParticipants"];
};
};
/** @description Validation Error */
@@ -2369,7 +2674,13 @@ export interface operations {
};
v1_transcript_get: {
parameters: {
query?: never;
query?: {
transcript_format?:
| "text"
| "text-timestamped"
| "webvtt-named"
| "json";
};
header?: never;
path: {
transcript_id: string;
@@ -2384,7 +2695,11 @@ export interface operations {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["GetTranscript"];
"application/json":
| components["schemas"]["GetTranscriptWithText"]
| components["schemas"]["GetTranscriptWithTextTimestamped"]
| components["schemas"]["GetTranscriptWithWebVTTNamed"]
| components["schemas"]["GetTranscriptWithJSON"];
};
};
/** @description Validation Error */
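Reviewer note: callers opt into a format via the new `transcript_format` query parameter on GET `/v1/transcripts/{transcript_id}`. A sketch of building that URL (the base URL is a placeholder; `transcriptUrl` is a hypothetical helper, not part of the generated client):

```typescript
type TranscriptFormat = "text" | "text-timestamped" | "webvtt-named" | "json";

// Build the GET /v1/transcripts/{id} URL, appending transcript_format
// only when the caller requests a specific output format.
function transcriptUrl(
  base: string,
  transcriptId: string,
  format?: TranscriptFormat,
): string {
  const url = new URL(`/v1/transcripts/${encodeURIComponent(transcriptId)}`, base);
  if (format !== undefined) url.searchParams.set("transcript_format", format);
  return url.toString();
}
```

Omitting the parameter keeps the default plain-text response.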
@@ -2450,7 +2765,7 @@ export interface operations {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["GetTranscript"];
"application/json": components["schemas"]["GetTranscriptWithParticipants"];
};
};
/** @description Validation Error */
@@ -3256,11 +3571,7 @@ export interface operations {
path?: never;
cookie?: never;
};
requestBody: {
content: {
"application/json": components["schemas"]["DailyWebhookEvent"];
};
};
requestBody?: never;
responses: {
/** @description Successful Response */
200: {
@@ -3271,15 +3582,6 @@ export interface operations {
"application/json": unknown;
};
};
/** @description Validation Error */
422: {
headers: {
[name: string]: unknown;
};
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
}