Compare commits

..

1 commit

Author SHA1 Message Date
dependabot[bot]
2f14df4b44 build(deps): bump the uv group across 2 directories with 1 update
Bumps the uv group with 1 update in the /gpu/self_hosted directory: [transformers](https://github.com/huggingface/transformers).
Bumps the uv group with 1 update in the /server directory: [transformers](https://github.com/huggingface/transformers).


Updates `transformers` from 4.56.1 to 5.0.0rc3
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.56.1...v5.0.0rc3)

Updates `transformers` from 4.53.2 to 5.0.0rc3
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.53.2...v5.0.0rc3)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 5.0.0rc3
  dependency-type: direct:production
  dependency-group: uv
- dependency-name: transformers
  dependency-version: 5.0.0rc3
  dependency-type: direct:production
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-08 07:36:05 +00:00
156 changed files with 419 additions and 19763 deletions

.gitignore vendored (6 changes)

@@ -33,9 +33,3 @@ Caddyfile.gpu-host
.env.gpu-host
vibedocs/
server/tests/integration/logs/
node_modules
node_modules
greyhaven-design-system/
.claude/
AGENTS.md


@@ -1,18 +1,5 @@
# Changelog
## [0.45.0](https://github.com/GreyhavenHQ/reflector/compare/v0.44.0...v0.45.0) (2026-04-09)
### Features
* make video recording optional, deleting video tracks ([#954](https://github.com/GreyhavenHQ/reflector/issues/954)) ([ee8db36](https://github.com/GreyhavenHQ/reflector/commit/ee8db36f2cd93b8f1ff4f4318e331fe2bac219c5))
### Bug Fixes
* better topic chunking and subject extraction ([#952](https://github.com/GreyhavenHQ/reflector/issues/952)) ([5f0c563](https://github.com/GreyhavenHQ/reflector/commit/5f0c5635eb77955b70168242ad7c336a20c98dd0))
* inline imports ([#955](https://github.com/GreyhavenHQ/reflector/issues/955)) ([739cd51](https://github.com/GreyhavenHQ/reflector/commit/739cd513751cd52d8e3d6d80b64568b1cf409414))
## [0.44.0](https://github.com/GreyhavenHQ/reflector/compare/v0.43.0...v0.44.0) (2026-04-07)


@@ -201,52 +201,4 @@ If you need to do any worker/pipeline related work, search for "Pipeline" classe
## Code Style
- Always put imports at the top of the file. Let ruff/pre-commit handle sorting and formatting of imports.
- The **only** imports allowed to remain inline are from `reflector.db.*` modules (e.g., `reflector.db.transcripts`, `reflector.db.meetings`, `reflector.db.recordings`, `reflector.db.rooms`). These stay as deferred/inline imports inside `fresh_db_connection()` blocks in Hatchet pipeline task functions — this is intentional to avoid sharing DB connections across forked processes. All other imports (utilities, services, processors, storage, third-party libs) **must** go at the top of the file, even in Hatchet workflows.
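The deferred-import rule above can be illustrated with a minimal, self-contained sketch. `fresh_db_connection` and the controller names are stand-ins modeled on the conventions this file describes, not the real project code:

```python
# Sketch of the deferred-import pattern for Hatchet pipeline tasks.
# fresh_db_connection here is a toy stand-in: the real helper would open a
# brand-new DB connection for the current (possibly forked) worker process.
from contextlib import contextmanager


@contextmanager
def fresh_db_connection():
    conn = {"open": True}  # placeholder for a real connection object
    try:
        yield conn
    finally:
        conn["open"] = False  # always released when the block exits


def pipeline_task(transcript_id: str) -> dict:
    # Utilities/services/third-party imports would sit at the top of the
    # module as usual; only reflector.db.* imports move inside the block.
    with fresh_db_connection() as conn:
        # Deferred import happens HERE in the real codebase, e.g.:
        #   from reflector.db.transcripts import transcripts_controller
        # Importing inside the block keeps a forked worker from inheriting
        # and reusing the parent process's module-level connection state.
        return {"id": transcript_id, "conn_open": conn["open"]}
```

The point is purely about *when* the `reflector.db.*` module is first imported: at fork time nothing DB-related is resolved yet, so each worker builds its own connection.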
This project uses the **Greyhaven Design System**.
## Rules
- **ALWAYS use TypeScript** (`.tsx` / `.ts`). NEVER generate plain JavaScript (`.jsx` / `.js`).
- Use the `greyhaven` SKILL.md for full design system context (tokens, components, composition rules). It should be installed at `.claude/skills/greyhaven-design-system.md` or accessible to your AI tool.
- If the `greyhaven` MCP server is available, use its tools:
- `list_components()` to find the right component for a UI need
- `get_component(name)` to get exact props, variants, and usage examples
- `validate_colors(code)` to check code for off-brand colors
- `suggest_component(description)` to get recommendations
- Import components from `components/ui/` (or `@/components/ui/` with path alias)
- Never use raw hex colors -- use semantic Tailwind classes (`bg-primary`, `text-foreground`, `border-border`, etc.)
- Use `font-sans` (Aspekta) for UI elements: buttons, nav, labels, forms
- Use `font-serif` (Source Serif) for content: headings, body text
- Trust the design system's default component variants for accent -- they apply orange at the right scale. Don't apply `bg-primary` to large surfaces, containers, or section backgrounds
- All components are framework-agnostic React (no Next.js, no framework-specific imports)
- Dark mode is toggled via the `.dark` class -- use semantic tokens that adapt automatically
## Component Summary
38 components across 8 categories: primitives (11), layout (4), overlay (5), navigation (3), data (4), feedback (4), form (1), composition (6).
For full component specs, props, and examples, refer to the SKILL.md file or use the MCP `get_component(name)` tool.
## Key Patterns
- **CVA variants**: Components use `class-variance-authority` for variant props
- **Slot composition**: Components use `data-slot="name"` attributes
- **Class merging**: Always use `cn()` from `@/lib/utils` (clsx + tailwind-merge)
- **Focus rings**: `focus-visible:border-ring focus-visible:ring-ring/50 focus-visible:ring-[3px]`
- **Disabled**: `disabled:pointer-events-none disabled:opacity-50`
- **Card spacing**: `gap-6` between cards, `p-6` internal padding
- **Section rhythm**: `py-16` between major sections
- **Form layout**: Vertical stack with `gap-4`, labels above inputs
## Font Setup
If fonts aren't loaded yet, add to your global CSS:
```css
@font-face { font-family: 'Aspekta'; font-weight: 400; font-display: swap; src: url('/fonts/Aspekta-400.woff2') format('woff2'); }
@font-face { font-family: 'Aspekta'; font-weight: 500; font-display: swap; src: url('/fonts/Aspekta-500.woff2') format('woff2'); }
@font-face { font-family: 'Aspekta'; font-weight: 600; font-display: swap; src: url('/fonts/Aspekta-600.woff2') format('woff2'); }
@font-face { font-family: 'Aspekta'; font-weight: 700; font-display: swap; src: url('/fonts/Aspekta-700.woff2') format('woff2'); }
```
- Exception: In Hatchet pipeline task functions, DB controller imports (e.g., `transcripts_controller`, `meetings_controller`) stay as deferred/inline imports inside `fresh_db_connection()` blocks — this is intentional to avoid sharing DB connections across forked processes. Non-DB imports (utilities, services) should still go at the top of the file.


@@ -19,9 +19,6 @@
handle /health {
    reverse_proxy server:1250
}
handle /v2* {
    reverse_proxy ui:80
}
handle {
    reverse_proxy web:3000
}


@@ -129,23 +129,6 @@ services:
    depends_on:
      - redis
  # Reflector v2 UI — Vite SPA served at /v2 behind Caddy.
  # Build-time env vars are baked into the bundle; pass VITE_OIDC_* via build args.
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile
      args:
        VITE_OIDC_AUTHORITY: ${VITE_OIDC_AUTHORITY:-}
        VITE_OIDC_CLIENT_ID: ${VITE_OIDC_CLIENT_ID:-}
        VITE_OIDC_SCOPE: ${VITE_OIDC_SCOPE:-openid profile email}
    image: monadicalsas/reflector-ui:latest
    restart: unless-stopped
    ports:
      - "${BIND_HOST:-127.0.0.1}:3001:80"
    depends_on:
      - server
  redis:
    image: redis:7.2-alpine
    restart: unless-stopped

gpu/self_hosted/uv.lock generated (106 changes)

@@ -764,17 +764,34 @@ wheels = [
[[package]]
name = "hf-xet"
version = "1.1.9"
version = "1.4.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/23/0f/5b60fc28ee7f8cc17a5114a584fd6b86e11c3e0a6e142a7f97a161e9640a/hf_xet-1.1.9.tar.gz", hash = "sha256:c99073ce404462e909f1d5839b2d14a3827b8fe75ed8aed551ba6609c026c803", size = 484242, upload-time = "2025-08-27T23:05:19.441Z" }
sdist = { url = "https://files.pythonhosted.org/packages/53/92/ec9ad04d0b5728dca387a45af7bc98fbb0d73b2118759f5f6038b61a57e8/hf_xet-1.4.3.tar.gz", hash = "sha256:8ddedb73c8c08928c793df2f3401ec26f95be7f7e516a7bee2fbb546f6676113", size = 670477, upload-time = "2026-03-31T22:40:07.874Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/de/12/56e1abb9a44cdef59a411fe8a8673313195711b5ecce27880eb9c8fa90bd/hf_xet-1.1.9-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:a3b6215f88638dd7a6ff82cb4e738dcbf3d863bf667997c093a3c990337d1160", size = 2762553, upload-time = "2025-08-27T23:05:15.153Z" },
{ url = "https://files.pythonhosted.org/packages/3a/e6/2d0d16890c5f21b862f5df3146519c182e7f0ae49b4b4bf2bd8a40d0b05e/hf_xet-1.1.9-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:9b486de7a64a66f9a172f4b3e0dfe79c9f0a93257c501296a2521a13495a698a", size = 2623216, upload-time = "2025-08-27T23:05:13.778Z" },
{ url = "https://files.pythonhosted.org/packages/81/42/7e6955cf0621e87491a1fb8cad755d5c2517803cea174229b0ec00ff0166/hf_xet-1.1.9-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a4c5a840c2c4e6ec875ed13703a60e3523bc7f48031dfd750923b2a4d1a5fc3c", size = 3186789, upload-time = "2025-08-27T23:05:12.368Z" },
{ url = "https://files.pythonhosted.org/packages/df/8b/759233bce05457f5f7ec062d63bbfd2d0c740b816279eaaa54be92aa452a/hf_xet-1.1.9-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:96a6139c9e44dad1c52c52520db0fffe948f6bce487cfb9d69c125f254bb3790", size = 3088747, upload-time = "2025-08-27T23:05:10.439Z" },
{ url = "https://files.pythonhosted.org/packages/6c/3c/28cc4db153a7601a996985bcb564f7b8f5b9e1a706c7537aad4b4809f358/hf_xet-1.1.9-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ad1022e9a998e784c97b2173965d07fe33ee26e4594770b7785a8cc8f922cd95", size = 3251429, upload-time = "2025-08-27T23:05:16.471Z" },
{ url = "https://files.pythonhosted.org/packages/84/17/7caf27a1d101bfcb05be85850d4aa0a265b2e1acc2d4d52a48026ef1d299/hf_xet-1.1.9-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:86754c2d6d5afb11b0a435e6e18911a4199262fe77553f8c50d75e21242193ea", size = 3354643, upload-time = "2025-08-27T23:05:17.828Z" },
{ url = "https://files.pythonhosted.org/packages/cd/50/0c39c9eed3411deadcc98749a6699d871b822473f55fe472fad7c01ec588/hf_xet-1.1.9-cp37-abi3-win_amd64.whl", hash = "sha256:5aad3933de6b725d61d51034e04174ed1dce7a57c63d530df0014dea15a40127", size = 2804797, upload-time = "2025-08-27T23:05:20.77Z" },
{ url = "https://files.pythonhosted.org/packages/72/43/724d307b34e353da0abd476e02f72f735cdd2bc86082dee1b32ea0bfee1d/hf_xet-1.4.3-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:7551659ba4f1e1074e9623996f28c3873682530aee0a846b7f2f066239228144", size = 3800935, upload-time = "2026-03-31T22:39:49.618Z" },
{ url = "https://files.pythonhosted.org/packages/2b/d2/8bee5996b699262edb87dbb54118d287c0e1b2fc78af7cdc41857ba5e3c4/hf_xet-1.4.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:bee693ada985e7045997f05f081d0e12c4c08bd7626dc397f8a7c487e6c04f7f", size = 3558942, upload-time = "2026-03-31T22:39:47.938Z" },
{ url = "https://files.pythonhosted.org/packages/c3/a1/e993d09cbe251196fb60812b09a58901c468127b7259d2bf0f68bf6088eb/hf_xet-1.4.3-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:21644b404bb0100fe3857892f752c4d09642586fd988e61501c95bbf44b393a3", size = 4207657, upload-time = "2026-03-31T22:39:39.69Z" },
{ url = "https://files.pythonhosted.org/packages/64/44/9eb6d21e5c34c63e5e399803a6932fa983cabdf47c0ecbcfe7ea97684b8c/hf_xet-1.4.3-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:987f09cfe418237812896a6736b81b1af02a3a6dcb4b4944425c4c4fca7a7cf8", size = 3986765, upload-time = "2026-03-31T22:39:37.936Z" },
{ url = "https://files.pythonhosted.org/packages/ea/7b/8ad6f16fdb82f5f7284a34b5ec48645bd575bdcd2f6f0d1644775909c486/hf_xet-1.4.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:60cf7fc43a99da0a853345cf86d23738c03983ee5249613a6305d3e57a5dca74", size = 4188162, upload-time = "2026-03-31T22:39:58.382Z" },
{ url = "https://files.pythonhosted.org/packages/1b/c4/39d6e136cbeea9ca5a23aad4b33024319222adbdc059ebcda5fc7d9d5ff4/hf_xet-1.4.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2815a49a7a59f3e2edf0cf113ae88e8cb2ca2a221bf353fb60c609584f4884d4", size = 4424525, upload-time = "2026-03-31T22:40:00.225Z" },
{ url = "https://files.pythonhosted.org/packages/46/f2/adc32dae6bdbc367853118b9878139ac869419a4ae7ba07185dc31251b76/hf_xet-1.4.3-cp313-cp313t-win_amd64.whl", hash = "sha256:42ee323265f1e6a81b0e11094564fb7f7e0ec75b5105ffd91ae63f403a11931b", size = 3671610, upload-time = "2026-03-31T22:40:10.42Z" },
{ url = "https://files.pythonhosted.org/packages/e2/19/25d897dcc3f81953e0c2cde9ec186c7a0fee413eb0c9a7a9130d87d94d3a/hf_xet-1.4.3-cp313-cp313t-win_arm64.whl", hash = "sha256:27c976ba60079fb8217f485b9c5c7fcd21c90b0367753805f87cb9f3cdc4418a", size = 3528529, upload-time = "2026-03-31T22:40:09.106Z" },
{ url = "https://files.pythonhosted.org/packages/ec/36/3e8f85ca9fe09b8de2b2e10c63b3b3353d7dda88a0b3d426dffbe7b8313b/hf_xet-1.4.3-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:5251d5ece3a81815bae9abab41cf7ddb7bcb8f56411bce0827f4a3071c92fdc6", size = 3801019, upload-time = "2026-03-31T22:39:56.651Z" },
{ url = "https://files.pythonhosted.org/packages/b5/9c/defb6cb1de28bccb7bd8d95f6e60f72a3d3fa4cb3d0329c26fb9a488bfe7/hf_xet-1.4.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1feb0f3abeacee143367c326a128a2e2b60868ec12a36c225afb1d6c5a05e6d2", size = 3558746, upload-time = "2026-03-31T22:39:54.766Z" },
{ url = "https://files.pythonhosted.org/packages/c1/bd/8d001191893178ff8e826e46ad5299446e62b93cd164e17b0ffea08832ec/hf_xet-1.4.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8b301fc150290ca90b4fccd079829b84bb4786747584ae08b94b4577d82fb791", size = 4207692, upload-time = "2026-03-31T22:39:46.246Z" },
{ url = "https://files.pythonhosted.org/packages/ce/48/6790b402803250e9936435613d3a78b9aaeee7973439f0918848dde58309/hf_xet-1.4.3-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:d972fbe95ddc0d3c0fc49b31a8a69f47db35c1e3699bf316421705741aab6653", size = 3986281, upload-time = "2026-03-31T22:39:44.648Z" },
{ url = "https://files.pythonhosted.org/packages/51/56/ea62552fe53db652a9099eda600b032d75554d0e86c12a73824bfedef88b/hf_xet-1.4.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c5b48db1ee344a805a1b9bd2cda9b6b65fe77ed3787bd6e87ad5521141d317cd", size = 4187414, upload-time = "2026-03-31T22:40:04.951Z" },
{ url = "https://files.pythonhosted.org/packages/7d/f5/bc1456d4638061bea997e6d2db60a1a613d7b200e0755965ec312dc1ef79/hf_xet-1.4.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:22bdc1f5fb8b15bf2831440b91d1c9bbceeb7e10c81a12e8d75889996a5c9da8", size = 4424368, upload-time = "2026-03-31T22:40:06.347Z" },
{ url = "https://files.pythonhosted.org/packages/e4/76/ab597bae87e1f06d18d3ecb8ed7f0d3c9a37037fc32ce76233d369273c64/hf_xet-1.4.3-cp314-cp314t-win_amd64.whl", hash = "sha256:0392c79b7cf48418cd61478c1a925246cf10639f4cd9d94368d8ca1e8df9ea07", size = 3672280, upload-time = "2026-03-31T22:40:16.401Z" },
{ url = "https://files.pythonhosted.org/packages/62/05/2e462d34e23a09a74d73785dbed71cc5dbad82a72eee2ad60a72a554155d/hf_xet-1.4.3-cp314-cp314t-win_arm64.whl", hash = "sha256:681c92a07796325778a79d76c67011764ecc9042a8c3579332b61b63ae512075", size = 3528945, upload-time = "2026-03-31T22:40:14.995Z" },
{ url = "https://files.pythonhosted.org/packages/ac/9f/9c23e4a447b8f83120798f9279d0297a4d1360bdbf59ef49ebec78fe2545/hf_xet-1.4.3-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:d0da85329eaf196e03e90b84c2d0aca53bd4573d097a75f99609e80775f98025", size = 3805048, upload-time = "2026-03-31T22:39:53.105Z" },
{ url = "https://files.pythonhosted.org/packages/0b/f8/7aacb8e5f4a7899d39c787b5984e912e6c18b11be136ef13947d7a66d265/hf_xet-1.4.3-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:e23717ce4186b265f69afa66e6f0069fe7efbf331546f5c313d00e123dc84583", size = 3562178, upload-time = "2026-03-31T22:39:51.295Z" },
{ url = "https://files.pythonhosted.org/packages/df/9a/a24b26dc8a65f0ecc0fe5be981a19e61e7ca963b85e062c083f3a9100529/hf_xet-1.4.3-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc360b70c815bf340ed56c7b8c63aacf11762a4b099b2fe2c9bd6d6068668c08", size = 4212320, upload-time = "2026-03-31T22:39:42.922Z" },
{ url = "https://files.pythonhosted.org/packages/53/60/46d493db155d2ee2801b71fb1b0fd67696359047fdd8caee2c914cc50c79/hf_xet-1.4.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:39f2d2e9654cd9b4319885733993807aab6de9dfbd34c42f0b78338d6617421f", size = 3991546, upload-time = "2026-03-31T22:39:41.335Z" },
{ url = "https://files.pythonhosted.org/packages/bc/f5/067363e1c96c6b17256910830d1b54099d06287e10f4ec6ec4e7e08371fc/hf_xet-1.4.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:49ad8a8cead2b56051aa84d7fce3e1335efe68df3cf6c058f22a65513885baac", size = 4193200, upload-time = "2026-03-31T22:40:01.936Z" },
{ url = "https://files.pythonhosted.org/packages/42/4b/53951592882d9c23080c7644542fda34a3813104e9e11fa1a7d82d419cb8/hf_xet-1.4.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:7716d62015477a70ea272d2d68cd7cad140f61c52ee452e133e139abfe2c17ba", size = 4429392, upload-time = "2026-03-31T22:40:03.492Z" },
{ url = "https://files.pythonhosted.org/packages/8a/21/75a6c175b4e79662ad8e62f46a40ce341d8d6b206b06b4320d07d55b188c/hf_xet-1.4.3-cp37-abi3-win_amd64.whl", hash = "sha256:6b591fcad34e272a5b02607485e4f2a1334aebf1bc6d16ce8eb1eb8978ac2021", size = 3677359, upload-time = "2026-03-31T22:40:13.619Z" },
{ url = "https://files.pythonhosted.org/packages/8a/7c/44314ecd0e89f8b2b51c9d9e5e7a60a9c1c82024ac471d415860557d3cd8/hf_xet-1.4.3-cp37-abi3-win_arm64.whl", hash = "sha256:7c2c7e20bcfcc946dc67187c203463f5e932e395845d098cc2a93f5b67ca0b47", size = 3533664, upload-time = "2026-03-31T22:40:12.152Z" },
]
[[package]]
@@ -829,21 +846,22 @@ wheels = [
[[package]]
name = "huggingface-hub"
version = "0.34.4"
version = "1.9.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
{ name = "fsspec" },
{ name = "hf-xet", marker = "platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'" },
{ name = "hf-xet", marker = "platform_machine == 'AMD64' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'" },
{ name = "httpx" },
{ name = "packaging" },
{ name = "pyyaml" },
{ name = "requests" },
{ name = "tqdm" },
{ name = "typer" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/45/c9/bdbe19339f76d12985bc03572f330a01a93c04dffecaaea3061bdd7fb892/huggingface_hub-0.34.4.tar.gz", hash = "sha256:a4228daa6fb001be3f4f4bdaf9a0db00e1739235702848df00885c9b5742c85c", size = 459768, upload-time = "2025-08-08T09:14:52.365Z" }
sdist = { url = "https://files.pythonhosted.org/packages/44/40/68d9b286b125d9318ae95c8f8b206e8672e7244b0eea61ebb4a88037638c/huggingface_hub-1.9.1.tar.gz", hash = "sha256:442af372207cc24dcb089caf507fcd7dbc1217c11d6059a06f6b90afe64e8bd2", size = 750355, upload-time = "2026-04-07T13:47:59.167Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/39/7b/bb06b061991107cd8783f300adff3e7b7f284e330fd82f507f2a1417b11d/huggingface_hub-0.34.4-py3-none-any.whl", hash = "sha256:9b365d781739c93ff90c359844221beef048403f1bc1f1c123c191257c3c890a", size = 561452, upload-time = "2025-08-08T09:14:50.159Z" },
{ url = "https://files.pythonhosted.org/packages/3d/af/10a89c54937dccf6c10792770f362d96dd67aedfde108e6e1fd7a0836789/huggingface_hub-1.9.1-py3-none-any.whl", hash = "sha256:8dae771b969b318203727a6c6c5209d25e661f6f0dd010fc09cc4a12cf81c657", size = 637356, upload-time = "2026-04-07T13:47:57.239Z" },
]
[[package]]
@@ -2657,27 +2675,28 @@ wheels = [
[[package]]
name = "tokenizers"
version = "0.22.0"
version = "0.22.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "huggingface-hub" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5e/b4/c1ce3699e81977da2ace8b16d2badfd42b060e7d33d75c4ccdbf9dc920fa/tokenizers-0.22.0.tar.gz", hash = "sha256:2e33b98525be8453f355927f3cab312c36cd3e44f4d7e9e97da2fa94d0a49dcb", size = 362771, upload-time = "2025-08-29T10:25:33.914Z" }
sdist = { url = "https://files.pythonhosted.org/packages/73/6f/f80cfef4a312e1fb34baf7d85c72d4411afde10978d4657f8cdd811d3ccc/tokenizers-0.22.2.tar.gz", hash = "sha256:473b83b915e547aa366d1eee11806deaf419e17be16310ac0a14077f1e28f917", size = 372115, upload-time = "2026-01-05T10:45:15.988Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6d/b1/18c13648edabbe66baa85fe266a478a7931ddc0cd1ba618802eb7b8d9865/tokenizers-0.22.0-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:eaa9620122a3fb99b943f864af95ed14c8dfc0f47afa3b404ac8c16b3f2bb484", size = 3081954, upload-time = "2025-08-29T10:25:24.993Z" },
{ url = "https://files.pythonhosted.org/packages/c2/02/c3c454b641bd7c4f79e4464accfae9e7dfc913a777d2e561e168ae060362/tokenizers-0.22.0-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:71784b9ab5bf0ff3075bceeb198149d2c5e068549c0d18fe32d06ba0deb63f79", size = 2945644, upload-time = "2025-08-29T10:25:23.405Z" },
{ url = "https://files.pythonhosted.org/packages/55/02/d10185ba2fd8c2d111e124c9d92de398aee0264b35ce433f79fb8472f5d0/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ec5b71f668a8076802b0241a42387d48289f25435b86b769ae1837cad4172a17", size = 3254764, upload-time = "2025-08-29T10:25:12.445Z" },
{ url = "https://files.pythonhosted.org/packages/13/89/17514bd7ef4bf5bfff58e2b131cec0f8d5cea2b1c8ffe1050a2c8de88dbb/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ea8562fa7498850d02a16178105b58803ea825b50dc9094d60549a7ed63654bb", size = 3161654, upload-time = "2025-08-29T10:25:15.493Z" },
{ url = "https://files.pythonhosted.org/packages/5a/d8/bac9f3a7ef6dcceec206e3857c3b61bb16c6b702ed7ae49585f5bd85c0ef/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4136e1558a9ef2e2f1de1555dcd573e1cbc4a320c1a06c4107a3d46dc8ac6e4b", size = 3511484, upload-time = "2025-08-29T10:25:20.477Z" },
{ url = "https://files.pythonhosted.org/packages/aa/27/9c9800eb6763683010a4851db4d1802d8cab9cec114c17056eccb4d4a6e0/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cdf5954de3962a5fd9781dc12048d24a1a6f1f5df038c6e95db328cd22964206", size = 3712829, upload-time = "2025-08-29T10:25:17.154Z" },
{ url = "https://files.pythonhosted.org/packages/10/e3/b1726dbc1f03f757260fa21752e1921445b5bc350389a8314dd3338836db/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8337ca75d0731fc4860e6204cc24bb36a67d9736142aa06ed320943b50b1e7ed", size = 3408934, upload-time = "2025-08-29T10:25:18.76Z" },
{ url = "https://files.pythonhosted.org/packages/d4/61/aeab3402c26874b74bb67a7f2c4b569dde29b51032c5384db592e7b216f4/tokenizers-0.22.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a89264e26f63c449d8cded9061adea7b5de53ba2346fc7e87311f7e4117c1cc8", size = 3345585, upload-time = "2025-08-29T10:25:22.08Z" },
{ url = "https://files.pythonhosted.org/packages/bc/d3/498b4a8a8764cce0900af1add0f176ff24f475d4413d55b760b8cdf00893/tokenizers-0.22.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:790bad50a1b59d4c21592f9c3cf5e5cf9c3c7ce7e1a23a739f13e01fb1be377a", size = 9322986, upload-time = "2025-08-29T10:25:26.607Z" },
{ url = "https://files.pythonhosted.org/packages/a2/62/92378eb1c2c565837ca3cb5f9569860d132ab9d195d7950c1ea2681dffd0/tokenizers-0.22.0-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:76cf6757c73a10ef10bf06fa937c0ec7393d90432f543f49adc8cab3fb6f26cb", size = 9276630, upload-time = "2025-08-29T10:25:28.349Z" },
{ url = "https://files.pythonhosted.org/packages/eb/f0/342d80457aa1cda7654327460f69db0d69405af1e4c453f4dc6ca7c4a76e/tokenizers-0.22.0-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:1626cb186e143720c62c6c6b5371e62bbc10af60481388c0da89bc903f37ea0c", size = 9547175, upload-time = "2025-08-29T10:25:29.989Z" },
{ url = "https://files.pythonhosted.org/packages/14/84/8aa9b4adfc4fbd09381e20a5bc6aa27040c9c09caa89988c01544e008d18/tokenizers-0.22.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:da589a61cbfea18ae267723d6b029b84598dc8ca78db9951d8f5beff72d8507c", size = 9692735, upload-time = "2025-08-29T10:25:32.089Z" },
{ url = "https://files.pythonhosted.org/packages/bf/24/83ee2b1dc76bfe05c3142e7d0ccdfe69f0ad2f1ebf6c726cea7f0874c0d0/tokenizers-0.22.0-cp39-abi3-win32.whl", hash = "sha256:dbf9d6851bddae3e046fedfb166f47743c1c7bd11c640f0691dd35ef0bcad3be", size = 2471915, upload-time = "2025-08-29T10:25:36.411Z" },
{ url = "https://files.pythonhosted.org/packages/d1/9b/0e0bf82214ee20231845b127aa4a8015936ad5a46779f30865d10e404167/tokenizers-0.22.0-cp39-abi3-win_amd64.whl", hash = "sha256:c78174859eeaee96021f248a56c801e36bfb6bd5b067f2e95aa82445ca324f00", size = 2680494, upload-time = "2025-08-29T10:25:35.14Z" },
{ url = "https://files.pythonhosted.org/packages/92/97/5dbfabf04c7e348e655e907ed27913e03db0923abb5dfdd120d7b25630e1/tokenizers-0.22.2-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:544dd704ae7238755d790de45ba8da072e9af3eea688f698b137915ae959281c", size = 3100275, upload-time = "2026-01-05T10:41:02.158Z" },
{ url = "https://files.pythonhosted.org/packages/2e/47/174dca0502ef88b28f1c9e06b73ce33500eedfac7a7692108aec220464e7/tokenizers-0.22.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:1e418a55456beedca4621dbab65a318981467a2b188e982a23e117f115ce5001", size = 2981472, upload-time = "2026-01-05T10:41:00.276Z" },
{ url = "https://files.pythonhosted.org/packages/d6/84/7990e799f1309a8b87af6b948f31edaa12a3ed22d11b352eaf4f4b2e5753/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2249487018adec45d6e3554c71d46eb39fa8ea67156c640f7513eb26f318cec7", size = 3290736, upload-time = "2026-01-05T10:40:32.165Z" },
{ url = "https://files.pythonhosted.org/packages/78/59/09d0d9ba94dcd5f4f1368d4858d24546b4bdc0231c2354aa31d6199f0399/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:25b85325d0815e86e0bac263506dd114578953b7b53d7de09a6485e4a160a7dd", size = 3168835, upload-time = "2026-01-05T10:40:38.847Z" },
{ url = "https://files.pythonhosted.org/packages/47/50/b3ebb4243e7160bda8d34b731e54dd8ab8b133e50775872e7a434e524c28/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfb88f22a209ff7b40a576d5324bf8286b519d7358663db21d6246fb17eea2d5", size = 3521673, upload-time = "2026-01-05T10:40:56.614Z" },
{ url = "https://files.pythonhosted.org/packages/e0/fa/89f4cb9e08df770b57adb96f8cbb7e22695a4cb6c2bd5f0c4f0ebcf33b66/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1c774b1276f71e1ef716e5486f21e76333464f47bece56bbd554485982a9e03e", size = 3724818, upload-time = "2026-01-05T10:40:44.507Z" },
{ url = "https://files.pythonhosted.org/packages/64/04/ca2363f0bfbe3b3d36e95bf67e56a4c88c8e3362b658e616d1ac185d47f2/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:df6c4265b289083bf710dff49bc51ef252f9d5be33a45ee2bed151114a56207b", size = 3379195, upload-time = "2026-01-05T10:40:51.139Z" },
{ url = "https://files.pythonhosted.org/packages/2e/76/932be4b50ef6ccedf9d3c6639b056a967a86258c6d9200643f01269211ca/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:369cc9fc8cc10cb24143873a0d95438bb8ee257bb80c71989e3ee290e8d72c67", size = 3274982, upload-time = "2026-01-05T10:40:58.331Z" },
{ url = "https://files.pythonhosted.org/packages/1d/28/5f9f5a4cc211b69e89420980e483831bcc29dade307955cc9dc858a40f01/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:29c30b83d8dcd061078b05ae0cb94d3c710555fbb44861139f9f83dcca3dc3e4", size = 9478245, upload-time = "2026-01-05T10:41:04.053Z" },
{ url = "https://files.pythonhosted.org/packages/6c/fb/66e2da4704d6aadebf8cb39f1d6d1957df667ab24cff2326b77cda0dcb85/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:37ae80a28c1d3265bb1f22464c856bd23c02a05bb211e56d0c5301a435be6c1a", size = 9560069, upload-time = "2026-01-05T10:45:10.673Z" },
{ url = "https://files.pythonhosted.org/packages/16/04/fed398b05caa87ce9b1a1bb5166645e38196081b225059a6edaff6440fac/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:791135ee325f2336f498590eb2f11dc5c295232f288e75c99a36c5dbce63088a", size = 9899263, upload-time = "2026-01-05T10:45:12.559Z" },
{ url = "https://files.pythonhosted.org/packages/05/a1/d62dfe7376beaaf1394917e0f8e93ee5f67fea8fcf4107501db35996586b/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:38337540fbbddff8e999d59970f3c6f35a82de10053206a7562f1ea02d046fa5", size = 10033429, upload-time = "2026-01-05T10:45:14.333Z" },
{ url = "https://files.pythonhosted.org/packages/fd/18/a545c4ea42af3df6effd7d13d250ba77a0a86fb20393143bbb9a92e434d4/tokenizers-0.22.2-cp39-abi3-win32.whl", hash = "sha256:a6bf3f88c554a2b653af81f3204491c818ae2ac6fbc09e76ef4773351292bc92", size = 2502363, upload-time = "2026-01-05T10:45:20.593Z" },
{ url = "https://files.pythonhosted.org/packages/65/71/0670843133a43d43070abeb1949abfdef12a86d490bea9cd9e18e37c5ff7/tokenizers-0.22.2-cp39-abi3-win_amd64.whl", hash = "sha256:c9ea31edff2968b44a88f97d784c2f16dc0729b8b143ed004699ebca91f05c48", size = 2747786, upload-time = "2026-01-05T10:45:18.411Z" },
{ url = "https://files.pythonhosted.org/packages/72/f4/0de46cfa12cdcbcd464cc59fde36912af405696f687e53a091fb432f694c/tokenizers-0.22.2-cp39-abi3-win_arm64.whl", hash = "sha256:9ce725d22864a1e965217204946f830c37876eee3b2ba6fc6255e8e903d5fcbc", size = 2612133, upload-time = "2026-01-05T10:45:17.232Z" },
]
[[package]]
@@ -2804,7 +2823,7 @@ wheels = [
[[package]]
name = "transformers"
version = "4.56.1"
version = "5.0.0rc3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
@@ -2817,10 +2836,11 @@ dependencies = [
{ name = "safetensors" },
{ name = "tokenizers" },
{ name = "tqdm" },
{ name = "typer-slim" },
]
sdist = { url = "https://files.pythonhosted.org/packages/89/21/dc88ef3da1e49af07ed69386a11047a31dcf1aaf4ded3bc4b173fbf94116/transformers-4.56.1.tar.gz", hash = "sha256:0d88b1089a563996fc5f2c34502f10516cad3ea1aa89f179f522b54c8311fe74", size = 9855473, upload-time = "2025-09-04T20:47:13.14Z" }
sdist = { url = "https://files.pythonhosted.org/packages/3f/a3/7c116a8d85f69ea7749cf4c2df79e64c35d028e5fc7ea0168f299d03b8c7/transformers-5.0.0rc3.tar.gz", hash = "sha256:a0315b92b7e087617ade42ec9e6e92ee7620541cc5d6a3331886c52cbe306f5c", size = 8388520, upload-time = "2026-01-14T16:49:02.952Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/71/7c/283c3dd35e00e22a7803a0b2a65251347b745474a82399be058bde1c9f15/transformers-4.56.1-py3-none-any.whl", hash = "sha256:1697af6addfb6ddbce9618b763f4b52d5a756f6da4899ffd1b4febf58b779248", size = 11608197, upload-time = "2025-09-04T20:47:04.895Z" },
{ url = "https://files.pythonhosted.org/packages/1e/f2/ae2b8968764253bdf38a48dee3c299b8d0bedf7c8ffbe3449fca9bd95338/transformers-5.0.0rc3-py3-none-any.whl", hash = "sha256:383fad27f4f73092d330e45fae384681e5c8521e1dc1cf6cb1a297780e68bf2d", size = 10107087, upload-time = "2026-01-14T16:48:59.393Z" },
]
[[package]]
@@ -2838,17 +2858,29 @@ wheels = [
[[package]]
name = "typer"
version = "0.17.3"
version = "0.24.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-doc" },
{ name = "click" },
{ name = "rich" },
{ name = "shellingham" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/dd/82/f4bfed3bc18c6ebd6f828320811bbe4098f92a31adf4040bee59c4ae02ea/typer-0.17.3.tar.gz", hash = "sha256:0c600503d472bcf98d29914d4dcd67f80c24cc245395e2e00ba3603c9332e8ba", size = 103517, upload-time = "2025-08-30T12:35:24.05Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f5/24/cb09efec5cc954f7f9b930bf8279447d24618bb6758d4f6adf2574c41780/typer-0.24.1.tar.gz", hash = "sha256:e39b4732d65fbdcde189ae76cf7cd48aeae72919dea1fdfc16593be016256b45", size = 118613, upload-time = "2026-02-21T16:54:40.609Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ca/e8/b3d537470e8404659a6335e7af868e90657efb73916ef31ddf3d8b9cb237/typer-0.17.3-py3-none-any.whl", hash = "sha256:643919a79182ab7ac7581056d93c6a2b865b026adf2872c4d02c72758e6f095b", size = 46494, upload-time = "2025-08-30T12:35:22.391Z" },
{ url = "https://files.pythonhosted.org/packages/4a/91/48db081e7a63bb37284f9fbcefda7c44c277b18b0e13fbc36ea2335b71e6/typer-0.24.1-py3-none-any.whl", hash = "sha256:112c1f0ce578bfb4cab9ffdabc68f031416ebcc216536611ba21f04e9aa84c9e", size = 56085, upload-time = "2026-02-21T16:54:41.616Z" },
]
[[package]]
name = "typer-slim"
version = "0.24.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a7/a7/e6aecc4b4eb59598829a3b5076a93aff291b4fdaa2ded25efc4e1f4d219c/typer_slim-0.24.0.tar.gz", hash = "sha256:f0ed36127183f52ae6ced2ecb2521789995992c521a46083bfcdbb652d22ad34", size = 4776, upload-time = "2026-02-16T22:08:51.2Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/24/5480c20380dfd18cf33d14784096dca45a24eae6102e91d49a718d3b6855/typer_slim-0.24.0-py3-none-any.whl", hash = "sha256:d5d7ee1ee2834d5020c7c616ed5e0d0f29b9a4b1dd283bdebae198ec09778d0e", size = 3394, upload-time = "2026-02-16T22:08:49.92Z" },
]
[[package]]

Binary file not shown.



@@ -1,161 +0,0 @@
/*! Aspekta | OFL v1.1 License | Ivo Dolenc (c) 2025 | https://github.com/ivodolenc/aspekta */
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 50;
font-display: swap;
src: url('Aspekta-50.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 100;
font-display: swap;
src: url('Aspekta-100.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 150;
font-display: swap;
src: url('Aspekta-150.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 200;
font-display: swap;
src: url('Aspekta-200.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 250;
font-display: swap;
src: url('Aspekta-250.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 300;
font-display: swap;
src: url('Aspekta-300.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 350;
font-display: swap;
src: url('Aspekta-350.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 400;
font-display: swap;
src: url('Aspekta-400.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 450;
font-display: swap;
src: url('Aspekta-450.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 500;
font-display: swap;
src: url('Aspekta-500.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 550;
font-display: swap;
src: url('Aspekta-550.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 600;
font-display: swap;
src: url('Aspekta-600.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 650;
font-display: swap;
src: url('Aspekta-650.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 700;
font-display: swap;
src: url('Aspekta-700.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 750;
font-display: swap;
src: url('Aspekta-750.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 800;
font-display: swap;
src: url('Aspekta-800.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 850;
font-display: swap;
src: url('Aspekta-850.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 900;
font-display: swap;
src: url('Aspekta-900.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 950;
font-display: swap;
src: url('Aspekta-950.woff2') format('woff2');
}
@font-face {
font-family: 'Aspekta';
font-style: normal;
font-weight: 1000;
font-display: swap;
src: url('Aspekta-1000.woff2') format('woff2');
}


@@ -1494,9 +1494,6 @@ $CUSTOM_DOMAIN {
}
handle /health {
reverse_proxy server:1250
}
handle /v2* {
reverse_proxy ui:80
}${lk_proxy_block}${hatchet_proxy_block}
handle {
reverse_proxy web:3000
@@ -1514,9 +1511,6 @@ $CUSTOM_DOMAIN {
}
handle /health {
reverse_proxy server:1250
}
handle /v2* {
reverse_proxy ui:80
}${lk_proxy_block}${hatchet_proxy_block}
handle {
reverse_proxy web:3000
@@ -1538,9 +1532,6 @@ CADDYEOF
}
handle /health {
reverse_proxy server:1250
}
handle /v2* {
reverse_proxy ui:80
}${lk_proxy_block}${hatchet_proxy_block}
handle {
reverse_proxy web:3000
@@ -1581,12 +1572,9 @@ step_services() {
info "Building frontend image from source..."
compose_cmd build web
ok "Frontend image built"
info "Building v2 UI image from source..."
compose_cmd build ui
ok "v2 UI image built"
else
info "Pulling latest backend and frontend images..."
compose_cmd pull server web ui || warn "Pull failed — using cached images"
compose_cmd pull server web || warn "Pull failed — using cached images"
fi
# Hatchet is always needed (all processing pipelines use it)
@@ -1749,24 +1737,6 @@ step_health() {
warn "Frontend not responding. Check: docker compose logs web"
fi
# v2 UI
info "Waiting for v2 UI..."
local ui_ok=false
for i in $(seq 1 30); do
if curl -sf http://localhost:3001/v2/ > /dev/null 2>&1; then
ui_ok=true
break
fi
echo -ne "\r Waiting for v2 UI... ($i/30)"
sleep 3
done
echo ""
if [[ "$ui_ok" == "true" ]]; then
ok "v2 UI healthy"
else
warn "v2 UI not responding. Check: docker compose logs ui"
fi
# Caddy
if [[ "$USE_CADDY" == "true" ]]; then
sleep 2
@@ -2009,25 +1979,20 @@ EOF
if [[ "$USE_CADDY" == "true" ]]; then
if [[ -n "$CUSTOM_DOMAIN" ]]; then
echo " App: https://$CUSTOM_DOMAIN"
echo " App v2: https://$CUSTOM_DOMAIN/v2/"
echo " API: https://$CUSTOM_DOMAIN/v1/"
elif [[ -n "$PRIMARY_IP" ]]; then
echo " App: https://$PRIMARY_IP (accept self-signed cert in browser)"
echo " App v2: https://$PRIMARY_IP/v2/"
echo " API: https://$PRIMARY_IP/v1/"
echo " Local: https://localhost"
else
echo " App: https://localhost (accept self-signed cert in browser)"
echo " App v2: https://localhost/v2/"
echo " API: https://localhost/v1/"
fi
elif [[ -n "$PRIMARY_IP" ]]; then
echo " App: http://$PRIMARY_IP:3000"
echo " App v2: http://$PRIMARY_IP:3001/v2/"
echo " API: http://$PRIMARY_IP:1250"
else
echo " App: http://localhost:3000"
echo " App v2: http://localhost:3001/v2/"
echo " API: http://localhost:1250"
fi
echo ""


@@ -1,43 +0,0 @@
"""add store_video to room and meeting
Revision ID: c1d2e3f4a5b6
Revises: b4c7e8f9a012
Create Date: 2026-04-08 00:00:00.000000
"""
from typing import Sequence, Union
import sqlalchemy as sa
from alembic import op
revision: str = "c1d2e3f4a5b6"
down_revision: Union[str, None] = "b4c7e8f9a012"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
op.add_column(
"room",
sa.Column(
"store_video",
sa.Boolean(),
nullable=False,
server_default=sa.false(),
),
)
op.add_column(
"meeting",
sa.Column(
"store_video",
sa.Boolean(),
nullable=False,
server_default=sa.false(),
),
)
def downgrade() -> None:
op.drop_column("meeting", "store_video")
op.drop_column("room", "store_video")


@@ -69,7 +69,6 @@ meetings = sa.Table(
sa.Column("daily_composed_video_duration", sa.Integer, nullable=True),
# Email recipients for transcript notification
sa.Column("email_recipients", JSONB, nullable=True),
sa.Column("store_video", sa.Boolean, nullable=False, server_default=sa.false()),
sa.Index("idx_meeting_room_id", "room_id"),
sa.Index("idx_meeting_calendar_event", "calendar_event_id"),
)
@@ -123,7 +122,6 @@ class Meeting(BaseModel):
# Email recipients for transcript notification
# Each entry is {"email": str, "include_link": bool} or a legacy plain str
email_recipients: list[dict | str] | None = None
store_video: bool = False
class MeetingController:
@@ -154,7 +152,6 @@ class MeetingController:
calendar_event_id=calendar_event_id,
calendar_metadata=calendar_metadata,
platform=room.platform,
store_video=room.store_video,
)
query = meetings.insert().values(**meeting.model_dump())
await get_database().execute(query)


@@ -64,9 +64,6 @@ rooms = sqlalchemy.Table(
server_default=sqlalchemy.sql.false(),
),
sqlalchemy.Column("email_transcript_to", sqlalchemy.String, nullable=True),
sqlalchemy.Column(
"store_video", sqlalchemy.Boolean, nullable=False, server_default=false()
),
sqlalchemy.Index("idx_room_is_shared", "is_shared"),
sqlalchemy.Index("idx_room_ics_enabled", "ics_enabled"),
)
@@ -97,7 +94,6 @@ class Room(BaseModel):
platform: Platform = Field(default_factory=lambda: settings.DEFAULT_VIDEO_PLATFORM)
skip_consent: bool = False
email_transcript_to: str | None = None
store_video: bool = False
class RoomController:
@@ -154,7 +150,6 @@ class RoomController:
platform: Platform = settings.DEFAULT_VIDEO_PLATFORM,
skip_consent: bool = False,
email_transcript_to: str | None = None,
store_video: bool = False,
):
"""
Add a new room
@@ -181,7 +176,6 @@ class RoomController:
"platform": platform,
"skip_consent": skip_consent,
"email_transcript_to": email_transcript_to,
"store_video": store_video,
}
room = Room(**room_data)


@@ -175,9 +175,6 @@ class SearchResult(BaseModel):
total_match_count: NonNegativeInt = Field(
default=0, description="Total number of matches found in the transcript"
)
speaker_count: NonNegativeInt = Field(
default=0, description="Number of distinct speakers in the transcript"
)
change_seq: int | None = None
@field_serializer("created_at", when_used="json")
@@ -365,7 +362,6 @@ class SearchController:
transcripts.c.change_seq,
transcripts.c.webvtt,
transcripts.c.long_summary,
transcripts.c.participants,
sqlalchemy.case(
(
transcripts.c.room_id.isnot(None) & rooms.c.id.is_(None),
@@ -462,12 +458,6 @@ class SearchController:
long_summary_r: str | None = r_dict.pop("long_summary", None)
long_summary: NonEmptyString = try_parse_non_empty_string(long_summary_r)
room_name: str | None = r_dict.pop("room_name", None)
participants_raw = r_dict.pop("participants", None) or []
speaker_count = (
len({p.get("speaker") for p in participants_raw if isinstance(p, dict)})
if isinstance(participants_raw, list)
else 0
)
db_result = SearchResultDB.model_validate(r_dict)
at_least_one_source = webvtt is not None or long_summary is not None
@@ -485,7 +475,6 @@ class SearchController:
room_name=room_name,
search_snippets=snippets,
total_match_count=total_match_count,
speaker_count=speaker_count,
)
try:


@@ -446,19 +446,10 @@ class TranscriptController:
col for col in transcripts.c if col.name not in exclude_columns
]
# Cheap speaker_count via JSON array length on the participants column
# (same column already stored on every transcript, no extra queries).
# COALESCE handles transcripts where participants is NULL.
speaker_count_col = sqlalchemy.func.coalesce(
sqlalchemy.func.json_array_length(transcripts.c.participants),
0,
).label("speaker_count")
query = query.with_only_columns(
transcript_columns
+ [
rooms.c.name.label("room_name"),
speaker_count_col,
]
)
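Both removals above drop a per-transcript distinct-speaker count derived from the stored `participants` payload — once in Python over the raw list, once via `json_array_length` in SQL. A hedged pure-Python sketch of the shared computation (function name illustrative):

```python
def speaker_count(participants) -> int:
    """Count distinct `speaker` values in a participants list like [{"speaker": 1}, ...]."""
    # Mirrors the removed code: non-list or missing payloads count as zero speakers.
    if not isinstance(participants, list):
        return 0
    return len({p.get("speaker") for p in participants if isinstance(p, dict)})
```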


@@ -10,7 +10,6 @@ from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.workflows.daily_multitrack_pipeline import (
daily_multitrack_pipeline,
)
from reflector.hatchet.workflows.failed_runs_monitor import failed_runs_monitor
from reflector.hatchet.workflows.file_pipeline import file_pipeline
from reflector.hatchet.workflows.live_post_pipeline import live_post_pipeline
from reflector.hatchet.workflows.subject_processing import subject_workflow
@@ -55,6 +54,10 @@ def main():
]
)
if _zulip_dag_enabled:
from reflector.hatchet.workflows.failed_runs_monitor import ( # noqa: PLC0415
failed_runs_monitor,
)
workflows.append(failed_runs_monitor)
logger.info(
"FailedRunsMonitor cron enabled",


@@ -18,11 +18,10 @@ import json
import tempfile
import time
from contextlib import asynccontextmanager
from datetime import datetime, timedelta
from datetime import timedelta
from pathlib import Path
from typing import Any, Callable, Coroutine, Protocol, TypeVar
import databases
import httpx
from hatchet_sdk import (
ConcurrencyExpression,
@@ -84,7 +83,6 @@ from reflector.hatchet.workflows.topic_chunk_processing import (
topic_chunk_workflow,
)
from reflector.hatchet.workflows.track_processing import TrackInput, track_workflow
from reflector.llm import LLM
from reflector.logger import logger
from reflector.pipelines import topic_processing
from reflector.processors.audio_mixdown_auto import AudioMixdownAutoProcessor
@@ -97,9 +95,7 @@ from reflector.processors.summary.prompts import (
from reflector.processors.summary.summary_builder import SummaryBuilder
from reflector.processors.types import TitleSummary, Word
from reflector.processors.types import Transcript as TranscriptType
from reflector.redis_cache import get_async_redis_client
from reflector.settings import settings
from reflector.storage import get_source_storage, get_transcripts_storage
from reflector.utils.audio_constants import (
PRESIGNED_URL_EXPIRATION_SECONDS,
WAVEFORM_SEGMENTS,
@@ -109,16 +105,8 @@ from reflector.utils.daily import (
filter_cam_audio_tracks,
parse_daily_recording_filename,
)
from reflector.utils.livekit import parse_livekit_track_filepath
from reflector.utils.string import NonEmptyString, assert_non_none_and_non_empty
from reflector.utils.transcript_constants import (
compute_max_subjects,
compute_topic_chunk_size,
)
from reflector.utils.webhook import (
fetch_transcript_webhook_payload,
send_webhook_request,
)
from reflector.utils.transcript_constants import TOPIC_CHUNK_WORD_COUNT
from reflector.zulip import post_transcript_notification
@@ -147,6 +135,8 @@ async def fresh_db_connection():
The real fix would be making the db module fork-aware instead of bypassing it.
Current pattern is acceptable given Hatchet's process model.
"""
import databases # noqa: PLC0415
from reflector.db import _database_context # noqa: PLC0415
_database_context.set(None)
@@ -183,6 +173,8 @@ async def set_workflow_error_status(transcript_id: NonEmptyString) -> bool:
def _spawn_storage():
"""Create fresh storage instance for writing to our transcript bucket."""
from reflector.storage import get_transcripts_storage # noqa: PLC0415
return get_transcripts_storage()
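The `# noqa: PLC0415` deferred imports throughout these tasks implement one pattern: import and construct clients inside the task body, so each forked Hatchet worker builds its own connections instead of inheriting the parent process's pools. A generic sketch of that pattern, using sqlite3 as a stand-in for the real database client:

```python
def task_body() -> int:
    # Deferred import: executed when the (possibly forked) worker runs the task,
    # so the child process never reuses a connection opened by the parent.
    import sqlite3  # stand-in for a real database client module

    conn = sqlite3.connect(":memory:")  # fresh per-invocation connection
    try:
        return conn.execute("SELECT 1").fetchone()[0]
    finally:
        conn.close()
```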
@@ -396,6 +388,10 @@ async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsRe
if input.source_platform == "livekit":
# LiveKit: participant identity is in the track dict or can be parsed from filepath
from reflector.utils.livekit import (
parse_livekit_track_filepath, # noqa: PLC0415
)
# Look up identity → Reflector user_id mapping from Redis
# (stored at join time in rooms.py)
identity_to_user_id: dict[str, str] = {}
@@ -403,6 +399,9 @@ async def get_participants(input: PipelineInput, ctx: Context) -> ParticipantsRe
from reflector.db.meetings import (
meetings_controller as mc, # noqa: PLC0415
)
from reflector.redis_cache import (
get_async_redis_client, # noqa: PLC0415
)
meeting = (
await mc.get_by_id(transcript.meeting_id)
@@ -544,6 +543,12 @@ async def process_tracks(input: PipelineInput, ctx: Context) -> ProcessTracksRes
# OGG files don't have embedded start_time metadata, so we pre-calculate.
track_padding: dict[int, float] = {}
if input.source_platform == "livekit":
from datetime import datetime # noqa: PLC0415
from reflector.utils.livekit import (
parse_livekit_track_filepath, # noqa: PLC0415
)
timestamps = []
for i, track in enumerate(input.tracks):
ts_str = track.get("timestamp")
@@ -880,8 +885,7 @@ async def detect_topics(input: PipelineInput, ctx: Context) -> TopicsResult:
transcripts_controller,
)
duration_seconds = words[-1].end - words[0].start if words else 0
chunk_size = compute_topic_chunk_size(duration_seconds, len(words))
chunk_size = TOPIC_CHUNK_WORD_COUNT
chunks = []
for i in range(0, len(words), chunk_size):
chunk_words = words[i : i + chunk_size]
@@ -971,7 +975,7 @@ async def detect_topics(input: PipelineInput, ctx: Context) -> TopicsResult:
ctx.log(f"detect_topics complete: found {len(topics_list)} topics")
return TopicsResult(topics=topics_list, duration_seconds=duration_seconds)
return TopicsResult(topics=topics_list)
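The change above replaces the adaptive `compute_topic_chunk_size` with a fixed `TOPIC_CHUNK_WORD_COUNT`, so the chunking loop reduces to fixed-size slicing. A minimal sketch (the 300-word default is illustrative; the real constant lives in `reflector.utils.transcript_constants`):

```python
def chunk_words(words: list, chunk_size: int = 300) -> list:
    """Split `words` into consecutive fixed-size chunks; the last chunk may be short."""
    return [words[i : i + chunk_size] for i in range(0, len(words), chunk_size)]
```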
@daily_multitrack_pipeline.task(
@@ -1065,9 +1069,10 @@ async def extract_subjects(input: PipelineInput, ctx: Context) -> SubjectsResult
participant_name_to_id={},
)
# Deferred DB import: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections across forks
# Deferred imports: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections and LLM HTTP pools across forks
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.llm import LLM # noqa: PLC0415
async with fresh_db_connection():
transcript = await transcripts_controller.get_by_id(input.transcript_id)
@@ -1107,14 +1112,8 @@ async def extract_subjects(input: PipelineInput, ctx: Context) -> SubjectsResult
participant_names, participant_name_to_id=participant_name_to_id
)
max_subjects = compute_max_subjects(topics_result.duration_seconds)
ctx.log(
f"extract_subjects: duration={topics_result.duration_seconds:.0f}s, "
f"max_subjects={max_subjects}"
)
ctx.log("extract_subjects: calling LLM to extract subjects")
await builder.extract_subjects(max_subjects=max_subjects)
await builder.extract_subjects()
ctx.log(f"extract_subjects complete: {len(builder.subjects)} subjects")
@@ -1197,13 +1196,14 @@ async def generate_recap(input: PipelineInput, ctx: Context) -> RecapResult:
subjects_result = ctx.task_output(extract_subjects)
process_result = ctx.task_output(process_subjects)
# Deferred DB import: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections across forks
# Deferred imports: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections and LLM HTTP pools across forks
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptFinalLongSummary,
TranscriptFinalShortSummary,
transcripts_controller,
)
from reflector.llm import LLM # noqa: PLC0415
subject_summaries = process_result.subject_summaries
@@ -1292,12 +1292,13 @@ async def identify_action_items(
ctx.log("identify_action_items: no transcript text, returning empty")
return ActionItemsResult(action_items=ActionItemsResponse())
# Deferred DB import: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections across forks
# Deferred imports: Hatchet workers fork processes, fresh imports avoid
# sharing DB connections and LLM HTTP pools across forks
from reflector.db.transcripts import ( # noqa: PLC0415
TranscriptActionItems,
transcripts_controller,
)
from reflector.llm import LLM # noqa: PLC0415
# TODO: refactor SummaryBuilder methods into standalone functions
llm = LLM(settings=settings)
@@ -1434,6 +1435,10 @@ async def cleanup_consent(input: PipelineInput, ctx: Context) -> ConsentResult:
)
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
@@ -1582,6 +1587,10 @@ async def send_webhook(input: PipelineInput, ctx: Context) -> WebhookResult:
async with fresh_db_connection():
from reflector.db.rooms import rooms_controller # noqa: PLC0415
from reflector.utils.webhook import ( # noqa: PLC0415
fetch_transcript_webhook_payload,
send_webhook_request,
)
room = await rooms_controller.get_by_id(input.room_id)
if not room or not room.webhook_url:


@@ -15,8 +15,6 @@ import json
from datetime import timedelta
from pathlib import Path
import av
import httpx
from hatchet_sdk import Context
from pydantic import BaseModel
@@ -49,30 +47,9 @@ from reflector.hatchet.workflows.models import (
)
from reflector.logger import logger
from reflector.pipelines import topic_processing
from reflector.pipelines.transcription_helpers import transcribe_file_with_processor
from reflector.processors import AudioFileWriterProcessor
from reflector.processors.file_diarization import FileDiarizationInput
from reflector.processors.file_diarization_auto import FileDiarizationAutoProcessor
from reflector.processors.transcript_diarization_assembler import (
TranscriptDiarizationAssemblerInput,
TranscriptDiarizationAssemblerProcessor,
)
from reflector.processors.types import (
DiarizationSegment,
Word,
)
from reflector.processors.types import (
Transcript as TranscriptType,
)
from reflector.settings import settings
from reflector.storage import get_source_storage, get_transcripts_storage
from reflector.utils.audio_constants import WAVEFORM_SEGMENTS
from reflector.utils.audio_waveform import get_audio_waveform
from reflector.utils.webhook import (
fetch_transcript_webhook_payload,
send_webhook_request,
)
from reflector.zulip import post_transcript_notification
class FilePipelineInput(BaseModel):
@@ -158,6 +135,10 @@ async def extract_audio(input: FilePipelineInput, ctx: Context) -> ExtractAudioR
ctx.log(f"extract_audio: processing {audio_file}")
# Extract audio and write as MP3
import av # noqa: PLC0415
from reflector.processors import AudioFileWriterProcessor # noqa: PLC0415
duration_ms_container = [0.0]
async def capture_duration(d):
@@ -208,6 +189,8 @@ async def upload_audio(input: FilePipelineInput, ctx: Context) -> UploadAudioRes
extract_result = ctx.task_output(extract_audio)
audio_path = extract_result.audio_path
from reflector.storage import get_transcripts_storage # noqa: PLC0415
storage = get_transcripts_storage()
if not storage:
raise ValueError(
@@ -249,6 +232,10 @@ async def transcribe(input: FilePipelineInput, ctx: Context) -> TranscribeResult
raise ValueError(f"Transcript {input.transcript_id} not found")
source_language = transcript.source_language
from reflector.pipelines.transcription_helpers import ( # noqa: PLC0415
transcribe_file_with_processor,
)
result = await transcribe_file_with_processor(audio_url, source_language)
ctx.log(f"transcribe complete: {len(result.words)} words")
@@ -277,6 +264,13 @@ async def diarize(input: FilePipelineInput, ctx: Context) -> DiarizeResult:
upload_result = ctx.task_output(upload_audio)
audio_url = upload_result.audio_url
from reflector.processors.file_diarization import ( # noqa: PLC0415
FileDiarizationInput,
)
from reflector.processors.file_diarization_auto import ( # noqa: PLC0415
FileDiarizationAutoProcessor,
)
processor = FileDiarizationAutoProcessor()
input_data = FileDiarizationInput(audio_url=audio_url)
@@ -359,6 +353,18 @@ async def assemble_transcript(
transcribe_result = ctx.task_output(transcribe)
diarize_result = ctx.task_output(diarize)
from reflector.processors.transcript_diarization_assembler import ( # noqa: PLC0415
TranscriptDiarizationAssemblerInput,
TranscriptDiarizationAssemblerProcessor,
)
from reflector.processors.types import ( # noqa: PLC0415
DiarizationSegment,
Word,
)
from reflector.processors.types import ( # noqa: PLC0415
Transcript as TranscriptType,
)
words = [Word(**w) for w in transcribe_result.words]
transcript_data = TranscriptType(
words=words, translation=transcribe_result.translation
@@ -431,6 +437,17 @@ async def detect_topics(input: FilePipelineInput, ctx: Context) -> TopicsResult:
TranscriptTopic,
transcripts_controller,
)
from reflector.processors.transcript_diarization_assembler import ( # noqa: PLC0415
TranscriptDiarizationAssemblerInput,
TranscriptDiarizationAssemblerProcessor,
)
from reflector.processors.types import ( # noqa: PLC0415
DiarizationSegment,
Word,
)
from reflector.processors.types import ( # noqa: PLC0415
Transcript as TranscriptType,
)
words = [Word(**w) for w in transcribe_result.words]
transcript_data = TranscriptType(
@@ -671,6 +688,10 @@ async def cleanup_consent(input: FilePipelineInput, ctx: Context) -> ConsentResu
)
from reflector.db.recordings import recordings_controller # noqa: PLC0415
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
@@ -786,6 +807,7 @@ async def post_zulip(input: FilePipelineInput, ctx: Context) -> ZulipResult:
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.zulip import post_transcript_notification # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
@@ -815,6 +837,10 @@ async def send_webhook(input: FilePipelineInput, ctx: Context) -> WebhookResult:
async with fresh_db_connection():
from reflector.db.rooms import rooms_controller # noqa: PLC0415
from reflector.utils.webhook import ( # noqa: PLC0415
fetch_transcript_webhook_payload,
send_webhook_request,
)
room = await rooms_controller.get_by_id(input.room_id)
if not room or not room.webhook_url:
@@ -830,6 +856,8 @@ async def send_webhook(input: FilePipelineInput, ctx: Context) -> WebhookResult:
ctx.log(f"send_webhook skipped (could not build payload): {payload}")
return WebhookResult(webhook_sent=False, skipped=True)
import httpx # noqa: PLC0415
try:
response = await send_webhook_request(
url=room.webhook_url,


@@ -14,7 +14,6 @@ are not shared across forks, avoiding connection pooling issues.
from datetime import timedelta
import httpx
from hatchet_sdk import Context
from pydantic import BaseModel
@@ -41,24 +40,7 @@ from reflector.hatchet.workflows.models import (
ZulipResult,
)
from reflector.logger import logger
from reflector.pipelines.main_live_pipeline import (
PipelineMainTitle,
PipelineMainWaveform,
pipeline_convert_to_mp3,
pipeline_diarization,
pipeline_remove_upload,
pipeline_summaries,
pipeline_upload_mp3,
)
from reflector.pipelines.main_live_pipeline import (
cleanup_consent as _cleanup_consent,
)
from reflector.settings import settings
from reflector.utils.webhook import (
fetch_transcript_webhook_payload,
send_webhook_request,
)
from reflector.zulip import post_transcript_notification
class LivePostPipelineInput(BaseModel):
@@ -109,6 +91,9 @@ async def waveform(input: LivePostPipelineInput, ctx: Context) -> WaveformResult
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
PipelineMainWaveform,
)
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if not transcript:
@@ -133,6 +118,10 @@ async def generate_title(input: LivePostPipelineInput, ctx: Context) -> TitleRes
ctx.log(f"generate_title: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
PipelineMainTitle,
)
runner = PipelineMainTitle(transcript_id=input.transcript_id)
await runner.run()
@@ -153,6 +142,10 @@ async def convert_mp3(input: LivePostPipelineInput, ctx: Context) -> ConvertMp3R
ctx.log(f"convert_mp3: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
pipeline_convert_to_mp3,
)
await pipeline_convert_to_mp3(transcript_id=input.transcript_id)
ctx.log("convert_mp3 complete")
@@ -172,6 +165,10 @@ async def upload_mp3(input: LivePostPipelineInput, ctx: Context) -> UploadMp3Res
ctx.log(f"upload_mp3: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
pipeline_upload_mp3,
)
await pipeline_upload_mp3(transcript_id=input.transcript_id)
ctx.log("upload_mp3 complete")
@@ -193,6 +190,10 @@ async def remove_upload(
ctx.log(f"remove_upload: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
pipeline_remove_upload,
)
await pipeline_remove_upload(transcript_id=input.transcript_id)
ctx.log("remove_upload complete")
@@ -212,6 +213,10 @@ async def diarize(input: LivePostPipelineInput, ctx: Context) -> DiarizeResult:
ctx.log(f"diarize: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
pipeline_diarization,
)
await pipeline_diarization(transcript_id=input.transcript_id)
ctx.log("diarize complete")
@@ -231,6 +236,10 @@ async def cleanup_consent(input: LivePostPipelineInput, ctx: Context) -> Consent
ctx.log(f"cleanup_consent: transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
cleanup_consent as _cleanup_consent,
)
await _cleanup_consent(transcript_id=input.transcript_id)
ctx.log("cleanup_consent complete")
@@ -252,6 +261,10 @@ async def final_summaries(
ctx.log(f"final_summaries: starting for transcript_id={input.transcript_id}")
async with fresh_db_connection():
from reflector.pipelines.main_live_pipeline import ( # noqa: PLC0415
pipeline_summaries,
)
await pipeline_summaries(transcript_id=input.transcript_id)
ctx.log("final_summaries complete")
@@ -276,6 +289,7 @@ async def post_zulip(input: LivePostPipelineInput, ctx: Context) -> ZulipResult:
async with fresh_db_connection():
from reflector.db.transcripts import transcripts_controller # noqa: PLC0415
from reflector.zulip import post_transcript_notification # noqa: PLC0415
transcript = await transcripts_controller.get_by_id(input.transcript_id)
if transcript:
@@ -305,6 +319,10 @@ async def send_webhook(input: LivePostPipelineInput, ctx: Context) -> WebhookRes
async with fresh_db_connection():
from reflector.db.rooms import rooms_controller # noqa: PLC0415
from reflector.utils.webhook import ( # noqa: PLC0415
fetch_transcript_webhook_payload,
send_webhook_request,
)
room = await rooms_controller.get_by_id(input.room_id)
if not room or not room.webhook_url:
@@ -320,6 +338,8 @@ async def send_webhook(input: LivePostPipelineInput, ctx: Context) -> WebhookRes
ctx.log(f"send_webhook skipped (could not build payload): {payload}")
return WebhookResult(webhook_sent=False, skipped=True)
import httpx # noqa: PLC0415
try:
response = await send_webhook_request(
url=room.webhook_url,


@@ -102,7 +102,6 @@ class TopicsResult(BaseModel):
"""Result from detect_topics task."""
topics: list[TitleSummary]
duration_seconds: float = 0
class TitleResult(BaseModel):


@@ -13,8 +13,6 @@ from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import TIMEOUT_AUDIO
from reflector.hatchet.workflows.models import PadTrackResult
from reflector.logger import logger
from reflector.processors.audio_padding_auto import AudioPaddingAutoProcessor
from reflector.storage import get_source_storage, get_transcripts_storage
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_padding import extract_stream_start_time_from_container
@@ -53,6 +51,11 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
)
try:
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
# Source reads: use platform-specific credentials
source_storage = get_source_storage(input.source_platform)
source_url = await source_storage.get_file_url(
@@ -101,6 +104,10 @@ async def pad_track(input: PaddingInput, ctx: Context) -> PadTrackResult:
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
from reflector.processors.audio_padding_auto import ( # noqa: PLC0415
AudioPaddingAutoProcessor,
)
processor = AudioPaddingAutoProcessor()
result = await processor.pad_track(
track_url=source_url,


@@ -15,14 +15,12 @@ from pydantic import BaseModel
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import LLM_RATE_LIMIT_KEY, TIMEOUT_HEAVY
from reflector.hatchet.workflows.models import SubjectSummaryResult
from reflector.llm import LLM
from reflector.logger import logger
from reflector.processors.summary.prompts import (
DETAILED_SUBJECT_PROMPT_TEMPLATE,
PARAGRAPH_SUMMARY_PROMPT,
build_participant_instructions,
)
from reflector.settings import settings
class SubjectInput(BaseModel):
@@ -62,6 +60,11 @@ async def generate_detailed_summary(
subject_index=input.subject_index,
)
# Deferred imports: Hatchet workers fork processes, fresh imports ensure
# LLM HTTP connection pools aren't shared across forks
from reflector.llm import LLM # noqa: PLC0415
from reflector.settings import settings # noqa: PLC0415
llm = LLM(settings=settings)
participant_instructions = build_participant_instructions(input.participant_names)
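The deferred-import pattern these hunks apply can be sketched minimally (hypothetical task name; `json` stands in for a heavyweight client module):

```python
# Minimal sketch of the deferred-import pattern noted in the hunks above:
# importing inside the task body means each forked Hatchet worker builds its
# own module state (HTTP pools, DB connections) instead of inheriting objects
# that are unsafe to share across os.fork(). Names here are hypothetical.

def run_task(payload: str) -> str:
    # Deferred import: resolved when the forked child first runs the task,
    # not when the parent process imports this module.
    import json  # stands in for a real LLM/storage client module

    return json.dumps({"payload": payload})
```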


@@ -18,13 +18,9 @@ from pydantic import BaseModel
from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import LLM_RATE_LIMIT_KEY, TIMEOUT_MEDIUM
from reflector.hatchet.workflows.models import TopicChunkResult
from reflector.llm import LLM
from reflector.logger import logger
from reflector.processors.prompts import TOPIC_PROMPT
from reflector.processors.transcript_topic_detector import TopicResponse
from reflector.processors.types import Word
from reflector.settings import settings
from reflector.utils.text import clean_title
class TopicChunkInput(BaseModel):
@@ -68,6 +64,15 @@ async def detect_chunk_topic(input: TopicChunkInput, ctx: Context) -> TopicChunk
text_length=len(input.chunk_text),
)
# Deferred imports: Hatchet workers fork processes, fresh imports avoid
# sharing LLM HTTP connection pools across forks
from reflector.llm import LLM # noqa: PLC0415
from reflector.processors.transcript_topic_detector import ( # noqa: PLC0415
TopicResponse,
)
from reflector.settings import settings # noqa: PLC0415
from reflector.utils.text import clean_title # noqa: PLC0415
llm = LLM(settings=settings, temperature=0.9)
prompt = TOPIC_PROMPT.format(text=input.chunk_text)


@@ -9,9 +9,9 @@ because Hatchet workflow DAGs are defined statically, but the number of tracks v
at runtime. Child workflow spawning via `aio_run()` + `asyncio.gather()` is the
standard pattern for dynamic fan-out. See `process_tracks` in daily_multitrack_pipeline.py.
Note: DB imports (reflector.db.*) are kept inline (deferred) intentionally.
Note: This file uses deferred imports (inside tasks) intentionally.
Hatchet workers run in forked processes; fresh imports per task ensure
DB connections are not shared across forks.
storage/DB connections are not shared across forks.
"""
from datetime import timedelta
@@ -24,9 +24,6 @@ from reflector.hatchet.client import HatchetClientManager
from reflector.hatchet.constants import TIMEOUT_AUDIO, TIMEOUT_HEAVY
from reflector.hatchet.workflows.models import PadTrackResult, TranscribeTrackResult
from reflector.logger import logger
from reflector.pipelines.transcription_helpers import transcribe_file_with_processor
from reflector.processors.audio_padding_auto import AudioPaddingAutoProcessor
from reflector.storage import get_source_storage, get_transcripts_storage
from reflector.utils.audio_constants import PRESIGNED_URL_EXPIRATION_SECONDS
from reflector.utils.audio_padding import extract_stream_start_time_from_container
@@ -75,6 +72,11 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
)
try:
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
# Source reads: use platform-specific credentials
source_storage = get_source_storage(input.source_platform)
source_url = await source_storage.get_file_url(
@@ -118,6 +120,10 @@ async def pad_track(input: TrackInput, ctx: Context) -> PadTrackResult:
expires_in=PRESIGNED_URL_EXPIRATION_SECONDS,
)
from reflector.processors.audio_padding_auto import ( # noqa: PLC0415
AudioPaddingAutoProcessor,
)
processor = AudioPaddingAutoProcessor()
result = await processor.pad_track(
track_url=source_url,
@@ -173,6 +179,11 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
raise ValueError("Missing padded_key from pad_track")
# Presign URL on demand (avoids stale URLs on workflow replay)
from reflector.storage import ( # noqa: PLC0415
get_source_storage,
get_transcripts_storage,
)
# If bucket_name is set, file is still in the platform's source bucket (no padding applied).
# If bucket_name is None, padded file was written to our transcript storage.
if bucket_name:
@@ -187,6 +198,10 @@ async def transcribe_track(input: TrackInput, ctx: Context) -> TranscribeTrackRe
bucket=bucket_name,
)
from reflector.pipelines.transcription_helpers import ( # noqa: PLC0415
transcribe_file_with_processor,
)
transcript = await transcribe_file_with_processor(audio_url, input.language)
# Tag all words with speaker index
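The dynamic fan-out described in this file's docstring can be sketched as follows (hypothetical names; `asyncio.sleep(0)` stands in for the child workflow's `aio_run()`):

```python
import asyncio

# Sketch of the dynamic fan-out pattern from the module docstring above: the
# track count is only known at runtime, so each track spawns a child coroutine
# and asyncio.gather() collects all results in order.

async def process_track(index: int) -> str:
    await asyncio.sleep(0)  # placeholder for the child workflow's aio_run()
    return f"track-{index}"

async def process_tracks(n_tracks: int) -> list[str]:
    return await asyncio.gather(*(process_track(i) for i in range(n_tracks)))
```

`asyncio.gather()` preserves input order, so results line up with track indices even though the children run concurrently.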


@@ -38,7 +38,6 @@ from reflector.db.transcripts import (
TranscriptWaveform,
transcripts_controller,
)
from reflector.hatchet.client import HatchetClientManager
from reflector.logger import logger
from reflector.pipelines.runner import PipelineMessage, PipelineRunner
from reflector.processors import (
@@ -815,6 +814,8 @@ async def pipeline_post(*, transcript_id: str, room_id: str | None = None):
"""
Run the post pipeline via Hatchet.
"""
from reflector.hatchet.client import HatchetClientManager # noqa: PLC0415
await HatchetClientManager.start_workflow(
"LivePostProcessingPipeline",
{


@@ -18,7 +18,7 @@ from reflector.processors import (
)
from reflector.processors.types import TitleSummary
from reflector.processors.types import Transcript as TranscriptType
from reflector.utils.transcript_constants import compute_topic_chunk_size
from reflector.utils.transcript_constants import TOPIC_CHUNK_WORD_COUNT
class EmptyPipeline:
@@ -39,10 +39,7 @@ async def detect_topics(
on_topic_callback: Callable,
empty_pipeline: EmptyPipeline,
) -> list[TitleSummary]:
duration_seconds = (
transcript.words[-1].end - transcript.words[0].start if transcript.words else 0
)
chunk_size = compute_topic_chunk_size(duration_seconds, len(transcript.words))
chunk_size = TOPIC_CHUNK_WORD_COUNT
topics: list[TitleSummary] = []
async def on_topic(topic: TitleSummary):


@@ -10,7 +10,6 @@ import os
import tempfile
import av
import requests
from reflector.logger import logger
from reflector.processors.audio_padding import AudioPaddingProcessor, PaddingResponse
@@ -66,6 +65,8 @@ class AudioPaddingPyavProcessor(AudioPaddingProcessor):
track_index: int,
) -> PaddingResponse:
"""Blocking padding work: download, pad with PyAV, upload."""
import requests
log = logger.bind(track_index=track_index, padding_seconds=start_time_seconds)
temp_dir = tempfile.mkdtemp()
input_path = None


@@ -34,8 +34,7 @@ class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
self.transcript_url = settings.TRANSCRIPT_URL + "/v1"
self.timeout = settings.TRANSCRIPT_TIMEOUT
self.modal_api_key = modal_api_key
print(self.timeout, self.modal_api_key)
async def _transcript(self, data: AudioFile):
async with AsyncOpenAI(
base_url=self.transcript_url,


@@ -43,8 +43,7 @@ DETAILED_SUBJECT_PROMPT_TEMPLATE = dedent(
include any deadlines or timeframes discussed for completion or follow-up.
- Mention unresolved issues or topics needing further discussion, aiding in
planning future meetings or follow-up actions.
- Be specific and cite participant names when attributing statements or actions.
- Do not include topics unrelated to {subject}.
- Do not include topic unrelated to {subject}.
# OUTPUT
Your summary should be clear, concise, and structured, covering all major
@@ -59,7 +58,6 @@ PARAGRAPH_SUMMARY_PROMPT = dedent(
"""
Summarize the mentioned topic in 1 paragraph.
It will be integrated into the final summary, so just for this topic.
Preserve key decisions and action items. Do not introduce new information.
"""
).strip()


@@ -48,24 +48,17 @@ TRANSCRIPTION_TYPE_PROMPT = dedent(
"""
).strip()
_DEFAULT_MAX_SUBJECTS = 6
def build_subjects_prompt(max_subjects: int = _DEFAULT_MAX_SUBJECTS) -> str:
"""Build subjects extraction prompt with a dynamic subject cap."""
subject_word = "subject" if max_subjects == 1 else "subjects"
return dedent(
f"""
What are the main / high level topics of the meeting.
Do not include direct quotes or unnecessary details.
Be concise and focused on the main ideas.
A subject briefly mentioned should not be included.
There should be maximum {max_subjects} {subject_word}.
Do not write complete narrative sentences for the subject,
you must write a concise subject using noun phrases.
"""
).strip()
SUBJECTS_PROMPT = dedent(
"""
What are the main / high level topic of the meeting.
Do not include direct quotes or unnecessary details.
Be concise and focused on the main ideas.
A subject briefly mentioned should not be included.
There should be maximum 6 subjects.
Do not write complete narrative sentences for the subject,
you must write a concise subject using noun phrases.
"""
).strip()
ACTION_ITEMS_PROMPT = dedent(
"""
@@ -152,7 +145,7 @@ class SubjectsResponse(BaseModel):
"""Pydantic model for extracted subjects/topics"""
subjects: list[str] = Field(
description="List of main subjects/topics discussed",
description="List of main subjects/topics discussed, maximum 6 items",
)
@@ -352,14 +345,11 @@ class SummaryBuilder:
# Summary
# ----------------------------------------------------------------------------
async def extract_subjects(self, max_subjects: int = _DEFAULT_MAX_SUBJECTS) -> None:
async def extract_subjects(self) -> None:
"""Extract main subjects/topics from the transcript."""
self.logger.info(
"--- extract main subjects using TreeSummarize",
max_subjects=max_subjects,
)
self.logger.info("--- extract main subjects using TreeSummarize")
subjects_prompt = build_subjects_prompt(max_subjects)
subjects_prompt = SUBJECTS_PROMPT
try:
response = await self._get_structured_response(
@@ -368,7 +358,7 @@ class SummaryBuilder:
tone_name="Meeting assistant that talk only as list item",
)
self.subjects = response.subjects[:max_subjects]
self.subjects = response.subjects
self.logger.info(f"Extracted subjects: {self.subjects}")
except Exception as e:


@@ -333,9 +333,7 @@ if __name__ == "__main__":
if not s3_urls:
parser.error("At least one S3 URL required for multitrack processing")
from reflector.tools.cli_multitrack import (
process_multitrack_cli, # circular import
)
from reflector.tools.cli_multitrack import process_multitrack_cli
asyncio.run(
process_multitrack_cli(


@@ -5,7 +5,6 @@ This tools help to either create a pipeline from command line,
or read a yaml description of a pipeline and run it.
"""
import importlib
import json
from reflector.logger import logger
@@ -38,6 +37,8 @@ def get_jsonl(filename, filter_processor_name=None):
def get_processor(name):
import importlib
module_name = f"reflector.processors.{name}"
class_name = snake_to_camel(name) + "Processor"
module = importlib.import_module(module_name)


@@ -4,67 +4,5 @@ Shared transcript processing constants.
Used by both Hatchet workflows and Celery pipelines for consistent processing.
"""
import math
# Topic detection: legacy static chunk size, used as fallback
# Topic detection: number of words per chunk for topic extraction
TOPIC_CHUNK_WORD_COUNT = 300
# Dynamic chunking curve parameters
# Formula: target_topics = _COEFFICIENT * duration_minutes ^ _EXPONENT
# Derived from anchors: 5 min -> 3 topics, 180 min -> 40 topics
_TOPIC_CURVE_COEFFICIENT = 0.833
_TOPIC_CURVE_EXPONENT = 0.723
_MIN_TOPICS = 2
_MAX_TOPICS = 50
_MIN_CHUNK_WORDS = 375
_MAX_CHUNK_WORDS = 1500
def compute_topic_chunk_size(duration_seconds: float, total_words: int) -> int:
"""Calculate optimal chunk size for topic detection based on recording duration.
Uses a power-curve function to scale topic count sublinearly with duration,
producing fewer LLM calls for longer recordings while maintaining topic quality.
Returns the number of words per chunk.
"""
if total_words <= 0 or duration_seconds <= 0:
return _MIN_CHUNK_WORDS
duration_minutes = duration_seconds / 60.0
target_topics = _TOPIC_CURVE_COEFFICIENT * math.pow(
duration_minutes, _TOPIC_CURVE_EXPONENT
)
target_topics = int(round(max(_MIN_TOPICS, min(_MAX_TOPICS, target_topics))))
chunk_size = total_words // target_topics
chunk_size = max(_MIN_CHUNK_WORDS, min(_MAX_CHUNK_WORDS, chunk_size))
return chunk_size
# Subject extraction: scale max subjects with recording duration
# Short calls get fewer subjects to avoid over-analyzing trivial content
_SUBJECT_DURATION_THRESHOLDS = [
(5 * 60, 1), # ≤ 5 min → 1 subject
(15 * 60, 2), # ≤ 15 min → 2 subjects
(30 * 60, 3), # ≤ 30 min → 3 subjects
(45 * 60, 4), # ≤ 45 min → 4 subjects
(60 * 60, 5), # ≤ 60 min → 5 subjects
]
_MAX_SUBJECTS = 6
def compute_max_subjects(duration_seconds: float) -> int:
"""Calculate maximum number of subjects to extract based on recording duration.
Uses a step function: short recordings get fewer subjects to avoid
generating excessive detail for trivial content.
"""
if duration_seconds <= 0:
return 1
for threshold, max_subjects in _SUBJECT_DURATION_THRESHOLDS:
if duration_seconds <= threshold:
return max_subjects
return _MAX_SUBJECTS
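A hand trace of the power curve above, using the same constants (0.833, 0.723) and word-count bounds from `compute_topic_chunk_size` — a worked check of the removed code, not new behavior:

```python
import math

# Worked example of the power-curve chunking above: a 30-minute recording
# with 4500 words (roughly 150 words/minute).
duration_minutes = 30
total_words = 4500

target_topics = 0.833 * math.pow(duration_minutes, 0.723)   # ~9.74
target_topics = int(round(max(2, min(50, target_topics))))  # clamp + round -> 10
chunk_size = total_words // target_topics                   # 450 words/chunk
chunk_size = max(375, min(1500, chunk_size))                # already in bounds
```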


@@ -15,7 +15,6 @@ from reflector.dailyco_api import (
from reflector.db.meetings import meetings_controller
from reflector.logger import logger as _logger
from reflector.settings import settings
from reflector.storage import get_source_storage
from reflector.video_platforms.factory import create_platform_client
from reflector.worker.process import (
poll_daily_room_presence_task,
@@ -220,30 +219,6 @@ async def _handle_recording_ready(event: RecordingReadyEvent):
track_keys = [t.s3Key for t in tracks if t.type == "audio"]
# Delete video tracks when store_video is disabled (same pattern as LiveKit).
# Only delete if we have a meeting AND store_video is explicitly false.
# If no meeting found, leave files alone (can't confirm user intent).
video_track_keys = [t.s3Key for t in tracks if t.type == "video"]
if video_track_keys:
meeting = await meetings_controller.get_by_room_name(room_name)
if meeting is not None and not meeting.store_video:
storage = get_source_storage("daily")
for video_key in video_track_keys:
try:
await storage.delete_file(video_key)
logger.info(
"Deleted video track from raw-tracks recording",
s3_key=video_key,
room_name=room_name,
)
except Exception as e:
# Non-critical — pipeline filters these out anyway
logger.warning(
"Failed to delete video track from raw-tracks recording",
s3_key=video_key,
error=str(e),
)
logger.info(
"Raw-tracks recording queuing processing",
recording_id=recording_id,


@@ -17,7 +17,6 @@ from reflector.db.meetings import meetings_controller
from reflector.livekit_api.webhooks import create_webhook_receiver, verify_webhook
from reflector.logger import logger as _logger
from reflector.settings import settings
from reflector.storage import get_source_storage
router = APIRouter()
@@ -190,6 +189,8 @@ async def _handle_egress_ended(event):
filename = file_result.filename
if filename and filename.endswith(".webm"):
try:
from reflector.storage import get_source_storage # noqa: PLC0415
storage = get_source_storage("livekit")
await storage.delete_file(filename)
logger.info(


@@ -1,6 +1,4 @@
import logging
import re
import uuid
from datetime import datetime, timedelta, timezone
from enum import Enum
from typing import Annotated, Any, Literal, Optional
@@ -16,7 +14,7 @@ from reflector.db import get_database
from reflector.db.calendar_events import calendar_events_controller
from reflector.db.meetings import meetings_controller
from reflector.db.rooms import rooms_controller
from reflector.redis_cache import RedisAsyncLock, get_async_redis_client
from reflector.redis_cache import RedisAsyncLock
from reflector.schemas.platform import Platform
from reflector.services.ics_sync import ics_sync_service
from reflector.utils.url import add_query_param
@@ -47,7 +45,6 @@ class Room(BaseModel):
platform: Platform
skip_consent: bool = False
email_transcript_to: str | None = None
store_video: bool = False
class RoomDetails(Room):
@@ -78,7 +75,6 @@ class Meeting(BaseModel):
platform: Platform
daily_composed_video_s3_key: str | None = None
daily_composed_video_duration: int | None = None
store_video: bool = False
class CreateRoom(BaseModel):
@@ -99,7 +95,6 @@ class CreateRoom(BaseModel):
platform: Platform
skip_consent: bool = False
email_transcript_to: str | None = None
store_video: bool = False
class UpdateRoom(BaseModel):
@@ -120,7 +115,6 @@ class UpdateRoom(BaseModel):
platform: Optional[Platform] = None
skip_consent: Optional[bool] = None
email_transcript_to: Optional[str] = None
store_video: Optional[bool] = None
class CreateRoomMeeting(BaseModel):
@@ -263,7 +257,6 @@ async def rooms_create(
platform=room.platform,
skip_consent=room.skip_consent,
email_transcript_to=room.email_transcript_to,
store_video=room.store_video,
)
@@ -332,7 +325,6 @@ async def rooms_create_meeting(
and meeting.recording_type == room.recording_type
and meeting.recording_trigger == room.recording_trigger
and meeting.platform == room.platform
and meeting.store_video == room.store_video
)
if not settings_match:
logger.info(
@@ -608,6 +600,9 @@ async def rooms_join_meeting(
meeting.room_url = add_query_param(meeting.room_url, "t", token)
elif meeting.platform == "livekit":
import re
import uuid
client = create_platform_client(meeting.platform)
# Identity must be unique per participant to avoid S3 key collisions.
# Format: {readable_name}-{short_uuid} ensures uniqueness even for same names.
@@ -630,6 +625,8 @@ async def rooms_join_meeting(
# Store identity → Reflector user_id mapping for the pipeline
# (so TranscriptParticipant.user_id can be set correctly)
if user_id:
from reflector.redis_cache import get_async_redis_client # noqa: PLC0415
redis_client = await get_async_redis_client()
mapping_key = f"livekit:participant_map:{meeting.room_name}"
await redis_client.hset(mapping_key, participant_identity, user_id)
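The identity scheme described in the comment above (`{readable_name}-{short_uuid}`) can be sketched like this — a hypothetical helper, not the code from this hunk:

```python
import re
import uuid

# Sketch of the participant-identity scheme noted above: a readable slug plus
# a short UUID suffix keeps identities (and thus S3 keys) unique even when two
# attendees share the same display name. Helper name is hypothetical.

def make_identity(display_name: str) -> str:
    readable = re.sub(r"[^a-z0-9]+", "-", display_name.lower()).strip("-")
    short_uuid = uuid.uuid4().hex[:8]
    return f"{readable}-{short_uuid}"
```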


@@ -11,7 +11,6 @@ from reflector.events import subscribers_shutdown
from reflector.logger import logger
from reflector.pipelines.runner import PipelineRunner
from reflector.settings import settings
from reflector.webrtc_ports import resolve_webrtc_host, rewrite_sdp_host
sessions = []
router = APIRouter()
@@ -129,6 +128,8 @@ async def rtc_offer_base(
# Rewrite ICE candidate IPs when running behind Docker bridge networking
if settings.WEBRTC_HOST:
from reflector.webrtc_ports import resolve_webrtc_host, rewrite_sdp_host
host_ip = resolve_webrtc_host(settings.WEBRTC_HOST)
sdp = rewrite_sdp_host(sdp, host_ip)


@@ -116,7 +116,6 @@ class GetTranscriptMinimal(BaseModel):
change_seq: int | None = None
has_cloud_video: bool = False
cloud_video_duration: int | None = None
speaker_count: int = 0
class TranscriptParticipantWithEmail(TranscriptParticipant):


@@ -4,7 +4,6 @@ from fastapi import APIRouter, Depends, HTTPException, Request
import reflector.auth as auth
from reflector.db.transcripts import transcripts_controller
from reflector.pipelines.main_live_pipeline import PipelineMainLive
from .rtc_offer import RtcOffer, rtc_offer_base
@@ -29,6 +28,8 @@ async def transcript_record_webrtc(
raise HTTPException(status_code=400, detail="Transcript is locked")
# create a pipeline runner
from reflector.pipelines.main_live_pipeline import PipelineMainLive # noqa: PLC0415
pipeline_runner = PipelineMainLive(transcript_id=transcript_id)
# FIXME do not allow multiple recording at the same time


@@ -1,14 +1,11 @@
import logging
from typing import Annotated, Optional
import httpx
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
import reflector.auth as auth
from reflector.zulip import get_zulip_streams, get_zulip_topics
logger = logging.getLogger(__name__)
router = APIRouter()
@@ -26,18 +23,13 @@ async def zulip_get_streams(
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
) -> list[Stream]:
"""
Get all Zulip streams. Returns [] if the upstream Zulip API is unreachable
or the server credentials are invalid — the client treats Zulip as an
optional integration and renders gracefully without a hard error.
Get all Zulip streams.
"""
if not user:
raise HTTPException(status_code=403, detail="Authentication required")
try:
return await get_zulip_streams()
except (httpx.HTTPStatusError, httpx.RequestError, Exception) as exc:
logger.warning("zulip get_streams failed, returning []: %s", exc)
return []
streams = await get_zulip_streams()
return streams
@router.get("/zulip/streams/{stream_id}/topics")
@@ -46,14 +38,10 @@ async def zulip_get_topics(
user: Annotated[Optional[auth.UserInfo], Depends(auth.current_user_optional)],
) -> list[Topic]:
"""
Get all topics for a specific Zulip stream. Returns [] on upstream failure
for the same reason as /zulip/streams above.
Get all topics for a specific Zulip stream.
"""
if not user:
raise HTTPException(status_code=403, detail="Authentication required")
try:
return await get_zulip_topics(stream_id)
except (httpx.HTTPStatusError, httpx.RequestError, Exception) as exc:
logger.warning("zulip get_topics(%s) failed, returning []: %s", stream_id, exc)
return []
topics = await get_zulip_topics(stream_id)
return topics


@@ -11,8 +11,6 @@ This allows running the server in Docker with bridge networking
import asyncio
import socket
import aioice.ice
from reflector.logger import logger
@@ -38,7 +36,9 @@ def patch_aioice_port_range(min_port: int, max_port: int) -> None:
Works by temporarily wrapping loop.create_datagram_endpoint() during
aioice's get_component_candidates() to intercept bind(addr, 0) calls.
"""
_original = aioice.ice.Connection.get_component_candidates
import aioice.ice as _ice
_original = _ice.Connection.get_component_candidates
_state = {"next_port": min_port}
async def _patched_get_component_candidates(self, component, addresses, timeout=5):
@@ -78,7 +78,7 @@ def patch_aioice_port_range(min_port: int, max_port: int) -> None:
finally:
loop.create_datagram_endpoint = _orig_create
aioice.ice.Connection.get_component_candidates = _patched_get_component_candidates
_ice.Connection.get_component_candidates = _patched_get_component_candidates
logger.info(
"aioice patched for WebRTC port range",
min_port=min_port,
@@ -102,6 +102,8 @@ def rewrite_sdp_host(sdp: str, target_ip: str) -> str:
Replace container-internal IPs in SDP with target_ip so that
ICE candidates advertise a routable address.
"""
import aioice.ice
container_ips = aioice.ice.get_host_addresses(use_ipv4=True, use_ipv6=False)
for ip in container_ips:
if ip != "127.0.0.1" and ip != target_ip:
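The wrap-and-restore interception described in this file's docstring can be sketched in miniature (hypothetical classes; the real patch wraps `loop.create_datagram_endpoint()` during aioice's candidate gathering):

```python
# Sketch of the intercept pattern above: temporarily replace a factory,
# rewrite a bind-to-port-0 ("any port") request into the next port from the
# configured range, and restore the original in a finally block.

class Endpoint:
    def create(self, port: int) -> int:
        return port  # stands in for loop.create_datagram_endpoint()

def bind_in_range(endpoint: Endpoint, min_port: int, max_port: int) -> int:
    original = endpoint.create
    state = {"next_port": min_port}

    def patched(port: int) -> int:
        if port == 0:  # intercept bind(addr, 0)
            port = state["next_port"]
            state["next_port"] = min(port + 1, max_port)
        return original(port)

    endpoint.create = patched
    try:
        return endpoint.create(0)
    finally:
        endpoint.create = original  # always restore the unpatched factory
```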


@@ -30,8 +30,6 @@ def build_beat_schedule(
whereby_api_key=None,
aws_process_recording_queue_url=None,
daily_api_key=None,
livekit_api_key=None,
livekit_url=None,
public_mode=False,
public_data_retention_days=None,
healthcheck_url=None,
@@ -85,7 +83,7 @@ def build_beat_schedule(
else:
logger.info("Daily.co beat tasks disabled (no DAILY_API_KEY)")
_livekit_enabled = bool(livekit_api_key and livekit_url)
_livekit_enabled = bool(settings.LIVEKIT_API_KEY and settings.LIVEKIT_URL)
if _livekit_enabled:
beat_schedule["process_livekit_ended_meetings"] = {
"task": "reflector.worker.process.process_livekit_ended_meetings",
@@ -177,8 +175,6 @@ else:
whereby_api_key=settings.WHEREBY_API_KEY,
aws_process_recording_queue_url=settings.AWS_PROCESS_RECORDING_QUEUE_URL,
daily_api_key=settings.DAILY_API_KEY,
livekit_api_key=settings.LIVEKIT_API_KEY,
livekit_url=settings.LIVEKIT_URL,
public_mode=settings.PUBLIC_MODE,
public_data_retention_days=settings.PUBLIC_DATA_RETENTION_DAYS,
healthcheck_url=settings.HEALTHCHECK_URL,


@@ -1,4 +1,3 @@
import asyncio
import json
import os
import re
@@ -27,26 +26,16 @@ from reflector.db.transcripts import (
transcripts_controller,
)
from reflector.hatchet.client import HatchetClientManager
from reflector.pipelines.topic_processing import EmptyPipeline
from reflector.processors.audio_file_writer import AudioFileWriterProcessor
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.redis_cache import RedisAsyncLock
from reflector.settings import settings
from reflector.storage import get_source_storage, get_transcripts_storage
from reflector.storage import get_transcripts_storage
from reflector.utils.daily import (
DailyRoomName,
extract_base_room_name,
filter_cam_audio_tracks,
recording_lock_key,
)
from reflector.utils.livekit import (
extract_livekit_base_room_name,
filter_audio_tracks,
parse_livekit_track_filepath,
)
from reflector.utils.livekit import (
recording_lock_key as livekit_recording_lock_key,
)
from reflector.utils.string import NonEmptyString
from reflector.video_platforms.factory import create_platform_client
from reflector.video_platforms.whereby_utils import (
@@ -573,15 +562,6 @@ async def store_cloud_recording(
)
return False
if not meeting.store_video:
logger.info(
f"Cloud recording ({source}): skipped, store_video=false",
recording_id=recording_id,
room_name=room_name,
meeting_id=meeting.id,
)
return False
success = await meetings_controller.set_cloud_recording_if_missing(
meeting_id=meeting.id,
s3_key=s3_key,
@@ -943,6 +923,11 @@ async def convert_audio_and_waveform(transcript) -> None:
transcript_id=transcript.id,
)
from reflector.pipelines.topic_processing import EmptyPipeline # noqa: PLC0415
from reflector.processors.audio_file_writer import (
AudioFileWriterProcessor, # noqa: PLC0415
)
upload_path = transcript.data_path / "upload.webm"
mp3_path = transcript.audio_mp3_filename
@@ -1221,13 +1206,17 @@ async def process_livekit_multitrack(
Tracks are discovered via S3 listing (source of truth), not webhooks.
Called from room_finished webhook (fast-path) or beat task (fallback).
"""
from reflector.utils.livekit import ( # noqa: PLC0415
recording_lock_key,
)
logger.info(
"Processing LiveKit multitrack recording",
room_name=room_name,
meeting_id=meeting_id,
)
lock_key = livekit_recording_lock_key(room_name)
lock_key = recording_lock_key(room_name)
async with RedisAsyncLock(
key=lock_key,
timeout=600,
@@ -1254,10 +1243,19 @@ async def _process_livekit_multitrack_inner(
# 1. Discover tracks by listing S3 prefix.
# Wait briefly for egress files to finish flushing to S3 — the room_finished
# webhook fires after empty_timeout, but egress finalization may still be in progress.
import asyncio as _asyncio # noqa: PLC0415
from reflector.storage import get_source_storage # noqa: PLC0415
from reflector.utils.livekit import ( # noqa: PLC0415
extract_livekit_base_room_name,
filter_audio_tracks,
parse_livekit_track_filepath,
)
EGRESS_FLUSH_DELAY = 10 # seconds — egress typically flushes within a few seconds
EGRESS_RETRY_DELAY = 30 # seconds — retry if first listing finds nothing
await asyncio.sleep(EGRESS_FLUSH_DELAY)
await _asyncio.sleep(EGRESS_FLUSH_DELAY)
storage = get_source_storage("livekit")
s3_prefix = f"livekit/{room_name}/"
@@ -1273,7 +1271,7 @@ async def _process_livekit_multitrack_inner(
room_name=room_name,
retry_delay=EGRESS_RETRY_DELAY,
)
await asyncio.sleep(EGRESS_RETRY_DELAY)
await _asyncio.sleep(EGRESS_RETRY_DELAY)
all_keys = await storage.list_objects(prefix=s3_prefix)
audio_keys = filter_audio_tracks(all_keys) if all_keys else []
@@ -1292,7 +1290,7 @@ async def _process_livekit_multitrack_inner(
expected=expected_audio,
found=len(audio_keys),
)
await asyncio.sleep(EGRESS_RETRY_DELAY)
await _asyncio.sleep(EGRESS_RETRY_DELAY)
all_keys = await storage.list_objects(prefix=s3_prefix)
audio_keys = filter_audio_tracks(all_keys) if all_keys else []


@@ -32,10 +32,6 @@ DAILY_TASKS = {
"trigger_daily_reconciliation",
"reprocess_failed_daily_recordings",
}
LIVEKIT_TASKS = {
"process_livekit_ended_meetings",
"reprocess_failed_livekit_recordings",
}
PLATFORM_TASKS = {
"process_meetings",
"sync_all_ics_calendars",
@@ -51,7 +47,6 @@ class TestNoPlatformConfigured:
task_names = set(schedule.keys())
assert not task_names & WHEREBY_TASKS
assert not task_names & DAILY_TASKS
assert not task_names & LIVEKIT_TASKS
assert not task_names & PLATFORM_TASKS
def test_only_healthcheck_disabled_warning(self):
@@ -77,7 +72,6 @@ class TestWherebyOnly:
assert WHEREBY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & DAILY_TASKS
assert not task_names & LIVEKIT_TASKS
def test_whereby_sqs_url(self):
schedule = build_beat_schedule(
@@ -87,7 +81,6 @@ class TestWherebyOnly:
assert WHEREBY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & DAILY_TASKS
assert not task_names & LIVEKIT_TASKS
def test_whereby_task_count(self):
schedule = build_beat_schedule(whereby_api_key="test-key")
@@ -104,7 +97,6 @@ class TestDailyOnly:
assert DAILY_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & WHEREBY_TASKS
assert not task_names & LIVEKIT_TASKS
def test_daily_task_count(self):
schedule = build_beat_schedule(daily_api_key="test-daily-key")
@@ -112,33 +104,6 @@ class TestDailyOnly:
assert len(schedule) == 6
class TestLiveKitOnly:
"""When only LiveKit is configured."""
def test_livekit_keys(self):
schedule = build_beat_schedule(
livekit_api_key="test-lk-key", livekit_url="ws://livekit:7880"
)
task_names = set(schedule.keys())
assert LIVEKIT_TASKS <= task_names
assert PLATFORM_TASKS <= task_names
assert not task_names & WHEREBY_TASKS
assert not task_names & DAILY_TASKS
def test_livekit_task_count(self):
schedule = build_beat_schedule(
livekit_api_key="test-lk-key", livekit_url="ws://livekit:7880"
)
# LiveKit (2) + Platform (3) = 5
assert len(schedule) == 5
def test_livekit_needs_both_key_and_url(self):
schedule_key_only = build_beat_schedule(livekit_api_key="test-lk-key")
schedule_url_only = build_beat_schedule(livekit_url="ws://livekit:7880")
assert not set(schedule_key_only.keys()) & LIVEKIT_TASKS
assert not set(schedule_url_only.keys()) & LIVEKIT_TASKS
class TestBothPlatforms:
"""When both Whereby and Daily.co are configured."""


@@ -1,99 +0,0 @@
import math
import pytest
from reflector.utils.transcript_constants import (
compute_max_subjects,
compute_topic_chunk_size,
)
@pytest.mark.parametrize(
"duration_min,total_words,expected_topics_range",
[
(5, 750, (1, 3)),
(10, 1500, (3, 6)),
(30, 4500, (8, 14)),
(60, 9000, (14, 22)),
(120, 18000, (24, 35)),
(180, 27000, (30, 42)),
],
)
def test_topic_count_in_expected_range(
duration_min, total_words, expected_topics_range
):
chunk_size = compute_topic_chunk_size(duration_min * 60, total_words)
num_topics = math.ceil(total_words / chunk_size)
assert expected_topics_range[0] <= num_topics <= expected_topics_range[1], (
f"For {duration_min}min/{total_words}words: got {num_topics} topics "
f"(chunk_size={chunk_size}), expected {expected_topics_range[0]}-{expected_topics_range[1]}"
)
def test_chunk_size_within_bounds():
for duration_min in [5, 10, 30, 60, 120, 180]:
chunk_size = compute_topic_chunk_size(duration_min * 60, duration_min * 150)
assert (
375 <= chunk_size <= 1500
), f"For {duration_min}min: chunk_size={chunk_size} out of bounds [375, 1500]"
def test_zero_duration_falls_back():
assert compute_topic_chunk_size(0, 1000) == 375
def test_zero_words_falls_back():
assert compute_topic_chunk_size(600, 0) == 375
def test_negative_inputs_fall_back():
assert compute_topic_chunk_size(-10, 1000) == 375
assert compute_topic_chunk_size(600, -5) == 375
def test_very_short_transcript():
"""A 1-minute call with very few words should still produce at least 1 topic."""
chunk_size = compute_topic_chunk_size(60, 100)
# chunk_size is at least 375, so 100 words = 1 chunk
assert chunk_size >= 375
def test_very_long_transcript():
"""A 4-hour call should cap at max topics."""
chunk_size = compute_topic_chunk_size(4 * 3600, 36000)
num_topics = math.ceil(36000 / chunk_size)
assert num_topics <= 50
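The expected ranges above constrain the shape of `compute_topic_chunk_size` fairly tightly: a fallback of 375 for invalid input, a clamp to [375, 1500] words, and a topic count that grows sublinearly with duration. A minimal sketch that satisfies every test in this file, assuming a `duration_min ** 0.7` target-topic curve (the exponent is a guess that happens to fit the ranges; the actual implementation may use a different formula):

```python
import math

MIN_CHUNK_WORDS = 375   # lower clamp, from the bounds test above
MAX_CHUNK_WORDS = 1500  # upper clamp, from the bounds test above


def compute_topic_chunk_size(duration_seconds, total_words):
    """Sketch: choose a per-topic chunk size so topic count grows
    sublinearly with meeting duration, clamped to sane word bounds."""
    if duration_seconds <= 0 or total_words <= 0:
        return MIN_CHUNK_WORDS
    duration_min = duration_seconds / 60
    # Assumed sublinear growth: ~18 topics at 60 min, ~38 at 180 min.
    target_topics = max(1, round(duration_min ** 0.7))
    chunk = total_words / target_topics
    return int(min(max(chunk, MIN_CHUNK_WORDS), MAX_CHUNK_WORDS))
```

For a 60-minute, 9000-word transcript this yields a 500-word chunk, i.e. 18 topics, inside the expected 14-22 range.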
# --- compute_max_subjects tests ---
@pytest.mark.parametrize(
"duration_seconds,expected_max",
[
(0, 1), # zero/invalid → 1
(-10, 1), # negative → 1
(60, 1), # 1 min → 1
(120, 1), # 2 min → 1
(300, 1), # 5 min (boundary) → 1
(301, 2), # just over 5 min → 2
(900, 2), # 15 min (boundary) → 2
(901, 3), # just over 15 min → 3
(1800, 3), # 30 min (boundary) → 3
(1801, 4), # just over 30 min → 4
(2700, 4), # 45 min (boundary) → 4
(2701, 5), # just over 45 min → 5
(3600, 5), # 60 min (boundary) → 5
(3601, 6), # just over 60 min → 6
(7200, 6), # 2 hours → 6
(14400, 6), # 4 hours → 6
],
)
def test_max_subjects_scales_with_duration(duration_seconds, expected_max):
assert compute_max_subjects(duration_seconds) == expected_max
def test_max_subjects_never_exceeds_cap():
"""Even very long recordings should cap at 6 subjects."""
for hours in range(1, 10):
assert compute_max_subjects(hours * 3600) <= 6
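The boundary cases in the parametrized test fully determine `compute_max_subjects`: inclusive thresholds at 5, 15, 30, 45, and 60 minutes, with a hard cap of 6. A sketch reconstructed purely from those boundaries (threshold values taken from the table above; nothing else assumed):

```python
# (limit_seconds, max_subjects), thresholds inclusive per the boundary tests
_SUBJECT_THRESHOLDS = [(300, 1), (900, 2), (1800, 3), (2700, 4), (3600, 5)]


def compute_max_subjects(duration_seconds):
    """Sketch: scale the subject cap with recording length, max 6."""
    if duration_seconds <= 0:
        return 1  # zero/negative durations fall back to a single subject
    for limit, max_subjects in _SUBJECT_THRESHOLDS:
        if duration_seconds <= limit:
            return max_subjects
    return 6  # anything over an hour caps at 6 subjects
```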

server/uv.lock generated

@@ -375,31 +375,19 @@ wheels = [
[[package]]
name = "banks"
version = "2.2.0"
version = "2.4.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "deprecated" },
{ name = "filetype" },
{ name = "griffe" },
{ name = "jinja2" },
{ name = "platformdirs" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/f8/25ef24814f77f3fd7f0fd3bd1ef3749e38a9dbd23502fbb53034de49900c/banks-2.2.0.tar.gz", hash = "sha256:d1446280ce6e00301e3e952dd754fd8cee23ff277d29ed160994a84d0d7ffe62", size = 179052, upload-time = "2025-07-18T16:28:26.892Z" }
sdist = { url = "https://files.pythonhosted.org/packages/47/5d/54c79aaaa9aa1278af24cae98d81d6ef635ad840f046bc2ccb5041ddeb1b/banks-2.4.1.tar.gz", hash = "sha256:8cbf1553f14c44d4f7e9c2064ad9212ce53ee4da000b2f8308d548b60db56655", size = 188033, upload-time = "2026-02-17T11:21:14.855Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b4/d6/f9168956276934162ec8d48232f9920f2985ee45aa7602e3c6b4bc203613/banks-2.2.0-py3-none-any.whl", hash = "sha256:963cd5c85a587b122abde4f4064078def35c50c688c1b9d36f43c92503854e7d", size = 29244, upload-time = "2025-07-18T16:28:27.835Z" },
]
[[package]]
name = "beautifulsoup4"
version = "4.13.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "soupsieve" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d8/e4/0c4c39e18fd76d6a628d4dd8da40543d136ce2d1752bd6eeeab0791f4d6b/beautifulsoup4-4.13.4.tar.gz", hash = "sha256:dbb3c4e1ceae6aefebdaf2423247260cd062430a410e38c66f2baa50a8437195", size = 621067, upload-time = "2025-04-15T17:05:13.836Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/50/cd/30110dc0ffcf3b131156077b90e9f60ed75711223f306da4db08eff8403b/beautifulsoup4-4.13.4-py3-none-any.whl", hash = "sha256:9bbbb14bfde9d79f38b8cd5f8c7c85f4b8f2523190ebed90e950a8dea4cb1c4b", size = 187285, upload-time = "2025-04-15T17:05:12.221Z" },
{ url = "https://files.pythonhosted.org/packages/b8/5a/f38b49e8b225b0c774e97c9495e52ab9ccdf6d82bde68c513bd736820eb2/banks-2.4.1-py3-none-any.whl", hash = "sha256:40e6d9b6e9b69fb403fa31f2853b3297e4919c1b6f2179b2119d2d4473c6ed13", size = 35032, upload-time = "2026-02-17T11:21:13.236Z" },
]
[[package]]
@@ -909,15 +897,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4e/8c/f3147f5c4b73e7550fe5f9352eaa956ae838d5c51eb58e7a25b9f3e2643b/decorator-5.2.1-py3-none-any.whl", hash = "sha256:d316bb415a2d9e2d2b3abcc4084c6502fc09240e292cd76a76afc106a1c8e04a", size = 9190, upload-time = "2025-02-24T04:41:32.565Z" },
]
[[package]]
name = "defusedxml"
version = "0.7.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0f/d5/c66da9b79e5bdb124974bfe172b4daf3c984ebd9c2a06e2b8a4dc7331c72/defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69", size = 75520, upload-time = "2021-03-08T10:59:26.269Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/07/6c/aa3f2f849e01cb6a001cd8554a88d4c77c5c1a31c95bdf1cf9301e6d9ef4/defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61", size = 25604, upload-time = "2021-03-08T10:59:24.45Z" },
]
[[package]]
name = "deprecated"
version = "1.2.18"
@@ -1359,17 +1338,18 @@ wheels = [
[[package]]
name = "hf-xet"
version = "1.1.5"
version = "1.4.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ed/d4/7685999e85945ed0d7f0762b686ae7015035390de1161dcea9d5276c134c/hf_xet-1.1.5.tar.gz", hash = "sha256:69ebbcfd9ec44fdc2af73441619eeb06b94ee34511bbcf57cd423820090f5694", size = 495969, upload-time = "2025-06-20T21:48:38.007Z" }
sdist = { url = "https://files.pythonhosted.org/packages/53/92/ec9ad04d0b5728dca387a45af7bc98fbb0d73b2118759f5f6038b61a57e8/hf_xet-1.4.3.tar.gz", hash = "sha256:8ddedb73c8c08928c793df2f3401ec26f95be7f7e516a7bee2fbb546f6676113", size = 670477, upload-time = "2026-03-31T22:40:07.874Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/00/89/a1119eebe2836cb25758e7661d6410d3eae982e2b5e974bcc4d250be9012/hf_xet-1.1.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:f52c2fa3635b8c37c7764d8796dfa72706cc4eded19d638331161e82b0792e23", size = 2687929, upload-time = "2025-06-20T21:48:32.284Z" },
{ url = "https://files.pythonhosted.org/packages/de/5f/2c78e28f309396e71ec8e4e9304a6483dcbc36172b5cea8f291994163425/hf_xet-1.1.5-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:9fa6e3ee5d61912c4a113e0708eaaef987047616465ac7aa30f7121a48fc1af8", size = 2556338, upload-time = "2025-06-20T21:48:30.079Z" },
{ url = "https://files.pythonhosted.org/packages/6d/2f/6cad7b5fe86b7652579346cb7f85156c11761df26435651cbba89376cd2c/hf_xet-1.1.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc874b5c843e642f45fd85cda1ce599e123308ad2901ead23d3510a47ff506d1", size = 3102894, upload-time = "2025-06-20T21:48:28.114Z" },
{ url = "https://files.pythonhosted.org/packages/d0/54/0fcf2b619720a26fbb6cc941e89f2472a522cd963a776c089b189559447f/hf_xet-1.1.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dbba1660e5d810bd0ea77c511a99e9242d920790d0e63c0e4673ed36c4022d18", size = 3002134, upload-time = "2025-06-20T21:48:25.906Z" },
{ url = "https://files.pythonhosted.org/packages/f3/92/1d351ac6cef7c4ba8c85744d37ffbfac2d53d0a6c04d2cabeba614640a78/hf_xet-1.1.5-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ab34c4c3104133c495785d5d8bba3b1efc99de52c02e759cf711a91fd39d3a14", size = 3171009, upload-time = "2025-06-20T21:48:33.987Z" },
{ url = "https://files.pythonhosted.org/packages/c9/65/4b2ddb0e3e983f2508528eb4501288ae2f84963586fbdfae596836d5e57a/hf_xet-1.1.5-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:83088ecea236d5113de478acb2339f92c95b4fb0462acaa30621fac02f5a534a", size = 3279245, upload-time = "2025-06-20T21:48:36.051Z" },
{ url = "https://files.pythonhosted.org/packages/f0/55/ef77a85ee443ae05a9e9cba1c9f0dd9241eb42da2aeba1dc50f51154c81a/hf_xet-1.1.5-cp37-abi3-win_amd64.whl", hash = "sha256:73e167d9807d166596b4b2f0b585c6d5bd84a26dea32843665a8b58f6edba245", size = 2738931, upload-time = "2025-06-20T21:48:39.482Z" },
{ url = "https://files.pythonhosted.org/packages/ac/9f/9c23e4a447b8f83120798f9279d0297a4d1360bdbf59ef49ebec78fe2545/hf_xet-1.4.3-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:d0da85329eaf196e03e90b84c2d0aca53bd4573d097a75f99609e80775f98025", size = 3805048, upload-time = "2026-03-31T22:39:53.105Z" },
{ url = "https://files.pythonhosted.org/packages/0b/f8/7aacb8e5f4a7899d39c787b5984e912e6c18b11be136ef13947d7a66d265/hf_xet-1.4.3-cp37-abi3-macosx_11_0_arm64.whl", hash = "sha256:e23717ce4186b265f69afa66e6f0069fe7efbf331546f5c313d00e123dc84583", size = 3562178, upload-time = "2026-03-31T22:39:51.295Z" },
{ url = "https://files.pythonhosted.org/packages/df/9a/a24b26dc8a65f0ecc0fe5be981a19e61e7ca963b85e062c083f3a9100529/hf_xet-1.4.3-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc360b70c815bf340ed56c7b8c63aacf11762a4b099b2fe2c9bd6d6068668c08", size = 4212320, upload-time = "2026-03-31T22:39:42.922Z" },
{ url = "https://files.pythonhosted.org/packages/53/60/46d493db155d2ee2801b71fb1b0fd67696359047fdd8caee2c914cc50c79/hf_xet-1.4.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:39f2d2e9654cd9b4319885733993807aab6de9dfbd34c42f0b78338d6617421f", size = 3991546, upload-time = "2026-03-31T22:39:41.335Z" },
{ url = "https://files.pythonhosted.org/packages/bc/f5/067363e1c96c6b17256910830d1b54099d06287e10f4ec6ec4e7e08371fc/hf_xet-1.4.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:49ad8a8cead2b56051aa84d7fce3e1335efe68df3cf6c058f22a65513885baac", size = 4193200, upload-time = "2026-03-31T22:40:01.936Z" },
{ url = "https://files.pythonhosted.org/packages/42/4b/53951592882d9c23080c7644542fda34a3813104e9e11fa1a7d82d419cb8/hf_xet-1.4.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:7716d62015477a70ea272d2d68cd7cad140f61c52ee452e133e139abfe2c17ba", size = 4429392, upload-time = "2026-03-31T22:40:03.492Z" },
{ url = "https://files.pythonhosted.org/packages/8a/21/75a6c175b4e79662ad8e62f46a40ce341d8d6b206b06b4320d07d55b188c/hf_xet-1.4.3-cp37-abi3-win_amd64.whl", hash = "sha256:6b591fcad34e272a5b02607485e4f2a1334aebf1bc6d16ce8eb1eb8978ac2021", size = 3677359, upload-time = "2026-03-31T22:40:13.619Z" },
{ url = "https://files.pythonhosted.org/packages/8a/7c/44314ecd0e89f8b2b51c9d9e5e7a60a9c1c82024ac471d415860557d3cd8/hf_xet-1.4.3-cp37-abi3-win_arm64.whl", hash = "sha256:7c2c7e20bcfcc946dc67187c203463f5e932e395845d098cc2a93f5b67ca0b47", size = 3533664, upload-time = "2026-03-31T22:40:12.152Z" },
]
[[package]]
@@ -1440,21 +1420,22 @@ wheels = [
[[package]]
name = "huggingface-hub"
version = "0.33.4"
version = "1.9.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
{ name = "fsspec" },
{ name = "hf-xet", marker = "platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'" },
{ name = "hf-xet", marker = "platform_machine == 'AMD64' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'arm64' or platform_machine == 'x86_64'" },
{ name = "httpx" },
{ name = "packaging" },
{ name = "pyyaml" },
{ name = "requests" },
{ name = "tqdm" },
{ name = "typer" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/4b/9e/9366b7349fc125dd68b9d384a0fea84d67b7497753fe92c71b67e13f47c4/huggingface_hub-0.33.4.tar.gz", hash = "sha256:6af13478deae120e765bfd92adad0ae1aec1ad8c439b46f23058ad5956cbca0a", size = 426674, upload-time = "2025-07-11T12:32:48.694Z" }
sdist = { url = "https://files.pythonhosted.org/packages/44/40/68d9b286b125d9318ae95c8f8b206e8672e7244b0eea61ebb4a88037638c/huggingface_hub-1.9.1.tar.gz", hash = "sha256:442af372207cc24dcb089caf507fcd7dbc1217c11d6059a06f6b90afe64e8bd2", size = 750355, upload-time = "2026-04-07T13:47:59.167Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/46/7b/98daa50a2db034cab6cd23a3de04fa2358cb691593d28e9130203eb7a805/huggingface_hub-0.33.4-py3-none-any.whl", hash = "sha256:09f9f4e7ca62547c70f8b82767eefadd2667f4e116acba2e3e62a5a81815a7bb", size = 515339, upload-time = "2025-07-11T12:32:46.346Z" },
{ url = "https://files.pythonhosted.org/packages/3d/af/10a89c54937dccf6c10792770f362d96dd67aedfde108e6e1fd7a0836789/huggingface_hub-1.9.1-py3-none-any.whl", hash = "sha256:8dae771b969b318203727a6c6c5209d25e661f6f0dd010fc09cc4a12cf81c657", size = 637356, upload-time = "2026-04-07T13:47:57.239Z" },
]
[[package]]
@@ -1834,74 +1815,24 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/5f/0e/f3d3e48628294df4559cffd0f8e1adf030127029e5a8da9beff9979090a0/livekit_protocol-1.1.3-py3-none-any.whl", hash = "sha256:fdae5640e064ab6549ec3d62d8bac75a3ef44d7ea73716069b419cbe8b360a5c", size = 107498, upload-time = "2026-03-18T05:25:42.077Z" },
]
[[package]]
name = "llama-cloud"
version = "0.1.35"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "httpx" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9b/72/816e6e900448e1b4a8137d90e65876b296c5264a23db6ae888bd3e6660ba/llama_cloud-0.1.35.tar.gz", hash = "sha256:200349d5d57424d7461f304cdb1355a58eea3e6ca1e6b0d75c66b2e937216983", size = 106403, upload-time = "2025-07-28T17:22:06.41Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1d/d2/8d18a021ab757cea231428404f21fe3186bf1ebaac3f57a73c379483fd3f/llama_cloud-0.1.35-py3-none-any.whl", hash = "sha256:b7abab4423118e6f638d2f326749e7a07c6426543bea6da99b623c715b22af71", size = 303280, upload-time = "2025-07-28T17:22:04.946Z" },
]
[[package]]
name = "llama-cloud-services"
version = "0.6.54"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "llama-cloud" },
{ name = "llama-index-core" },
{ name = "platformdirs" },
{ name = "pydantic" },
{ name = "python-dotenv" },
{ name = "tenacity" },
]
sdist = { url = "https://files.pythonhosted.org/packages/8a/0c/8ca87d33bea0340a8ed791f36390112aeb29fd3eebfd64b6aef6204a03f0/llama_cloud_services-0.6.54.tar.gz", hash = "sha256:baf65d9bffb68f9dca98ac6e22908b6675b2038b021e657ead1ffc0e43cbd45d", size = 53468, upload-time = "2025-08-01T20:09:20.988Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7f/48/4e295e3f791b279885a2e584f71e75cbe4ac84e93bba3c36e2668f60a8ac/llama_cloud_services-0.6.54-py3-none-any.whl", hash = "sha256:07f595f7a0ba40c6a1a20543d63024ca7600fe65c4811d1951039977908997be", size = 63874, upload-time = "2025-08-01T20:09:20.076Z" },
]
[[package]]
name = "llama-index"
version = "0.13.0"
version = "0.14.20"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-cli" },
{ name = "llama-index-core" },
{ name = "llama-index-embeddings-openai" },
{ name = "llama-index-indices-managed-llama-cloud" },
{ name = "llama-index-llms-openai" },
{ name = "llama-index-readers-file" },
{ name = "llama-index-readers-llama-parse" },
{ name = "nltk" },
]
sdist = { url = "https://files.pythonhosted.org/packages/90/43/d4a19822e828f02d45d20a73c3d6e2e3dbec8faa0c107d8f851e5fccb192/llama_index-0.13.0.tar.gz", hash = "sha256:00f4c61d96a83af5d770a992006f0039eb671c2a64eaab9da3660bee921177f2", size = 8000, upload-time = "2025-07-31T16:07:44.173Z" }
sdist = { url = "https://files.pythonhosted.org/packages/24/1b/7b360f7395485c77a81e514ba86fac577a28c799a5737925dd221adc5b9a/llama_index-0.14.20.tar.gz", hash = "sha256:aa6895cee1366a1ab256715fb2f526d57fe346708c76e77d6f319380de70223b", size = 8566, upload-time = "2026-04-03T19:55:46.792Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/5b/df/3da1e5fcee560d78106357b6d91794b68637b866fc152d46ee3331ffed9b/llama_index-0.13.0-py3-none-any.whl", hash = "sha256:028986e73d948b8119dbf2ed6aa2719ece34b4e2d66dd91ae3473de672fc1361", size = 7027, upload-time = "2025-07-31T16:07:42.736Z" },
]
[[package]]
name = "llama-index-cli"
version = "0.5.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-core" },
{ name = "llama-index-embeddings-openai" },
{ name = "llama-index-llms-openai" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d2/e3/ac6928586e20cfd327a2a38a00781cbc8fae923edcd0316c23e38aae1537/llama_index_cli-0.5.1.tar.gz", hash = "sha256:0446159d85c56c29022c1c830c9886f670d5f59d69343c3c029a3b20eda1a9d8", size = 24821, upload-time = "2025-09-12T15:22:44.064Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/16/b53af5b23921d1e18f57b7a79d557b34554df295c63f5c59d5bee1f5fb47/llama_index_cli-0.5.1-py3-none-any.whl", hash = "sha256:5429b2fd7960df7724c2955b6e6901f6fa910b7b5ecef411c979a8b545a6b7e2", size = 28179, upload-time = "2025-09-12T15:22:43.169Z" },
{ url = "https://files.pythonhosted.org/packages/e5/37/bc6d45dd6207b82220da7c977aff9238c7b3f55b26d63dc2dfefaf3c394f/llama_index-0.14.20-py3-none-any.whl", hash = "sha256:bf43c6d785ced39a5e12605425bffcc9f79fc1bfe9ff831ea8babec6c1a2adef", size = 7114, upload-time = "2026-04-03T19:55:48.599Z" },
]
[[package]]
name = "llama-index-core"
version = "0.13.6"
version = "0.14.20"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -1927,136 +1858,81 @@ dependencies = [
{ name = "sqlalchemy", extra = ["asyncio"] },
{ name = "tenacity" },
{ name = "tiktoken" },
{ name = "tinytag" },
{ name = "tqdm" },
{ name = "typing-extensions" },
{ name = "typing-inspect" },
{ name = "wrapt" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2d/f8/4f6e2bbc34ec6586456727a644960a1ff2d9db60b92071e213ad9d160456/llama_index_core-0.13.6.tar.gz", hash = "sha256:80315a6bd1f9804f48c1870eff1a0315bf9fe5a413747d53eb88a8ebb2602b97", size = 7232179, upload-time = "2025-09-07T03:27:26.544Z" }
sdist = { url = "https://files.pythonhosted.org/packages/38/2c/9a1f613fcd59c583c1b4d529948785fd153f97b076e7b0f170d86106357d/llama_index_core-0.14.20.tar.gz", hash = "sha256:5ddb7ecba2131ecd0a452cd730c5361a407d3ffcdcfb1a319525ed8c9a7c423b", size = 11599236, upload-time = "2026-04-03T19:54:52.108Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/35/23/7e497216ece6e041c6a271f2b7952e5609729da0dcdf09dd3f25a4efc1b9/llama_index_core-0.13.6-py3-none-any.whl", hash = "sha256:67bec3c06a8105cd82d83db0f8c3122f4e4d8a4b9c7a2768cced6a2686ddb331", size = 7575324, upload-time = "2025-09-07T03:27:19.243Z" },
{ url = "https://files.pythonhosted.org/packages/d3/27/0f0e01c239efddc178713379341aabee7a54ffa8e0a4162ff05a0ab950e0/llama_index_core-0.14.20-py3-none-any.whl", hash = "sha256:c666e395879e73a0aa6c751e5f4c8a8e8637df50f6e66ab9ae6e5d932c816126", size = 11945381, upload-time = "2026-04-03T19:54:55.711Z" },
]
[[package]]
name = "llama-index-embeddings-openai"
version = "0.5.1"
version = "0.6.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-core" },
{ name = "openai" },
]
sdist = { url = "https://files.pythonhosted.org/packages/10/36/90336d054a5061a3f5bc17ac2c18ef63d9d84c55c14d557de484e811ea4d/llama_index_embeddings_openai-0.5.1.tar.gz", hash = "sha256:1c89867a48b0d0daa3d2d44f5e76b394b2b2ef9935932daf921b9e77939ccda8", size = 7020, upload-time = "2025-09-08T20:17:44.681Z" }
sdist = { url = "https://files.pythonhosted.org/packages/06/52/eb56a4887501651fb17400f7f571c1878109ff698efbe0bbac9165a5603d/llama_index_embeddings_openai-0.6.0.tar.gz", hash = "sha256:eb3e6606be81cb89125073e23c97c0a6119dabb4827adbd14697c2029ad73f29", size = 7629, upload-time = "2026-03-12T20:21:27.234Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/23/4a/8ab11026cf8deff8f555aa73919be0bac48332683111e5fc4290f352dc50/llama_index_embeddings_openai-0.5.1-py3-none-any.whl", hash = "sha256:a2fcda3398bbd987b5ce3f02367caee8e84a56b930fdf43cc1d059aa9fd20ca5", size = 7011, upload-time = "2025-09-08T20:17:44.015Z" },
]
[[package]]
name = "llama-index-indices-managed-llama-cloud"
version = "0.9.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "deprecated" },
{ name = "llama-cloud" },
{ name = "llama-index-core" },
]
sdist = { url = "https://files.pythonhosted.org/packages/61/4a/79044fcb3209583d1ffe0c2a7c19dddfb657a03faeb9fe0cf5a74027e646/llama_index_indices_managed_llama_cloud-0.9.4.tar.gz", hash = "sha256:b5e00752ab30564abf19c57595a2107f5697c3b03b085817b4fca84a38ebbd59", size = 15146, upload-time = "2025-09-08T20:29:58.673Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/6a/0e33245df06afc9766c46a1fe92687be8a09da5d0d0128bc08d84a9f5efa/llama_index_indices_managed_llama_cloud-0.9.4-py3-none-any.whl", hash = "sha256:535a08811046803ca6ab7f8e9d510e926aa5306608b02201ad3d9d21701383bc", size = 17005, upload-time = "2025-09-08T20:29:57.876Z" },
{ url = "https://files.pythonhosted.org/packages/4e/d1/4bb0b80f4057903110060f617ef519197194b3ff5dd6153d850c8f5676fa/llama_index_embeddings_openai-0.6.0-py3-none-any.whl", hash = "sha256:039bb1007ad4267e25ddb89a206dfdab862bfb87d58da4271a3919e4f9df4d61", size = 7666, upload-time = "2026-03-12T20:21:28.079Z" },
]
[[package]]
name = "llama-index-instrumentation"
version = "0.3.0"
version = "0.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "deprecated" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0f/57/76123657bf6f175382ceddee9af66507c37d603475cbf0968df8dfea9de2/llama_index_instrumentation-0.3.0.tar.gz", hash = "sha256:77741c1d9861ead080e6f98350625971488d1e046bede91cec9e0ce2f63ea34a", size = 42651, upload-time = "2025-07-17T17:41:20.468Z" }
sdist = { url = "https://files.pythonhosted.org/packages/4e/d0/671b23ccff255c9bce132a84ffd5a6f4541ceefdeab9c1786b08c9722f2e/llama_index_instrumentation-0.5.0.tar.gz", hash = "sha256:eeb724648b25d149de882a5ac9e21c5acb1ce780da214bda2b075341af29ad8e", size = 43831, upload-time = "2026-03-12T20:17:06.742Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cc/d4/9377a53ea2f9bdd33f5ccff78ac863705657f422bb686cad4896b058ce46/llama_index_instrumentation-0.3.0-py3-none-any.whl", hash = "sha256:edfcd71aedc453dbdb4a7073a1e39ddef6ae2c13601a4cba6f2dfea38f48eeff", size = 15011, upload-time = "2025-07-17T17:41:19.723Z" },
{ url = "https://files.pythonhosted.org/packages/c3/45/6dcaccef44e541ffa138e4b45e33e0d40ab2a7d845338483954fcf77bc75/llama_index_instrumentation-0.5.0-py3-none-any.whl", hash = "sha256:aaab83cddd9dd434278891012d8995f47a3bc7ed1736a371db90965348c56a21", size = 16444, upload-time = "2026-03-12T20:17:05.957Z" },
]
[[package]]
name = "llama-index-llms-openai"
version = "0.5.6"
version = "0.7.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-core" },
{ name = "openai" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d7/fe/ac57ecb9b5ea4243e097fbc3f5de22f6bd1a787b72a7c80542af80afbf4d/llama_index_llms_openai-0.5.6.tar.gz", hash = "sha256:92533e83be2eb321d84a01a84fb2bf4506bf684c410cd94ccb29ae6c949a27d4", size = 24239, upload-time = "2025-09-08T20:46:25.018Z" }
sdist = { url = "https://files.pythonhosted.org/packages/65/27/18a7fd0873023aed145332dab5a09b95b298e4fff1c21685eaf22b629d87/llama_index_llms_openai-0.7.5.tar.gz", hash = "sha256:54123e679a7cddc1f2e969f278a4654050730daf84691731a0c53ae14feac3c7", size = 27423, upload-time = "2026-03-30T16:30:33.973Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b4/e2/d78be8cbc645668eba088223d63114a076758626fe12e3b4ec9efa2ba342/llama_index_llms_openai-0.5.6-py3-none-any.whl", hash = "sha256:a93a897fe733a6d7b668cbc6cca546e644054ddf5497821141b2d4b5ffb6ea80", size = 25368, upload-time = "2025-09-08T20:46:23.79Z" },
{ url = "https://files.pythonhosted.org/packages/63/62/a847e9a94c2f92926c30188259f9f86e019dcc45122bbb222dea35a74c02/llama_index_llms_openai-0.7.5-py3-none-any.whl", hash = "sha256:c302c6386873420df3714c3d538f45379b6de27ab6a531f30c67419b39a538f5", size = 28492, upload-time = "2026-03-30T16:30:32.979Z" },
]
[[package]]
name = "llama-index-llms-openai-like"
version = "0.5.1"
version = "0.7.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-core" },
{ name = "llama-index-llms-openai" },
{ name = "transformers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/8f/81/41b328a13262c287a1e56bb93eff1564164db53ad5773961c378e23dba36/llama_index_llms_openai_like-0.5.1.tar.gz", hash = "sha256:77044a5c2d1e4743435751dd9d39a2281bc9de969f9b90196fe4e2b9f773a352", size = 4899, upload-time = "2025-09-08T20:29:47.603Z" }
sdist = { url = "https://files.pythonhosted.org/packages/9d/9f/0d98d022a08f43d4374998072636ec50a7cb50009bbb9a2761f5b26a78cc/llama_index_llms_openai_like-0.7.1.tar.gz", hash = "sha256:ce7cef3686b1e62d7c08134b4d8ca56706cca816e4c4098eaede33002829a6f9", size = 5177, upload-time = "2026-03-13T16:15:58.156Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/63/57/ab83d7e098a88dc101d56c22584f279dd632785af3bc1e9b84b9b598264d/llama_index_llms_openai_like-0.5.1-py3-none-any.whl", hash = "sha256:0d196d9cd71f7a695a647767c3b09b8e532031f15a86a8d8519645bf77ac3b75", size = 4594, upload-time = "2025-09-08T20:29:46.883Z" },
]
[[package]]
name = "llama-index-readers-file"
version = "0.5.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "beautifulsoup4" },
{ name = "defusedxml" },
{ name = "llama-index-core" },
{ name = "pandas" },
{ name = "pypdf" },
{ name = "striprtf" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a0/e5/dccfb495dbc40f50fcfb799db2287ac5dca4a16a3b09bae61a4ccb1788d3/llama_index_readers_file-0.5.6.tar.gz", hash = "sha256:1c08b14facc2dfe933622aaa26dc7d2a7a6023c42d3db896a2c948789edaf1ea", size = 32535, upload-time = "2025-12-24T16:04:16.421Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fe/c3/8d28eaa962e073e6735d80847dda9fd3525cb9ff5974ae82dd20621a5a02/llama_index_readers_file-0.5.6-py3-none-any.whl", hash = "sha256:32e83f9adb4e4803e6c7cef746c44fa0949013b1cb76f06f422e9491d198dbda", size = 51832, upload-time = "2025-12-24T16:04:17.307Z" },
]
[[package]]
name = "llama-index-readers-llama-parse"
version = "0.5.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-core" },
{ name = "llama-parse" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b3/77/5bfaab20e6ec8428dbf2352e18be550c957602723d69383908176b5686cd/llama_index_readers_llama_parse-0.5.1.tar.gz", hash = "sha256:2b78b73faa933e30e6c69df351e4e9f36dfe2ae142e2ab3969ddd2ac48930e37", size = 3858, upload-time = "2025-09-08T20:41:29.201Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/68/81/52410c7245dcbf1a54756a9ce3892cdd167ec0b884d696de1304ca3f452e/llama_index_readers_llama_parse-0.5.1-py3-none-any.whl", hash = "sha256:0d41450ed29b0c49c024e206ef6c8e662b1854e77a1c5faefed3b958be54f880", size = 3203, upload-time = "2025-09-08T20:41:28.438Z" },
{ url = "https://files.pythonhosted.org/packages/6d/22/a1e1ec1112c69ca0a379cd72691c36cdbcba78362622ce9a27e5a97965cc/llama_index_llms_openai_like-0.7.1-py3-none-any.whl", hash = "sha256:831f1144077c6f9ea7a62e987b7f2af00310dded3056edca2cb509f70a3e650a", size = 4860, upload-time = "2026-03-13T16:15:59.113Z" },
]
[[package]]
name = "llama-index-workflows"
version = "1.2.0"
version = "2.17.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-index-instrumentation" },
{ name = "pydantic" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/26/9d/9dc7adc10d9976582bf50b074883986cb36b46f2fe45cf60550767300a29/llama_index_workflows-1.2.0.tar.gz", hash = "sha256:f6b19f01a340a1afb1d2fd2285c9dce346e304a3aae519e6103059f5afb2609f", size = 1019113, upload-time = "2025-07-23T18:32:47.86Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f7/36/07f0a6c7173e33f46d6e1754b73c0b40ba5368bf623fb2727d5889cc2f93/llama_index_workflows-2.17.3.tar.gz", hash = "sha256:85f6dcdbf214700ab0741dc3225ad4eaaf1c90fd9f0e082588aa70c4735b26c3", size = 87703, upload-time = "2026-04-07T21:59:10.662Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/36/c1/5190f102a042d36a6a495de27510c2d6e3aca98f892895bfacdcf9109c1d/llama_index_workflows-1.2.0-py3-none-any.whl", hash = "sha256:5722a7ce137e00361025768789e7e77720cd66f855791050183a3c540b6e5b8c", size = 37463, upload-time = "2025-07-23T18:32:46.294Z" },
]
[[package]]
name = "llama-parse"
version = "0.6.43"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llama-cloud-services" },
]
sdist = { url = "https://files.pythonhosted.org/packages/79/62/22e3f73a2b33b9db1523573611281010c8258bf1d17408913e8e46bdfe58/llama_parse-0.6.43.tar.gz", hash = "sha256:d88e91c97e37f77b2619111ef43c02b7da61125f821cf77f918996eb48200d78", size = 3536, upload-time = "2025-07-08T18:20:58.786Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fa/24/8497595be04a8a0209536e9ce70d4132f8f8e001986f4c700414b3777758/llama_parse-0.6.43-py3-none-any.whl", hash = "sha256:fe435309638c4fdec4fec31f97c5031b743c92268962d03b99bd76704f566c32", size = 4944, upload-time = "2025-07-08T18:20:57.089Z" },
{ url = "https://files.pythonhosted.org/packages/44/d9/b83117e1482cbfcbffca565a070e2e1c228f840f1139dc83dd21bf1f5212/llama_index_workflows-2.17.3-py3-none-any.whl", hash = "sha256:5299775835b521a7ecca0099ad7a9b14e1ce26eb83277fbcc14071dfac54a404", size = 111543, upload-time = "2026-04-07T21:59:09.406Z" },
]
[[package]]
@@ -2400,7 +2276,7 @@ wheels = [
[[package]]
name = "openai"
version = "1.97.0"
version = "2.30.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -2412,9 +2288,9 @@ dependencies = [
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e0/c6/b8d66e4f3b95493a8957065b24533333c927dc23817abe397f13fe589c6e/openai-1.97.0.tar.gz", hash = "sha256:0be349569ccaa4fb54f97bb808423fd29ccaeb1246ee1be762e0c81a47bae0aa", size = 493850, upload-time = "2025-07-16T16:37:35.196Z" }
sdist = { url = "https://files.pythonhosted.org/packages/88/15/52580c8fbc16d0675d516e8749806eda679b16de1e4434ea06fb6feaa610/openai-2.30.0.tar.gz", hash = "sha256:92f7661c990bda4b22a941806c83eabe4896c3094465030dd882a71abe80c885", size = 676084, upload-time = "2026-03-25T22:08:59.96Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/91/1f1cf577f745e956b276a8b1d3d76fa7a6ee0c2b05db3b001b900f2c71db/openai-1.97.0-py3-none-any.whl", hash = "sha256:a1c24d96f4609f3f7f51c9e1c2606d97cc6e334833438659cfd687e9c972c610", size = 764953, upload-time = "2025-07-16T16:37:33.135Z" },
{ url = "https://files.pythonhosted.org/packages/2a/9e/5bfa2270f902d5b92ab7d41ce0475b8630572e71e349b2a4996d14bdda93/openai-2.30.0-py3-none-any.whl", hash = "sha256:9a5ae616888eb2748ec5e0c5b955a51592e0b201a11f4262db920f2a78c5231d", size = 1146656, upload-time = "2026-03-25T22:08:58.2Z" },
]
[[package]]
@@ -3012,15 +2888,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/10/bd/c038d7cc38edc1aa5bf91ab8068b63d4308c66c4c8bb3cbba7dfbc049f9c/pyparsing-3.3.2-py3-none-any.whl", hash = "sha256:850ba148bd908d7e2411587e247a1e4f0327839c40e2e5e6d05a007ecc69911d", size = 122781, upload-time = "2026-01-21T03:57:55.912Z" },
]
[[package]]
name = "pypdf"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/31/83/691bdb309306232362503083cb15777491045dd54f45393a317dc7d8082f/pypdf-6.9.2.tar.gz", hash = "sha256:7f850faf2b0d4ab936582c05da32c52214c2b089d61a316627b5bfb5b0dab46c", size = 5311837, upload-time = "2026-03-23T14:53:27.983Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a5/7e/c85f41243086a8fe5d1baeba527cb26a1918158a565932b41e0f7c0b32e9/pypdf-6.9.2-py3-none-any.whl", hash = "sha256:662cf29bcb419a36a1365232449624ab40b7c2d0cfc28e54f42eeecd1fd7e844", size = 333744, upload-time = "2026-03-23T14:53:26.573Z" },
]
[[package]]
name = "pyreadline3"
version = "3.5.4"
@@ -3970,15 +3837,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/14/e9/6b761de83277f2f02ded7e7ea6f07828ec78e4b229b80e4ca55dd205b9dc/soundfile-0.13.1-py2.py3-none-win_amd64.whl", hash = "sha256:1e70a05a0626524a69e9f0f4dd2ec174b4e9567f4d8b6c11d38b5c289be36ee9", size = 1019162, upload-time = "2025-01-25T09:16:59.573Z" },
]
[[package]]
name = "soupsieve"
version = "2.7"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/3f/f4/4a80cd6ef364b2e8b65b15816a843c0980f7a5a2b4dc701fc574952aa19f/soupsieve-2.7.tar.gz", hash = "sha256:ad282f9b6926286d2ead4750552c8a6142bc4c783fd66b0293547c8fe6ae126a", size = 103418, upload-time = "2025-04-20T18:50:08.518Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/9c/0e6afc12c269578be5c0c1c9f4b49a8d32770a080260c333ac04cc1c832d/soupsieve-2.7-py3-none-any.whl", hash = "sha256:6e60cc5c1ffaf1cebcc12e8188320b72071e922c2e897f737cadce79ad5d30c4", size = 36677, upload-time = "2025-04-20T18:50:07.196Z" },
]
[[package]]
name = "soxr"
version = "1.0.0"
@@ -4074,15 +3932,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/51/da/545b75d420bb23b5d494b0517757b351963e974e79933f01e05c929f20a6/starlette-0.49.1-py3-none-any.whl", hash = "sha256:d92ce9f07e4a3caa3ac13a79523bd18e3bc0042bb8ff2d759a8e7dd0e1859875", size = 74175, upload-time = "2025-10-28T17:34:09.13Z" },
]
[[package]]
name = "striprtf"
version = "0.0.26"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/25/20/3d419008265346452d09e5dadfd5d045b64b40d8fc31af40588e6c76997a/striprtf-0.0.26.tar.gz", hash = "sha256:fdb2bba7ac440072d1c41eab50d8d74ae88f60a8b6575c6e2c7805dc462093aa", size = 6258, upload-time = "2023-07-20T14:30:36.29Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/cf/0fea4f4ba3fc2772ac2419278aa9f6964124d4302117d61bc055758e000c/striprtf-0.0.26-py3-none-any.whl", hash = "sha256:8c8f9d32083cdc2e8bfb149455aa1cc5a4e0a035893bedc75db8b73becb3a1bb", size = 6914, upload-time = "2023-07-20T14:30:35.338Z" },
]
[[package]]
name = "structlog"
version = "25.4.0"
@@ -4169,29 +4018,39 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/5b/64/b16003419a1d7728d0d8c0d56a4c24325e7b10a21a9dd1fc0f7115c02f0a/tiktoken-0.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:5a62d7a25225bafed786a524c1b9f0910a1128f4232615bf3f8257a73aaa3b16", size = 894897, upload-time = "2025-02-14T06:02:36.265Z" },
]
[[package]]
name = "tinytag"
version = "2.2.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/96/59/8a8cb2331e2602b53e4dc06960f57d1387a2b18e7efd24e5f9cb60ea4925/tinytag-2.2.1.tar.gz", hash = "sha256:e6d06610ebe7cd66fd07be2d3b9495914ab32654a5e47657bb8cd44c2484523c", size = 38214, upload-time = "2026-03-15T18:48:01.11Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ce/34/d50e338631baaf65ec5396e70085e5de0b52b24b28db1ffbc1c6e82190dc/tinytag-2.2.1-py3-none-any.whl", hash = "sha256:ed8b1e6d25367937e3321e054f4974f9abfde1a3e0a538824c87da377130c2b6", size = 32927, upload-time = "2026-03-15T18:47:59.613Z" },
]
[[package]]
name = "tokenizers"
version = "0.21.2"
version = "0.22.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "huggingface-hub" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ab/2d/b0fce2b8201635f60e8c95990080f58461cc9ca3d5026de2e900f38a7f21/tokenizers-0.21.2.tar.gz", hash = "sha256:fdc7cffde3e2113ba0e6cc7318c40e3438a4d74bbc62bf04bcc63bdfb082ac77", size = 351545, upload-time = "2025-06-24T10:24:52.449Z" }
sdist = { url = "https://files.pythonhosted.org/packages/73/6f/f80cfef4a312e1fb34baf7d85c72d4411afde10978d4657f8cdd811d3ccc/tokenizers-0.22.2.tar.gz", hash = "sha256:473b83b915e547aa366d1eee11806deaf419e17be16310ac0a14077f1e28f917", size = 372115, upload-time = "2026-01-05T10:45:15.988Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1d/cc/2936e2d45ceb130a21d929743f1e9897514691bec123203e10837972296f/tokenizers-0.21.2-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:342b5dfb75009f2255ab8dec0041287260fed5ce00c323eb6bab639066fef8ec", size = 2875206, upload-time = "2025-06-24T10:24:42.755Z" },
{ url = "https://files.pythonhosted.org/packages/6c/e6/33f41f2cc7861faeba8988e7a77601407bf1d9d28fc79c5903f8f77df587/tokenizers-0.21.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:126df3205d6f3a93fea80c7a8a266a78c1bd8dd2fe043386bafdd7736a23e45f", size = 2732655, upload-time = "2025-06-24T10:24:41.56Z" },
{ url = "https://files.pythonhosted.org/packages/33/2b/1791eb329c07122a75b01035b1a3aa22ad139f3ce0ece1b059b506d9d9de/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a32cd81be21168bd0d6a0f0962d60177c447a1aa1b1e48fa6ec9fc728ee0b12", size = 3019202, upload-time = "2025-06-24T10:24:31.791Z" },
{ url = "https://files.pythonhosted.org/packages/05/15/fd2d8104faa9f86ac68748e6f7ece0b5eb7983c7efc3a2c197cb98c99030/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8bd8999538c405133c2ab999b83b17c08b7fc1b48c1ada2469964605a709ef91", size = 2934539, upload-time = "2025-06-24T10:24:34.567Z" },
{ url = "https://files.pythonhosted.org/packages/a5/2e/53e8fd053e1f3ffbe579ca5f9546f35ac67cf0039ed357ad7ec57f5f5af0/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5e9944e61239b083a41cf8fc42802f855e1dca0f499196df37a8ce219abac6eb", size = 3248665, upload-time = "2025-06-24T10:24:39.024Z" },
{ url = "https://files.pythonhosted.org/packages/00/15/79713359f4037aa8f4d1f06ffca35312ac83629da062670e8830917e2153/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:514cd43045c5d546f01142ff9c79a96ea69e4b5cda09e3027708cb2e6d5762ab", size = 3451305, upload-time = "2025-06-24T10:24:36.133Z" },
{ url = "https://files.pythonhosted.org/packages/38/5f/959f3a8756fc9396aeb704292777b84f02a5c6f25c3fc3ba7530db5feb2c/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b1b9405822527ec1e0f7d8d2fdb287a5730c3a6518189c968254a8441b21faae", size = 3214757, upload-time = "2025-06-24T10:24:37.784Z" },
{ url = "https://files.pythonhosted.org/packages/c5/74/f41a432a0733f61f3d21b288de6dfa78f7acff309c6f0f323b2833e9189f/tokenizers-0.21.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fed9a4d51c395103ad24f8e7eb976811c57fbec2af9f133df471afcd922e5020", size = 3121887, upload-time = "2025-06-24T10:24:40.293Z" },
{ url = "https://files.pythonhosted.org/packages/3c/6a/bc220a11a17e5d07b0dfb3b5c628621d4dcc084bccd27cfaead659963016/tokenizers-0.21.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2c41862df3d873665ec78b6be36fcc30a26e3d4902e9dd8608ed61d49a48bc19", size = 9091965, upload-time = "2025-06-24T10:24:44.431Z" },
{ url = "https://files.pythonhosted.org/packages/6c/bd/ac386d79c4ef20dc6f39c4706640c24823dca7ebb6f703bfe6b5f0292d88/tokenizers-0.21.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:ed21dc7e624e4220e21758b2e62893be7101453525e3d23264081c9ef9a6d00d", size = 9053372, upload-time = "2025-06-24T10:24:46.455Z" },
{ url = "https://files.pythonhosted.org/packages/63/7b/5440bf203b2a5358f074408f7f9c42884849cd9972879e10ee6b7a8c3b3d/tokenizers-0.21.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:0e73770507e65a0e0e2a1affd6b03c36e3bc4377bd10c9ccf51a82c77c0fe365", size = 9298632, upload-time = "2025-06-24T10:24:48.446Z" },
{ url = "https://files.pythonhosted.org/packages/a4/d2/faa1acac3f96a7427866e94ed4289949b2524f0c1878512516567d80563c/tokenizers-0.21.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:106746e8aa9014a12109e58d540ad5465b4c183768ea96c03cbc24c44d329958", size = 9470074, upload-time = "2025-06-24T10:24:50.378Z" },
{ url = "https://files.pythonhosted.org/packages/d8/a5/896e1ef0707212745ae9f37e84c7d50269411aef2e9ccd0de63623feecdf/tokenizers-0.21.2-cp39-abi3-win32.whl", hash = "sha256:cabda5a6d15d620b6dfe711e1af52205266d05b379ea85a8a301b3593c60e962", size = 2330115, upload-time = "2025-06-24T10:24:55.069Z" },
{ url = "https://files.pythonhosted.org/packages/13/c3/cc2755ee10be859c4338c962a35b9a663788c0c0b50c0bdd8078fb6870cf/tokenizers-0.21.2-cp39-abi3-win_amd64.whl", hash = "sha256:58747bb898acdb1007f37a7bbe614346e98dc28708ffb66a3fd50ce169ac6c98", size = 2509918, upload-time = "2025-06-24T10:24:53.71Z" },
{ url = "https://files.pythonhosted.org/packages/92/97/5dbfabf04c7e348e655e907ed27913e03db0923abb5dfdd120d7b25630e1/tokenizers-0.22.2-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:544dd704ae7238755d790de45ba8da072e9af3eea688f698b137915ae959281c", size = 3100275, upload-time = "2026-01-05T10:41:02.158Z" },
{ url = "https://files.pythonhosted.org/packages/2e/47/174dca0502ef88b28f1c9e06b73ce33500eedfac7a7692108aec220464e7/tokenizers-0.22.2-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:1e418a55456beedca4621dbab65a318981467a2b188e982a23e117f115ce5001", size = 2981472, upload-time = "2026-01-05T10:41:00.276Z" },
{ url = "https://files.pythonhosted.org/packages/d6/84/7990e799f1309a8b87af6b948f31edaa12a3ed22d11b352eaf4f4b2e5753/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2249487018adec45d6e3554c71d46eb39fa8ea67156c640f7513eb26f318cec7", size = 3290736, upload-time = "2026-01-05T10:40:32.165Z" },
{ url = "https://files.pythonhosted.org/packages/78/59/09d0d9ba94dcd5f4f1368d4858d24546b4bdc0231c2354aa31d6199f0399/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:25b85325d0815e86e0bac263506dd114578953b7b53d7de09a6485e4a160a7dd", size = 3168835, upload-time = "2026-01-05T10:40:38.847Z" },
{ url = "https://files.pythonhosted.org/packages/47/50/b3ebb4243e7160bda8d34b731e54dd8ab8b133e50775872e7a434e524c28/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfb88f22a209ff7b40a576d5324bf8286b519d7358663db21d6246fb17eea2d5", size = 3521673, upload-time = "2026-01-05T10:40:56.614Z" },
{ url = "https://files.pythonhosted.org/packages/e0/fa/89f4cb9e08df770b57adb96f8cbb7e22695a4cb6c2bd5f0c4f0ebcf33b66/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1c774b1276f71e1ef716e5486f21e76333464f47bece56bbd554485982a9e03e", size = 3724818, upload-time = "2026-01-05T10:40:44.507Z" },
{ url = "https://files.pythonhosted.org/packages/64/04/ca2363f0bfbe3b3d36e95bf67e56a4c88c8e3362b658e616d1ac185d47f2/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:df6c4265b289083bf710dff49bc51ef252f9d5be33a45ee2bed151114a56207b", size = 3379195, upload-time = "2026-01-05T10:40:51.139Z" },
{ url = "https://files.pythonhosted.org/packages/2e/76/932be4b50ef6ccedf9d3c6639b056a967a86258c6d9200643f01269211ca/tokenizers-0.22.2-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:369cc9fc8cc10cb24143873a0d95438bb8ee257bb80c71989e3ee290e8d72c67", size = 3274982, upload-time = "2026-01-05T10:40:58.331Z" },
{ url = "https://files.pythonhosted.org/packages/1d/28/5f9f5a4cc211b69e89420980e483831bcc29dade307955cc9dc858a40f01/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:29c30b83d8dcd061078b05ae0cb94d3c710555fbb44861139f9f83dcca3dc3e4", size = 9478245, upload-time = "2026-01-05T10:41:04.053Z" },
{ url = "https://files.pythonhosted.org/packages/6c/fb/66e2da4704d6aadebf8cb39f1d6d1957df667ab24cff2326b77cda0dcb85/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:37ae80a28c1d3265bb1f22464c856bd23c02a05bb211e56d0c5301a435be6c1a", size = 9560069, upload-time = "2026-01-05T10:45:10.673Z" },
{ url = "https://files.pythonhosted.org/packages/16/04/fed398b05caa87ce9b1a1bb5166645e38196081b225059a6edaff6440fac/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:791135ee325f2336f498590eb2f11dc5c295232f288e75c99a36c5dbce63088a", size = 9899263, upload-time = "2026-01-05T10:45:12.559Z" },
{ url = "https://files.pythonhosted.org/packages/05/a1/d62dfe7376beaaf1394917e0f8e93ee5f67fea8fcf4107501db35996586b/tokenizers-0.22.2-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:38337540fbbddff8e999d59970f3c6f35a82de10053206a7562f1ea02d046fa5", size = 10033429, upload-time = "2026-01-05T10:45:14.333Z" },
{ url = "https://files.pythonhosted.org/packages/fd/18/a545c4ea42af3df6effd7d13d250ba77a0a86fb20393143bbb9a92e434d4/tokenizers-0.22.2-cp39-abi3-win32.whl", hash = "sha256:a6bf3f88c554a2b653af81f3204491c818ae2ac6fbc09e76ef4773351292bc92", size = 2502363, upload-time = "2026-01-05T10:45:20.593Z" },
{ url = "https://files.pythonhosted.org/packages/65/71/0670843133a43d43070abeb1949abfdef12a86d490bea9cd9e18e37c5ff7/tokenizers-0.22.2-cp39-abi3-win_amd64.whl", hash = "sha256:c9ea31edff2968b44a88f97d784c2f16dc0729b8b143ed004699ebca91f05c48", size = 2747786, upload-time = "2026-01-05T10:45:18.411Z" },
{ url = "https://files.pythonhosted.org/packages/72/f4/0de46cfa12cdcbcd464cc59fde36912af405696f687e53a091fb432f694c/tokenizers-0.22.2-cp39-abi3-win_arm64.whl", hash = "sha256:9ce725d22864a1e965217204946f830c37876eee3b2ba6fc6255e8e903d5fcbc", size = 2612133, upload-time = "2026-01-05T10:45:17.232Z" },
]
[[package]]
@@ -4396,7 +4255,7 @@ wheels = [
[[package]]
name = "transformers"
version = "4.53.2"
version = "5.0.0rc3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
@@ -4409,25 +4268,38 @@ dependencies = [
{ name = "safetensors" },
{ name = "tokenizers" },
{ name = "tqdm" },
{ name = "typer-slim" },
]
sdist = { url = "https://files.pythonhosted.org/packages/4c/67/80f51466ec447028fd84469b208eb742533ce06cc8fad2e3181380199e5c/transformers-4.53.2.tar.gz", hash = "sha256:6c3ed95edfb1cba71c4245758f1b4878c93bf8cde77d076307dacb2cbbd72be2", size = 9201233, upload-time = "2025-07-11T12:39:08.742Z" }
sdist = { url = "https://files.pythonhosted.org/packages/3f/a3/7c116a8d85f69ea7749cf4c2df79e64c35d028e5fc7ea0168f299d03b8c7/transformers-5.0.0rc3.tar.gz", hash = "sha256:a0315b92b7e087617ade42ec9e6e92ee7620541cc5d6a3331886c52cbe306f5c", size = 8388520, upload-time = "2026-01-14T16:49:02.952Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/96/88/beb33a79a382fcd2aed0be5222bdc47f41e4bfe7aaa90ae1374f1d8ea2af/transformers-4.53.2-py3-none-any.whl", hash = "sha256:db8f4819bb34f000029c73c3c557e7d06fc1b8e612ec142eecdae3947a9c78bf", size = 10826609, upload-time = "2025-07-11T12:39:05.461Z" },
{ url = "https://files.pythonhosted.org/packages/1e/f2/ae2b8968764253bdf38a48dee3c299b8d0bedf7c8ffbe3449fca9bd95338/transformers-5.0.0rc3-py3-none-any.whl", hash = "sha256:383fad27f4f73092d330e45fae384681e5c8521e1dc1cf6cb1a297780e68bf2d", size = 10107087, upload-time = "2026-01-14T16:48:59.393Z" },
]
[[package]]
name = "typer"
version = "0.16.0"
version = "0.24.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-doc" },
{ name = "click" },
{ name = "rich" },
{ name = "shellingham" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c5/8c/7d682431efca5fd290017663ea4588bf6f2c6aad085c7f108c5dbc316e70/typer-0.16.0.tar.gz", hash = "sha256:af377ffaee1dbe37ae9440cb4e8f11686ea5ce4e9bae01b84ae7c63b87f1dd3b", size = 102625, upload-time = "2025-05-26T14:30:31.824Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f5/24/cb09efec5cc954f7f9b930bf8279447d24618bb6758d4f6adf2574c41780/typer-0.24.1.tar.gz", hash = "sha256:e39b4732d65fbdcde189ae76cf7cd48aeae72919dea1fdfc16593be016256b45", size = 118613, upload-time = "2026-02-21T16:54:40.609Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/76/42/3efaf858001d2c2913de7f354563e3a3a2f0decae3efe98427125a8f441e/typer-0.16.0-py3-none-any.whl", hash = "sha256:1f79bed11d4d02d4310e3c1b7ba594183bcedb0ac73b27a9e5f28f6fb5b98855", size = 46317, upload-time = "2025-05-26T14:30:30.523Z" },
{ url = "https://files.pythonhosted.org/packages/4a/91/48db081e7a63bb37284f9fbcefda7c44c277b18b0e13fbc36ea2335b71e6/typer-0.24.1-py3-none-any.whl", hash = "sha256:112c1f0ce578bfb4cab9ffdabc68f031416ebcc216536611ba21f04e9aa84c9e", size = 56085, upload-time = "2026-02-21T16:54:41.616Z" },
]
[[package]]
name = "typer-slim"
version = "0.24.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a7/a7/e6aecc4b4eb59598829a3b5076a93aff291b4fdaa2ded25efc4e1f4d219c/typer_slim-0.24.0.tar.gz", hash = "sha256:f0ed36127183f52ae6ced2ecb2521789995992c521a46083bfcdbb652d22ad34", size = 4776, upload-time = "2026-02-16T22:08:51.2Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/24/5480c20380dfd18cf33d14784096dca45a24eae6102e91d49a718d3b6855/typer_slim-0.24.0-py3-none-any.whl", hash = "sha256:d5d7ee1ee2834d5020c7c616ed5e0d0f29b9a4b1dd283bdebae198ec09778d0e", size = 3394, upload-time = "2026-02-16T22:08:49.92Z" },
]
[[package]]


@@ -1,6 +0,0 @@
node_modules
dist
.env
.env.local
.git
.DS_Store


@@ -1,10 +0,0 @@
# Base URL for the Reflector backend API.
# In dev, Vite proxies /v1 to this origin so keep it pointing at the local server.
VITE_API_PROXY_TARGET=http://localhost:1250
# OIDC (Authentik) — used when the backend runs in JWT / SSO mode.
# Leave blank in password-auth mode.
VITE_OIDC_AUTHORITY=
VITE_OIDC_CLIENT_ID=
# Scopes requested at login. Defaults to "openid profile email" when blank.
VITE_OIDC_SCOPE=openid profile email

ui/.gitignore (vendored) — 24 lines

@@ -1,24 +0,0 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?


@@ -1,23 +0,0 @@
# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder
WORKDIR /app
RUN corepack enable
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
# Vite bakes VITE_* env vars into the bundle at build time.
ARG VITE_OIDC_AUTHORITY=
ARG VITE_OIDC_CLIENT_ID=
ARG VITE_OIDC_SCOPE=openid profile email
ENV VITE_OIDC_AUTHORITY=$VITE_OIDC_AUTHORITY \
VITE_OIDC_CLIENT_ID=$VITE_OIDC_CLIENT_ID \
VITE_OIDC_SCOPE=$VITE_OIDC_SCOPE
RUN pnpm build
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html/v2
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80


@@ -1,87 +0,0 @@
# Reflector UI (v2)
Vite + React 19 + TypeScript SPA, served at `/v2` behind Caddy. Lives alongside the existing Next.js app in `../www` while the migration is in progress.
## Stack
- **Vite** + **React 19** + **TypeScript**
- **Tailwind v4** with Greyhaven design tokens (`src/styles/greyhaven.css`)
- **React Router v7**, routes mounted under `/v2/*`
- **TanStack Query** + **openapi-fetch** with types generated from the backend OpenAPI spec
- **nuqs** for URL-backed page/search state on `/browse`
- **react-oidc-context** (OIDC Authorization Code + PKCE) for the JWT auth backend
- Password-form fallback for the `password` auth backend (`POST /v1/auth/login`)
## Local development
```bash
cd ui
pnpm install
# Point the dev server at your local backend (defaults to http://localhost:1250).
cp .env.example .env
# Edit VITE_OIDC_AUTHORITY / VITE_OIDC_CLIENT_ID if your backend runs in JWT mode.
pnpm dev # http://localhost:3001/v2/
pnpm build # production bundle in dist/
pnpm typecheck # tsc --noEmit
pnpm openapi # regenerate src/api/schema.d.ts from the running backend
```
`pnpm openapi` hits `http://127.0.0.1:1250/openapi.json` — start the backend first (`cd ../server && uv run -m reflector.app --reload`).
## Auth modes
The SPA auto-detects which auth backend the server is configured with:
- **JWT (OIDC/SSO via Authentik):** set `VITE_OIDC_AUTHORITY` and `VITE_OIDC_CLIENT_ID`. The app does the Authorization Code + PKCE flow; Authentik hosts the login page. Register a **Public** OAuth client whose redirect URI is `https://<your-domain>/v2/auth/callback`. No client secret is baked into the bundle.
- **Password:** leave the OIDC env vars blank. The app shows an in-page email/password form that posts to `/v1/auth/login` and stores the returned JWT in `sessionStorage`.
- **None:** backend returns a fake user for every request; the SPA treats that as authenticated.
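The detection rules above amount to a tiny pure decision; here is a minimal sketch, assuming the only client-side signal is whether both OIDC build-time vars are non-blank (`detectAuthMode` and the `env` shape are hypothetical, not the app's actual code — the real SPA reads `import.meta.env` at build time):

```typescript
// Illustrative sketch of the client-side mode choice described above.
type AuthMode = "oidc" | "password";

function detectAuthMode(env: {
  VITE_OIDC_AUTHORITY?: string;
  VITE_OIDC_CLIENT_ID?: string;
}): AuthMode {
  // Both OIDC values must be set and non-blank to use the
  // Authorization Code + PKCE flow; otherwise fall back to the
  // in-page password form.
  const authority = env.VITE_OIDC_AUTHORITY?.trim();
  const clientId = env.VITE_OIDC_CLIENT_ID?.trim();
  return authority && clientId ? "oidc" : "password";
}
```

The `none` backend needs no client-side branch: the API authenticates every request anyway, so whichever form the SPA renders still works.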
## Deployment (selfhosted)
`docker-compose.selfhosted.yml` defines a `ui` service that builds this directory and serves the static bundle from nginx on port 80. Caddy routes `/v2/*` to `ui:80` and leaves the root path on the existing `web` service.
Pass OIDC config as build args (Vite inlines `VITE_*` at build time):
```bash
VITE_OIDC_AUTHORITY=https://auth.example/application/o/reflector/ \
VITE_OIDC_CLIENT_ID=reflector-ui \
docker compose -f docker-compose.selfhosted.yml build ui
docker compose -f docker-compose.selfhosted.yml up -d ui
```
## Pages shipped in this pass
- `/` — Home / Create new transcript (single-form shipping variant)
- `/browse` — transcript list with FTS search, source/room/trash filters, pagination
- `/rooms` — rooms list, create, edit, delete
- `/welcome` — logged-out landing (OIDC mode)
- `/login` — password login form (password mode)
- `/auth/callback` — OIDC redirect target
Not yet ported:
- Transcript detail / playback
- Meeting / live join
- Settings, API keys
- Tags sidebar (backend model doesn't exist yet)
- Progress streaming over WebSocket
## Directory map
```
src/
api/ fetch client, generated OpenAPI types
auth/ AuthProvider, RequireAuth, OIDC config
components/
browse/ TranscriptRow, FilterBar, Pagination
home/ LanguagePair, RoomPicker
icons.tsx lucide-react wrapper (compat with prototype I.* shape)
layout/ AppShell, AppSidebar, TopBar
rooms/ RoomsTable, RoomFormDialog, DeleteRoomDialog
ui/ primitives (Button, StatusDot, StatusBadge, SidebarItem, …)
hooks/ useRooms, useTranscripts
lib/ utils, format helpers, types
pages/ HomePage, BrowsePage, RoomsPage, LoggedOut, LoginForm, AuthCallback
styles/ greyhaven.css, reflector-forms.css, index.css (Tailwind entry)
```


@@ -1,23 +0,0 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import tseslint from 'typescript-eslint'
import { defineConfig, globalIgnores } from 'eslint/config'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{ts,tsx}'],
extends: [
js.configs.recommended,
tseslint.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
},
},
])


@@ -1,22 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/x-icon" href="./favicon.ico" />
<link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png" />
<link rel="icon" type="image/png" sizes="16x16" href="./favicon-16x16.png" />
<link rel="apple-touch-icon" sizes="180x180" href="./apple-touch-icon.png" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Reflector</title>
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link
rel="stylesheet"
href="https://fonts.googleapis.com/css2?family=Source+Serif+4:ital,opsz,wght@0,8..60,400;0,8..60,500;0,8..60,600;0,8..60,700;1,8..60,400&display=swap"
/>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>


@@ -1,27 +0,0 @@
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Without the trailing slash, redirect so relative asset paths resolve.
location = /v2 {
return 301 /v2/;
}
# React Router SPA under /v2 — fall back to index.html for client routes.
location /v2/ {
try_files $uri $uri/ /v2/index.html;
}
# Root convenience redirect to the SPA entry.
location = / {
return 302 /v2/;
}
# Long-cache hashed assets emitted by Vite.
location ~* /v2/assets/ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}


@@ -1,62 +0,0 @@
{
"name": "ui",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc -b && vite build",
"lint": "eslint .",
"preview": "vite preview",
"openapi": "openapi-typescript http://localhost:1250/openapi.json -o ./src/api/schema.d.ts",
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@hookform/resolvers": "^5.2.2",
"@radix-ui/react-checkbox": "^1.3.3",
"@radix-ui/react-dialog": "^1.1.15",
"@radix-ui/react-dropdown-menu": "^2.1.16",
"@radix-ui/react-label": "^2.1.8",
"@radix-ui/react-popover": "^1.1.15",
"@radix-ui/react-select": "^2.2.6",
"@radix-ui/react-separator": "^1.1.8",
"@radix-ui/react-slot": "^1.2.4",
"@radix-ui/react-switch": "^1.2.6",
"@radix-ui/react-tabs": "^1.1.13",
"@radix-ui/react-tooltip": "^1.2.8",
"@tailwindcss/vite": "^4.2.4",
"@tanstack/react-query": "^5.99.2",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"date-fns": "^4.1.0",
"lucide-react": "^1.8.0",
"nuqs": "^2.8.9",
"oidc-client-ts": "^3.5.0",
"openapi-fetch": "^0.17.0",
"openapi-react-query": "^0.5.4",
"react": "^19.2.5",
"react-dom": "^19.2.5",
"react-hook-form": "^7.73.1",
"react-oidc-context": "^3.3.1",
"react-router-dom": "^7.14.2",
"sonner": "^2.0.7",
"tailwind-merge": "^3.5.0",
"tailwindcss": "^4.2.4",
"zod": "^4.3.6"
},
"devDependencies": {
"@eslint/js": "^9.39.4",
"@types/node": "^24.12.2",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"@vitejs/plugin-react": "^6.0.1",
"eslint": "^9.39.4",
"eslint-plugin-react-hooks": "^7.1.1",
"eslint-plugin-react-refresh": "^0.5.2",
"globals": "^17.5.0",
"openapi-typescript": "^7.13.0",
"typescript": "~6.0.2",
"typescript-eslint": "^8.58.2",
"vite": "^8.0.9"
}
}

ui/pnpm-lock.yaml (generated) — 3536 lines; diff suppressed because it is too large.

Four binary image files deleted (not shown): 4.9 KiB, 678 B, 1.0 KiB, and 15 KiB.

@@ -1,24 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg">
<symbol id="bluesky-icon" viewBox="0 0 16 17">
<g clip-path="url(#bluesky-clip)"><path fill="#08060d" d="M7.75 7.735c-.693-1.348-2.58-3.86-4.334-5.097-1.68-1.187-2.32-.981-2.74-.79C.188 2.065.1 2.812.1 3.251s.241 3.602.398 4.13c.52 1.744 2.367 2.333 4.07 2.145-2.495.37-4.71 1.278-1.805 4.512 3.196 3.309 4.38-.71 4.987-2.746.608 2.036 1.307 5.91 4.93 2.746 2.72-2.746.747-4.143-1.747-4.512 1.702.189 3.55-.4 4.07-2.145.156-.528.397-3.691.397-4.13s-.088-1.186-.575-1.406c-.42-.19-1.06-.395-2.741.79-1.755 1.24-3.64 3.752-4.334 5.099"/></g>
<defs><clipPath id="bluesky-clip"><path fill="#fff" d="M.1.85h15.3v15.3H.1z"/></clipPath></defs>
</symbol>
<symbol id="discord-icon" viewBox="0 0 20 19">
<path fill="#08060d" d="M16.224 3.768a14.5 14.5 0 0 0-3.67-1.153c-.158.286-.343.67-.47.976a13.5 13.5 0 0 0-4.067 0c-.128-.306-.317-.69-.476-.976A14.4 14.4 0 0 0 3.868 3.77C1.546 7.28.916 10.703 1.231 14.077a14.7 14.7 0 0 0 4.5 2.306q.545-.748.965-1.587a9.5 9.5 0 0 1-1.518-.74q.191-.14.372-.293c2.927 1.369 6.107 1.369 8.999 0q.183.152.372.294-.723.437-1.52.74.418.838.963 1.588a14.6 14.6 0 0 0 4.504-2.308c.37-3.911-.63-7.302-2.644-10.309m-9.13 8.234c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.894 0 1.614.82 1.599 1.82.001 1-.705 1.82-1.6 1.82m5.91 0c-.878 0-1.599-.82-1.599-1.82 0-.998.705-1.82 1.6-1.82.893 0 1.614.82 1.599 1.82 0 1-.706 1.82-1.6 1.82"/>
</symbol>
<symbol id="documentation-icon" viewBox="0 0 21 20">
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="m15.5 13.333 1.533 1.322c.645.555.967.833.967 1.178s-.322.623-.967 1.179L15.5 18.333m-3.333-5-1.534 1.322c-.644.555-.966.833-.966 1.178s.322.623.966 1.179l1.534 1.321"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M17.167 10.836v-4.32c0-1.41 0-2.117-.224-2.68-.359-.906-1.118-1.621-2.08-1.96-.599-.21-1.349-.21-2.848-.21-2.623 0-3.935 0-4.983.369-1.684.591-3.013 1.842-3.641 3.428C3 6.449 3 7.684 3 10.154v2.122c0 2.558 0 3.838.706 4.726q.306.383.713.671c.76.536 1.79.64 3.581.66"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M3 10a2.78 2.78 0 0 1 2.778-2.778c.555 0 1.209.097 1.748-.047.48-.129.854-.503.982-.982.145-.54.048-1.194.048-1.749a2.78 2.78 0 0 1 2.777-2.777"/>
</symbol>
<symbol id="github-icon" viewBox="0 0 19 19">
<path fill="#08060d" fill-rule="evenodd" d="M9.356 1.85C5.05 1.85 1.57 5.356 1.57 9.694a7.84 7.84 0 0 0 5.324 7.44c.387.079.528-.168.528-.376 0-.182-.013-.805-.013-1.454-2.165.467-2.616-.935-2.616-.935-.349-.91-.864-1.143-.864-1.143-.71-.48.051-.48.051-.48.787.051 1.2.805 1.2.805.695 1.194 1.817.857 2.268.649.064-.507.27-.857.49-1.052-1.728-.182-3.545-.857-3.545-3.87 0-.857.31-1.558.8-2.104-.078-.195-.349-1 .077-2.078 0 0 .657-.208 2.14.805a7.5 7.5 0 0 1 1.946-.26c.657 0 1.328.092 1.946.26 1.483-1.013 2.14-.805 2.14-.805.426 1.078.155 1.883.078 2.078.502.546.799 1.247.799 2.104 0 3.013-1.818 3.675-3.558 3.87.284.247.528.714.528 1.454 0 1.052-.012 1.896-.012 2.156 0 .208.142.455.528.377a7.84 7.84 0 0 0 5.324-7.441c.013-4.338-3.48-7.844-7.773-7.844" clip-rule="evenodd"/>
</symbol>
<symbol id="social-icon" viewBox="0 0 20 20">
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M12.5 6.667a4.167 4.167 0 1 0-8.334 0 4.167 4.167 0 0 0 8.334 0"/>
<path fill="none" stroke="#aa3bff" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.35" d="M2.5 16.667a5.833 5.833 0 0 1 8.75-5.053m3.837.474.513 1.035c.07.144.257.282.414.309l.93.155c.596.1.736.536.307.965l-.723.73a.64.64 0 0 0-.152.531l.207.903c.164.715-.213.991-.84.618l-.872-.52a.63.63 0 0 0-.577 0l-.872.52c-.624.373-1.003.094-.84-.618l.207-.903a.64.64 0 0 0-.152-.532l-.723-.729c-.426-.43-.289-.864.306-.964l.93-.156a.64.64 0 0 0 .412-.31l.513-1.034c.28-.562.735-.562 1.012 0"/>
</symbol>
<symbol id="x-icon" viewBox="0 0 19 19">
<path fill="#08060d" fill-rule="evenodd" d="M1.893 1.98c.052.072 1.245 1.769 2.653 3.77l2.892 4.114c.183.261.333.48.333.486s-.068.089-.152.183l-.522.593-.765.867-3.597 4.087c-.375.426-.734.834-.798.905a1 1 0 0 0-.118.148c0 .01.236.017.664.017h.663l.729-.83c.4-.457.796-.906.879-.999a692 692 0 0 0 1.794-2.038c.034-.037.301-.34.594-.675l.551-.624.345-.392a7 7 0 0 1 .34-.374c.006 0 .93 1.306 2.052 2.903l2.084 2.965.045.063h2.275c1.87 0 2.273-.003 2.266-.021-.008-.02-1.098-1.572-3.894-5.547-2.013-2.862-2.28-3.246-2.273-3.266.008-.019.282-.332 2.085-2.38l2-2.274 1.567-1.782c.022-.028-.016-.03-.65-.03h-.674l-.3.342a871 871 0 0 1-1.782 2.025c-.067.075-.405.458-.75.852a100 100 0 0 1-.803.91c-.148.172-.299.344-.99 1.127-.304.343-.32.358-.345.327-.015-.019-.904-1.282-1.976-2.808L6.365 1.85H1.8zm1.782.91 8.078 11.294c.772 1.08 1.413 1.973 1.425 1.984.016.017.241.02 1.05.017l1.03-.004-2.694-3.766L7.796 5.75 5.722 2.852l-1.039-.004-1.039-.004z" clip-rule="evenodd"/>
</symbol>
</svg>



@@ -1,47 +0,0 @@
#!/usr/bin/env bash
# Diagnoses why the raw domain (https://reflector.local/) isn't loading.
# Usage: ./ui/scripts/debug-root.sh [host]
set +e
HOST="${1:-reflector.local}"
COMPOSE="docker compose -f docker-compose.selfhosted.yml"
echo "============================================================"
echo " 1. Container status (web + caddy)"
echo "============================================================"
$COMPOSE ps web caddy 2>&1 | head -10
echo
echo "============================================================"
echo " 2. HTTPS probe to https://$HOST/"
echo "============================================================"
curl -skv "https://$HOST/" 2>&1 | head -60
echo
echo "============================================================"
echo " 3. Body snippet"
echo "============================================================"
curl -sk "https://$HOST/" 2>&1 | head -30
echo
echo "============================================================"
echo " 4. Direct web:3000 probe from inside caddy"
echo "============================================================"
$COMPOSE exec -T caddy wget -qO- --server-response http://web:3000/ 2>&1 | head -30
echo
echo "============================================================"
echo " 5. NextAuth URL / relevant web env (from inside web)"
echo "============================================================"
$COMPOSE exec -T web printenv 2>&1 | grep -E 'NEXTAUTH|NEXT_PUBLIC|SERVER_API_URL' | head -10
echo
echo "============================================================"
echo " 6. web container logs (last 40 lines)"
echo "============================================================"
$COMPOSE logs --tail=40 web 2>&1 | tail -40
echo
echo "============================================================"
echo " 7. caddy recent errors to the web upstream (last 10)"
echo "============================================================"
$COMPOSE logs --tail=200 caddy 2>&1 | grep -Ei 'error|web:3000|dial tcp' | tail -10


@@ -1,63 +0,0 @@
#!/usr/bin/env bash
# Diagnoses why reflector.local/v2/ isn't serving the SPA.
# Usage: ./ui/scripts/debug-v2.sh [host] (default host: reflector.local)
set +e
HOST="${1:-reflector.local}"
COMPOSE="docker compose -f docker-compose.selfhosted.yml"
echo "============================================================"
echo " 1. Container status"
echo "============================================================"
$COMPOSE ps ui caddy web 2>&1 | head -20
echo
echo "============================================================"
echo " 2. Live Caddyfile inside the caddy container"
echo "============================================================"
$COMPOSE exec -T caddy cat /etc/caddy/Caddyfile 2>&1 | sed -n '/handle \/v2\|handle {/{p;n;p;n;p;}' | head -20
echo "--- full handle blocks (first 40 lines) ---"
$COMPOSE exec -T caddy cat /etc/caddy/Caddyfile 2>&1 | grep -nE 'handle|reverse_proxy|tls' | head -40
echo
echo "============================================================"
echo " 3. nginx config inside the ui container"
echo "============================================================"
$COMPOSE exec -T ui cat /etc/nginx/conf.d/default.conf 2>&1
echo
echo "============================================================"
echo " 4. dist contents inside the ui container"
echo "============================================================"
$COMPOSE exec -T ui ls -la /usr/share/nginx/html/v2/ 2>&1 | head -20
echo
echo "============================================================"
echo " 5. Direct nginx probe (bypass Caddy) — container -> container"
echo "============================================================"
echo "--- GET http://ui/v2/ from inside caddy ---"
$COMPOSE exec -T caddy wget -qO- --server-response http://ui/v2/ 2>&1 | head -40
echo
echo "--- GET http://ui/v2 (no slash) from inside caddy ---"
$COMPOSE exec -T caddy wget -qO- --server-response http://ui/v2 2>&1 | head -20
echo
echo "============================================================"
echo " 6. Caddy probe from host"
echo "============================================================"
echo "--- GET https://$HOST/v2/ ---"
curl -sk -o /dev/null -D - "https://$HOST/v2/" 2>&1 | head -20
echo
echo "--- GET https://$HOST/v2 (no slash) ---"
curl -sk -o /dev/null -D - "https://$HOST/v2" 2>&1 | head -20
echo
echo "--- body of https://$HOST/v2/ (first 30 lines) ---"
curl -sk "https://$HOST/v2/" 2>&1 | head -30
echo
echo "============================================================"
echo " 7. Recent ui + caddy logs"
echo "============================================================"
echo "--- ui (last 30 lines) ---"
$COMPOSE logs --tail=30 ui 2>&1 | tail -30
echo "--- caddy (last 30 lines) ---"
$COMPOSE logs --tail=30 caddy 2>&1 | tail -30


@@ -1,74 +0,0 @@
import { BrowserRouter, Navigate, Route, Routes, useParams } from 'react-router-dom'
import { QueryClientProvider } from '@tanstack/react-query'
import { NuqsAdapter } from 'nuqs/adapters/react-router/v7'
import { Toaster } from 'sonner'
import { queryClient } from '@/api/queryClient'
import { AuthProvider } from '@/auth/AuthProvider'
import { RequireAuth } from '@/auth/RequireAuth'
import { BrowsePage } from '@/pages/BrowsePage'
import { RoomsPage } from '@/pages/RoomsPage'
import { TranscriptPage } from '@/pages/TranscriptPage'
import { LoggedOutPage } from '@/pages/LoggedOut'
import { LoginForm } from '@/pages/LoginForm'
import { AuthCallbackPage } from '@/pages/AuthCallback'
function TranscriptRedirect() {
const { id } = useParams()
return <Navigate to={`/transcripts/${id}`} replace />
}
export default function App() {
return (
<QueryClientProvider client={queryClient}>
<BrowserRouter basename="/v2">
<NuqsAdapter>
<AuthProvider>
<Routes>
<Route path="/login" element={<LoginForm />} />
<Route path="/welcome" element={<LoggedOutPage />} />
<Route path="/auth/callback" element={<AuthCallbackPage />} />
<Route path="/auth/silent-renew" element={<AuthCallbackPage />} />
<Route path="/" element={<Navigate to="/browse" replace />} />
<Route
path="/browse"
element={
<RequireAuth>
<BrowsePage />
</RequireAuth>
}
/>
<Route
path="/rooms"
element={
<RequireAuth>
<RoomsPage />
</RequireAuth>
}
/>
<Route
path="/transcripts/:id"
element={
<RequireAuth>
<TranscriptPage />
</RequireAuth>
}
/>
<Route path="/transcript/:id" element={<TranscriptRedirect />} />
<Route path="*" element={<Navigate to="/browse" replace />} />
</Routes>
<Toaster
position="top-right"
toastOptions={{
style: {
background: 'var(--card)',
color: 'var(--fg)',
border: '1px solid var(--border)',
},
}}
/>
</AuthProvider>
</NuqsAdapter>
</BrowserRouter>
</QueryClientProvider>
)
}


@@ -1,32 +0,0 @@
import createClient, { type Middleware } from 'openapi-fetch'
import createQueryClient from 'openapi-react-query'
import type { paths } from './schema'
export const PASSWORD_TOKEN_KEY = 'reflector.password_token'
let oidcAccessTokenGetter: (() => string | null) | null = null
export function setOidcAccessTokenGetter(getter: (() => string | null) | null) {
oidcAccessTokenGetter = getter
}
export function setPasswordToken(token: string | null) {
if (token) sessionStorage.setItem(PASSWORD_TOKEN_KEY, token)
else sessionStorage.removeItem(PASSWORD_TOKEN_KEY)
}
export function getPasswordToken() {
return sessionStorage.getItem(PASSWORD_TOKEN_KEY)
}
const authMiddleware: Middleware = {
async onRequest({ request }) {
const token = oidcAccessTokenGetter?.() ?? getPasswordToken()
if (token) request.headers.set('Authorization', `Bearer ${token}`)
return request
},
}
export const apiClient = createClient<paths>({ baseUrl: '/' })
apiClient.use(authMiddleware)
export const $api = createQueryClient(apiClient)


@@ -1,15 +0,0 @@
import { QueryClient } from '@tanstack/react-query'
export const queryClient = new QueryClient({
defaultOptions: {
queries: {
staleTime: 15_000,
retry: (failureCount, error) => {
const status = (error as { status?: number } | null)?.status
if (status === 401 || status === 403 || status === 404) return false
return failureCount < 2
},
refetchOnWindowFocus: false,
},
},
})

ui/src/api/schema.d.ts (vendored) — 4556 lines

File diff suppressed because it is too large

@@ -1,28 +0,0 @@
import { createContext, useContext } from 'react'
export type AuthMode = 'oidc' | 'password'
export type AuthUser = {
email?: string | null
name?: string | null
sub?: string | null
} | null
export type AuthContextValue = {
mode: AuthMode
loading: boolean
authenticated: boolean
user: AuthUser
error: Error | null
loginWithOidc: () => void
loginWithPassword: (email: string, password: string) => Promise<void>
logout: () => Promise<void>
}
export const AuthContext = createContext<AuthContextValue | null>(null)
export function useAuth(): AuthContextValue {
const value = useContext(AuthContext)
if (!value) throw new Error('useAuth must be used inside AuthProvider')
return value
}


@@ -1,129 +0,0 @@
import { useCallback, useEffect, useMemo, useState, type ReactNode } from 'react'
import {
AuthProvider as OidcAuthProvider,
useAuth as useOidcAuth,
} from 'react-oidc-context'
import { useQuery, useQueryClient } from '@tanstack/react-query'
import { apiClient, getPasswordToken, setPasswordToken, setOidcAccessTokenGetter } from '@/api/client'
import { AuthContext, type AuthContextValue, type AuthUser } from './AuthContext'
import { buildOidcConfig, oidcEnabled } from './oidcConfig'
function useMeQuery(tokenKey: string | null | undefined) {
return useQuery<AuthUser>({
queryKey: ['auth', 'me', tokenKey ?? 'anon'],
enabled: !!tokenKey,
queryFn: async () => {
const { data, error, response } = await apiClient.GET('/v1/me')
if (error || !response.ok) {
if (response.status === 401) return null
throw Object.assign(new Error('me request failed'), { status: response.status })
}
return (data ?? null) as AuthUser
},
staleTime: 60_000,
})
}
function PasswordAuthProvider({ children }: { children: ReactNode }) {
const queryClient = useQueryClient()
const [token, setToken] = useState<string | null>(() => getPasswordToken())
const meQuery = useMeQuery(token)
const loginWithPassword = useCallback(
async (email: string, password: string) => {
const res = await fetch('/v1/auth/login', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email, password }),
})
if (!res.ok) {
const detail = await res
.json()
.then((j: { detail?: string }) => j?.detail)
.catch(() => null)
throw new Error(detail ?? 'Invalid credentials')
}
const json = (await res.json()) as { access_token: string }
setPasswordToken(json.access_token)
setToken(json.access_token)
await queryClient.invalidateQueries({ queryKey: ['auth', 'me'] })
},
[queryClient],
)
const logout = useCallback(async () => {
setPasswordToken(null)
setToken(null)
await queryClient.invalidateQueries({ queryKey: ['auth', 'me'] })
}, [queryClient])
const loginWithOidc = useCallback(() => {
console.warn('OIDC login not configured; use loginWithPassword')
}, [])
const value = useMemo<AuthContextValue>(
() => ({
mode: 'password',
loading: meQuery.isLoading,
authenticated: !!token && meQuery.data != null,
user: meQuery.data ?? null,
error: (meQuery.error as Error | null) ?? null,
loginWithOidc,
loginWithPassword,
logout,
}),
[token, meQuery.isLoading, meQuery.data, meQuery.error, loginWithOidc, loginWithPassword, logout],
)
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>
}
function OidcAuthBridge({ children }: { children: ReactNode }) {
const oidc = useOidcAuth()
const queryClient = useQueryClient()
const accessToken = oidc.user?.access_token ?? null
useEffect(() => {
setOidcAccessTokenGetter(() => accessToken)
return () => setOidcAccessTokenGetter(null)
}, [accessToken])
const meQuery = useMeQuery(accessToken)
const loginWithOidc = useCallback(() => oidc.signinRedirect(), [oidc])
const loginWithPassword = useCallback(async () => {
throw new Error('Password login is disabled in OIDC mode')
}, [])
const logout = useCallback(async () => {
await oidc.signoutRedirect().catch(() => oidc.removeUser())
await queryClient.invalidateQueries({ queryKey: ['auth', 'me'] })
}, [oidc, queryClient])
const value = useMemo<AuthContextValue>(
() => ({
mode: 'oidc',
loading: oidc.isLoading || meQuery.isLoading,
authenticated: !!accessToken && meQuery.data != null,
user: meQuery.data ?? null,
error: (oidc.error ?? (meQuery.error as Error | null)) ?? null,
loginWithOidc,
loginWithPassword,
logout,
}),
[oidc.isLoading, oidc.error, accessToken, meQuery.isLoading, meQuery.data, meQuery.error, loginWithOidc, loginWithPassword, logout],
)
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>
}
export function AuthProvider({ children }: { children: ReactNode }) {
const config = buildOidcConfig()
if (!config || !oidcEnabled) {
return <PasswordAuthProvider>{children}</PasswordAuthProvider>
}
return (
<OidcAuthProvider {...config}>
<OidcAuthBridge>{children}</OidcAuthBridge>
</OidcAuthProvider>
)
}


@@ -1,30 +0,0 @@
import { type ReactNode } from 'react'
import { Navigate, useLocation } from 'react-router-dom'
import { useAuth } from './AuthContext'
export function RequireAuth({ children }: { children: ReactNode }) {
const { loading, authenticated } = useAuth()
const location = useLocation()
if (loading) {
return (
<div
style={{
height: '100vh',
display: 'grid',
placeItems: 'center',
color: 'var(--fg-muted)',
fontFamily: 'var(--font-sans)',
}}
>
Loading
</div>
)
}
if (!authenticated) {
return <Navigate to="/welcome" state={{ from: location.pathname }} replace />
}
return <>{children}</>
}


@@ -1,27 +0,0 @@
import type { AuthProviderProps } from 'react-oidc-context'
import { WebStorageStateStore } from 'oidc-client-ts'
import { env, oidcEnabled } from '@/lib/env'
export { oidcEnabled }
export function buildOidcConfig(): AuthProviderProps | null {
if (!oidcEnabled) return null
const redirectUri = `${window.location.origin}/v2/auth/callback`
const silentRedirectUri = `${window.location.origin}/v2/auth/silent-renew`
const postLogoutRedirectUri = `${window.location.origin}/v2/`
return {
authority: env.oidcAuthority,
client_id: env.oidcClientId,
redirect_uri: redirectUri,
silent_redirect_uri: silentRedirectUri,
post_logout_redirect_uri: postLogoutRedirectUri,
scope: env.oidcScope,
response_type: 'code',
loadUserInfo: true,
automaticSilentRenew: true,
userStore: new WebStorageStateStore({ store: window.sessionStorage }),
onSigninCallback: () => {
window.history.replaceState({}, document.title, '/v2/')
},
}
}


@@ -1,130 +0,0 @@
import { useEffect, type ReactNode } from 'react'
import { I } from '@/components/icons'
import { Button } from '@/components/ui/primitives'
type Props = {
title: string
message: ReactNode
confirmLabel: string
cancelLabel?: string
danger?: boolean
loading?: boolean
onConfirm: () => void
onClose: () => void
}
export function ConfirmDialog({
title,
message,
confirmLabel,
cancelLabel = 'Cancel',
danger,
loading,
onConfirm,
onClose,
}: Props) {
useEffect(() => {
const k = (e: KeyboardEvent) => {
if (e.key === 'Escape' && !loading) onClose()
}
document.addEventListener('keydown', k)
return () => document.removeEventListener('keydown', k)
}, [onClose, loading])
return (
<>
<div className="rf-modal-backdrop" onClick={() => !loading && onClose()} />
<div
className="rf-modal"
role="dialog"
aria-modal="true"
style={{ width: 'min(440px, calc(100vw - 32px))' }}
>
<div style={{ padding: 24, display: 'flex', flexDirection: 'column', gap: 12 }}>
<div style={{ display: 'flex', alignItems: 'flex-start', gap: 12 }}>
<div
style={{
flexShrink: 0,
width: 36,
height: 36,
borderRadius: 10,
background: danger
? 'color-mix(in srgb, var(--destructive) 12%, transparent)'
: 'var(--muted)',
color: danger ? 'var(--destructive)' : 'var(--fg-muted)',
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
}}
>
{I.Trash(18)}
</div>
<div style={{ flex: 1, minWidth: 0 }}>
<h2
style={{
margin: 0,
fontFamily: 'var(--font-serif)',
fontSize: 18,
fontWeight: 600,
color: 'var(--fg)',
}}
>
{title}
</h2>
<div
style={{
margin: '6px 0 0',
fontSize: 13,
color: 'var(--fg-muted)',
lineHeight: 1.5,
fontFamily: 'var(--font-sans)',
}}
>
{message}
</div>
</div>
</div>
</div>
<footer
style={{
padding: '14px 20px',
borderTop: '1px solid var(--border)',
display: 'flex',
gap: 10,
justifyContent: 'flex-end',
}}
>
<Button
type="button"
variant="ghost"
size="md"
onClick={onClose}
disabled={loading}
style={{ color: 'var(--fg)', fontWeight: 600 }}
>
{cancelLabel}
</Button>
<Button
type="button"
variant={danger ? 'danger' : 'primary'}
size="md"
onClick={onConfirm}
disabled={loading}
style={
danger
? {
background: 'var(--destructive)',
color: 'var(--destructive-fg)',
borderColor: 'var(--destructive)',
boxShadow: 'var(--shadow-xs)',
}
: undefined
}
>
{loading ? 'Working…' : confirmLabel}
</Button>
</footer>
</div>
</>
)
}


@@ -1,127 +0,0 @@
import { I } from '@/components/icons'
import type { RoomRowData, SidebarFilter, TagRowData } from '@/lib/types'
type SortKey = 'newest' | 'oldest' | 'longest'
type FilterBarProps = {
filter: SidebarFilter
rooms: RoomRowData[]
tags: TagRowData[]
total: number
sort: SortKey
onSort: (s: SortKey) => void
query: string
onSearch: (v: string) => void
}
export function FilterBar({
filter,
rooms,
tags,
total,
sort,
onSort,
query,
onSearch,
}: FilterBarProps) {
let label = 'All transcripts'
if (filter.kind === 'source' && filter.value === 'live') label = 'Live transcripts'
if (filter.kind === 'source' && filter.value === 'file') label = 'Uploaded files'
if (filter.kind === 'room') {
const r = rooms.find((x) => x.id === filter.value)
label = r ? `Room · ${r.name}` : 'Room'
}
if (filter.kind === 'tag') {
const t = tags.find((x) => x.id === filter.value)
label = t ? `Tagged · #${t.name}` : 'Tag'
}
if (filter.kind === 'trash') label = 'Trash'
if (filter.kind === 'recent') label = 'Recent (last 7 days)'
return (
<div
style={{
display: 'flex',
alignItems: 'center',
gap: 14,
padding: '10px 20px',
borderBottom: '1px solid var(--border)',
background: 'var(--card)',
fontFamily: 'var(--font-sans)',
fontSize: 12,
}}
>
<span style={{ color: 'var(--fg)', fontWeight: 600 }}>{label}</span>
<span
style={{
fontFamily: 'var(--font-mono)',
fontSize: 11,
color: 'var(--fg-muted)',
}}
>
{total} {total === 1 ? 'result' : 'results'}
</span>
<div
style={{
marginLeft: 12,
display: 'inline-flex',
alignItems: 'center',
gap: 8,
height: 30,
padding: '0 10px',
background: 'var(--bg)',
border: '1px solid var(--border)',
borderRadius: 'var(--radius-md)',
width: 320,
maxWidth: '40%',
}}
>
<span style={{ color: 'var(--fg-muted)', display: 'inline-flex' }}>{I.Search(13)}</span>
<input
value={query || ''}
onChange={(e) => onSearch(e.target.value)}
placeholder="Search transcripts, speakers, rooms…"
style={{
border: 'none',
outline: 'none',
background: 'transparent',
fontFamily: 'var(--font-sans)',
fontSize: 12.5,
color: 'var(--fg)',
flex: 1,
}}
/>
        <span className="rf-kbd">⌘K</span>
</div>
<div style={{ flex: 1 }} />
<span
style={{
color: 'var(--fg-muted)',
fontSize: 11,
fontFamily: 'var(--font-mono)',
}}
>
sort
</span>
{(['newest', 'oldest', 'longest'] as const).map((s) => (
<button
key={s}
onClick={() => onSort(s)}
style={{
border: 'none',
padding: '3px 8px',
fontFamily: 'var(--font-sans)',
fontSize: 12,
cursor: 'pointer',
color: sort === s ? 'var(--fg)' : 'var(--fg-muted)',
fontWeight: sort === s ? 600 : 500,
borderRadius: 'var(--radius-sm)',
background: sort === s ? 'var(--muted)' : 'transparent',
}}
>
{s}
</button>
))}
</div>
)
}


@@ -1,74 +0,0 @@
import { I } from '@/components/icons'
import { Button } from '@/components/ui/primitives'
type Props = {
page: number
total: number
pageSize: number
onPage: (n: number) => void
}
export function Pagination({ page, total, pageSize, onPage }: Props) {
const totalPages = Math.max(1, Math.ceil(total / pageSize))
if (totalPages <= 1) return null
const start = (page - 1) * pageSize + 1
const end = Math.min(total, page * pageSize)
const pages = Array.from({ length: totalPages }, (_, i) => i + 1)
return (
<div
style={{
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
padding: '14px 20px',
borderTop: '1px solid var(--border)',
background: 'var(--card)',
fontFamily: 'var(--font-sans)',
fontSize: 12,
}}
>
<span style={{ color: 'var(--fg-muted)', fontFamily: 'var(--font-mono)' }}>
        {start}–{end} of {total}
</span>
<div style={{ display: 'flex', alignItems: 'center', gap: 4 }}>
<Button
variant="outline"
size="sm"
disabled={page === 1}
onClick={() => onPage(page - 1)}
>
{I.ChevronLeft(14)}
</Button>
{pages.map((n) => (
<button
key={n}
onClick={() => onPage(n)}
style={{
width: 30,
height: 30,
borderRadius: 'var(--radius-md)',
cursor: 'pointer',
border: '1px solid',
borderColor: n === page ? 'var(--primary)' : 'var(--border)',
background: n === page ? 'var(--primary)' : 'var(--card)',
color: n === page ? 'var(--primary-fg)' : 'var(--fg)',
fontFamily: 'var(--font-sans)',
fontSize: 12,
fontWeight: 500,
}}
>
{n}
</button>
))}
<Button
variant="outline"
size="sm"
disabled={page === totalPages}
onClick={() => onPage(page + 1)}
>
{I.ChevronRight(14)}
</Button>
</div>
</div>
)
}


@@ -1,308 +0,0 @@
import { type ReactNode } from 'react'
import { I } from '@/components/icons'
import { RowMenuTrigger } from '@/components/ui/primitives'
import { fmtDate, fmtDur } from '@/lib/format'
import type { TranscriptRowData } from '@/lib/types'
type Props = {
t: TranscriptRowData
active?: boolean
onSelect?: (id: string) => void
query?: string
density?: 'compact' | 'comfortable'
onDelete?: (t: TranscriptRowData) => void
onReprocess?: (id: string) => void
}
type ApiStatus = 'recording' | 'ended' | 'processing' | 'uploaded' | 'error' | 'idle'
const STATUS_MAP: Record<string, ApiStatus> = {
live: 'recording',
ended: 'ended',
processing: 'processing',
uploading: 'uploaded',
failed: 'error',
idle: 'idle',
}
function statusIconFor(apiStatus: ApiStatus): { node: ReactNode; color: string } {
switch (apiStatus) {
case 'recording':
return { node: I.Radio(14), color: 'var(--status-live)' }
case 'processing':
return {
node: (
<span
style={{
width: 12,
height: 12,
borderRadius: 9999,
display: 'inline-block',
border: '2px solid color-mix(in oklch, var(--status-processing) 25%, transparent)',
borderTopColor: 'var(--status-processing)',
animation: 'rfSpin 0.9s linear infinite',
}}
/>
),
color: 'var(--status-processing)',
}
case 'uploaded':
return { node: I.Clock(14), color: 'var(--fg-muted)' }
case 'error':
return { node: I.AlertTriangle(14), color: 'var(--destructive)' }
case 'ended':
return { node: I.CheckCircle(14), color: 'var(--status-ok)' }
default:
return { node: I.Clock(14), color: 'var(--fg-muted)' }
}
}
function buildRowMenu(
t: TranscriptRowData,
onDelete?: (t: TranscriptRowData) => void,
onReprocess?: (id: string) => void,
) {
const apiStatus = STATUS_MAP[t.status] ?? 'idle'
const canReprocess = apiStatus === 'ended' || apiStatus === 'error'
return [
{ label: 'Open', icon: I.ExternalLink(14) },
{ label: 'Rename', icon: I.Edit(14) },
{ separator: true as const },
{
label: 'Reprocess',
icon: I.Refresh(14),
disabled: !canReprocess,
onClick: () => onReprocess?.(t.id),
},
{ separator: true as const },
{
label: 'Delete',
icon: I.Trash(14),
danger: true,
onClick: () => onDelete?.(t),
},
]
}
function Highlight({ text, query }: { text: string; query?: string }) {
if (!query || !text) return <>{text}</>
const i = text.toLowerCase().indexOf(query.toLowerCase())
if (i < 0) return <>{text}</>
return (
<>
{text.slice(0, i)}
<mark
style={{
background: 'var(--reflector-accent-tint2)',
color: 'var(--fg)',
padding: '0 2px',
borderRadius: 2,
}}
>
{text.slice(i, i + query.length)}
</mark>
{text.slice(i + query.length)}
</>
)
}
export function TranscriptRow({
t,
active,
onSelect,
query,
density = 'comfortable',
onDelete,
onReprocess,
}: Props) {
const compact = density === 'compact'
const vpad = compact ? 10 : 14
const apiStatus = STATUS_MAP[t.status] ?? 'idle'
const statusIcon = statusIconFor(apiStatus)
const sourceLabel = t.source === 'room' ? t.room || 'room' : t.source
const isError = apiStatus === 'error'
const errorMsg = isError ? t.error_message || t.error || 'Processing failed — reason unavailable' : null
const snippet = query && t.snippet ? t.snippet : null
const matchCount = query && t.snippet ? 1 : 0
const [srcLang, tgtLang] = (t.lang || '').includes('→')
? (t.lang as string).split('→').map((s) => s.trim())
: [t.lang, null]
return (
<div
className="rf-row"
data-active={active ? 'true' : undefined}
onClick={() => onSelect?.(t.id)}
style={{
display: 'grid',
gridTemplateColumns: 'auto 1fr auto auto',
alignItems: 'center',
columnGap: 14,
padding: `${vpad}px 20px`,
borderBottom: '1px solid var(--border)',
cursor: 'pointer',
position: 'relative',
}}
>
{active && (
<span
style={{
position: 'absolute',
left: 0,
top: 6,
bottom: 6,
width: 2,
background: 'var(--primary)',
borderRadius: 2,
}}
/>
)}
<span style={{ color: statusIcon.color, display: 'inline-flex' }}>{statusIcon.node}</span>
<div
style={{
minWidth: 0,
display: 'flex',
flexDirection: 'column',
gap: compact ? 2 : 4,
}}
>
<span
style={{
fontFamily: 'var(--font-serif)',
fontSize: compact ? 14 : 15,
fontWeight: 600,
color: 'var(--fg)',
letterSpacing: '-0.005em',
whiteSpace: 'nowrap',
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
>
<Highlight text={t.title || 'Unnamed transcript'} query={query} />
</span>
<div
style={{
display: 'flex',
alignItems: 'center',
flexWrap: 'wrap',
rowGap: 2,
columnGap: 0,
fontSize: 11.5,
color: 'var(--fg-muted)',
fontFamily: 'var(--font-sans)',
}}
>
<span>{sourceLabel}</span>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ fontFamily: 'var(--font-mono)', fontSize: 11 }}>{fmtDate(t.date)}</span>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ fontFamily: 'var(--font-mono)', fontSize: 11 }}>{fmtDur(t.duration)}</span>
{t.speakers > 0 && (
<>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ display: 'inline-flex', alignItems: 'center', gap: 4 }}>
{I.Users(11)} {t.speakers} {t.speakers === 1 ? 'speaker' : 'speakers'}
</span>
</>
)}
{srcLang && (
<>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span
style={{
display: 'inline-flex',
alignItems: 'center',
gap: 4,
color: tgtLang ? 'var(--primary)' : 'var(--fg-muted)',
}}
>
{I.Globe(11)}
<span
style={{
fontFamily: 'var(--font-mono)',
fontSize: 10.5,
textTransform: 'uppercase',
}}
>
{srcLang}
                  {tgtLang && <> → {tgtLang}</>}
</span>
</span>
</>
)}
</div>
{errorMsg && (
<div
style={{
marginTop: 4,
padding: '6px 10px',
fontSize: 11.5,
lineHeight: 1.45,
fontFamily: 'var(--font-sans)',
color: 'var(--destructive)',
background: 'color-mix(in oklch, var(--destructive) 8%, transparent)',
border: '1px solid color-mix(in oklch, var(--destructive) 20%, transparent)',
borderRadius: 'var(--radius-sm)',
display: 'flex',
alignItems: 'flex-start',
gap: 6,
}}
>
<span style={{ marginTop: 1, flexShrink: 0 }}>{I.AlertTriangle(11)}</span>
<span style={{ minWidth: 0 }}>{errorMsg}</span>
</div>
)}
{snippet && (
<div
style={{
marginTop: 4,
padding: '6px 10px',
fontSize: 12,
fontFamily: 'var(--font-serif)',
fontStyle: 'italic',
color: 'var(--fg-muted)',
lineHeight: 1.5,
background: 'var(--muted)',
borderLeft: '2px solid var(--primary)',
borderRadius: '0 var(--radius-sm) var(--radius-sm) 0',
}}
>
<Highlight text={snippet} query={query} />
</div>
)}
</div>
<span>
{matchCount > 0 && (
<span
style={{
display: 'inline-flex',
alignItems: 'center',
padding: '1px 8px',
height: 18,
fontFamily: 'var(--font-mono)',
fontSize: 10.5,
fontWeight: 600,
color: 'var(--primary)',
background: 'var(--reflector-accent-tint)',
border: '1px solid var(--reflector-accent-tint2)',
borderRadius: 9999,
}}
>
{matchCount} match
</span>
)}
</span>
<RowMenuTrigger items={buildRowMenu(t, onDelete, onReprocess)} />
</div>
)
}


@@ -1,101 +0,0 @@
import { I } from '@/components/icons'
import { RowMenuTrigger } from '@/components/ui/primitives'
import { fmtDate, fmtDur } from '@/lib/format'
import type { TranscriptRowData } from '@/lib/types'
type Props = {
t: TranscriptRowData
onRestore?: (id: string) => void
onDestroy?: (t: TranscriptRowData) => void
}
export function TrashRow({ t, onRestore, onDestroy }: Props) {
const sourceLabel = t.source === 'room' ? t.room || 'room' : t.source
return (
<div
className="rf-row"
style={{
display: 'grid',
gridTemplateColumns: 'auto 1fr auto',
alignItems: 'center',
columnGap: 14,
padding: '14px 20px',
borderBottom: '1px solid var(--border)',
cursor: 'default',
position: 'relative',
opacity: 0.78,
background:
'repeating-linear-gradient(45deg, transparent 0 12px, color-mix(in oklch, var(--muted) 40%, transparent) 12px 13px)',
}}
>
<span style={{ color: 'var(--fg-muted)', display: 'inline-flex' }}>{I.Trash(14)}</span>
<div style={{ minWidth: 0, display: 'flex', flexDirection: 'column', gap: 4 }}>
<span
style={{
fontFamily: 'var(--font-serif)',
fontSize: 15,
fontWeight: 500,
color: 'var(--fg-muted)',
letterSpacing: '-0.005em',
textDecoration: 'line-through',
textDecorationColor: 'color-mix(in oklch, var(--fg-muted) 50%, transparent)',
textDecorationThickness: '1px',
whiteSpace: 'nowrap',
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
>
{t.title || 'Unnamed transcript'}
</span>
<div
style={{
display: 'flex',
alignItems: 'center',
flexWrap: 'wrap',
rowGap: 2,
columnGap: 0,
fontSize: 11.5,
color: 'var(--fg-muted)',
fontFamily: 'var(--font-sans)',
}}
>
<span>{sourceLabel}</span>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ fontFamily: 'var(--font-mono)', fontSize: 11 }}>{fmtDate(t.date)}</span>
{t.duration > 0 && (
<>
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ fontFamily: 'var(--font-mono)', fontSize: 11 }}>
{fmtDur(t.duration)}
</span>
</>
)}
<span style={{ margin: '0 8px', color: 'var(--gh-grey-3)' }}>·</span>
<span style={{ display: 'inline-flex', alignItems: 'center', gap: 4 }}>
{I.Trash(11)} Deleted
</span>
</div>
</div>
<RowMenuTrigger
label="Trash options"
items={[
{
label: 'Restore',
icon: I.Undo(14),
onClick: () => onRestore?.(t.id),
},
{ separator: true },
{
label: 'Destroy permanently',
icon: I.Trash(14),
danger: true,
onClick: () => onDestroy?.(t),
},
]}
/>
</div>
)
}


@@ -1,97 +0,0 @@
import { I } from '@/components/icons'
import { REFLECTOR_LANGS } from '@/lib/types'
type Props = {
sourceLang: string
setSourceLang: (v: string) => void
targetLang: string
setTargetLang: (v: string) => void
horizontal?: boolean
}
export function LanguagePair({
sourceLang,
setSourceLang,
targetLang,
setTargetLang,
horizontal,
}: Props) {
return (
<div
style={{
display: 'grid',
gridTemplateColumns: horizontal ? '1fr auto 1fr' : '1fr',
gap: horizontal ? 8 : 14,
alignItems: 'end',
}}
>
<div>
<label className="rf-label" htmlFor="rf-source-lang">
{I.Mic(13)} Spoken language
</label>
<select
id="rf-source-lang"
className="rf-select"
value={sourceLang}
onChange={(e) => setSourceLang(e.target.value)}
style={{ marginTop: 6 }}
>
{REFLECTOR_LANGS.map((l) => (
<option key={l.code} value={l.code}>
{l.flag} {l.name}
</option>
))}
</select>
<div className="rf-hint">Detected from the audio if set to Auto.</div>
</div>
{horizontal && (
<button
type="button"
onClick={() => {
const a = sourceLang
setSourceLang(targetLang)
setTargetLang(a)
}}
title="Swap languages"
style={{
height: 40,
width: 40,
marginBottom: 18,
border: '1px solid var(--border)',
borderRadius: 'var(--radius-md)',
background: 'var(--muted)',
cursor: 'pointer',
color: 'var(--fg-muted)',
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
}}
>
{I.Swap(16)}
</button>
)}
<div>
<label className="rf-label" htmlFor="rf-target-lang">
{I.Globe(13)} Translate to
</label>
<select
id="rf-target-lang"
className="rf-select"
value={targetLang}
onChange={(e) => setTargetLang(e.target.value)}
style={{ marginTop: 6 }}
>
<option value=""> None (same as spoken) </option>
{REFLECTOR_LANGS.filter((l) => l.code !== 'auto').map((l) => (
<option key={l.code} value={l.code}>
{l.flag} {l.name}
</option>
))}
</select>
<div className="rf-hint">Leave blank to skip translation.</div>
</div>
</div>
)
}


@@ -1,33 +0,0 @@
import { I } from '@/components/icons'
import type { RoomRowData } from '@/lib/types'
type Props = {
roomId: string
setRoomId: (v: string) => void
rooms: RoomRowData[]
}
export function RoomPicker({ roomId, setRoomId, rooms }: Props) {
return (
<div>
<label className="rf-label" htmlFor="rf-room">
{I.Folder(13)} Attach to room{' '}
<span style={{ color: 'var(--fg-muted)', fontWeight: 400 }}> optional</span>
</label>
<select
id="rf-room"
className="rf-select"
value={roomId}
onChange={(e) => setRoomId(e.target.value)}
style={{ marginTop: 6 }}
>
<option value=""> None </option>
{rooms.map((r) => (
<option key={r.id} value={r.id}>
{r.name}
</option>
))}
</select>
</div>
)
}


@@ -1,162 +0,0 @@
import {
AlertTriangle,
ArrowRight,
ArrowUpDown,
Bell,
Calendar,
Check,
CheckCircle2,
ChevronDown,
ChevronLeft,
ChevronRight,
Clock,
Cloud,
Copy,
DoorClosed,
DoorOpen,
Download,
Edit,
ExternalLink,
File,
FileAudio,
Filter,
Folder,
Globe,
History,
Inbox,
Info,
Link as LinkIcon,
Loader,
Lock,
Mic,
Moon,
MoreHorizontal,
Plus,
Radio,
RefreshCw,
RotateCcw,
Search,
Settings,
Share2,
Shield,
Sparkles,
Sun,
Tag,
Trash2,
Undo,
Upload,
User,
Users,
Waves,
X,
} from 'lucide-react'
export {
AlertTriangle,
ArrowRight,
Bell,
Calendar,
Check,
CheckCircle2,
ChevronDown,
ChevronLeft,
ChevronRight,
Clock,
Cloud,
Copy,
Download,
Edit,
ExternalLink,
File,
FileAudio,
Filter,
Folder,
Globe,
History,
Inbox,
Info,
Loader,
Lock,
Mic,
Moon,
Plus,
Radio,
RefreshCw,
RotateCcw,
Search,
Settings,
Shield,
Sparkles,
Sun,
Tag,
Undo,
Upload,
User,
Users,
Waves,
X,
}
export { DoorClosed as Door }
export { DoorOpen }
export { Trash2 as Trash }
export { MoreHorizontal as More }
export { Share2 as Share }
export { ArrowUpDown as Swap }
export { LinkIcon as Link }
export { X as Close }
const make = (Icon: typeof Mic) => (size = 16) => <Icon size={size} strokeWidth={1.75} />
export const I = {
Inbox: make(Inbox),
Mic: make(Mic),
Upload: make(Upload),
Radio: make(Radio),
Door: make(DoorClosed),
Folder: make(Folder),
Trash: make(Trash2),
Tag: make(Tag),
Users: make(Users),
Search: make(Search),
Plus: make(Plus),
Bell: make(Bell),
Settings: make(Settings),
Close: make(X),
Download: make(Download),
Share: make(Share2),
More: make(MoreHorizontal),
Globe: make(Globe),
Clock: make(Clock),
CheckCircle: make(CheckCircle2),
AlertTriangle: make(AlertTriangle),
Loader: make(Loader),
ChevronDown: make(ChevronDown),
ChevronLeft: make(ChevronLeft),
ChevronRight: make(ChevronRight),
Sparkle: make(Sparkles),
Waves: make(Waves),
Filter: make(Filter),
Undo: make(Undo),
Edit: make(Edit),
Refresh: make(RefreshCw),
ExternalLink: make(ExternalLink),
RotateCcw: make(RotateCcw),
X: make(X),
Info: make(Info),
Check: make(Check),
Moon: make(Moon),
Sun: make(Sun),
Lock: make(Lock),
Shield: make(Shield),
Swap: make(ArrowUpDown),
ArrowRight: make(ArrowRight),
History: make(History),
DoorOpen: make(DoorOpen),
FileAudio: make(FileAudio),
File: make(File),
Calendar: make(Calendar),
Link: make(LinkIcon),
Cloud: make(Cloud),
User: make(User),
Copy: make(Copy),
}


@@ -1,37 +0,0 @@
import { type ReactNode } from 'react'
import { TopBar } from './TopBar'
type AppShellProps = {
title: string
crumb?: string[]
sidebar?: ReactNode
children: ReactNode
}
export function AppShell({ title, crumb, sidebar, children }: AppShellProps) {
return (
<div
style={{
display: 'flex',
height: '100vh',
background: 'var(--bg)',
overflow: 'hidden',
}}
>
{sidebar}
<main style={{ flex: 1, display: 'flex', flexDirection: 'column', minWidth: 0 }}>
<TopBar title={title} crumb={crumb} />
<div
style={{
flex: 1,
overflowY: 'auto',
padding: 24,
background: 'var(--bg)',
}}
>
{children}
</div>
</main>
</div>
)
}


@@ -1,331 +0,0 @@
import type { CSSProperties } from 'react'
import { I } from '@/components/icons'
import { Button, SectionLabel, SidebarItem } from '@/components/ui/primitives'
import type { RoomRowData, SidebarFilter, TagRowData } from '@/lib/types'
import { BrandHeader, PrimaryNav, UserChip, sidebarAsideStyle } from './sidebarChrome'
import { useAuth } from '@/auth/AuthContext'
type AppSidebarProps = {
filter: SidebarFilter
onFilter: (filter: SidebarFilter) => void
rooms: RoomRowData[]
tags: TagRowData[]
showTags?: boolean
collapsed: boolean
onToggle: () => void
onNewRecording?: () => void
counts?: {
all?: number | null
liveTranscripts?: number | null
uploadedFiles?: number | null
trash?: number | null
}
}
export function AppSidebar({
filter,
onFilter,
rooms,
tags,
showTags = true,
collapsed,
onToggle,
onNewRecording,
counts,
}: AppSidebarProps) {
const { user } = useAuth()
const myRooms = rooms.filter((r) => !r.shared)
const sharedRooms = rooms.filter((r) => r.shared)
return (
<aside style={sidebarAsideStyle(collapsed) as CSSProperties}>
<BrandHeader collapsed={collapsed} onToggle={onToggle} />
{collapsed ? (
<CollapsedRail
filter={filter}
onFilter={onFilter}
onToggle={onToggle}
onNewRecording={onNewRecording}
/>
) : (
<ExpandedNav
filter={filter}
onFilter={onFilter}
myRooms={myRooms}
sharedRooms={sharedRooms}
tags={tags}
showTags={showTags}
onNewRecording={onNewRecording}
counts={counts}
/>
)}
{!collapsed && <UserChip user={user} />}
</aside>
)
}
type ExpandedNavProps = {
filter: SidebarFilter
onFilter: (filter: SidebarFilter) => void
myRooms: RoomRowData[]
sharedRooms: RoomRowData[]
tags: TagRowData[]
showTags?: boolean
onNewRecording?: () => void
counts?: AppSidebarProps['counts']
}
function ExpandedNav({
filter,
onFilter,
myRooms,
sharedRooms,
tags,
showTags = true,
onNewRecording,
counts,
}: ExpandedNavProps) {
const isActive = (kind: SidebarFilter['kind'], val: SidebarFilter['value'] = null) =>
filter.kind === kind && filter.value === val
return (
<>
<div style={{ padding: '14px 12px 6px' }}>
<Button
variant="primary"
size="md"
style={{ width: '100%', justifyContent: 'flex-start' }}
onClick={onNewRecording}
>
{I.Mic(14)} New recording
</Button>
</div>
<nav
style={{
flex: 1,
padding: '6px 10px 12px',
display: 'flex',
flexDirection: 'column',
gap: 14,
overflowY: 'auto',
}}
>
<PrimaryNav />
<div
style={{
height: 1,
background: 'var(--border)',
margin: '2px 6px',
}}
/>
<div style={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
<SidebarItem
icon={I.Inbox(15)}
label="All transcripts"
count={counts?.all ?? null}
active={isActive('all')}
onClick={() => onFilter({ kind: 'all', value: null })}
/>
<SidebarItem
icon={I.Sparkle(15)}
label="Recent"
active={isActive('recent')}
onClick={() => onFilter({ kind: 'recent', value: null })}
/>
</div>
<div>
<SectionLabel>Sources</SectionLabel>
<div style={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
<SidebarItem
icon={I.Radio(15)}
label="Live transcripts"
dot={
filter.kind === 'source' && filter.value === 'live'
? undefined
: 'var(--status-live)'
}
count={counts?.liveTranscripts ?? null}
active={isActive('source', 'live')}
onClick={() => onFilter({ kind: 'source', value: 'live' })}
/>
<SidebarItem
icon={I.Upload(15)}
label="Uploaded files"
count={counts?.uploadedFiles ?? null}
active={isActive('source', 'file')}
onClick={() => onFilter({ kind: 'source', value: 'file' })}
/>
</div>
</div>
{myRooms.length > 0 && (
<div>
<SectionLabel
action={
<span style={{ color: 'var(--fg-muted)', cursor: 'pointer', opacity: 0.6 }}>+</span>
}
>
My rooms
</SectionLabel>
<div style={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
{myRooms.map((r) => (
<SidebarItem
key={r.id}
icon={I.Door(15)}
label={r.name}
count={r.count}
active={isActive('room', r.id)}
onClick={() => onFilter({ kind: 'room', value: r.id })}
/>
))}
</div>
</div>
)}
{sharedRooms.length > 0 && (
<div>
<SectionLabel>Shared</SectionLabel>
<div style={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
{sharedRooms.map((r) => (
<SidebarItem
key={r.id}
icon={I.Users(14)}
label={r.name}
count={r.count}
active={isActive('room', r.id)}
onClick={() => onFilter({ kind: 'room', value: r.id })}
/>
))}
</div>
</div>
)}
{showTags && tags.length > 0 && (
<div>
<SectionLabel
action={
<span style={{ color: 'var(--fg-muted)', cursor: 'pointer', opacity: 0.6 }}>+</span>
}
>
Tags
</SectionLabel>
<div style={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
{tags.map((t) => (
<SidebarItem
key={t.id}
icon={I.Tag(14)}
label={t.name}
count={t.count}
active={isActive('tag', t.id)}
onClick={() => onFilter({ kind: 'tag', value: t.id })}
/>
))}
</div>
</div>
)}
<div style={{ marginTop: 'auto', borderTop: '1px solid var(--border)', paddingTop: 10 }}>
<SidebarItem
icon={I.Trash(15)}
label="Trash"
active={isActive('trash')}
onClick={() => onFilter({ kind: 'trash', value: null })}
count={counts?.trash ?? null}
/>
</div>
</nav>
</>
)
}
type CollapsedRailProps = {
filter: SidebarFilter
onFilter: (filter: SidebarFilter) => void
onToggle: () => void
onNewRecording?: () => void
}
function CollapsedRail({ filter, onFilter, onToggle, onNewRecording }: CollapsedRailProps) {
const items: Array<{
kind: SidebarFilter['kind']
value?: SidebarFilter['value']
icon: ReturnType<typeof I.Inbox>
title: string
}> = [
{ kind: 'all', icon: I.Inbox(18), title: 'All' },
{ kind: 'recent', icon: I.Sparkle(18), title: 'Recent' },
{ kind: 'source', value: 'live', icon: I.Radio(18), title: 'Live' },
{ kind: 'source', value: 'file', icon: I.Upload(18), title: 'Uploads' },
{ kind: 'trash', icon: I.Trash(18), title: 'Trash' },
]
return (
<nav
style={{
flex: 1,
padding: '10px 8px',
display: 'flex',
flexDirection: 'column',
gap: 4,
alignItems: 'center',
}}
>
<Button variant="primary" size="icon" title="New recording" onClick={onNewRecording}>
{I.Mic(16)}
</Button>
<div style={{ height: 10 }} />
{items.map((it, i) => {
const on = filter.kind === it.kind && (filter.value ?? null) === (it.value ?? null)
return (
<button
key={i}
title={it.title}
onClick={() =>
onFilter({ kind: it.kind, value: (it.value ?? null) as never } as SidebarFilter)
}
style={{
width: 40,
height: 40,
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
border: '1px solid',
borderColor: on ? 'var(--border)' : 'transparent',
borderRadius: 'var(--radius-md)',
background: on ? 'var(--card)' : 'transparent',
color: on ? 'var(--primary)' : 'var(--fg-muted)',
cursor: 'pointer',
boxShadow: on ? 'var(--shadow-xs)' : 'none',
}}
>
{it.icon}
</button>
)
})}
<div style={{ marginTop: 'auto' }}>
<button
onClick={onToggle}
title="Expand sidebar"
style={{
width: 40,
height: 40,
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
border: 'none',
background: 'transparent',
color: 'var(--fg-muted)',
cursor: 'pointer',
}}
>
{I.ChevronRight(16)}
</button>
</div>
</nav>
)
}


@@ -1,22 +0,0 @@
export function ReflectorMark({ size = 28 }: { size?: number }) {
return (
<svg
width={size}
height={size}
viewBox="0 0 500 500"
aria-hidden="true"
style={{ display: 'block', flexShrink: 0 }}
>
<polygon
points="227.5,51.5 86.5,150.1 100.8,383.9 244.3,249.8"
fill="var(--fg)"
opacity="0.82"
/>
<polygon
points="305.4,421.4 423.9,286 244.3,249.8 100.8,383.9"
fill="var(--fg)"
opacity="0.42"
/>
</svg>
)
}

Some files were not shown because too many files have changed in this diff.