Mirror of https://github.com/Monadical-SAS/reflector.git (synced 2025-12-21 20:59:05 +00:00)

Compare commits
12 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 1aa52a99b6 | |
| | 2a97290f2e | |
| | 7963cc8a52 | |
| | d12424848d | |
| | 6e765875d5 | |
| | e0f4acf28b | |
| | 12359ea4eb | |
| | 267b7401ea | |
| | aea9de393c | |
| | dc177af3ff | |
| | 5bd8233657 | |
| | 28ac031ff6 | |
.gitignore (vendored, 1 line changed)

@@ -13,3 +13,4 @@ restart-dev.sh
data/
www/REFACTOR.md
www/reload-frontend
server/test.sqlite
CHANGELOG.md (27 lines changed)

@@ -1,5 +1,32 @@
# Changelog

## [0.6.1](https://github.com/Monadical-SAS/reflector/compare/v0.6.0...v0.6.1) (2025-08-06)

### Bug Fixes

* delayed waveform loading ([#538](https://github.com/Monadical-SAS/reflector/issues/538)) ([ef64146](https://github.com/Monadical-SAS/reflector/commit/ef64146325d03f64dd9a1fe40234fb3e7e957ae2))

## [0.6.0](https://github.com/Monadical-SAS/reflector/compare/v0.5.0...v0.6.0) (2025-08-05)

### ⚠ BREAKING CHANGES

* Configuration keys have changed. Update your `.env` file (see the sketch below):
  - TRANSCRIPT_MODAL_API_KEY → TRANSCRIPT_API_KEY
  - LLM_MODAL_API_KEY → (removed, use TRANSCRIPT_API_KEY)
  - Add DIARIZATION_API_KEY and TRANSLATE_API_KEY if using those services
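For illustration, a hedged before/after sketch of the key migration described in the notes above; the key names come from those notes, while the values are placeholders, not real keys:

```
# v0.5.x server/.env (before)
TRANSCRIPT_MODAL_API_KEY=xxxxx   # renamed in 0.6.0
LLM_MODAL_API_KEY=xxxxx          # removed in 0.6.0

# v0.6.0 server/.env (after)
TRANSCRIPT_API_KEY=xxxxx         # replaces TRANSCRIPT_MODAL_API_KEY
DIARIZATION_API_KEY=xxxxx        # only if diarization is used
TRANSLATE_API_KEY=xxxxx          # only if translation is used
```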
### Features

* implement service-specific Modal API keys with auto processor pattern ([#528](https://github.com/Monadical-SAS/reflector/issues/528)) ([650befb](https://github.com/Monadical-SAS/reflector/commit/650befb291c47a1f49e94a01ab37d8fdfcd2b65d))
* use llamaindex everywhere ([#525](https://github.com/Monadical-SAS/reflector/issues/525)) ([3141d17](https://github.com/Monadical-SAS/reflector/commit/3141d172bc4d3b3d533370c8e6e351ea762169bf))

### Miscellaneous Chores

* **main:** release 0.6.0 ([ecdbf00](https://github.com/Monadical-SAS/reflector/commit/ecdbf003ea2476c3e95fd231adaeb852f2943df0))

## [0.5.0](https://github.com/Monadical-SAS/reflector/compare/v0.4.0...v0.5.0) (2025-07-31)
@@ -144,7 +144,9 @@ All endpoints prefixed `/v1/`:
**Backend** (`server/.env`):
- `DATABASE_URL` - Database connection string
- `REDIS_URL` - Redis broker for Celery
- `MODAL_TOKEN_ID`, `MODAL_TOKEN_SECRET` - Modal.com GPU processing
- `TRANSCRIPT_BACKEND=modal` + `TRANSCRIPT_MODAL_API_KEY` - Modal.com transcription
- `DIARIZATION_BACKEND=modal` + `DIARIZATION_MODAL_API_KEY` - Modal.com diarization
- `TRANSLATION_BACKEND=modal` + `TRANSLATION_MODAL_API_KEY` - Modal.com translation
- `WHEREBY_API_KEY` - Video platform integration
- `REFLECTOR_AUTH_BACKEND` - Authentication method (none, jwt)
@@ -24,7 +24,6 @@ AUTH_JWT_AUDIENCE=
## Using serverless modal.com (requires reflector-gpu-modal deployed)
#TRANSCRIPT_BACKEND=modal
#TRANSCRIPT_URL=https://xxxxx--reflector-transcriber-web.modal.run
#TRANSLATE_URL=https://xxxxx--reflector-translator-web.modal.run
#TRANSCRIPT_MODAL_API_KEY=xxxxx

TRANSCRIPT_BACKEND=modal
@@ -32,11 +31,13 @@ TRANSCRIPT_URL=https://monadical-sas--reflector-transcriber-web.modal.run
TRANSCRIPT_MODAL_API_KEY=

## =======================================================
## Transcription backend
## Translation backend
##
## Only available in modal atm
## =======================================================
TRANSLATION_BACKEND=modal
TRANSLATE_URL=https://monadical-sas--reflector-translator-web.modal.run
#TRANSLATION_MODAL_API_KEY=xxxxx

## =======================================================
## LLM backend
@@ -46,38 +47,11 @@ TRANSLATE_URL=https://monadical-sas--reflector-translator-web.modal.run
## llm backend implementation
## =======================================================

## Using serverless modal.com (requires reflector-gpu-modal deployed)
LLM_BACKEND=modal
LLM_URL=https://monadical-sas--reflector-llm-web.modal.run
LLM_MODAL_API_KEY=
ZEPHYR_LLM_URL=https://monadical-sas--reflector-llm-zephyr-web.modal.run

## Using OpenAI
#LLM_BACKEND=openai
#LLM_OPENAI_KEY=xxx
#LLM_OPENAI_MODEL=gpt-3.5-turbo

## Using GPT4ALL
#LLM_BACKEND=openai
#LLM_URL=http://localhost:4891/v1/completions
#LLM_OPENAI_MODEL="GPT4All Falcon"

## Default LLM MODEL NAME
#DEFAULT_LLM=lmsys/vicuna-13b-v1.5

## Cache directory to store models
CACHE_DIR=data

## =======================================================
## Summary LLM configuration
## =======================================================

## Context size for summary generation (tokens)
SUMMARY_LLM_CONTEXT_SIZE_TOKENS=16000
SUMMARY_LLM_URL=
SUMMARY_LLM_API_KEY=sk-
SUMMARY_MODEL=
# LLM_MODEL=microsoft/phi-4
LLM_CONTEXT_WINDOW=16000
LLM_URL=
LLM_API_KEY=sk-

## =======================================================
## Diarization
@@ -86,7 +60,9 @@ SUMMARY_MODEL=
## To allow diarization, you need to expose the files to be downloaded by the pipeline
## =======================================================
DIARIZATION_ENABLED=false
DIARIZATION_BACKEND=modal
DIARIZATION_URL=https://monadical-sas--reflector-diarizer-web.modal.run
#DIARIZATION_MODAL_API_KEY=xxxxx

## =======================================================
@@ -3,8 +3,9 @@
This repository holds an API for the GPU implementation of the Reflector API service,
and uses [Modal.com](https://modal.com)

- `reflector_llm.py` - LLM API
- `reflector_diarizer.py` - Diarization API
- `reflector_transcriber.py` - Transcription API
- `reflector_translator.py` - Translation API

## Modal.com deployment

@@ -23,16 +24,20 @@ $ modal deploy reflector_llm.py
└── 🔨 Created web => https://xxxx--reflector-llm-web.modal.run
```

Then in your reflector api configuration `.env`, you can set theses keys:
Then in your reflector api configuration `.env`, you can set these keys:

```
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://xxxx--reflector-transcriber-web.modal.run
TRANSCRIPT_MODAL_API_KEY=REFLECTOR_APIKEY

LLM_BACKEND=modal
LLM_URL=https://xxxx--reflector-llm-web.modal.run
LLM_MODAL_API_KEY=REFLECTOR_APIKEY
DIARIZATION_BACKEND=modal
DIARIZATION_URL=https://xxxx--reflector-diarizer-web.modal.run
DIARIZATION_MODAL_API_KEY=REFLECTOR_APIKEY

TRANSLATION_BACKEND=modal
TRANSLATION_URL=https://xxxx--reflector-translator-web.modal.run
TRANSLATION_MODAL_API_KEY=REFLECTOR_APIKEY
```

## API
@@ -1,213 +0,0 @@
"""
Reflector GPU backend - LLM
===========================

"""

import json
import os
import threading
from typing import Optional

from modal import App, Image, Secret, asgi_app, enter, exit, method

# LLM
LLM_MODEL: str = "lmsys/vicuna-13b-v1.5"
LLM_LOW_CPU_MEM_USAGE: bool = True
LLM_TORCH_DTYPE: str = "bfloat16"
LLM_MAX_NEW_TOKENS: int = 300

IMAGE_MODEL_DIR = "/root/llm_models"

app = App(name="reflector-llm")


def download_llm():
    from huggingface_hub import snapshot_download

    print("Downloading LLM model")
    snapshot_download(LLM_MODEL, cache_dir=IMAGE_MODEL_DIR)
    print("LLM model downloaded")


def migrate_cache_llm():
    """
    XXX The cache for model files in Transformers v4.22.0 has been updated.
    Migrating your old cache. This is a one-time only operation. You can
    interrupt this and resume the migration later on by calling
    `transformers.utils.move_cache()`.
    """
    from transformers.utils.hub import move_cache

    print("Moving LLM cache")
    move_cache(cache_dir=IMAGE_MODEL_DIR, new_cache_dir=IMAGE_MODEL_DIR)
    print("LLM cache moved")


llm_image = (
    Image.debian_slim(python_version="3.10.8")
    .apt_install("git")
    .pip_install(
        "transformers",
        "torch",
        "sentencepiece",
        "protobuf",
        "jsonformer==0.12.0",
        "accelerate==0.21.0",
        "einops==0.6.1",
        "hf-transfer~=0.1",
        "huggingface_hub==0.16.4",
    )
    .env({"HF_HUB_ENABLE_HF_TRANSFER": "1"})
    .run_function(download_llm)
    .run_function(migrate_cache_llm)
)


@app.cls(
    gpu="A100",
    timeout=60 * 5,
    scaledown_window=60 * 5,
    allow_concurrent_inputs=15,
    image=llm_image,
)
class LLM:
    @enter()
    def enter(self):
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

        print("Instance llm model")
        model = AutoModelForCausalLM.from_pretrained(
            LLM_MODEL,
            torch_dtype=getattr(torch, LLM_TORCH_DTYPE),
            low_cpu_mem_usage=LLM_LOW_CPU_MEM_USAGE,
            cache_dir=IMAGE_MODEL_DIR,
            local_files_only=True,
        )

        # JSONFormer doesn't yet support generation configs
        print("Instance llm generation config")
        model.config.max_new_tokens = LLM_MAX_NEW_TOKENS

        # generation configuration
        gen_cfg = GenerationConfig.from_model_config(model.config)
        gen_cfg.max_new_tokens = LLM_MAX_NEW_TOKENS

        # load tokenizer
        print("Instance llm tokenizer")
        tokenizer = AutoTokenizer.from_pretrained(
            LLM_MODEL, cache_dir=IMAGE_MODEL_DIR, local_files_only=True
        )

        # move model to gpu
        print("Move llm model to GPU")
        model = model.cuda()

        print("Warmup llm done")
        self.model = model
        self.tokenizer = tokenizer
        self.gen_cfg = gen_cfg
        self.GenerationConfig = GenerationConfig

        self.lock = threading.Lock()

    @exit()
    def exit(self):
        print("Exit llm")

    @method()
    def generate(
        self, prompt: str, gen_schema: str | None, gen_cfg: str | None
    ) -> dict:
        """
        Perform a generation action using the LLM
        """
        print(f"Generate {prompt=}")
        if gen_cfg:
            gen_cfg = self.GenerationConfig.from_dict(json.loads(gen_cfg))
        else:
            gen_cfg = self.gen_cfg

        # If a gen_schema is given, conform to gen_schema
        with self.lock:
            if gen_schema:
                import jsonformer

                print(f"Schema {gen_schema=}")
                jsonformer_llm = jsonformer.Jsonformer(
                    model=self.model,
                    tokenizer=self.tokenizer,
                    json_schema=json.loads(gen_schema),
                    prompt=prompt,
                    max_string_token_length=gen_cfg.max_new_tokens,
                )
                response = jsonformer_llm()
            else:
                # If no gen_schema, perform prompt only generation

                # tokenize prompt
                input_ids = self.tokenizer.encode(prompt, return_tensors="pt").to(
                    self.model.device
                )
                output = self.model.generate(input_ids, generation_config=gen_cfg)

                # decode output
                response = self.tokenizer.decode(
                    output[0].cpu(), skip_special_tokens=True
                )
                response = response[len(prompt) :]
        print(f"Generated {response=}")
        return {"text": response}


# -------------------------------------------------------------------
# Web API
# -------------------------------------------------------------------


@app.function(
    scaledown_window=60 * 10,
    timeout=60 * 5,
    allow_concurrent_inputs=45,
    secrets=[
        Secret.from_name("reflector-gpu"),
    ],
)
@asgi_app()
def web():
    from fastapi import Depends, FastAPI, HTTPException, status
    from fastapi.security import OAuth2PasswordBearer
    from pydantic import BaseModel

    llmstub = LLM()

    app = FastAPI()
    oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

    def apikey_auth(apikey: str = Depends(oauth2_scheme)):
        if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Invalid API key",
                headers={"WWW-Authenticate": "Bearer"},
            )

    class LLMRequest(BaseModel):
        prompt: str
        gen_schema: Optional[dict] = None
        gen_cfg: Optional[dict] = None

    @app.post("/llm", dependencies=[Depends(apikey_auth)])
    def llm(
        req: LLMRequest,
    ):
        gen_schema = json.dumps(req.gen_schema) if req.gen_schema else None
        gen_cfg = json.dumps(req.gen_cfg) if req.gen_cfg else None
        func = llmstub.generate.spawn(
            prompt=req.prompt, gen_schema=gen_schema, gen_cfg=gen_cfg
        )
        result = func.get()
        return result

    return app
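For reference, a hedged sketch of how a client could call the `/llm` endpoint this deleted file exposed; the URL and key are the deployment placeholders from the README above, and the payload shape follows `LLMRequest`:

```python
import httpx

# Placeholder deployment URL and key, as in the README above (assumptions).
LLM_URL = "https://xxxx--reflector-llm-web.modal.run/llm"
API_KEY = "REFLECTOR_APIKEY"

payload = {
    "prompt": "### Human:\nSummarize: ...\n\n### Assistant:\n",
    # Optional JSON schema; jsonformer constrains the output to it.
    "gen_schema": {"type": "object", "properties": {"title": {"type": "string"}}},
}
resp = httpx.post(
    LLM_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["text"])
```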
@@ -1,219 +0,0 @@
"""
Reflector GPU backend - LLM
===========================

"""

import json
import os
import threading
from typing import Optional

from modal import App, Image, Secret, asgi_app, enter, exit, method

# LLM
LLM_MODEL: str = "HuggingFaceH4/zephyr-7b-alpha"
LLM_LOW_CPU_MEM_USAGE: bool = True
LLM_TORCH_DTYPE: str = "bfloat16"
LLM_MAX_NEW_TOKENS: int = 300

IMAGE_MODEL_DIR = "/root/llm_models/zephyr"

app = App(name="reflector-llm-zephyr")


def download_llm():
    from huggingface_hub import snapshot_download

    print("Downloading LLM model")
    snapshot_download(LLM_MODEL, cache_dir=IMAGE_MODEL_DIR)
    print("LLM model downloaded")


def migrate_cache_llm():
    """
    XXX The cache for model files in Transformers v4.22.0 has been updated.
    Migrating your old cache. This is a one-time only operation. You can
    interrupt this and resume the migration later on by calling
    `transformers.utils.move_cache()`.
    """
    from transformers.utils.hub import move_cache

    print("Moving LLM cache")
    move_cache(cache_dir=IMAGE_MODEL_DIR, new_cache_dir=IMAGE_MODEL_DIR)
    print("LLM cache moved")


llm_image = (
    Image.debian_slim(python_version="3.10.8")
    .apt_install("git")
    .pip_install(
        "transformers==4.34.0",
        "torch",
        "sentencepiece",
        "protobuf",
        "jsonformer==0.12.0",
        "accelerate==0.21.0",
        "einops==0.6.1",
        "hf-transfer~=0.1",
        "huggingface_hub==0.16.4",
    )
    .env({"HF_HUB_ENABLE_HF_TRANSFER": "1"})
    .run_function(download_llm)
    .run_function(migrate_cache_llm)
)


@app.cls(
    gpu="A10G",
    timeout=60 * 5,
    scaledown_window=60 * 5,
    allow_concurrent_inputs=10,
    image=llm_image,
)
class LLM:
    @enter()
    def enter(self):
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

        print("Instance llm model")
        model = AutoModelForCausalLM.from_pretrained(
            LLM_MODEL,
            torch_dtype=getattr(torch, LLM_TORCH_DTYPE),
            low_cpu_mem_usage=LLM_LOW_CPU_MEM_USAGE,
            cache_dir=IMAGE_MODEL_DIR,
            local_files_only=True,
        )

        # JSONFormer doesn't yet support generation configs
        print("Instance llm generation config")
        model.config.max_new_tokens = LLM_MAX_NEW_TOKENS

        # generation configuration
        gen_cfg = GenerationConfig.from_model_config(model.config)
        gen_cfg.max_new_tokens = LLM_MAX_NEW_TOKENS

        # load tokenizer
        print("Instance llm tokenizer")
        tokenizer = AutoTokenizer.from_pretrained(
            LLM_MODEL, cache_dir=IMAGE_MODEL_DIR, local_files_only=True
        )
        gen_cfg.pad_token_id = tokenizer.eos_token_id
        gen_cfg.eos_token_id = tokenizer.eos_token_id
        tokenizer.pad_token = tokenizer.eos_token
        model.config.pad_token_id = tokenizer.eos_token_id

        # move model to gpu
        print("Move llm model to GPU")
        model = model.cuda()

        print("Warmup llm done")
        self.model = model
        self.tokenizer = tokenizer
        self.gen_cfg = gen_cfg
        self.GenerationConfig = GenerationConfig
        self.lock = threading.Lock()

    @exit()
    def exit(self):
        print("Exit llm")

    @method()
    def generate(
        self, prompt: str, gen_schema: str | None, gen_cfg: str | None
    ) -> dict:
        """
        Perform a generation action using the LLM
        """
        print(f"Generate {prompt=}")
        if gen_cfg:
            gen_cfg = self.GenerationConfig.from_dict(json.loads(gen_cfg))
            gen_cfg.pad_token_id = self.tokenizer.eos_token_id
            gen_cfg.eos_token_id = self.tokenizer.eos_token_id
        else:
            gen_cfg = self.gen_cfg

        # If a gen_schema is given, conform to gen_schema
        with self.lock:
            if gen_schema:
                import jsonformer

                print(f"Schema {gen_schema=}")
                jsonformer_llm = jsonformer.Jsonformer(
                    model=self.model,
                    tokenizer=self.tokenizer,
                    json_schema=json.loads(gen_schema),
                    prompt=prompt,
                    max_string_token_length=gen_cfg.max_new_tokens,
                )
                response = jsonformer_llm()
            else:
                # If no gen_schema, perform prompt only generation

                # tokenize prompt
                input_ids = self.tokenizer.encode(prompt, return_tensors="pt").to(
                    self.model.device
                )
                output = self.model.generate(input_ids, generation_config=gen_cfg)

                # decode output
                response = self.tokenizer.decode(
                    output[0].cpu(), skip_special_tokens=True
                )
                response = response[len(prompt) :]
                response = {"long_summary": response}
        print(f"Generated {response=}")
        return {"text": response}


# -------------------------------------------------------------------
# Web API
# -------------------------------------------------------------------


@app.function(
    scaledown_window=60 * 10,
    timeout=60 * 5,
    allow_concurrent_inputs=30,
    secrets=[
        Secret.from_name("reflector-gpu"),
    ],
)
@asgi_app()
def web():
    from fastapi import Depends, FastAPI, HTTPException, status
    from fastapi.security import OAuth2PasswordBearer
    from pydantic import BaseModel

    llmstub = LLM()

    app = FastAPI()
    oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

    def apikey_auth(apikey: str = Depends(oauth2_scheme)):
        if apikey != os.environ["REFLECTOR_GPU_APIKEY"]:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Invalid API key",
                headers={"WWW-Authenticate": "Bearer"},
            )

    class LLMRequest(BaseModel):
        prompt: str
        gen_schema: Optional[dict] = None
        gen_cfg: Optional[dict] = None

    @app.post("/llm", dependencies=[Depends(apikey_auth)])
    def llm(
        req: LLMRequest,
    ):
        gen_schema = json.dumps(req.gen_schema) if req.gen_schema else None
        gen_cfg = json.dumps(req.gen_cfg) if req.gen_cfg else None
        func = llmstub.generate.spawn(
            prompt=req.prompt, gen_schema=gen_schema, gen_cfg=gen_cfg
        )
        result = func.get()
        return result

    return app
@@ -34,12 +34,12 @@ dependencies = [
    "python-multipart>=0.0.6",
    "faster-whisper>=0.10.0",
    "transformers>=4.36.2",
    "black==24.1.1",
    "jsonschema>=4.23.0",
    "openai>=1.59.7",
    "psycopg2-binary>=2.9.10",
    "llama-index>=0.12.52",
    "llama-index-llms-openai-like>=0.4.0",
    "pytest-env>=1.1.5",
]

[dependency-groups]

@@ -83,6 +83,10 @@ packages = ["reflector"]
[tool.coverage.run]
source = ["reflector"]

[tool.pytest_env]
ENVIRONMENT = "pytest"
DATABASE_URL = "sqlite:///test.sqlite"

[tool.pytest.ini_options]
addopts = "-ra -q --disable-pytest-warnings --cov --cov-report html -v"
testpaths = ["tests"]
server/reflector/llm.py (new file, 83 lines)

@@ -0,0 +1,83 @@
from typing import Type, TypeVar

from llama_index.core import Settings
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import LLMTextCompletionProgram
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.llms.openai_like import OpenAILike
from pydantic import BaseModel

T = TypeVar("T", bound=BaseModel)

STRUCTURED_RESPONSE_PROMPT_TEMPLATE = """
Based on the following analysis, provide the information in the requested JSON format:

Analysis:
{analysis}

{format_instructions}
"""


class LLM:
    def __init__(self, settings, temperature: float = 0.4, max_tokens: int = 2048):
        self.settings_obj = settings
        self.model_name = settings.LLM_MODEL
        self.url = settings.LLM_URL
        self.api_key = settings.LLM_API_KEY
        self.context_window = settings.LLM_CONTEXT_WINDOW
        self.temperature = temperature
        self.max_tokens = max_tokens

        # Configure llamaindex Settings
        self._configure_llamaindex()

    def _configure_llamaindex(self):
        """Configure llamaindex Settings with OpenAILike LLM"""
        Settings.llm = OpenAILike(
            model=self.model_name,
            api_base=self.url,
            api_key=self.api_key,
            context_window=self.context_window,
            is_chat_model=True,
            is_function_calling_model=False,
            temperature=self.temperature,
            max_tokens=self.max_tokens,
        )

    async def get_response(
        self, prompt: str, texts: list[str], tone_name: str | None = None
    ) -> str:
        """Get a text response using TreeSummarize for non-function-calling models"""
        summarizer = TreeSummarize(verbose=False)
        response = await summarizer.aget_response(prompt, texts, tone_name=tone_name)
        return str(response).strip()

    async def get_structured_response(
        self,
        prompt: str,
        texts: list[str],
        output_cls: Type[T],
        tone_name: str | None = None,
    ) -> T:
        """Get structured output from LLM for non-function-calling models"""
        summarizer = TreeSummarize(verbose=True)
        response = await summarizer.aget_response(prompt, texts, tone_name=tone_name)

        output_parser = PydanticOutputParser(output_cls)

        program = LLMTextCompletionProgram.from_defaults(
            output_parser=output_parser,
            prompt_template_str=STRUCTURED_RESPONSE_PROMPT_TEMPLATE,
            verbose=False,
        )

        format_instructions = output_parser.format(
            "Please structure the above information in the following JSON format:"
        )

        output = await program.acall(
            analysis=str(response), format_instructions=format_instructions
        )

        return output
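A hedged usage sketch for the new `LLM` wrapper above; the `Topic` model and transcript string are illustrative assumptions, not part of the diff:

```python
import asyncio

from pydantic import BaseModel

from reflector.llm import LLM
from reflector.settings import settings  # assumed to expose LLM_MODEL, LLM_URL, ...


class Topic(BaseModel):
    # Hypothetical output model for illustration.
    title: str
    summary: str


async def main():
    llm = LLM(settings)
    topic = await llm.get_structured_response(
        "Extract a title and a two-sentence summary.",
        ["...transcript text..."],  # placeholder
        Topic,
    )
    print(topic.title, topic.summary)


asyncio.run(main())
```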
@@ -1,2 +0,0 @@
from .base import LLM  # noqa: F401
from .llm_params import LLMTaskParams  # noqa: F401
@@ -1,347 +0,0 @@
import importlib
import json
import re
from typing import TypeVar

import nltk
from prometheus_client import Counter, Histogram
from transformers import GenerationConfig

from reflector.llm.llm_params import TaskParams
from reflector.logger import logger as reflector_logger
from reflector.settings import settings
from reflector.utils.retry import retry

T = TypeVar("T", bound="LLM")


class LLM:
    _nltk_downloaded = False
    _registry = {}
    model_name: str
    m_generate = Histogram(
        "llm_generate",
        "Time spent in LLM.generate",
        ["backend"],
    )
    m_generate_call = Counter(
        "llm_generate_call",
        "Number of calls to LLM.generate",
        ["backend"],
    )
    m_generate_success = Counter(
        "llm_generate_success",
        "Number of successful calls to LLM.generate",
        ["backend"],
    )
    m_generate_failure = Counter(
        "llm_generate_failure",
        "Number of failed calls to LLM.generate",
        ["backend"],
    )

    @classmethod
    def ensure_nltk(cls):
        """
        Make sure NLTK package is installed. Searches in the cache and
        downloads only if needed.
        """
        if not cls._nltk_downloaded:
            nltk.download("punkt_tab")
            # For POS tagging
            nltk.download("averaged_perceptron_tagger_eng")
            cls._nltk_downloaded = True

    @classmethod
    def register(cls, name, klass):
        cls._registry[name] = klass

    @classmethod
    def get_instance(cls, model_name: str | None = None, name: str = None) -> T:
        """
        Return an instance depending on the settings.
        Settings used:

        - `LLM_BACKEND`: key of the backend
        - `LLM_URL`: url of the backend
        """
        if name is None:
            name = settings.LLM_BACKEND
        if name not in cls._registry:
            module_name = f"reflector.llm.llm_{name}"
            importlib.import_module(module_name)
        cls.ensure_nltk()

        return cls._registry[name](model_name)

    def get_model_name(self) -> str:
        """
        Get the currently set model name
        """
        return self._get_model_name()

    def _get_model_name(self) -> str:
        pass

    def set_model_name(self, model_name: str) -> bool:
        """
        Update the model name with the provided model name
        """
        return self._set_model_name(model_name)

    def _set_model_name(self, model_name: str) -> bool:
        raise NotImplementedError

    @property
    def template(self) -> str:
        """
        Return the LLM Prompt template
        """
        return """
### Human:
{instruct}

{text}

### Assistant:
"""

    def __init__(self):
        name = self.__class__.__name__
        self.m_generate = self.m_generate.labels(name)
        self.m_generate_call = self.m_generate_call.labels(name)
        self.m_generate_success = self.m_generate_success.labels(name)
        self.m_generate_failure = self.m_generate_failure.labels(name)
        self.detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()

    @property
    def tokenizer(self):
        """
        Return the tokenizer instance used by LLM
        """
        return self._get_tokenizer()

    def _get_tokenizer(self):
        pass

    def has_structured_output(self):
        # whether implementation supports structured output
        # on the model side (otherwise it's prompt engineering)
        return False

    async def generate(
        self,
        prompt: str,
        logger: reflector_logger,
        gen_schema: dict | None = None,
        gen_cfg: GenerationConfig | None = None,
        **kwargs,
    ) -> dict:
        logger.info("LLM generate", prompt=repr(prompt))

        if gen_cfg:
            gen_cfg = gen_cfg.to_dict()
        self.m_generate_call.inc()
        try:
            with self.m_generate.time():
                result = await retry(self._generate)(
                    prompt=prompt,
                    gen_schema=gen_schema,
                    gen_cfg=gen_cfg,
                    logger=logger,
                    **kwargs,
                )
            self.m_generate_success.inc()

        except Exception:
            logger.exception("Failed to call llm after retrying")
            self.m_generate_failure.inc()
            raise

        logger.debug("LLM result [raw]", result=repr(result))
        if isinstance(result, str):
            result = self._parse_json(result)
            logger.debug("LLM result [parsed]", result=repr(result))

        return result

    async def completion(
        self, messages: list, logger: reflector_logger, **kwargs
    ) -> dict:
        """
        Use /v1/chat/completion Open-AI compatible endpoint from the URL
        It's up to the user to validate anything or transform the result
        """
        logger.info("LLM completions", messages=messages)

        try:
            with self.m_generate.time():
                result = await retry(self._completion)(
                    messages=messages, **{**kwargs, "logger": logger}
                )
            self.m_generate_success.inc()
        except Exception:
            logger.exception("Failed to call llm after retrying")
            self.m_generate_failure.inc()
            raise

        logger.debug("LLM completion result", result=repr(result))
        return result

    def ensure_casing(self, title: str) -> str:
        """
        LLM takes care of word casing, but in rare cases this
        can falter. This is a fallback to ensure the casing of
        topics is in a proper format.

        We select nouns, verbs and adjectives and check if camel
        casing is present and fix it, if not. Will not perform
        any other changes.
        """
        tokens = nltk.word_tokenize(title)
        pos_tags = nltk.pos_tag(tokens)
        camel_cased = []

        whitelisted_pos_tags = [
            "NN",
            "NNS",
            "NNP",
            "NNPS",  # Noun POS
            "VB",
            "VBD",
            "VBG",
            "VBN",
            "VBP",
            "VBZ",  # Verb POS
            "JJ",
            "JJR",
            "JJS",  # Adjective POS
        ]

        # If at all there is an exception, do not block other reflector
        # processes. Return the LLM generated title, at the least.
        try:
            for word, pos in pos_tags:
                if pos in whitelisted_pos_tags and word[0].islower():
                    camel_cased.append(word[0].upper() + word[1:])
                else:
                    camel_cased.append(word)
            modified_title = self.detokenizer.detokenize(camel_cased)

            # Irrespective of casing changes, the starting letter
            # of title is always upper-cased
            title = modified_title[0].upper() + modified_title[1:]
        except Exception as e:
            reflector_logger.info(
                f"Failed to ensure casing on {title=} with exception : {str(e)}"
            )

        return title

    def trim_title(self, title: str) -> str:
        """
        List of manual trimming to the title.

        Longer titles are prone to run into a prefix of phrases that don't
        really add any descriptive information and in some cases, this
        behaviour can be repeated for several consecutive topics. Trim the
        titles to maintain quality of titles.
        """
        phrases_to_remove = ["Discussing", "Discussion on", "Discussion about"]
        try:
            pattern = (
                r"\b(?:"
                + "|".join(re.escape(phrase) for phrase in phrases_to_remove)
                + r")\b"
            )
            title = re.sub(pattern, "", title, flags=re.IGNORECASE)
        except Exception as e:
            reflector_logger.info(f"Failed to trim {title=} with exception : {str(e)}")
        return title

    async def _generate(
        self, prompt: str, gen_schema: dict | None, gen_cfg: dict | None, **kwargs
    ) -> str:
        raise NotImplementedError

    async def _completion(self, messages: list, **kwargs) -> dict:
        raise NotImplementedError

    def _parse_json(self, result: str) -> dict:
        result = result.strip()
        # try detecting code block if exist
        # starts with ```json\n, ends with ```
        # or starts with ```\n, ends with ```
        # or starts with \n```javascript\n, ends with ```

        regex = r"```(json|javascript|)?(.*)```"
        matches = re.findall(regex, result.strip(), re.MULTILINE | re.DOTALL)
        if matches:
            result = matches[0][1]

        else:
            # maybe the prompt has been started with ```json
            # so if text ends with ```, just remove it and use it as json
            if result.endswith("```"):
                result = result[:-3]

        return json.loads(result.strip())

    def text_token_threshold(self, task_params: TaskParams | None) -> int:
        """
        Choose the token size to set as the threshold to pack the LLM calls
        """
        buffer_token_size = 100
        default_output_tokens = 1000
        context_window = self.tokenizer.model_max_length
        tokens = self.tokenizer.tokenize(
            self.create_prompt(instruct=task_params.instruct, text="")
        )
        threshold = context_window - len(tokens) - buffer_token_size
        if task_params.gen_cfg:
            threshold -= task_params.gen_cfg.max_new_tokens
        else:
            threshold -= default_output_tokens
        return threshold

    def split_corpus(
        self,
        corpus: str,
        task_params: TaskParams,
        token_threshold: int | None = None,
    ) -> list[str]:
        """
        Split the input to the LLM due to CUDA memory limitations and LLM context window
        restrictions.

        Accumulate tokens from full sentences till threshold and yield accumulated
        tokens. Reset accumulation when threshold is reached and repeat process.
        """
        if not token_threshold:
            token_threshold = self.text_token_threshold(task_params=task_params)

        accumulated_tokens = []
        accumulated_sentences = []
        accumulated_token_count = 0
        corpus_sentences = nltk.sent_tokenize(corpus)

        for sentence in corpus_sentences:
            tokens = self.tokenizer.tokenize(sentence)
            if accumulated_token_count + len(tokens) <= token_threshold:
                accumulated_token_count += len(tokens)
                accumulated_tokens.extend(tokens)
                accumulated_sentences.append(sentence)
            else:
                yield "".join(accumulated_sentences)
                accumulated_token_count = len(tokens)
                accumulated_tokens = tokens
                accumulated_sentences = [sentence]

        if accumulated_tokens:
            yield " ".join(accumulated_sentences)

    def create_prompt(self, instruct: str, text: str) -> str:
        """
        Create a consumable prompt based on the prompt template
        """
        return self.template.format(instruct=instruct, text=text)
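A hedged sketch of how this now-removed registry was driven, based only on the classes above: `get_instance` resolves the backend named by `LLM_BACKEND`, importing `reflector.llm.llm_<name>` on demand. The transcript string is a placeholder assumption:

```python
# Sketch, assuming LLM_BACKEND=modal in settings and a transcript string at hand.
import asyncio

from reflector.llm.base import LLM
from reflector.llm.llm_params import LLMTaskParams
from reflector.logger import logger


async def main():
    llm = LLM.get_instance()  # imports reflector.llm.llm_modal on first use
    params = LLMTaskParams.get_instance("topic").task_params
    transcript = "...transcript text..."  # placeholder
    for chunk in llm.split_corpus(transcript, params):
        prompt = llm.create_prompt(instruct=params.instruct, text=chunk)
        result = await llm.generate(
            prompt=prompt,
            gen_schema=params.gen_schema,
            gen_cfg=params.gen_cfg,
            logger=logger,
        )
        print(result)


asyncio.run(main())
```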
@@ -1,155 +0,0 @@
import httpx
from transformers import AutoTokenizer, GenerationConfig

from reflector.llm.base import LLM
from reflector.logger import logger as reflector_logger
from reflector.settings import settings
from reflector.utils.retry import retry


class ModalLLM(LLM):
    def __init__(self, model_name: str | None = None):
        super().__init__()
        self.timeout = settings.LLM_TIMEOUT
        self.llm_url = settings.LLM_URL + "/llm"
        self.headers = {
            "Authorization": f"Bearer {settings.LLM_MODAL_API_KEY}",
        }
        self._set_model_name(model_name if model_name else settings.DEFAULT_LLM)

    @property
    def supported_models(self):
        """
        List of currently supported models on this GPU platform
        """
        # TODO: Query the specific GPU platform
        # Replace this with a HTTP call
        return [
            "lmsys/vicuna-13b-v1.5",
            "HuggingFaceH4/zephyr-7b-alpha",
            "NousResearch/Hermes-3-Llama-3.1-8B",
        ]

    async def _generate(
        self, prompt: str, gen_schema: dict | None, gen_cfg: dict | None, **kwargs
    ) -> str:
        json_payload = {"prompt": prompt}
        if gen_schema:
            json_payload["gen_schema"] = gen_schema
        if gen_cfg:
            json_payload["gen_cfg"] = gen_cfg

        # Handing over generation of the final summary to Zephyr model
        # but replacing the Vicuna model will happen after more testing
        # TODO: Create a mapping of model names and cloud deployments
        if self.model_name == "HuggingFaceH4/zephyr-7b-alpha":
            self.llm_url = settings.ZEPHYR_LLM_URL + "/llm"

        async with httpx.AsyncClient() as client:
            response = await retry(client.post)(
                self.llm_url,
                headers=self.headers,
                json=json_payload,
                timeout=self.timeout,
                retry_timeout=60 * 5,
                follow_redirects=True,
                logger=kwargs.get("logger", reflector_logger),
            )
            response.raise_for_status()
            text = response.json()["text"]
            return text

    async def _completion(self, messages: list, **kwargs) -> dict:
        # returns full api response
        kwargs.setdefault("temperature", 0.3)
        kwargs.setdefault("max_tokens", 2048)
        kwargs.setdefault("stream", False)
        kwargs.setdefault("repetition_penalty", 1)
        kwargs.setdefault("top_p", 1)
        kwargs.setdefault("top_k", -1)
        kwargs.setdefault("min_p", 0.05)
        data = {"messages": messages, "model": self.model_name, **kwargs}

        if self.model_name == "NousResearch/Hermes-3-Llama-3.1-8B":
            self.llm_url = settings.HERMES_3_8B_LLM_URL + "/v1/chat/completions"

        async with httpx.AsyncClient() as client:
            response = await retry(client.post)(
                self.llm_url,
                headers=self.headers,
                json=data,
                timeout=self.timeout,
                retry_timeout=60 * 5,
                follow_redirects=True,
                logger=kwargs.get("logger", reflector_logger),
            )
            response.raise_for_status()
            return response.json()

    def _set_model_name(self, model_name: str) -> bool:
        """
        Set the model name
        """
        # Abort, if the model is not supported
        if model_name not in self.supported_models:
            reflector_logger.info(
                f"Attempted to change {model_name=}, but is not supported."
                f"Setting model and tokenizer failed !"
            )
            return False
        # Abort, if the model is already set
        elif hasattr(self, "model_name") and model_name == self._get_model_name():
            reflector_logger.info("No change in model. Setting model skipped.")
            return False
        # Update model name and tokenizer
        self.model_name = model_name
        self.llm_tokenizer = AutoTokenizer.from_pretrained(
            self.model_name, cache_dir=settings.CACHE_DIR
        )
        reflector_logger.info(f"Model set to {model_name=}. Tokenizer updated.")
        return True

    def _get_tokenizer(self) -> AutoTokenizer:
        """
        Return the currently used LLM tokenizer
        """
        return self.llm_tokenizer

    def _get_model_name(self) -> str:
        """
        Return the current model name from the instance details
        """
        return self.model_name


LLM.register("modal", ModalLLM)

if __name__ == "__main__":
    from reflector.logger import logger

    async def main():
        llm = ModalLLM()
        prompt = llm.create_prompt(
            instruct="Complete the following task",
            text="Tell me a joke about programming.",
        )
        result = await llm.generate(prompt=prompt, logger=logger)
        print(result)

        gen_schema = {
            "type": "object",
            "properties": {"response": {"type": "string"}},
        }

        result = await llm.generate(prompt=prompt, gen_schema=gen_schema, logger=logger)
        print(result)

        gen_cfg = GenerationConfig(max_new_tokens=150)
        result = await llm.generate(
            prompt=prompt, gen_cfg=gen_cfg, gen_schema=gen_schema, logger=logger
        )
        print(result)

    import asyncio

    asyncio.run(main())
@@ -1,48 +0,0 @@
import httpx
from transformers import GenerationConfig

from reflector.llm.base import LLM
from reflector.logger import logger
from reflector.settings import settings


class OpenAILLM(LLM):
    def __init__(self, model_name: str | None = None, **kwargs):
        super().__init__(**kwargs)
        self.openai_key = settings.LLM_OPENAI_KEY
        self.openai_url = settings.LLM_URL
        self.openai_model = settings.LLM_OPENAI_MODEL
        self.openai_temperature = settings.LLM_OPENAI_TEMPERATURE
        self.timeout = settings.LLM_TIMEOUT
        self.max_tokens = settings.LLM_MAX_TOKENS
        logger.info(f"LLM use openai backend at {self.openai_url}")

    async def _generate(
        self,
        prompt: str,
        gen_schema: dict | None,
        gen_cfg: GenerationConfig | None,
        **kwargs,
    ) -> str:
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.openai_key}",
        }

        async with httpx.AsyncClient(timeout=self.timeout) as client:
            response = await client.post(
                self.openai_url,
                headers=headers,
                json={
                    "model": self.openai_model,
                    "prompt": prompt,
                    "max_tokens": self.max_tokens,
                    "temperature": self.openai_temperature,
                },
            )
            response.raise_for_status()
            result = response.json()
            return result["choices"][0]["text"]


LLM.register("openai", OpenAILLM)
@@ -1,219 +0,0 @@
from typing import Optional, TypeVar

from pydantic import BaseModel
from transformers import GenerationConfig


class TaskParams(BaseModel, arbitrary_types_allowed=True):
    instruct: str
    gen_cfg: Optional[GenerationConfig] = None
    gen_schema: Optional[dict] = None


T = TypeVar("T", bound="LLMTaskParams")


class LLMTaskParams:
    _registry = {}

    @classmethod
    def register(cls, task, klass) -> None:
        cls._registry[task] = klass

    @classmethod
    def get_instance(cls, task: str) -> T:
        return cls._registry[task]()

    @property
    def task_params(self) -> TaskParams | None:
        """
        Fetch the task related parameters
        """
        return self._get_task_params()

    def _get_task_params(self) -> None:
        pass


class FinalLongSummaryParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=1000, num_beams=3, do_sample=True, temperature=0.3
        )
        self._instruct = """
        Take the key ideas and takeaways from the text and create a short
        summary. Be sure to keep the length of the response to a minimum.
        Do not include trivial information in the summary.
        """
        self._schema = {
            "type": "object",
            "properties": {"long_summary": {"type": "string"}},
        }
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=self._schema, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


class FinalShortSummaryParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=800, num_beams=3, do_sample=True, temperature=0.3
        )
        self._instruct = """
        Take the key ideas and takeaways from the text and create a short
        summary. Be sure to keep the length of the response to a minimum.
        Do not include trivial information in the summary.
        """
        self._schema = {
            "type": "object",
            "properties": {"short_summary": {"type": "string"}},
        }
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=self._schema, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


class FinalTitleParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=200, num_beams=5, do_sample=True, temperature=0.5
        )
        self._instruct = """
        Combine the following individual titles into one single short title that
        condenses the essence of all titles.
        """
        self._schema = {
            "type": "object",
            "properties": {"title": {"type": "string"}},
        }
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=self._schema, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


class TopicParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=500, num_beams=6, do_sample=True, temperature=0.9
        )
        self._instruct = """
        Create a JSON object as response. The JSON object must have 2 fields:
        i) title and ii) summary.
        For the title field, generate a very detailed and self-explanatory
        title for the given text. Let the title be as descriptive as possible.
        For the summary field, summarize the given text in a maximum of
        two sentences.
        """
        self._schema = {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
            },
        }
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=self._schema, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


class BulletedSummaryParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=800,
            num_beams=1,
            do_sample=True,
            temperature=0.2,
            early_stopping=True,
        )
        self._instruct = """
        Given a meeting transcript, extract the key things discussed in the
        form of a list.

        While generating the response, follow the constraints mentioned below.

        Summary constraints:
        i) Do not add new content, except to fix spelling or punctuation.
        ii) Do not add any prefixes or numbering in the response.
        iii) The summarization should be as information dense as possible.
        iv) Do not add any additional sections like Note, Conclusion, etc. in
        the response.

        Response format:
        i) The response should be in the form of a bulleted list.
        ii) Iteratively merge all the relevant paragraphs together to keep the
        number of paragraphs to a minimum.
        iii) Remove any unfinished sentences from the final response.
        iv) Do not include narrative or reporting clauses.
        v) Use "*" as the bullet icon.
        """
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=None, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


class MergedSummaryParams(LLMTaskParams):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._gen_cfg = GenerationConfig(
            max_new_tokens=600,
            num_beams=1,
            do_sample=True,
            temperature=0.2,
            early_stopping=True,
        )
        self._instruct = """
        Given the key points of a meeting, summarize the points to describe the
        meeting in the form of paragraphs.
        """
        self._task_params = TaskParams(
            instruct=self._instruct, gen_schema=None, gen_cfg=self._gen_cfg
        )

    def _get_task_params(self) -> TaskParams:
        """
        Return the parameters associated with a specific LLM task
        """
        return self._task_params


LLMTaskParams.register("topic", TopicParams)
LLMTaskParams.register("final_title", FinalTitleParams)
LLMTaskParams.register("final_short_summary", FinalShortSummaryParams)
LLMTaskParams.register("final_long_summary", FinalLongSummaryParams)
LLMTaskParams.register("bullet_summary", BulletedSummaryParams)
LLMTaskParams.register("merged_summary", MergedSummaryParams)
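A hedged sketch of what the registry above yielded; the task names and values are taken from the registrations and classes in this file:

```python
params = LLMTaskParams.get_instance("final_title").task_params
print(params.instruct)                # the title-merging instruction
print(params.gen_cfg.max_new_tokens)  # 200 for final_title
print(params.gen_schema)              # {"type": "object", "properties": {"title": ...}}
```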
@@ -1,118 +0,0 @@
import httpx
from transformers import AutoTokenizer

from reflector.logger import logger


def apply_gen_config(payload: dict, gen_cfg) -> None:
    """Apply generation config overrides to the payload."""
    config_mapping = {
        "temperature": "temperature",
        "max_new_tokens": "max_tokens",
        "max_tokens": "max_tokens",
        "top_p": "top_p",
        "frequency_penalty": "frequency_penalty",
        "presence_penalty": "presence_penalty",
    }

    for cfg_attr, payload_key in config_mapping.items():
        value = getattr(gen_cfg, cfg_attr, None)
        if value is not None:
            payload[payload_key] = value
            if cfg_attr == "max_new_tokens":  # Handle max_new_tokens taking precedence
                break


class OpenAILLM:
    def __init__(self, config_prefix: str, settings):
        self.config_prefix = config_prefix
        self.settings_obj = settings
        self.model_name = getattr(settings, f"{config_prefix}_MODEL")
        self.url = getattr(settings, f"{config_prefix}_LLM_URL")
        self.api_key = getattr(settings, f"{config_prefix}_LLM_API_KEY")

        timeout = getattr(settings, f"{config_prefix}_LLM_TIMEOUT", 300)
        self.temperature = getattr(settings, f"{config_prefix}_LLM_TEMPERATURE", 0.7)
        self.max_tokens = getattr(settings, f"{config_prefix}_LLM_MAX_TOKENS", 1024)
        self.client = httpx.AsyncClient(timeout=timeout)

        # Use a tokenizer that approximates OpenAI token counting
        tokenizer_name = getattr(settings, f"{config_prefix}_TOKENIZER", "gpt2")
        try:
            self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
        except Exception:
            logger.debug(
                f"Failed to load tokenizer '{tokenizer_name}', falling back to default 'gpt2' tokenizer"
            )
            self.tokenizer = AutoTokenizer.from_pretrained("gpt2")

    async def generate(
        self, prompt: str, gen_schema=None, gen_cfg=None, logger=None
    ) -> str:
        if logger:
            logger.debug(
                "OpenAI LLM generate",
                prompt=repr(prompt[:100] + "..." if len(prompt) > 100 else prompt),
            )

        messages = [{"role": "user", "content": prompt}]
        result = await self.completion(
            messages, gen_schema=gen_schema, gen_cfg=gen_cfg, logger=logger
        )
        return result["choices"][0]["message"]["content"]

    async def completion(
        self, messages: list, gen_schema=None, gen_cfg=None, logger=None, **kwargs
    ) -> dict:
        if logger:
            logger.info("OpenAI LLM completion", messages_count=len(messages))

        payload = {
            "model": self.model_name,
            "messages": messages,
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
        }

        # Apply generation config overrides
        if gen_cfg:
            apply_gen_config(payload, gen_cfg)

        # Apply structured output schema
        if gen_schema:
            payload["response_format"] = {
                "type": "json_schema",
                "json_schema": {"name": "response", "schema": gen_schema},
            }

        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.api_key}",
        }

        url = f"{self.url.rstrip('/')}/chat/completions"

        if logger:
            logger.debug(
                "OpenAI API request", url=url, payload_keys=list(payload.keys())
            )

        response = await self.client.post(url, json=payload, headers=headers)
        response.raise_for_status()

        result = response.json()

        if logger:
            logger.debug(
                "OpenAI API response",
                status_code=response.status_code,
                choices_count=len(result.get("choices", [])),
            )

        return result

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.client.aclose()
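A hedged usage sketch for the prefix-configured client above; the `SUMMARY` prefix and the settings attributes are assumptions inferred from the `getattr` calls, and the import path of this module is elided in the diff:

```python
import asyncio

from reflector.settings import settings

# OpenAILLM is the class shown above; its module path is elided in this diff.


async def main():
    # Assumes settings defines SUMMARY_MODEL, SUMMARY_LLM_URL, SUMMARY_LLM_API_KEY.
    async with OpenAILLM("SUMMARY", settings) as llm:
        text = await llm.generate("Summarize this meeting in one line: ...")
        print(text)


asyncio.run(main())
```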
@@ -47,7 +47,7 @@ from reflector.processors import (
    TranscriptFinalTitleProcessor,
    TranscriptLinerProcessor,
    TranscriptTopicDetectorProcessor,
    TranscriptTranslatorProcessor,
    TranscriptTranslatorAutoProcessor,
)
from reflector.processors.audio_waveform_processor import AudioWaveformProcessor
from reflector.processors.types import AudioDiarizationInput

@@ -361,7 +361,7 @@ class PipelineMainLive(PipelineMainBase):
            AudioMergeProcessor(),
            AudioTranscriptAutoProcessor.as_threaded(),
            TranscriptLinerProcessor(),
            TranscriptTranslatorProcessor.as_threaded(callback=self.on_transcript),
            TranscriptTranslatorAutoProcessor.as_threaded(callback=self.on_transcript),
            TranscriptTopicDetectorProcessor.as_threaded(callback=self.on_topic),
        ]
        pipeline = Pipeline(*processors)
@@ -16,6 +16,7 @@ from .transcript_final_title import TranscriptFinalTitleProcessor  # noqa: F401
from .transcript_liner import TranscriptLinerProcessor  # noqa: F401
from .transcript_topic_detector import TranscriptTopicDetectorProcessor  # noqa: F401
from .transcript_translator import TranscriptTranslatorProcessor  # noqa: F401
from .transcript_translator_auto import TranscriptTranslatorAutoProcessor  # noqa: F401
from .types import (  # noqa: F401
    AudioFile,
    FinalLongSummary,
@@ -10,12 +10,17 @@ class AudioDiarizationModalProcessor(AudioDiarizationProcessor):
    INPUT_TYPE = AudioDiarizationInput
    OUTPUT_TYPE = TitleSummary

    def __init__(self, **kwargs):
    def __init__(self, modal_api_key: str | None = None, **kwargs):
        super().__init__(**kwargs)
        if not settings.DIARIZATION_URL:
            raise Exception(
                "DIARIZATION_URL required to use AudioDiarizationModalProcessor"
            )
        self.diarization_url = settings.DIARIZATION_URL + "/diarize"
        self.headers = {
            "Authorization": f"Bearer {settings.LLM_MODAL_API_KEY}",
        }
        self.modal_api_key = modal_api_key
        self.headers = {}
        if self.modal_api_key:
            self.headers["Authorization"] = f"Bearer {self.modal_api_key}"

    async def _diarize(self, data: AudioDiarizationInput):
        # Gather diarization data
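The old and new `__init__` lines above show the move from the global `LLM_MODAL_API_KEY` to a per-service key injected by the caller, in line with the 0.6.0 breaking-change notes. A hedged sketch of the new wiring; the exact injection site (the "auto processor" from PR #528) is not shown in this diff:

```python
# Sketch of the service-specific key pattern (assumed wiring).
from reflector.settings import settings

processor = AudioDiarizationModalProcessor(
    modal_api_key=settings.DIARIZATION_API_KEY  # key name from the 0.6.0 notes
)
# With no key, self.headers stays empty and the request is sent unauthenticated.
```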
@@ -21,16 +21,20 @@ from reflector.settings import settings


 class AudioTranscriptModalProcessor(AudioTranscriptProcessor):
-    def __init__(self, modal_api_key: str):
+    def __init__(self, modal_api_key: str | None = None, **kwargs):
         super().__init__()
         if not settings.TRANSCRIPT_URL:
             raise Exception(
                 "TRANSCRIPT_URL required to use AudioTranscriptModalProcessor"
             )
         self.transcript_url = settings.TRANSCRIPT_URL + "/v1"
         self.timeout = settings.TRANSCRIPT_TIMEOUT
-        self.api_key = settings.TRANSCRIPT_MODAL_API_KEY
+        self.modal_api_key = modal_api_key

     async def _transcript(self, data: AudioFile):
         async with AsyncOpenAI(
             base_url=self.transcript_url,
-            api_key=self.api_key,
+            api_key=self.modal_api_key,
             timeout=self.timeout,
         ) as client:
             self.logger.debug(f"Try to transcribe audio {data.name}")
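As an aside, a minimal sketch of driving such an OpenAI-compatible transcription endpoint with the same async client; the model name and response shape are assumptions, since the hunk is truncated before the actual request:

# Hypothetical usage sketch, not part of the change: calling an
# OpenAI-compatible transcription service the way the processor above does.
from openai import AsyncOpenAI


async def transcribe(base_url: str, api_key: str, path: str) -> str:
    async with AsyncOpenAI(base_url=base_url + "/v1", api_key=api_key) as client:
        with open(path, "rb") as audio:
            result = await client.audio.transcriptions.create(
                model="whisper-1",  # model name is an assumption
                file=audio,
            )
    return result.text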
@@ -12,15 +12,9 @@ from textwrap import dedent
 from typing import Type, TypeVar

 import structlog
-from llama_index.core import Settings
-from llama_index.core.output_parsers import PydanticOutputParser
-from llama_index.core.program import LLMTextCompletionProgram
-from llama_index.core.response_synthesizers import TreeSummarize
-from llama_index.llms.openai_like import OpenAILike
 from pydantic import BaseModel, Field

-from reflector.llm.base import LLM
-from reflector.llm.openai_llm import OpenAILLM
+from reflector.llm import LLM
 from reflector.settings import settings

 T = TypeVar("T", bound=BaseModel)
@@ -168,23 +162,12 @@ class SummaryBuilder:
         self.summaries: list[dict[str, str]] = []
         self.subjects: list[str] = []
         self.transcription_type: TranscriptionType | None = None
-        self.llm_instance: LLM = llm
+        self.llm: LLM = llm
         self.model_name: str = llm.model_name
         self.logger = logger or structlog.get_logger()
         if filename:
             self.read_transcript_from_file(filename)

-        Settings.llm = OpenAILike(
-            model=llm.model_name,
-            api_base=llm.url,
-            api_key=llm.api_key,
-            context_window=settings.SUMMARY_LLM_CONTEXT_SIZE_TOKENS,
-            is_chat_model=True,
-            is_function_calling_model=llm.has_structured_output,
-            temperature=llm.temperature,
-            max_tokens=llm.max_tokens,
-        )
-
     def read_transcript_from_file(self, filename: str) -> None:
         """
         Load a transcript from a text file.
@@ -202,40 +185,16 @@ class SummaryBuilder:
         self.transcript = transcript

     def set_llm_instance(self, llm: LLM) -> None:
-        self.llm_instance = llm
+        self.llm = llm

     async def _get_structured_response(
         self, prompt: str, output_cls: Type[T], tone_name: str | None = None
-    ) -> Type[T]:
+    ) -> T:
         """Generic function to get structured output from LLM for non-function-calling models."""
-        # First, use TreeSummarize to get the response
-        summarizer = TreeSummarize(verbose=True)
-
-        response = await summarizer.aget_response(
-            prompt, [self.transcript], tone_name=tone_name
+        return await self.llm.get_structured_response(
+            prompt, [self.transcript], output_cls, tone_name=tone_name
         )

-        # Then, use PydanticOutputParser to structure the response
-        output_parser = PydanticOutputParser(output_cls)
-
-        prompt_template_str = STRUCTURED_RESPONSE_PROMPT_TEMPLATE
-
-        program = LLMTextCompletionProgram.from_defaults(
-            output_parser=output_parser,
-            prompt_template_str=prompt_template_str,
-            verbose=False,
-        )
-
-        format_instructions = output_parser.format(
-            "Please structure the above information in the following JSON format:"
-        )
-
-        output = await program.acall(
-            analysis=str(response), format_instructions=format_instructions
-        )
-
-        return output
-
     # ----------------------------------------------------------------------------
     # Participants
     # ----------------------------------------------------------------------------
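A minimal sketch of the new call path, assuming the LLM wrapper exposes get_structured_response with the signature used above; the Highlights model is hypothetical:

from pydantic import BaseModel, Field


class Highlights(BaseModel):  # hypothetical output schema for illustration
    points: list[str] = Field(description="Key points from the transcript")


# Inside SummaryBuilder, once self.transcript is loaded, the delegation above
# reduces a structured extraction to a single call:
#
#     highlights = await self._get_structured_response(
#         "List the key points discussed.", Highlights, tone_name="Analyst"
#     )
#     print(highlights.points)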
@@ -354,19 +313,18 @@ class SummaryBuilder:
     async def generate_subject_summaries(self) -> None:
         """Generate detailed summaries for each extracted subject."""
         assert self.transcript is not None
-        summarizer = TreeSummarize(verbose=False)
         summaries = []

         for subject in self.subjects:
             detailed_prompt = DETAILED_SUBJECT_PROMPT_TEMPLATE.format(subject=subject)

-            detailed_response = await summarizer.aget_response(
+            detailed_response = await self.llm.get_response(
                 detailed_prompt, [self.transcript], tone_name="Topic assistant"
             )

             paragraph_prompt = PARAGRAPH_SUMMARY_PROMPT

-            paragraph_response = await summarizer.aget_response(
+            paragraph_response = await self.llm.get_response(
                 paragraph_prompt, [str(detailed_response)], tone_name="Topic summarizer"
             )
@@ -377,7 +335,6 @@ class SummaryBuilder:

     async def generate_recap(self) -> None:
         """Generate a quick recap from the subject summaries."""
-        summarizer = TreeSummarize(verbose=True)

         summaries_text = "\n\n".join(
             [
@@ -388,7 +345,7 @@ class SummaryBuilder:

         recap_prompt = RECAP_PROMPT

-        recap_response = await summarizer.aget_response(
+        recap_response = await self.llm.get_response(
             recap_prompt, [summaries_text], tone_name="Recap summarizer"
         )
@@ -483,7 +440,7 @@ if __name__ == "__main__":

     async def main():
         # build the summary
-        llm = OpenAILLM(config_prefix="SUMMARY", settings=settings)
+        llm = LLM(settings=settings)
         sm = SummaryBuilder(llm=llm, filename=args.transcript)

         if args.subjects:
@@ -1,4 +1,4 @@
-from reflector.llm.openai_llm import OpenAILLM
+from reflector.llm import LLM
 from reflector.processors.base import Processor
 from reflector.processors.summary.summary_builder import SummaryBuilder
 from reflector.processors.types import FinalLongSummary, FinalShortSummary, TitleSummary
@@ -17,7 +17,7 @@ class TranscriptFinalSummaryProcessor(Processor):
         super().__init__(**kwargs)
         self.transcript = transcript
         self.chunks: list[TitleSummary] = []
-        self.llm = OpenAILLM(config_prefix="SUMMARY", settings=settings)
+        self.llm = LLM(settings=settings)
         self.builder = None

     async def _push(self, data: TitleSummary):
@@ -1,67 +1,72 @@
-from reflector.llm import LLM, LLMTaskParams
+from textwrap import dedent
+
+from reflector.llm import LLM
 from reflector.processors.base import Processor
 from reflector.processors.types import FinalTitle, TitleSummary
 from reflector.settings import settings
+from reflector.utils.text import clean_title
+
+TITLE_PROMPT = dedent(
+    """
+    Generate a concise title for this meeting based on the following topic titles.
+    Ignore casual conversation, greetings, or administrative matters.
+
+    The title must:
+    - Be maximum 10 words
+    - Use noun phrases when possible (e.g., "Q1 Budget Review" not "Reviewing the Q1 Budget")
+    - Avoid generic terms like "Team Meeting" or "Discussion"
+
+    If multiple unrelated topics were discussed, prioritize the most significant one.
+    or create a compound title (e.g., "Product Launch and Budget Planning").
+
+    <topics_discussed>
+    {titles}
+    </topics_discussed>
+
+    Do not explain, just output the meeting title as a single line.
+    """
+).strip()


 class TranscriptFinalTitleProcessor(Processor):
     """
-    Assemble all summary into a line-based json
+    Generate a final title from topic titles using LlamaIndex
     """

     INPUT_TYPE = TitleSummary
     OUTPUT_TYPE = FinalTitle
-    TASK = "final_title"

     def __init__(self, **kwargs):
         super().__init__(**kwargs)
         self.chunks: list[TitleSummary] = []
-        self.llm = LLM.get_instance()
-        self.params = LLMTaskParams.get_instance(self.TASK).task_params
+        self.llm = LLM(settings=settings, temperature=0.5, max_tokens=200)

     async def _push(self, data: TitleSummary):
         self.chunks.append(data)

-    async def get_title(self, text: str) -> dict:
+    async def get_title(self, accumulated_titles: str) -> str:
         """
-        Generate a title for the whole recording
+        Generate a title for the whole recording using LLM
         """
-        chunks = list(self.llm.split_corpus(corpus=text, task_params=self.params))
+        prompt = TITLE_PROMPT.format(titles=accumulated_titles)
+        response = await self.llm.get_response(
+            prompt,
+            [accumulated_titles],
+            tone_name="Title generator",
+        )

-        if len(chunks) == 1:
-            chunk = chunks[0]
-            prompt = self.llm.create_prompt(instruct=self.params.instruct, text=chunk)
-            title_result = await self.llm.generate(
-                prompt=prompt,
-                gen_schema=self.params.gen_schema,
-                gen_cfg=self.params.gen_cfg,
-                logger=self.logger,
-            )
-            return title_result
-        else:
-            accumulated_titles = ""
-            for chunk in chunks:
-                prompt = self.llm.create_prompt(
-                    instruct=self.params.instruct, text=chunk
-                )
-                title_result = await self.llm.generate(
-                    prompt=prompt,
-                    gen_schema=self.params.gen_schema,
-                    gen_cfg=self.params.gen_cfg,
-                    logger=self.logger,
-                )
-                accumulated_titles += title_result["title"]
+        self.logger.info(f"Generated title response: {response}")

-            return await self.get_title(accumulated_titles)
+        return response

     async def _flush(self):
         if not self.chunks:
             self.logger.warning("No summary to output")
             return

-        accumulated_titles = ".".join([chunk.title for chunk in self.chunks])
-        title_result = await self.get_title(accumulated_titles)
-        final_title = self.llm.trim_title(title_result["title"])
-        final_title = self.llm.ensure_casing(final_title)
+        accumulated_titles = "\n".join([f"- {chunk.title}" for chunk in self.chunks])
+        title = await self.get_title(accumulated_titles)
+        title = clean_title(title)

-        final_title = FinalTitle(title=final_title)
+        final_title = FinalTitle(title=title)
         await self.emit(final_title)
@@ -1,7 +1,41 @@
-from reflector.llm import LLM, LLMTaskParams
+from textwrap import dedent
+
+from pydantic import BaseModel, Field
+
+from reflector.llm import LLM
 from reflector.processors.base import Processor
 from reflector.processors.types import TitleSummary, Transcript
 from reflector.settings import settings
+from reflector.utils.text import clean_title
+
+TOPIC_PROMPT = dedent(
+    """
+    Analyze the following transcript segment and extract the main topic being discussed.
+    Focus on the substantive content and ignore small talk or administrative chatter.
+
+    Create a title that:
+    - Captures the specific subject matter being discussed
+    - Is descriptive and self-explanatory
+    - Uses professional language
+    - Is specific rather than generic
+
+    For the summary:
+    - Summarize the key points in maximum two sentences
+    - Focus on what was discussed, decided, or accomplished
+    - Be concise but informative
+
+    <transcript>
+    {text}
+    </transcript>
+    """
+).strip()
+
+
+class TopicResponse(BaseModel):
+    """Structured response for topic detection"""
+
+    title: str = Field(description="A descriptive title for the topic being discussed")
+    summary: str = Field(description="A concise 1-2 sentence summary of the discussion")


 class TranscriptTopicDetectorProcessor(Processor):
@@ -11,7 +45,6 @@ class TranscriptTopicDetectorProcessor(Processor):

     INPUT_TYPE = Transcript
     OUTPUT_TYPE = TitleSummary
-    TASK = "topic"

     def __init__(
         self, min_transcript_length: int = int(settings.MIN_TRANSCRIPT_LENGTH), **kwargs
@@ -19,8 +52,7 @@ class TranscriptTopicDetectorProcessor(Processor):
         super().__init__(**kwargs)
         self.transcript = None
         self.min_transcript_length = min_transcript_length
-        self.llm = LLM.get_instance()
-        self.params = LLMTaskParams.get_instance(self.TASK).task_params
+        self.llm = LLM(settings=settings, temperature=0.9, max_tokens=500)

     async def _push(self, data: Transcript):
         if self.transcript is None:
@@ -34,18 +66,15 @@ class TranscriptTopicDetectorProcessor(Processor):
             return
         await self.flush()

-    async def get_topic(self, text: str) -> dict:
+    async def get_topic(self, text: str) -> TopicResponse:
         """
-        Generate a topic and description for a transcription excerpt
+        Generate a topic and description for a transcription excerpt using LLM
         """
-        prompt = self.llm.create_prompt(instruct=self.params.instruct, text=text)
-        topic_result = await self.llm.generate(
-            prompt=prompt,
-            gen_schema=self.params.gen_schema,
-            gen_cfg=self.params.gen_cfg,
-            logger=self.logger,
+        prompt = TOPIC_PROMPT.format(text=text)
+        response = await self.llm.get_structured_response(
+            prompt, [text], TopicResponse, tone_name="Topic analyzer"
         )
-        return topic_result
+        return response

     async def _flush(self):
         if not self.transcript:
@@ -53,13 +82,13 @@ class TranscriptTopicDetectorProcessor(Processor):

         text = self.transcript.text
         self.logger.info(f"Topic detector got {len(text)} length transcript")

         topic_result = await self.get_topic(text=text)
-        title = self.llm.trim_title(topic_result["title"])
-        title = self.llm.ensure_casing(title)
+        title = clean_title(topic_result.title)

         summary = TitleSummary(
             title=title,
-            summary=topic_result["summary"],
+            summary=topic_result.summary,
             timestamp=self.transcript.timestamp,
             duration=self.transcript.duration,
             transcript=self.transcript,
@@ -1,9 +1,5 @@
-import httpx
-
 from reflector.processors.base import Processor
-from reflector.processors.types import Transcript, TranslationLanguages
-from reflector.settings import settings
-from reflector.utils.retry import retry
+from reflector.processors.types import Transcript


 class TranscriptTranslatorProcessor(Processor):
@@ -13,61 +9,27 @@ class TranscriptTranslatorProcessor(Processor):

     INPUT_TYPE = Transcript
     OUTPUT_TYPE = Transcript
-    TASK = "translate"

     def __init__(self, **kwargs):
         super().__init__(**kwargs)
         self.transcript = None
-        self.translate_url = settings.TRANSLATE_URL
-        self.timeout = settings.TRANSLATE_TIMEOUT
-        self.headers = {"Authorization": f"Bearer {settings.LLM_MODAL_API_KEY}"}

     async def _push(self, data: Transcript):
         self.transcript = data
         await self.flush()

-    async def get_translation(self, text: str) -> str | None:
-        # FIXME this should be a processor after, as each user may want
-        # different languages
-
-        source_language = self.get_pref("audio:source_language", "en")
-        target_language = self.get_pref("audio:target_language", "en")
-        if source_language == target_language:
-            return
-
-        languages = TranslationLanguages()
-        # Only way to set the target should be the UI element like dropdown.
-        # Hence, this assert should never fail.
-        assert languages.is_supported(target_language)
-        self.logger.debug(f"Try to translate {text=}")
-        json_payload = {
-            "text": text,
-            "source_language": source_language,
-            "target_language": target_language,
-        }
-
-        async with httpx.AsyncClient() as client:
-            response = await retry(client.post)(
-                self.translate_url + "/translate",
-                headers=self.headers,
-                params=json_payload,
-                timeout=self.timeout,
-                follow_redirects=True,
-                logger=self.logger,
-            )
-            response.raise_for_status()
-            result = response.json()["text"]
-
-        # Sanity check for translation status in the result
-        if target_language in result:
-            translation = result[target_language]
-            self.logger.debug(f"Translation response: {text=}, {translation=}")
-            return translation
+    async def _translate(self, text: str) -> str | None:
+        raise NotImplementedError

     async def _flush(self):
         if not self.transcript:
             return
-        self.transcript.translation = await self.get_translation(
-            text=self.transcript.text
-        )
+
+        source_language = self.get_pref("audio:source_language", "en")
+        target_language = self.get_pref("audio:target_language", "en")
+        if source_language == target_language:
+            self.transcript.translation = None
+        else:
+            self.transcript.translation = await self._translate(self.transcript.text)

         await self.emit(self.transcript)
server/reflector/processors/transcript_translator_auto.py (new file, 32 lines)
@@ -0,0 +1,32 @@
+import importlib
+
+from reflector.processors.transcript_translator import TranscriptTranslatorProcessor
+from reflector.settings import settings
+
+
+class TranscriptTranslatorAutoProcessor(TranscriptTranslatorProcessor):
+    _registry = {}
+
+    @classmethod
+    def register(cls, name, kclass):
+        cls._registry[name] = kclass
+
+    def __new__(cls, name: str | None = None, **kwargs):
+        if name is None:
+            name = settings.TRANSLATION_BACKEND
+        if name not in cls._registry:
+            module_name = f"reflector.processors.transcript_translator_{name}"
+            importlib.import_module(module_name)
+
+        # gather specific configuration for the processor
+        # search `TRANSLATION_BACKEND_XXX_YYY`, push to constructor as `backend_xxx_yyy`
+        config = {}
+        name_upper = name.upper()
+        settings_prefix = "TRANSLATION_"
+        config_prefix = f"{settings_prefix}{name_upper}_"
+        for key, value in settings:
+            if key.startswith(config_prefix):
+                config_name = key[len(settings_prefix) :].lower()
+                config[config_name] = value
+
+        return cls._registry[name](**config | kwargs)
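To illustrate the registry pattern this new file introduces, a hypothetical third-party backend; the myapi name, module, and setting are invented for the example, everything else follows the code above:

# reflector/processors/transcript_translator_myapi.py (hypothetical module:
# with TRANSLATION_BACKEND=myapi, __new__ above imports it by this name)
from reflector.processors.transcript_translator import TranscriptTranslatorProcessor
from reflector.processors.transcript_translator_auto import (
    TranscriptTranslatorAutoProcessor,
)


class TranscriptTranslatorMyApiProcessor(TranscriptTranslatorProcessor):
    # a TRANSLATION_MYAPI_API_KEY setting would arrive here as myapi_api_key,
    # per the prefix-stripping loop in __new__ above
    def __init__(self, myapi_api_key: str | None = None, **kwargs):
        super().__init__(**kwargs)
        self.api_key = myapi_api_key

    async def _translate(self, text: str) -> str | None:
        return text  # a real backend would call its translation API here


TranscriptTranslatorAutoProcessor.register("myapi", TranscriptTranslatorMyApiProcessor)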
server/reflector/processors/transcript_translator_modal.py (new file, 66 lines)
@@ -0,0 +1,66 @@
+import httpx
+
+from reflector.processors.transcript_translator import TranscriptTranslatorProcessor
+from reflector.processors.transcript_translator_auto import (
+    TranscriptTranslatorAutoProcessor,
+)
+from reflector.processors.types import TranslationLanguages
+from reflector.settings import settings
+from reflector.utils.retry import retry
+
+
+class TranscriptTranslatorModalProcessor(TranscriptTranslatorProcessor):
+    """
+    Translate the transcript into the target language using Modal.com
+    """
+
+    def __init__(self, modal_api_key: str | None = None, **kwargs):
+        super().__init__(**kwargs)
+        if not settings.TRANSLATE_URL:
+            raise Exception(
+                "TRANSLATE_URL is required for TranscriptTranslatorModalProcessor"
+            )
+        self.translate_url = settings.TRANSLATE_URL
+        self.timeout = settings.TRANSLATE_TIMEOUT
+        self.modal_api_key = modal_api_key
+        self.headers = {}
+        if self.modal_api_key:
+            self.headers["Authorization"] = f"Bearer {self.modal_api_key}"
+
+    async def _translate(self, text: str) -> str | None:
+        source_language = self.get_pref("audio:source_language", "en")
+        target_language = self.get_pref("audio:target_language", "en")
+
+        languages = TranslationLanguages()
+        # Only way to set the target should be the UI element like dropdown.
+        # Hence, this assert should never fail.
+        assert languages.is_supported(target_language)
+        self.logger.debug(f"Try to translate {text=}")
+        json_payload = {
+            "text": text,
+            "source_language": source_language,
+            "target_language": target_language,
+        }
+
+        async with httpx.AsyncClient() as client:
+            response = await retry(client.post)(
+                self.translate_url + "/translate",
+                headers=self.headers,
+                params=json_payload,
+                timeout=self.timeout,
+                follow_redirects=True,
+                logger=self.logger,
+            )
+            response.raise_for_status()
+            result = response.json()["text"]
+
+        # Sanity check for translation status in the result
+        if target_language in result:
+            translation = result[target_language]
+        else:
+            translation = None
+        self.logger.debug(f"Translation response: {text=}, {translation=}")
+        return translation
+
+
+TranscriptTranslatorAutoProcessor.register("modal", TranscriptTranslatorModalProcessor)
@@ -0,0 +1,14 @@
+from reflector.processors.transcript_translator import TranscriptTranslatorProcessor
+from reflector.processors.transcript_translator_auto import (
+    TranscriptTranslatorAutoProcessor,
+)
+
+
+class TranscriptTranslatorPassthroughProcessor(TranscriptTranslatorProcessor):
+    async def _translate(self, text: str) -> None:
+        return None
+
+
+TranscriptTranslatorAutoProcessor.register(
+    "passthrough", TranscriptTranslatorPassthroughProcessor
+)
@@ -9,13 +9,14 @@ class Settings(BaseSettings):
     )

     # CORS
+    UI_BASE_URL: str = "http://localhost:3000"
     CORS_ORIGIN: str = "*"
     CORS_ALLOW_CREDENTIALS: bool = False

     # Database
     DATABASE_URL: str = "sqlite:///./reflector.sqlite3"

-    # local data directory (audio for no)
+    # local data directory
     DATA_DIR: str = "./data"

     # Audio Transcription
@@ -24,11 +25,7 @@ class Settings(BaseSettings):
     TRANSCRIPT_URL: str | None = None
     TRANSCRIPT_TIMEOUT: int = 90

-    # Translate into the target language
-    TRANSLATE_URL: str | None = None
-    TRANSLATE_TIMEOUT: int = 90
-
-    # Audio transcription modal.com configuration
+    # Audio Transcription: modal backend
     TRANSCRIPT_MODAL_API_KEY: str | None = None

     # Audio transcription storage
@@ -40,37 +37,28 @@ class Settings(BaseSettings):
     TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID: str | None = None
     TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY: str | None = None

+    # Translate into the target language
+    TRANSLATION_BACKEND: str = "passthrough"
+    TRANSLATE_URL: str | None = None
+    TRANSLATE_TIMEOUT: int = 90
+
+    # Translation: modal backend
+    TRANSLATE_MODAL_API_KEY: str | None = None
+
     # LLM
-    # available backend: openai, modal
-    LLM_BACKEND: str = "modal"
-
-    # LLM common configuration
     LLM_MODEL: str = "microsoft/phi-4"
     LLM_URL: str | None = None
-    LLM_HOST: str = "localhost"
-    LLM_PORT: int = 7860
-    LLM_OPENAI_KEY: str | None = None
-    LLM_OPENAI_MODEL: str = "gpt-3.5-turbo"
-    LLM_OPENAI_TEMPERATURE: float = 0.7
-    LLM_TIMEOUT: int = 60 * 5  # take cold start into account
     LLM_MAX_TOKENS: int = 1024
     LLM_TEMPERATURE: float = 0.7
-    ZEPHYR_LLM_URL: str | None = None
-    HERMES_3_8B_LLM_URL: str | None = None
-
-    # LLM Modal configuration
-    LLM_MODAL_API_KEY: str | None = None
-
-    # per-task cases
-    SUMMARY_MODEL: str = "monadical/private/smart"
-    SUMMARY_LLM_URL: str | None = None
-    SUMMARY_LLM_API_KEY: str | None = None
-    SUMMARY_LLM_CONTEXT_SIZE_TOKENS: int = 16000
+    LLM_API_KEY: str | None = None
+    LLM_CONTEXT_WINDOW: int = 16000

     # Diarization
     DIARIZATION_ENABLED: bool = True
     DIARIZATION_BACKEND: str = "modal"
     DIARIZATION_URL: str | None = None

+    # Diarization: modal backend
+    DIARIZATION_MODAL_API_KEY: str | None = None
+
     # Sentry
     SENTRY_DSN: str | None = None
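For reference, an illustrative .env fragment for the reworked translation settings (keys from the hunk above, values are placeholders):

TRANSLATION_BACKEND=modal
TRANSLATE_URL=https://example--translator.modal.run
TRANSLATE_TIMEOUT=90
TRANSLATE_MODAL_API_KEY=placeholder-key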
@@ -86,12 +74,6 @@ class Settings(BaseSettings):
     # if set, all anonymous record will be public
     PUBLIC_MODE: bool = False

-    # Default LLM model name
-    DEFAULT_LLM: str = "lmsys/vicuna-13b-v1.5"
-
-    # Cache directory for all model storage
-    CACHE_DIR: str = "./data"
-
     # Min transcript length to generate topic + summary
     MIN_TRANSCRIPT_LENGTH: int = 750
@@ -116,24 +98,20 @@ class Settings(BaseSettings):
     # Healthcheck
     HEALTHCHECK_URL: str | None = None

-    AWS_PROCESS_RECORDING_QUEUE_URL: str | None = None
-    SQS_POLLING_TIMEOUT_SECONDS: int = 60
-
     # Whereby integration
     WHEREBY_API_URL: str = "https://api.whereby.dev/v1"
-
     WHEREBY_API_KEY: str | None = None
-
+    WHEREBY_WEBHOOK_SECRET: str | None = None
     AWS_WHEREBY_S3_BUCKET: str | None = None
     AWS_WHEREBY_ACCESS_KEY_ID: str | None = None
     AWS_WHEREBY_ACCESS_KEY_SECRET: str | None = None
+    AWS_PROCESS_RECORDING_QUEUE_URL: str | None = None
+    SQS_POLLING_TIMEOUT_SECONDS: int = 60

     # Zulip integration
     ZULIP_REALM: str | None = None
     ZULIP_API_KEY: str | None = None
     ZULIP_BOT_EMAIL: str | None = None

-    UI_BASE_URL: str = "http://localhost:3000"
-
-    WHEREBY_WEBHOOK_SECRET: str | None = None
-

 settings = Settings()
@@ -13,7 +13,7 @@ from reflector.processors import (
     TranscriptFinalTitleProcessor,
     TranscriptLinerProcessor,
     TranscriptTopicDetectorProcessor,
-    TranscriptTranslatorProcessor,
+    TranscriptTranslatorAutoProcessor,
 )
 from reflector.processors.base import BroadcastProcessor

@@ -31,7 +31,7 @@ async def process_audio_file(
         AudioMergeProcessor(),
         AudioTranscriptAutoProcessor.as_threaded(),
         TranscriptLinerProcessor(),
-        TranscriptTranslatorProcessor.as_threaded(),
+        TranscriptTranslatorAutoProcessor.as_threaded(),
     ]
     if not only_transcript:
         processors += [
@@ -27,7 +27,7 @@ from reflector.processors import (
     TranscriptFinalTitleProcessor,
     TranscriptLinerProcessor,
     TranscriptTopicDetectorProcessor,
-    TranscriptTranslatorProcessor,
+    TranscriptTranslatorAutoProcessor,
 )
 from reflector.processors.base import BroadcastProcessor, Processor
 from reflector.processors.types import (
@@ -103,7 +103,7 @@ async def process_audio_file_with_diarization(

     processors += [
         TranscriptLinerProcessor(),
-        TranscriptTranslatorProcessor.as_threaded(),
+        TranscriptTranslatorAutoProcessor.as_threaded(),
     ]

     if not only_transcript:
server/reflector/utils/text.py (new file, 33 lines)
@@ -0,0 +1,33 @@
+def clean_title(title: str) -> str:
+    """
+    Clean and format a title string for consistent capitalization.
+
+    Rules:
+    - Strip surrounding quotes (single or double)
+    - Capitalize the first word
+    - Capitalize words longer than 3 characters
+    - Keep words with 3 or fewer characters lowercase (except first word)
+
+    Args:
+        title: The title string to clean
+
+    Returns:
+        The cleaned title with consistent capitalization
+
+    Examples:
+        >>> clean_title("hello world")
+        "Hello World"
+        >>> clean_title("meeting with the team")
+        "Meeting With the Team"
+        >>> clean_title("'Title with quotes'")
+        "Title With Quotes"
+    """
+    title = title.strip("\"'")
+    words = title.split()
+    if words:
+        words = [
+            word.capitalize() if i == 0 or len(word) > 3 else word.lower()
+            for i, word in enumerate(words)
+        ]
+        title = " ".join(words)
+    return title
@@ -51,24 +51,6 @@ async def transcript_get_audio_mp3(
         transcript_id, user_id=user_id
     )

-    if transcript.audio_location == "storage":
-        # proxy S3 file, to prevent issue with CORS
-        url = await transcript.get_audio_url()
-        headers = {}
-
-        copy_headers = ["range", "accept-encoding"]
-        for header in copy_headers:
-            if header in request.headers:
-                headers[header] = request.headers[header]
-
-        async with httpx.AsyncClient() as client:
-            resp = await client.request(request.method, url, headers=headers)
-            return Response(
-                content=resp.content,
-                status_code=resp.status_code,
-                headers=resp.headers,
-            )
-
     if transcript.audio_location == "storage":
         # proxy S3 file, to prevent issue with CORS
         url = await transcript.get_audio_url()
@@ -7,14 +7,10 @@ import pytest
 @pytest.fixture(scope="function", autouse=True)
 @pytest.mark.asyncio
 async def setup_database():
-    from reflector.settings import settings
-
-    with NamedTemporaryFile() as f:
-        settings.DATABASE_URL = f"sqlite:///{f.name}"
-        from reflector.db import engine, metadata
+    from reflector.db import engine, metadata  # noqa

     metadata.drop_all(bind=engine)
     metadata.create_all(bind=engine)

     yield
@@ -33,17 +29,16 @@ def dummy_processors():
         patch(
             "reflector.processors.transcript_final_summary.TranscriptFinalSummaryProcessor.get_short_summary"
         ) as mock_short_summary,
-        patch(
-            "reflector.processors.transcript_translator.TranscriptTranslatorProcessor.get_translation"
-        ) as mock_translate,
     ):
-        mock_topic.return_value = {"title": "LLM TITLE", "summary": "LLM SUMMARY"}
-        mock_title.return_value = {"title": "LLM TITLE"}
+        from reflector.processors.transcript_topic_detector import TopicResponse
+
+        mock_topic.return_value = TopicResponse(
+            title="LLM TITLE", summary="LLM SUMMARY"
+        )
+        mock_title.return_value = "LLM Title"
         mock_long_summary.return_value = "LLM LONG SUMMARY"
         mock_short_summary.return_value = "LLM SHORT SUMMARY"
-        mock_translate.return_value = "Bonjour le monde"
         yield (
-            mock_translate,
             mock_topic,
             mock_title,
             mock_long_summary,
@@ -101,16 +96,38 @@ async def dummy_diarization():
         yield


+@pytest.fixture
+async def dummy_transcript_translator():
+    from reflector.processors.transcript_translator import TranscriptTranslatorProcessor
+
+    class TestTranscriptTranslatorProcessor(TranscriptTranslatorProcessor):
+        async def _translate(self, text: str) -> str:
+            source_language = self.get_pref("audio:source_language", "en")
+            target_language = self.get_pref("audio:target_language", "en")
+            return f"{source_language}:{target_language}:{text}"
+
+    def mock_new(cls, *args, **kwargs):
+        return TestTranscriptTranslatorProcessor(*args, **kwargs)
+
+    with patch(
+        "reflector.processors.transcript_translator_auto"
+        ".TranscriptTranslatorAutoProcessor.__new__",
+        mock_new,
+    ):
+        yield
+
+
 @pytest.fixture
 async def dummy_llm():
-    from reflector.llm.base import LLM
+    from reflector.llm import LLM

     class TestLLM(LLM):
         def __init__(self):
             self.model_name = "DUMMY MODEL"
             self.llm_tokenizer = "DUMMY TOKENIZER"

-    with patch("reflector.llm.base.LLM.get_instance") as mock_llm:
+    # LLM doesn't have get_instance anymore, mocking constructor instead
+    with patch("reflector.llm.LLM") as mock_llm:
         mock_llm.return_value = TestLLM()
         yield
@@ -129,22 +146,19 @@ async def dummy_storage():
         async def _get_file_url(self, *args, **kwargs):
             return "http://fake_server/audio.mp3"

-    with patch("reflector.storage.base.Storage.get_instance") as mock_storage:
-        mock_storage.return_value = DummyStorage()
-        yield
+        async def _get_file(self, *args, **kwargs):
+            from pathlib import Path

+            test_mp3 = Path(__file__).parent / "records" / "test_mathieu_hello.mp3"
+            return test_mp3.read_bytes()

-@pytest.fixture
-def nltk():
-    with patch("reflector.llm.base.LLM.ensure_nltk") as mock_nltk:
-        mock_nltk.return_value = "NLTK PACKAGE"
-        yield
-
-
-@pytest.fixture
-def ensure_casing():
-    with patch("reflector.llm.base.LLM.ensure_casing") as mock_casing:
-        mock_casing.return_value = "LLM TITLE"
+    dummy = DummyStorage()
+    with (
+        patch("reflector.storage.base.Storage.get_instance") as mock_storage,
+        patch("reflector.storage.get_transcripts_storage") as mock_get_transcripts,
+    ):
+        mock_storage.return_value = dummy
+        mock_get_transcripts.return_value = dummy
         yield
@@ -2,7 +2,7 @@ import pytest


 @pytest.mark.asyncio
-async def test_processor_broadcast(nltk):
+async def test_processor_broadcast():
     from reflector.processors.base import BroadcastProcessor, Pipeline, Processor

     class TestProcessor(Processor):
@@ -3,11 +3,9 @@ import pytest

 @pytest.mark.asyncio
 async def test_basic_process(
-    nltk,
     dummy_transcript,
     dummy_llm,
     dummy_processors,
-    ensure_casing,
 ):
     # goal is to start the server, and send rtc audio to it
     # validate the events received
@@ -16,8 +14,8 @@ async def test_basic_process(
     from reflector.settings import settings
     from reflector.tools.process import process_audio_file

-    # use an LLM test backend
-    settings.LLM_BACKEND = "test"
+    # LLM_BACKEND no longer exists in settings
+    # settings.LLM_BACKEND = "test"
     settings.TRANSCRIPT_BACKEND = "whisper"

     # event callback
@@ -35,7 +33,7 @@ async def test_basic_process(

     # validate the events
     assert marks["TranscriptLinerProcessor"] == 1
-    assert marks["TranscriptTranslatorProcessor"] == 1
+    assert marks["TranscriptTranslatorPassthroughProcessor"] == 1
     assert marks["TranscriptTopicDetectorProcessor"] == 1
     assert marks["TranscriptFinalSummaryProcessor"] == 1
     assert marks["TranscriptFinalTitleProcessor"] == 1
@@ -10,7 +10,6 @@ from httpx import AsyncClient
 @pytest.mark.asyncio
 async def test_transcript_process(
     tmpdir,
-    ensure_casing,
     dummy_llm,
     dummy_processors,
     dummy_diarization,
@@ -69,7 +68,7 @@ async def test_transcript_process(
     transcript = resp.json()
     assert transcript["status"] == "ended"
     assert transcript["short_summary"] == "LLM SHORT SUMMARY"
-    assert transcript["title"] == "LLM TITLE"
+    assert transcript["title"] == "Llm Title"

     # check topics and transcript
     response = await ac.get(f"/transcripts/{tid}/topics")
@@ -67,10 +67,9 @@ async def test_transcript_rtc_and_websocket(
     dummy_transcript,
     dummy_processors,
     dummy_diarization,
+    dummy_transcript_translator,
     dummy_storage,
     fake_mp3_upload,
-    ensure_casing,
-    nltk,
     appserver,
 ):
     # goal: start the server, exchange RTC, receive websocket events
@@ -166,7 +165,7 @@ async def test_transcript_rtc_and_websocket(
     assert "TRANSCRIPT" in eventnames
     ev = events[eventnames.index("TRANSCRIPT")]
     assert ev["data"]["text"].startswith("Hello world.")
-    assert ev["data"]["translation"] == "Bonjour le monde"
+    assert ev["data"]["translation"] is None

     assert "TOPIC" in eventnames
     ev = events[eventnames.index("TOPIC")]
@@ -185,7 +184,7 @@ async def test_transcript_rtc_and_websocket(

     assert "FINAL_TITLE" in eventnames
     ev = events[eventnames.index("FINAL_TITLE")]
-    assert ev["data"]["title"] == "LLM TITLE"
+    assert ev["data"]["title"] == "Llm Title"

     assert "WAVEFORM" in eventnames
     ev = events[eventnames.index("WAVEFORM")]
@@ -226,10 +225,9 @@ async def test_transcript_rtc_and_websocket_and_fr(
     dummy_transcript,
     dummy_processors,
     dummy_diarization,
+    dummy_transcript_translator,
     dummy_storage,
     fake_mp3_upload,
-    ensure_casing,
-    nltk,
     appserver,
 ):
     # goal: start the server, exchange RTC, receive websocket events
@@ -334,7 +332,7 @@ async def test_transcript_rtc_and_websocket_and_fr(
     assert "TRANSCRIPT" in eventnames
     ev = events[eventnames.index("TRANSCRIPT")]
     assert ev["data"]["text"].startswith("Hello world.")
-    assert ev["data"]["translation"] == "Bonjour le monde"
+    assert ev["data"]["translation"] == "en:fr:Hello world."

     assert "TOPIC" in eventnames
     ev = events[eventnames.index("TOPIC")]
@@ -353,7 +351,7 @@ async def test_transcript_rtc_and_websocket_and_fr(

     assert "FINAL_TITLE" in eventnames
     ev = events[eventnames.index("FINAL_TITLE")]
-    assert ev["data"]["title"] == "LLM TITLE"
+    assert ev["data"]["title"] == "Llm Title"

     # check status order
     statuses = [e["data"]["value"] for e in events if e["event"] == "STATUS"]
@@ -10,7 +10,6 @@ from httpx import AsyncClient
 @pytest.mark.asyncio
 async def test_transcript_upload_file(
     tmpdir,
-    ensure_casing,
     dummy_llm,
     dummy_processors,
     dummy_diarization,
@@ -53,7 +52,7 @@ async def test_transcript_upload_file(
     transcript = resp.json()
     assert transcript["status"] == "ended"
     assert transcript["short_summary"] == "LLM SHORT SUMMARY"
-    assert transcript["title"] == "LLM TITLE"
+    assert transcript["title"] == "Llm Title"

     # check topics and transcript
     response = await ac.get(f"/transcripts/{tid}/topics")
server/tests/test_utils_text.py (new file, 21 lines)
@@ -0,0 +1,21 @@
+import pytest
+
+from reflector.utils.text import clean_title
+
+
+@pytest.mark.parametrize(
+    "input_title,expected",
+    [
+        ("hello world", "Hello World"),
+        ("HELLO WORLD", "Hello World"),
+        ("hello WORLD", "Hello World"),
+        ("the quick brown fox", "The Quick Brown fox"),
+        ("discussion about API design", "Discussion About api Design"),
+        ("Q1 2024 budget review", "Q1 2024 Budget Review"),
+        ("'Title with quotes'", "Title With Quotes"),
+        ("'title with quotes'", "Title With Quotes"),
+        ("MiXeD CaSe WoRdS", "Mixed Case Words"),
+    ],
+)
+def test_clean_title(input_title, expected):
+    assert clean_title(input_title) == expected
server/uv.lock (generated)
@@ -2428,6 +2428,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/57/79/9dae84c244dabebca6a952e098d6ac9d13719b701fc5323ba6d00abc675a/pytest_docker_tools-3.1.9-py2.py3-none-any.whl", hash = "sha256:36f8e88d56d84ea177df68a175673681243dd991d2807fbf551d90f60341bfdb", size = 29268, upload-time = "2025-03-16T13:48:22.184Z" },
 ]

+[[package]]
+name = "pytest-env"
+version = "1.1.5"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "pytest" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/1f/31/27f28431a16b83cab7a636dce59cf397517807d247caa38ee67d65e71ef8/pytest_env-1.1.5.tar.gz", hash = "sha256:91209840aa0e43385073ac464a554ad2947cc2fd663a9debf88d03b01e0cc1cf", size = 8911, upload-time = "2024-09-17T22:39:18.566Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/de/b8/87cfb16045c9d4092cfcf526135d73b88101aac83bc1adcf82dfb5fd3833/pytest_env-1.1.5-py3-none-any.whl", hash = "sha256:ce90cf8772878515c24b31cd97c7fa1f4481cd68d588419fd45f10ecaee6bc30", size = 6141, upload-time = "2024-09-17T22:39:16.942Z" },
+]
+
 [[package]]
 name = "pytest-httpx"
 version = "0.34.0"
@@ -2618,7 +2630,6 @@ dependencies = [
     { name = "aiortc" },
     { name = "alembic" },
     { name = "av" },
-    { name = "black" },
     { name = "celery" },
     { name = "databases", extra = ["aiosqlite", "asyncpg"] },
     { name = "fastapi", extra = ["standard"] },
@@ -2636,6 +2647,7 @@ dependencies = [
     { name = "protobuf" },
     { name = "psycopg2-binary" },
     { name = "pydantic-settings" },
+    { name = "pytest-env" },
     { name = "python-jose", extra = ["cryptography"] },
     { name = "python-multipart" },
     { name = "redis" },
@@ -2681,7 +2693,6 @@ requires-dist = [
     { name = "aiortc", specifier = ">=1.5.0" },
     { name = "alembic", specifier = ">=1.11.3" },
     { name = "av", specifier = ">=10.0.0" },
-    { name = "black", specifier = "==24.1.1" },
     { name = "celery", specifier = ">=5.3.4" },
     { name = "databases", extras = ["aiosqlite", "asyncpg"], specifier = ">=0.7.0" },
     { name = "fastapi", extras = ["standard"], specifier = ">=0.100.1" },
@@ -2699,6 +2710,7 @@ requires-dist = [
     { name = "protobuf", specifier = ">=4.24.3" },
     { name = "psycopg2-binary", specifier = ">=2.9.10" },
     { name = "pydantic-settings", specifier = ">=2.0.2" },
+    { name = "pytest-env", specifier = ">=1.1.5" },
     { name = "python-jose", extras = ["cryptography"], specifier = ">=3.3.0" },
     { name = "python-multipart", specifier = ">=0.0.6" },
     { name = "redis", specifier = ">=5.0.1" },
@@ -1,86 +0,0 @@
-# Chakra UI v3 Migration - Remaining Tasks
-
-## Completed
-
-- ✅ Migrated from Chakra UI v2 to v3 in package.json
-- ✅ Updated theme.ts with whiteAlpha color palette and semantic tokens
-- ✅ Added button recipe with fontWeight 600 and hover states
-- ✅ Moved Poppins font from theme to HTML tag className
-- ✅ Fixed deprecated props across all files:
-  - ✅ `isDisabled` → `disabled` (all occurrences fixed)
-  - ✅ `isChecked` → `checked` (all occurrences fixed)
-  - ✅ `isLoading` → `loading` (all occurrences fixed)
-  - ✅ `isOpen` → `open` (all occurrences fixed)
-  - ✅ `noOfLines` → `lineClamp` (all occurrences fixed)
-  - ✅ `align` → `alignItems` on Flex/Stack components (all occurrences fixed)
-  - ✅ `justify` → `justifyContent` on Flex/Stack components (all occurrences fixed)
-
-## Migration Summary
-
-### Files Modified
-
-1. **app/(app)/rooms/page.tsx**
-
-   - Fixed: isDisabled, isChecked, align, justify on multiple components
-   - Updated temporary Select component props
-
-2. **app/(app)/transcripts/fileUploadButton.tsx**
-
-   - Fixed: isDisabled → disabled
-
-3. **app/(app)/transcripts/shareZulip.tsx**
-
-   - Fixed: isDisabled → disabled
-
-4. **app/(app)/transcripts/shareAndPrivacy.tsx**
-
-   - Fixed: isLoading → loading, isOpen → open
-   - Updated temporary Select component props
-
-5. **app/(app)/browse/page.tsx**
-
-   - Fixed: isOpen → open, align → alignItems, justify → justifyContent
-
-6. **app/(app)/transcripts/transcriptTitle.tsx**
-
-   - Fixed: noOfLines → lineClamp
-
-7. **app/(app)/transcripts/[transcriptId]/correct/topicHeader.tsx**
-
-   - Fixed: noOfLines → lineClamp
-
-8. **app/lib/expandableText.tsx**
-
-   - Fixed: noOfLines → lineClamp
-
-9. **app/[roomName]/page.tsx**
-
-   - Fixed: align → alignItems, justify → justifyContent
-
-10. **app/lib/WherebyWebinarEmbed.tsx**
-    - Fixed: align → alignItems, justify → justifyContent
-
-## Other Potential Issues
-
-1. Check for Modal/Dialog component imports and usage (currently using temporary replacements)
-2. Review Select component usage (using temporary replacements)
-3. Test button hover states for whiteAlpha color palette
-4. Verify all color palettes work correctly with the new semantic tokens
-
-## Testing
-
-After completing migrations:
-
-1. Run `yarn dev` and check all pages
-2. Test buttons with different color palettes
-3. Verify disabled states work correctly
-4. Check that text alignment and flex layouts are correct
-5. Test modal/dialog functionality
-
-## Next Steps
-
-The Chakra UI v3 migration is now largely complete for deprecated props. The main remaining items are:
-
-- Replace temporary Modal and Select components with proper Chakra v3 implementations
-- Thorough testing of all UI components
-- Performance optimization if needed
@@ -32,7 +32,7 @@ export default function TranscriptDetails(details: TranscriptDetails) {
   const topics = useTopics(transcriptId);
   const waveform = useWaveform(
     transcriptId,
-    waiting || mp3.loading || mp3.audioDeleted === true,
+    waiting || mp3.audioDeleted === true,
   );
   const useActiveTopic = useState<Topic | null>(null);
@@ -21,7 +21,7 @@
     "@vercel/kv": "^2.0.0",
     "@whereby.com/browser-sdk": "^3.3.4",
     "autoprefixer": "10.4.20",
-    "axios": "^1.6.2",
+    "axios": "^1.8.2",
     "chakra-react-select": "^4.9.1",
     "eslint": "^9.9.1",
     "eslint-config-next": "^14.2.7",
@@ -29,10 +29,10 @@
     "ioredis": "^5.4.1",
     "jest-worker": "^29.6.2",
     "lucide-react": "^0.525.0",
-    "next": "^14.2.7",
+    "next": "^14.2.30",
    "next-auth": "^4.24.7",
     "next-themes": "^0.4.6",
-    "postcss": "8.4.25",
+    "postcss": "8.4.31",
     "prop-types": "^15.8.1",
     "react": "^18.2.0",
     "react-dom": "^18.2.0",
www/yarn.lock
@@ -119,19 +119,10 @@
       chalk "^2.4.2"
       js-tokens "^4.0.0"

-"@babel/runtime@^7.12.0", "@babel/runtime@^7.12.5", "@babel/runtime@^7.18.3", "@babel/runtime@^7.23.2", "@babel/runtime@^7.5.5", "@babel/runtime@^7.8.7":
-  version "7.23.6"
-  resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.23.6.tgz#c05e610dc228855dc92ef1b53d07389ed8ab521d"
-  integrity sha512-zHd0eUrf5GZoOWVCXp6koAKQTfZV07eit6bGPmJgnZdnSAvvZee6zniW2XMF7Cmc4ISOOnPy3QaSiIJGJkVEDQ==
-  dependencies:
-    regenerator-runtime "^0.14.0"
-
-"@babel/runtime@^7.20.13":
-  version "7.25.6"
-  resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.25.6.tgz#9afc3289f7184d8d7f98b099884c26317b9264d2"
-  integrity sha512-VBj9MYyDb9tuLq7yzqjgzt6Q+IBQLrGZfdjOekyEirZPHxXWoTSGUTMrpsfi58Up73d13NfYLv8HT9vmznjzhQ==
-  dependencies:
-    regenerator-runtime "^0.14.0"
+"@babel/runtime@^7.12.0", "@babel/runtime@^7.12.5", "@babel/runtime@^7.18.3", "@babel/runtime@^7.20.13", "@babel/runtime@^7.23.2", "@babel/runtime@^7.5.5", "@babel/runtime@^7.8.7":
+  version "7.28.2"
+  resolved "https://registry.yarnpkg.com/@babel/runtime/-/runtime-7.28.2.tgz#2ae5a9d51cc583bd1f5673b3bb70d6d819682473"
+  integrity sha512-KHp2IflsnGywDjBWDkR9iEqiWSpc8GIi0lgTT3mOElT0PP1tG26P4tmFI2YvAdzgq9RGyoHZQEIEdZy6Ec5xCA==

 "@babel/types@^7.22.15":
   version "7.23.6"
@@ -658,10 +649,10 @@
       semver "^7.3.5"
       tar "^6.1.11"

-"@next/env@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/env/-/env-14.2.7.tgz#40fcd6ccdd53fd7e6788a0604f39032c84bea112"
-  integrity sha512-OTx9y6I3xE/eih+qtthppwLytmpJVPM5PPoJxChFsbjIEFXIayG0h/xLzefHGJviAa3Q5+Fd+9uYojKkHDKxoQ==
+"@next/env@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/env/-/env-14.2.30.tgz#f955b57975751584722b6b0a2a8cf2bdcc4ffae3"
+  integrity sha512-KBiBKrDY6kxTQWGzKjQB7QirL3PiiOkV7KW98leHFjtVRKtft76Ra5qSA/SL75xT44dp6hOcqiiJ6iievLOYug==

 "@next/eslint-plugin-next@14.2.7":
   version "14.2.7"
@@ -670,50 +661,50 @@
   dependencies:
     glob "10.3.10"

-"@next/swc-darwin-arm64@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.7.tgz#6cd39ba5d5f43705de44e389d4b4f5d2df391927"
-  integrity sha512-UhZGcOyI9LE/tZL3h9rs/2wMZaaJKwnpAyegUVDGZqwsla6hMfeSj9ssBWQS9yA4UXun3pPhrFLVnw5KXZs3vw==
+"@next/swc-darwin-arm64@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.30.tgz#8179a35a068bc6f43a9ab6439875f6e330d02e52"
+  integrity sha512-EAqfOTb3bTGh9+ewpO/jC59uACadRHM6TSA9DdxJB/6gxOpyV+zrbqeXiFTDy9uV6bmipFDkfpAskeaDcO+7/g==

-"@next/swc-darwin-x64@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.7.tgz#a1d191a293443cf8df9451b8f13a348caa718cb7"
-  integrity sha512-ys2cUgZYRc+CbyDeLAaAdZgS7N1Kpyy+wo0b/gAj+SeOeaj0Lw/q+G1hp+DuDiDAVyxLBCJXEY/AkhDmtihUTA==
+"@next/swc-darwin-x64@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.30.tgz#87c08d805c0546a73c25a0538a81f8b5f43bd0e9"
+  integrity sha512-TyO7Wz1IKE2kGv8dwQ0bmPL3s44EKVencOqwIY69myoS3rdpO1NPg5xPM5ymKu7nfX4oYJrpMxv8G9iqLsnL4A==

-"@next/swc-linux-arm64-gnu@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.7.tgz#9da3f993b3754b900fe7b469de51898fc51112f2"
-  integrity sha512-2xoWtE13sUJ3qrC1lwE/HjbDPm+kBQYFkkiVECJWctRASAHQ+NwjMzgrfqqMYHfMxFb5Wws3w9PqzZJqKFdWcQ==
+"@next/swc-linux-arm64-gnu@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.30.tgz#eed26d87d96d9ef6fffbde98ceed2c75108a9911"
+  integrity sha512-I5lg1fgPJ7I5dk6mr3qCH1hJYKJu1FsfKSiTKoYwcuUf53HWTrEkwmMI0t5ojFKeA6Vu+SfT2zVy5NS0QLXV4Q==

-"@next/swc-linux-arm64-musl@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.7.tgz#f75662bdedd2d91ad7e05778274fa17659f1f02f"
-  integrity sha512-+zJ1gJdl35BSAGpkCbfyiY6iRTaPrt3KTl4SF/B1NyELkqqnrNX6cp4IjjjxKpd64/7enI0kf6b9O1Uf3cL0pw==
+"@next/swc-linux-arm64-musl@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.30.tgz#54b38b43c8acf3d3e0b71ae208a0bfca5a9b8563"
+  integrity sha512-8GkNA+sLclQyxgzCDs2/2GSwBc92QLMrmYAmoP2xehe5MUKBLB2cgo34Yu242L1siSkwQkiV4YLdCnjwc/Micw==

-"@next/swc-linux-x64-gnu@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.7.tgz#3c6c5b551a5af4fc8178bd5733c8063266034e79"
-  integrity sha512-m6EBqrskeMUzykBrv0fDX/28lWIBGhMzOYaStp0ihkjzIYJiKUOzVYD1gULHc8XDf5EMSqoH/0/TRAgXqpQwmw==
+"@next/swc-linux-x64-gnu@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.30.tgz#0ee0419da4dc1211a4c925b0841419cd07aa6c59"
+  integrity sha512-8Ly7okjssLuBoe8qaRCcjGtcMsv79hwzn/63wNeIkzJVFVX06h5S737XNr7DZwlsbTBDOyI6qbL2BJB5n6TV/w==

-"@next/swc-linux-x64-musl@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.7.tgz#16f92f00263d1fce91ae80e5f230eb1feea484e4"
-  integrity sha512-gUu0viOMvMlzFRz1r1eQ7Ql4OE+hPOmA7smfZAhn8vC4+0swMZaZxa9CSIozTYavi+bJNDZ3tgiSdMjmMzRJlQ==
+"@next/swc-linux-x64-musl@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.30.tgz#e88463d8c10dd600087b062f2dea59a515cd66f6"
+  integrity sha512-dBmV1lLNeX4mR7uI7KNVHsGQU+OgTG5RGFPi3tBJpsKPvOPtg9poyav/BYWrB3GPQL4dW5YGGgalwZ79WukbKQ==

-"@next/swc-win32-arm64-msvc@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.7.tgz#1224cb8a04cd9caad785a2187df9e85b49414a42"
-  integrity sha512-PGbONHIVIuzWlYmLvuFKcj+8jXnLbx4WrlESYlVnEzDsa3+Q2hI1YHoXaSmbq0k4ZwZ7J6sWNV4UZfx1OeOlbQ==
+"@next/swc-win32-arm64-msvc@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.30.tgz#6975cbbab74d519b06d93210ed86cd4f3dbc1c4d"
+  integrity sha512-6MMHi2Qc1Gkq+4YLXAgbYslE1f9zMGBikKMdmQRHXjkGPot1JY3n5/Qrbg40Uvbi8//wYnydPnyvNhI1DMUW1g==

-"@next/swc-win32-ia32-msvc@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.7.tgz#9494aaf9cc50ddef600f8c1b2ed0f216b19f9294"
-  integrity sha512-BiSY5umlx9ed5RQDoHcdbuKTUkuFORDqzYKPHlLeS+STUWQKWziVOn3Ic41LuTBvqE0TRJPKpio9GSIblNR+0w==
+"@next/swc-win32-ia32-msvc@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.30.tgz#08ad4de2e082bc6b07d41099b4310daec7885748"
+  integrity sha512-pVZMnFok5qEX4RT59mK2hEVtJX+XFfak+/rjHpyFh7juiT52r177bfFKhnlafm0UOSldhXjj32b+LZIOdswGTg==

-"@next/swc-win32-x64-msvc@14.2.7":
-  version "14.2.7"
-  resolved "https://registry.yarnpkg.com/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.7.tgz#75e1d90758cb10a547e1cdfb878871da28123682"
-  integrity sha512-pxsI23gKWRt/SPHFkDEsP+w+Nd7gK37Hpv0ngc5HpWy2e7cKx9zR/+Q2ptAUqICNTecAaGWvmhway7pj/JLEWA==
+"@next/swc-win32-x64-msvc@14.2.30":
+  version "14.2.30"
+  resolved "https://registry.yarnpkg.com/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.30.tgz#94d3ddcc1e97572a0514a6180c8e3bb415e1dc98"
+  integrity sha512-4KCo8hMZXMjpTzs3HOqOGYYwAXymXIy7PEPAXNEcEOyKqkjiDlECumrWziy+JEF0Oi4ILHGxzgQ3YiMGG2t/Lg==

 "@nodelib/fs.scandir@2.1.5":
   version "2.1.5"
@@ -2624,24 -2615,15 @@ axe-core@=4.7.0:
   resolved "https://registry.yarnpkg.com/axe-core/-/axe-core-4.7.0.tgz#34ba5a48a8b564f67e103f0aa5768d76e15bbbbf"
   integrity sha512-M0JtH+hlOL5pLQwHOLNYZaXuhqmvS8oExsqB1SBYgA4Dk7u/xx+YdGHXaK5pyUfed5mYXdlYiphWq3G8cRi5JQ==

-axios@^1.2.3:
-  version "1.7.2"
-  resolved "https://registry.yarnpkg.com/axios/-/axios-1.7.2.tgz#b625db8a7051fbea61c35a3cbb3a1daa7b9c7621"
-  integrity sha512-2A8QhOMrbomlDuiLeK9XibIBzuHeRcqqNOHp0Cyp5EoJ1IFDh+XZH3A6BkXtv0K4gFGCI0Y4BM7B1wOEi0Rmgw==
+axios@^1.2.3, axios@^1.8.2:
+  version "1.8.2"
+  resolved "https://registry.yarnpkg.com/axios/-/axios-1.8.2.tgz#fabe06e241dfe83071d4edfbcaa7b1c3a40f7979"
+  integrity sha512-ls4GYBm5aig9vWx8AWDSGLpnpDQRtWAfrjU+EuytuODrFBkqesN2RkOQCBzrA1RQNHw1SmRMSDDDSwzNAYQ6Rg==
+  dependencies:
+    follow-redirects "^1.15.6"
+    form-data "^4.0.0"
+    proxy-from-env "^1.1.0"

-axios@^1.6.2:
-  version "1.6.2"
-  resolved "https://registry.yarnpkg.com/axios/-/axios-1.6.2.tgz#de67d42c755b571d3e698df1b6504cde9b0ee9f2"
-  integrity sha512-7i24Ri4pmDRfJTR7LDBhsOTtcm+9kjX5WiY1X3wIisx6G9So3pfMkEiU7emUBe46oceVImccTEM3k6C5dbVW8A==
-  dependencies:
-    follow-redirects "^1.15.0"
-    form-data "^4.0.0"
-    proxy-from-env "^1.1.0"
-
 axobject-query@^3.2.1:
   version "3.2.1"
   resolved "https://registry.yarnpkg.com/axobject-query/-/axobject-query-3.2.1.tgz#39c378a6e3b06ca679f29138151e45b2b32da62a"
@@ -2700,14 +2682,7 @@ brace-expansion@^2.0.1:
   dependencies:
     balanced-match "^1.0.0"

-braces@^3.0.2, braces@~3.0.2:
-  version "3.0.2"
-  resolved "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz"
-  integrity sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==
-  dependencies:
-    fill-range "^7.0.1"
-
-braces@^3.0.3:
+braces@^3.0.3, braces@~3.0.2:
   version "3.0.3"
   resolved "https://registry.yarnpkg.com/braces/-/braces-3.0.3.tgz#490332f40919452272d55a8480adc0c441358789"
   integrity sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==
@@ -2772,6 +2747,14 @@ c12@1.11.1:
     pkg-types "^1.1.1"
     rc9 "^2.1.2"

+call-bind-apply-helpers@^1.0.1, call-bind-apply-helpers@^1.0.2:
+  version "1.0.2"
+  resolved "https://registry.yarnpkg.com/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz#4b5428c222be985d79c3d82657479dbe0b59b2d6"
+  integrity sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==
+  dependencies:
+    es-errors "^1.3.0"
+    function-bind "^1.1.2"
+
 call-bind@^1.0.0:
   version "1.0.2"
   resolved "https://registry.yarnpkg.com/call-bind/-/call-bind-1.0.2.tgz#b1d4e89e688119c3c9a903ad30abb2f6a919be3c"
@@ -3073,9 +3056,9 @@ create-require@^1.1.0:
   integrity sha512-dcKFX3jn0MpIaXjisoRvexIJVEKzaq7z2rZKxf+MSr9TkdmHmsU4m2lcLojrj/FHl8mk5VxMmYA+ftRkP/3oKQ==

 cross-spawn@^7.0.0, cross-spawn@^7.0.2, cross-spawn@^7.0.3:
-  version "7.0.3"
-  resolved "https://registry.yarnpkg.com/cross-spawn/-/cross-spawn-7.0.3.tgz#f73a85b9d5d41d045551c177e2882d4ac85728a6"
-  integrity sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==
+  version "7.0.6"
+  resolved "https://registry.yarnpkg.com/cross-spawn/-/cross-spawn-7.0.6.tgz#8a58fe78f00dcd70c370451759dfbfaf03e8ee9f"
+  integrity sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==
   dependencies:
     path-key "^3.1.0"
     shebang-command "^2.0.0"
@@ -3298,6 +3281,15 @@ dotenv@^16.4.5:
   resolved "https://registry.yarnpkg.com/dotenv/-/dotenv-16.4.5.tgz#cdd3b3b604cb327e286b4762e13502f717cb099f"
   integrity sha512-ZmdL2rui+eB2YwhsWzjInR8LldtZHGDoQ1ugH85ppHKwpUHL7j7rN0Ti9NCnGiQbhaZ11FpR+7ao1dNsmduNUg==

+dunder-proto@^1.0.1:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/dunder-proto/-/dunder-proto-1.0.1.tgz#d7ae667e1dc83482f8b70fd0f6eefc50da30f58a"
+  integrity sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==
+  dependencies:
+    call-bind-apply-helpers "^1.0.1"
+    es-errors "^1.3.0"
+    gopd "^1.2.0"
+
 eastasianwidth@^0.2.0:
   version "0.2.0"
   resolved "https://registry.yarnpkg.com/eastasianwidth/-/eastasianwidth-0.2.0.tgz#696ce2ec0aa0e6ea93a397ffcf24aa7840c827cb"
@@ -3428,6 +3420,16 @@ es-abstract@^1.22.1:
     unbox-primitive "^1.0.2"
     which-typed-array "^1.1.13"

+es-define-property@^1.0.1:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/es-define-property/-/es-define-property-1.0.1.tgz#983eb2f9a6724e9303f61addf011c72e09e0b0fa"
+  integrity sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==
+
+es-errors@^1.3.0:
+  version "1.3.0"
+  resolved "https://registry.yarnpkg.com/es-errors/-/es-errors-1.3.0.tgz#05f75a25dab98e4fb1dcd5e1472c0546d5057c8f"
+  integrity sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==
+
 es-iterator-helpers@^1.0.12, es-iterator-helpers@^1.0.15:
   version "1.0.15"
   resolved "https://registry.yarnpkg.com/es-iterator-helpers/-/es-iterator-helpers-1.0.15.tgz#bd81d275ac766431d19305923707c3efd9f1ae40"
@@ -3453,6 +3455,13 @@ es-module-lexer@1.4.1:
   resolved "https://registry.yarnpkg.com/es-module-lexer/-/es-module-lexer-1.4.1.tgz#41ea21b43908fe6a287ffcbe4300f790555331f5"
   integrity sha512-cXLGjP0c4T3flZJKQSuziYoq7MlT+rnvfZjfp7h+I7K9BNX54kP9nyWvdbwjQ4u1iWbOL4u96fgeZLToQlZC7w==
|
||||
|
||||
es-object-atoms@^1.0.0, es-object-atoms@^1.1.1:
|
||||
version "1.1.1"
|
||||
resolved "https://registry.yarnpkg.com/es-object-atoms/-/es-object-atoms-1.1.1.tgz#1c4f2c4837327597ce69d2ca190a7fdd172338c1"
|
||||
integrity sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==
|
||||
dependencies:
|
||||
es-errors "^1.3.0"
|
||||
|
||||
es-set-tostringtag@^2.0.1:
|
||||
version "2.0.2"
|
||||
resolved "https://registry.yarnpkg.com/es-set-tostringtag/-/es-set-tostringtag-2.0.2.tgz#11f7cc9f63376930a5f20be4915834f4bc74f9c9"
|
||||
@@ -3462,6 +3471,16 @@ es-set-tostringtag@^2.0.1:
|
||||
has-tostringtag "^1.0.0"
|
||||
hasown "^2.0.0"
|
||||
|
||||
es-set-tostringtag@^2.1.0:
|
||||
version "2.1.0"
|
||||
resolved "https://registry.yarnpkg.com/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz#f31dbbe0c183b00a6d26eb6325c810c0fd18bd4d"
|
||||
integrity sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==
|
||||
dependencies:
|
||||
es-errors "^1.3.0"
|
||||
get-intrinsic "^1.2.6"
|
||||
has-tostringtag "^1.0.2"
|
||||
hasown "^2.0.2"
|
||||
|
||||
es-shim-unscopables@^1.0.0:
|
||||
version "1.0.2"
|
||||
resolved "https://registry.yarnpkg.com/es-shim-unscopables/-/es-shim-unscopables-1.0.2.tgz#1f6942e71ecc7835ed1c8a83006d8771a63a3763"
|
||||
@@ -3963,13 +3982,6 @@ file-uri-to-path@1.0.0:
|
||||
resolved "https://registry.yarnpkg.com/file-uri-to-path/-/file-uri-to-path-1.0.0.tgz#553a7b8446ff6f684359c445f1e37a05dacc33dd"
|
||||
integrity sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw==
|
||||
|
||||
fill-range@^7.0.1:
|
||||
version "7.0.1"
|
||||
resolved "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz"
|
||||
integrity sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==
|
||||
dependencies:
|
||||
to-regex-range "^5.0.1"
|
||||
|
||||
fill-range@^7.1.1:
|
||||
version "7.1.1"
|
||||
resolved "https://registry.yarnpkg.com/fill-range/-/fill-range-7.1.1.tgz#44265d3cac07e3ea7dc247516380643754a05292"
|
||||
@@ -4003,11 +4015,6 @@ flatted@^3.2.9:
|
||||
resolved "https://registry.yarnpkg.com/flatted/-/flatted-3.2.9.tgz#7eb4c67ca1ba34232ca9d2d93e9886e611ad7daf"
|
||||
integrity sha512-36yxDn5H7OFZQla0/jFJmbIKTdZAQHngCedGxiMmpNfEZM0sdEeT+WczLQrjK6D7o2aiyLYDnkw0R3JK0Qv1RQ==
|
||||
|
||||
follow-redirects@^1.15.0:
|
||||
version "1.15.2"
|
||||
resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.15.2.tgz#b460864144ba63f2681096f274c4e57026da2c13"
|
||||
integrity sha512-VQLG33o04KaQ8uYi2tVNbdrWp1QWxNNea+nmIB4EVM28v0hmP17z7aG1+wAkNzVq4KeXTq3221ye5qTJP91JwA==
|
||||
|
||||
follow-redirects@^1.15.6:
|
||||
version "1.15.6"
|
||||
resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.15.6.tgz#7f815c0cda4249c74ff09e95ef97c23b5fd0399b"
|
||||
@@ -4034,12 +4041,14 @@ foreground-child@^3.1.0:
|
||||
signal-exit "^4.0.1"
|
||||
|
||||
form-data@^4.0.0:
|
||||
version "4.0.0"
|
||||
resolved "https://registry.yarnpkg.com/form-data/-/form-data-4.0.0.tgz#93919daeaf361ee529584b9b31664dc12c9fa452"
|
||||
integrity sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==
|
||||
version "4.0.4"
|
||||
resolved "https://registry.yarnpkg.com/form-data/-/form-data-4.0.4.tgz#784cdcce0669a9d68e94d11ac4eea98088edd2c4"
|
||||
integrity sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==
|
||||
dependencies:
|
||||
asynckit "^0.4.0"
|
||||
combined-stream "^1.0.8"
|
||||
es-set-tostringtag "^2.1.0"
|
||||
hasown "^2.0.2"
|
||||
mime-types "^2.1.12"
|
||||
|
||||
formidable@^2.1.2:
|
||||
@@ -4174,11 +4183,35 @@ get-intrinsic@^1.1.1, get-intrinsic@^1.1.3, get-intrinsic@^1.2.0, get-intrinsic@
|
||||
has-symbols "^1.0.3"
|
||||
hasown "^2.0.0"
|
||||
|
||||
get-intrinsic@^1.2.6:
|
||||
version "1.3.0"
|
||||
resolved "https://registry.yarnpkg.com/get-intrinsic/-/get-intrinsic-1.3.0.tgz#743f0e3b6964a93a5491ed1bffaae054d7f98d01"
|
||||
integrity sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==
|
||||
dependencies:
|
||||
call-bind-apply-helpers "^1.0.2"
|
||||
es-define-property "^1.0.1"
|
||||
es-errors "^1.3.0"
|
||||
es-object-atoms "^1.1.1"
|
||||
function-bind "^1.1.2"
|
||||
get-proto "^1.0.1"
|
||||
gopd "^1.2.0"
|
||||
has-symbols "^1.1.0"
|
||||
hasown "^2.0.2"
|
||||
math-intrinsics "^1.1.0"
|
||||
|
||||
get-nonce@^1.0.0:
|
||||
version "1.0.1"
|
||||
resolved "https://registry.yarnpkg.com/get-nonce/-/get-nonce-1.0.1.tgz#fdf3f0278073820d2ce9426c18f07481b1e0cdf3"
|
||||
integrity sha512-FJhYRoDaiatfEkUK8HKlicmu/3SGFD51q3itKDGoSTysQJBnfOcxU5GxnhE1E6soB76MbT0MBtnKJuXyAx+96Q==
|
||||
|
||||
get-proto@^1.0.1:
|
||||
version "1.0.1"
|
||||
resolved "https://registry.yarnpkg.com/get-proto/-/get-proto-1.0.1.tgz#150b3f2743869ef3e851ec0c49d15b1d14d00ee1"
|
||||
integrity sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==
|
||||
dependencies:
|
||||
dunder-proto "^1.0.1"
|
||||
es-object-atoms "^1.0.0"
|
||||
|
||||
get-stream@^5.0.0:
|
||||
version "5.2.0"
|
||||
resolved "https://registry.yarnpkg.com/get-stream/-/get-stream-5.2.0.tgz#4966a1795ee5ace65e706c4b7beb71257d6e22d3"
|
||||
@@ -4311,6 +4344,11 @@ gopd@^1.0.1:
|
||||
dependencies:
|
||||
get-intrinsic "^1.1.3"
|
||||
|
||||
gopd@^1.2.0:
|
||||
version "1.2.0"
|
||||
resolved "https://registry.yarnpkg.com/gopd/-/gopd-1.2.0.tgz#89f56b8217bdbc8802bd299df6d7f1081d7e51a1"
|
||||
integrity sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==
|
||||
|
||||
graceful-fs@^4.1.6, graceful-fs@^4.2.0, graceful-fs@^4.2.11, graceful-fs@^4.2.4, graceful-fs@^4.2.9:
|
||||
version "4.2.11"
|
||||
resolved "https://registry.yarnpkg.com/graceful-fs/-/graceful-fs-4.2.11.tgz#4183e4e8bf08bb6e05bbb2f7d2e0c8f712ca40e3"
|
||||
@@ -4368,6 +4406,11 @@ has-symbols@^1.0.2, has-symbols@^1.0.3:
|
||||
resolved "https://registry.yarnpkg.com/has-symbols/-/has-symbols-1.0.3.tgz#bb7b2c4349251dce87b125f7bdf874aa7c8b39f8"
|
||||
integrity sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==
|
||||
|
||||
has-symbols@^1.1.0:
|
||||
version "1.1.0"
|
||||
resolved "https://registry.yarnpkg.com/has-symbols/-/has-symbols-1.1.0.tgz#fc9c6a783a084951d0b971fe1018de813707a338"
|
||||
integrity sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==
|
||||
|
||||
has-tostringtag@^1.0.0:
|
||||
version "1.0.0"
|
||||
resolved "https://registry.yarnpkg.com/has-tostringtag/-/has-tostringtag-1.0.0.tgz#7e133818a7d394734f941e73c3d3f9291e658b25"
|
||||
@@ -4375,6 +4418,13 @@ has-tostringtag@^1.0.0:
|
||||
dependencies:
|
||||
has-symbols "^1.0.2"
|
||||
|
||||
has-tostringtag@^1.0.2:
|
||||
version "1.0.2"
|
||||
resolved "https://registry.yarnpkg.com/has-tostringtag/-/has-tostringtag-1.0.2.tgz#2cdc42d40bef2e5b4eeab7c01a73c54ce7ab5abc"
|
||||
integrity sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==
|
||||
dependencies:
|
||||
has-symbols "^1.0.3"
|
||||
|
||||
has-unicode@^2.0.1:
|
||||
version "2.0.1"
|
||||
resolved "https://registry.yarnpkg.com/has-unicode/-/has-unicode-2.0.1.tgz#e0e6fe6a28cf51138855e086d1691e771de2a8b9"
|
||||
@@ -4394,6 +4444,13 @@ hasown@^2.0.0:
|
||||
dependencies:
|
||||
function-bind "^1.1.2"
|
||||
|
||||
hasown@^2.0.2:
|
||||
version "2.0.2"
|
||||
resolved "https://registry.yarnpkg.com/hasown/-/hasown-2.0.2.tgz#003eaf91be7adc372e84ec59dc37252cedb80003"
|
||||
integrity sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==
|
||||
dependencies:
|
||||
function-bind "^1.1.2"
|
||||
|
||||
hast-util-to-jsx-runtime@^2.0.0:
|
||||
version "2.2.0"
|
||||
resolved "https://registry.yarnpkg.com/hast-util-to-jsx-runtime/-/hast-util-to-jsx-runtime-2.2.0.tgz#ffd59bfcf0eb8321c6ed511bfc4b399ac3404bc2"
|
||||
@@ -5095,6 +5152,11 @@ make-error@^1.1.1:
|
||||
resolved "https://registry.yarnpkg.com/make-error/-/make-error-1.3.6.tgz#2eb2e37ea9b67c4891f684a1394799af484cf7a2"
|
||||
integrity sha512-s8UhlNe7vPKomQhC1qFelMokr/Sc3AgNbso3n74mVPA5LTZwkB9NlXf4XPamLxJE8h0gh73rM94xvwRT2CVInw==
|
||||
|
||||
math-intrinsics@^1.1.0:
|
||||
version "1.1.0"
|
||||
resolved "https://registry.yarnpkg.com/math-intrinsics/-/math-intrinsics-1.1.0.tgz#a0dd74be81e2aa5c2f27e65ce283605ee4e2b7f9"
|
||||
integrity sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==
|
||||
|
||||
mdast-util-from-markdown@^2.0.0:
|
||||
version "2.0.0"
|
||||
resolved "https://registry.yarnpkg.com/mdast-util-from-markdown/-/mdast-util-from-markdown-2.0.0.tgz#52f14815ec291ed061f2922fd14d6689c810cb88"
|
||||
@@ -5374,7 +5436,7 @@ micromark@^4.0.0:
|
||||
micromark-util-symbol "^2.0.0"
|
||||
micromark-util-types "^2.0.0"
|
||||
|
||||
micromatch@^4.0.2:
|
||||
micromatch@^4.0.2, micromatch@^4.0.4, micromatch@^4.0.5:
|
||||
version "4.0.8"
|
||||
resolved "https://registry.yarnpkg.com/micromatch/-/micromatch-4.0.8.tgz#d66fa18f3a47076789320b9b1af32bd86d9fa202"
|
||||
integrity sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==
|
||||
@@ -5382,14 +5444,6 @@ micromatch@^4.0.2:
|
||||
braces "^3.0.3"
|
||||
picomatch "^2.3.1"
|
||||
|
||||
micromatch@^4.0.4, micromatch@^4.0.5:
|
||||
version "4.0.5"
|
||||
resolved "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz"
|
||||
integrity sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==
|
||||
dependencies:
|
||||
braces "^3.0.2"
|
||||
picomatch "^2.3.1"
|
||||
|
||||
mime-db@1.52.0:
|
||||
version "1.52.0"
|
||||
resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.52.0.tgz#bbabcdc02859f4987301c856e3387ce5ec43bf70"
|
||||
@@ -5542,9 +5596,9 @@ mz@^2.7.0:
|
||||
thenify-all "^1.0.0"
|
||||
|
||||
nanoid@^3.3.6:
|
||||
version "3.3.6"
|
||||
resolved "https://registry.npmjs.org/nanoid/-/nanoid-3.3.6.tgz"
|
||||
integrity sha512-BGcqMMJuToF7i1rt+2PWSNVnWIkGCU78jBG3RxO/bZlnZPK2Cmi2QaffxGO/2RvWi9sL+FAiRiXMgsyxQ1DIDA==
|
||||
version "3.3.11"
|
||||
resolved "https://registry.yarnpkg.com/nanoid/-/nanoid-3.3.11.tgz#4f4f112cefbe303202f2199838128936266d185b"
|
||||
integrity sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==
|
||||
|
||||
natural-compare@^1.4.0:
|
||||
version "1.4.0"
|
||||
@@ -5576,12 +5630,12 @@ next-themes@^0.4.6:
|
||||
resolved "https://registry.yarnpkg.com/next-themes/-/next-themes-0.4.6.tgz#8d7e92d03b8fea6582892a50a928c9b23502e8b6"
|
||||
integrity sha512-pZvgD5L0IEvX5/9GWyHMf3m8BKiVQwsCMHfoFosXtXBMnaS0ZnIJ9ST4b4NqLVKDEm8QBxoNNGNaBv2JNF6XNA==
|
||||
|
||||
next@^14.2.7:
|
||||
version "14.2.7"
|
||||
resolved "https://registry.yarnpkg.com/next/-/next-14.2.7.tgz#e02d5d9622ff4b998e5c89adfd660c9bf6435970"
|
||||
integrity sha512-4Qy2aK0LwH4eQiSvQWyKuC7JXE13bIopEQesWE0c/P3uuNRnZCQanI0vsrMLmUQJLAto+A+/8+sve2hd+BQuOQ==
|
||||
next@^14.2.30:
|
||||
version "14.2.30"
|
||||
resolved "https://registry.yarnpkg.com/next/-/next-14.2.30.tgz#7b7288859794574067f65d6e2ea98822f2173006"
|
||||
integrity sha512-+COdu6HQrHHFQ1S/8BBsCag61jZacmvbuL2avHvQFbWa2Ox7bE+d8FyNgxRLjXQ5wtPyQwEmk85js/AuaG2Sbg==
|
||||
dependencies:
|
||||
"@next/env" "14.2.7"
|
||||
"@next/env" "14.2.30"
|
||||
"@swc/helpers" "0.5.5"
|
||||
busboy "1.6.0"
|
||||
caniuse-lite "^1.0.30001579"
|
||||
@@ -5589,15 +5643,15 @@ next@^14.2.7:
|
||||
postcss "8.4.31"
|
||||
styled-jsx "5.1.1"
|
||||
optionalDependencies:
|
||||
"@next/swc-darwin-arm64" "14.2.7"
|
||||
"@next/swc-darwin-x64" "14.2.7"
|
||||
"@next/swc-linux-arm64-gnu" "14.2.7"
|
||||
"@next/swc-linux-arm64-musl" "14.2.7"
|
||||
"@next/swc-linux-x64-gnu" "14.2.7"
|
||||
"@next/swc-linux-x64-musl" "14.2.7"
|
||||
"@next/swc-win32-arm64-msvc" "14.2.7"
|
||||
"@next/swc-win32-ia32-msvc" "14.2.7"
|
||||
"@next/swc-win32-x64-msvc" "14.2.7"
|
||||
"@next/swc-darwin-arm64" "14.2.30"
|
||||
"@next/swc-darwin-x64" "14.2.30"
|
||||
"@next/swc-linux-arm64-gnu" "14.2.30"
|
||||
"@next/swc-linux-arm64-musl" "14.2.30"
|
||||
"@next/swc-linux-x64-gnu" "14.2.30"
|
||||
"@next/swc-linux-x64-musl" "14.2.30"
|
||||
"@next/swc-win32-arm64-msvc" "14.2.30"
|
||||
"@next/swc-win32-ia32-msvc" "14.2.30"
|
||||
"@next/swc-win32-x64-msvc" "14.2.30"
|
||||
|
||||
node-abort-controller@^3.0.1:
|
||||
version "3.1.1"
|
||||
@@ -5982,12 +6036,12 @@ perfect-freehand@^1.2.2:
|
||||
resolved "https://registry.yarnpkg.com/perfect-freehand/-/perfect-freehand-1.2.2.tgz#292f65b72df0c7f57a89c4b346b50d7139014172"
|
||||
integrity sha512-eh31l019WICQ03pkF3FSzHxB8n07ItqIQ++G5UV8JX0zVOXzgTGCqnRR0jJ2h9U8/2uW4W4mtGJELt9kEV0CFQ==
|
||||
|
||||
picocolors@1.0.0, picocolors@^1.0.0:
|
||||
picocolors@1.0.0:
|
||||
version "1.0.0"
|
||||
resolved "https://registry.npmjs.org/picocolors/-/picocolors-1.0.0.tgz"
|
||||
integrity sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==
|
||||
|
||||
picocolors@^1.0.1:
|
||||
picocolors@^1.0.0, picocolors@^1.0.1:
|
||||
version "1.0.1"
|
||||
resolved "https://registry.yarnpkg.com/picocolors/-/picocolors-1.0.1.tgz#a8ad579b571952f0e5d25892de5445bcfe25aaa1"
|
||||
integrity sha512-anP1Z8qwhkbmu7MFP5iTt+wQKXgwzf7zTyGlcdzabySa9vd0Xt392U0rVmz9poOaBj0uHJKyyo9/upk0HrEQew==
|
||||
@@ -6060,16 +6114,7 @@ postcss-value-parser@^4.0.0, postcss-value-parser@^4.2.0:
|
||||
resolved "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.2.0.tgz"
|
||||
integrity sha512-1NNCs6uurfkVbeXG4S8JFT9t19m45ICnif8zWLd5oPSZ50QnwMfK+H3jv408d4jw/7Bttv5axS5IiHoLaVNHeQ==
|
||||
|
||||
postcss@8.4.25, postcss@^8.4.23:
|
||||
version "8.4.25"
|
||||
resolved "https://registry.npmjs.org/postcss/-/postcss-8.4.25.tgz"
|
||||
integrity sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==
|
||||
dependencies:
|
||||
nanoid "^3.3.6"
|
||||
picocolors "^1.0.0"
|
||||
source-map-js "^1.0.2"
|
||||
|
||||
postcss@8.4.31:
|
||||
postcss@8.4.31, postcss@^8.4.23:
|
||||
version "8.4.31"
|
||||
resolved "https://registry.yarnpkg.com/postcss/-/postcss-8.4.31.tgz#92b451050a9f914da6755af352bdc0192508656d"
|
||||
integrity sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==
|
||||
@@ -6402,11 +6447,6 @@ reflect.getprototypeof@^1.0.4:
|
||||
globalthis "^1.0.3"
|
||||
which-builtin-type "^1.1.3"
|
||||
|
||||
regenerator-runtime@^0.14.0:
|
||||
version "0.14.0"
|
||||
resolved "https://registry.yarnpkg.com/regenerator-runtime/-/regenerator-runtime-0.14.0.tgz#5e19d68eb12d486f797e15a3c6a918f7cec5eb45"
|
||||
integrity sha512-srw17NI0TUWHuGa5CFGGmhfNIeja30WMBfbslPNhf6JrqQlLN5gcrvig1oqPxiVaXb0oW0XRKtH6Nngs5lKCIA==
|
||||
|
||||
regexp.prototype.flags@^1.5.0, regexp.prototype.flags@^1.5.1:
|
||||
version "1.5.1"
|
||||
resolved "https://registry.yarnpkg.com/regexp.prototype.flags/-/regexp.prototype.flags-1.5.1.tgz#90ce989138db209f81492edd734183ce99f9677e"
|
||||
|
||||