Compare commits

..

11 Commits

Author SHA1 Message Date
893d02075f chore(main): release 0.24.1 2025-12-19 10:19:40 -06:00
f0ee7b531a fix: logout redirect (#802) 2025-12-19 17:19:09 +01:00
37a454f283 chore(main): release 0.24.0 (#793) 2025-12-19 15:00:43 +01:00
964cd78bb6 feat: identify action items (#790)
* Identify action items

* Add action items to mock summary

* Add action items validator

* Remove final prefix from action items

* Make on action items callback required

* Don't mutate action items response

* Assign action items to none on error

* Use timeout constant

* Exclude action items from transcript list
2025-12-18 21:13:47 +01:00
5f458aa4a7 fix: automatically reprocess daily recordings (#797)
* Automatically reprocess recordings

* Restore the comments

* Remove redundant check

* Fix indent

* Add comment about cyclic import
2025-12-18 21:10:04 +01:00
5f7dfadabd fix: retry on workflow timeout (#798) 2025-12-18 20:49:06 +01:00
0bc971ba96 fix: main menu login (#800) 2025-12-18 20:48:39 +01:00
Igor Monadical
c62e3c0753 incorporate daily api undocumented feature (#796)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-17 09:51:55 -05:00
Igor Monadical
16284e1ac3 fix: daily video optimisation (#789)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 15:00:53 -05:00
Igor Monadical
443982617d coolify pull policy (#792)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 14:54:05 -05:00
Igor Monadical
23023b3cdb update nextjs (#791)
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-12-15 13:58:34 -05:00
28 changed files with 945 additions and 261 deletions

View File

@@ -1,5 +1,27 @@
 # Changelog
 
+## [0.24.1](https://github.com/Monadical-SAS/reflector/compare/v0.24.0...v0.24.1) (2025-12-19)
+
+### Bug Fixes
+
+* logout redirect ([#802](https://github.com/Monadical-SAS/reflector/issues/802)) ([f0ee7b5](https://github.com/Monadical-SAS/reflector/commit/f0ee7b531a0911f214ccbb84d399e9a6c9b700c0))
+
+## [0.24.0](https://github.com/Monadical-SAS/reflector/compare/v0.23.2...v0.24.0) (2025-12-18)
+
+### Features
+
+* identify action items ([#790](https://github.com/Monadical-SAS/reflector/issues/790)) ([964cd78](https://github.com/Monadical-SAS/reflector/commit/964cd78bb699d83d012ae4b8c96565df25b90a5d))
+
+### Bug Fixes
+
+* automatically reprocess daily recordings ([#797](https://github.com/Monadical-SAS/reflector/issues/797)) ([5f458aa](https://github.com/Monadical-SAS/reflector/commit/5f458aa4a7ec3d00ca5ec49d62fcc8ad232b138e))
+* daily video optimisation ([#789](https://github.com/Monadical-SAS/reflector/issues/789)) ([16284e1](https://github.com/Monadical-SAS/reflector/commit/16284e1ac3faede2b74f0d91b50c0b5612af2c35))
+* main menu login ([#800](https://github.com/Monadical-SAS/reflector/issues/800)) ([0bc971b](https://github.com/Monadical-SAS/reflector/commit/0bc971ba966a52d719c8c240b47dc7b3bdea4391))
+* retry on workflow timeout ([#798](https://github.com/Monadical-SAS/reflector/issues/798)) ([5f7dfad](https://github.com/Monadical-SAS/reflector/commit/5f7dfadabd3e8017406ad3720ba495a59963ee34))
+
 ## [0.23.2](https://github.com/Monadical-SAS/reflector/compare/v0.23.1...v0.23.2) (2025-12-11)

View File

@@ -4,6 +4,7 @@
 services:
   web:
     image: monadicalsas/reflector-frontend:latest
+    pull_policy: always
     environment:
       - KV_URL=${KV_URL:-redis://redis:6379}
       - SITE_URL=${SITE_URL}

View File

@@ -0,0 +1,26 @@
+"""add_action_items
+
+Revision ID: 05f8688d6895
+Revises: bbafedfa510c
+Create Date: 2025-12-12 11:57:50.209658
+
+"""
+
+from typing import Sequence, Union
+
+import sqlalchemy as sa
+from alembic import op
+
+# revision identifiers, used by Alembic.
+revision: str = "05f8688d6895"
+down_revision: Union[str, None] = "bbafedfa510c"
+branch_labels: Union[str, Sequence[str], None] = None
+depends_on: Union[str, Sequence[str], None] = None
+
+
+def upgrade() -> None:
+    op.add_column("transcript", sa.Column("action_items", sa.JSON(), nullable=True))
+
+
+def downgrade() -> None:
+    op.drop_column("transcript", "action_items")

View File

@@ -18,6 +18,7 @@ from .requests import (
 # Response models
 from .responses import (
+    FinishedRecordingResponse,
     MeetingParticipant,
     MeetingParticipantsResponse,
     MeetingResponse,
@@ -79,6 +80,7 @@ __all__ = [
     "MeetingParticipant",
     "MeetingResponse",
     "RecordingResponse",
+    "FinishedRecordingResponse",
     "RecordingS3Info",
     "MeetingTokenResponse",
     "WebhookResponse",

View File

@@ -121,7 +121,10 @@ class RecordingS3Info(BaseModel):
 class RecordingResponse(BaseModel):
     """
-    Response from recording retrieval endpoint.
+    Response from recording retrieval endpoint (network layer).
+
+    Duration may be None for recordings still being processed by Daily.
+    Use FinishedRecordingResponse for recordings ready for processing.
 
     Reference: https://docs.daily.co/reference/rest-api/recordings
     """
@@ -135,7 +138,9 @@ class RecordingResponse(BaseModel):
     max_participants: int | None = Field(
         None, description="Maximum participants during recording (may be missing)"
     )
-    duration: int = Field(description="Recording duration in seconds")
+    duration: int | None = Field(
+        None, description="Recording duration in seconds (None if still processing)"
+    )
     share_token: NonEmptyString | None = Field(
         None, description="Token for sharing recording"
     )
@@ -149,6 +154,25 @@ class RecordingResponse(BaseModel):
         None, description="Meeting session identifier (may be missing)"
     )
 
+    def to_finished(self) -> "FinishedRecordingResponse | None":
+        """Convert to FinishedRecordingResponse if duration is available and status is finished."""
+        if self.duration is None or self.status != "finished":
+            return None
+        return FinishedRecordingResponse(**self.model_dump())
+
+
+class FinishedRecordingResponse(RecordingResponse):
+    """
+    Recording with confirmed duration - ready for processing.
+
+    This model guarantees duration is present and status is finished.
+    """
+
+    status: Literal["finished"] = Field(
+        description="Recording status (always 'finished')"
+    )
+    duration: int = Field(description="Recording duration in seconds")
+
 
 class MeetingTokenResponse(BaseModel):
     """
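The `to_finished` pattern above narrows an "anything the API might return" type into a "safe to process" type. A minimal sketch of the same idea with stdlib dataclasses (illustrative names, not the project's API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recording:
    """Raw API payload: duration is unknown while Daily is still processing."""

    id: str
    status: str
    duration: Optional[int] = None


@dataclass
class FinishedRecording(Recording):
    """Narrowed type: duration is guaranteed to be present."""

    duration: int


def to_finished(rec: Recording) -> Optional[FinishedRecording]:
    # Refuse the conversion unless both guarantees hold
    if rec.duration is None or rec.status != "finished":
        return None
    return FinishedRecording(id=rec.id, status=rec.status, duration=rec.duration)
```

Downstream code that accepts only `FinishedRecording` then never has to re-check `duration is None`.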

View File

@@ -3,6 +3,7 @@ from typing import Literal
 import sqlalchemy as sa
 from pydantic import BaseModel, Field
+from sqlalchemy import or_
 
 from reflector.db import get_database, metadata
 from reflector.utils import generate_uuid4
@@ -79,5 +80,35 @@ class RecordingController:
         results = await get_database().fetch_all(query)
         return [Recording(**row) for row in results]
 
+    async def get_multitrack_needing_reprocessing(
+        self, bucket_name: str
+    ) -> list[Recording]:
+        """
+        Get multitrack recordings that need reprocessing:
+        - Have track_keys (multitrack)
+        - Either have no transcript OR transcript has error status
+
+        This is more efficient than fetching all recordings and filtering in Python.
+        """
+        from reflector.db.transcripts import (
+            transcripts,  # noqa: PLC0415 cyclic import
+        )
+
+        query = (
+            recordings.select()
+            .outerjoin(transcripts, recordings.c.id == transcripts.c.recording_id)
+            .where(
+                recordings.c.bucket_name == bucket_name,
+                recordings.c.track_keys.isnot(None),
+                or_(
+                    transcripts.c.id.is_(None),
+                    transcripts.c.status == "error",
+                ),
+            )
+        )
+        results = await get_database().fetch_all(query)
+        recordings_list = [Recording(**row) for row in results]
+        return [r for r in recordings_list if r.is_multitrack]
+
 
 recordings_controller = RecordingController()
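The outer-join filter above ("no transcript at all, OR an errored transcript") is the interesting part of that query. It can be exercised against plain SQL; a minimal sqlite sketch with hypothetical table shapes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recording (id TEXT PRIMARY KEY, track_keys TEXT);
CREATE TABLE transcript (id TEXT PRIMARY KEY, recording_id TEXT, status TEXT);
INSERT INTO recording VALUES ('r1', '["a","b"]');  -- no transcript: needs reprocessing
INSERT INTO recording VALUES ('r2', '["a"]');      -- errored transcript: needs reprocessing
INSERT INTO recording VALUES ('r3', '["a"]');      -- healthy transcript: skip
INSERT INTO recording VALUES ('r4', NULL);         -- not multitrack: skip
INSERT INTO transcript VALUES ('t2', 'r2', 'error');
INSERT INTO transcript VALUES ('t3', 'r3', 'ended');
""")

# LEFT JOIN keeps recordings with no transcript (t.id IS NULL),
# the OR branch additionally picks up errored transcripts.
rows = conn.execute("""
SELECT r.id FROM recording r
LEFT JOIN transcript t ON t.recording_id = r.id
WHERE r.track_keys IS NOT NULL
  AND (t.id IS NULL OR t.status = 'error')
ORDER BY r.id
""").fetchall()
# rows -> [('r1',), ('r2',)]
```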

View File

@@ -44,6 +44,7 @@ transcripts = sqlalchemy.Table(
     sqlalchemy.Column("title", sqlalchemy.String),
     sqlalchemy.Column("short_summary", sqlalchemy.String),
     sqlalchemy.Column("long_summary", sqlalchemy.String),
+    sqlalchemy.Column("action_items", sqlalchemy.JSON),
     sqlalchemy.Column("topics", sqlalchemy.JSON),
     sqlalchemy.Column("events", sqlalchemy.JSON),
     sqlalchemy.Column("participants", sqlalchemy.JSON),
@@ -164,6 +165,10 @@ class TranscriptFinalLongSummary(BaseModel):
     long_summary: str
 
 
+class TranscriptActionItems(BaseModel):
+    action_items: dict
+
+
 class TranscriptFinalTitle(BaseModel):
     title: str
@@ -204,6 +209,7 @@ class Transcript(BaseModel):
     locked: bool = False
     short_summary: str | None = None
     long_summary: str | None = None
+    action_items: dict | None = None
     topics: list[TranscriptTopic] = []
     events: list[TranscriptEvent] = []
     participants: list[TranscriptParticipant] | None = []
@@ -368,7 +374,12 @@ class TranscriptController:
         room_id: str | None = None,
         search_term: str | None = None,
         return_query: bool = False,
-        exclude_columns: list[str] = ["topics", "events", "participants"],
+        exclude_columns: list[str] = [
+            "topics",
+            "events",
+            "participants",
+            "action_items",
+        ],
     ) -> list[Transcript]:
         """
         Get all transcripts

View File

@@ -16,6 +16,9 @@ from llama_index.core.workflow import (
 )
 from llama_index.llms.openai_like import OpenAILike
 from pydantic import BaseModel, ValidationError
+from workflows.errors import WorkflowTimeoutError
+
+from reflector.utils.retry import retry
 
 T = TypeVar("T", bound=BaseModel)
 OutputT = TypeVar("OutputT", bound=BaseModel)
@@ -229,12 +232,17 @@ class LLM:
         texts: list[str],
         output_cls: Type[T],
         tone_name: str | None = None,
+        timeout: int | None = None,
     ) -> T:
         """Get structured output from LLM with validation retry via Workflow."""
+        if timeout is None:
+            timeout = self.settings_obj.LLM_STRUCTURED_RESPONSE_TIMEOUT
+
+        async def run_workflow():
             workflow = StructuredOutputWorkflow(
                 output_cls=output_cls,
                 max_retries=self.settings_obj.LLM_PARSE_MAX_RETRIES + 1,
-                timeout=120,
+                timeout=timeout,
             )
             result = await workflow.run(
@@ -252,3 +260,10 @@ class LLM:
             )
             return result["success"]
+
+        return await retry(run_workflow)(
+            retry_attempts=3,
+            retry_backoff_interval=1.0,
+            retry_backoff_max=30.0,
+            retry_ignore_exc_types=(WorkflowTimeoutError,),
+        )
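The retry wrapper's contract here appears to be: retry only on the listed exception types (here `WorkflowTimeoutError`), with capped exponential backoff, and let anything else propagate. A rough stdlib sketch of that behavior; `retry_async` is a stand-in, not the actual `reflector.utils.retry` implementation, and the retry-on-timeout semantics are assumed from the surrounding diff:

```python
import asyncio


async def retry_async(fn, *, attempts=3, backoff=1.0, backoff_max=30.0, retry_on=()):
    """Run fn(); exceptions in retry_on are retried with doubling
    backoff (capped at backoff_max), anything else propagates immediately."""
    delay = backoff
    for attempt in range(attempts):
        try:
            return await fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the timeout
            await asyncio.sleep(min(delay, backoff_max))
            delay *= 2


calls = []


async def flaky_workflow():
    # Fails twice with a timeout, then succeeds
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("simulated workflow timeout")
    return "ok"


result = asyncio.run(
    retry_async(flaky_workflow, attempts=3, backoff=0.001, retry_on=(TimeoutError,))
)
```

After two retried timeouts, `result` is `"ok"`; a non-listed exception would have escaped on the first attempt.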

View File

@@ -309,6 +309,7 @@ class PipelineMainFile(PipelineMainBase):
             transcript,
             on_long_summary_callback=self.on_long_summary,
             on_short_summary_callback=self.on_short_summary,
+            on_action_items_callback=self.on_action_items,
             empty_pipeline=self.empty_pipeline,
             logger=self.logger,
         )

View File

@@ -27,6 +27,7 @@ from reflector.db.recordings import recordings_controller
 from reflector.db.rooms import rooms_controller
 from reflector.db.transcripts import (
     Transcript,
+    TranscriptActionItems,
     TranscriptDuration,
     TranscriptFinalLongSummary,
     TranscriptFinalShortSummary,
@@ -306,6 +307,23 @@ class PipelineMainBase(PipelineRunner[PipelineMessage], Generic[PipelineMessage]
             data=final_short_summary,
         )
 
+    @broadcast_to_sockets
+    async def on_action_items(self, data):
+        action_items = TranscriptActionItems(action_items=data.action_items)
+        async with self.transaction():
+            transcript = await self.get_transcript()
+            await transcripts_controller.update(
+                transcript,
+                {
+                    "action_items": action_items.action_items,
+                },
+            )
+            return await transcripts_controller.append_event(
+                transcript=transcript,
+                event="ACTION_ITEMS",
+                data=action_items,
+            )
+
     @broadcast_to_sockets
     async def on_duration(self, data):
         async with self.transaction():
@@ -465,6 +483,7 @@ class PipelineMainFinalSummaries(PipelineMainFromTopics):
                 transcript=self._transcript,
                 callback=self.on_long_summary,
                 on_short_summary=self.on_short_summary,
+                on_action_items=self.on_action_items,
             ),
         ]

View File

@@ -772,6 +772,7 @@ class PipelineMainMultitrack(PipelineMainBase):
             transcript,
             on_long_summary_callback=self.on_long_summary,
             on_short_summary_callback=self.on_short_summary,
+            on_action_items_callback=self.on_action_items,
             empty_pipeline=self.empty_pipeline,
             logger=self.logger,
         )

View File

@@ -89,6 +89,7 @@ async def generate_summaries(
     *,
     on_long_summary_callback: Callable,
     on_short_summary_callback: Callable,
+    on_action_items_callback: Callable,
     empty_pipeline: EmptyPipeline,
     logger: structlog.BoundLogger,
 ):
@@ -96,11 +97,14 @@ async def generate_summaries(
         logger.warning("No topics for summary generation")
         return
 
-    processor = TranscriptFinalSummaryProcessor(
-        transcript=transcript,
-        callback=on_long_summary_callback,
-        on_short_summary=on_short_summary_callback,
-    )
+    processor_kwargs = {
+        "transcript": transcript,
+        "callback": on_long_summary_callback,
+        "on_short_summary": on_short_summary_callback,
+        "on_action_items": on_action_items_callback,
+    }
+    processor = TranscriptFinalSummaryProcessor(**processor_kwargs)
     processor.set_pipeline(empty_pipeline)
 
     for topic in topics:

View File

@@ -96,6 +96,36 @@ RECAP_PROMPT = dedent(
     """
 ).strip()
 
+ACTION_ITEMS_PROMPT = dedent(
+    """
+    Identify action items from this meeting transcript. Your goal is to identify what was decided and what needs to happen next.
+
+    Look for:
+
+    1. **Decisions Made**: Any decisions, choices, or conclusions reached during the meeting. For each decision:
+       - What was decided? (be specific)
+       - Who made the decision or was involved? (use actual participant names)
+       - Why was this decision made? (key factors, reasoning, or rationale)
+
+    2. **Next Steps / Action Items**: Any tasks, follow-ups, or actions that were mentioned or assigned. For each action item:
+       - What specific task needs to be done? (be concrete and actionable)
+       - Who is responsible? (use actual participant names if mentioned, or "team" if unclear)
+       - When is it due? (any deadlines, timeframes, or "by next meeting" type commitments)
+       - What context is needed? (any additional details that help understand the task)
+
+    Guidelines:
+    - Be thorough and identify all action items, even if they seem minor
+    - Include items that were agreed upon, assigned, or committed to
+    - Include decisions even if they seem obvious or implicit
+    - If someone says "I'll do X" or "We should do Y", that's an action item
+    - If someone says "Let's go with option A", that's a decision
+    - Use the exact participant names from the transcript
+    - If no participant name is mentioned, you can leave assigned_to/decided_by as null
+
+    Only return empty lists if the transcript contains NO decisions and NO action items whatsoever.
+    """
+).strip()
+
 STRUCTURED_RESPONSE_PROMPT_TEMPLATE = dedent(
     """
     Based on the following analysis, provide the information in the requested JSON format:
@@ -155,6 +185,53 @@ class SubjectsResponse(BaseModel):
     )
 
 
+class ActionItem(BaseModel):
+    """A single action item from the meeting"""
+
+    task: str = Field(description="The task or action item to be completed")
+    assigned_to: str | None = Field(
+        default=None, description="Person or team assigned to this task (name)"
+    )
+    assigned_to_participant_id: str | None = Field(
+        default=None, description="Participant ID if assigned_to matches a participant"
+    )
+    deadline: str | None = Field(
+        default=None, description="Deadline or timeframe mentioned for this task"
+    )
+    context: str | None = Field(
+        default=None, description="Additional context or notes about this task"
+    )
+
+
+class Decision(BaseModel):
+    """A decision made during the meeting"""
+
+    decision: str = Field(description="What was decided")
+    rationale: str | None = Field(
+        default=None,
+        description="Reasoning or key factors that influenced this decision",
+    )
+    decided_by: str | None = Field(
+        default=None, description="Person or group who made the decision (name)"
+    )
+    decided_by_participant_id: str | None = Field(
+        default=None, description="Participant ID if decided_by matches a participant"
+    )
+
+
+class ActionItemsResponse(BaseModel):
+    """Pydantic model for identified action items"""
+
+    decisions: list[Decision] = Field(
+        default_factory=list,
+        description="List of decisions made during the meeting",
+    )
+    next_steps: list[ActionItem] = Field(
+        default_factory=list,
+        description="List of action items and next steps to be taken",
+    )
+
+
 class SummaryBuilder:
     def __init__(self, llm: LLM, filename: str | None = None, logger=None) -> None:
         self.transcript: str | None = None
@@ -166,6 +243,8 @@ class SummaryBuilder:
         self.model_name: str = llm.model_name
         self.logger = logger or structlog.get_logger()
         self.participant_instructions: str | None = None
+        self.action_items: ActionItemsResponse | None = None
+        self.participant_name_to_id: dict[str, str] = {}
 
         if filename:
             self.read_transcript_from_file(filename)
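The list fields in these response models use `default_factory=list` rather than a literal `[]` default. The same guard against shared mutable defaults, sketched with stdlib dataclasses (illustrative names, not the project's models):

```python
from dataclasses import dataclass, field


@dataclass
class ActionItemsResult:
    # default_factory gives each instance its own fresh list;
    # a bare `= []` default would be rejected by dataclasses
    # (and would be a single shared list as a class attribute)
    decisions: list = field(default_factory=list)
    next_steps: list = field(default_factory=list)


a = ActionItemsResult()
b = ActionItemsResult()
a.decisions.append("ship v0.24")
# b.decisions stays empty: nothing is shared between instances
```

Pydantic's `Field(default_factory=list)` serves the same purpose for the models above.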
@@ -189,13 +268,20 @@ class SummaryBuilder:
self.llm = llm self.llm = llm
async def _get_structured_response( async def _get_structured_response(
self, prompt: str, output_cls: Type[T], tone_name: str | None = None self,
prompt: str,
output_cls: Type[T],
tone_name: str | None = None,
timeout: int | None = None,
) -> T: ) -> T:
"""Generic function to get structured output from LLM for non-function-calling models.""" """Generic function to get structured output from LLM for non-function-calling models."""
# Add participant instructions to the prompt if available
enhanced_prompt = self._enhance_prompt_with_participants(prompt) enhanced_prompt = self._enhance_prompt_with_participants(prompt)
return await self.llm.get_structured_response( return await self.llm.get_structured_response(
enhanced_prompt, [self.transcript], output_cls, tone_name=tone_name enhanced_prompt,
[self.transcript],
output_cls,
tone_name=tone_name,
timeout=timeout,
) )
async def _get_response( async def _get_response(
@@ -216,11 +302,19 @@ class SummaryBuilder:
# Participants # Participants
# ---------------------------------------------------------------------------- # ----------------------------------------------------------------------------
def set_known_participants(self, participants: list[str]) -> None: def set_known_participants(
self,
participants: list[str],
participant_name_to_id: dict[str, str] | None = None,
) -> None:
""" """
Set known participants directly without LLM identification. Set known participants directly without LLM identification.
This is used when participants are already identified and stored. This is used when participants are already identified and stored.
They are appended at the end of the transcript, providing more context for the assistant. They are appended at the end of the transcript, providing more context for the assistant.
Args:
participants: List of participant names
participant_name_to_id: Optional mapping of participant names to their IDs
""" """
if not participants: if not participants:
self.logger.warning("No participants provided") self.logger.warning("No participants provided")
@@ -231,10 +325,12 @@ class SummaryBuilder:
participants=participants, participants=participants,
) )
if participant_name_to_id:
self.participant_name_to_id = participant_name_to_id
participants_md = self.format_list_md(participants) participants_md = self.format_list_md(participants)
self.transcript += f"\n\n# Participants\n\n{participants_md}" self.transcript += f"\n\n# Participants\n\n{participants_md}"
# Set instructions that will be automatically added to all prompts
participants_list = ", ".join(participants) participants_list = ", ".join(participants)
self.participant_instructions = dedent( self.participant_instructions = dedent(
f""" f"""
@@ -413,6 +509,92 @@ class SummaryBuilder:
         self.recap = str(recap_response)
         self.logger.info(f"Quick recap: {self.recap}")
 
+    def _map_participant_names_to_ids(
+        self, response: ActionItemsResponse
+    ) -> ActionItemsResponse:
+        """Map participant names in action items to participant IDs."""
+        if not self.participant_name_to_id:
+            return response
+
+        decisions = []
+        for decision in response.decisions:
+            new_decision = decision.model_copy()
+            if (
+                decision.decided_by
+                and decision.decided_by in self.participant_name_to_id
+            ):
+                new_decision.decided_by_participant_id = self.participant_name_to_id[
+                    decision.decided_by
+                ]
+            decisions.append(new_decision)
+
+        next_steps = []
+        for item in response.next_steps:
+            new_item = item.model_copy()
+            if item.assigned_to and item.assigned_to in self.participant_name_to_id:
+                new_item.assigned_to_participant_id = self.participant_name_to_id[
+                    item.assigned_to
+                ]
+            next_steps.append(new_item)
+
+        return ActionItemsResponse(decisions=decisions, next_steps=next_steps)
+
+    async def identify_action_items(self) -> ActionItemsResponse | None:
+        """Identify action items (decisions and next steps) from the transcript."""
+        self.logger.info("--- identify action items using TreeSummarize")
+
+        if not self.transcript:
+            self.logger.warning(
+                "No transcript available for action items identification"
+            )
+            self.action_items = None
+            return None
+
+        action_items_prompt = ACTION_ITEMS_PROMPT
+
+        try:
+            response = await self._get_structured_response(
+                action_items_prompt,
+                ActionItemsResponse,
+                tone_name="Action item identifier",
+                timeout=settings.LLM_STRUCTURED_RESPONSE_TIMEOUT,
+            )
+            response = self._map_participant_names_to_ids(response)
+            self.action_items = response
+
+            self.logger.info(
+                f"Identified {len(response.decisions)} decisions and {len(response.next_steps)} action items",
+                decisions_count=len(response.decisions),
+                next_steps_count=len(response.next_steps),
+            )
+
+            if response.decisions:
+                self.logger.debug(
+                    "Decisions identified",
+                    decisions=[d.decision for d in response.decisions],
+                )
+            if response.next_steps:
+                self.logger.debug(
+                    "Action items identified",
+                    tasks=[item.task for item in response.next_steps],
+                )
+            if not response.decisions and not response.next_steps:
+                self.logger.warning(
+                    "No action items identified from transcript",
+                    transcript_length=len(self.transcript),
+                )
+
+            return response
+        except Exception as e:
+            self.logger.error(
+                f"Error identifying action items: {e}",
+                exc_info=True,
+            )
+            self.action_items = None
+            return None
+
     async def generate_summary(self, only_subjects: bool = False) -> None:
         """
         Generate summary by extracting subjects, creating summaries for each, and generating a recap.
@@ -424,6 +606,7 @@ class SummaryBuilder:
         await self.generate_subject_summaries()
         await self.generate_recap()
+        await self.identify_action_items()
 
     # ----------------------------------------------------------------------------
     # Markdown
@@ -526,8 +709,6 @@ if __name__ == "__main__":
         if args.summary:
             await sm.generate_summary()
 
-        # Note: action items generation has been removed
-
         print("")
         print("-" * 80)
         print("")
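`_map_participant_names_to_ids` copies each item before enriching it, filling the `*_participant_id` field only on an exact name match. The core of that step can be sketched with plain dicts (hypothetical helper, not the project's code):

```python
def attach_participant_ids(items: list[dict], name_to_id: dict[str, str]) -> list[dict]:
    """Copy each action item, filling assigned_to_participant_id when the
    assignee name is known. Unknown names are left untouched (no fuzzy match)."""
    out = []
    for item in items:
        enriched = dict(item)  # copy, so the caller's data is not mutated
        name = enriched.get("assigned_to")
        if name and name in name_to_id:
            enriched["assigned_to_participant_id"] = name_to_id[name]
        out.append(enriched)
    return out


mapped = attach_participant_ids(
    [
        {"task": "send notes", "assigned_to": "Ana"},
        {"task": "book room", "assigned_to": "Unknown Guest"},
    ],
    {"Ana": "p-123"},
)
```

Only the item whose name appears in the mapping gains an ID; the other passes through unchanged.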

View File

@@ -1,7 +1,12 @@
 from reflector.llm import LLM
 from reflector.processors.base import Processor
 from reflector.processors.summary.summary_builder import SummaryBuilder
-from reflector.processors.types import FinalLongSummary, FinalShortSummary, TitleSummary
+from reflector.processors.types import (
+    ActionItems,
+    FinalLongSummary,
+    FinalShortSummary,
+    TitleSummary,
+)
 from reflector.settings import settings
@@ -27,15 +32,20 @@ class TranscriptFinalSummaryProcessor(Processor):
         builder = SummaryBuilder(self.llm, logger=self.logger)
         builder.set_transcript(text)
 
-        # Use known participants if available, otherwise identify them
         if self.transcript and self.transcript.participants:
-            # Extract participant names from the stored participants
             participant_names = [p.name for p in self.transcript.participants if p.name]
             if participant_names:
                 self.logger.info(
                     f"Using {len(participant_names)} known participants from transcript"
                 )
-                builder.set_known_participants(participant_names)
+                participant_name_to_id = {
+                    p.name: p.id
+                    for p in self.transcript.participants
+                    if p.name and p.id
+                }
+                builder.set_known_participants(
+                    participant_names, participant_name_to_id=participant_name_to_id
+                )
             else:
                 self.logger.info(
                     "Participants field exists but is empty, identifying participants"
@@ -63,7 +73,6 @@ class TranscriptFinalSummaryProcessor(Processor):
             self.logger.warning("No summary to output")
             return
 
-        # build the speakermap from the transcript
         speakermap = {}
         if self.transcript:
             speakermap = {
@@ -76,8 +85,6 @@ class TranscriptFinalSummaryProcessor(Processor):
             speakermap=speakermap,
         )
 
-        # build the transcript as a single string
-        # Replace speaker IDs with actual participant names if available
        text_transcript = []
         unique_speakers = set()
         for topic in self.chunks:
@@ -111,4 +118,9 @@ class TranscriptFinalSummaryProcessor(Processor):
         )
         await self.emit(final_short_summary, name="short_summary")
 
+        if self.builder and self.builder.action_items:
+            action_items = self.builder.action_items.model_dump()
+            action_items = ActionItems(action_items=action_items)
+            await self.emit(action_items, name="action_items")
+
         await self.emit(final_long_summary)

View File

@@ -78,7 +78,11 @@ class TranscriptTopicDetectorProcessor(Processor):
     """
     prompt = TOPIC_PROMPT.format(text=text)
     response = await self.llm.get_structured_response(
-        prompt, [text], TopicResponse, tone_name="Topic analyzer"
+        prompt,
+        [text],
+        TopicResponse,
+        tone_name="Topic analyzer",
+        timeout=settings.LLM_STRUCTURED_RESPONSE_TIMEOUT,
     )
     return response

View File

@@ -264,6 +264,10 @@ class FinalShortSummary(BaseModel):
     duration: float
 
 
+class ActionItems(BaseModel):
+    action_items: dict  # JSON-serializable dict from ActionItemsResponse
+
+
 class FinalTitle(BaseModel):
     title: str

View File

@@ -77,6 +77,9 @@ class Settings(BaseSettings):
     LLM_PARSE_MAX_RETRIES: int = (
         3  # Max retries for JSON/validation errors (total attempts = retries + 1)
     )
+    LLM_STRUCTURED_RESPONSE_TIMEOUT: int = (
+        300  # Timeout in seconds for structured responses (5 minutes)
+    )
 
     # Diarization
     DIARIZATION_ENABLED: bool = True

View File

@@ -501,6 +501,7 @@ async def transcript_get(
         "title": transcript.title,
         "short_summary": transcript.short_summary,
         "long_summary": transcript.long_summary,
+        "action_items": transcript.action_items,
         "created_at": transcript.created_at,
         "share_mode": transcript.share_mode,
         "source_language": transcript.source_language,

View File

@@ -38,6 +38,10 @@ else:
         "task": "reflector.worker.process.reprocess_failed_recordings",
         "schedule": crontab(hour=5, minute=0),  # Midnight EST
     },
+    "reprocess_failed_daily_recordings": {
+        "task": "reflector.worker.process.reprocess_failed_daily_recordings",
+        "schedule": crontab(hour=5, minute=0),  # Midnight EST
+    },
     "poll_daily_recordings": {
         "task": "reflector.worker.process.poll_daily_recordings",
         "schedule": 180.0,  # Every 3 minutes (configurable lookback window)

View File

@@ -12,7 +12,7 @@ from celery import shared_task
 from celery.utils.log import get_task_logger
 from pydantic import ValidationError

-from reflector.dailyco_api import RecordingResponse
+from reflector.dailyco_api import FinishedRecordingResponse, RecordingResponse
 from reflector.db.daily_participant_sessions import (
     DailyParticipantSession,
     daily_participant_sessions_controller,
@@ -322,16 +322,38 @@ async def poll_daily_recordings():
         )
         return

-    recording_ids = [rec.id for rec in api_recordings]
+    finished_recordings: List[FinishedRecordingResponse] = []
+    for rec in api_recordings:
+        finished = rec.to_finished()
+        if finished is None:
+            logger.debug(
+                "Skipping unfinished recording",
+                recording_id=rec.id,
+                room_name=rec.room_name,
+                status=rec.status,
+            )
+            continue
+        finished_recordings.append(finished)
+
+    if not finished_recordings:
+        logger.debug(
+            "No finished recordings found from Daily.co API",
+            total_api_count=len(api_recordings),
+        )
+        return
+
+    recording_ids = [rec.id for rec in finished_recordings]
     existing_recordings = await recordings_controller.get_by_ids(recording_ids)
     existing_ids = {rec.id for rec in existing_recordings}
-    missing_recordings = [rec for rec in api_recordings if rec.id not in existing_ids]
+    missing_recordings = [
+        rec for rec in finished_recordings if rec.id not in existing_ids
+    ]

     if not missing_recordings:
         logger.debug(
             "All recordings already in DB",
-            api_count=len(api_recordings),
+            api_count=len(finished_recordings),
             existing_count=len(existing_recordings),
         )
         return
@@ -339,7 +361,7 @@ async def poll_daily_recordings():
     logger.info(
         "Found recordings missing from DB",
         missing_count=len(missing_recordings),
-        total_api_count=len(api_recordings),
+        total_api_count=len(finished_recordings),
         existing_count=len(existing_recordings),
     )
@@ -649,7 +671,7 @@ async def reprocess_failed_recordings():
     Find recordings in Whereby S3 bucket and check if they have proper transcriptions.
     If not, requeue them for processing.

-    Note: Daily.co recordings are processed via webhooks, not this cron job.
+    Note: Daily.co multitrack recordings are handled by reprocess_failed_daily_recordings.
     """
     logger.info("Checking Whereby recordings that need processing or reprocessing")
@@ -702,6 +724,103 @@ async def reprocess_failed_recordings():
     return reprocessed_count

+
+@shared_task
+@asynctask
+async def reprocess_failed_daily_recordings():
+    """
+    Find Daily.co multitrack recordings in the database and check if they have proper transcriptions.
+    If not, requeue them for processing.
+    """
+    logger.info(
+        "Checking Daily.co multitrack recordings that need processing or reprocessing"
+    )
+
+    if not settings.DAILYCO_STORAGE_AWS_BUCKET_NAME:
+        logger.debug(
+            "DAILYCO_STORAGE_AWS_BUCKET_NAME not configured; skipping Daily recording reprocessing"
+        )
+        return 0
+
+    bucket_name = settings.DAILYCO_STORAGE_AWS_BUCKET_NAME
+    reprocessed_count = 0
+
+    try:
+        multitrack_recordings = (
+            await recordings_controller.get_multitrack_needing_reprocessing(bucket_name)
+        )
+        logger.info(
+            "Found multitrack recordings needing reprocessing",
+            count=len(multitrack_recordings),
+            bucket=bucket_name,
+        )
+
+        for recording in multitrack_recordings:
+            if not recording.meeting_id:
+                logger.debug(
+                    "Skipping recording without meeting_id",
+                    recording_id=recording.id,
+                )
+                continue
+
+            meeting = await meetings_controller.get_by_id(recording.meeting_id)
+            if not meeting:
+                logger.warning(
+                    "Meeting not found for recording",
+                    recording_id=recording.id,
+                    meeting_id=recording.meeting_id,
+                )
+                continue
+
+            transcript = None
+            try:
+                transcript = await transcripts_controller.get_by_recording_id(
+                    recording.id
+                )
+            except ValidationError:
+                await transcripts_controller.remove_by_recording_id(recording.id)
+                logger.warning(
+                    "Removed invalid transcript for recording",
+                    recording_id=recording.id,
+                )
+
+            if not recording.track_keys:
+                logger.warning(
+                    "Recording has no track_keys, cannot reprocess",
+                    recording_id=recording.id,
+                )
+                continue
+
+            logger.info(
+                "Queueing Daily recording for reprocessing",
+                recording_id=recording.id,
+                room_name=meeting.room_name,
+                track_count=len(recording.track_keys),
+                transcript_status=transcript.status if transcript else None,
+            )
+
+            process_multitrack_recording.delay(
+                bucket_name=bucket_name,
+                daily_room_name=meeting.room_name,
+                recording_id=recording.id,
+                track_keys=recording.track_keys,
+            )
+            reprocessed_count += 1
+    except Exception as e:
+        logger.error(
+            "Error checking Daily multitrack recordings",
+            error=str(e),
+            exc_info=True,
+        )
+
+    logger.info(
+        "Daily reprocessing complete",
+        requeued_count=reprocessed_count,
+    )
+    return reprocessed_count
+
+
 @shared_task
 @asynctask
 async def trigger_daily_reconciliation() -> None:
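The core of the `poll_daily_recordings` change above is a two-step pattern: narrow the API results to finished recordings via an `Optional`-returning converter, then diff their IDs against what is already stored. A self-contained sketch of that pattern (class and function names here are simplified stand-ins, not the project's actual models):

```python
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Recording:
    id: str
    status: str

    def to_finished(self) -> Optional["FinishedRecording"]:
        # Only recordings the API reports as finished are convertible;
        # anything else is skipped by the caller.
        if self.status != "finished":
            return None
        return FinishedRecording(id=self.id)


@dataclass
class FinishedRecording:
    id: str


def missing_finished(api: List[Recording], known_ids: Set[str]) -> List[FinishedRecording]:
    # Step 1: keep only recordings that convert to a finished form.
    finished = [f for rec in api if (f := rec.to_finished()) is not None]
    # Step 2: keep only those not already present in the database.
    return [rec for rec in finished if rec.id not in known_ids]


recs = [Recording("a", "finished"), Recording("b", "in-progress"), Recording("c", "finished")]
print([r.id for r in missing_finished(recs, {"a"})])  # ['c']
```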


@@ -123,6 +123,7 @@ async def send_transcript_webhook(
             "target_language": transcript.target_language,
             "status": transcript.status,
             "frontend_url": frontend_url,
+            "action_items": transcript.action_items,
         },
         "room": {
             "id": room.id,


@@ -1,12 +1,14 @@
 """Tests for LLM parse error recovery using llama-index Workflow"""

+from time import monotonic
 from unittest.mock import AsyncMock, MagicMock, patch

 import pytest
 from pydantic import BaseModel, Field
-from workflows.errors import WorkflowRuntimeError
+from workflows.errors import WorkflowRuntimeError, WorkflowTimeoutError

 from reflector.llm import LLM, LLMParseError, StructuredOutputWorkflow
+from reflector.utils.retry import RetryException


 class TestResponse(BaseModel):
@@ -355,3 +357,132 @@ class TestNetworkErrorRetries:
             # Only called once - Workflow doesn't retry network errors
             assert mock_settings.llm.acomplete.call_count == 1
+
+
+class TestWorkflowTimeoutRetry:
+    """Test timeout retry mechanism in get_structured_response"""
+
+    @pytest.mark.asyncio
+    async def test_timeout_retry_succeeds_on_retry(self, test_settings):
+        """Test that WorkflowTimeoutError triggers retry and succeeds"""
+        llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
+        call_count = {"count": 0}
+
+        async def workflow_run_side_effect(*args, **kwargs):
+            call_count["count"] += 1
+            if call_count["count"] == 1:
+                raise WorkflowTimeoutError("Operation timed out after 120 seconds")
+            return {
+                "success": TestResponse(
+                    title="Test", summary="Summary", confidence=0.95
+                )
+            }
+
+        with (
+            patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
+            patch("reflector.llm.TreeSummarize") as mock_summarize,
+            patch("reflector.llm.Settings") as mock_settings,
+        ):
+            mock_workflow = MagicMock()
+            mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
+            mock_workflow_class.return_value = mock_workflow
+            mock_summarizer = MagicMock()
+            mock_summarize.return_value = mock_summarizer
+            mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
+            mock_settings.llm.acomplete = AsyncMock(
+                return_value=make_completion_response(
+                    '{"title": "Test", "summary": "Summary", "confidence": 0.95}'
+                )
+            )
+
+            result = await llm.get_structured_response(
+                prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
+            )
+
+            assert result.title == "Test"
+            assert result.summary == "Summary"
+            assert call_count["count"] == 2
+
+    @pytest.mark.asyncio
+    async def test_timeout_retry_exhausts_after_max_attempts(self, test_settings):
+        """Test that timeout retry stops after max attempts"""
+        llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
+        call_count = {"count": 0}
+
+        async def workflow_run_side_effect(*args, **kwargs):
+            call_count["count"] += 1
+            raise WorkflowTimeoutError("Operation timed out after 120 seconds")
+
+        with (
+            patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
+            patch("reflector.llm.TreeSummarize") as mock_summarize,
+            patch("reflector.llm.Settings") as mock_settings,
+        ):
+            mock_workflow = MagicMock()
+            mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
+            mock_workflow_class.return_value = mock_workflow
+            mock_summarizer = MagicMock()
+            mock_summarize.return_value = mock_summarizer
+            mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
+            mock_settings.llm.acomplete = AsyncMock(
+                return_value=make_completion_response(
+                    '{"title": "Test", "summary": "Summary", "confidence": 0.95}'
+                )
+            )
+
+            with pytest.raises(RetryException, match="Retry attempts exceeded"):
+                await llm.get_structured_response(
+                    prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
+                )
+
+            assert call_count["count"] == 3
+
+    @pytest.mark.asyncio
+    async def test_timeout_retry_with_backoff(self, test_settings):
+        """Test that exponential backoff is applied between retries"""
+        llm = LLM(settings=test_settings, temperature=0.4, max_tokens=100)
+        call_times = []
+
+        async def workflow_run_side_effect(*args, **kwargs):
+            call_times.append(monotonic())
+            if len(call_times) < 3:
+                raise WorkflowTimeoutError("Operation timed out after 120 seconds")
+            return {
+                "success": TestResponse(
+                    title="Test", summary="Summary", confidence=0.95
+                )
+            }
+
+        with (
+            patch("reflector.llm.StructuredOutputWorkflow") as mock_workflow_class,
+            patch("reflector.llm.TreeSummarize") as mock_summarize,
+            patch("reflector.llm.Settings") as mock_settings,
+        ):
+            mock_workflow = MagicMock()
+            mock_workflow.run = AsyncMock(side_effect=workflow_run_side_effect)
+            mock_workflow_class.return_value = mock_workflow
+            mock_summarizer = MagicMock()
+            mock_summarize.return_value = mock_summarizer
+            mock_summarizer.aget_response = AsyncMock(return_value="Some analysis")
+            mock_settings.llm.acomplete = AsyncMock(
+                return_value=make_completion_response(
+                    '{"title": "Test", "summary": "Summary", "confidence": 0.95}'
+                )
+            )
+
+            result = await llm.get_structured_response(
+                prompt="Test prompt", texts=["Test text"], output_cls=TestResponse
+            )
+
+            assert result.title == "Test"
+            if len(call_times) >= 2:
+                time_between_calls = call_times[1] - call_times[0]
+                assert (
+                    time_between_calls >= 1.5
+                ), f"Expected ~2s backoff, got {time_between_calls}s"


@@ -266,7 +266,11 @@ async def mock_summary_processor():
     # When flush is called, simulate summary generation by calling the callbacks
     async def flush_with_callback():
         mock_summary.flush_called = True
-        from reflector.processors.types import FinalLongSummary, FinalShortSummary
+        from reflector.processors.types import (
+            ActionItems,
+            FinalLongSummary,
+            FinalShortSummary,
+        )

         if hasattr(mock_summary, "_callback"):
             await mock_summary._callback(
@@ -276,12 +280,19 @@ async def mock_summary_processor():
             await mock_summary._on_short_summary(
                 FinalShortSummary(short_summary="Test short summary", duration=10.0)
             )
+        if hasattr(mock_summary, "_on_action_items"):
+            await mock_summary._on_action_items(
+                ActionItems(action_items={"test": "action item"})
+            )

     mock_summary.flush = flush_with_callback

-    def init_with_callback(transcript=None, callback=None, on_short_summary=None):
+    def init_with_callback(
+        transcript=None, callback=None, on_short_summary=None, on_action_items=None
+    ):
         mock_summary._callback = callback
         mock_summary._on_short_summary = on_short_summary
+        mock_summary._on_action_items = on_action_items
         return mock_summary

     mock_summary_class.side_effect = init_with_callback


@@ -2,20 +2,29 @@
 import { Spinner, Link } from "@chakra-ui/react";
 import { useAuth } from "../lib/AuthProvider";
+import { usePathname } from "next/navigation";
+
+import { getLogoutRedirectUrl } from "../lib/auth";

 export default function UserInfo() {
   const auth = useAuth();
+  const pathname = usePathname();
   const status = auth.status;
   const isLoading = status === "loading";
   const isAuthenticated = status === "authenticated";
   const isRefreshing = status === "refreshing";
+  const callbackUrl = getLogoutRedirectUrl(pathname);

   return isLoading ? (
     <Spinner size="xs" className="mx-3" />
   ) : !isAuthenticated && !isRefreshing ? (
     <Link
-      href="/"
+      href="#"
       className="font-light px-2"
-      onClick={() => auth.signIn("authentik")}
+      onClick={(e) => {
+        e.preventDefault();
+        auth.signIn("authentik");
+      }}
     >
       Log in
     </Link>
@@ -23,7 +32,7 @@ export default function UserInfo() {
     <Link
       href="#"
       className="font-light px-2"
-      onClick={() => auth.signOut({ callbackUrl: "/" })}
+      onClick={() => auth.signOut({ callbackUrl })}
     >
       Log out
     </Link>


@@ -105,7 +105,19 @@ export default function DailyRoom({ meeting }: DailyRoomProps) {
       }
     });

-    await frame.join({ url: roomUrl });
+    await frame.join({
+      url: roomUrl,
+      sendSettings: {
+        video: {
+          // Optimize bandwidth for camera video
+          // allowAdaptiveLayers automatically adjusts quality based on network conditions
+          allowAdaptiveLayers: true,
+          // Use bandwidth-optimized preset as fallback for browsers without adaptive support
+          maxQuality: "medium",
+        },
+        // Note: screenVideo intentionally not configured to preserve full quality for screen shares
+      },
+    });
   } catch (error) {
     console.error("Error creating Daily frame:", error);
   }


@@ -18,3 +18,8 @@ export const LOGIN_REQUIRED_PAGES = [
 export const PROTECTED_PAGES = new RegExp(
   LOGIN_REQUIRED_PAGES.map((page) => `^${page}$`).join("|"),
 );
+
+export function getLogoutRedirectUrl(pathname: string): string {
+  const transcriptPagePattern = /^\/transcripts\/[^/]+$/;
+  return transcriptPagePattern.test(pathname) ? pathname : "/";
+}


@@ -31,7 +31,7 @@
     "ioredis": "^5.7.0",
     "jest-worker": "^29.6.2",
     "lucide-react": "^0.525.0",
-    "next": "^15.5.7",
+    "next": "^15.5.9",
     "next-auth": "^4.24.7",
     "next-themes": "^0.4.6",
     "nuqs": "^2.4.3",

www/pnpm-lock.yaml (generated, 444 lines changed)

} }
schema-utils@4.3.2: schema-utils@4.3.3:
resolution: resolution:
{ {
integrity: sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==, integrity: sha512-eflK8wEtyOE6+hsaRVPxvUKYCpRgzLqDTb8krvAsRIwOGlHoSgYLgBXoubGgLd2fT41/OUYdb48v4k4WWHQurA==,
} }
engines: { node: ">= 10.13.0" } engines: { node: ">= 10.13.0" }
@@ -7905,6 +7922,14 @@ packages:
engines: { node: ">=10" } engines: { node: ">=10" }
hasBin: true hasBin: true
semver@7.7.3:
resolution:
{
integrity: sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==,
}
engines: { node: ">=10" }
hasBin: true
serialize-javascript@6.0.2: serialize-javascript@6.0.2:
resolution: resolution:
{ {
@@ -7932,10 +7957,10 @@ packages:
} }
engines: { node: ">= 0.4" } engines: { node: ">= 0.4" }
sharp@0.34.3: sharp@0.34.5:
resolution: resolution:
{ {
integrity: sha512-eX2IQ6nFohW4DbvHIOLRB3MHFpYqaqvXd3Tp5e/T/dSH83fxaNJQRvDMhASmkNTsNTVF2/OOopzRCt7xokgPfg==, integrity: sha512-Ou9I5Ft9WNcCbXrU9cMgPBcCK8LiwLqcbywW3t4oDV37n1pzpuNLsYiAV8eODnjbtQlSDwZ2cUEeQz4E54Hltg==,
} }
engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 } engines: { node: ^18.17.0 || ^20.3.0 || >=21.0.0 }
@@ -8006,12 +8031,6 @@ packages:
integrity: sha512-D1SaWpOW8afq1CZGWB8xTfrT3FekjQmPValrqncJMX7QFl8YwhrPTZvMCANLtgBwwdS+7zURyqxDDEmY558tTw==, integrity: sha512-D1SaWpOW8afq1CZGWB8xTfrT3FekjQmPValrqncJMX7QFl8YwhrPTZvMCANLtgBwwdS+7zURyqxDDEmY558tTw==,
} }
simple-swizzle@0.2.2:
resolution:
{
integrity: sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==,
}
slash@3.0.0: slash@3.0.0:
resolution: resolution:
{ {
@@ -8325,17 +8344,17 @@ packages:
engines: { node: ">=14.0.0" } engines: { node: ">=14.0.0" }
hasBin: true hasBin: true
tapable@2.2.3: tapable@2.3.0:
resolution: resolution:
{ {
integrity: sha512-ZL6DDuAlRlLGghwcfmSn9sK3Hr6ArtyudlSAiCqQ6IfE+b+HHbydbYDIG15IfS5do+7XQQBdBiubF/cV2dnDzg==, integrity: sha512-g9ljZiwki/LfxmQADO3dEY1CbpmXT5Hm2fJ+QaGKwSXUylMybePR7/67YW7jOrrvjEgL1Fmz5kzyAjWVWLlucg==,
} }
engines: { node: ">=6" } engines: { node: ">=6" }
terser-webpack-plugin@5.3.14: terser-webpack-plugin@5.3.16:
resolution: resolution:
{ {
integrity: sha512-vkZjpUjb6OMS7dhV+tILUW6BhpDR7P2L/aQSAv+Uwk+m8KATX9EccViHTJR2qDtACKPIYndLGCyl3FMo+r2LMw==, integrity: sha512-h9oBFCWrq78NyWWVcSwZarJkZ01c2AyGrzs1crmHZO3QUg9D61Wu4NPjBy69n7JqylFF5y+CsUZYmYEIZ3mR+Q==,
} }
engines: { node: ">= 10.13.0" } engines: { node: ">= 10.13.0" }
peerDependencies: peerDependencies:
@@ -8351,10 +8370,10 @@ packages:
uglify-js: uglify-js:
optional: true optional: true
terser@5.44.0: terser@5.44.1:
resolution: resolution:
{ {
integrity: sha512-nIVck8DK+GM/0Frwd+nIhZ84pR/BX7rmXMfYwyg+Sri5oGVE99/E3KvXqpC2xHFxyqXyGHTKBSioxxplrO4I4w==, integrity: sha512-t/R3R/n0MSwnnazuPpPNVO60LX0SKL45pyl9YlvxIdkH0Of7D5qM2EVe+yASRIlY5pZ73nclYJfNANGWPwFDZw==,
} }
engines: { node: ">=10" } engines: { node: ">=10" }
hasBin: true hasBin: true
@@ -8633,6 +8652,12 @@ packages:
integrity: sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag==, integrity: sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag==,
} }
undici-types@7.16.0:
resolution:
{
integrity: sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==,
}
unified@11.0.5: unified@11.0.5:
resolution: resolution:
{ {
@@ -9365,6 +9390,11 @@ snapshots:
tslib: 2.8.1 tslib: 2.8.1
optional: true optional: true
"@emnapi/runtime@1.7.1":
dependencies:
tslib: 2.8.1
optional: true
"@emnapi/wasi-threads@1.0.4": "@emnapi/wasi-threads@1.0.4":
dependencies: dependencies:
tslib: 2.8.1 tslib: 2.8.1
@@ -9533,90 +9563,101 @@ snapshots:
"@humanwhocodes/retry@0.4.3": {} "@humanwhocodes/retry@0.4.3": {}
"@img/sharp-darwin-arm64@0.34.3": "@img/colour@1.0.0":
optional: true
"@img/sharp-darwin-arm64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-darwin-arm64": 1.2.0 "@img/sharp-libvips-darwin-arm64": 1.2.4
optional: true optional: true
"@img/sharp-darwin-x64@0.34.3": "@img/sharp-darwin-x64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-darwin-x64": 1.2.0 "@img/sharp-libvips-darwin-x64": 1.2.4
optional: true optional: true
"@img/sharp-libvips-darwin-arm64@1.2.0": "@img/sharp-libvips-darwin-arm64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-darwin-x64@1.2.0": "@img/sharp-libvips-darwin-x64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linux-arm64@1.2.0": "@img/sharp-libvips-linux-arm64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linux-arm@1.2.0": "@img/sharp-libvips-linux-arm@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linux-ppc64@1.2.0": "@img/sharp-libvips-linux-ppc64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linux-s390x@1.2.0": "@img/sharp-libvips-linux-riscv64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linux-x64@1.2.0": "@img/sharp-libvips-linux-s390x@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linuxmusl-arm64@1.2.0": "@img/sharp-libvips-linux-x64@1.2.4":
optional: true optional: true
"@img/sharp-libvips-linuxmusl-x64@1.2.0": "@img/sharp-libvips-linuxmusl-arm64@1.2.4":
optional: true optional: true
"@img/sharp-linux-arm64@0.34.3": "@img/sharp-libvips-linuxmusl-x64@1.2.4":
optional: true
"@img/sharp-linux-arm64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linux-arm64": 1.2.0 "@img/sharp-libvips-linux-arm64": 1.2.4
optional: true optional: true
"@img/sharp-linux-arm@0.34.3": "@img/sharp-linux-arm@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linux-arm": 1.2.0 "@img/sharp-libvips-linux-arm": 1.2.4
optional: true optional: true
"@img/sharp-linux-ppc64@0.34.3": "@img/sharp-linux-ppc64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linux-ppc64": 1.2.0 "@img/sharp-libvips-linux-ppc64": 1.2.4
optional: true optional: true
"@img/sharp-linux-s390x@0.34.3": "@img/sharp-linux-riscv64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linux-s390x": 1.2.0 "@img/sharp-libvips-linux-riscv64": 1.2.4
optional: true optional: true
"@img/sharp-linux-x64@0.34.3": "@img/sharp-linux-s390x@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linux-x64": 1.2.0 "@img/sharp-libvips-linux-s390x": 1.2.4
optional: true optional: true
"@img/sharp-linuxmusl-arm64@0.34.3": "@img/sharp-linux-x64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linuxmusl-arm64": 1.2.0 "@img/sharp-libvips-linux-x64": 1.2.4
optional: true optional: true
"@img/sharp-linuxmusl-x64@0.34.3": "@img/sharp-linuxmusl-arm64@0.34.5":
optionalDependencies: optionalDependencies:
"@img/sharp-libvips-linuxmusl-x64": 1.2.0 "@img/sharp-libvips-linuxmusl-arm64": 1.2.4
optional: true optional: true
"@img/sharp-wasm32@0.34.3": "@img/sharp-linuxmusl-x64@0.34.5":
optionalDependencies:
"@img/sharp-libvips-linuxmusl-x64": 1.2.4
optional: true
"@img/sharp-wasm32@0.34.5":
dependencies: dependencies:
"@emnapi/runtime": 1.4.5 "@emnapi/runtime": 1.7.1
optional: true optional: true
"@img/sharp-win32-arm64@0.34.3": "@img/sharp-win32-arm64@0.34.5":
optional: true optional: true
"@img/sharp-win32-ia32@0.34.3": "@img/sharp-win32-ia32@0.34.5":
optional: true optional: true
"@img/sharp-win32-x64@0.34.3": "@img/sharp-win32-x64@0.34.5":
optional: true optional: true
"@internationalized/date@3.8.2": "@internationalized/date@3.8.2":
@@ -9877,7 +9918,7 @@ snapshots:
"@tybys/wasm-util": 0.10.0 "@tybys/wasm-util": 0.10.0
optional: true optional: true
"@next/env@15.5.7": {} "@next/env@15.5.9": {}
"@next/eslint-plugin-next@15.5.3": "@next/eslint-plugin-next@15.5.3":
dependencies: dependencies:
@@ -10684,7 +10725,7 @@ snapshots:
"@sentry/core@8.55.0": {} "@sentry/core@8.55.0": {}
"@sentry/nextjs@10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)": "@sentry/nextjs@10.11.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1)(webpack@5.101.3)":
dependencies: dependencies:
"@opentelemetry/api": 1.9.0 "@opentelemetry/api": 1.9.0
"@opentelemetry/semantic-conventions": 1.37.0 "@opentelemetry/semantic-conventions": 1.37.0
@@ -10698,7 +10739,7 @@ snapshots:
"@sentry/vercel-edge": 10.11.0 "@sentry/vercel-edge": 10.11.0
"@sentry/webpack-plugin": 4.3.0(webpack@5.101.3) "@sentry/webpack-plugin": 4.3.0(webpack@5.101.3)
chalk: 3.0.0 chalk: 3.0.0
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0) next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
resolve: 1.22.8 resolve: 1.22.8
rollup: 4.50.1 rollup: 4.50.1
stacktrace-parser: 0.1.11 stacktrace-parser: 0.1.11
@@ -10829,7 +10870,7 @@ snapshots:
"@tanstack/query-core": 5.85.9 "@tanstack/query-core": 5.85.9
react: 18.3.1 react: 18.3.1
"@tsconfig/node10@1.0.11": "@tsconfig/node10@1.0.12":
optional: true optional: true
"@tsconfig/node12@1.0.11": "@tsconfig/node12@1.0.11":
@@ -10941,9 +10982,9 @@ snapshots:
dependencies: dependencies:
undici-types: 7.10.0 undici-types: 7.10.0
"@types/node@24.3.1": "@types/node@25.0.2":
dependencies: dependencies:
undici-types: 7.10.0 undici-types: 7.16.0
"@types/parse-json@4.0.2": {} "@types/parse-json@4.0.2": {}
@@ -12127,6 +12168,8 @@ snapshots:
caniuse-lite@1.0.30001734: {} caniuse-lite@1.0.30001734: {}
caniuse-lite@1.0.30001760: {}
ccount@2.0.1: {} ccount@2.0.1: {}
chalk@3.0.0: chalk@3.0.0:
@@ -12205,18 +12248,6 @@ snapshots:
color-name@1.1.4: {} color-name@1.1.4: {}
color-string@1.9.1:
dependencies:
color-name: 1.1.4
simple-swizzle: 0.2.2
optional: true
color@4.2.3:
dependencies:
color-convert: 2.0.1
color-string: 1.9.1
optional: true
colorette@1.4.0: {} colorette@1.4.0: {}
combined-stream@1.0.8: combined-stream@1.0.8:
@@ -12335,7 +12366,7 @@ snapshots:
detect-libc@1.0.3: detect-libc@1.0.3:
optional: true optional: true
detect-libc@2.0.4: detect-libc@2.1.2:
optional: true optional: true
detect-newline@3.1.0: {} detect-newline@3.1.0: {}
@@ -12405,10 +12436,10 @@ snapshots:
engine.io-parser@5.2.3: {} engine.io-parser@5.2.3: {}
enhanced-resolve@5.18.3: enhanced-resolve@5.18.4:
dependencies: dependencies:
graceful-fs: 4.2.11 graceful-fs: 4.2.11
tapable: 2.2.3 tapable: 2.3.0
err-code@3.0.1: {} err-code@3.0.1: {}
@@ -13147,9 +13178,6 @@ snapshots:
is-arrayish@0.2.1: {} is-arrayish@0.2.1: {}
is-arrayish@0.3.2:
optional: true
is-async-function@2.1.1: is-async-function@2.1.1:
dependencies: dependencies:
async-function: 1.0.0 async-function: 1.0.0
@@ -13629,7 +13657,7 @@ snapshots:
jest-worker@27.5.1: jest-worker@27.5.1:
dependencies: dependencies:
"@types/node": 24.3.1 "@types/node": 25.0.2
merge-stream: 2.0.0 merge-stream: 2.0.0
supports-color: 8.1.1 supports-color: 8.1.1
@@ -13738,7 +13766,7 @@ snapshots:
lines-and-columns@1.2.4: {} lines-and-columns@1.2.4: {}
loader-runner@4.3.0: {} loader-runner@4.3.1: {}
locate-path@5.0.0: locate-path@5.0.0:
dependencies: dependencies:
@@ -14093,13 +14121,13 @@ snapshots:
neo-async@2.6.2: {} neo-async@2.6.2: {}
next-auth@4.24.11(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1): next-auth@4.24.11(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
dependencies: dependencies:
"@babel/runtime": 7.28.2 "@babel/runtime": 7.28.2
"@panva/hkdf": 1.2.1 "@panva/hkdf": 1.2.1
cookie: 0.7.2 cookie: 0.7.2
jose: 4.15.9 jose: 4.15.9
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0) next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
oauth: 0.9.15 oauth: 0.9.15
openid-client: 5.7.1 openid-client: 5.7.1
preact: 10.27.0 preact: 10.27.0
@@ -14113,11 +14141,11 @@ snapshots:
react: 18.3.1 react: 18.3.1
react-dom: 18.3.1(react@18.3.1) react-dom: 18.3.1(react@18.3.1)
next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0): next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0):
dependencies: dependencies:
"@next/env": 15.5.7 "@next/env": 15.5.9
"@swc/helpers": 0.5.15 "@swc/helpers": 0.5.15
caniuse-lite: 1.0.30001734 caniuse-lite: 1.0.30001760
postcss: 8.4.31 postcss: 8.4.31
react: 18.3.1 react: 18.3.1
react-dom: 18.3.1(react@18.3.1) react-dom: 18.3.1(react@18.3.1)
@@ -14133,7 +14161,7 @@ snapshots:
"@next/swc-win32-x64-msvc": 15.5.7 "@next/swc-win32-x64-msvc": 15.5.7
"@opentelemetry/api": 1.9.0 "@opentelemetry/api": 1.9.0
sass: 1.90.0 sass: 1.90.0
sharp: 0.34.3 sharp: 0.34.5
transitivePeerDependencies: transitivePeerDependencies:
- "@babel/core" - "@babel/core"
- babel-plugin-macros - babel-plugin-macros
@@ -14159,12 +14187,12 @@ snapshots:
dependencies: dependencies:
path-key: 3.1.1 path-key: 3.1.1
nuqs@2.4.3(next@15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1): nuqs@2.4.3(next@15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0))(react@18.3.1):
dependencies: dependencies:
mitt: 3.0.1 mitt: 3.0.1
react: 18.3.1 react: 18.3.1
optionalDependencies: optionalDependencies:
next: 15.5.7(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0) next: 15.5.9(@babel/core@7.28.3)(@opentelemetry/api@1.9.0)(babel-plugin-macros@3.1.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(sass@1.90.0)
oauth@0.9.15: {} oauth@0.9.15: {}
@@ -14734,7 +14762,7 @@ snapshots:
dependencies: dependencies:
loose-envify: 1.4.0 loose-envify: 1.4.0
schema-utils@4.3.2: schema-utils@4.3.3:
dependencies: dependencies:
"@types/json-schema": 7.0.15 "@types/json-schema": 7.0.15
ajv: 8.17.1 ajv: 8.17.1
@@ -14749,6 +14777,9 @@ snapshots:
semver@7.7.2: {} semver@7.7.2: {}
semver@7.7.3:
optional: true
serialize-javascript@6.0.2: serialize-javascript@6.0.2:
dependencies: dependencies:
randombytes: 2.1.0 randombytes: 2.1.0
@@ -14775,34 +14806,36 @@ snapshots:
es-errors: 1.3.0 es-errors: 1.3.0
es-object-atoms: 1.1.1 es-object-atoms: 1.1.1
sharp@0.34.3: sharp@0.34.5:
dependencies: dependencies:
color: 4.2.3 "@img/colour": 1.0.0
detect-libc: 2.0.4 detect-libc: 2.1.2
semver: 7.7.2 semver: 7.7.3
optionalDependencies: optionalDependencies:
"@img/sharp-darwin-arm64": 0.34.3 "@img/sharp-darwin-arm64": 0.34.5
"@img/sharp-darwin-x64": 0.34.3 "@img/sharp-darwin-x64": 0.34.5
"@img/sharp-libvips-darwin-arm64": 1.2.0 "@img/sharp-libvips-darwin-arm64": 1.2.4
"@img/sharp-libvips-darwin-x64": 1.2.0 "@img/sharp-libvips-darwin-x64": 1.2.4
"@img/sharp-libvips-linux-arm": 1.2.0 "@img/sharp-libvips-linux-arm": 1.2.4
"@img/sharp-libvips-linux-arm64": 1.2.0 "@img/sharp-libvips-linux-arm64": 1.2.4
"@img/sharp-libvips-linux-ppc64": 1.2.0 "@img/sharp-libvips-linux-ppc64": 1.2.4
"@img/sharp-libvips-linux-s390x": 1.2.0 "@img/sharp-libvips-linux-riscv64": 1.2.4
"@img/sharp-libvips-linux-x64": 1.2.0 "@img/sharp-libvips-linux-s390x": 1.2.4
"@img/sharp-libvips-linuxmusl-arm64": 1.2.0 "@img/sharp-libvips-linux-x64": 1.2.4
"@img/sharp-libvips-linuxmusl-x64": 1.2.0 "@img/sharp-libvips-linuxmusl-arm64": 1.2.4
"@img/sharp-linux-arm": 0.34.3 "@img/sharp-libvips-linuxmusl-x64": 1.2.4
"@img/sharp-linux-arm64": 0.34.3 "@img/sharp-linux-arm": 0.34.5
"@img/sharp-linux-ppc64": 0.34.3 "@img/sharp-linux-arm64": 0.34.5
"@img/sharp-linux-s390x": 0.34.3 "@img/sharp-linux-ppc64": 0.34.5
"@img/sharp-linux-x64": 0.34.3 "@img/sharp-linux-riscv64": 0.34.5
"@img/sharp-linuxmusl-arm64": 0.34.3 "@img/sharp-linux-s390x": 0.34.5
"@img/sharp-linuxmusl-x64": 0.34.3 "@img/sharp-linux-x64": 0.34.5
"@img/sharp-wasm32": 0.34.3 "@img/sharp-linuxmusl-arm64": 0.34.5
"@img/sharp-win32-arm64": 0.34.3 "@img/sharp-linuxmusl-x64": 0.34.5
"@img/sharp-win32-ia32": 0.34.3 "@img/sharp-wasm32": 0.34.5
"@img/sharp-win32-x64": 0.34.3 "@img/sharp-win32-arm64": 0.34.5
"@img/sharp-win32-ia32": 0.34.5
"@img/sharp-win32-x64": 0.34.5
optional: true optional: true
shebang-command@2.0.0: shebang-command@2.0.0:
@@ -14857,11 +14890,6 @@ snapshots:
transitivePeerDependencies: transitivePeerDependencies:
- supports-color - supports-color
simple-swizzle@0.2.2:
dependencies:
is-arrayish: 0.3.2
optional: true
slash@3.0.0: {} slash@3.0.0: {}
socket.io-client@4.7.2: socket.io-client@4.7.2:
@@ -15086,18 +15114,18 @@ snapshots:
transitivePeerDependencies: transitivePeerDependencies:
- ts-node - ts-node
tapable@2.2.3: {} tapable@2.3.0: {}
terser-webpack-plugin@5.3.14(webpack@5.101.3): terser-webpack-plugin@5.3.16(webpack@5.101.3):
dependencies: dependencies:
"@jridgewell/trace-mapping": 0.3.31 "@jridgewell/trace-mapping": 0.3.31
jest-worker: 27.5.1 jest-worker: 27.5.1
schema-utils: 4.3.2 schema-utils: 4.3.3
serialize-javascript: 6.0.2 serialize-javascript: 6.0.2
terser: 5.44.0 terser: 5.44.1
webpack: 5.101.3 webpack: 5.101.3
terser@5.44.0: terser@5.44.1:
dependencies: dependencies:
"@jridgewell/source-map": 0.3.11 "@jridgewell/source-map": 0.3.11
acorn: 8.15.0 acorn: 8.15.0
@@ -15164,7 +15192,7 @@ snapshots:
ts-node@10.9.1(@types/node@24.2.1)(typescript@5.9.2): ts-node@10.9.1(@types/node@24.2.1)(typescript@5.9.2):
dependencies: dependencies:
"@cspotcode/source-map-support": 0.8.1 "@cspotcode/source-map-support": 0.8.1
"@tsconfig/node10": 1.0.11 "@tsconfig/node10": 1.0.12
"@tsconfig/node12": 1.0.11 "@tsconfig/node12": 1.0.11
"@tsconfig/node14": 1.0.3 "@tsconfig/node14": 1.0.3
"@tsconfig/node16": 1.0.4 "@tsconfig/node16": 1.0.4
@@ -15274,6 +15302,8 @@ snapshots:
undici-types@7.10.0: {} undici-types@7.10.0: {}
undici-types@7.16.0: {}
unified@11.0.5: unified@11.0.5:
dependencies: dependencies:
"@types/unist": 3.0.3 "@types/unist": 3.0.3
@@ -15427,19 +15457,19 @@ snapshots:
acorn-import-phases: 1.0.4(acorn@8.15.0) acorn-import-phases: 1.0.4(acorn@8.15.0)
browserslist: 4.25.2 browserslist: 4.25.2
chrome-trace-event: 1.0.4 chrome-trace-event: 1.0.4
enhanced-resolve: 5.18.3 enhanced-resolve: 5.18.4
es-module-lexer: 1.7.0 es-module-lexer: 1.7.0
eslint-scope: 5.1.1 eslint-scope: 5.1.1
events: 3.3.0 events: 3.3.0
glob-to-regexp: 0.4.1 glob-to-regexp: 0.4.1
graceful-fs: 4.2.11 graceful-fs: 4.2.11
json-parse-even-better-errors: 2.3.1 json-parse-even-better-errors: 2.3.1
loader-runner: 4.3.0 loader-runner: 4.3.1
mime-types: 2.1.35 mime-types: 2.1.35
neo-async: 2.6.2 neo-async: 2.6.2
schema-utils: 4.3.2 schema-utils: 4.3.3
tapable: 2.2.3 tapable: 2.3.0
terser-webpack-plugin: 5.3.14(webpack@5.101.3) terser-webpack-plugin: 5.3.16(webpack@5.101.3)
watchpack: 2.4.4 watchpack: 2.4.4
webpack-sources: 3.3.3 webpack-sources: 3.3.3
transitivePeerDependencies: transitivePeerDependencies: