feat: retake summary using NousResearch/Hermes-3-Llama-3.1-8B model (#415)

This introduces a new modal endpoint and a completely new way to build
the summary.

## SummaryBuilder

The summary builder is based on a conversational model, where an
exchange between the model and the user takes place. This allows
including more context and getting better adherence to the rules.

It requires an endpoint exposing an OpenAI-like chat completions API
(/v1/chat/completions).
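
For illustration, a minimal sketch of such an exchange using the `openai`
Python client; the base URL and the prompts are placeholders, not the
actual SummaryBuilder prompts:

```python
# Sketch of the conversational flow against an OpenAI-like endpoint.
# base_url and prompts are illustrative; the real SummaryBuilder drives
# its own exchange against the deployed /v1/chat/completions endpoint.
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="https://example.modal.run/v1", api_key="none")


async def ask(messages: list[dict]) -> str:
    response = await client.chat.completions.create(
        model="NousResearch/Hermes-3-Llama-3.1-8B",
        messages=messages,
    )
    return response.choices[0].message.content


async def summarize(transcript: str) -> str:
    # Each step appends to the same message list, so every later question
    # benefits from the context accumulated in the earlier turns.
    messages = [
        {
            "role": "user",
            "content": f"Here is a transcript:\n{transcript}\n\n"
            "Who are the participants?",
        }
    ]
    participants = await ask(messages)
    messages.append({"role": "assistant", "content": participants})
    messages.append({"role": "user", "content": "List the key subjects discussed."})
    subjects = await ask(messages)
    messages.append({"role": "assistant", "content": subjects})
    messages.append({"role": "user", "content": "Write a quick recap."})
    return await ask(messages)
```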

## vLLM Hermes3

Unlike the previous deployment, this one uses vLLM, which provides an
OpenAI-like completions endpoint out of the box. It can also handle
guided JSON generation, so jsonformer is no longer needed. In practice,
the model is quite good at following a JSON schema when simply asked in
the prompt.
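
As a sketch, the two options look like this (the base URL and schema are
placeholders; `guided_json` is the guided-decoding extension vLLM's
OpenAI-compatible server accepts via `extra_body`):

```python
# Illustrative only: two ways to get JSON out of the vLLM server.
from openai import OpenAI

client = OpenAI(base_url="https://example.modal.run/v1", api_key="none")
schema = {
    "type": "object",
    "properties": {"subjects": {"type": "array", "items": {"type": "string"}}},
    "required": ["subjects"],
}

# Option 1: guided JSON generation, constrained by vLLM at decode time.
response = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-8B",
    messages=[{"role": "user", "content": "List the key subjects."}],
    extra_body={"guided_json": schema},
)

# Option 2: put the schema in the prompt; Hermes-3 follows it well.
response = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-8B",
    messages=[
        {
            "role": "user",
            "content": f"List the key subjects as JSON matching this schema: {schema}",
        }
    ],
)
```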

## Conversion of long/short into summary builder

The builder identifies participants, finds key subjects, gets a summary
for each, then produces a quick recap.

The quick recap is used as the short_summary, while the markdown
combining the quick recap + key subjects + summaries is used as the
long_summary.
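
Roughly, with illustrative names (the real SummaryBuilder attributes may
differ):

```python
# Sketch of how the two summaries relate; field names are illustrative.
recap = "The team agreed to ship the new summary pipeline next week."
subjects = [
    ("Deployment", "vLLM replaces the previous jsonformer-based endpoint."),
]


def as_markdown(recap: str, subjects: list[tuple[str, str]]) -> str:
    # Quick recap first, then one h1 section per key subject.
    lines = ["# Quick recap", recap]
    for subject, summary in subjects:
        lines += [f"# {subject}", summary]
    return "\n\n".join(lines)


short_summary = recap                         # the recap alone
long_summary = as_markdown(recap, subjects)   # recap + subjects + summaries
```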

This is why the Next.js component has to be updated, to correctly style
h1 headings and preserve the newlines in the markdown.
The new `TranscriptFinalSummaryProcessor` (new file, 83 lines):

```python
from reflector.llm import LLM
from reflector.processors.base import Processor
from reflector.processors.summary.summary_builder import SummaryBuilder
from reflector.processors.types import (
    FinalLongSummary,
    FinalShortSummary,
    TitleSummary,
)


class TranscriptFinalSummaryProcessor(Processor):
    """
    Get the final (long and short) summary
    """

    INPUT_TYPE = TitleSummary
    OUTPUT_TYPE = FinalLongSummary

    def __init__(self, transcript=None, **kwargs):
        super().__init__(**kwargs)
        self.transcript = transcript
        self.chunks: list[TitleSummary] = []
        self.llm = LLM.get_instance(model_name="NousResearch/Hermes-3-Llama-3.1-8B")
        self.builder = None

    async def _push(self, data: TitleSummary):
        self.chunks.append(data)

    async def get_summary_builder(self, text) -> SummaryBuilder:
        builder = SummaryBuilder(self.llm)
        builder.set_transcript(text)
        await builder.identify_participants()
        await builder.generate_summary()
        return builder

    async def get_long_summary(self, text) -> str:
        if not self.builder:
            self.builder = await self.get_summary_builder(text)
        return self.builder.as_markdown()

    async def get_short_summary(self, text) -> str | None:
        if not self.builder:
            self.builder = await self.get_summary_builder(text)
        return self.builder.recap

    async def _flush(self):
        if not self.chunks:
            self.logger.warning("No summary to output")
            return

        # build the speaker map from the transcript
        speakermap = {}
        if self.transcript:
            speakermap = {
                participant["speaker"]: participant["name"]
                for participant in self.transcript.participants
            }

        # build the transcript as a single string, one "Name: text" line
        # per segment
        # XXX: unsure whether the participant names are already replaced
        # directly in speaker?
        text_transcript = []
        for topic in self.chunks:
            for segment in topic.transcript.as_segments():
                name = speakermap.get(segment.speaker, f"Speaker {segment.speaker}")
                text_transcript.append(f"{name}: {segment.text}")
        text_transcript = "\n".join(text_transcript)

        last_chunk = self.chunks[-1]
        duration = last_chunk.timestamp + last_chunk.duration

        long_summary = await self.get_long_summary(text_transcript)
        short_summary = await self.get_short_summary(text_transcript)

        final_long_summary = FinalLongSummary(
            long_summary=long_summary,
            duration=duration,
        )
        if short_summary:
            final_short_summary = FinalShortSummary(
                short_summary=short_summary,
                duration=duration,
            )
            await self.emit(final_short_summary, name="short_summary")
        await self.emit(final_long_summary)
```
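
For context, a hypothetical driver, assuming the Processor base class
exposes public push()/flush() wrappers around the _push()/_flush() hooks
(not shown in this diff):

```python
# Hypothetical usage; push()/flush() are assumed base-class wrappers.
async def run(transcript, topics: list[TitleSummary]):
    processor = TranscriptFinalSummaryProcessor(transcript=transcript)
    for topic in topics:
        await processor.push(topic)
    # flushing builds the full text transcript and emits the final long
    # summary plus, when a recap exists, the short summary
    await processor.flush()
```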