mirror of
https://github.com/Monadical-SAS/reflector.git
synced 2025-12-20 20:29:06 +00:00
99 lines
2.9 KiB
Plaintext
#
# This file serves as an example of a possible configuration
# All the settings are described here: reflector/settings.py
#

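The real field definitions live in reflector/settings.py; as a rough sketch (the function and field names here are hypothetical, chosen only to mirror the variables in this file), an env-driven settings loader parses these strings into typed values:

```python
# Hypothetical sketch of env parsing; reflector/settings.py defines
# the actual fields and their defaults.

def load_settings(env: dict) -> dict:
    return {
        # plain strings
        "auth_backend": env.get("AUTH_BACKEND", "jwt"),
        "transcript_backend": env.get("TRANSCRIPT_BACKEND", "whisper"),
        # .env booleans arrive as strings and must be normalized
        "diarization_enabled": env.get("DIARIZATION_ENABLED", "false").lower()
        in ("1", "true", "yes"),
        # token budgets are integers
        "summary_llm_context_size_tokens": int(
            env.get("SUMMARY_LLM_CONTEXT_SIZE_TOKENS", "16000")
        ),
    }


settings = load_settings(
    {
        "AUTH_BACKEND": "jwt",
        "DIARIZATION_ENABLED": "false",
    }
)
```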
## =======================================================
## User authentication
## =======================================================

## Using jwt/authentik
AUTH_BACKEND=jwt
AUTH_JWT_AUDIENCE=

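AUTH_JWT_AUDIENCE is the expected `aud` claim of incoming tokens. As an illustrative sketch (not reflector's actual verification code, which must also check signature, expiry, and issuer), audience matching has to handle both the string and list forms that RFC 7519 allows:

```python
def audience_matches(claims: dict, expected: str) -> bool:
    """Check a decoded JWT's "aud" claim against the configured audience.

    Illustrative only: real verification (signature, expiry, issuer)
    should be left to a JWT library.
    """
    aud = claims.get("aud")
    if aud is None:
        return False
    if isinstance(aud, str):
        return aud == expected
    # RFC 7519 section 4.1.3 allows "aud" to be an array of strings
    return expected in aud
```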
## =======================================================
## Transcription backend
##
## Check reflector/processors/audio_transcript_* for the
## full list of available transcription backends
## =======================================================

## Using local whisper
#TRANSCRIPT_BACKEND=whisper
#WHISPER_MODEL_SIZE=tiny

## Using serverless modal.com (requires reflector-gpu-modal to be deployed)
#TRANSCRIPT_BACKEND=modal
#TRANSCRIPT_URL=https://xxxxx--reflector-transcriber-web.modal.run
#TRANSLATE_URL=https://xxxxx--reflector-translator-web.modal.run
#TRANSCRIPT_MODAL_API_KEY=xxxxx

TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://monadical-sas--reflector-transcriber-web.modal.run
TRANSCRIPT_MODAL_API_KEY=***REMOVED***

## =======================================================
## Translation backend
##
## Only available on modal at the moment
## =======================================================
TRANSLATE_URL=https://monadical-sas--reflector-translator-web.modal.run

## =======================================================
## LLM backend
##
## Responsible for titles and short summaries
## Check reflector/llm/* for the full list of available
## LLM backend implementations
## =======================================================

## Using serverless modal.com (requires reflector-gpu-modal to be deployed)
LLM_BACKEND=modal
LLM_URL=https://monadical-sas--reflector-llm-web.modal.run
LLM_MODAL_API_KEY=***REMOVED***
ZEPHYR_LLM_URL=https://monadical-sas--reflector-llm-zephyr-web.modal.run

## Using OpenAI
#LLM_BACKEND=openai
#LLM_OPENAI_KEY=xxx
#LLM_OPENAI_MODEL=gpt-3.5-turbo

## Using GPT4ALL
#LLM_BACKEND=openai
#LLM_URL=http://localhost:4891/v1/completions
#LLM_OPENAI_MODEL="GPT4All Falcon"

## Default LLM model name
#DEFAULT_LLM=lmsys/vicuna-13b-v1.5

## Cache directory to store models
CACHE_DIR=data

## =======================================================
## Summary LLM configuration
## =======================================================

## Context size for summary generation (tokens)
SUMMARY_LLM_CONTEXT_SIZE_TOKENS=16000
SUMMARY_LLM_URL=
SUMMARY_LLM_API_KEY=sk-
SUMMARY_MODEL=

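Transcripts longer than the summary model's context window have to be summarized in pieces. A minimal sketch of that idea (a hypothetical helper, not reflector's actual chunking, using a rough 4-characters-per-token estimate instead of a real tokenizer):

```python
def chunk_for_context(
    text: str, context_tokens: int = 16000, chars_per_token: int = 4
) -> list[str]:
    """Split text into pieces that each fit within the context budget.

    Hypothetical helper: reflector's summarizer has its own chunking;
    chars_per_token=4 is only a coarse estimate for English text.
    """
    budget = max(1, context_tokens * chars_per_token)
    return [text[i : i + budget] for i in range(0, len(text), budget)]
```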
## =======================================================
## Diarization
##
## Only available on modal
## To allow diarization, you need to expose the files to be
## downloaded by the pipeline
## =======================================================
DIARIZATION_ENABLED=false
DIARIZATION_URL=https://monadical-sas--reflector-diarizer-web.modal.run

## =======================================================
## Sentry
## =======================================================

## Sentry DSN configuration
#SENTRY_DSN=