mirror of
https://github.com/Monadical-SAS/reflector.git
synced 2025-12-20 20:29:06 +00:00
#
# This file serves as an example of a possible configuration.
# All the settings are described here: reflector/settings.py
#

## =======================================================
## Database
## =======================================================

#DATABASE_URL=sqlite://./reflector.db
#DATABASE_URL=postgresql://reflector:reflector@localhost:5432/reflector
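For reference, a database URL like those above decomposes into standard components; a minimal sketch using Python's urllib.parse (illustrative only; the actual parsing is up to the database driver configured in reflector/settings.py):

```python
from urllib.parse import urlparse

# Decompose a DATABASE_URL into its parts.
url = urlparse("postgresql://reflector:reflector@localhost:5432/reflector")

print(url.scheme)    # "postgresql": selects the database driver
print(url.username)  # "reflector"
print(url.hostname)  # "localhost"
print(url.port)      # 5432
print(url.path)      # "/reflector": the database name, with a leading slash
```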

## =======================================================
## User authentication
## =======================================================

## No authentication
#AUTH_BACKEND=none

## Using fief (fief.dev)
#AUTH_BACKEND=fief
#AUTH_FIEF_URL=https://your-fief-instance....
#AUTH_FIEF_CLIENT_ID=xxx
#AUTH_FIEF_CLIENT_SECRET=xxx

## =======================================================
## Public mode
## =======================================================

## If set to true, anonymous transcripts will be
## accessible to anybody.

#PUBLIC_MODE=false
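Every value in this file is a plain string, so a flag like PUBLIC_MODE has to be converted to a boolean somewhere; a minimal sketch of one common convention (reflector/settings.py may rely on a settings library that handles this differently):

```python
import os

def env_bool(name: str, default: bool = False) -> bool:
    """Read an environment variable and interpret common truthy spellings."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["PUBLIC_MODE"] = "false"
print(env_bool("PUBLIC_MODE"))  # False
```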

## =======================================================
## Transcription backend
##
## Check reflector/processors/audio_transcript_* for the
## full list of available transcription backends
## =======================================================

## Using local whisper (default)
#TRANSCRIPT_BACKEND=whisper
#WHISPER_MODEL_SIZE=tiny

## Using serverless modal.com (requires reflector-gpu-modal to be deployed)
#TRANSCRIPT_BACKEND=modal
#TRANSCRIPT_URL=https://xxxxx--reflector-transcriber-web.modal.run
#TRANSCRIPT_MODAL_API_KEY=xxxxx

## Using serverless banana.dev (requires reflector-gpu-banana to be deployed)
## XXX this service is buggy; do not use at the moment
## XXX it also requires the audio to be saved to S3
#TRANSCRIPT_BACKEND=banana
#TRANSCRIPT_URL=https://reflector-gpu-banana-xxxxx.run.banana.dev
#TRANSCRIPT_BANANA_API_KEY=xxx
#TRANSCRIPT_BANANA_MODEL_KEY=xxx
#TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=xxx
#TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=xxx
#TRANSCRIPT_STORAGE_AWS_BUCKET_NAME="reflector-bucket/chunks"
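Note that the bucket setting above packs a bucket name and a key prefix into a single value. A hypothetical helper to split it (the function name is illustrative, not reflector's actual code):

```python
def split_bucket_setting(value: str) -> tuple[str, str]:
    """Split "bucket/prefix" into (bucket, prefix); prefix may be empty."""
    bucket, _, prefix = value.partition("/")
    return bucket, prefix

print(split_bucket_setting("reflector-bucket/chunks"))  # ('reflector-bucket', 'chunks')
```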

## =======================================================
## LLM backend
##
## Check reflector/llm/* for the full list of available
## LLM backend implementations
## =======================================================

## Using oobabooga (default)
#LLM_BACKEND=oobabooga
#LLM_URL=http://xxx:7860/api/generate/v1

## Using serverless modal.com (requires reflector-gpu-modal to be deployed)
#LLM_BACKEND=modal
#LLM_URL=https://xxxxxx--reflector-llm-web.modal.run
#LLM_MODAL_API_KEY=xxx

## Using serverless banana.dev (requires reflector-gpu-banana to be deployed)
## XXX this service is buggy; do not use at the moment
#LLM_BACKEND=banana
#LLM_URL=https://reflector-gpu-banana-xxxxx.run.banana.dev
#LLM_BANANA_API_KEY=xxxxx
#LLM_BANANA_MODEL_KEY=xxxxx

## Using OpenAI
#LLM_BACKEND=openai
#LLM_OPENAI_KEY=xxx
#LLM_OPENAI_MODEL=gpt-3.5-turbo

## Using GPT4All (exposes an OpenAI-compatible API,
## hence the openai backend)
#LLM_BACKEND=openai
#LLM_URL=http://localhost:4891/v1/completions
#LLM_OPENAI_MODEL="GPT4All Falcon"
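An endpoint like the LLM_URL above accepts OpenAI-style completion requests. A sketch of how the request body could be assembled (field names follow the public OpenAI completions API; reflector's real client code lives under reflector/llm/ and may differ):

```python
import json

def build_completion_body(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Assemble the JSON body for an OpenAI-style /v1/completions request."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    })

body = build_completion_body("GPT4All Falcon", "Summarize the meeting:")
print(body)
```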

## Default LLM model name
DEFAULT_LLM=lmsys/vicuna-13b-v1.5

## Cache directory used to store models
CACHE_DIR=data
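One common convention for caching a model named like lmsys/vicuna-13b-v1.5 under CACHE_DIR is to flatten the "org/model" name into a single directory; this sketch is purely illustrative and may not match what reflector's model loader actually does:

```python
import os

def model_cache_path(cache_dir: str, model_name: str) -> str:
    """Map an "org/model" name to one directory under the cache dir."""
    return os.path.join(cache_dir, model_name.replace("/", "--"))

print(model_cache_path("data", "lmsys/vicuna-13b-v1.5"))
```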

## =======================================================
## Sentry
## =======================================================

## Sentry DSN configuration
#SENTRY_DSN=