If the LLM gets stuck warming up, or an exception happens in the pipeline, the processor responsible for the exception fails and there is no fallback: audio continues to arrive, but no processing happens. While this should eventually be handled properly, especially after a disconnection, for now we ignore LLM warmup issues and just continue. Closes #140
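A minimal sketch of that behavior, with hypothetical names (`safe_warmup`, `llm.warmup` are illustrative, not the actual pipeline API): a warmup failure is logged and swallowed instead of killing the processor.

```python
import logging

logger = logging.getLogger(__name__)


def safe_warmup(llm) -> None:
    """Try to warm up the LLM, but never let a warmup failure
    take down the processing pipeline (hypothetical sketch)."""
    try:
        llm.warmup()
    except Exception:
        # Per #140: ignore warmup issues and keep processing audio.
        logger.exception("LLM warmup failed; continuing without warmup")
```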
Reflector
The Reflector server is currently responsible for audio transcription and summarization. The project is moving fast; the documentation is unstable and may be outdated.
Server
We currently use oobabooga as an LLM backend.
Using Docker
Create a .env with
LLM_URL=http://IP:PORT/api/v1/generate
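For example, if oobabooga is running locally on its default API port 5000 (an assumption; adjust to your deployment):

```
LLM_URL=http://127.0.0.1:5000/api/v1/generate
```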
Then start with:
$ docker-compose up
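For reference, a minimal sketch of how a client could call the generate endpoint configured above. The payload and response shape follow the legacy oobabooga (text-generation-webui) API and may differ in newer versions; this is an assumption, not Reflector's actual code:

```python
import os

import requests

# Assumption: LLM_URL from the .env above is loaded into the environment.
LLM_URL = os.environ["LLM_URL"]


def generate(prompt: str, max_new_tokens: int = 200) -> str:
    """Send a prompt to the oobabooga /api/v1/generate endpoint."""
    response = requests.post(
        LLM_URL,
        json={"prompt": prompt, "max_new_tokens": max_new_tokens},
        timeout=60,
    )
    response.raise_for_status()
    # The legacy API returns {"results": [{"text": "..."}]}.
    return response.json()["results"][0]["text"]


if __name__ == "__main__":
    print(generate("Summarize: the meeting covered Q3 planning."))
```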