Reflector

Reflector server is responsible for audio transcription and summarization for now. The project is moving fast; the documentation is currently unstable and may be outdated.

Processors

Each processor is standalone, with a defined INPUT/OUTPUT. A processor can be threaded or not (this can be made extensible later).

TODO: a Pipeline that automatically connects all processors, then flushes and cleans up data.

To test:

```
python -m reflector.processors tests/records/test_mathieu_hello.wav
```

```
Transcript: [00:00.500]: Hi there, everyone.
Transcript: [00:02.700]: Today, I want to share my incredible experience.
Transcript: [00:05.461]: with Reflector, a cutineage product that revolutionizes audio processing.
Transcript: [00:10.922]: With Refector, I can easily convert any audio into accurate transcription.
Transcript: [00:16.493]: serving me hours of tedious manual work.
```

This is not a good transcript, but transcript quality is not the point here.
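As an illustration only, here is a minimal sketch of how standalone processors with declared INPUT/OUTPUT types could be chained together. The class and attribute names below are hypothetical and are not the actual `reflector.processors` API:

```python
# Hypothetical sketch, not the actual reflector.processors API.
from dataclasses import dataclass


@dataclass
class AudioChunk:
    samples: bytes
    timestamp: float


@dataclass
class TranscriptSegment:
    text: str
    timestamp: float


class Processor:
    """Each processor is standalone and declares its INPUT/OUTPUT types."""

    INPUT = None
    OUTPUT = None

    def process(self, data):
        raise NotImplementedError

    def flush(self):
        """Emit any buffered data when the pipeline is drained."""
        return None


class TranscribeProcessor(Processor):
    INPUT = AudioChunk
    OUTPUT = TranscriptSegment

    def process(self, chunk: AudioChunk) -> TranscriptSegment:
        # A real implementation would run speech-to-text here.
        return TranscriptSegment(text="...", timestamp=chunk.timestamp)


def connect(*processors):
    """Verify that each processor's OUTPUT matches the next processor's INPUT."""
    for upstream, downstream in zip(processors, processors[1:]):
        if upstream.OUTPUT is not downstream.INPUT:
            raise TypeError(
                f"{type(upstream).__name__} produces {upstream.OUTPUT!r} "
                f"but {type(downstream).__name__} expects {downstream.INPUT!r}"
            )
    return processors
```

A Pipeline along these lines could push data through the chain and call flush() on every processor once the input is exhausted, which is what the TODO above refers to.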
Server
We currently use oobabooga as the LLM backend.
Using docker
Create a .env file with:

```
LLM_URL=http://IP:PORT/api/v1/generate
```
Then start with:

```
docker-compose up
```
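To sanity-check that the backend is reachable, a quick sketch like the following can help. It assumes oobabooga's legacy blocking API at /api/v1/generate and reads LLM_URL from your environment; the prompt and payload fields shown are illustrative, not part of Reflector:

```python
# Minimal sketch (not part of Reflector): check that the LLM backend
# configured in .env responds. Assumes oobabooga's legacy blocking API,
# where POST /api/v1/generate takes a JSON prompt and returns generated text.
import os

import requests

llm_url = os.environ["LLM_URL"]  # e.g. http://IP:PORT/api/v1/generate

payload = {
    "prompt": "Summarize in one sentence: Reflector turns audio into transcripts.",
    "max_new_tokens": 64,
}

resp = requests.post(llm_url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```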