Mathieu Virbel 509840cb4c processors: Introduce processors implementation
Each processor is standalone, with defined INPUT/OUTPUT.
Processors can be threaded or not (extensible later).
TODO: pipeline that automatically connects all processors, flushes and cleans data.
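
The design described above might be sketched roughly like this. This is a hypothetical illustration, not the actual implementation: the `Processor`/`ThreadedProcessor` class names and the `connect`/`push` methods are assumptions made for the example.

```python
import queue
import threading


class Processor:
    # Declared input/output types; connect() checks that they match.
    INPUT = None
    OUTPUT = None

    def __init__(self):
        self.next_processor = None

    def connect(self, processor):
        # Wiring rule: our OUTPUT must match the next processor's INPUT.
        assert self.OUTPUT == processor.INPUT, "incompatible processors"
        self.next_processor = processor
        return processor

    def push(self, data):
        # Process synchronously and forward the result downstream.
        result = self.process(data)
        if result is not None and self.next_processor is not None:
            self.next_processor.push(result)

    def process(self, data):
        raise NotImplementedError


class ThreadedProcessor(Processor):
    # Same contract, but process() runs on a worker thread fed by a queue,
    # so a slow stage does not block the producer.
    def __init__(self):
        super().__init__()
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def push(self, data):
        self._queue.put(data)

    def _run(self):
        while True:
            result = self.process(self._queue.get())
            if result is not None and self.next_processor is not None:
                self.next_processor.push(result)
```

A concrete processor would subclass one of these, set INPUT/OUTPUT, and implement `process()`; chaining `a.connect(b).connect(c)` would then form a pipeline.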

To test: python -m reflector.processors tests/records/test_mathieu_hello.wav

```
Transcript: [00:00.500]:  Hi there, everyone.
Transcript: [00:02.700]:  Today, I want to share my incredible experience.
Transcript: [00:05.461]:  with Reflector, a cutineage product that revolutionizes audio processing.
Transcript: [00:10.922]:  With Refector, I can easily convert any audio into accurate transcription.
Transcript: [00:16.493]:  serving me hours of tedious manual work.
```

This is not a good transcript, but that is not the point here.

Reflector

For now, the Reflector server is responsible for audio transcription and summarization. The project is moving fast; the documentation is currently unstable and may be outdated.

Server

We currently use oobabooga as the LLM backend.

Using docker

Create a .env file with:

LLM_URL=http://IP:PORT/api/v1/generate
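
For reference, a request against that generate endpoint can be sketched as below. This is a minimal sketch assuming oobabooga's legacy HTTP API shape (a JSON body with `prompt`/`max_new_tokens`, answered by `{"results": [{"text": ...}]}`); the helper names are hypothetical.

```python
import json
import os
import urllib.request

# LLM_URL comes from the .env; the default below is only a placeholder.
LLM_URL = os.environ.get("LLM_URL", "http://127.0.0.1:5000/api/v1/generate")


def build_payload(prompt, max_new_tokens=200):
    # Request body for the /api/v1/generate endpoint (assumed shape).
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}


def parse_response(body):
    # Assumed reply shape: {"results": [{"text": "..."}]}.
    return body["results"][0]["text"]


def generate(prompt, max_new_tokens=200):
    data = json.dumps(build_payload(prompt, max_new_tokens)).encode()
    req = urllib.request.Request(
        LLM_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(json.load(resp))
```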

Then start with:

$ docker-compose up

Description
100% local ML models for meeting transcription and analysis