Reflector

Reflector runs 100% local ML models for meeting transcription and analysis. The server currently handles audio transcription and summarization. The project is moving fast, so the documentation may be unstable or outdated.

Server

We currently use oobabooga as an LLM backend.

Using docker

Create a .env file with:

LLM_URL=http://IP:PORT/api/v1/generate

Then start with:

$ docker-compose up
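
As a quick sanity check that the backend is reachable, something like the following can be run against the URL from .env. This is only an illustrative sketch, not part of the repository; it assumes oobabooga's legacy blocking API, where /api/v1/generate accepts a JSON prompt and returns the generated text under results[0].text.

import os
import requests

# Read the backend URL configured in .env, e.g. http://IP:PORT/api/v1/generate
llm_url = os.environ["LLM_URL"]

# Assumed request shape for oobabooga's legacy blocking API
payload = {
    "prompt": "Summarize: the meeting covered the project roadmap.",
    "max_new_tokens": 128,
}

resp = requests.post(llm_url, json=payload, timeout=60)
resp.raise_for_status()

# Assumed response shape: {"results": [{"text": "..."}]}
print(resp.json()["results"][0]["text"])

If this prints a completion, the LLM_URL value is usable by the server.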