
Reflector

The Reflector server currently handles audio transcription and summarization. The project is moving fast, so the documentation is unstable and may be outdated.

Server

We currently use oobabooga (text-generation-webui) as the LLM backend.

Using docker

Create a .env file with:

LLM_URL=http://HOST:PORT/api/v1/generate
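As a rough sketch of what LLM_URL serves, here is how a Python client might call an oobabooga-style /api/v1/generate endpoint. The payload fields and response shape follow text-generation-webui's legacy API; the URL, port, and helper names below are assumptions for illustration, not part of this repository.

```python
import json
import urllib.request

# Assumption: a text-generation-webui instance reachable at this address.
LLM_URL = "http://localhost:5000/api/v1/generate"


def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    """Build the JSON body expected by /api/v1/generate."""
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}


def extract_text(response_body: dict) -> str:
    """Pull the generated text out of the API's response structure."""
    return response_body["results"][0]["text"]


def generate(prompt: str) -> str:
    """POST the prompt to LLM_URL and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LLM_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_text(json.load(resp))
```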

Then start with:

$ docker-compose up
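For orientation only, a compose file for this setup might look roughly like the sketch below. The service name, image build, and port mapping are assumptions; the repository's own docker-compose.yml defines the real services.

```yaml
# Hypothetical minimal docker-compose.yml sketch, not the repository's actual file.
version: "3.8"
services:
  reflector:
    build: .            # assumes a Dockerfile at the repository root
    env_file: .env      # picks up LLM_URL from the .env created above
    ports:
      - "8000:8000"     # assumed server port
```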