
Reflector

The Reflector server currently handles meeting audio transcription and summarization using 100% local ML models. The project is moving fast; the documentation may be unstable or outdated.

Server

We currently use oobabooga (text-generation-webui) as the LLM backend.

Using docker

Create a .env file containing:

LLM_URL=http://HOST:PORT/api/v1/generate
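LLM_URL points at the backend's text generation endpoint. As a sketch of how a client might call it (the payload fields follow oobabooga's legacy /api/v1/generate API; the function name, default URL, and parameter values are assumptions, not the project's actual code):

```python
import json
import os
import urllib.request

def build_generate_request(prompt: str, max_new_tokens: int = 200) -> urllib.request.Request:
    # Read the endpoint configured in .env; the fallback URL is an assumption.
    url = os.environ.get("LLM_URL", "http://localhost:5000/api/v1/generate")
    # oobabooga's legacy generate API accepts a JSON body with at least
    # "prompt" and "max_new_tokens".
    payload = json.dumps({"prompt": prompt, "max_new_tokens": max_new_tokens}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

# urllib.request.urlopen(build_generate_request("Summarize the meeting."))
# would send the request to the configured backend.
```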

Then start with:

$ docker-compose up
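For reference, a minimal compose file that picks up the .env might look like this (the service name, build context, and port are assumptions, not the project's actual configuration):

```yaml
services:
  server:
    build: .
    env_file: .env      # provides LLM_URL to the container
    ports:
      - "8000:8000"     # assumed API port
```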