Mathieu Virbel d94e2911c3 Serverless GPU support on banana.dev (#106)
* serverless: implement banana backend for both audio and LLM

Related to monadical-sas/reflector-gpu-banana project

* serverless: got llm working on banana !

* tests: fixes

* serverless: fix dockerfile to use fastapi server + httpx
2023-08-04 10:24:11 +02:00

Reflector

The Reflector server currently handles audio transcription and summarization. The project is moving fast; the documentation is unstable and may be outdated.

Server

We currently use oobabooga (text-generation-webui) as the LLM backend.

Using docker

Create a .env file pointing at your LLM backend:

LLM_URL=http://IP:PORT/api/v1/generate

Then start with:

$ docker-compose up
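Once the stack is up, you can check that the LLM backend is reachable with a short Python sketch. This assumes the oobabooga legacy generate API (POST body with "prompt" and "max_new_tokens", response shaped as {"results": [{"text": ...}]}); the helper names here are illustrative, not part of Reflector's codebase.

```python
# Sketch: query the LLM backend configured via LLM_URL.
# Assumes the oobabooga legacy /api/v1/generate request/response shape;
# adjust field names if your backend version differs.
import json
import os
import urllib.request


def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    # Request body for POST /api/v1/generate
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}


def parse_response(body: dict) -> str:
    # Responses look like {"results": [{"text": "..."}]}
    return body["results"][0]["text"]


def generate(prompt: str) -> str:
    # Requires a running backend and LLM_URL in the environment,
    # e.g. http://IP:PORT/api/v1/generate
    url = os.environ["LLM_URL"]
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(json.load(resp))
```

Calling generate("Summarize this meeting.") returns the generated text, assuming the backend is running and LLM_URL is set.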
Description
100% local ML models for meeting transcription and analysis
README · MIT license · 84 MiB
Languages
Python 72.3%
TypeScript 26.9%
JavaScript 0.3%
Dockerfile 0.2%