Mirror of https://github.com/Monadical-SAS/reflector.git (synced 2025-12-20 12:19:06 +00:00)
docs: add AGPL-v3 license and update README (#487)
LICENSE (new file, +9 lines)

@@ -0,0 +1,9 @@
MIT License

Copyright (c) 2025 Monadical SAS

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
README.md (166 lines changed)

@@ -1,46 +1,28 @@
<div align="center">

# Reflector

Reflector Audio Management and Analysis is a cutting-edge web application under development by Monadical. It utilizes AI to record meetings, providing a permanent record with transcripts, translations, and automated summaries.

[pytests workflow](https://github.com/monadical-sas/cubbi/actions/workflows/pytests.yml)
[AGPL-v3 license](https://opensource.org/licenses/AGPL-v3)

</div>

## Background
The project architecture consists of three primary components:

- **Front-End**: NextJS React project hosted on Vercel, located in `www/`.
- **Back-End**: Python server that offers an API and data persistence, found in `server/`.
- **GPU implementation**: Provides services such as speech-to-text transcription, topic generation, automated summaries, and translations. The most reliable option is the Modal deployment.

It also uses authentik for authentication if activated, and Vercel for deployment and configuration of the front-end.

## Contribution Guidelines

All new contributions should be made in a separate branch and go through a Pull Request.

[Conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) must be used for the PR title and commits.

## Usage

To record both your voice and the meeting you're taking part in, you need:
@@ -66,13 +48,13 @@ Note: We currently do not have instructions for Windows users.
- Then go to `System Preferences -> Sound` and choose the devices created from the Output and Input tabs.
- If everything is configured properly, the input from your local microphone and the browser-run meeting are aggregated into one virtual stream to listen to, and the output is fed back to your specified output devices.
## Front-End
|
## Installation
|
||||||
|
|
||||||
Start with `cd www`.
|
### Frontend
|
||||||
|
|
||||||
### Installation
|
Start with `cd backend`.
|
||||||
|
|
||||||
To install the application, run:
|
**Installation**
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
yarn install
|
yarn install
|
||||||
@@ -82,9 +64,7 @@ cp config-template.ts config.ts

Then, fill in the environment variables in `.env` and the configuration in `config.ts` as needed. If you are unsure how to proceed, ask in Zulip.

**Run in development mode**

```bash
yarn dev
```

Then (after completing server setup and starting it) open [http://localhost:3000](http://localhost:3000) to view it in the browser.

**OpenAPI Code Generation**

To generate the TypeScript files from the openapi.json file, make sure the Python server is running, then run:

```bash
yarn openapi
```

### Backend

Start with `cd server`.

**Installation**

```bash
poetry install
```
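Before installing dependencies, it can be worth confirming the interpreter version; the original README targeted Python 3.11 (this may have changed since, so treat the version number as a guideline):

```bash
# Check the Python version before running poetry install.
# Python 3.11 was the version named in the original README.
python --version 2>/dev/null || python3 --version
```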

**Run in development mode**

```bash
docker compose up -d redis

# on the first run, or if the schemas changed
poetry run alembic upgrade head

# start the worker
poetry run celery -A reflector.worker.app worker --loglevel=info

# start the app
poetry run python -m reflector.app
```
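The worker and the app both depend on Redis being reachable. A quick sanity check, assuming `redis-cli` is on your PATH (if you only run Redis through Docker, `docker compose exec redis redis-cli ping` should do the same, based on the `redis` service name used above):

```bash
# Sanity check: Redis should answer PONG before the worker is started.
# Assumes redis-cli is installed locally.
redis-cli ping 2>/dev/null || echo "Redis is not answering; is it running?"
```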

Then fill `.env` with the omitted values (ask in Zulip).
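A small, hypothetical helper for spotting which values are still missing; it assumes unset values appear as `KEY=` with nothing after the equals sign, which is only a convention, not something this README guarantees:

```bash
# List .env keys that still have no value (hypothetical helper; assumes
# the KEY= convention for unset values).
grep -E '^[A-Za-z_]+=[[:space:]]*$' .env 2>/dev/null \
  || echo "no empty keys found (or no .env file here)"
```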

**Crontab (optional)**

For crontab (only the healthcheck for now), start celery beat (you don't need it on your local dev environment):

```bash
poetry run celery -A reflector.worker.app beat
```

### GPU models

Currently, Reflector heavily uses custom local models deployed on Modal. All the microservices are available in `server/gpu/`.

To deploy LLM changes to Modal, you need:
- a modal account
- set up the required secret in your modal account (`REFLECTOR_GPU_APIKEY`)
- install the modal cli
- connect your modal cli to your account if not done previously
- `modal run path/to/required/llm`
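The steps above might look like the following session; the service path is a placeholder, not a real file from `server/gpu/`:

```bash
# Sketch of a Modal deploy. The .py path is a placeholder, not a real
# file from server/gpu/.
if command -v modal >/dev/null 2>&1; then
  modal run server/gpu/some_service.py   # replace with the real service file
else
  echo "modal CLI not found; install it first (pip install modal)"
fi
```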

## Using local files

You can manually process an audio file by calling the process tool:

```bash
poetry run python -m reflector.tools.process path/to/audio.wav
```
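The same tool can be looped over a directory of recordings; a sketch, where `recordings/` is an example path, not one from this repository:

```bash
# Batch-process every .wav in a directory (directory name is an example).
for f in recordings/*.wav; do
  [ -e "$f" ] || { echo "no .wav files in recordings/"; break; }
  poetry run python -m reflector.tools.process "$f"
done
```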