Mirror of https://github.com/Monadical-SAS/reflector.git, synced 2025-12-20 12:19:06 +00:00
Upgrade to latest openapi codegen
Reflector Audio Management and Analysis is a cutting-edge web application under …

The project architecture consists of three primary components:
- **Front-End**: Next.js React project hosted on Vercel, located in `www/`.
- **Back-End**: Python server that offers an API and data persistence, found in `server/`.
- **GPU implementation**: Provides services such as speech-to-text transcription, topic generation, automated summaries, and translations.

It also uses [Fief](https://github.com/fief-dev) for authentication, and Vercel for deployment and configuration of the front-end.

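To make the division of labor concrete, here is a minimal, purely illustrative sketch of how the GPU pipeline stages might compose. Every name below is hypothetical; Reflector's actual interfaces live in `server/` and are not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical data model: each stage of the GPU pipeline enriches a transcript.
@dataclass
class Transcript:
    text: str
    topics: list[str] = field(default_factory=list)
    summary: str = ""

def transcribe(audio_bytes: bytes) -> Transcript:
    # Stand-in for speech-to-text; a real backend would run a model here.
    return Transcript(text=f"<{len(audio_bytes)} bytes of audio transcribed>")

def add_topics(t: Transcript) -> Transcript:
    # Stand-in for topic generation.
    t.topics = ["placeholder-topic"]
    return t

def summarize(t: Transcript) -> Transcript:
    # Stand-in for automated summary.
    t.summary = t.text[:40]
    return t

result = summarize(add_topics(transcribe(b"\x00" * 16)))
print(result.topics)  # -> ['placeholder-topic']
```

The point is only the shape of the flow: audio in, progressively enriched transcript out.
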
All new contributions should be made in a separate branch. Before any code is merged into `main`, it requires a code review.
### How to Install Blackhole (Mac Only)
To record both your voice and the meeting you're taking part in, you need:

- For an in-person meeting, make sure your microphone is in range of all participants.
- If using several microphones, make sure to merge the audio feeds into one with an external tool.
- For an online meeting, if you do not use headphones, your microphone should be able to pick up both your voice and the audio feed of the meeting.
- If you want to use headphones, you need to merge the audio feeds with an external tool.

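"Merging the audio feeds into one" simply means mixing the sample streams. As a rough illustration only (a toy sketch over 16-bit PCM samples, not what any real audio-routing tool does internally):

```python
def mix_feeds(mic: list[int], meeting: list[int]) -> list[int]:
    """Average two 16-bit PCM sample streams into one mono stream,
    clamping results to the valid sample range."""
    n = max(len(mic), len(meeting))
    mic = mic + [0] * (n - len(mic))          # pad the shorter feed with silence
    meeting = meeting + [0] * (n - len(meeting))
    mixed = [(a + b) // 2 for a, b in zip(mic, meeting)]
    return [max(-32768, min(32767, s)) for s in mixed]

print(mix_feeds([1000, 2000], [3000, -2000, 500]))  # -> [2000, 0, 250]
```

Dedicated tools do this in real time at the driver level, which is why an external tool is recommended rather than mixing by hand.
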
Blackhole is an external tool for merging the audio feeds, as explained in the previous section of this document.
Note: We currently do not have instructions for Windows users.

- Install [Blackhole](https://github.com/ExistentialAudio/BlackHole)-2ch (2 channels is enough) by one of the two options listed there.
- Set up an ["Aggregate device"](https://github.com/ExistentialAudio/BlackHole/wiki/Aggregate-Device) to route web audio and local microphone input.
- Set up a [Multi-Output device](https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Output-Device).
- Then go to `System Preferences -> Sound` and choose the devices created from the Output and Input tabs.
- If everything is configured properly, the input from your local microphone and the browser-run meeting will be aggregated into one virtual stream, and the output will be fed back to your specified output devices.

Permissions:
You may have to grant the browser permission to access the microphone and record audio in
`System Preferences -> Privacy & Security -> Microphone` and
`System Preferences -> Privacy & Security -> Accessibility`. You will be prompted to provide these when you try to connect.

## Front-End
To generate the TypeScript files from the openapi.json file, make sure the python …

```
yarn openapi
```

You may need to run `yarn global add @openapitools/openapi-generator-cli` first. You also need a Java runtime installed on your machine.
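The generator consumes the back-end's openapi.json. For a sense of what that file contains, here is a stdlib-only sketch that lists the operations a spec defines (the inline sample spec is invented for illustration and is not Reflector's actual schema):

```python
import json

# A made-up miniature OpenAPI document standing in for the server's openapi.json.
sample = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/transcripts": {
      "get": {"operationId": "listTranscripts"},
      "post": {"operationId": "createTranscript"}
    },
    "/transcripts/{id}": {
      "get": {"operationId": "getTranscript"}
    }
  }
}
""")

def list_operations(spec: dict) -> list[str]:
    """Flatten a spec's paths into 'METHOD path -> operationId' lines."""
    ops = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            ops.append(f"{method.upper()} {path} -> {op['operationId']}")
    return sorted(ops)

for line in list_operations(sample):
    print(line)
```

Each `operationId` becomes a typed client function in the generated TypeScript, which is what `yarn openapi` produces at scale.
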
## Back-End
Start with `cd server`.
```bash
docker compose up -d redis
```

**Option 2**
Install:
- [Git for Windows](https://gitforwindows.org/)
- [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install)
- Install your preferred Linux distribution via the Microsoft Store (e.g., Ubuntu).
Open your Linux distribution and update the package list:
```bash
sudo apt update
sudo apt install redis-server
```

```bash
docker-compose up server
```

- Ensure the API server is activated in GPT4All
- Run with: `LLM_BACKEND=openai LLM_URL=http://localhost:4891/v1/completions LLM_OPENAI_MODEL="GPT4All Falcon" python -m reflector.app`

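That invocation configures the server through environment variables pointed at an OpenAI-compatible endpoint. A sketch of how such settings might be read and turned into a request payload (the variable names come from the command above; the payload shape is the generic OpenAI completions format, and no network call is made here):

```python
import os

# Defaults mirror the example invocation above; real runs set these in the shell.
os.environ.setdefault("LLM_BACKEND", "openai")
os.environ.setdefault("LLM_URL", "http://localhost:4891/v1/completions")
os.environ.setdefault("LLM_OPENAI_MODEL", "GPT4All Falcon")

def build_completion_request(prompt: str) -> dict:
    """Assemble an OpenAI-style completion payload from environment config.
    Only the configuration flow is shown; nothing is sent over the network."""
    assert os.environ["LLM_BACKEND"] == "openai", "only the openai backend is sketched"
    return {
        "url": os.environ["LLM_URL"],
        "json": {
            "model": os.environ["LLM_OPENAI_MODEL"],
            "prompt": prompt,
            "max_tokens": 64,
        },
    }

print(build_completion_request("Summarize the meeting.")["url"])
```

Because GPT4All exposes the same completions API shape, swapping backends is only a matter of changing these variables.
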
### Using local files
```
poetry run python -m reflector.tools.process path/to/audio.wav
```

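The processing tool takes a WAV file as input. If you need a quick fixture to try it with, a short silent clip can be synthesized with the standard library alone (the duration and sample rate here are arbitrary choices, not requirements of the tool):

```python
import wave

def write_test_wav(path: str, seconds: float = 1.0, rate: int = 16000) -> None:
    """Write a mono 16-bit WAV of silence, usable as a pipeline test input."""
    nframes = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * nframes)

write_test_wav("test_clip.wav", seconds=0.5)
```

Then point the command above at the generated file instead of `path/to/audio.wav`.
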
## AI Models
_(Documentation for this section is pending.)_