Reflector
Reflector is a web application under development by Monadical. It records meetings and uses AI to provide a permanent record with transcripts, translations, and automated summaries.
The project architecture consists of three primary components:
- Front-End: NextJS React project hosted on Vercel, located in www/.
- Back-End: Python server that offers an API and data persistence, found in server/.
- AI Models: Provide services such as speech-to-text transcription, topic generation, automated summaries, and translations.
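For orientation, here is a minimal sketch of the repository layout implied by the list above (only the two directories named there are shown):
www/      # Front-End: NextJS React project (deployed on Vercel)
server/   # Back-End: Python API server and data persistence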
Table of Contents
- Contribution Guidelines
- How to Install BlackHole (Mac Only)
- Front-End
- Back-End
- AI Models
Contribution Guidelines
All new contributions should be made in a separate branch, and all code must be reviewed before it is merged into master.
How to Install BlackHole (Mac Only)
Note: We currently do not have instructions for Windows users.
- Install BlackHole-2ch (2 channels is enough) using one of the two installation options listed.
- Set up an "Aggregate Device" to route web audio and local microphone input.
- Set up a "Multi-Output Device".
- Then go to System Preferences -> Sound and choose the devices you created from the Output and Input tabs.
- If everything is configured properly, the input from your local microphone and the browser-run meeting will be aggregated into one virtual stream to listen to, and the output will be fed back to your specified output devices.
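As an optional check (not part of the original setup steps), you can confirm that the BlackHole and aggregate devices exist by listing the audio devices macOS sees from a terminal:
system_profiler SPAudioDataType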
Permissions:
You may have to grant the browser microphone access to record audio in
System Preferences -> Privacy & Security -> Microphone and
System Preferences -> Privacy & Security -> Accessibility. You will be prompted to grant these permissions when you try to connect.
Front-End
Start with cd www.
Installation
To install the application, run:
yarn install
Run the Application
To run the application in development mode, run:
yarn dev
Then open http://localhost:3000 to view it in the browser.
OpenAPI Code Generation
To generate the TypeScript files from the openapi.json file, make sure the Python server is running, then run:
yarn openapi
You may need to run yarn global add @openapitools/openapi-generator-cli first. You also need a Java runtime installed on your machine.
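If you are unsure whether these prerequisites are in place, the following commands install the generator CLI and confirm a Java runtime is available on your PATH (assuming yarn and java are your usual tooling):
yarn global add @openapitools/openapi-generator-cli
java -version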
Back-End
Start with cd server.
Installation
Run:
poetry install
Then create a .env file with:
TRANSCRIPT_BACKEND=modal
TRANSCRIPT_URL=https://monadical-sas--reflector-transcriber-web.modal.run
TRANSCRIPT_MODAL_API_KEY=<omitted>
LLM_BACKEND=modal
LLM_URL=https://monadical-sas--reflector-llm-web.modal.run
LLM_MODAL_API_KEY=<omitted>
AUTH_BACKEND=fief
AUTH_FIEF_URL=https://auth.reflector.media/reflector-local
AUTH_FIEF_CLIENT_ID=KQzRsNgoY<omitted>
AUTH_FIEF_CLIENT_SECRET=<omitted>
Start the project
Use:
poetry run python3 -m reflector.app
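As an optional sanity check (an assumption on our part, not from the original instructions), you can verify the API is up by fetching the openapi.json file that the front-end code generation step relies on; replace <port> with the port shown in the server's startup logs, and note that the /openapi.json path is assumed here:
curl http://localhost:<port>/openapi.json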
Using docker
Use:
docker-compose up server
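If you have changed server code or dependencies since the image was last built, rebuilding before starting avoids running stale code (a standard docker-compose option, not a project-specific requirement):
docker-compose up --build server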
Using local GPT4All
- Start GPT4All with any model you want
- Ensure the API server is activated in GPT4All
- Run with:
LLM_BACKEND=openai LLM_URL=http://localhost:4891/v1/completions LLM_OPENAI_MODEL="GPT4All Falcon" python -m reflector.app
Using local files
To process a local audio file, run:
poetry run python -m reflector.tools.process path/to/audio.wav
AI Models
(Documentation for this section is pending.)