Mathieu Virbel 03561453c5 feat: Monadical SSO as replacement of Fief (#393)
* sso: first pass for integrating SSO

Still has issues with refreshing.
Maybe customize the login page, or avoid it completely.
Make 100% sure we understand how the session server/client work.
Need to test with different configuration options (feature flags and
requireLogin).

* sso: correctly handle refresh token, with pro-active refresh

Going with interceptors means making extra calls to Reflector on 401.
We would then need to circle back to the NextJS backend to update the JWT
and session, then retry the failed request.

I preferred to go pro-active, and ensure the session AND JWT are always
up to date.

A minute before the expiration, we'll try to refresh the token. useEffect()
in NextJS cannot be asynchronous, so we cannot await the refresh.

Every 20s during the minute before the expiration (so 3 attempts max),
we'll try to renew. When the accessToken is renewed, the session is
updated and dispatched up to the client, which updates useApi().

Therefore, no component is left with an incorrect token.
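The timing logic above can be sketched as a small helper. This is illustrative only, under the assumption of the 60-second window and 20-second timer described above; names like `shouldAttemptRefresh` are hypothetical, not the actual Reflector code:

```typescript
// Illustrative sketch of the pro-active refresh schedule described above.
// Names and structure are hypothetical, not the actual Reflector code.

const REFRESH_WINDOW_MS = 60_000; // start trying 1 minute before expiry
const RETRY_INTERVAL_MS = 20_000; // the timer fires every 20 seconds

// True when a refresh attempt should be made at `nowMs`, given the
// access token's expiry timestamp in milliseconds.
function shouldAttemptRefresh(expiresAtMs: number, nowMs: number): boolean {
  const remaining = expiresAtMs - nowMs;
  return remaining > 0 && remaining <= REFRESH_WINDOW_MS;
}

// Maximum number of attempts the 20s timer can make inside the window:
// 60s / 20s = 3, matching the "3 attempts max" above.
const MAX_ATTEMPTS = Math.floor(REFRESH_WINDOW_MS / RETRY_INTERVAL_MS);
```

In the app, a `setInterval` firing every 20s would call such a predicate and trigger the session update when it returns true; the `useEffect()` itself stays synchronous and only schedules the timer.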

* fixes: issue with missing key on react-select-search because the default value is undefined
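A minimal sketch of that kind of fix, with hypothetical names (the real component code differs): normalize the possibly-undefined default so the select never receives `undefined` as a key/value.

```typescript
// Hypothetical illustration of the fix above, not the actual Reflector code:
// react-select-search can warn about a missing key when its value is
// undefined, so we normalize to a defined string before rendering.

interface SelectOption {
  name: string;
  value: string;
}

// Fall back to the first option's value (or "") when nothing is selected.
function normalizeSelectValue(
  value: string | undefined,
  options: SelectOption[],
): string {
  if (value !== undefined) return value;
  return options.length > 0 ? options[0].value : "";
}
```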

* sso: fixes login/logout buttons, and avoids showing the Authentik login page on click

* sso: ensure /transcripts/new is not behind a protected page, and feature flag pages are honored

* sso: fixes user sub->id

* fixes: remove old unused layout

* fixes: set default NEXT_PUBLIC_SITE_URL as localhost

* fixes: removing fief again due to merge with main

* sso: ensure session is always ready before doing any action

* sso: add migration from fief to jwt in server, only from transcripts list

* fixes: user tests

* fixes: compilation issues
2024-09-03 19:27:15 +02:00

Reflector

Reflector Audio Management and Analysis is a web application under development by Monadical. It records meetings and uses AI to provide a permanent record with transcripts, translations, and automated summaries.

The project architecture consists of three primary components:

  • Front-End: NextJS React project hosted on Vercel, located in www/.
  • Back-End: Python server that offers an API and data persistence, found in server/.
  • GPU implementation: provides services such as speech-to-text transcription, topic generation, automated summaries, and translations. The most reliable deployment option is Modal.

It also uses https://github.com/fief-dev for authentication, and Vercel for deployment and configuration of the front-end.

Contribution Guidelines

All new contributions should be made in a separate branch. Before any code is merged into main, it requires a code review.

Usage instructions

To record both your voice and the meeting you're taking part in, you need:

  • For an in-person meeting, make sure your microphone is in range of all participants.
  • If using several microphones, make sure to merge the audio feeds into one with an external tool.
  • For an online meeting, if you do not use headphones, your microphone should be able to pick up both your voice and the audio feed of the meeting.
  • If you want to use headphones, you need to merge the audio feeds with an external tool.

Permissions:

You may have to grant the browser microphone access in System Preferences -> Privacy & Security -> Microphone (and possibly System Preferences -> Privacy & Security -> Accessibility) to record audio. You will be prompted for these permissions when you try to connect.

How to Install Blackhole (Mac Only)

This is an external tool for merging the audio feeds as explained in the previous section of this document. Note: We currently do not have instructions for Windows users.

  • Install BlackHole-2ch (2 channels is enough) using either of the two installation options offered.
  • Set up an "Aggregate Device" to route web audio and local microphone input.
  • Set up a Multi-Output Device.
  • Then go to System Preferences -> Sound and choose the devices you created from the Output and Input tabs.
  • If everything is configured properly, your local microphone input and the browser-run meeting audio are aggregated into one virtual stream you can listen to, and the output is fed back to your chosen output devices.

Front-End

Start with cd www.

Installation

To install the application, run:

yarn install
cp .env_template .env
cp config-template.ts config.ts

Then, fill in the environment variables in .env and the configuration in config.ts as needed. If you are unsure how to proceed, ask in Zulip.

Run the Application

To run the application in development mode, run:

yarn dev

Then (after completing server setup and starting it) open http://localhost:3000 to view it in the browser.

OpenAPI Code Generation

To generate the TypeScript files from the openapi.json file, make sure the python server is running, then run:

yarn openapi

Back-End

Start with cd server.

Quick-run instructions (only if you installed everything already)

redis-server # Mac
docker compose up -d redis # Windows
poetry run celery -A reflector.worker.app worker --loglevel=info
poetry run python -m reflector.app

Installation

Download Python 3.11 from the official website and ensure you have version 3.11 by running python --version.

Run:

python --version # It should say 3.11
pip install poetry
poetry install --no-root
cp .env_template .env

Then fill .env with the omitted values (ask in Zulip). At the time of writing, the only omitted value is AUTH_FIEF_CLIENT_SECRET.

Start the API/Backend

Start the background worker:

poetry run celery -A reflector.worker.app worker --loglevel=info

Redis (Mac)

brew install redis
redis-server

Redis (Windows)

Option 1

docker compose up -d redis

Option 2

Install:

Open your Linux distribution and update the package list:

sudo apt update
sudo apt install redis-server
redis-server

Update the database schema (run on first install, and after each pull containing a migration)

poetry run alembic upgrade head

Main Server

poetry run python -m reflector.app

Crontab (optional)

For crontab (only a healthcheck for now), start celery beat (you don't need it in your local dev environment):

poetry run celery -A reflector.worker.app beat

Using docker

Use:

docker-compose up server

Using local GPT4All

  • Start GPT4All with any model you want
  • Ensure the API server is activated in GPT4all
  • Run with: LLM_BACKEND=openai LLM_URL=http://localhost:4891/v1/completions LLM_OPENAI_MODEL="GPT4All Falcon" python -m reflector.app
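Setting LLM_BACKEND=openai works because GPT4All's API server speaks the OpenAI completions protocol. As a rough sketch only (field values here are assumptions; the actual request construction lives in the Python backend), the payload sent to /v1/completions looks like:

```typescript
// Illustrative OpenAI-style completion request for the local GPT4All server.
// max_tokens and temperature are assumed values, not Reflector's settings.

interface CompletionRequest {
  model: string;
  prompt: string;
  max_tokens: number;
  temperature: number;
}

function buildCompletionRequest(prompt: string): CompletionRequest {
  return {
    model: "GPT4All Falcon", // matches LLM_OPENAI_MODEL above
    prompt,
    max_tokens: 256, // assumed limit
    temperature: 0.7, // assumed sampling temperature
  };
}
```

The backend would POST this as JSON to the URL given in LLM_URL.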

Using local files

poetry run python -m reflector.tools.process path/to/audio.wav

AI Models

Modal

To deploy LLM changes to Modal, you need to:

  • have a Modal account
  • set up the required secret in your Modal account (REFLECTOR_GPU_APIKEY)
  • install the Modal CLI
  • connect the Modal CLI to your account if not done previously
  • run modal run path/to/required/llm

(Documentation for this section is pending.)

Description
100% local ML models for meeting transcription and analysis