docs polishing

Igor Loskutov
2025-12-10 16:01:00 -05:00
parent 406a7529ee
commit 1d584f4b53
4 changed files with 70 additions and 21 deletions


@@ -95,9 +95,7 @@ DAILYCO_STORAGE_AWS_BUCKET_NAME=<your-bucket-from-daily-setup>
DAILYCO_STORAGE_AWS_REGION=us-east-1
DAILYCO_STORAGE_AWS_ROLE_ARN=<your-role-arn-from-daily-setup>
# Transcript storage (required for Daily.co multitrack processing)
TRANSCRIPT_STORAGE_BACKEND=local
# Or use S3 for production:
# Transcript storage (should already be configured from main setup)
# TRANSCRIPT_STORAGE_BACKEND=aws
# TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=<your-key>
# TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=<your-secret>
@@ -123,12 +121,13 @@ sudo docker compose -f docker-compose.prod.yml up -d server worker
1. Visit your Reflector frontend: `https://app.example.com`
2. Go to **Rooms**
3. Create or join a room
4. Allow camera/microphone access
5. You should see the Daily.co video interface
6. Speak for 10-20 seconds
7. Leave the meeting
8. Recording should appear in **Transcripts** within 5 minutes
3. Click **Create Room**
4. Select **Daily** as the platform
5. Allow camera/microphone access
6. You should see the Daily.co video interface
7. Speak for 10-20 seconds
8. Leave the meeting
9. Recording should appear in **Transcripts** within 5 minutes (if webhooks aren't set up yet, see [Webhook Configuration](#webhook-configuration-optional) below)
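If the recording does not show up, check the worker logs first (a quick look, assuming the same compose file used above):

```bash
# Tail the processing worker; recording and transcription errors surface here
sudo docker compose -f docker-compose.prod.yml logs --tail 100 worker
```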
---


@@ -32,14 +32,18 @@ Before starting, you need:
- Modal.com account, OR
- GPU server with NVIDIA GPU (8GB+ VRAM)
- **HuggingFace account** - Free at https://huggingface.co
- Accept both Pyannote licenses (required for speaker diarization):
- https://huggingface.co/pyannote/speaker-diarization-3.1
- https://huggingface.co/pyannote/segmentation-3.0
- **LLM API** - For summaries and topic detection. Choose one:
- OpenAI API key at https://platform.openai.com/account/api-keys, OR
- Any OpenAI-compatible endpoint (vLLM, LiteLLM, Ollama, etc.); a quick key check is sketched after this list
- **AWS S3 bucket** - For storing audio files and transcripts (see [S3 Setup](#create-s3-bucket-for-transcript-storage) below)
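To confirm the LLM key works before filling in the environment file, a minimal check against an OpenAI-compatible endpoint (the URL is OpenAI's; point it at your vLLM/LiteLLM/Ollama base URL instead, and substitute your real key for the placeholder):

```bash
# Lists available models; any OpenAI-compatible server exposes /v1/models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $LLM_API_KEY"
```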
### Optional (for live meeting rooms)
- [ ] **Daily.co account** - Free tier at https://dashboard.daily.co
- [ ] **AWS S3 bucket** - For Daily.co recording storage
- [ ] **AWS S3 bucket + IAM Role** - For Daily.co recording storage (separate from transcript storage)
---
@@ -142,6 +146,34 @@ cd reflector
---
## Create S3 Bucket for Transcript Storage
Reflector requires AWS S3 to store audio files during processing.
### Create Bucket
```bash
# Choose a unique bucket name
BUCKET_NAME="reflector-transcripts-yourname"
AWS_REGION="us-east-1"
# Create bucket
aws s3 mb s3://$BUCKET_NAME --region $AWS_REGION
```
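To confirm the bucket exists and is reachable with your credentials (optional; reuses the `BUCKET_NAME` variable from above):

```bash
# Exits with status 0 if the bucket exists and your credentials can reach it
aws s3api head-bucket --bucket "$BUCKET_NAME"
```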
### Create IAM User
Create an IAM user with S3 access for Reflector:
1. Go to AWS IAM Console → Users → Create User
2. Name: `reflector-transcripts`
3. Attach policy: `AmazonS3FullAccess` (or create a custom policy scoped to your bucket; a sketch follows these steps)
4. Create access key (Access key ID + Secret access key)
Save these credentials - you'll need them in the next step.
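If you prefer a least-privilege policy over `AmazonS3FullAccess`, here is a sketch of a bucket-scoped inline policy (the policy name is a placeholder; it reuses the `BUCKET_NAME` variable and the `reflector-transcripts` user from above, and must be run with admin credentials):

```bash
# Write a policy limited to the transcript bucket
cat > reflector-s3-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::$BUCKET_NAME",
        "arn:aws:s3:::$BUCKET_NAME/*"
      ]
    }
  ]
}
EOF

# Attach it as an inline policy on the IAM user created above
aws iam put-user-policy \
  --user-name reflector-transcripts \
  --policy-name reflector-s3-access \
  --policy-document file://reflector-s3-policy.json
```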
---
## Configure Environment
**Location: YOUR SERVER (via SSH, in the `reflector` directory)**
@@ -193,12 +225,17 @@ DIARIZATION_MODAL_API_KEY=<from-deploy-all.sh-output>
# DIARIZATION_URL=https://gpu.example.com
# DIARIZATION_MODAL_API_KEY=<your-generated-api-key>
# Storage - where to store audio files and transcripts
TRANSCRIPT_STORAGE_BACKEND=local
# Storage - where to store audio files and transcripts (requires AWS S3)
TRANSCRIPT_STORAGE_BACKEND=aws
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-media
TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
# LLM - for generating titles, summaries, and topics
LLM_API_KEY=sk-your-openai-api-key
LLM_MODEL=gpt-4o-mini
# LLM_URL=https://api.openai.com/v1 # Optional: custom endpoint (vLLM, LiteLLM, Ollama, etc.)
# Auth - disable for initial setup (see the dedicated authentication step)
AUTH_BACKEND=none
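Before starting the services, the storage credentials can be smoke-tested with the aws CLI (a sketch; it assumes the CLI is installed and that you substitute the placeholder key, bucket, and region values above with your real ones):

```bash
# Export the same credentials Reflector will use
export AWS_ACCESS_KEY_ID=your-aws-access-key
export AWS_SECRET_ACCESS_KEY=your-aws-secret-key

# Upload and delete a tiny test object in the transcript bucket
echo ok | aws s3 cp - s3://reflector-media/healthcheck.txt --region us-east-1
aws s3 rm s3://reflector-media/healthcheck.txt --region us-east-1
```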


@@ -240,3 +240,19 @@ echo ""
echo "Note: Public key saved to server/reflector/auth/jwt/keys/authentik_public.pem"
echo " and mounted via docker-compose volume."
echo ""
echo "==========================================="
echo "Configuration values (for reference):"
echo "==========================================="
echo ""
echo "# server/.env"
echo "AUTH_BACKEND=jwt"
echo "AUTH_JWT_AUDIENCE=$CLIENT_ID"
echo "AUTH_JWT_PUBLIC_KEY=authentik_public.pem"
echo ""
echo "# www/.env"
echo "FEATURE_REQUIRE_LOGIN=true"
echo "AUTHENTIK_ISSUER=$AUTHENTIK_URL/application/o/reflector"
echo "AUTHENTIK_REFRESH_TOKEN_URL=$AUTHENTIK_URL/application/o/token/"
echo "AUTHENTIK_CLIENT_ID=$CLIENT_ID"
echo "AUTHENTIK_CLIENT_SECRET=$CLIENT_SECRET"
echo ""


@@ -95,16 +95,13 @@ DIARIZATION_URL=https://monadical-sas--reflector-diarizer-web.modal.run
## Transcript Storage
##
## Where to store audio files and transcripts
## Options: local, aws
## AWS S3 is required for production
## =======================================================
TRANSCRIPT_STORAGE_BACKEND=local
## For AWS S3 storage (optional):
#TRANSCRIPT_STORAGE_BACKEND=aws
#TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
#TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
#TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-media
#TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
TRANSCRIPT_STORAGE_BACKEND=aws
TRANSCRIPT_STORAGE_AWS_ACCESS_KEY_ID=your-aws-access-key
TRANSCRIPT_STORAGE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
TRANSCRIPT_STORAGE_AWS_BUCKET_NAME=reflector-media
TRANSCRIPT_STORAGE_AWS_REGION=us-east-1
## =======================================================