mirror of https://github.com/Monadical-SAS/reflector.git (synced 2025-12-21 04:39:06 +00:00)

Commit: code cleanup
README.md (125 lines changed)
@@ -1,32 +1,34 @@
# Reflector

This is the code base for the Reflector demo (formerly called agenda-talk-diff) for the leads: Troy Web Consulting panel (A Chat with AWS about AI: Real AI/ML AWS projects and what you should know) on 6/14 at 4:30 PM.

The target deliverable is a local-first live transcription and visualization tool that compares a discussion's target agenda/objectives against the actual discussion, live.

**S3 bucket:**

Everything you need for S3 is already configured in config.ini. Edit it only if you need to change it deliberately.

The S3 bucket name is set in config.ini. All transfers happen between this bucket and the local computer where the script is run. You need AWS_ACCESS_KEY / AWS_SECRET_KEY to authenticate your calls to S3 (set in config.ini).

For the AWS S3 web UI:

1) Log in to the AWS Management Console.
2) Search for S3 in the search bar at the top.
3) Navigate to the list of buckets under the current account, if needed, and choose your bucket [```reflector-bucket```].
4) You should be able to see the items in the bucket. You can upload/download files here directly.

For the CLI, refer to the FILE UTIL section below.

**FILE UTIL MODULE:**

A file_util module has been created to upload/download files to/from the AWS S3 bucket pre-configured in config.ini.
Though not needed for the workflow, if you need to upload/download a file separately on your own, apart from the pipeline workflow in the script, you can do so by:

Upload:
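A minimal sketch, based on the ```upload_files```/```download_files``` helpers in utils/file_utils.py shown later in this commit (the file names here are placeholders):

```
# Hypothetical usage from the reflector root folder; file_utils reads the
# AWS credentials and BUCKET_NAME from config.ini.
from utils.file_utils import download_files, upload_files

upload_files(["summary.txt"])    # push local files to the configured S3 bucket
download_files(["summary.txt"])  # pull files from the bucket to this machine
```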
@@ -39,37 +41,37 @@ Download:
If you want to access the S3 artefacts from another machine, you can either use the Python file_util with the commands mentioned above or simply use the GUI of the AWS Management Console.

To set up:

1) Check the values in the config.ini file. Specifically, add your OPENAI_APIKEY if you plan to use OpenAI API requests.
2) Run ```export KMP_DUPLICATE_LIB_OK=True``` in the terminal. [This is taken care of in code, but is not taking effect; will fix this issue later.]

NOTE: If you don't have portaudio installed already, run ```brew install portaudio```

3) Run the script setup_dependencies.sh.

``` chmod +x setup_dependencies.sh ```

``` sh setup_dependencies.sh <ENV>```

ENV refers to the intended environment for JAX. JAX is available in several variants: [CPU | GPU | Colab TPU | Google Cloud TPU]

```ENV``` is one of:

cpu -> JAX CPU installation

cuda11 -> JAX CUDA 11.x version

cuda12 -> JAX CUDA 12.x version (CoreWeave has the CUDA 12 version; you can check with ```nvidia-smi```)

```sh setup_dependencies.sh cuda12```

4) If not already done, install ffmpeg: ```brew install ffmpeg```

For the NLTK SSL error, check [here](https://stackoverflow.com/questions/38916452/nltk-download-ssl-certificate-verify-failed)

5) Run the Whisper-JAX pipeline. Currently, the repo can take a YouTube video and transcribe/summarize it.
@@ -79,83 +81,92 @@ You can even run it on a local file or a file in your configured S3 bucket.
``` python3 whisjax.py "startup.mp4"```

The script will take care of a few cases like a YouTube file, a local file, a video file, an audio-only file, a file in S3, etc. If the local file is not present, it can automatically take the file from S3.

**OFFLINE WORKFLOW:**

1) Specify the input source file from a local path or a YouTube link, or upload it to S3 if needed, and pass it as input to the script. If the source file is in ```.m4a``` format, it will get converted to ```.mp4``` automatically.
2) Keep the agenda header topics in a local file named ```agenda-headers.txt```. This needs to be present where the script is run. This version of the pipeline compares covered agenda topics using agenda headers in the following format:
   1) ```agenda_topic : <short description>```
3) Check all the values in ```config.ini```. You need to predefine the 2 categories for which you want to scatter plot the topic modelling visualization in the config file. This is the default visualization. But, from the dataframe artefact called ```df_<timestamp>.pkl```, you can load the df and choose different topics to plot (see the sketch below this section). You can filter using certain words to search the transcriptions, and you can see the top influencers and characteristics of each topic we have chosen to plot in the interactive HTML document. I have added a new Jupyter notebook that gives a base template to play around with, named ```Viz_experiments.ipynb```.
4) Run the script. The script automatically transcribes, summarizes, and creates a scatter plot of words & topics in the form of an interactive HTML file plus a sample word cloud, and uploads them to the S3 bucket.
5) Additional artefacts pushed to S3:
   1) HTML visualization file
   2) pandas df in pickle format for others to collaborate and make their own visualizations
   3) Summary, transcript, and transcript-with-timestamps files in text format.

The script also creates 2 types of mappings:
1) Timestamp -> the top 2 matched agenda topics
2) Topic -> all matched timestamps in the transcription

Other visualizations can be planned based on the available artefacts, or new ones can be created. Refer to the section ```Viz-experiments```.
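A minimal sketch of reloading the pickled dataframe for your own plots. The pickle file name is illustrative; the ```text``` and ```ts_to_topic_mapping_top_1``` columns are the ones used by utils/viz_utilities.py in this commit:

```
import pandas as pd

# Load the dataframe artefact produced by a pipeline run (name is illustrative).
df = pd.read_pickle("df_06-14-2023.pkl")

# Each row is a timestamped transcription chunk; filter by a search word
# and see which agenda topic each matching chunk was mapped to.
hits = df[df["text"].str.contains("aws", case=False)]
print(hits[["text", "ts_to_topic_mapping_top_1"]])
```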
**Visualization experiments:**

This is a Jupyter notebook playground with template instructions on handling the metadata and data artefacts generated from the pipeline. Follow the instructions given and tweak your own logic into it, or use it as a playground to experiment with libraries and visualizations on top of the metadata.
**WHISPER-JAX REALTIME TRANSCRIPTION PIPELINE:**

We also support a provision to perform real-time transcription using the whisper-jax pipeline. But there are a few prerequisites before you run it on your local machine. The instructions are for configuring on macOS.

We need a way to route audio from an application opened via the browser, e.g. "Whereby", together with the audio from your local microphone input which you will be using for speaking. We use [Blackhole](https://github.com/ExistentialAudio/BlackHole).

1) Install Blackhole-2ch (2 ch is enough) by one of the two options listed.
2) Set up an [Aggregate device](https://github.com/ExistentialAudio/BlackHole/wiki/Aggregate-Device) to route web audio and local microphone input.

   Be sure to mirror the settings given 
3) Set up a [Multi-Output device](https://github.com/ExistentialAudio/BlackHole/wiki/Multi-Output-Device)

   Refer 
4) Set the aggregate input device name created in step 2 in config.ini as ```BLACKHOLE_INPUT_AGGREGATOR_DEVICE_NAME```. You can list the available devices with the sketch below this list.
5) Then go to ```System Preferences -> Sound``` and choose the devices created from the Output and Input tabs.
6) The input from your local microphone and the browser-run meeting should be aggregated into one virtual stream to listen to, and the output should be fed back to your specified output devices if everything is configured properly. Check this before trying out the trial.
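A minimal sketch for finding the device names and indices referenced by config.ini (```BLACKHOLE_INPUT_AGGREGATOR_DEVICE_NAME```, ```AV_FOUNDATION_DEVICE_ID```); it assumes pyaudio, which the realtime script already uses:

```
import pyaudio

# Print every CoreAudio device; the aggregate device created in step 2
# should appear here under the name you gave it.
p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    print(i, info["name"], "input channels:", info["maxInputChannels"])
p.terminate()
```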
**Permissions:**

You may have to grant "Terminal"/code editors [PyCharm/VSCode, etc.] microphone access to record audio in
```System Preferences -> Privacy & Security -> Microphone```,
```System Preferences -> Privacy & Security -> Accessibility```,
```System Preferences -> Privacy & Security -> Input Monitoring```.

From the reflector root folder, run ```python3 whisjax_realtime.py```

The transcription text should be written to ```real_time_transcription_<timestamp>.txt```.
NEXT STEPS:

1) Create a RunPod setup for this feature (mentioned in 1 & 2) and test it end-to-end
client.py (30 lines changed)
@@ -1,33 +1,33 @@
import argparse
import asyncio
import signal

from aiortc.contrib.signaling import (add_signaling_arguments,
                                      create_signaling)

from stream_client import StreamClient
from utils.log_utils import logger


async def main():
    parser = argparse.ArgumentParser(description="Data channels ping/pong")

    parser.add_argument(
        "--url", type=str, nargs="?", default="http://127.0.0.1:1250/offer"
    )

    parser.add_argument(
        "--ping-pong",
        help="Benchmark data channel with ping pong",
        type=eval,
        choices=[True, False],
        default="False",
    )

    parser.add_argument(
        "--play-from",
        type=str,
        default="",
    )
    add_signaling_arguments(parser)
@@ -54,14 +54,14 @@ async def main():
    loop = asyncio.get_event_loop()
    for s in signals:
        loop.add_signal_handler(
            s, lambda s=s: asyncio.create_task(shutdown(s, loop)))

    # Init client
    sc = StreamClient(
        signaling=signaling,
        url=args.url,
        play_from=args.play_from,
        ping_pong=args.ping_pong
    )
    await sc.start()
    print("Stream client started")
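For reference, a typical invocation against the local server would be ```python3 client.py --ping-pong True```; the ```--url``` default already points at the local ```/offer``` endpoint defined above.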
config.ini (30 lines changed)
@@ -1,22 +1,22 @@
[DEFAULT]
# Set exception rule for OpenMP error to allow duplicate lib initialization
KMP_DUPLICATE_LIB_OK = TRUE
# Export OpenAI API Key
OPENAI_APIKEY =
# Export Whisper Model Size
WHISPER_MODEL_SIZE = tiny
WHISPER_REAL_TIME_MODEL_SIZE = tiny
# AWS config
AWS_ACCESS_KEY = ***REMOVED***
AWS_SECRET_KEY = ***REMOVED***
BUCKET_NAME = 'reflector-bucket'
# Summarizer config
SUMMARY_MODEL = facebook/bart-large-cnn
INPUT_ENCODING_MAX_LENGTH = 1024
MAX_LENGTH = 2048
BEAM_SIZE = 6
MAX_CHUNK_LENGTH = 1024
SUMMARIZE_USING_CHUNKS = YES
# Audio device
BLACKHOLE_INPUT_AGGREGATOR_DEVICE_NAME = aggregator
AV_FOUNDATION_DEVICE_ID = 2
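These values are read with Python's configparser throughout this commit; a minimal sketch of the access pattern:

```
import configparser

# Same pattern as whisjax.py and utils/run_utils.py in this commit.
config = configparser.ConfigParser()
config.read('config.ini')

print(config['DEFAULT']["WHISPER_MODEL_SIZE"])  # -> tiny
print(config["DEFAULT"]["BUCKET_NAME"])         # -> 'reflector-bucket'
```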
@@ -26,7 +26,7 @@ networkx==3.1
numba==0.57.0
numpy==1.24.3
openai==0.27.7
openai-whisper@ git+https://github.com/openai/whisper.git@248b6cb124225dd263bb9bd32d060b6517e067f8
Pillow==9.5.0
proglog==0.1.10
pytube==15.0.0
@@ -56,5 +56,5 @@ cached_property==1.5.2
stamina==23.1.0
httpx==0.24.1
sortedcontainers==2.4.0
openai-whisper@ git+https://github.com/openai/whisper.git@248b6cb124225dd263bb9bd32d060b6517e067f8
https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz
@@ -15,7 +15,7 @@ from av import AudioFifo
from loguru import logger
from whisper_jax import FlaxWhisperPipline

from utils.run_utils import run_in_executor

transcription = ""
@@ -44,10 +44,10 @@ def channel_send(channel, message):
    if channel:
        channel.send(message)
        print(
            "Bytes handled :",
            total_bytes_handled,
            " Time : ",
            datetime.datetime.now() - start_time,
        )
@@ -86,12 +86,12 @@ class AudioStreamTrack(MediaStreamTrack):
        audio_buffer.write(frame)
        if local_frames := audio_buffer.read_many(256 * 960, partial=False):
            whisper_result = run_in_executor(
                get_transcription, local_frames, executor=executor
            )
            whisper_result.add_done_callback(
                lambda f: channel_send(data_channel, str(whisper_result.result()))
                if (f.result())
                else None
            )
        return frame
@@ -140,10 +140,10 @@ async def offer(request):
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return web.Response(
        content_type="application/json",
        text=json.dumps(
            { "sdp": pc.localDescription.sdp, "type": pc.localDescription.type }
        ),
    )
@@ -1,5 +1,5 @@
import asyncio
from utils.run_utils import config
import datetime
import io
import json
@@ -8,9 +8,9 @@ import threading
import uuid
import wave
from concurrent.futures import ThreadPoolExecutor

import jax.numpy as jnp
from aiohttp import web
from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.media import (MediaRelay)
from av import AudioFifo
@@ -18,13 +18,10 @@ from sortedcontainers import SortedDict
from whisper_jax import FlaxWhisperPipline

from utils.log_utils import logger
from utils.run_utils import Mutex

ROOT = os.path.dirname(__file__)

WHISPER_MODEL_SIZE = config['DEFAULT']["WHISPER_MODEL_SIZE"]
pcs = set()
relay = MediaRelay()
@@ -91,10 +88,10 @@ def get_transcription():
        wf.close()

        whisper_result = pipeline(out_file.getvalue())
        item = { 'text': whisper_result["text"],
                 'start_time': str(frames[0].time),
                 'time': str(datetime.datetime.now())
                 }
        sorted_message_queue[frames[0].time] = str(item)
        start_messaging_thread()
    except Exception as e:
@@ -177,10 +174,10 @@ async def offer(request):
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return web.Response(
        content_type="application/json",
        text=json.dumps(
            { "sdp": pc.localDescription.sdp, "type": pc.localDescription.type }
        ),
    )
@@ -196,5 +193,5 @@ if __name__ == "__main__":
    start_transcription_thread(6)
    app.router.add_post("/offer", offer)
    web.run_app(
        app, access_log=None, host="127.0.0.1", port=1250
    )
@@ -1,6 +1,6 @@
import ast
import asyncio
from utils.run_utils import config
import time
import uuid
@@ -12,12 +12,10 @@ from aiortc import (RTCPeerConnection, RTCSessionDescription)
from aiortc.contrib.media import (MediaPlayer, MediaRelay)

from utils.log_utils import logger
from utils.run_utils import Mutex

file_lock = Mutex(open("test_sm_6.txt", "a"))


class StreamClient:
@@ -42,7 +40,7 @@ class StreamClient:
        self.time_start = None
        self.queue = asyncio.Queue()
        self.player = MediaPlayer(':' + str(config['DEFAULT']["AV_FOUNDATION_DEVICE_ID"]),
                                  format='avfoundation', options={ 'channels': '2' })

    def stop(self):
        self.loop.run_until_complete(self.signaling.close())
@@ -127,8 +125,8 @@ class StreamClient:
        await pc.setLocalDescription(await pc.createOffer())

        sdp = {
            "sdp": pc.localDescription.sdp,
            "type": pc.localDescription.type
        }

        @stamina.retry(on=httpx.HTTPError, attempts=5)
@@ -1,14 +1,12 @@
import sys

import boto3
import botocore

from run_utils import config
from log_utils import logger

BUCKET_NAME = config["DEFAULT"]["BUCKET_NAME"]

s3 = boto3.client('s3',
                  aws_access_key_id=config["DEFAULT"]["AWS_ACCESS_KEY"],
@@ -18,8 +16,8 @@ s3 = boto3.client('s3',
def upload_files(files_to_upload):
    """
    Upload a list of files to the configured S3 bucket
    :param files_to_upload: List of files to upload
    :return: None
    """
    for KEY in files_to_upload:
        logger.info("Uploading file " + KEY)
@@ -32,8 +30,8 @@ def upload_files(files_to_upload):
def download_files(files_to_download):
    """
    Download a list of files from the configured S3 bucket
    :param files_to_download: List of files to download
    :return: None
    """
    for KEY in files_to_download:
        logger.info("Downloading file " + KEY)
@@ -47,8 +45,6 @@ def download_files(files_to_download):
if __name__ == "__main__":
    if sys.argv[1] == "download":
        download_files([sys.argv[2]])
    elif sys.argv[1] == "upload":
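This ```__main__``` block gives the module a small CLI, so from the repo root something like ```python3 utils/file_utils.py download summary.txt``` (the file name is a placeholder) pulls that key from the configured bucket; the upload branch mirrors it.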
@@ -6,6 +6,10 @@ class SingletonLogger:
    @staticmethod
    def get_logger():
        """
        Create or return the singleton instance for the SingletonLogger class
        :return: SingletonLogger instance
        """
        if not SingletonLogger.__instance:
            SingletonLogger.__instance = logger
        return SingletonLogger.__instance
utils/run_utils.py (new file, 66 lines)
@@ -0,0 +1,66 @@
import asyncio
import configparser
import contextlib
from functools import partial
from threading import Lock
from typing import ContextManager, Generic, TypeVar


class ConfigParser:
    __config = configparser.ConfigParser()

    def __init__(self, config_file='../config.ini'):
        self.__config.read(config_file)

    @staticmethod
    def get_config():
        return ConfigParser.__config


config = ConfigParser.get_config()


def run_in_executor(func, *args, executor=None, **kwargs):
    """
    Run the function in an executor, unblocking the main loop
    :param func: Function to be run in executor
    :param args: function parameters
    :param executor: executor instance [Thread | Process]
    :param kwargs: Additional parameters
    :return: Future of function result upon completion
    """
    callback = partial(func, *args, **kwargs)
    loop = asyncio.get_event_loop()
    return loop.run_in_executor(executor, callback)


# Generic type template
T = TypeVar("T")


class Mutex(Generic[T]):
    """
    Mutex class to implement lock/release of a shared
    protected variable
    """

    def __init__(self, value: T):
        """
        Create an instance of Mutex wrapper for the given resource
        :param value: Shared resources to be thread protected
        """
        self.__value = value
        self.__lock = Lock()

    @contextlib.contextmanager
    def lock(self) -> ContextManager[T]:
        """
        Lock the resource with a mutex to be used within a context block
        The lock is automatically released on context exit
        :return: Shared resource
        """
        self.__lock.acquire()
        try:
            yield self.__value
        finally:
            self.__lock.release()
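A minimal usage sketch for the two helpers, mirroring how this commit uses them (stream_client.py wraps a file handle in Mutex; the realtime server passes get_transcription to run_in_executor):

```
import asyncio

from utils.run_utils import Mutex, run_in_executor

shared = Mutex([])              # thread-protect a shared list

with shared.lock() as items:    # lock held only inside the context block
    items.append("chunk")


async def main():
    # Run a blocking callable without blocking the event loop.
    print(await run_in_executor(sum, [1, 2, 3]))


asyncio.run(main())
```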
utils/server_utils.py (file deleted)
@@ -1,28 +0,0 @@
import asyncio
import contextlib
from functools import partial
from threading import Lock
from typing import ContextManager, Generic, TypeVar


def run_in_executor(func, *args, executor=None, **kwargs):
    callback = partial(func, *args, **kwargs)
    loop = asyncio.get_event_loop()
    return asyncio.get_event_loop().run_in_executor(executor, callback)


T = TypeVar("T")


class Mutex(Generic[T]):
    def __init__(self, value: T):
        self.__value = value
        self.__lock = Lock()

    @contextlib.contextmanager
    def lock(self) -> ContextManager[T]:
        self.__lock.acquire()
        try:
            yield self.__value
        finally:
            self.__lock.release()
@@ -6,14 +6,12 @@ from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

from transformers import BartForConditionalGeneration, BartTokenizer
from run_utils import config
from log_utils import logger

nltk.download('punkt', quiet=True)


def preprocess_sentence(sentence):
@@ -74,7 +72,7 @@ def remove_whisper_repetitive_hallucination(nonduplicate_sentences):
    for sent in nonduplicate_sentences:
        temp_result = ""
        seen = { }
        words = nltk.word_tokenize(sent)
        n_gram_filter = 3
        for i in range(len(words)):
@@ -1,6 +1,5 @@
import ast
import collections
import os
import pickle
from pathlib import Path
@@ -10,10 +9,7 @@ import pandas as pd
import scattertext as st
import spacy
from nltk.corpus import stopwords
from wordcloud import STOPWORDS, WordCloud

en = spacy.load('en_core_web_md')
spacy_stopwords = en.Defaults.stop_words
@@ -92,11 +88,11 @@ def create_talk_diff_scatter_viz(timestamp, real_time=False):
    # create df for processing
    df = pd.DataFrame.from_dict(res["chunks"])

    covered_items = { }
    # ts: timestamp
    # Map each timestamped chunk with top1 and top2 matched agenda
    ts_to_topic_mapping_top_1 = { }
    ts_to_topic_mapping_top_2 = { }

    # Also create a mapping of the different timestamps in which each topic was covered
    topic_to_ts_mapping_top_1 = collections.defaultdict(list)
@@ -189,16 +185,16 @@ def create_talk_diff_scatter_viz(timestamp, real_time=False):
    # Scatter plot of topics
    df = df.assign(parse=lambda df: df.text.apply(st.whitespace_nlp_with_sentences))
    corpus = st.CorpusFromParsedDocuments(
        df, category_col='ts_to_topic_mapping_top_1', parsed_col='parse'
    ).build().get_unigram_corpus().compact(st.AssociationCompactor(2000))
    html = st.produce_scattertext_explorer(
        corpus,
        category=cat_1,
        category_name=cat_1_name,
        not_category_name=cat_2_name,
        minimum_term_frequency=0, pmi_threshold_coefficient=0,
        width_in_pixels=1000,
        transform=st.Scalers.dense_rank
    )
    if real_time:
        open('./artefacts/real_time_scatter_' + timestamp.strftime("%m-%d-%Y_%H:%M:%S") + '.html', 'w').write(html)
whisjax.py (29 lines changed)
@@ -20,18 +20,15 @@ import nltk
import yt_dlp as youtube_dl
from whisper_jax import FlaxWhisperPipline

from utils.file_utils import download_files, upload_files
from utils.log_utils import logger
from utils.run_utils import config
from utils.text_utilities import post_process_transcription, summarize
from utils.viz_utilities import create_talk_diff_scatter_viz, create_wordcloud

nltk.download('punkt', quiet=True)
nltk.download('stopwords', quiet=True)

# Configurations can be found in config.ini. Set them properly before executing
WHISPER_MODEL_SIZE = config['DEFAULT']["WHISPER_MODEL_SIZE"]
NOW = datetime.now()
@@ -42,8 +39,8 @@ def init_argparse() -> argparse.ArgumentParser:
    :return: parser object
    """
    parser = argparse.ArgumentParser(
        usage="%(prog)s [OPTIONS] <LOCATION> <OUTPUT>",
        description="Creates a transcript of a video or audio file, then summarizes it using ChatGPT."
    )

    parser.add_argument("-l", "--language", help="Language that the summary should be written in", type=str,
@@ -74,13 +71,13 @@ def main():
    # Create options for the download
    ydl_opts = {
        'format': 'bestaudio/best',
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
        'outtmpl': 'audio',  # Specify the output file path and name
    }

    # Download the audio
@@ -13,11 +13,10 @@ from whisper_jax import FlaxWhisperPipline
from utils.file_utils import upload_files
from utils.log_utils import logger
from utils.run_utils import config
from utils.text_utilities import post_process_transcription, summarize
from utils.viz_utilities import create_talk_diff_scatter_viz, create_wordcloud

WHISPER_MODEL_SIZE = config['DEFAULT']["WHISPER_MODEL_SIZE"]
@@ -37,12 +36,12 @@ def main():
            AUDIO_DEVICE_ID = i
    audio_devices = p.get_device_info_by_index(AUDIO_DEVICE_ID)
    stream = p.open(
        format=FORMAT,
        channels=CHANNELS,
        rate=RATE,
        input=True,
        frames_per_buffer=FRAMES_PER_BUFFER,
        input_device_index=int(audio_devices['index'])
    )

    pipeline = FlaxWhisperPipline("openai/whisper-" + config["DEFAULT"]["WHISPER_REAL_TIME_MODEL_SIZE"],
@@ -60,7 +59,7 @@ def main():
        global proceed
        proceed = False

    transcript_with_timestamp = { "text": "", "chunks": [] }
    last_transcribed_time = 0.0

    listener = keyboard.Listener(on_press=on_press)
@@ -90,10 +89,10 @@ def main():
            if end is None:
                end = start + 15.0
            duration = end - start
            item = { 'timestamp': (last_transcribed_time, last_transcribed_time + duration),
                     'text': whisper_result['text'],
                     'stats': (str(end_time - start_time), str(duration))
                     }
            last_transcribed_time = last_transcribed_time + duration
            transcript_with_timestamp["chunks"].append(item)
            transcription += whisper_result['text']