If the LLM is stuck warming up, or an exception happens in the pipeline, the processor responsible for the exception fails and there is no fallback: audio continues to arrive, but no processing happens. While this should be handled properly, especially after a disconnection, for now we should ignore LLM warmup issues and just keep going.
Closes #140
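A minimal sketch of that behavior, assuming a hypothetical `llm.warmup()` helper and structlog-style logging (the names here are illustrative, not the actual pipeline API):

```python
import structlog

logger = structlog.get_logger()

async def start_pipeline(llm, processors):
    # Hypothetical warmup step: if the LLM is slow to warm up or raises,
    # log and move on instead of letting the whole pipeline die.
    try:
        await llm.warmup()
    except Exception as exc:
        logger.warning("llm warmup failed, continuing anyway", error=str(exc))
    # Audio keeps flowing into the processors even without a warm LLM.
    for processor in processors:
        await processor.start()
```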
* serverless: implement banana backend for both audio and LLM
Related to monadical-sas/reflector-gpu-banana project
* serverless: got LLM working on banana!
* tests: fixes
* serverless: fix dockerfile to use fastapi server + httpx
Each processor is standalone, with defined INPUT/OUTPUT types (see the sketch after this block).
Processors can be threaded or not (this can be made extensible later).
TODO: a Pipeline that automatically connects all processors, flushes, and cleans up data.
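A rough sketch of what a standalone processor could look like under this design; the class and method names are assumptions, not the exact API:

```python
from typing import Any, Callable, Optional

class Processor:
    # Declared INPUT/OUTPUT types let a future Pipeline check that
    # consecutive processors are compatible before connecting them.
    INPUT: type = bytes
    OUTPUT: type = str

    def __init__(self, on_output: Optional[Callable[[Any], None]] = None):
        self.on_output = on_output

    def push(self, data: Any) -> None:
        # Process one unit of input and forward the result downstream.
        result = self.process(data)
        if result is not None and self.on_output is not None:
            self.on_output(result)

    def process(self, data: Any) -> Any:
        raise NotImplementedError

    def flush(self) -> None:
        # Emit any buffered data; the future Pipeline would call this
        # on every processor when the stream ends.
        pass
```

A threaded variant could wrap `push` in a worker queue without changing this interface.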
To test: `python -m reflector.processors tests/records/test_mathieu_hello.wav`
```
Transcript: [00:00.500]: Hi there, everyone.
Transcript: [00:02.700]: Today, I want to share my incredible experience.
Transcript: [00:05.461]: with Reflector, a cutineage product that revolutionizes audio processing.
Transcript: [00:10.922]: With Refector, I can easily convert any audio into accurate transcription.
Transcript: [00:16.493]: serving me hours of tedious manual work.
```
This is not a good transcript, but transcript quality is not the point here.
- replaced loguru with structlog, to make it possible to add open tracing later (see the logging sketch after this list)
- moved configuration to pydantic-settings
- merged secrets.ini and config.ini into a single .env (check reflector/settings.py; see the settings sketch after this list)
- allow LLM_URL to be passed directly via the environment, otherwise fall back to the current config.ini value
- removed usage of globals; shared variables are now passed through a context
- can now have multiple meetings at the same time
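For the loguru-to-structlog move, the configuration could look roughly like this (a sketch, not the exact setup in this repo); structlog's processor chain is what leaves room for attaching tracing metadata later:

```python
import logging
import structlog

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,   # per-meeting context vars
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.dev.ConsoleRenderer(),
    ],
    wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
)

logger = structlog.get_logger()
logger.info("processing started", meeting_id="abc123")  # illustrative fields
```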
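And a sketch of the merged settings, assuming pydantic-settings reading from .env; apart from LLM_URL, the values shown are illustrative (the real fields live in reflector/settings.py):

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Values are read from real environment variables first, then from
    # .env, which replaces the old secrets.ini / config.ini split.
    model_config = SettingsConfigDict(env_file=".env")

    # LLM_URL can be set directly in the environment; the default below
    # stands in for the value previously read from config.ini.
    LLM_URL: str = "http://localhost:8000"

settings = Settings()
```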