If the LLM is stuck warming up, or an exception happens in the pipeline, the processor responsible for the exception fails and there is no fallback: audio continues to arrive, but no processing happens. While this should eventually be handled properly, especially after a disconnection, for now we just ignore LLM warmup issues and keep going.
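Roughly, the intent is to catch the warmup failure inside the processor instead of letting it kill the pipeline. A minimal sketch of that behavior, under assumed names (`LLMWarmupError`, `SummaryProcessor`, and `llm.generate` are hypothetical, not the actual reflector API):

```python
import logging

logger = logging.getLogger(__name__)


class LLMWarmupError(Exception):
    """Hypothetical: raised when the LLM backend is still warming up."""


class SummaryProcessor:
    """Sketch of a pipeline processor that no longer dies on warmup."""

    def __init__(self, llm):
        self.llm = llm

    async def process(self, text: str) -> str | None:
        try:
            return await self.llm.generate(text)
        except LLMWarmupError:
            # Before: the exception propagated, the processor failed,
            # and audio kept arriving with nothing consuming it.
            # Now: log and skip this chunk so the pipeline keeps running.
            logger.warning("LLM still warming up, skipping this chunk")
            return None
```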
Closes #140
* serverless: implement banana backend for both audio and LLM
Related to the monadical-sas/reflector-gpu-banana project
* serverless: got llm working on banana!
* tests: fixes
* serverless: fix dockerfile to use fastapi server + httpx