server: improve llm warmup exception handling

If the LLM gets stuck during warmup, or an exception happens in the pipeline, the processor responsible for the exception fails and there is no fallback: audio continues to arrive, but no processing happens. While this should eventually be handled properly, especially after a disconnection, for now we should ignore LLM warmup issues and keep going.

Closes #140
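
For illustration, a minimal, self-contained sketch of the failure mode and the fix (hypothetical names, not the project's actual ThreadedProcessor): without a per-item try/except, the first exception kills the consumer task and everything queued afterwards is never processed; catching per item keeps the pipeline alive.

import asyncio


async def handle(item: str) -> None:
    # Stand-in for processor.push(); "bad" simulates a failing processor.
    if item == "bad":
        raise RuntimeError("processor failed")
    print(f"processed {item}")


async def fragile_consumer(queue: asyncio.Queue) -> None:
    # Old behaviour: the exception escapes the loop, the task dies, and
    # later items are never pulled off the queue (queue.join() then hangs).
    while True:
        item = await queue.get()
        try:
            await handle(item)
        finally:
            queue.task_done()


async def resilient_consumer(queue: asyncio.Queue) -> None:
    # New behaviour: log the error for this item and keep consuming.
    while True:
        item = await queue.get()
        try:
            await handle(item)
        except Exception:
            print(f"error while handling {item!r}, continuing")
        finally:
            queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for item in ("ok-1", "bad", "ok-2"):
        queue.put_nowait(item)
    # Swap in fragile_consumer to reproduce the stalled pipeline.
    consumer = asyncio.create_task(resilient_consumer(queue))
    await queue.join()
    consumer.cancel()


if __name__ == "__main__":
    asyncio.run(main())

Running this with resilient_consumer processes "ok-1", logs the failure on "bad", and still processes "ok-2"; with fragile_consumer the loop dies on "bad" and "ok-2" is never handled.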
Mathieu Virbel
2023-08-11 19:29:48 +02:00
committed by Mathieu Virbel
parent 63636b52e1
commit 82ce8202bd
2 changed files with 7 additions and 3 deletions


@@ -39,8 +39,7 @@ class LLM:
             duration = monotonic() - start
             logger.info(f"LLM[{name}] warmup took {duration:.2f} seconds")
         except Exception:
-            logger.exception(f"LLM[{name}] warmup failed")
-            raise
+            logger.exception(f"LLM[{name}] warmup failed, ignoring")
 
     async def _warmup(self, logger: reflector_logger):
         pass


@@ -143,7 +143,12 @@ class ThreadedProcessor(Processor):
                     self.logger.debug(f"Warming up {self.processor.__class__.__name__}")
                     await self.processor.warmup()
                     continue
-                await self.processor.push(data)
+                try:
+                    await self.processor.push(data)
+                except Exception:
+                    self.logger.error(
+                        f"Error in push {self.processor.__class__.__name__}, continue"
+                    )
             finally:
                 self.queue.task_done()