Task 3: LLM Streaming Integration
- Import Settings, ChatMessage, MessageRole from llama-index
- Configure LLM with temperature 0.7 on connection
- Build system message with WebVTT transcript context (max 15k chars)
- Initialize conversation history with system message
- Handle 'message' type from client to trigger LLM streaming
- Stream LLM response using Settings.llm.astream_chat() (see the handler sketch below)
- Send tokens incrementally via 'token' messages
- Send 'done' message when streaming completes
- Maintain conversation history across multiple messages
- Add error handling with 'error' message type
- Add message protocol validation test (sketched below, after the handler)
Implements Tasks 3 & 4 from TASKS.md
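A minimal sketch of the streaming handler described above, assuming a FastAPI WebSocket endpoint. The `/chat` route, `build_system_message`, `MAX_CONTEXT_CHARS`, and the exact payload shapes are illustrative, not the actual implementation:

```python
from fastapi import FastAPI, WebSocket
from llama_index.core import Settings
from llama_index.core.llms import ChatMessage, MessageRole

app = FastAPI()
MAX_CONTEXT_CHARS = 15_000  # cap transcript context at 15k chars
# Settings.llm is assumed to be configured elsewhere with temperature=0.7.


def build_system_message(webvtt: str) -> ChatMessage:
    """Embed the (truncated) WebVTT transcript in the system prompt."""
    return ChatMessage(
        role=MessageRole.SYSTEM,
        content="Answer questions about this transcript:\n" + webvtt[:MAX_CONTEXT_CHARS],
    )


@app.websocket("/chat")
async def chat(ws: WebSocket) -> None:
    await ws.accept()
    webvtt = ""  # generated from the recording on connect (see the second commit)
    # History starts with the system message and persists across messages.
    history = [build_system_message(webvtt)]
    while True:
        data = await ws.receive_json()
        if data.get("type") != "message":
            continue
        history.append(ChatMessage(role=MessageRole.USER, content=data["content"]))
        try:
            # astream_chat returns an async generator of ChatResponse
            # chunks; .delta carries the newly generated token(s).
            stream = await Settings.llm.astream_chat(history)
            answer = ""
            async for chunk in stream:
                answer += chunk.delta or ""
                await ws.send_json({"type": "token", "content": chunk.delta or ""})
            await ws.send_json({"type": "done"})
            history.append(ChatMessage(role=MessageRole.ASSISTANT, content=answer))
        except Exception as exc:
            # Surface LLM/transport failures to the client as 'error'.
            await ws.send_json({"type": "error", "message": str(exc)})
```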
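And a protocol validation test sketch, continuing from the handler above; it assumes Settings.llm has been stubbed with a mock LLM so no network call happens:

```python
from fastapi.testclient import TestClient


def test_message_protocol() -> None:
    client = TestClient(app)  # `app` from the handler sketch above
    with client.websocket_connect("/chat") as ws:
        ws.send_json({"type": "message", "content": "Summarize the call."})
        # Expect zero or more 'token' frames, terminated by exactly one 'done'.
        msg = ws.receive_json()
        while msg["type"] == "token":
            assert "content" in msg
            msg = ws.receive_json()
        assert msg["type"] == "done"
```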
Task fn-1.2: WebVTT Context Generation
- Import topics_to_webvtt_named and recordings controller
- Add _get_is_multitrack helper function
- Generate WebVTT context on WebSocket connection
- Add get_context message type to retrieve WebVTT (see the sketch below)
- Maintain backward compatibility with echo for other messages
- Add test fixture and test for WebVTT context generation (sketched below)
Implements task fn-1.2: WebVTT context generation for transcript chat
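A sketch of the connection-time context generation and the get_context handler. The module paths, the recordings-controller API, the is_multitrack keyword, and the reply payload shape are assumptions based on the bullet points above:

```python
from fastapi import FastAPI, WebSocket

# Project-internal imports; exact module paths are assumptions.
from app.transcripts import topics_to_webvtt_named
from app.controllers import recordings as recordings_controller

app = FastAPI()


def _get_is_multitrack(recording: dict) -> bool:
    # Assumed heuristic: more than one audio track means multitrack.
    return len(recording.get("tracks", [])) > 1


@app.websocket("/transcript-chat/{recording_id}")
async def transcript_chat(ws: WebSocket, recording_id: str) -> None:
    await ws.accept()
    # Generate the WebVTT context once, when the client connects.
    recording = recordings_controller.get(recording_id)
    webvtt = topics_to_webvtt_named(
        recording["topics"],
        is_multitrack=_get_is_multitrack(recording),  # keyword name assumed
    )
    while True:
        data = await ws.receive_json()
        if data.get("type") == "get_context":
            # New message type: hand the generated WebVTT back to the client.
            await ws.send_json({"type": "context", "content": webvtt})
        else:
            # Backward compatibility: every other message is echoed as before.
            await ws.send_json(data)
```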
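And a fixture/test sketch for the WebVTT generation; the topic dict shape and the function's exact signature are assumptions:

```python
import pytest


@pytest.fixture
def topics() -> list[dict]:
    # Minimal two-speaker transcript; field names are assumed.
    return [
        {"speaker": "Alice", "start": 0.0, "end": 1.5, "text": "Hello."},
        {"speaker": "Bob", "start": 1.5, "end": 3.0, "text": "Hi there."},
    ]


def test_webvtt_context_generation(topics: list[dict]) -> None:
    vtt = topics_to_webvtt_named(topics, is_multitrack=False)
    assert vtt.startswith("WEBVTT")            # valid WebVTT header
    assert "Alice" in vtt and "Bob" in vtt     # speaker names preserved
```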