Igor Loskutov 663345ece6 feat: local LLM via Ollama + structured output response_format
- Add setup script (scripts/setup-local-llm.sh) for one-command Ollama setup
  Mac: native Metal GPU; Linux: containerized via docker-compose profiles (sketched after this list)
- Add ollama-gpu and ollama-cpu docker-compose profiles for Linux
- Add extra_hosts to server/hatchet-worker-llm for host.docker.internal (compose sketch below)
- Pass a response_format JSON schema in StructuredOutputWorkflow.extract(),
  enabling grammar-based constrained decoding on Ollama/llama.cpp/vLLM/OpenAI (Python sketch below)
- Update .env.example with Ollama as default LLM option
- Add Ollama PRD and local dev setup docs
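The setup script's contents aren't shown in this listing. As a rough sketch of the one-command flow the first bullet describes (native Ollama with Metal on macOS, the containerized compose profiles on Linux), something like the following would work; the default model name and the assumption that the compose service names match the profile names are placeholders, not taken from the repo.

```bash
#!/usr/bin/env bash
# Illustrative sketch of scripts/setup-local-llm.sh; the real script may differ.
set -euo pipefail

MODEL="${LLM_MODEL:-llama3.1:8b}"   # placeholder default model

if [[ "$(uname -s)" == "Darwin" ]]; then
  # macOS: run Ollama natively so it can use the Metal GPU.
  if ! command -v ollama >/dev/null 2>&1; then
    brew install ollama
  fi
  ollama serve >/dev/null 2>&1 &    # harmless if a server is already listening
  sleep 2
  ollama pull "$MODEL"
else
  # Linux: run Ollama in a container; pick the GPU profile if an NVIDIA GPU is visible.
  PROFILE="ollama-cpu"
  if command -v nvidia-smi >/dev/null 2>&1; then
    PROFILE="ollama-gpu"
  fi
  docker compose --profile "$PROFILE" up -d
  # assumes the service name matches the profile name
  docker compose --profile "$PROFILE" exec "$PROFILE" ollama pull "$MODEL"
fi

echo "Ollama is ready with model: $MODEL"
```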
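A hedged sketch of what the compose changes could look like: two Ollama services gated behind the ollama-gpu and ollama-cpu profiles, plus the extra_hosts entry on the LLM worker so the container can reach an Ollama instance running on the host. Only the profile names, the worker name (hatchet-worker-llm), and host.docker.internal come from the commit message; the image, port, and volume choices are assumptions.

```yaml
# docker-compose sketch (illustrative; the actual file may differ)
services:
  ollama-gpu:
    image: ollama/ollama
    profiles: ["ollama-gpu"]
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  ollama-cpu:
    image: ollama/ollama
    profiles: ["ollama-cpu"]
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama

  hatchet-worker-llm:
    # lets the containerized worker reach an Ollama running on the host
    # (native on macOS, or port-published on Linux)
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  ollama:
```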
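The core code change is passing a JSON schema via response_format, which OpenAI-compatible backends (Ollama, the llama.cpp server, vLLM, OpenAI) use to constrain decoding so the output always parses against the schema. A minimal sketch of such a call using the openai Python client pointed at Ollama's OpenAI-compatible endpoint; the environment variable names and the example schema are illustrative, not the ones StructuredOutputWorkflow.extract() actually uses.

```python
import os
from openai import OpenAI

# Placeholder env vars; .env.example would default these to the local Ollama endpoint.
client = OpenAI(
    base_url=os.getenv("LLM_URL", "http://localhost:11434/v1"),  # Ollama's OpenAI-compatible API
    api_key=os.getenv("LLM_API_KEY", "ollama"),                  # Ollama ignores the key
)

# Example schema; the real workflow builds its own from the extraction model.
schema = {
    "name": "extraction",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "action_items": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "action_items"],
        "additionalProperties": False,
    },
}

resp = client.chat.completions.create(
    model=os.getenv("LLM_MODEL", "llama3.1:8b"),
    messages=[{"role": "user", "content": "Summarize this transcript: ..."}],
    # Grammar-based constrained decoding: the backend only emits tokens that
    # keep the response valid against the schema.
    response_format={"type": "json_schema", "json_schema": schema},
)
print(resp.choices[0].message.content)
```

With strict mode and additionalProperties disabled, the backend can compile the schema into a decoding grammar, so the caller gets valid JSON on the first pass instead of retrying on parse failures.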
2026-02-10 15:55:21 -05:00