reflector/server/reflector
Igor Loskutov 663345ece6 feat: local LLM via Ollama + structured output response_format
- Add setup script (scripts/setup-local-llm.sh) for one-command Ollama setup
  Mac: native Metal GPU, Linux: containerized via docker-compose profiles
- Add ollama-gpu and ollama-cpu docker-compose profiles for Linux
- Add extra_hosts to server/hatchet-worker-llm for host.docker.internal
- Pass a response_format JSON schema in StructuredOutputWorkflow.extract(),
  enabling grammar-based constrained decoding on Ollama/llama.cpp/vLLM/OpenAI
- Update .env.example with Ollama as default LLM option
- Add Ollama PRD and local dev setup docs
2026-02-10 15:55:21 -05:00
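The response_format change above can be sketched as a request payload for Ollama's OpenAI-compatible endpoint. This is a minimal illustration, not the repo's actual code: the model tag, the `extraction` schema, and the field names inside it are hypothetical; only the `response_format` envelope shape follows the OpenAI JSON-schema convention the commit refers to.

```python
def build_extract_payload(transcript: str) -> dict:
    """Build a chat-completions payload with a JSON-schema response_format.

    The backend (Ollama/llama.cpp/vLLM/OpenAI) compiles the schema into a
    decoding grammar, so the model can only emit JSON matching it.
    Schema and model name below are illustrative placeholders.
    """
    schema = {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "action_items": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "action_items"],
        "additionalProperties": False,
    }
    return {
        "model": "llama3.1",  # assumption: any locally pulled Ollama model tag
        "messages": [{"role": "user", "content": f"Extract: {transcript}"}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "extraction", "strict": True, "schema": schema},
        },
    }

# Such a payload would be POSTed to the OpenAI-compatible route, typically
# http://localhost:11434/v1/chat/completions when Ollama runs locally.
payload = build_extract_payload("Standup notes ...")
print(payload["response_format"]["type"])
```

Because the constraint lives in the request rather than the prompt, the same workflow code works unchanged whether the LLM base URL points at Ollama, vLLM, or OpenAI.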