af16178f86
ci: use github-token to get around potential api throttling + rework dockerfile (#554)
* ci: use github-token to get around potential api throttling (see the sketch after this entry)
* build: install pyannote-audio separately from the project
* fix: now that we have a readme, use it
* build: add UV_NO_CACHE
2025-08-20 21:59:29 -06:00
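A minimal sketch of the idea behind the github-token change, not the actual workflow edit: GitHub API requests made with a token get a much higher rate limit than anonymous ones, so CI steps that hit the API should send it. The endpoint and headers are the standard GitHub REST API; the environment variable name is an assumption.

```python
import os
import requests

# Assumed: the CI step exposes a token, e.g. GITHUB_TOKEN from the workflow secrets.
token = os.environ.get("GITHUB_TOKEN")

headers = {"Accept": "application/vnd.github+json"}
if token:
    # Authenticated requests get a much higher rate limit than anonymous ones.
    headers["Authorization"] = f"Bearer {token}"

# /rate_limit reports the remaining quota without counting against it.
resp = requests.get("https://api.github.com/rate_limit", headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json()["resources"]["core"])
```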
406164033d
feat: new summary using phi-4 and llama-index (#519)
* feat: add litellm backend implementation
* refactor: improve generate/completion methods for base LLM
* refactor: remove tokenizer logic
* style: apply code formatting
* fix: remove hallucinations from LLM responses
* refactor: comprehensive LLM and summarization rework
* chore: remove debug code
* feat: add structured output support to LiteLLM
* refactor: apply self-review improvements
* docs: add model structured output comments
* docs: update model structured output comments
* style: apply linting and formatting fixes
* fix: resolve type logic bug
* refactor: apply PR review feedback
* refactor: apply additional PR review feedback
* refactor: apply final PR review feedback
* fix: improve schema passing for LLMs without structured output
* feat: add PR comments and logger improvements
* docs: update README and add HTTP logging
* feat: improve HTTP logging
* feat: add summary chunking functionality
* fix: resolve title generation runtime issues
* refactor: apply self-review improvements
* style: apply linting and formatting
* feat: implement LiteLLM class structure
* style: apply linting and formatting fixes
* docs: env template model name fix
* chore: remove older litellm class
* chore: format
* refactor: simplify OpenAILLM
* refactor: OpenAILLM tokenizer
* refactor: self-review
* refactor: self-review
* refactor: self-review
* chore: format
* chore: remove LLM_USE_STRUCTURED_OUTPUT from envs
* chore: roll back migration lint changes
* chore: roll back migration lint changes
* fix: make summary llm configuration optional for the tests
* fix: missing f-string
* fix: tweak the prompt for summary title
* feat: try llamaindex for summarization
* fix: complete refactor of summary builder using llamaindex and structured output when possible (see the llama-index sketch after this entry)
* fix: separate prompt as constant
* fix: typings
* fix: enhance prompt to prevent mentioning other subjects while summarizing one
* fix: various changes after self-review
* fix: address Igor's review feedback
---------
Co-authored-by: Igor Loskutov <igor.loskutoff@gmail.com>
2025-07-31 15:29:29 -06:00
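A minimal sketch of tree summarization with llama-index, as referenced in the summary commit above. It only illustrates the library pattern (a SummaryIndex queried with the tree_summarize response mode); the project's actual summary builder, prompts, chunking, and phi-4 configuration are not reproduced here, and the transcript text and query are placeholders.

```python
from llama_index.core import Document, SummaryIndex

# Placeholder transcript text; the real pipeline would feed transcript chunks here.
transcript = "Speaker A: ... Speaker B: ..."

# Build a summary index over the document(s); assumes an LLM is already
# configured globally (e.g. via llama_index.core.Settings.llm).
index = SummaryIndex.from_documents([Document(text=transcript)])

# tree_summarize recursively condenses chunk-level summaries into a single answer,
# which keeps long transcripts within the model's context window.
query_engine = index.as_query_engine(response_mode="tree_summarize")
summary = query_engine.query("Summarize the key points of this conversation.")
print(summary)
```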
Igor Loskutov
7bb2962f94
consent preparation
2025-06-17 12:18:41 -04:00
Sara
918daff66d
more flexible poetry
2024-08-12 12:24:32 +02:00
Koper
fcd98e9fd7
Update README
2023-08-24 19:08:53 +07:00
Koper
92cd950572
Force vercel re-deploy
2023-08-23 15:35:59 +07:00
Mathieu Virbel
e4f2b785ca
server: update process tools and tests
2023-08-01 20:16:54 +02:00
Mathieu Virbel
cb198927b0
server: add default uvicorn server + update readme
2023-08-01 20:13:16 +02:00
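A minimal sketch of what a default uvicorn entry point typically looks like; it assumes a FastAPI app object, and the host and port are illustrative placeholders rather than the project's actual values.

```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    # Default development server; host/port values are illustrative only.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```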
Mathieu Virbel
69ba871481
server: refactor to reflector module
- replaced loguru with structlog, to enable open tracing later
- moved configuration to pydantic-settings (see the sketch after this entry)
- merged secrets.ini and config.ini into a single .env (see reflector/settings.py)
2023-07-27 15:31:58 +02:00
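A minimal sketch of the pydantic-settings pattern described above, using the current pydantic-settings v2 API with hypothetical field names; the real fields live in reflector/settings.py and are not reproduced here.

```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # Hypothetical fields; the actual ones are defined in reflector/settings.py.
    database_url: str = "sqlite:///./reflector.db"
    log_level: str = "INFO"

    # Values are read from the merged .env file (formerly secrets.ini + config.ini);
    # real environment variables take precedence over the file.
    model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")


settings = Settings()
```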
Mathieu Virbel
b5e0baa6c8
server: dockerize the server and update documentation
2023-07-27 12:18:49 +02:00
Koper
c0400b4232
Moved all server files to server/
2023-07-26 15:13:46 +07:00