
InternalAI Agent

A documentation and pattern library that gives LLM agents the context they need to build data analysis workflows against Monadical's internal systems — ContactDB (people directory) and DataIndex (unified data from email, calendar, Zulip, meetings, documents).

The goal is to use opencode (or any LLM-powered coding tool) to iteratively create marimo notebook workflows that query and analyze company data.

Setup

  1. Install opencode
  2. Make sure InternalAI is running locally (ContactDB + DataIndex accessible via http://localhost:42000)
  3. Configure LiteLLM — add to ~/.config/opencode/config.json:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Litellm",
      "options": {
        "baseURL": "https://litellm.app.monadical.io",
        "apiKey": "xxxxx"
      },
      "models": {
        "Kimi-K2.5-dev": {
          "name": "Kimi-K2.5-dev"
        }
      }
    }
  }
}

Replace xxxxx with your actual LiteLLM API key.
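To catch malformed JSON or a forgotten placeholder key before launching opencode, a quick standard-library check like the following can help. The config path and key names are taken from the snippet above; the validation rules themselves are this sketch's own assumptions, not something opencode ships with:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "opencode" / "config.json"

def validate_opencode_config(cfg: dict) -> list[str]:
    """Return a list of problems found in the provider config (empty = looks OK)."""
    problems = []
    litellm = cfg.get("provider", {}).get("litellm", {})
    if not litellm:
        return ["missing provider.litellm block"]
    options = litellm.get("options", {})
    if not options.get("baseURL", "").startswith("https://"):
        problems.append("options.baseURL should be an https:// URL")
    if options.get("apiKey") in (None, "", "xxxxx"):
        problems.append("options.apiKey still has the xxxxx placeholder")
    if not litellm.get("models"):
        problems.append("no models configured under provider.litellm.models")
    return problems

if CONFIG_PATH.exists():
    for problem in validate_opencode_config(json.loads(CONFIG_PATH.read_text())):
        print("config issue:", problem)
```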

  4. (Optional) LLM filtering in workflows — if your workflows need to classify or score entities via an LLM, copy .env.example to .env and fill in your key:
cp .env.example .env

The workflows/lib module provides an llm_call helper (using mirascope) for structured LLM calls — see Pattern 5 in docs/notebook-patterns.md.
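The real helper lives in workflows/lib/llm.py and is built on mirascope; the sketch below only illustrates the underlying idea with the standard library: read LLM_API_URL, LLM_MODEL, and LLM_API_KEY from .env and assemble an OpenAI-compatible chat request. The minimal .env parser and the payload shape are assumptions of this sketch, not the project's actual implementation:

```python
from pathlib import Path

def read_env(path: str = ".env") -> dict[str, str]:
    """Minimal .env reader: KEY=VALUE lines; blanks and '#' comments ignored."""
    env: dict[str, str] = {}
    p = Path(path)
    if not p.exists():
        return env
    for line in p.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def build_chat_request(env: dict[str, str], prompt: str) -> tuple[str, dict, dict]:
    """Assemble (url, headers, payload) for an OpenAI-compatible endpoint."""
    url = env["LLM_API_URL"].rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {env['LLM_API_KEY']}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": env["LLM_MODEL"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload
```

In a notebook you would send the payload with urllib.request or httpx; the mirascope-based llm_call adds structured (typed) outputs on top of this, per Pattern 5.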

Quickstart

  1. Run opencode from the project root
  2. Ask it to create a workflow, e.g.: "Create a workflow that shows all meetings about Greyhaven in January"
  3. The agent reads AGENTS.md, proposes a plan, and generates a notebook like workflows/001_greyhaven_meetings_january.py
  4. Run it: uvx marimo edit workflows/001_greyhaven_meetings_january.py
  5. Iterate — review the output in marimo, go back to opencode and ask for refinements

How AGENTS.md is Structured

AGENTS.md is the entry point that opencode reads automatically. It routes the agent to the right documentation:

Topic                                    File
---------------------------------------  ------------------------------
Company context, tools, connectors       docs/company-context.md
People, contacts, relationships          docs/contactdb-api.md
Querying emails, meetings, chats, docs   docs/dataindex-api.md
Connector-to-entity-type mappings        docs/connectors-and-sources.md
Notebook templates and patterns          docs/notebook-patterns.md

It also includes API base URLs, a translation table mapping natural-language questions to API calls, and rules for when/how to create workflow notebooks.
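Such a translation table can be pictured as a lookup from question shapes to API calls. The sketch below is purely illustrative: the endpoint paths and parameter names are hypothetical stand-ins, not the actual ContactDB/DataIndex routes (those are documented in docs/contactdb-api.md and docs/dataindex-api.md):

```python
from urllib.parse import urlencode

BASE = "http://localhost:42000"

# Hypothetical question -> API-call mapping in the spirit of the AGENTS.md table.
# Endpoint paths and query parameters are illustrative placeholders.
TRANSLATION_TABLE = {
    "who works on X": lambda topic: (
        f"{BASE}/contactdb/search?" + urlencode({"q": topic})
    ),
    "meetings about X between DATES": lambda topic, start, end: (
        f"{BASE}/dataindex/search?"
        + urlencode({"q": topic, "type": "meeting", "after": start, "before": end})
    ),
}

url = TRANSLATION_TABLE["meetings about X between DATES"](
    "Greyhaven", "2026-01-01", "2026-01-31"
)
```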

Project Structure

internalai-agent/
├── AGENTS.md                        # LLM agent routing guide (entry point)
├── .env.example                     # LLM credentials template
├── docs/
│   ├── company-context.md           # Monadical org, tools, key concepts
│   ├── contactdb-api.md             # ContactDB REST API reference
│   ├── dataindex-api.md             # DataIndex REST API reference
│   ├── connectors-and-sources.md    # Connector → entity type mappings
│   └── notebook-patterns.md         # Marimo notebook templates and patterns
└── workflows/
    └── lib/                         # Shared helpers for notebooks
        ├── __init__.py
        └── llm.py                   # llm_call() — structured LLM calls via mirascope