# InternalAI Agent
A documentation and pattern library that gives LLM agents the context they need to build data analysis workflows against Monadical's internal systems — ContactDB (people directory) and DataIndex (unified data from email, calendar, Zulip, meetings, documents).
The goal is to use [opencode](https://opencode.ai) (or any LLM-powered coding tool) to iteratively create [marimo](https://marimo.io) notebook workflows that query and analyze company data.
## Setup
1. Install [opencode](https://opencode.ai)
2. Make sure InternalAI is running locally (ContactDB + DataIndex accessible via http://localhost:42000)
3. Configure LiteLLM — add to `~/.config/opencode/config.json`:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Litellm",
      "options": {
        "baseURL": "https://litellm.app.monadical.io",
        "apiKey": "xxxxx"
      },
      "models": {
        "Kimi-K2.5-dev": {
          "name": "Kimi-K2.5-dev"
        }
      }
    }
  }
}
```
Replace `xxxxx` with your actual LiteLLM API key.
4. **Set up your profile** — copy the example and fill in your name, role, and contact ID so the agent can personalize workflows:
```bash
cp MYSELF.example.md MYSELF.md
```
5. **(Optional) LLM filtering in workflows** — if your workflows need to classify or score entities via an LLM, copy `.env.example` to `.env` and fill in your key:
```bash
cp .env.example .env
```
The `workflows/lib` module provides an `llm_call` helper (using [mirascope](https://mirascope.io)) for structured LLM calls — see Pattern 5 in `docs/notebook-patterns.md`.
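The reachability check in setup step 2 can be sketched as a small helper. This is only an illustration: the endpoint path passed to `is_reachable` is a hypothetical placeholder, since the real routes are documented in `docs/contactdb-api.md` and `docs/dataindex-api.md`.

```python
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import URLError

BASE_URL = "http://localhost:42000"  # local InternalAI instance (setup step 2)

def service_url(path: str, base: str = BASE_URL) -> str:
    """Build an absolute URL for a local InternalAI endpoint."""
    return urljoin(base + "/", path.lstrip("/"))

def is_reachable(path: str = "/", timeout: float = 2.0) -> bool:
    """Return True if the local instance answers at `path` without a server error."""
    try:
        with urlopen(service_url(path), timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False
```

If `is_reachable()` returns `False`, start InternalAI before running any workflows.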
## Quickstart
1. Run `opencode` from the project root
2. Ask it to create a workflow, e.g.: *"Create a workflow that shows all meetings about Greyhaven in January"*
3. The agent reads `AGENTS.md`, proposes a plan, and generates a notebook like `workflows/001_greyhaven_meetings_january.py`
4. Run it: `uvx marimo edit workflows/001_greyhaven_meetings_january.py`
5. Iterate — review the output in marimo, go back to opencode and ask for refinements
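Stripped of marimo scaffolding and the actual API call, the core of a notebook like the Greyhaven example above usually reduces to a filter step. The field names (`title`, `start`) are illustrative assumptions; the real schema comes from `docs/dataindex-api.md`.

```python
from datetime import datetime

def meetings_matching(meetings: list[dict], keyword: str, year: int, month: int) -> list[dict]:
    """Keep meetings whose title contains `keyword` and that start in the given month."""
    keyword = keyword.lower()
    return [
        m for m in meetings
        if keyword in m["title"].lower()
        and (lambda d: (d.year, d.month))(datetime.fromisoformat(m["start"])) == (year, month)
    ]

# Toy records standing in for a DataIndex response.
meetings = [
    {"title": "Greyhaven kickoff", "start": "2026-01-08T10:00:00"},
    {"title": "Ops sync", "start": "2026-01-09T10:00:00"},
    {"title": "Greyhaven retro", "start": "2026-02-02T15:00:00"},
]
# Only the January meeting with "Greyhaven" in the title survives the filter.
```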
## How AGENTS.md is Structured
`AGENTS.md` is the entry point that opencode reads automatically. It routes the agent to the right documentation:
| Topic | File |
|-------|------|
| Your identity, role, preferences | `MYSELF.md` (copy from `MYSELF.example.md`) |
| Company context, tools, connectors | `docs/company-context.md` |
| People, contacts, relationships | `docs/contactdb-api.md` |
| Querying emails, meetings, chats, docs | `docs/dataindex-api.md` |
| Connector-to-entity-type mappings | `docs/connectors-and-sources.md` |
| Notebook templates and patterns | `docs/notebook-patterns.md` |
It also includes API base URLs, a translation table mapping natural-language questions to API calls, and rules for when/how to create workflow notebooks.
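Conceptually, a translation-table entry maps a question shape to an endpoint plus a set of allowed query parameters. The sketch below uses entirely hypothetical endpoints and parameter names to show the shape of that mapping; the authoritative routes live in the `docs/` references.

```python
def build_query(question_kind: str, **params) -> tuple[str, dict]:
    """Map a question category to an (endpoint, query-params) pair.
    Endpoints and parameter names here are placeholders, not documented routes."""
    table = {
        "find_person":   ("/contactdb/contacts/search", {"name"}),
        "list_meetings": ("/dataindex/search", {"q", "type", "since", "until"}),
        "list_emails":   ("/dataindex/search", {"q", "type", "since", "until"}),
    }
    endpoint, allowed = table[question_kind]
    # Drop anything the endpoint would not accept.
    return endpoint, {k: v for k, v in params.items() if k in allowed}
```

For instance, "all meetings about Greyhaven in January" would resolve to the `list_meetings` row with `q="Greyhaven"` and a January date range.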
## Project Structure
```
internalai-agent/
├── AGENTS.md                      # LLM agent routing guide (entry point)
├── MYSELF.example.md              # User profile template (copy to MYSELF.md)
├── .env.example                   # LLM credentials template
├── docs/
│   ├── company-context.md         # Monadical org, tools, key concepts
│   ├── contactdb-api.md           # ContactDB REST API reference
│   ├── dataindex-api.md           # DataIndex REST API reference
│   ├── connectors-and-sources.md  # Connector → entity type mappings
│   └── notebook-patterns.md       # Marimo notebook templates and patterns
└── workflows/
    └── lib/                       # Shared helpers for notebooks
        ├── __init__.py
        └── llm.py                 # llm_call() — structured LLM calls via mirascope
```