Mirror of https://github.com/Monadical-SAS/cubbi.git, synced 2025-12-21 12:49:07 +00:00

Compare commits: 16 commits
| SHA1 |
|---|
| fd23e12ff8 |
| 2eb15a31f8 |
| afae8a13e1 |
| d41faf6b30 |
| 672b8a8e31 |
| da5937e708 |
| 4958b07401 |
| 4c4e207b67 |
| dba7a7c1ef |
| 9c8ddbb3f3 |
| d750e64608 |
| fc0d6b51af |
| b28c2bd63e |
| e70ec3538b |
| 5fca51e515 |
| e5121ddea4 |
17 .github/workflows/conventional_commit_pr.yml (vendored, deleted)

@@ -1,17 +0,0 @@
-name: Conventional commit PR
-
-on: [pull_request]
-
-jobs:
-  cog_check_job:
-    runs-on: ubuntu-latest
-    name: check conventional commit compliance
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-          # pick the pr HEAD instead of the merge commit
-          ref: ${{ github.event.pull_request.head.sha }}
-
-      - name: Conventional commit check
-        uses: cocogitto/cocogitto-action@v3
3 .github/workflows/pytests.yml (vendored)

@@ -30,10 +30,11 @@ jobs:
      - name: Install all dependencies
        run: uv sync --frozen --all-extras --all-groups

-      - name: Build goose image
+      - name: Build required images
        run: |
          uv tool install --with-editable . .
          cubbi image build goose
+          cubbi image build aider

      - name: Tests
        run: |
152 CHANGELOG.md

@@ -1,6 +1,158 @@
# CHANGELOG

+
+## v0.3.0 (2025-07-31)
+
+### Bug Fixes
+
+- Claudecode and opencode arm64 images ([#21](https://github.com/Monadical-SAS/cubbi/pull/21),
+  [`dba7a7c`](https://github.com/Monadical-SAS/cubbi/commit/dba7a7c1efcc04570a92ecbc4eee39eb6353aaea))
+
+- Update readme
+  ([`4958b07`](https://github.com/Monadical-SAS/cubbi/commit/4958b07401550fb5a6751b99a257eda6c4558ea4))
+
+### Continuous Integration
+
+- Remove conventional commit, as only PR is required
+  ([`afae8a1`](https://github.com/Monadical-SAS/cubbi/commit/afae8a13e1ea02801b2e5c9d5c84aa65a32d637c))
+
+### Features
+
+- Add --mcp-type option for remote MCP servers
+  ([`d41faf6`](https://github.com/Monadical-SAS/cubbi/commit/d41faf6b3072d4f8bdb2adc896125c7fd0d6117d))
+
+  Auto-detects connection type from URL (/sse -> sse, /mcp -> streamable_http) or allows manual
+  specification. Updates goose plugin to use actual MCP type instead of hardcoded sse.
+
+  🤖 Generated with [Claude Code](https://claude.ai/code)
+
+  Co-Authored-By: Claude <noreply@anthropic.com>
+
+- Add Claude Code image support ([#16](https://github.com/Monadical-SAS/cubbi/pull/16),
+  [`b28c2bd`](https://github.com/Monadical-SAS/cubbi/commit/b28c2bd63e324f875b2d862be9e0afa4a7a17ffc))
+
+  * feat: add Claude Code image support
+
+  Add a new Cubbi image for Claude Code (Anthropic's official CLI) with: - Full Claude Code CLI
+  functionality via NPM package - Secure API key management with multiple authentication options -
+  Enterprise support (Bedrock, Vertex AI, proxy configuration) - Persistent configuration and cache
+  directories - Comprehensive test suite and documentation
+
+  The image allows users to run Claude Code in containers with proper isolation, persistent settings,
+  and seamless Cubbi integration. It gracefully handles missing API keys to allow flexible
+  authentication.
+
+  Also adds optional Claude Code API keys to container.py for enterprise deployments.
+
+  🤖 Generated with [Claude Code](https://claude.ai/code)
+
+  Co-Authored-By: Claude <noreply@anthropic.com>
+
+  * Pre-commit fixes
+
+  ---------
+
+  Co-authored-by: Claude <noreply@anthropic.com>
+
+  Co-authored-by: Your Name <you@example.com>
+
+- Add configuration override in session create with --config/-c
+  ([`672b8a8`](https://github.com/Monadical-SAS/cubbi/commit/672b8a8e315598d98f40d269dfcfbde6203cbb57))
+
+- Add MCP tracking to sessions ([#19](https://github.com/Monadical-SAS/cubbi/pull/19),
+  [`d750e64`](https://github.com/Monadical-SAS/cubbi/commit/d750e64608998f6f3a03928bba18428f576b412f))
+
+  Add mcps field to Session model to track active MCP servers and populate it from container labels in
+  ContainerManager. Enhance MCP remove command to warn when removing servers used by active
+  sessions.
+
+  🤖 Generated with [Claude Code](https://claude.ai/code)
+
+  Co-authored-by: Claude <noreply@anthropic.com>
+
+- Add network filtering with domain restrictions
+  ([#22](https://github.com/Monadical-SAS/cubbi/pull/22),
+  [`2eb15a3`](https://github.com/Monadical-SAS/cubbi/commit/2eb15a31f8bb97f93461bea5e567cc2ccde3f86c))
+
+  * fix: remove config override logging to prevent API key exposure
+
+  * feat: add network filtering with domain restrictions
+
+  - Add --domains flag to restrict container network access to specific domains/ports - Integrate
+  monadicalsas/network-filter container for network isolation - Support domain patterns like
+  'example.com:443', '*.api.com' - Add defaults.domains configuration option - Automatically handle
+  network-filter container lifecycle - Prevent conflicts between --domains and --network options
+
+  * docs: add --domains option to README usage examples
+
+  * docs: remove wildcard domain example from --domains help
+
+  Wildcard domains are not currently supported by network-filter
+
+- Add ripgrep and openssh-client in images ([#15](https://github.com/Monadical-SAS/cubbi/pull/15),
+  [`e70ec35`](https://github.com/Monadical-SAS/cubbi/commit/e70ec3538ba4e02a60afedca583da1c35b7b6d7a))
+
+- Add sudo and sudoers ([#20](https://github.com/Monadical-SAS/cubbi/pull/20),
+  [`9c8ddbb`](https://github.com/Monadical-SAS/cubbi/commit/9c8ddbb3f3f2fc97db9283898b6a85aee7235fae))
+
+  * feat: add sudo and sudoers
+
+  * Update cubbi/images/cubbi_init.py
+
+  Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
+
+  ---------
+
+- Implement Aider AI pair programming support
+  ([#17](https://github.com/Monadical-SAS/cubbi/pull/17),
+  [`fc0d6b5`](https://github.com/Monadical-SAS/cubbi/commit/fc0d6b51af12ddb0bd8655309209dd88e7e4d6f1))
+
+  * feat: implement Aider AI pair programming support
+
+  - Add comprehensive Aider Docker image with Python 3.12 and system pip installation - Implement
+  aider_plugin.py for secure API key management and environment configuration - Support multiple LLM
+  providers: OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter - Add persistent configuration for
+  ~/.aider/ and ~/.cache/aider/ directories - Create comprehensive documentation with usage examples
+  and troubleshooting - Include automated test suite with 6 test categories covering all
+  functionality - Update container.py to support DEEPSEEK_API_KEY and GEMINI_API_KEY - Integrate
+  with Cubbi CLI for seamless session management
+
+  🤖 Generated with [Claude Code](https://claude.ai/code)
+
+  Co-Authored-By: Claude <noreply@anthropic.com>
+
+  * Fix pytest for aider
+
+  * Fix pre-commit
+
+  ---------
+
+  Co-authored-by: Your Name <you@example.com>
+
+- Include new image opencode ([#14](https://github.com/Monadical-SAS/cubbi/pull/14),
+  [`5fca51e`](https://github.com/Monadical-SAS/cubbi/commit/5fca51e5152dcf7503781eb707fa04414cf33c05))
+
+  * feat: include new image opencode
+
+  * docs: update readme
+
+- Support config `openai.url` for goose/opencode/aider
+  ([`da5937e`](https://github.com/Monadical-SAS/cubbi/commit/da5937e70829b88a66f96c3ce7be7dacfc98facb))
+
+### Refactoring
+
+- New image layout and organization ([#13](https://github.com/Monadical-SAS/cubbi/pull/13),
+  [`e5121dd`](https://github.com/Monadical-SAS/cubbi/commit/e5121ddea4230e78a05a85c4ce668e0c169b5ace))
+
+  * refactor: rework how image are defined, in order to create others wrapper for others tools
+
+  * refactor: fix issues with ownership
+
+  * refactor: image share now information with others images type
+
+  * fix: update readme
+

## v0.2.0 (2025-05-21)

### Continuous Integration
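Taken together, the v0.3.0 entries above add three new CLI surfaces: per-session configuration overrides (`--config/-c`), domain-restricted networking (`--domains`), and typed remote MCP servers (`--mcp-type`). A hedged sketch of how the session-side flags might be combined; the flag spellings come from the diffs below, but pairing them in one invocation is illustrative rather than a documented recipe, and the model/provider values are placeholders:

```bash
# Illustrative combination of the new v0.3.0 session flags (values are examples).
cubbix /path/to/project \
  --domains github.com --domains "api.example.com:443" \
  -c defaults.model=sonnet \
  -c defaults.provider=anthropic
```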
25 README.md

@@ -2,7 +2,7 @@

# Cubbi - Container Tool

-Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments. It works with both local Docker and a dedicated remote web service that manages containers in a Docker-in-Docker (DinD) environment. Cubbi also supports connecting to MCP (Model Control Protocol) servers to extend AI tools with additional capabilities.
+Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments, with support for MCP servers.




@@ -42,6 +42,7 @@ Then compile your first image:

```bash
cubbi image build goose
+cubbi image build opencode
```

### For Developers

@@ -81,6 +82,7 @@ cubbi session close SESSION_ID

# Create a session with a specific image
cubbix --image goose
+cubbix --image opencode

# Create a session with environment variables
cubbix -e VAR1=value1 -e VAR2=value2

@@ -96,6 +98,9 @@ cubbix /path/to/project
# Connect to external Docker networks
cubbix --network teamnet --network dbnet

+# Restrict network access to specific domains
+cubbix --domains github.com --domains "api.example.com:443"
+
# Connect to MCP servers for extended capabilities
cubbix --mcp github --mcp jira

@@ -123,7 +128,16 @@ cubbix --ssh

## 🖼️ Image Management

-Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools:
+Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools
+
+**Supported Images**
+
+| Image Name | Langtrace Support |
+|------------|-------------------|
+| goose | yes |
+| opencode | no |
+| claudecode | no |
+| aider | no |
+
```bash
# List available images

@@ -131,12 +145,11 @@ cubbi image list

# Get detailed information about an image
cubbi image info goose
+cubbi image info opencode

# Build an image
cubbi image build goose
+cubbi image build opencode
-
-# Build and push an image
-cubbi image build goose --push
```

Images are defined in the `cubbi/images/` directory, with each subdirectory containing:

@@ -144,7 +157,7 @@ Images are defined in the `cubbi/images/` directory, with each subdirectory cont
- `Dockerfile`: Docker image definition
- `entrypoint.sh`: Container entrypoint script
- `cubbi-init.sh`: Standardized initialization script
-- `cubbi-image.yaml`: Image metadata and configuration
+- `cubbi_image.yaml`: Image metadata and configuration
- `README.md`: Image documentation

Cubbi automatically discovers and loads image definitions from the YAML files.
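For orientation, a minimal `cubbi_image.yaml` could look like the sketch below. The field names mirror the `Image(...)` constructor that appears later in this diff (inside `ContainerManager.create_session`): name, description, version, maintainer, image, ports, volumes, persistent_configs. The exact YAML schema and values are assumptions here; check an existing image directory such as `cubbi/images/goose/` for the authoritative layout.

```yaml
# Hypothetical minimal cubbi_image.yaml; field names follow the Image model,
# everything else (values, defaults) is assumed for illustration.
name: goose
description: "Goose AI agent for Cubbi"
version: latest
maintainer: team@monadical.com
image: monadical/cubbi-goose:latest
ports: []
volumes: []
persistent_configs: []
```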
202 cubbi/cli.py

@@ -4,6 +4,9 @@ CLI for Cubbi Container Tool.

import logging
import os
+import shutil
+import tempfile
+from pathlib import Path
from typing import List, Optional

import typer

@@ -45,9 +48,7 @@ mcp_manager = MCPManager(config_manager=user_config)
@app.callback()
def main(
    ctx: typer.Context,
-    verbose: bool = typer.Option(
-        False, "--verbose", "-v", help="Enable verbose logging"
-    ),
+    verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
    """Cubbi Container Tool

@@ -167,14 +168,23 @@ def create_session(
    gid: Optional[int] = typer.Option(
        None, "--gid", help="Group ID to run the container as (defaults to host user)"
    ),
-    model: Optional[str] = typer.Option(None, "--model", "-m", help="Model to use"),
+    model: Optional[str] = typer.Option(None, "--model", help="Model to use"),
    provider: Optional[str] = typer.Option(
        None, "--provider", "-p", help="Provider to use"
    ),
    ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
-    verbose: bool = typer.Option(
-        False, "--verbose", "-v", help="Enable verbose logging"
+    config: List[str] = typer.Option(
+        [],
+        "--config",
+        "-c",
+        help="Override configuration values (KEY=VALUE) for this session only",
    ),
+    domains: List[str] = typer.Option(
+        [],
+        "--domains",
+        help="Restrict network access to specified domains/ports (e.g., 'example.com:443', 'api.github.com')",
+    ),
+    verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
    """Create a new Cubbi session
@@ -190,16 +200,66 @@ def create_session(
    target_gid = gid if gid is not None else os.getgid()
    console.print(f"Using UID: {target_uid}, GID: {target_gid}")

-    # Use default image from user configuration
+    # Create a temporary user config manager with overrides
+    temp_user_config = UserConfigManager()
+
+    # Parse and apply config overrides
+    config_overrides = {}
+    for config_item in config:
+        if "=" in config_item:
+            key, value = config_item.split("=", 1)
+            # Convert string value to appropriate type
+            if value.lower() == "true":
+                typed_value = True
+            elif value.lower() == "false":
+                typed_value = False
+            elif value.isdigit():
+                typed_value = int(value)
+            else:
+                typed_value = value
+            config_overrides[key] = typed_value
+        else:
+            console.print(
+                f"[yellow]Warning: Ignoring invalid config format: {config_item}. Use KEY=VALUE.[/yellow]"
+            )
+
+    # Apply overrides to temp config (without saving)
+    for key, value in config_overrides.items():
+        # Handle shorthand service paths (e.g., "langfuse.url")
+        if (
+            "." in key
+            and not key.startswith("services.")
+            and not any(
+                key.startswith(section + ".")
+                for section in ["defaults", "docker", "remote", "ui"]
+            )
+        ):
+            service, setting = key.split(".", 1)
+            key = f"services.{service}.{setting}"
+
+        # Split the key path and navigate to set the value
+        parts = key.split(".")
+        config_dict = temp_user_config.config
+
+        # Navigate to the containing dictionary
+        for part in parts[:-1]:
+            if part not in config_dict:
+                config_dict[part] = {}
+            config_dict = config_dict[part]
+
+        # Set the value without saving
+        config_dict[parts[-1]] = value
+
+    # Use default image from user configuration (with overrides applied)
    if not image:
-        image_name = user_config.get(
+        image_name = temp_user_config.get(
            "defaults.image", config_manager.config.defaults.get("image", "goose")
        )
    else:
        image_name = image

-    # Start with environment variables from user configuration
-    environment = user_config.get_environment_variables()
+    # Start with environment variables from user configuration (with overrides applied)
+    environment = temp_user_config.get_environment_variables()

    # Override with environment variables from command line
    for var in env:
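The override parser above coerces `true`/`false` and digit strings into booleans and integers, and rewrites bare dotted keys that do not start with a known section (`defaults`, `docker`, `remote`, `ui`, `services`) into `services.<name>.<setting>`. A hedged usage sketch; the keys shown are drawn from elsewhere in this diff rather than a complete or verified list:

```bash
# Per-session overrides; nothing is written back to the user config file.
cubbix -c defaults.connect=false /path/to/project     # "false" is coerced to a boolean
cubbix -c langfuse.url=https://langfuse.example.com   # shorthand for services.langfuse.url
```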
@@ -215,7 +275,7 @@ def create_session(
    volume_mounts = {}

    # Get default volumes from user config
-    default_volumes = user_config.get("defaults.volumes", [])
+    default_volumes = temp_user_config.get("defaults.volumes", [])

    # Combine default volumes with user-specified volumes
    all_volumes = default_volumes + list(volume)

@@ -242,15 +302,27 @@ def create_session(
    )

    # Get default networks from user config
-    default_networks = user_config.get("defaults.networks", [])
+    default_networks = temp_user_config.get("defaults.networks", [])

    # Combine default networks with user-specified networks, removing duplicates
    all_networks = list(set(default_networks + network))

+    # Get default domains from user config
+    default_domains = temp_user_config.get("defaults.domains", [])
+
+    # Combine default domains with user-specified domains
+    all_domains = default_domains + list(domains)
+
+    # Check for conflict between network and domains
+    if all_domains and all_networks:
+        console.print(
+            "[yellow]Warning: --domains cannot be used with --network. Network restrictions will take precedence.[/yellow]"
+        )
+
    # Get default MCPs from user config if none specified
    all_mcps = mcp if isinstance(mcp, list) else []
    if not all_mcps:
-        default_mcps = user_config.get("defaults.mcps", [])
+        default_mcps = temp_user_config.get("defaults.mcps", [])
        all_mcps = default_mcps

        if default_mcps:

@@ -259,6 +331,9 @@ def create_session(
    if all_networks:
        console.print(f"Networks: {', '.join(all_networks)}")

+    if all_domains:
+        console.print(f"Domain restrictions: {', '.join(all_domains)}")
+
    # Show volumes that will be mounted
    if volume_mounts:
        console.print("Volumes:")

@@ -278,6 +353,16 @@ def create_session(
            "[yellow]Warning: --no-shell is ignored without --run[/yellow]"
        )

+    # Use model and provider from config overrides if not explicitly provided
+    final_model = (
+        model if model is not None else temp_user_config.get("defaults.model")
+    )
+    final_provider = (
+        provider
+        if provider is not None
+        else temp_user_config.get("defaults.provider")
+    )
+
    session = container_manager.create_session(
        image_name=image_name,
        project=path_or_url,

@@ -293,8 +378,9 @@ def create_session(
        uid=target_uid,
        gid=target_gid,
        ssh=ssh,
-        model=model,
-        provider=provider,
+        model=final_model,
+        provider=final_provider,
+        domains=all_domains,
    )

    if session:

@@ -308,7 +394,7 @@ def create_session(
            console.print(f" {container_port} -> {host_port}")

        # Auto-connect based on user config, unless overridden by --no-connect flag or --no-shell
-        auto_connect = user_config.get("defaults.connect", True)
+        auto_connect = temp_user_config.get("defaults.connect", True)

        # When --no-shell is used with --run, show logs instead of connecting
        if no_shell and run_command:

@@ -510,9 +596,60 @@ def build_image(
    # Build image name
    docker_image_name = f"monadical/cubbi-{image_name}:{tag}"

-    # Build the image
-    with console.status(f"Building image {docker_image_name}..."):
-        result = os.system(f"cd {image_path} && docker build -t {docker_image_name} .")
+    # Create temporary build directory
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+        console.print(f"Using temporary build directory: {temp_path}")
+
+        try:
+            # Copy all files from the image directory to temp directory
+            for item in image_path.iterdir():
+                if item.is_file():
+                    shutil.copy2(item, temp_path / item.name)
+                elif item.is_dir():
+                    shutil.copytree(item, temp_path / item.name)
+
+            # Copy shared cubbi_init.py to temp directory
+            shared_init_path = Path(__file__).parent / "images" / "cubbi_init.py"
+            if shared_init_path.exists():
+                shutil.copy2(shared_init_path, temp_path / "cubbi_init.py")
+                console.print("Copied shared cubbi_init.py to build context")
+            else:
+                console.print(
+                    f"[yellow]Warning: Shared cubbi_init.py not found at {shared_init_path}[/yellow]"
+                )
+
+            # Copy shared init-status.sh to temp directory
+            shared_status_path = Path(__file__).parent / "images" / "init-status.sh"
+            if shared_status_path.exists():
+                shutil.copy2(shared_status_path, temp_path / "init-status.sh")
+                console.print("Copied shared init-status.sh to build context")
+            else:
+                console.print(
+                    f"[yellow]Warning: Shared init-status.sh not found at {shared_status_path}[/yellow]"
+                )
+
+            # Copy image-specific plugin if it exists
+            plugin_path = image_path / f"{image_name.lower()}_plugin.py"
+            if plugin_path.exists():
+                shutil.copy2(plugin_path, temp_path / f"{image_name.lower()}_plugin.py")
+                console.print(f"Copied {image_name.lower()}_plugin.py to build context")
+
+            # Copy init-status.sh if it exists (for backward compatibility with shell connection)
+            init_status_path = image_path / "init-status.sh"
+            if init_status_path.exists():
+                shutil.copy2(init_status_path, temp_path / "init-status.sh")
+                console.print("Copied init-status.sh to build context")
+
+            # Build the image from temporary directory
+            with console.status(f"Building image {docker_image_name}..."):
+                result = os.system(
+                    f"cd {temp_path} && docker build -t {docker_image_name} ."
+                )
+
+        except Exception as e:
+            console.print(f"[red]Error preparing build context: {e}[/red]")
+            return

    if result != 0:
        console.print("[red]Failed to build image[/red]")

@@ -1061,9 +1198,7 @@ def mcp_status(name: str = typer.Argument(..., help="MCP server name")) -> None:
def start_mcp(
    name: Optional[str] = typer.Argument(None, help="MCP server name"),
    all_servers: bool = typer.Option(False, "--all", help="Start all MCP servers"),
-    verbose: bool = typer.Option(
-        False, "--verbose", "-v", help="Enable verbose logging"
-    ),
+    verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
    """Start an MCP server or all servers"""
    # Set log level based on verbose flag

@@ -1458,6 +1593,11 @@ def add_mcp(
def add_remote_mcp(
    name: str = typer.Argument(..., help="MCP server name"),
    url: str = typer.Argument(..., help="URL of the remote MCP server"),
+    mcp_type: str = typer.Option(
+        "auto",
+        "--mcp-type",
+        help="MCP connection type: sse, streamable_http, stdio, or auto (default: auto)",
+    ),
    header: List[str] = typer.Option(
        [], "--header", "-H", help="HTTP headers (format: KEY=VALUE)"
    ),

@@ -1466,6 +1606,22 @@ def add_remote_mcp(
    ),
) -> None:
    """Add a remote MCP server"""
+    if mcp_type == "auto":
+        if url.endswith("/sse"):
+            mcp_type = "sse"
+        elif url.endswith("/mcp"):
+            mcp_type = "streamable_http"
+        else:
+            console.print(
+                f"[red]Cannot auto-detect MCP type from URL '{url}'. Please specify --mcp-type (sse, streamable_http, or stdio)[/red]"
+            )
+            return
+    elif mcp_type not in ["sse", "streamable_http", "stdio"]:
+        console.print(
+            f"[red]Invalid MCP type '{mcp_type}'. Must be: sse, streamable_http, stdio, or auto[/red]"
+        )
+        return
+
    # Parse headers
    headers = {}
    for h in header:

@@ -1480,7 +1636,7 @@ def add_remote_mcp(
    try:
        with console.status(f"Adding remote MCP server '{name}'..."):
            mcp_manager.add_remote_mcp(
-                name, url, headers, add_as_default=not no_default
+                name, url, headers, mcp_type=mcp_type, add_as_default=not no_default
            )

        console.print(f"[green]Added remote MCP server '{name}'[/green]")
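Auto-detection only inspects the URL suffix, so URLs that do not end in `/sse` or `/mcp` need an explicit `--mcp-type`. A hedged usage sketch; the `cubbi mcp add-remote` spelling is an assumption based on the `add_remote_mcp` function name above, so confirm the actual subcommand with `cubbi mcp --help`:

```bash
# Subcommand name assumed from the add_remote_mcp function in cli.py.
cubbi mcp add-remote docs https://mcp.example.com/sse               # auto -> sse
cubbi mcp add-remote tools https://mcp.example.com/mcp              # auto -> streamable_http
cubbi mcp add-remote custom https://mcp.example.com/api --mcp-type streamable_http
```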
@@ -64,6 +64,7 @@ class ConfigManager:
            },
            defaults={
                "image": "goose",
+                "domains": [],
            },
        )

@@ -108,7 +109,7 @@ class ConfigManager:
    def load_image_from_dir(self, image_dir: Path) -> Optional[Image]:
        """Load an image configuration from a directory"""
        # Check for image config file
-        yaml_path = image_dir / "cubbi-image.yaml"
+        yaml_path = image_dir / "cubbi_image.yaml"
        if not yaml_path.exists():
            return None

@@ -150,7 +151,7 @@ class ConfigManager:
        if not BUILTIN_IMAGES_DIR.exists():
            return images

-        # Search for cubbi-image.yaml files in each subdirectory
+        # Search for cubbi_image.yaml files in each subdirectory
        for image_dir in BUILTIN_IMAGES_DIR.iterdir():
            if image_dir.is_dir():
                image = self.load_image_from_dir(image_dir)
@@ -12,7 +12,7 @@ from docker.errors import DockerException, ImageNotFound

from .config import ConfigManager
from .mcp import MCPManager
-from .models import Session, SessionStatus
+from .models import Image, Session, SessionStatus
from .session import SessionManager
from .user_config import UserConfigManager

@@ -107,12 +107,21 @@ class ContainerManager:
            elif container.status == "created":
                status = SessionStatus.CREATING

+            # Get MCP list from container labels
+            mcps_str = labels.get("cubbi.mcps", "")
+            mcps = (
+                [mcp.strip() for mcp in mcps_str.split(",") if mcp.strip()]
+                if mcps_str
+                else []
+            )
+
            session = Session(
                id=session_id,
                name=labels.get("cubbi.session.name", f"cubbi-{session_id}"),
                image=labels.get("cubbi.image", "unknown"),
                status=status,
                container_id=container_id,
+                mcps=mcps,
            )

            # Get port mappings

@@ -153,6 +162,7 @@ class ContainerManager:
        model: Optional[str] = None,
        provider: Optional[str] = None,
        ssh: bool = False,
+        domains: Optional[List[str]] = None,
    ) -> Optional[Session]:
        """Create a new Cubbi session

@@ -173,13 +183,26 @@ class ContainerManager:
            model: Optional model to use
            provider: Optional provider to use
            ssh: Whether to start the SSH server in the container (default: False)
+            domains: Optional list of domains to restrict network access to (uses network-filter)
        """
        try:
-            # Validate image exists
+            # Try to get image from config first
            image = self.config_manager.get_image(image_name)
            if not image:
-                print(f"Image '{image_name}' not found")
-                return None
+                # If not found in config, treat it as a Docker image name
+                print(
+                    f"Image '{image_name}' not found in Cubbi config, using as Docker image..."
+                )
+                image = Image(
+                    name=image_name,
+                    description=f"Docker image: {image_name}",
+                    version="latest",
+                    maintainer="unknown",
+                    image=image_name,
+                    ports=[],
+                    volumes=[],
+                    persistent_configs=[],
+                )

            # Generate session ID and name
            session_id = self._generate_session_id()
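With the fallback above, an image name that is not defined under `cubbi/images/` is handed to Docker as-is, wrapped in a synthetic `Image` record. A hedged example; the specific tag is arbitrary, and images without the Cubbi init tooling may not receive the usual initialization:

```bash
# Any plain Docker image name should be accepted; python:3.12-slim is only an example.
cubbix --image python:3.12-slim
```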
@@ -199,17 +222,20 @@ class ContainerManager:
            # Set SSH environment variable
            env_vars["CUBBI_SSH_ENABLED"] = "true" if ssh else "false"

-            # Pass API keys from host environment to container for local development
-            api_keys = [
+            # Pass some environment from host environment to container for local development
+            keys = [
                "OPENAI_API_KEY",
+                "OPENAI_URL",
                "ANTHROPIC_API_KEY",
+                "ANTHROPIC_AUTH_TOKEN",
+                "ANTHROPIC_CUSTOM_HEADERS",
                "OPENROUTER_API_KEY",
                "GOOGLE_API_KEY",
                "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
                "LANGFUSE_INIT_PROJECT_SECRET_KEY",
                "LANGFUSE_URL",
            ]
-            for key in api_keys:
+            for key in keys:
                if key in os.environ and key not in env_vars:
                    env_vars[key] = os.environ[key]

@@ -431,7 +457,7 @@ class ContainerManager:
                    )

                    # Set type-specific information
-                    env_vars[f"MCP_{idx}_TYPE"] = "remote"
+                    env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("mcp_type", "sse")
                    env_vars[f"MCP_{idx}_NAME"] = mcp_name

            # Set environment variables for MCP count if we have any

@@ -505,17 +531,99 @@ class ContainerManager:
                "defaults.provider", ""
            )

+            # Handle network-filter if domains are specified
+            network_filter_container = None
+            network_mode = None
+
+            if domains:
+                # Check for conflicts
+                if networks:
+                    print(
+                        "[yellow]Warning: Cannot use --domains with --network. Using domain restrictions only.[/yellow]"
+                    )
+                    networks = []
+                    network_list = [default_network]
+
+                # Create network-filter container
+                network_filter_name = f"cubbi-network-filter-{session_id}"
+
+                # Pull network-filter image if needed
+                network_filter_image = "monadicalsas/network-filter:latest"
+                try:
+                    self.client.images.get(network_filter_image)
+                except ImageNotFound:
+                    print(f"Pulling network-filter image {network_filter_image}...")
+                    self.client.images.pull(network_filter_image)
+
+                # Create and start network-filter container
+                print("Creating network-filter container for domain restrictions...")
+                try:
+                    # First check if a network-filter container already exists with this name
+                    try:
+                        existing = self.client.containers.get(network_filter_name)
+                        print(
+                            f"Removing existing network-filter container {network_filter_name}"
+                        )
+                        existing.stop()
+                        existing.remove()
+                    except DockerException:
+                        pass  # Container doesn't exist, which is fine
+
+                    network_filter_container = self.client.containers.run(
+                        image=network_filter_image,
+                        name=network_filter_name,
+                        hostname=network_filter_name,
+                        detach=True,
+                        environment={"ALLOWED_DOMAINS": ",".join(domains)},
+                        labels={
+                            "cubbi.network-filter": "true",
+                            "cubbi.session.id": session_id,
+                            "cubbi.session.name": session_name,
+                        },
+                        cap_add=["NET_ADMIN"],  # Required for iptables
+                        remove=False,  # Don't auto-remove on stop
+                    )
+
+                    # Wait for container to be running
+                    import time
+
+                    for i in range(10):  # Wait up to 10 seconds
+                        network_filter_container.reload()
+                        if network_filter_container.status == "running":
+                            break
+                        time.sleep(1)
+                    else:
+                        raise Exception(
+                            f"Network-filter container failed to start. Status: {network_filter_container.status}"
+                        )
+
+                    # Use container ID instead of name for network_mode
+                    network_mode = f"container:{network_filter_container.id}"
+                    print(
+                        f"Network restrictions enabled for domains: {', '.join(domains)}"
+                    )
+                    print(f"Using network mode: {network_mode}")
+
+                except Exception as e:
+                    print(f"[red]Error creating network-filter container: {e}[/red]")
+                    raise
+
+                # Warn about MCP limitations when using network-filter
+                if mcp_names:
+                    print(
+                        "[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]"
+                    )
+
            # Create container
-            container = self.client.containers.create(
-                image=image.image,
-                name=session_name,
-                hostname=session_name,
-                detach=True,
-                tty=True,
-                stdin_open=True,
-                environment=env_vars,
-                volumes=session_volumes,
-                labels={
+            container_params = {
+                "image": image.image,
+                "name": session_name,
+                "detach": True,
+                "tty": True,
+                "stdin_open": True,
+                "environment": env_vars,
+                "volumes": session_volumes,
+                "labels": {
                    "cubbi.session": "true",
                    "cubbi.session.id": session_id,
                    "cubbi.session.name": session_name,

@@ -524,17 +632,29 @@ class ContainerManager:
                    "cubbi.project_name": project_name or "",
                    "cubbi.mcps": ",".join(mcp_names) if mcp_names else "",
                },
-                network=network_list[0],  # Connect to the first network initially
-                command=container_command,  # Set the command
-                entrypoint=entrypoint,  # Set the entrypoint (might be None)
-                ports={f"{port}/tcp": None for port in image.ports},
-            )
+                "command": container_command,  # Set the command
+                "entrypoint": entrypoint,  # Set the entrypoint (might be None)
+                "ports": {f"{port}/tcp": None for port in image.ports},
+            }
+
+            # Use network_mode if domains are specified, otherwise use regular network
+            if network_mode:
+                container_params["network_mode"] = network_mode
+                # Cannot set hostname when using network_mode
+            else:
+                container_params["hostname"] = session_name
+                container_params["network"] = network_list[
+                    0
+                ]  # Connect to the first network initially
+
+            container = self.client.containers.create(**container_params)

            # Start container
            container.start()

            # Connect to additional networks (after the first one in network_list)
-            if len(network_list) > 1:
+            # Note: Cannot connect to networks when using network_mode
+            if len(network_list) > 1 and not network_mode:
                for network_name in network_list[1:]:
                    try:
                        # Get or create the network
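The domain-restriction path above reuses the network-filter container's network namespace via `network_mode=container:<id>`, which is why the session container cannot set its own hostname or join additional networks. Outside Cubbi, a rough plain-Docker sketch of the same idea looks like this; the image name and `ALLOWED_DOMAINS` variable come from the hunk above, while the container name, domain list, and session image tag are illustrative assumptions:

```bash
# Roughly what the Docker SDK calls above amount to (illustrative only).
docker run -d --name cubbi-network-filter-demo \
  --cap-add NET_ADMIN \
  -e ALLOWED_DOMAINS="github.com,api.example.com:443" \
  monadicalsas/network-filter:latest

# The session container then shares the filter container's network stack.
docker run -it --network container:cubbi-network-filter-demo monadical/cubbi-goose:latest
```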
@@ -548,9 +668,6 @@ class ContainerManager:

                        # Connect the container to the network with session name as an alias
                        network.connect(container, aliases=[session_name])
-                        print(
-                            f"Connected to network: {network_name} with alias: {session_name}"
-                        )
                    except DockerException as e:
                        print(f"Error connecting to network {network_name}: {e}")

@@ -558,29 +675,35 @@ class ContainerManager:
            container.reload()

            # Connect directly to each MCP's dedicated network
-            for mcp_name in mcp_names:
-                try:
-                    # Get the dedicated network for this MCP
-                    dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
-
-                    try:
-                        network = self.client.networks.get(dedicated_network_name)
-
-                        # Connect the session container to the MCP's dedicated network
-                        network.connect(container, aliases=[session_name])
-                        print(
-                            f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
-                        )
-                    except DockerException as e:
-                        print(
-                            f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
-                        )
-
-                except Exception as e:
-                    print(f"Error connecting session to MCP '{mcp_name}': {e}")
+            # Note: Cannot connect to networks when using network_mode
+            if not network_mode:
+                for mcp_name in mcp_names:
+                    try:
+                        # Get the dedicated network for this MCP
+                        dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
+
+                        try:
+                            network = self.client.networks.get(dedicated_network_name)
+
+                            # Connect the session container to the MCP's dedicated network
+                            network.connect(container, aliases=[session_name])
+                            print(
+                                f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
+                            )
+                        except DockerException:
+                            # print(
+                            #     f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
+                            # )
+                            # commented out, may be accessible through another attached network, it's
+                            # not mandatory here.
+                            pass
+
+                    except Exception as e:
+                        print(f"Error connecting session to MCP '{mcp_name}': {e}")

            # Connect to additional user-specified networks
-            if networks:
+            # Note: Cannot connect to networks when using network_mode
+            if networks and not network_mode:
                for network_name in networks:
                    # Check if already connected to this network
                    # NetworkSettings.Networks contains a dict where keys are network names

@@ -604,9 +727,6 @@ class ContainerManager:

                        # Connect the container to the network with session name as an alias
                        network.connect(container, aliases=[session_name])
-                        print(
-                            f"Connected to network: {network_name} with alias: {session_name}"
-                        )
                    except DockerException as e:
                        print(f"Error connecting to network {network_name}: {e}")

@@ -642,6 +762,15 @@ class ContainerManager:

        except DockerException as e:
            print(f"Error creating session: {e}")

+            # Clean up network-filter container if it was created
+            if network_filter_container:
+                try:
+                    network_filter_container.stop()
+                    network_filter_container.remove()
+                except Exception:
+                    pass
+
            return None

    def close_session(self, session_id: str) -> bool:

@@ -740,9 +869,24 @@ class ContainerManager:
            return False

        try:
+            # First, close the main session container
            container = self.client.containers.get(session.container_id)
            container.stop()
            container.remove()
+
+            # Check for and close any associated network-filter container
+            network_filter_name = f"cubbi-network-filter-{session.id}"
+            try:
+                network_filter_container = self.client.containers.get(
+                    network_filter_name
+                )
+                logger.info(f"Stopping network-filter container {network_filter_name}")
+                network_filter_container.stop()
+                network_filter_container.remove()
+            except DockerException:
+                # Network-filter container might not exist, which is fine
+                pass
+
            self.session_manager.remove_session(session.id)
            return True
        except DockerException as e:

@@ -776,6 +920,19 @@ class ContainerManager:
                # Stop and remove container
                container.stop()
                container.remove()
+
+                # Check for and close any associated network-filter container
+                network_filter_name = f"cubbi-network-filter-{session.id}"
+                try:
+                    network_filter_container = self.client.containers.get(
+                        network_filter_name
+                    )
+                    network_filter_container.stop()
+                    network_filter_container.remove()
+                except DockerException:
+                    # Network-filter container might not exist, which is fine
+                    pass
+
                # Remove from session storage
                self.session_manager.remove_session(session.id)

@@ -1,3 +0,0 @@
-"""
-MAI container image management
-"""
68 cubbi/images/aider/Dockerfile (new file)

@@ -0,0 +1,68 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Aider AI pair programming for Cubbi"

# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install uv (Python package manager)
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Aider using pip in system Python (more compatible with user switching)
RUN python -m pip install aider-chat

# Make sure aider is in PATH
ENV PATH="/root/.local/bin:$PATH"

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY aider_plugin.py /cubbi/aider_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh

# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh

# Add aider to PATH in bashrc and init status check
RUN echo 'PATH="/root/.local/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
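The Dockerfile above is meant to be built through the Cubbi CLI rather than with `docker build` directly, since the `build_image` changes earlier in this diff copy the shared `cubbi_init.py` and `init-status.sh` into the build context first. A short usage sketch based on the commands that appear in this PR's CI workflow and the README below:

```bash
# Build the aider image, then start a session using it.
cubbi image build aider
cubbi session create --image aider /path/to/your/project
```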
277
cubbi/images/aider/README.md
Normal file
277
cubbi/images/aider/README.md
Normal file
@@ -0,0 +1,277 @@
|
|||||||
|
# Aider for Cubbi
|
||||||
|
|
||||||
|
This image provides Aider (AI pair programming) in a Cubbi container environment.
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Aider is an AI pair programming tool that works in your terminal. This Cubbi image integrates Aider with secure API key management, persistent configuration, and support for multiple LLM providers.
|
||||||
|
|
||||||
|
## Features
|
||||||
|
|
||||||
|
- **Multiple LLM Support**: Works with OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter, and more
|
||||||
|
- **Secure Authentication**: API key management through Cubbi's secure environment system
|
||||||
|
- **Persistent Configuration**: Settings and history preserved across container restarts
|
||||||
|
- **Git Integration**: Automatic commits and git awareness
|
||||||
|
- **Multi-Language Support**: Works with 100+ programming languages
|
||||||
|
|
||||||
|
## Quick Start
|
||||||
|
|
||||||
|
### 1. Set up API Key
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# For OpenAI (GPT models)
|
||||||
|
uv run -m cubbi.cli config set services.openai.api_key "your-openai-key"
|
||||||
|
|
||||||
|
# For Anthropic (Claude models)
|
||||||
|
uv run -m cubbi.cli config set services.anthropic.api_key "your-anthropic-key"
|
||||||
|
|
||||||
|
# For DeepSeek (recommended for cost-effectiveness)
|
||||||
|
uv run -m cubbi.cli config set services.deepseek.api_key "your-deepseek-key"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Run Aider Environment
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Start Aider container with your project
|
||||||
|
uv run -m cubbi.cli session create --image aider /path/to/your/project
|
||||||
|
|
||||||
|
# Or without a project
|
||||||
|
uv run -m cubbi.cli session create --image aider
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Use Aider
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Basic usage
|
||||||
|
aider
|
||||||
|
|
||||||
|
# With specific model
|
||||||
|
aider --model sonnet
|
||||||
|
|
||||||
|
# With specific files
|
||||||
|
aider main.py utils.py
|
||||||
|
|
||||||
|
# One-shot request
|
||||||
|
aider --message "Add error handling to the login function"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Configuration
|
||||||
|
|
||||||
|
### Supported API Keys
|
||||||
|
|
||||||
|
- `OPENAI_API_KEY`: OpenAI GPT models (GPT-4, GPT-4o, etc.)
|
||||||
|
- `ANTHROPIC_API_KEY`: Anthropic Claude models (Sonnet, Haiku, etc.)
|
||||||
|
- `DEEPSEEK_API_KEY`: DeepSeek models (cost-effective option)
|
||||||
|
- `GEMINI_API_KEY`: Google Gemini models
|
||||||
|
- `OPENROUTER_API_KEY`: OpenRouter (access to many models)
|
||||||
|
|
||||||
|
### Additional Configuration
|
||||||
|
|
||||||
|
- `AIDER_MODEL`: Default model to use (e.g., "sonnet", "o3-mini", "deepseek")
|
||||||
|
- `AIDER_AUTO_COMMITS`: Enable automatic git commits (default: true)
|
||||||
|
- `AIDER_DARK_MODE`: Enable dark mode interface (default: false)
|
||||||
|
- `AIDER_API_KEYS`: Additional API keys in format "provider1=key1,provider2=key2"
|
||||||
|
|
||||||
|
### Network Configuration
|
||||||
|
|
||||||
|
- `HTTP_PROXY`: HTTP proxy server URL
|
||||||
|
- `HTTPS_PROXY`: HTTPS proxy server URL
|
||||||
|
|
||||||
|
## Usage Examples
|
||||||
|
|
||||||
|
### Basic AI Pair Programming
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Start Aider with your project
|
||||||
|
uv run -m cubbi.cli session create --image aider /path/to/project
|
||||||
|
|
||||||
|
# Inside the container:
|
||||||
|
aider # Start interactive session
|
||||||
|
aider main.py # Work on specific file
|
||||||
|
aider --message "Add tests" # One-shot request
|
||||||
|
```
|
||||||
|
|
||||||
|
### Model Selection
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Use Claude Sonnet
|
||||||
|
aider --model sonnet
|
||||||
|
|
||||||
|
# Use GPT-4o
|
||||||
|
aider --model gpt-4o
|
||||||
|
|
||||||
|
# Use DeepSeek (cost-effective)
|
||||||
|
aider --model deepseek
|
||||||
|
|
||||||
|
# Use OpenRouter
|
||||||
|
aider --model openrouter/anthropic/claude-3.5-sonnet
|
||||||
|
```
|
||||||
|
|
||||||
|
### Advanced Features
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Work with multiple files
|
||||||
|
aider src/main.py tests/test_main.py
|
||||||
|
|
||||||
|
# Auto-commit changes
|
||||||
|
aider --auto-commits
|
||||||
|
|
||||||
|
# Read-only mode (won't edit files)
|
||||||
|
aider --read
|
||||||
|
|
||||||
|
# Apply a specific change
aider --message "Refactor the database connection code to use connection pooling"
```

### Enterprise/Proxy Setup

```bash
# With proxy
uv run -m cubbi.cli session create --image aider \
  --env HTTPS_PROXY="https://proxy.company.com:8080" \
  /path/to/project

# With custom model
uv run -m cubbi.cli session create --image aider \
  --env AIDER_MODEL="sonnet" \
  /path/to/project
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.aider/`: Aider configuration and chat history
- `~/.cache/aider/`: Model cache and temporary files

Configuration files are maintained across container restarts, ensuring your preferences and chat history are preserved.
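
As a quick check that persistence is wired up, you can start a throwaway session and list `~/.aider` from it; this sketch simply reuses the `--no-shell`/`--run` flags that the bundled test script relies on:

```bash
# Start a throwaway session and list the persisted Aider config (illustrative check)
uv run -m cubbi.cli session create --image aider --no-shell --run "ls -la ~/.aider"
```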

## Model Recommendations

### Best Overall Performance
- **Claude 3.5 Sonnet**: Excellent code understanding and generation
- **OpenAI GPT-4o**: Strong performance across languages
- **Gemini 2.5 Pro**: Good balance of quality and speed

### Cost-Effective Options
- **DeepSeek V3**: Very cost-effective, good quality
- **OpenRouter**: Access to multiple models with competitive pricing

### Free Options
- **Gemini 2.5 Pro Exp**: Free tier available
- **OpenRouter**: Some free models available

## File Structure

```
cubbi/images/aider/
├── Dockerfile         # Container image definition
├── cubbi_image.yaml   # Cubbi image configuration
├── aider_plugin.py    # Authentication and setup plugin
└── README.md          # This documentation
```

## Authentication Flow

1. **Environment Variables**: API keys passed from Cubbi configuration
2. **Plugin Setup**: `aider_plugin.py` creates environment configuration
3. **Environment File**: Creates `~/.aider/.env` with API keys
4. **Ready**: Aider is ready for use with configured authentication
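
For reference, the generated `~/.aider/.env` is a plain `KEY=value` file. A minimal sketch with placeholder values (the real file contains whichever keys and defaults the plugin detected):

```
OPENAI_API_KEY=sk-...your-key...
AIDER_MODEL=sonnet
AIDER_AUTO_COMMITS=true
AIDER_DARK_MODE=false
```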

## Troubleshooting

### Common Issues

**No API Key Found**
```
ℹ️ No API keys found - Aider will run without pre-configuration
```
**Solution**: Set API key in Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.openai.api_key "your-key"
```

**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Check available models for your provider:
```bash
aider --models   # List available models
```

**Git Issues**
```
Git repository not found
```
**Solution**: Initialize git in your project or mount a git repository:
```bash
git init
# or
uv run -m cubbi.cli session create --image aider /path/to/git/project
```

**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```

### Debug Mode

```bash
# Check Aider version
aider --version

# List available models
aider --models

# Check configuration
cat ~/.aider/.env

# Verbose output
aider --verbose
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Git Integration**: Respects .gitignore and git configurations
- **Code Safety**: Always review changes before accepting

## Advanced Configuration

### Custom Model Configuration

```bash
# Use with custom API endpoint
uv run -m cubbi.cli session create --image aider \
  --env OPENAI_API_BASE="https://api.custom-provider.com/v1" \
  --env OPENAI_API_KEY="your-key"
```

### Multiple API Keys

```bash
# Configure multiple providers
uv run -m cubbi.cli session create --image aider \
  --env OPENAI_API_KEY="openai-key" \
  --env ANTHROPIC_API_KEY="anthropic-key" \
  --env AIDER_API_KEYS="provider1=key1,provider2=key2"
```

## Support

For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Aider Functionality**: Visit [Aider documentation](https://aider.chat/)
- **Model Configuration**: Check [LLM documentation](https://aider.chat/docs/llms.html)
- **API Keys**: Visit provider documentation (OpenAI, Anthropic, etc.)

## License

This image configuration is provided under the same license as the Cubbi project. Aider is licensed separately under Apache 2.0.
192 cubbi/images/aider/aider_plugin.py Executable file
@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Aider Plugin for Cubbi
Handles authentication setup and configuration for Aider AI pair programming
"""

import os
import stat
from pathlib import Path
from typing import Any, Dict

from cubbi_init import ToolPlugin


class AiderPlugin(ToolPlugin):
    """Plugin for setting up Aider authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "aider"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_aider_config_dir(self) -> Path:
        """Get the Aider configuration directory"""
        return Path("/home/cubbi/.aider")

    def _get_aider_cache_dir(self) -> Path:
        """Get the Aider cache directory"""
        return Path("/home/cubbi/.cache/aider")

    def _ensure_aider_dirs(self) -> tuple[Path, Path]:
        """Ensure Aider directories exist with correct ownership"""
        config_dir = self._get_aider_config_dir()
        cache_dir = self._get_aider_cache_dir()

        # Create directories
        for directory in [config_dir, cache_dir]:
            try:
                directory.mkdir(mode=0o755, parents=True, exist_ok=True)
                self._set_ownership(directory)
            except OSError as e:
                self.status.log(
                    f"Failed to create Aider directory {directory}: {e}", "ERROR"
                )

        return config_dir, cache_dir

    def initialize(self) -> bool:
        """Initialize Aider configuration"""
        self.status.log("Setting up Aider configuration...")

        # Ensure Aider directories exist
        config_dir, cache_dir = self._ensure_aider_dirs()

        # Set up environment variables for the session
        env_vars = self._create_environment_config()

        # Create .env file if we have API keys
        if env_vars:
            env_file = config_dir / ".env"
            success = self._write_env_file(env_file, env_vars)
            if success:
                self.status.log("✅ Aider environment configured successfully")
            else:
                self.status.log("⚠️ Failed to write Aider environment file", "WARNING")
        else:
            self.status.log(
                "ℹ️ No API keys found - Aider will run without pre-configuration", "INFO"
            )
            self.status.log(
                " You can configure API keys later using environment variables",
                "INFO",
            )

        # Always return True to allow container to start
        return True

    def _create_environment_config(self) -> Dict[str, str]:
        """Create environment variable configuration for Aider"""
        env_vars = {}

        # Map environment variables to Aider configuration
        api_key_mappings = {
            "OPENAI_API_KEY": "OPENAI_API_KEY",
            "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY",
            "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
            "GEMINI_API_KEY": "GEMINI_API_KEY",
            "OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
        }

        # Check for OpenAI API base URL
        openai_url = os.environ.get("OPENAI_URL")
        if openai_url:
            env_vars["OPENAI_API_BASE"] = openai_url
            self.status.log(f"Set OpenAI API base URL to {openai_url}")

        # Check for standard API keys
        for env_var, aider_var in api_key_mappings.items():
            value = os.environ.get(env_var)
            if value:
                env_vars[aider_var] = value
                provider = env_var.replace("_API_KEY", "").lower()
                self.status.log(f"Added {provider} API key")

        # Handle additional API keys from AIDER_API_KEYS
        additional_keys = os.environ.get("AIDER_API_KEYS")
        if additional_keys:
            try:
                # Parse format: "provider1=key1,provider2=key2"
                for pair in additional_keys.split(","):
                    if "=" in pair:
                        provider, key = pair.strip().split("=", 1)
                        env_var_name = f"{provider.upper()}_API_KEY"
                        env_vars[env_var_name] = key
                        self.status.log(f"Added {provider} API key from AIDER_API_KEYS")
            except Exception as e:
                self.status.log(f"Failed to parse AIDER_API_KEYS: {e}", "WARNING")

        # Add model configuration
        model = os.environ.get("AIDER_MODEL")
        if model:
            env_vars["AIDER_MODEL"] = model
            self.status.log(f"Set default model to {model}")

        # Add git configuration
        auto_commits = os.environ.get("AIDER_AUTO_COMMITS", "true")
        if auto_commits.lower() in ["true", "false"]:
            env_vars["AIDER_AUTO_COMMITS"] = auto_commits

        # Add dark mode setting
        dark_mode = os.environ.get("AIDER_DARK_MODE", "false")
        if dark_mode.lower() in ["true", "false"]:
            env_vars["AIDER_DARK_MODE"] = dark_mode

        # Add proxy settings
        for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
            value = os.environ.get(proxy_var)
            if value:
                env_vars[proxy_var] = value
                self.status.log(f"Added proxy configuration: {proxy_var}")

        return env_vars

    def _write_env_file(self, env_file: Path, env_vars: Dict[str, str]) -> bool:
        """Write environment variables to .env file"""
        try:
            content = "\n".join(f"{key}={value}" for key, value in env_vars.items())

            with open(env_file, "w") as f:
                f.write(content)
                f.write("\n")

            # Set ownership and secure file permissions (read/write for owner only)
            self._set_ownership(env_file)
            os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Aider environment file at {env_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Aider environment file: {e}", "ERROR")
            return False

    def setup_tool_configuration(self) -> bool:
        """Set up Aider configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Aider with available MCP servers if applicable"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Aider doesn't have native MCP support like Claude Code,
        # but we could potentially add custom integrations here
        self.status.log(
            f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
        )
        return True
88 cubbi/images/aider/cubbi_image.yaml Normal file
@@ -0,0 +1,88 @@
name: aider
description: Aider AI pair programming environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-aider:latest

init:
  pre_command: /cubbi-init.sh
  command: /entrypoint.sh

environment:
  # OpenAI Configuration
  - name: OPENAI_API_KEY
    description: OpenAI API key for GPT models
    required: false
    sensitive: true

  # Anthropic Configuration
  - name: ANTHROPIC_API_KEY
    description: Anthropic API key for Claude models
    required: false
    sensitive: true

  # DeepSeek Configuration
  - name: DEEPSEEK_API_KEY
    description: DeepSeek API key for DeepSeek models
    required: false
    sensitive: true

  # Gemini Configuration
  - name: GEMINI_API_KEY
    description: Google Gemini API key
    required: false
    sensitive: true

  # OpenRouter Configuration
  - name: OPENROUTER_API_KEY
    description: OpenRouter API key for various models
    required: false
    sensitive: true

  # Generic provider API keys
  - name: AIDER_API_KEYS
    description: Additional API keys in format "provider1=key1,provider2=key2"
    required: false
    sensitive: true

  # Model Configuration
  - name: AIDER_MODEL
    description: Default model to use (e.g., sonnet, o3-mini, deepseek)
    required: false

  # Git Configuration
  - name: AIDER_AUTO_COMMITS
    description: Enable automatic commits (true/false)
    required: false
    default: "true"

  - name: AIDER_DARK_MODE
    description: Enable dark mode (true/false)
    required: false
    default: "false"

  # Proxy Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.aider"
    target: "/cubbi-config/aider-settings"
    type: "directory"
    description: "Aider configuration and history"

  - source: "/home/cubbi/.cache/aider"
    target: "/cubbi-config/aider-cache"
    type: "directory"
    description: "Aider cache directory"
274 cubbi/images/aider/test_aider.py Executable file
@@ -0,0 +1,274 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Aider Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""

import subprocess
import sys
import tempfile
import re


def run_command(cmd, description="", check=True):
    """Run a shell command and return result"""
    print(f"\n🔍 {description}")
    print(f"Running: {cmd}")

    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, check=check
        )

        if result.stdout:
            print("STDOUT:")
            print(result.stdout)

        if result.stderr:
            print("STDERR:")
            print(result.stderr)

        return result
    except subprocess.CalledProcessError as e:
        print(f"❌ Command failed with exit code {e.returncode}")
        if e.stdout:
            print("STDOUT:")
            print(e.stdout)
        if e.stderr:
            print("STDERR:")
            print(e.stderr)
        if check:
            raise
        return e


def test_docker_image_exists():
    """Test if the Aider Docker image exists"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Existence")
    print("=" * 60)

    result = run_command(
        "docker images monadical/cubbi-aider:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
        "Checking if Aider Docker image exists",
    )

    if "monadical/cubbi-aider" in result.stdout:
        print("✅ Aider Docker image exists")
    else:
        print("❌ Aider Docker image not found")
        assert False, "Aider Docker image not found"


def test_aider_version():
    """Test basic Aider functionality in container"""
    print("\n" + "=" * 60)
    print("🧪 Testing Aider Version")
    print("=" * 60)

    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'aider --version'",
        "Testing Aider version command",
    )

    assert (
        "aider" in result.stdout and result.returncode == 0
    ), "Aider version command failed"
    print("✅ Aider version command works")


def test_api_key_configuration():
    """Test API key configuration and environment setup"""
    print("\n" + "=" * 60)
    print("🧪 Testing API Key Configuration")
    print("=" * 60)

    # Test with multiple API keys
    test_keys = {
        "OPENAI_API_KEY": "test-openai-key",
        "ANTHROPIC_API_KEY": "test-anthropic-key",
        "DEEPSEEK_API_KEY": "test-deepseek-key",
        "GEMINI_API_KEY": "test-gemini-key",
        "OPENROUTER_API_KEY": "test-openrouter-key",
    }

    env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])

    result = run_command(
        f"docker run --rm {env_flags} monadical/cubbi-aider:latest bash -c 'cat ~/.aider/.env'",
        "Testing API key configuration in .env file",
    )

    success = True
    for key, value in test_keys.items():
        if f"{key}={value}" not in result.stdout:
            print(f"❌ {key} not found in .env file")
            success = False
        else:
            print(f"✅ {key} configured correctly")

    # Test default configuration values
    if "AIDER_AUTO_COMMITS=true" in result.stdout:
        print("✅ Default AIDER_AUTO_COMMITS configured")
    else:
        print("❌ Default AIDER_AUTO_COMMITS not found")
        success = False

    if "AIDER_DARK_MODE=false" in result.stdout:
        print("✅ Default AIDER_DARK_MODE configured")
    else:
        print("❌ Default AIDER_DARK_MODE not found")
        success = False

    assert success, "API key configuration test failed"


def test_cubbi_cli_integration():
    """Test Cubbi CLI integration"""
    print("\n" + "=" * 60)
    print("🧪 Testing Cubbi CLI Integration")
    print("=" * 60)

    # Test image listing
    result = run_command(
        "uv run -m cubbi.cli image list | grep aider",
        "Testing Cubbi CLI can see Aider image",
    )

    if "aider" in result.stdout and "Aider AI pair" in result.stdout:
        print("✅ Cubbi CLI can list Aider image")
    else:
        print("❌ Cubbi CLI cannot see Aider image")
        return False

    # Test session creation with test command
    with tempfile.TemporaryDirectory() as temp_dir:
        test_env = {
            "OPENAI_API_KEY": "test-session-key",
            "ANTHROPIC_API_KEY": "test-anthropic-session-key",
        }

        env_vars = " ".join([f"{k}={v}" for k, v in test_env.items()])

        result = run_command(
            f"{env_vars} uv run -m cubbi.cli session create --image aider {temp_dir} --no-shell --run \"aider --version && echo 'Cubbi CLI test successful'\"",
            "Testing Cubbi CLI session creation with Aider",
        )

        assert (
            result.returncode == 0
            and re.search(r"aider \d+\.\d+\.\d+", result.stdout)
            and "Cubbi CLI test successful" in result.stdout
        ), "Cubbi CLI session creation failed"
        print("✅ Cubbi CLI session creation works")


def test_persistent_configuration():
    """Test persistent configuration directories"""
    print("\n" + "=" * 60)
    print("🧪 Testing Persistent Configuration")
    print("=" * 60)

    # Test that persistent directories are created
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-key' monadical/cubbi-aider:latest bash -c 'ls -la /home/cubbi/.aider/ && ls -la /home/cubbi/.cache/'",
        "Testing persistent configuration directories",
    )

    success = True

    if ".env" in result.stdout:
        print("✅ .env file created in ~/.aider/")
    else:
        print("❌ .env file not found in ~/.aider/")
        success = False

    if "aider" in result.stdout:
        print("✅ ~/.cache/aider directory exists")
    else:
        print("❌ ~/.cache/aider directory not found")
        success = False

    assert success, "Persistent configuration test failed"


def test_plugin_functionality():
    """Test the Aider plugin functionality"""
    print("\n" + "=" * 60)
    print("🧪 Testing Plugin Functionality")
    print("=" * 60)

    # Test plugin without API keys (should still work)
    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'echo \"Plugin test without API keys\"'",
        "Testing plugin functionality without API keys",
    )

    if "No API keys found - Aider will run without pre-configuration" in result.stdout:
        print("✅ Plugin handles missing API keys gracefully")
    else:
        # This might be in stderr or initialization might have changed
        print("ℹ️ Plugin API key handling test - check output above")

    # Test plugin with API keys
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-plugin-key' monadical/cubbi-aider:latest bash -c 'echo \"Plugin test with API keys\"'",
        "Testing plugin functionality with API keys",
    )

    if "Aider environment configured successfully" in result.stdout:
        print("✅ Plugin configures environment successfully")
    else:
        print("❌ Plugin environment configuration failed")
        assert False, "Plugin environment configuration failed"


def main():
    """Run all tests"""
    print("🚀 Starting Aider Cubbi Image Tests")
    print("=" * 60)

    tests = [
        ("Docker Image Exists", test_docker_image_exists),
        ("Aider Version", test_aider_version),
        ("API Key Configuration", test_api_key_configuration),
        ("Persistent Configuration", test_persistent_configuration),
        ("Plugin Functionality", test_plugin_functionality),
        ("Cubbi CLI Integration", test_cubbi_cli_integration),
    ]

    results = {}

    for test_name, test_func in tests:
        try:
            test_func()
            results[test_name] = True
        except Exception as e:
            print(f"❌ Test '{test_name}' failed with exception: {e}")
            results[test_name] = False

    # Print summary
    print("\n" + "=" * 60)
    print("📊 TEST SUMMARY")
    print("=" * 60)

    total_tests = len(tests)
    passed_tests = sum(1 for result in results.values() if result)
    failed_tests = total_tests - passed_tests

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")

    if failed_tests == 0:
        print("\n🎉 All tests passed! Aider image is ready for use.")
        return 0
    else:
        print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
        return 1


if __name__ == "__main__":
    sys.exit(main())
82 cubbi/images/claudecode/Dockerfile Normal file
@@ -0,0 +1,82 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Claude Code for Cubbi"

# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install uv (Python package manager)
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Node.js (for Claude Code NPM package)
ARG NODE_VERSION=v22.16.0
RUN mkdir -p /opt/node && \
    ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then \
        NODE_ARCH=linux-x64; \
    elif [ "$ARCH" = "aarch64" ]; then \
        NODE_ARCH=linux-arm64; \
    else \
        echo "Unsupported architecture"; exit 1; \
    fi && \
    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
    tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
    rm node.tar.gz

ENV PATH="/opt/node/bin:$PATH"

# Install Claude Code globally
RUN npm install -g @anthropic-ai/claude-code

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY claudecode_plugin.py /cubbi/claudecode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh

# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh

# Add Node.js to PATH in bashrc and init status check
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
222 cubbi/images/claudecode/README.md Normal file
@@ -0,0 +1,222 @@
# Claude Code for Cubbi

This image provides Claude Code (Anthropic's official CLI for Claude) in a Cubbi container environment.

## Overview

Claude Code is an interactive CLI tool that helps with software engineering tasks. This Cubbi image integrates Claude Code with secure API key management, persistent configuration, and enterprise features.

## Features

- **Claude Code CLI**: Full access to Claude's coding capabilities
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and cache preserved across container restarts
- **Enterprise Support**: Bedrock and Vertex AI integration
- **Network Support**: Proxy configuration for corporate environments
- **Tool Permissions**: Pre-configured permissions for all Claude Code tools

## Quick Start

### 1. Set up API Key

```bash
# Set your Anthropic API key in Cubbi configuration
cubbi config set services.anthropic.api_key "your-api-key-here"
```

### 2. Run Claude Code Environment

```bash
# Start Claude Code container
cubbi run claudecode

# Execute Claude Code commands
cubbi exec claudecode "claude 'help me write a Python function'"

# Start interactive session
cubbi exec claudecode "claude"
```

## Configuration

### Required Environment Variables

- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)

### Optional Environment Variables

- `ANTHROPIC_AUTH_TOKEN`: Custom authorization token for enterprise deployments
- `ANTHROPIC_CUSTOM_HEADERS`: Additional HTTP headers (JSON format)
- `CLAUDE_CODE_USE_BEDROCK`: Set to "true" to use Amazon Bedrock
- `CLAUDE_CODE_USE_VERTEX`: Set to "true" to use Google Vertex AI
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
- `DISABLE_TELEMETRY`: Set to "true" to disable telemetry
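
`ANTHROPIC_CUSTOM_HEADERS` must be a valid JSON object, since the bundled `claudecode_plugin.py` parses it with `json.loads` and skips it otherwise. A hedged example that reuses the `environment.*` config pattern shown below (the header name and value are made up):

```bash
# Hypothetical header; must be valid JSON or the plugin logs a warning and skips it
cubbi config set environment.anthropic_custom_headers '{"X-Org-Id": "1234"}'
```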

### Advanced Configuration

```bash
# Enterprise deployment with Bedrock
cubbi config set environment.claude_code_use_bedrock true
cubbi run claudecode

# With custom proxy
cubbi config set network.https_proxy "https://proxy.company.com:8080"
cubbi run claudecode

# Disable telemetry
cubbi config set environment.disable_telemetry true
cubbi run claudecode
```

## Usage Examples

### Basic Usage

```bash
# Get help
cubbi exec claudecode "claude --help"

# One-time task
cubbi exec claudecode "claude 'write a unit test for this function'"

# Interactive mode
cubbi exec claudecode "claude"
```

### Working with Projects

```bash
# Start Claude Code in your project directory
cubbi run claudecode --mount /path/to/your/project:/app
cubbi exec claudecode "cd /app && claude"

# Create a commit
cubbi exec claudecode "cd /app && claude commit"
```

### Advanced Features

```bash
# Run with specific model configuration
cubbi exec claudecode "claude -m claude-3-5-sonnet-20241022 'analyze this code'"

# Print mode (non-interactive, one-shot output)
cubbi exec claudecode "claude -p 'refactor this function'"
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.claude/`: Claude Code settings and configuration
- `~/.cache/claude/`: Claude Code cache and temporary files

Configuration files are maintained across container restarts, ensuring your settings and preferences are preserved.

## File Structure

```
cubbi/images/claudecode/
├── Dockerfile             # Container image definition
├── cubbi_image.yaml       # Cubbi image configuration
├── claudecode_plugin.py   # Authentication and setup plugin
├── cubbi_init.py          # Initialization script (shared)
├── init-status.sh         # Status check script (shared)
└── README.md              # This documentation
```

## Authentication Flow

1. **Environment Variables**: API key passed from Cubbi configuration
2. **Plugin Setup**: `claudecode_plugin.py` creates `~/.claude/settings.json`
3. **Verification**: Plugin verifies Claude Code installation and configuration
4. **Ready**: Claude Code is ready for use with configured authentication
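
For reference, the `~/.claude/settings.json` written by the plugin is a small JSON document. A trimmed sketch with a placeholder key (only the fields relevant to your environment are present; see `claudecode_plugin.py` for the full set):

```json
{
  "apiKey": "sk-ant-...your-key...",
  "telemetry": { "enabled": false },
  "permissions": {
    "tools": {
      "read": { "allowed": true },
      "bash": { "allowed": true }
    }
  }
}
```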

## Troubleshooting

### Common Issues

**API Key Not Set**
```
⚠️ No authentication configuration found
Please set ANTHROPIC_API_KEY environment variable
```
**Solution**: Set API key in Cubbi configuration:
```bash
cubbi config set services.anthropic.api_key "your-api-key-here"
```

**Claude Code Not Found**
```
❌ Claude Code not properly installed
```
**Solution**: Rebuild the container image:
```bash
docker build -t cubbi-claudecode:latest cubbi/images/claudecode/
```

**Network Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
cubbi config set network.https_proxy "your-proxy-url"
```

### Debug Mode

Enable verbose output for debugging:

```bash
# Check configuration
cubbi exec claudecode "cat ~/.claude/settings.json"

# Verify installation
cubbi exec claudecode "claude --version"
cubbi exec claudecode "which claude"
cubbi exec claudecode "node --version"
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Configuration**: Settings files have restricted access
- **Environment**: Isolated container environment
- **Telemetry**: Can be disabled for privacy

## Development

### Building the Image

```bash
# Build locally
docker build -t cubbi-claudecode:test cubbi/images/claudecode/

# Test basic functionality
docker run --rm -it \
  -e ANTHROPIC_API_KEY="your-api-key" \
  cubbi-claudecode:test \
  bash -c "claude --version"
```

### Testing

```bash
# Run through Cubbi
cubbi run claudecode --name test-claude
cubbi exec test-claude "claude --version"
cubbi stop test-claude
```

## Support

For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Claude Code**: Visit [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code)
- **API Keys**: Visit [Anthropic Console](https://console.anthropic.com/)

## License

This image configuration is provided under the same license as the Cubbi project. Claude Code is licensed separately by Anthropic.
193 cubbi/images/claudecode/claudecode_plugin.py Executable file
@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Claude Code Plugin for Cubbi
Handles authentication setup and configuration for Claude Code
"""

import json
import os
import stat
from pathlib import Path
from typing import Any, Dict, Optional

from cubbi_init import ToolPlugin

# API key mappings from environment variables to Claude Code configuration
API_KEY_MAPPINGS = {
    "ANTHROPIC_API_KEY": "api_key",
    "ANTHROPIC_AUTH_TOKEN": "auth_token",
    "ANTHROPIC_CUSTOM_HEADERS": "custom_headers",
}

# Enterprise integration environment variables
ENTERPRISE_MAPPINGS = {
    "CLAUDE_CODE_USE_BEDROCK": "use_bedrock",
    "CLAUDE_CODE_USE_VERTEX": "use_vertex",
    "HTTP_PROXY": "http_proxy",
    "HTTPS_PROXY": "https_proxy",
    "DISABLE_TELEMETRY": "disable_telemetry",
}


class ClaudeCodePlugin(ToolPlugin):
    """Plugin for setting up Claude Code authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "claudecode"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_claude_dir(self) -> Path:
        """Get the Claude Code configuration directory"""
        return Path("/home/cubbi/.claude")

    def _ensure_claude_dir(self) -> Path:
        """Ensure Claude directory exists with correct ownership"""
        claude_dir = self._get_claude_dir()

        try:
            claude_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
            self._set_ownership(claude_dir)
        except OSError as e:
            self.status.log(
                f"Failed to create Claude directory {claude_dir}: {e}", "ERROR"
            )

        return claude_dir

    def initialize(self) -> bool:
        """Initialize Claude Code configuration"""
        self.status.log("Setting up Claude Code authentication...")

        # Ensure Claude directory exists
        claude_dir = self._ensure_claude_dir()

        # Create settings configuration
        settings = self._create_settings()

        if settings:
            settings_file = claude_dir / "settings.json"
            success = self._write_settings(settings_file, settings)
            if success:
                self.status.log("✅ Claude Code authentication configured successfully")
                return True
            else:
                return False
        else:
            self.status.log("⚠️ No authentication configuration found", "WARNING")
            self.status.log(
                " Please set ANTHROPIC_API_KEY environment variable", "WARNING"
            )
            self.status.log(" Claude Code will run without authentication", "INFO")
            # Return True to allow container to start without API key
            # Users can still use Claude Code with their own authentication methods
            return True

    def _create_settings(self) -> Optional[Dict]:
        """Create Claude Code settings configuration"""
        settings = {}

        # Core authentication
        api_key = os.environ.get("ANTHROPIC_API_KEY")
        if not api_key:
            return None

        # Basic authentication setup
        settings["apiKey"] = api_key

        # Custom authorization token (optional)
        auth_token = os.environ.get("ANTHROPIC_AUTH_TOKEN")
        if auth_token:
            settings["authToken"] = auth_token

        # Custom headers (optional)
        custom_headers = os.environ.get("ANTHROPIC_CUSTOM_HEADERS")
        if custom_headers:
            try:
                # Expect JSON string format
                settings["customHeaders"] = json.loads(custom_headers)
            except json.JSONDecodeError:
                self.status.log(
                    "⚠️ Invalid ANTHROPIC_CUSTOM_HEADERS format, skipping", "WARNING"
                )

        # Enterprise integration settings
        if os.environ.get("CLAUDE_CODE_USE_BEDROCK") == "true":
            settings["provider"] = "bedrock"

        if os.environ.get("CLAUDE_CODE_USE_VERTEX") == "true":
            settings["provider"] = "vertex"

        # Network proxy settings
        http_proxy = os.environ.get("HTTP_PROXY")
        https_proxy = os.environ.get("HTTPS_PROXY")
        if http_proxy or https_proxy:
            settings["proxy"] = {}
            if http_proxy:
                settings["proxy"]["http"] = http_proxy
            if https_proxy:
                settings["proxy"]["https"] = https_proxy

        # Telemetry settings
        if os.environ.get("DISABLE_TELEMETRY") == "true":
            settings["telemetry"] = {"enabled": False}

        # Tool permissions (allow all by default in Cubbi environment)
        settings["permissions"] = {
            "tools": {
                "read": {"allowed": True},
                "write": {"allowed": True},
                "edit": {"allowed": True},
                "bash": {"allowed": True},
                "webfetch": {"allowed": True},
                "websearch": {"allowed": True},
            }
        }

        return settings

    def _write_settings(self, settings_file: Path, settings: Dict) -> bool:
        """Write settings to Claude Code configuration file"""
        try:
            # Write settings with secure permissions
            with open(settings_file, "w") as f:
                json.dump(settings, f, indent=2)

            # Set ownership and secure file permissions (read/write for owner only)
            self._set_ownership(settings_file)
            os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Claude Code settings at {settings_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Claude Code settings: {e}", "ERROR")
            return False

    def setup_tool_configuration(self) -> bool:
        """Set up Claude Code configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Claude Code with available MCP servers"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Claude Code has built-in MCP support, so we can potentially
        # configure MCP servers in the settings if needed
        self.status.log("MCP server integration available for Claude Code")
        return True
68 cubbi/images/claudecode/cubbi_image.yaml Normal file
@@ -0,0 +1,68 @@
name: claudecode
description: Claude Code AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-claudecode:latest

init:
  pre_command: /cubbi-init.sh
  command: /entrypoint.sh

environment:
  # Core Anthropic Authentication
  - name: ANTHROPIC_API_KEY
    description: Anthropic API key for Claude
    required: true
    sensitive: true

  # Optional Enterprise Integration
  - name: ANTHROPIC_AUTH_TOKEN
    description: Custom authorization token for Claude
    required: false
    sensitive: true

  - name: ANTHROPIC_CUSTOM_HEADERS
    description: Additional HTTP headers for Claude API requests
    required: false
    sensitive: true

  # Enterprise Deployment Options
  - name: CLAUDE_CODE_USE_BEDROCK
    description: Use Amazon Bedrock instead of direct API
    required: false

  - name: CLAUDE_CODE_USE_VERTEX
    description: Use Google Vertex AI instead of direct API
    required: false

  # Network Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

  # Optional Telemetry Control
  - name: DISABLE_TELEMETRY
    description: Disable Claude Code telemetry
    required: false
    default: "false"

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.claude"
    target: "/cubbi-config/claude-settings"
    type: "directory"
    description: "Claude Code settings and configuration"

  - source: "/home/cubbi/.cache/claude"
    target: "/cubbi-config/claude-cache"
    type: "directory"
    description: "Claude Code cache directory"
251 cubbi/images/claudecode/test_claudecode.py Executable file
@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""
Automated test suite for Claude Code Cubbi integration
"""

import subprocess


def run_test(description: str, command: list, timeout: int = 30) -> bool:
    """Run a test command and return success status"""
    print(f"🧪 Testing: {description}")
    try:
        result = subprocess.run(
            command, capture_output=True, text=True, timeout=timeout
        )
        if result.returncode == 0:
            print(" ✅ PASS")
            return True
        else:
            print(f" ❌ FAIL: {result.stderr}")
            if result.stdout:
                print(f" 📋 stdout: {result.stdout}")
            return False
    except subprocess.TimeoutExpired:
        print(f" ⏰ TIMEOUT: Command exceeded {timeout}s")
        return False
    except Exception as e:
        print(f" ❌ ERROR: {e}")
        return False


def test_suite():
    """Run complete test suite"""
    tests_passed = 0
    total_tests = 0

    print("🚀 Starting Claude Code Cubbi Integration Test Suite")
    print("=" * 60)

    # Test 1: Build image
    total_tests += 1
    if run_test(
        "Build Claude Code image",
        ["docker", "build", "-t", "cubbi-claudecode:test", "cubbi/images/claudecode/"],
        timeout=180,
    ):
        tests_passed += 1

    # Test 2: Tag image for Cubbi
    total_tests += 1
    if run_test(
        "Tag image for Cubbi",
        ["docker", "tag", "cubbi-claudecode:test", "monadical/cubbi-claudecode:latest"],
    ):
        tests_passed += 1

    # Test 3: Basic container startup
    total_tests += 1
    if run_test(
        "Container startup with test API key",
        [
            "docker",
            "run",
            "--rm",
            "-e",
            "ANTHROPIC_API_KEY=test-key",
            "cubbi-claudecode:test",
            "bash",
            "-c",
            "claude --version",
        ],
    ):
        tests_passed += 1

    # Test 4: Cubbi image list
    total_tests += 1
    if run_test(
        "Cubbi image list includes claudecode",
        ["uv", "run", "-m", "cubbi.cli", "image", "list"],
    ):
        tests_passed += 1

    # Test 5: Cubbi session creation
    total_tests += 1
    session_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if session_result.returncode == 0:
        print("🧪 Testing: Cubbi session creation")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID for cleanup
        session_id = None
        for line in session_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            # Test 6: Session cleanup
            total_tests += 1
            if run_test(
                "Clean up test session",
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
            ):
                tests_passed += 1
        else:
            print("🧪 Testing: Clean up test session")
            print(" ⚠️ SKIP: Could not extract session ID")
            total_tests += 1
    else:
        print("🧪 Testing: Cubbi session creation")
        print(f" ❌ FAIL: {session_result.stderr}")
        total_tests += 2  # This test and cleanup test both fail

    # Test 7: Session without API key
    total_tests += 1
    no_key_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-no-key",
            "--no-connect",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if no_key_result.returncode == 0:
        print("🧪 Testing: Session without API key")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in no_key_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Session without API key")
        print(f" ❌ FAIL: {no_key_result.stderr}")

    # Test 8: Persistent configuration test
    total_tests += 1
    persist_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-persist-auto",
            "--project",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "echo 'automation test' > ~/.claude/automation.txt && cat ~/.claude/automation.txt",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if persist_result.returncode == 0:
        print("🧪 Testing: Persistent configuration")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in persist_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Persistent configuration")
        print(f" ❌ FAIL: {persist_result.stderr}")

    print("=" * 60)
    print(f"📊 Test Results: {tests_passed}/{total_tests} tests passed")

    if tests_passed == total_tests:
        print("🎉 All tests passed! Claude Code integration is working correctly.")
        return True
    else:
        print(
            f"❌ {total_tests - tests_passed} test(s) failed. Please check the output above."
        )
        return False


def main():
    """Main test entry point"""
    success = test_suite()
    exit(0 if success else 1)


if __name__ == "__main__":
    main()
702 cubbi/images/cubbi_init.py Executable file
@@ -0,0 +1,702 @@
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = ["ruamel.yaml"]
# ///
"""
Standalone Cubbi initialization script

This is a self-contained script that includes all the necessary initialization
logic without requiring the full cubbi package to be installed.
"""

import grp
import importlib.util
import os
import pwd
import shutil
import subprocess
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List

from ruamel.yaml import YAML


# Status Management
class StatusManager:
    """Manages initialization status and logging"""

    def __init__(
        self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status"
    ):
        self.log_file = Path(log_file)
        self.status_file = Path(status_file)
        self._setup_logging()

    def _setup_logging(self) -> None:
        """Set up logging to both stdout and log file"""
        self.log_file.touch(exist_ok=True)
        self.set_status(False)

    def log(self, message: str, level: str = "INFO") -> None:
        """Log a message with timestamp"""
        print(message)
        sys.stdout.flush()

        with open(self.log_file, "a") as f:
            f.write(message + "\n")
            f.flush()

    def set_status(self, complete: bool) -> None:
        """Set initialization completion status"""
        status = "true" if complete else "false"
        with open(self.status_file, "w") as f:
            f.write(f"INIT_COMPLETE={status}\n")

    def start_initialization(self) -> None:
        """Mark initialization as started"""
        self.set_status(False)

    def complete_initialization(self) -> None:
        """Mark initialization as completed"""
        self.set_status(True)


# Configuration Management
@dataclass
class PersistentConfig:
    """Persistent configuration mapping"""

    source: str
    target: str
    type: str = "directory"
    description: str = ""


@dataclass
class ImageConfig:
    """Cubbi image configuration"""

    name: str
    description: str
    version: str
    maintainer: str
    image: str
    persistent_configs: List[PersistentConfig] = field(default_factory=list)


class ConfigParser:
    """Parses Cubbi image configuration and environment variables"""

    def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"):
        self.config_file = Path(config_file)
        self.environment: Dict[str, str] = dict(os.environ)

    def load_image_config(self) -> ImageConfig:
        """Load and parse the cubbi_image.yaml configuration"""
        if not self.config_file.exists():
            raise FileNotFoundError(f"Configuration file not found: {self.config_file}")

        yaml = YAML(typ="safe")
        with open(self.config_file, "r") as f:
            config_data = yaml.load(f)

        # Parse persistent configurations
        persistent_configs = []
        for pc_data in config_data.get("persistent_configs", []):
            persistent_configs.append(PersistentConfig(**pc_data))

        return ImageConfig(
            name=config_data["name"],
            description=config_data["description"],
            version=config_data["version"],
|
||||||
|
maintainer=config_data["maintainer"],
|
||||||
|
image=config_data["image"],
|
||||||
|
persistent_configs=persistent_configs,
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_cubbi_config(self) -> Dict[str, Any]:
|
||||||
|
"""Get standard Cubbi configuration from environment"""
|
||||||
|
return {
|
||||||
|
"user_id": int(self.environment.get("CUBBI_USER_ID", "1000")),
|
||||||
|
"group_id": int(self.environment.get("CUBBI_GROUP_ID", "1000")),
|
||||||
|
"run_command": self.environment.get("CUBBI_RUN_COMMAND"),
|
||||||
|
"no_shell": self.environment.get("CUBBI_NO_SHELL", "false").lower()
|
||||||
|
== "true",
|
||||||
|
"config_dir": self.environment.get("CUBBI_CONFIG_DIR", "/cubbi-config"),
|
||||||
|
"persistent_links": self.environment.get("CUBBI_PERSISTENT_LINKS", ""),
|
||||||
|
}
|
||||||
|
|
||||||
|
def get_mcp_config(self) -> Dict[str, Any]:
|
||||||
|
"""Get MCP server configuration from environment"""
|
||||||
|
mcp_count = int(self.environment.get("MCP_COUNT", "0"))
|
||||||
|
mcp_servers = []
|
||||||
|
|
||||||
|
for idx in range(mcp_count):
|
||||||
|
server = {
|
||||||
|
"name": self.environment.get(f"MCP_{idx}_NAME"),
|
||||||
|
"type": self.environment.get(f"MCP_{idx}_TYPE"),
|
||||||
|
"host": self.environment.get(f"MCP_{idx}_HOST"),
|
||||||
|
"url": self.environment.get(f"MCP_{idx}_URL"),
|
||||||
|
}
|
||||||
|
if server["name"]: # Only add if name is present
|
||||||
|
mcp_servers.append(server)
|
||||||
|
|
||||||
|
return {"count": mcp_count, "servers": mcp_servers}
|
||||||
|
|
||||||
|
|
||||||
|
# Core Management Classes
|
||||||
|
class UserManager:
|
||||||
|
"""Manages user and group creation/modification in containers"""
|
||||||
|
|
||||||
|
def __init__(self, status: StatusManager):
|
||||||
|
self.status = status
|
||||||
|
self.username = "cubbi"
|
||||||
|
|
||||||
|
def _run_command(self, cmd: list[str]) -> bool:
|
||||||
|
"""Run a system command and log the result"""
|
||||||
|
try:
|
||||||
|
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
|
||||||
|
if result.stdout:
|
||||||
|
self.status.log(f"Command output: {result.stdout.strip()}")
|
||||||
|
return True
|
||||||
|
except subprocess.CalledProcessError as e:
|
||||||
|
self.status.log(f"Command failed: {' '.join(cmd)}", "ERROR")
|
||||||
|
self.status.log(f"Error: {e.stderr}", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def setup_user_and_group(self, user_id: int, group_id: int) -> bool:
|
||||||
|
"""Set up user and group with specified IDs"""
|
||||||
|
self.status.log(
|
||||||
|
f"Setting up user '{self.username}' with UID: {user_id}, GID: {group_id}"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Handle group creation/modification
|
||||||
|
try:
|
||||||
|
existing_group = grp.getgrnam(self.username)
|
||||||
|
if existing_group.gr_gid != group_id:
|
||||||
|
self.status.log(
|
||||||
|
f"Modifying group '{self.username}' GID from {existing_group.gr_gid} to {group_id}"
|
||||||
|
)
|
||||||
|
if not self._run_command(
|
||||||
|
["groupmod", "-g", str(group_id), self.username]
|
||||||
|
):
|
||||||
|
return False
|
||||||
|
except KeyError:
|
||||||
|
if not self._run_command(["groupadd", "-g", str(group_id), self.username]):
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Handle user creation/modification
|
||||||
|
try:
|
||||||
|
existing_user = pwd.getpwnam(self.username)
|
||||||
|
if existing_user.pw_uid != user_id or existing_user.pw_gid != group_id:
|
||||||
|
self.status.log(
|
||||||
|
f"Modifying user '{self.username}' UID from {existing_user.pw_uid} to {user_id}, GID from {existing_user.pw_gid} to {group_id}"
|
||||||
|
)
|
||||||
|
if not self._run_command(
|
||||||
|
[
|
||||||
|
"usermod",
|
||||||
|
"--uid",
|
||||||
|
str(user_id),
|
||||||
|
"--gid",
|
||||||
|
str(group_id),
|
||||||
|
self.username,
|
||||||
|
]
|
||||||
|
):
|
||||||
|
return False
|
||||||
|
except KeyError:
|
||||||
|
if not self._run_command(
|
||||||
|
[
|
||||||
|
"useradd",
|
||||||
|
"--shell",
|
||||||
|
"/bin/bash",
|
||||||
|
"--uid",
|
||||||
|
str(user_id),
|
||||||
|
"--gid",
|
||||||
|
str(group_id),
|
||||||
|
"--no-create-home",
|
||||||
|
self.username,
|
||||||
|
]
|
||||||
|
):
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Create the sudoers file entry for the 'cubbi' user
|
||||||
|
sudoers_command = [
|
||||||
|
"sh",
|
||||||
|
"-c",
|
||||||
|
"echo 'cubbi ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cubbi && chmod 0440 /etc/sudoers.d/cubbi",
|
||||||
|
]
|
||||||
|
if not self._run_command(sudoers_command):
|
||||||
|
self.status.log("Failed to create sudoers entry for cubbi", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
class DirectoryManager:
|
||||||
|
"""Manages directory creation and permission setup"""
|
||||||
|
|
||||||
|
def __init__(self, status: StatusManager):
|
||||||
|
self.status = status
|
||||||
|
|
||||||
|
def create_directory(
|
||||||
|
self, path: str, user_id: int, group_id: int, mode: int = 0o755
|
||||||
|
) -> bool:
|
||||||
|
"""Create a directory with proper ownership and permissions"""
|
||||||
|
dir_path = Path(path)
|
||||||
|
|
||||||
|
try:
|
||||||
|
dir_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
os.chown(path, user_id, group_id)
|
||||||
|
dir_path.chmod(mode)
|
||||||
|
self.status.log(f"Created directory: {path}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Failed to create/configure directory {path}: {e}", "ERROR"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
def setup_standard_directories(self, user_id: int, group_id: int) -> bool:
|
||||||
|
"""Set up standard Cubbi directories"""
|
||||||
|
directories = [
|
||||||
|
("/app", 0o755),
|
||||||
|
("/cubbi-config", 0o755),
|
||||||
|
("/cubbi-config/home", 0o755),
|
||||||
|
]
|
||||||
|
|
||||||
|
self.status.log("Setting up standard directories")
|
||||||
|
|
||||||
|
success = True
|
||||||
|
for dir_path, mode in directories:
|
||||||
|
if not self.create_directory(dir_path, user_id, group_id, mode):
|
||||||
|
success = False
|
||||||
|
|
||||||
|
# Create /home/cubbi as a symlink to /cubbi-config/home
|
||||||
|
try:
|
||||||
|
home_cubbi = Path("/home/cubbi")
|
||||||
|
if home_cubbi.exists() or home_cubbi.is_symlink():
|
||||||
|
home_cubbi.unlink()
|
||||||
|
|
||||||
|
self.status.log("Creating /home/cubbi as symlink to /cubbi-config/home")
|
||||||
|
home_cubbi.symlink_to("/cubbi-config/home")
|
||||||
|
os.lchown("/home/cubbi", user_id, group_id)
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to create home directory symlink: {e}", "ERROR")
|
||||||
|
success = False
|
||||||
|
|
||||||
|
# Create .local directory in the persistent home
|
||||||
|
local_dir = Path("/cubbi-config/home/.local")
|
||||||
|
if not self.create_directory(str(local_dir), user_id, group_id, 0o755):
|
||||||
|
success = False
|
||||||
|
|
||||||
|
# Copy /root/.local/bin to user's home if it exists
|
||||||
|
root_local_bin = Path("/root/.local/bin")
|
||||||
|
if root_local_bin.exists():
|
||||||
|
user_local_bin = Path("/cubbi-config/home/.local/bin")
|
||||||
|
try:
|
||||||
|
user_local_bin.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
for item in root_local_bin.iterdir():
|
||||||
|
if item.is_file():
|
||||||
|
shutil.copy2(item, user_local_bin / item.name)
|
||||||
|
elif item.is_dir():
|
||||||
|
shutil.copytree(
|
||||||
|
item, user_local_bin / item.name, dirs_exist_ok=True
|
||||||
|
)
|
||||||
|
|
||||||
|
self._chown_recursive(user_local_bin, user_id, group_id)
|
||||||
|
self.status.log("Copied /root/.local/bin to user directory")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to copy /root/.local/bin: {e}", "ERROR")
|
||||||
|
success = False
|
||||||
|
|
||||||
|
return success
|
||||||
|
|
||||||
|
def _chown_recursive(self, path: Path, user_id: int, group_id: int) -> None:
|
||||||
|
"""Recursively change ownership of a directory"""
|
||||||
|
try:
|
||||||
|
os.chown(path, user_id, group_id)
|
||||||
|
for item in path.iterdir():
|
||||||
|
if item.is_dir():
|
||||||
|
self._chown_recursive(item, user_id, group_id)
|
||||||
|
else:
|
||||||
|
os.chown(item, user_id, group_id)
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Warning: Could not change ownership of {path}: {e}", "WARNING"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class ConfigManager:
|
||||||
|
"""Manages persistent configuration symlinks and mappings"""
|
||||||
|
|
||||||
|
def __init__(self, status: StatusManager):
|
||||||
|
self.status = status
|
||||||
|
|
||||||
|
def create_symlink(
|
||||||
|
self, source_path: str, target_path: str, user_id: int, group_id: int
|
||||||
|
) -> bool:
|
||||||
|
"""Create a symlink with proper ownership"""
|
||||||
|
try:
|
||||||
|
source = Path(source_path)
|
||||||
|
|
||||||
|
parent_dir = source.parent
|
||||||
|
if not parent_dir.exists():
|
||||||
|
self.status.log(f"Creating parent directory: {parent_dir}")
|
||||||
|
parent_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
os.chown(parent_dir, user_id, group_id)
|
||||||
|
|
||||||
|
self.status.log(f"Creating symlink: {source_path} -> {target_path}")
|
||||||
|
if source.is_symlink() or source.exists():
|
||||||
|
source.unlink()
|
||||||
|
|
||||||
|
source.symlink_to(target_path)
|
||||||
|
os.lchown(source_path, user_id, group_id)
|
||||||
|
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Failed to create symlink {source_path} -> {target_path}: {e}", "ERROR"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _ensure_target_directory(
|
||||||
|
self, target_path: str, user_id: int, group_id: int
|
||||||
|
) -> bool:
|
||||||
|
"""Ensure the target directory exists with proper ownership"""
|
||||||
|
try:
|
||||||
|
target_dir = Path(target_path)
|
||||||
|
if not target_dir.exists():
|
||||||
|
self.status.log(f"Creating target directory: {target_path}")
|
||||||
|
target_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
# Set ownership of the target directory to cubbi user
|
||||||
|
os.chown(target_path, user_id, group_id)
|
||||||
|
self.status.log(f"Set ownership of {target_path} to {user_id}:{group_id}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Failed to ensure target directory {target_path}: {e}", "ERROR"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
def setup_persistent_configs(
|
||||||
|
self, persistent_configs: List[PersistentConfig], user_id: int, group_id: int
|
||||||
|
) -> bool:
|
||||||
|
"""Set up persistent configuration symlinks from image config"""
|
||||||
|
if not persistent_configs:
|
||||||
|
self.status.log("No persistent configurations defined in image config")
|
||||||
|
return True
|
||||||
|
|
||||||
|
success = True
|
||||||
|
for config in persistent_configs:
|
||||||
|
# Ensure target directory exists with proper ownership
|
||||||
|
if not self._ensure_target_directory(config.target, user_id, group_id):
|
||||||
|
success = False
|
||||||
|
continue
|
||||||
|
|
||||||
|
if not self.create_symlink(config.source, config.target, user_id, group_id):
|
||||||
|
success = False
|
||||||
|
|
||||||
|
return success
|
||||||
|
|
||||||
|
|
||||||
|
class CommandManager:
|
||||||
|
"""Manages command execution and user switching"""
|
||||||
|
|
||||||
|
def __init__(self, status: StatusManager):
|
||||||
|
self.status = status
|
||||||
|
self.username = "cubbi"
|
||||||
|
|
||||||
|
def run_as_user(self, command: List[str], user: str = None) -> int:
|
||||||
|
"""Run a command as the specified user using gosu"""
|
||||||
|
if user is None:
|
||||||
|
user = self.username
|
||||||
|
|
||||||
|
full_command = ["gosu", user] + command
|
||||||
|
self.status.log(f"Executing as {user}: {' '.join(command)}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
result = subprocess.run(full_command, check=False)
|
||||||
|
return result.returncode
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to execute command: {e}", "ERROR")
|
||||||
|
return 1
|
||||||
|
|
||||||
|
def run_user_command(self, command: str) -> int:
|
||||||
|
"""Run user-specified command as cubbi user"""
|
||||||
|
if not command:
|
||||||
|
return 0
|
||||||
|
|
||||||
|
self.status.log(f"Executing user command: {command}")
|
||||||
|
return self.run_as_user(["sh", "-c", command])
|
||||||
|
|
||||||
|
def exec_as_user(self, args: List[str]) -> None:
|
||||||
|
"""Execute the final command as cubbi user (replaces current process)"""
|
||||||
|
if not args:
|
||||||
|
args = ["tail", "-f", "/dev/null"]
|
||||||
|
|
||||||
|
self.status.log(
|
||||||
|
f"Switching to user '{self.username}' and executing: {' '.join(args)}"
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
os.execvp("gosu", ["gosu", self.username] + args)
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to exec as user: {e}", "ERROR")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
|
||||||
|
# Tool Plugin System
|
||||||
|
class ToolPlugin(ABC):
|
||||||
|
"""Base class for tool-specific initialization plugins"""
|
||||||
|
|
||||||
|
def __init__(self, status: StatusManager, config: Dict[str, Any]):
|
||||||
|
self.status = status
|
||||||
|
self.config = config
|
||||||
|
|
||||||
|
@property
|
||||||
|
@abstractmethod
|
||||||
|
def tool_name(self) -> str:
|
||||||
|
"""Return the name of the tool this plugin supports"""
|
||||||
|
pass
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def initialize(self) -> bool:
|
||||||
|
"""Main tool initialization logic"""
|
||||||
|
pass
|
||||||
|
|
||||||
|
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
|
||||||
|
"""Integrate with available MCP servers"""
|
||||||
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
# Main Initializer
|
||||||
|
class CubbiInitializer:
|
||||||
|
"""Main Cubbi initialization orchestrator"""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self.status = StatusManager()
|
||||||
|
self.config_parser = ConfigParser()
|
||||||
|
self.user_manager = UserManager(self.status)
|
||||||
|
self.directory_manager = DirectoryManager(self.status)
|
||||||
|
self.config_manager = ConfigManager(self.status)
|
||||||
|
self.command_manager = CommandManager(self.status)
|
||||||
|
|
||||||
|
def run_initialization(self, final_args: List[str]) -> None:
|
||||||
|
"""Run the complete initialization process"""
|
||||||
|
try:
|
||||||
|
self.status.start_initialization()
|
||||||
|
|
||||||
|
# Load configuration
|
||||||
|
image_config = self.config_parser.load_image_config()
|
||||||
|
cubbi_config = self.config_parser.get_cubbi_config()
|
||||||
|
mcp_config = self.config_parser.get_mcp_config()
|
||||||
|
|
||||||
|
self.status.log(f"Initializing {image_config.name} v{image_config.version}")
|
||||||
|
|
||||||
|
# Core initialization
|
||||||
|
success = self._run_core_initialization(image_config, cubbi_config)
|
||||||
|
if not success:
|
||||||
|
self.status.log("Core initialization failed", "ERROR")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
# Tool-specific initialization
|
||||||
|
success = self._run_tool_initialization(
|
||||||
|
image_config, cubbi_config, mcp_config
|
||||||
|
)
|
||||||
|
if not success:
|
||||||
|
self.status.log("Tool initialization failed", "ERROR")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
# Mark complete
|
||||||
|
self.status.complete_initialization()
|
||||||
|
|
||||||
|
# Handle commands
|
||||||
|
self._handle_command_execution(cubbi_config, final_args)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Initialization failed with error: {e}", "ERROR")
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
def _run_core_initialization(self, image_config, cubbi_config) -> bool:
|
||||||
|
"""Run core Cubbi initialization steps"""
|
||||||
|
user_id = cubbi_config["user_id"]
|
||||||
|
group_id = cubbi_config["group_id"]
|
||||||
|
|
||||||
|
if not self.user_manager.setup_user_and_group(user_id, group_id):
|
||||||
|
return False
|
||||||
|
|
||||||
|
if not self.directory_manager.setup_standard_directories(user_id, group_id):
|
||||||
|
return False
|
||||||
|
|
||||||
|
config_path = Path(cubbi_config["config_dir"])
|
||||||
|
if not config_path.exists():
|
||||||
|
self.status.log(f"Creating config directory: {cubbi_config['config_dir']}")
|
||||||
|
try:
|
||||||
|
config_path.mkdir(parents=True, exist_ok=True)
|
||||||
|
os.chown(cubbi_config["config_dir"], user_id, group_id)
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to create config directory: {e}", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
if not self.config_manager.setup_persistent_configs(
|
||||||
|
image_config.persistent_configs, user_id, group_id
|
||||||
|
):
|
||||||
|
return False
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
def _run_tool_initialization(self, image_config, cubbi_config, mcp_config) -> bool:
|
||||||
|
"""Run tool-specific initialization"""
|
||||||
|
# Look for a tool-specific plugin file in the same directory
|
||||||
|
plugin_name = image_config.name.lower().replace("-", "_")
|
||||||
|
plugin_file = Path(__file__).parent / f"{plugin_name}_plugin.py"
|
||||||
|
|
||||||
|
if not plugin_file.exists():
|
||||||
|
self.status.log(
|
||||||
|
f"No tool-specific plugin found at {plugin_file}, skipping tool initialization"
|
||||||
|
)
|
||||||
|
return True
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Dynamically load the plugin module
|
||||||
|
spec = importlib.util.spec_from_file_location(
|
||||||
|
f"{image_config.name.lower()}_plugin", plugin_file
|
||||||
|
)
|
||||||
|
plugin_module = importlib.util.module_from_spec(spec)
|
||||||
|
spec.loader.exec_module(plugin_module)
|
||||||
|
|
||||||
|
# Find the plugin class (should inherit from ToolPlugin)
|
||||||
|
plugin_class = None
|
||||||
|
for attr_name in dir(plugin_module):
|
||||||
|
attr = getattr(plugin_module, attr_name)
|
||||||
|
if (
|
||||||
|
isinstance(attr, type)
|
||||||
|
and hasattr(attr, "tool_name")
|
||||||
|
and hasattr(attr, "initialize")
|
||||||
|
and attr_name != "ToolPlugin"
|
||||||
|
): # Skip the base class
|
||||||
|
plugin_class = attr
|
||||||
|
break
|
||||||
|
|
||||||
|
if not plugin_class:
|
||||||
|
self.status.log(
|
||||||
|
f"No valid plugin class found in {plugin_file}", "ERROR"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Instantiate and run the plugin
|
||||||
|
plugin = plugin_class(
|
||||||
|
self.status,
|
||||||
|
{
|
||||||
|
"image_config": image_config,
|
||||||
|
"cubbi_config": cubbi_config,
|
||||||
|
"mcp_config": mcp_config,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
self.status.log(f"Running {plugin.tool_name}-specific initialization")
|
||||||
|
|
||||||
|
if not plugin.initialize():
|
||||||
|
self.status.log(f"{plugin.tool_name} initialization failed", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
if not plugin.integrate_mcp_servers(mcp_config):
|
||||||
|
self.status.log(f"{plugin.tool_name} MCP integration failed", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Failed to load or execute plugin {plugin_file}: {e}", "ERROR"
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _handle_command_execution(self, cubbi_config, final_args):
|
||||||
|
"""Handle command execution"""
|
||||||
|
exit_code = 0
|
||||||
|
|
||||||
|
if cubbi_config["run_command"]:
|
||||||
|
self.status.log("--- Executing initial command ---")
|
||||||
|
exit_code = self.command_manager.run_user_command(
|
||||||
|
cubbi_config["run_command"]
|
||||||
|
)
|
||||||
|
self.status.log(
|
||||||
|
f"--- Initial command finished (exit code: {exit_code}) ---"
|
||||||
|
)
|
||||||
|
|
||||||
|
if cubbi_config["no_shell"]:
|
||||||
|
self.status.log(
|
||||||
|
"--- CUBBI_NO_SHELL=true, exiting container without starting shell ---"
|
||||||
|
)
|
||||||
|
sys.exit(exit_code)
|
||||||
|
|
||||||
|
self.command_manager.exec_as_user(final_args)
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
"""Main CLI entry point"""
|
||||||
|
import argparse
|
||||||
|
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="Cubbi container initialization script",
|
||||||
|
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||||
|
epilog="""
|
||||||
|
This script initializes a Cubbi container environment by:
|
||||||
|
1. Setting up user and group with proper IDs
|
||||||
|
2. Creating standard directories with correct permissions
|
||||||
|
3. Setting up persistent configuration symlinks
|
||||||
|
4. Running tool-specific initialization if available
|
||||||
|
5. Executing user commands or starting an interactive shell
|
||||||
|
|
||||||
|
Environment Variables:
|
||||||
|
CUBBI_USER_ID User ID for the cubbi user (default: 1000)
|
||||||
|
CUBBI_GROUP_ID Group ID for the cubbi user (default: 1000)
|
||||||
|
CUBBI_RUN_COMMAND Initial command to run before shell
|
||||||
|
CUBBI_NO_SHELL Exit after run command instead of starting shell
|
||||||
|
CUBBI_CONFIG_DIR Configuration directory path (default: /cubbi-config)
|
||||||
|
MCP_COUNT Number of MCP servers to configure
|
||||||
|
MCP_<N>_NAME Name of MCP server N
|
||||||
|
MCP_<N>_TYPE Type of MCP server N
|
||||||
|
MCP_<N>_HOST Host of MCP server N
|
||||||
|
MCP_<N>_URL URL of MCP server N
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
cubbi_init.py # Initialize and start bash shell
|
||||||
|
cubbi_init.py --help # Show this help message
|
||||||
|
cubbi_init.py /bin/zsh # Initialize and start zsh shell
|
||||||
|
cubbi_init.py python script.py # Initialize and run python script
|
||||||
|
""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"command",
|
||||||
|
nargs="*",
|
||||||
|
help="Command to execute after initialization (default: interactive shell)",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Parse known args to handle cases where the command might have its own arguments
|
||||||
|
args, unknown = parser.parse_known_args()
|
||||||
|
|
||||||
|
# Combine parsed command with unknown args
|
||||||
|
final_args = args.command + unknown
|
||||||
|
|
||||||
|
# Handle the common case where docker CMD passes ["tail", "-f", "/dev/null"]
|
||||||
|
# This should be treated as "no specific command" (empty args)
|
||||||
|
if final_args == ["tail", "-f", "/dev/null"]:
|
||||||
|
final_args = []
|
||||||
|
|
||||||
|
initializer = CubbiInitializer()
|
||||||
|
initializer.run_initialization(final_args)
|
||||||
|
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sys.exit(main())
|
||||||
@@ -1,14 +1,13 @@
|
|||||||
FROM python:3.12-slim
|
FROM python:3.12-slim
|
||||||
|
|
||||||
LABEL maintainer="team@monadical.com"
|
LABEL maintainer="team@monadical.com"
|
||||||
LABEL description="Goose with MCP servers for Cubbi"
|
LABEL description="Goose for Cubbi"
|
||||||
|
|
||||||
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
|
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
|
||||||
RUN apt-get update && apt-get install -y --no-install-recommends \
|
RUN apt-get update && apt-get install -y --no-install-recommends \
|
||||||
gosu \
|
gosu \
|
||||||
|
sudo \
|
||||||
passwd \
|
passwd \
|
||||||
git \
|
|
||||||
openssh-server \
|
|
||||||
bash \
|
bash \
|
||||||
curl \
|
curl \
|
||||||
bzip2 \
|
bzip2 \
|
||||||
@@ -17,13 +16,13 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
|
|||||||
libxcb1 \
|
libxcb1 \
|
||||||
libdbus-1-3 \
|
libdbus-1-3 \
|
||||||
nano \
|
nano \
|
||||||
|
tmux \
|
||||||
|
git-core \
|
||||||
|
ripgrep \
|
||||||
|
openssh-client \
|
||||||
vim \
|
vim \
|
||||||
&& rm -rf /var/lib/apt/lists/*
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
# Set up SSH server directory (configuration will be handled by entrypoint if needed)
|
|
||||||
RUN mkdir -p /var/run/sshd && chmod 0755 /var/run/sshd
|
|
||||||
# Do NOT enable root login or set root password here
|
|
||||||
|
|
||||||
# Install deps
|
# Install deps
|
||||||
WORKDIR /tmp
|
WORKDIR /tmp
|
||||||
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
|
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
|
||||||
@@ -40,32 +39,24 @@ RUN curl -fsSL https://github.com/block/goose/releases/download/stable/download_
|
|||||||
# Create app directory
|
# Create app directory
|
||||||
WORKDIR /app
|
WORKDIR /app
|
||||||
|
|
||||||
# Copy initialization scripts
|
# Copy initialization system
|
||||||
COPY cubbi-init.sh /cubbi-init.sh
|
COPY cubbi_init.py /cubbi/cubbi_init.py
|
||||||
COPY entrypoint.sh /entrypoint.sh
|
COPY goose_plugin.py /cubbi/goose_plugin.py
|
||||||
COPY cubbi-image.yaml /cubbi-image.yaml
|
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
|
||||||
COPY init-status.sh /init-status.sh
|
COPY init-status.sh /cubbi/init-status.sh
|
||||||
COPY update-goose-config.py /usr/local/bin/update-goose-config.py
|
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
|
||||||
|
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
|
||||||
# Extend env via bashrc
|
|
||||||
|
|
||||||
# Make scripts executable
|
|
||||||
RUN chmod +x /cubbi-init.sh /entrypoint.sh /init-status.sh \
|
|
||||||
/usr/local/bin/update-goose-config.py
|
|
||||||
|
|
||||||
# Set up initialization status check on login
|
|
||||||
RUN echo '[ -x /init-status.sh ] && /init-status.sh' >> /etc/bash.bashrc
|
|
||||||
|
|
||||||
# Set up environment
|
# Set up environment
|
||||||
ENV PYTHONUNBUFFERED=1
|
ENV PYTHONUNBUFFERED=1
|
||||||
ENV PYTHONDONTWRITEBYTECODE=1
|
ENV PYTHONDONTWRITEBYTECODE=1
|
||||||
|
ENV UV_LINK_MODE=copy
|
||||||
|
|
||||||
|
# Pre-install the cubbi_init
|
||||||
|
RUN /cubbi/cubbi_init.py --help
|
||||||
|
|
||||||
# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
|
# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
|
||||||
WORKDIR /app
|
WORKDIR /app
|
||||||
|
|
||||||
# Expose ports
|
ENTRYPOINT ["/cubbi/cubbi_init.py"]
|
||||||
EXPOSE 8000 22
|
|
||||||
|
|
||||||
# Set entrypoint - container starts as root, entrypoint handles user switching
|
|
||||||
ENTRYPOINT ["/entrypoint.sh"]
|
|
||||||
# Default command if none is provided (entrypoint will run this via gosu)
|
|
||||||
CMD ["tail", "-f", "/dev/null"]
|
CMD ["tail", "-f", "/dev/null"]
|
||||||
|
|||||||
@@ -1,25 +1,50 @@
|
|||||||
# Goose Image for MC
|
# Goose Image for Cubbi
|
||||||
|
|
||||||
This image provides a containerized environment for running [Goose](https://goose.ai).
|
This image provides a containerized environment for running [Goose](https://goose.ai).
|
||||||
|
|
||||||
## Features
|
## Features
|
||||||
|
|
||||||
- Pre-configured environment for Goose AI
|
- Pre-configured environment for Goose AI
|
||||||
- Self-hosted instance integration
|
|
||||||
- SSH access
|
|
||||||
- Git repository integration
|
|
||||||
- Langfuse logging support
|
- Langfuse logging support
|
||||||
|
|
||||||
## Environment Variables
|
## Environment Variables
|
||||||
|
|
||||||
|
### Goose Configuration
|
||||||
|
|
||||||
|
| Variable | Description | Required | Default |
|
||||||
|
|----------|-------------|----------|---------|
|
||||||
|
| `CUBBI_MODEL` | Model to use with Goose | No | - |
|
||||||
|
| `CUBBI_PROVIDER` | Provider to use with Goose | No | - |
|
||||||
|
|
||||||
|
### Langfuse Integration
|
||||||
|
|
||||||
|
| Variable | Description | Required | Default |
|
||||||
|
|----------|-------------|----------|---------|
|
||||||
|
| `LANGFUSE_INIT_PROJECT_PUBLIC_KEY` | Langfuse public key | No | - |
|
||||||
|
| `LANGFUSE_INIT_PROJECT_SECRET_KEY` | Langfuse secret key | No | - |
|
||||||
|
| `LANGFUSE_URL` | Langfuse API URL | No | `https://cloud.langfuse.com` |
|
||||||
|
|
||||||
|
### Cubbi Core Variables
|
||||||
|
|
||||||
|
| Variable | Description | Required | Default |
|
||||||
|
|----------|-------------|----------|---------|
|
||||||
|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
|
||||||
|
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
|
||||||
|
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
|
||||||
|
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
|
||||||
|
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
|
||||||
|
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |
|
||||||
|
|
||||||
|
### MCP Integration Variables
|
||||||
|
|
||||||
| Variable | Description | Required |
|
| Variable | Description | Required |
|
||||||
|----------|-------------|----------|
|
|----------|-------------|----------|
|
||||||
| `LANGFUSE_INIT_PROJECT_PUBLIC_KEY` | Langfuse public key | No |
|
| `MCP_COUNT` | Number of available MCP servers | No |
|
||||||
| `LANGFUSE_INIT_PROJECT_SECRET_KEY` | Langfuse secret key | No |
|
| `MCP_NAMES` | JSON array of MCP server names | No |
|
||||||
| `LANGFUSE_URL` | Langfuse API URL | No |
|
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
|
||||||
| `CUBBI_PROJECT_URL` | Project repository URL | No |
|
| `MCP_{idx}_TYPE` | Type of MCP server | No |
|
||||||
| `CUBBI_GIT_SSH_KEY` | SSH key for Git authentication | No |
|
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
|
||||||
| `CUBBI_GIT_TOKEN` | Token for Git authentication | No |
|
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |
|
||||||
|
|
||||||
## Build
|
## Build
|
||||||
|
|
||||||
@@ -34,8 +59,5 @@ docker build -t monadical/cubbi-goose:latest .
|
|||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Create a new session with this image
|
# Create a new session with this image
|
||||||
cubbi session create --driver goose
|
cubbix -i goose
|
||||||
|
|
||||||
# Create with project repository
|
|
||||||
cubbi session create --driver goose --project github.com/username/repo
|
|
||||||
```
|
```
|
||||||
|
|||||||
@@ -1,188 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# Standardized initialization script for Cubbi images
|
|
||||||
|
|
||||||
# Redirect all output to both stdout and the log file
|
|
||||||
exec > >(tee -a /init.log) 2>&1
|
|
||||||
|
|
||||||
# Mark initialization as started
|
|
||||||
echo "=== Cubbi Initialization started at $(date) ==="
|
|
||||||
|
|
||||||
# --- START INSERTED BLOCK ---
|
|
||||||
|
|
||||||
# Default UID/GID if not provided (should be passed by cubbi tool)
|
|
||||||
CUBBI_USER_ID=${CUBBI_USER_ID:-1000}
|
|
||||||
CUBBI_GROUP_ID=${CUBBI_GROUP_ID:-1000}
|
|
||||||
|
|
||||||
echo "Using UID: $CUBBI_USER_ID, GID: $CUBBI_GROUP_ID"
|
|
||||||
|
|
||||||
# Create group if it doesn't exist
|
|
||||||
if ! getent group cubbi > /dev/null; then
|
|
||||||
groupadd -g $CUBBI_GROUP_ID cubbi
|
|
||||||
else
|
|
||||||
# If group exists but has different GID, modify it
|
|
||||||
EXISTING_GID=$(getent group cubbi | cut -d: -f3)
|
|
||||||
if [ "$EXISTING_GID" != "$CUBBI_GROUP_ID" ]; then
|
|
||||||
groupmod -g $CUBBI_GROUP_ID cubbi
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create user if it doesn't exist
|
|
||||||
if ! getent passwd cubbi > /dev/null; then
|
|
||||||
useradd --shell /bin/bash --uid $CUBBI_USER_ID --gid $CUBBI_GROUP_ID --no-create-home cubbi
|
|
||||||
else
|
|
||||||
# If user exists but has different UID/GID, modify it
|
|
||||||
EXISTING_UID=$(getent passwd cubbi | cut -d: -f3)
|
|
||||||
EXISTING_GID=$(getent passwd cubbi | cut -d: -f4)
|
|
||||||
if [ "$EXISTING_UID" != "$CUBBI_USER_ID" ] || [ "$EXISTING_GID" != "$CUBBI_GROUP_ID" ]; then
|
|
||||||
usermod --uid $CUBBI_USER_ID --gid $CUBBI_GROUP_ID cubbi
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create home directory and set permissions
|
|
||||||
mkdir -p /home/cubbi
|
|
||||||
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /home/cubbi
|
|
||||||
mkdir -p /app
|
|
||||||
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /app
|
|
||||||
|
|
||||||
# Copy /root/.local/bin to the user's home directory
|
|
||||||
if [ -d /root/.local/bin ]; then
|
|
||||||
echo "Copying /root/.local/bin to /home/cubbi/.local/bin..."
|
|
||||||
mkdir -p /home/cubbi/.local/bin
|
|
||||||
cp -r /root/.local/bin/* /home/cubbi/.local/bin/
|
|
||||||
chown -R $CUBBI_USER_ID:$CUBBI_GROUP_ID /home/cubbi/.local
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Start SSH server only if explicitly enabled
|
|
||||||
if [ "$CUBBI_SSH_ENABLED" = "true" ]; then
|
|
||||||
echo "Starting SSH server..."
|
|
||||||
/usr/sbin/sshd
|
|
||||||
else
|
|
||||||
echo "SSH server disabled (use --ssh flag to enable)"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# --- END INSERTED BLOCK ---
|
|
||||||
|
|
||||||
echo "INIT_COMPLETE=false" > /init.status
|
|
||||||
|
|
||||||
# Project initialization
|
|
||||||
if [ -n "$CUBBI_PROJECT_URL" ]; then
|
|
||||||
echo "Initializing project: $CUBBI_PROJECT_URL"
|
|
||||||
|
|
||||||
# Set up SSH key if provided
|
|
||||||
if [ -n "$CUBBI_GIT_SSH_KEY" ]; then
|
|
||||||
mkdir -p ~/.ssh
|
|
||||||
echo "$CUBBI_GIT_SSH_KEY" > ~/.ssh/id_ed25519
|
|
||||||
chmod 600 ~/.ssh/id_ed25519
|
|
||||||
ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null
|
|
||||||
ssh-keyscan gitlab.com >> ~/.ssh/known_hosts 2>/dev/null
|
|
||||||
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts 2>/dev/null
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Set up token if provided
|
|
||||||
if [ -n "$CUBBI_GIT_TOKEN" ]; then
|
|
||||||
git config --global credential.helper store
|
|
||||||
echo "https://$CUBBI_GIT_TOKEN:x-oauth-basic@github.com" > ~/.git-credentials
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Clone repository
|
|
||||||
git clone $CUBBI_PROJECT_URL /app
|
|
||||||
cd /app
|
|
||||||
|
|
||||||
# Run project-specific initialization if present
|
|
||||||
if [ -f "/app/.cubbi/init.sh" ]; then
|
|
||||||
bash /app/.cubbi/init.sh
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Persistent configs are now directly mounted as volumes
|
|
||||||
# No need to create symlinks anymore
|
|
||||||
if [ -n "$CUBBI_CONFIG_DIR" ] && [ -d "$CUBBI_CONFIG_DIR" ]; then
|
|
||||||
echo "Using persistent configuration volumes (direct mounts)"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Goose uses self-hosted instance, no API key required
|
|
||||||
|
|
||||||
# Set up Langfuse logging if credentials are provided
|
|
||||||
if [ -n "$LANGFUSE_INIT_PROJECT_SECRET_KEY" ] && [ -n "$LANGFUSE_INIT_PROJECT_PUBLIC_KEY" ]; then
|
|
||||||
echo "Setting up Langfuse logging"
|
|
||||||
export LANGFUSE_INIT_PROJECT_SECRET_KEY="$LANGFUSE_INIT_PROJECT_SECRET_KEY"
|
|
||||||
export LANGFUSE_INIT_PROJECT_PUBLIC_KEY="$LANGFUSE_INIT_PROJECT_PUBLIC_KEY"
|
|
||||||
export LANGFUSE_URL="${LANGFUSE_URL:-https://cloud.langfuse.com}"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Ensure /cubbi-config directory exists (required for symlinks)
|
|
||||||
if [ ! -d "/cubbi-config" ]; then
|
|
||||||
echo "Creating /cubbi-config directory since it doesn't exist"
|
|
||||||
mkdir -p /cubbi-config
|
|
||||||
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /cubbi-config
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create symlinks for persistent configurations defined in the image
|
|
||||||
if [ -n "$CUBBI_PERSISTENT_LINKS" ]; then
|
|
||||||
echo "Creating persistent configuration symlinks..."
|
|
||||||
# Split by semicolon
|
|
||||||
IFS=';' read -ra LINKS <<< "$CUBBI_PERSISTENT_LINKS"
|
|
||||||
for link_pair in "${LINKS[@]}"; do
|
|
||||||
# Split by colon
|
|
||||||
IFS=':' read -r source_path target_path <<< "$link_pair"
|
|
||||||
|
|
||||||
if [ -z "$source_path" ] || [ -z "$target_path" ]; then
|
|
||||||
echo "Warning: Invalid link pair format '$link_pair', skipping."
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Processing link: $source_path -> $target_path"
|
|
||||||
parent_dir=$(dirname "$source_path")
|
|
||||||
|
|
||||||
# Ensure parent directory of the link source exists and is owned by cubbi
|
|
||||||
if [ ! -d "$parent_dir" ]; then
|
|
||||||
echo "Creating parent directory: $parent_dir"
|
|
||||||
mkdir -p "$parent_dir"
|
|
||||||
echo "Changing ownership of parent $parent_dir to $CUBBI_USER_ID:$CUBBI_GROUP_ID"
|
|
||||||
chown "$CUBBI_USER_ID:$CUBBI_GROUP_ID" "$parent_dir" || echo "Warning: Could not chown parent $parent_dir"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Create the symlink (force, no-dereference)
|
|
||||||
echo "Creating symlink: ln -sfn $target_path $source_path"
|
|
||||||
ln -sfn "$target_path" "$source_path"
|
|
||||||
# Optionally, change ownership of the symlink itself
|
|
||||||
echo "Changing ownership of symlink $source_path to $CUBBI_USER_ID:$CUBBI_GROUP_ID"
|
|
||||||
chown -h "$CUBBI_USER_ID:$CUBBI_GROUP_ID" "$source_path" || echo "Warning: Could not chown symlink $source_path"
|
|
||||||
|
|
||||||
done
|
|
||||||
echo "Persistent configuration symlinks created."
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Update Goose configuration with available MCP servers (run as cubbi after symlinks are created)
|
|
||||||
if [ -f "/usr/local/bin/update-goose-config.py" ]; then
|
|
||||||
echo "Updating Goose configuration with MCP servers as cubbi..."
|
|
||||||
gosu cubbi /usr/local/bin/update-goose-config.py
|
|
||||||
elif [ -f "$(dirname "$0")/update-goose-config.py" ]; then
|
|
||||||
echo "Updating Goose configuration with MCP servers as cubbi..."
|
|
||||||
gosu cubbi "$(dirname "$0")/update-goose-config.py"
|
|
||||||
else
|
|
||||||
echo "Warning: update-goose-config.py script not found. Goose configuration will not be updated."
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Run the user command first, if set, as cubbi
|
|
||||||
if [ -n "$CUBBI_RUN_COMMAND" ]; then
|
|
||||||
echo "--- Executing initial command: $CUBBI_RUN_COMMAND ---";
|
|
||||||
gosu cubbi sh -c "$CUBBI_RUN_COMMAND"; # Run user command as cubbi
|
|
||||||
COMMAND_EXIT_CODE=$?;
|
|
||||||
echo "--- Initial command finished (exit code: $COMMAND_EXIT_CODE) ---";
|
|
||||||
|
|
||||||
# If CUBBI_NO_SHELL is set, exit instead of starting a shell
|
|
||||||
if [ "$CUBBI_NO_SHELL" = "true" ]; then
|
|
||||||
echo "--- CUBBI_NO_SHELL=true, exiting container without starting shell ---";
|
|
||||||
# Mark initialization as complete before exiting
|
|
||||||
echo "=== Cubbi Initialization completed at $(date) ==="
|
|
||||||
echo "INIT_COMPLETE=true" > /init.status
|
|
||||||
exit $COMMAND_EXIT_CODE;
|
|
||||||
fi;
|
|
||||||
fi;
|
|
||||||
|
|
||||||
# Mark initialization as complete
|
|
||||||
echo "=== Cubbi Initialization completed at $(date) ==="
|
|
||||||
echo "INIT_COMPLETE=true" > /init.status
|
|
||||||
|
|
||||||
exec gosu cubbi "$@"
|
|
||||||
@@ -24,29 +24,8 @@ environment:
|
|||||||
required: false
|
required: false
|
||||||
default: https://cloud.langfuse.com
|
default: https://cloud.langfuse.com
|
||||||
|
|
||||||
# Project environment variables
|
|
||||||
- name: CUBBI_PROJECT_URL
|
|
||||||
description: Project repository URL
|
|
||||||
required: false
|
|
||||||
|
|
||||||
- name: CUBBI_PROJECT_TYPE
|
|
||||||
description: Project repository type (git, svn, etc.)
|
|
||||||
required: false
|
|
||||||
default: git
|
|
||||||
|
|
||||||
- name: CUBBI_GIT_SSH_KEY
|
|
||||||
description: SSH key for Git authentication
|
|
||||||
required: false
|
|
||||||
sensitive: true
|
|
||||||
|
|
||||||
- name: CUBBI_GIT_TOKEN
|
|
||||||
description: Token for Git authentication
|
|
||||||
required: false
|
|
||||||
sensitive: true
|
|
||||||
|
|
||||||
ports:
|
ports:
|
||||||
- 8000 # Main application
|
- 8000
|
||||||
- 22 # SSH server
|
|
||||||
|
|
||||||
volumes:
|
volumes:
|
||||||
- mountPath: /app
|
- mountPath: /app
|
||||||
@@ -57,7 +36,3 @@ persistent_configs:
|
|||||||
target: "/cubbi-config/goose-app"
|
target: "/cubbi-config/goose-app"
|
||||||
type: "directory"
|
type: "directory"
|
||||||
description: "Goose memory"
|
description: "Goose memory"
|
||||||
- source: "/home/cubbi/.config/goose"
|
|
||||||
target: "/cubbi-config/goose-config"
|
|
||||||
type: "directory"
|
|
||||||
description: "Goose configuration"
|
|
||||||
@@ -1,7 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# Entrypoint script for Goose image
|
|
||||||
# Executes the standard initialization script, which handles user setup,
|
|
||||||
# service startup (like sshd), and switching to the non-root user
|
|
||||||
# before running the container's command (CMD).
|
|
||||||
|
|
||||||
exec /cubbi-init.sh "$@"
|
|
||||||
202
cubbi/images/goose/goose_plugin.py
Normal file
202
cubbi/images/goose/goose_plugin.py
Normal file
@@ -0,0 +1,202 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Goose-specific plugin for Cubbi initialization
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Any, Dict
|
||||||
|
|
||||||
|
from cubbi_init import ToolPlugin
|
||||||
|
from ruamel.yaml import YAML
|
||||||
|
|
||||||
|
|
||||||
|
class GoosePlugin(ToolPlugin):
|
||||||
|
"""Plugin for Goose AI tool initialization"""
|
||||||
|
|
||||||
|
@property
|
||||||
|
def tool_name(self) -> str:
|
||||||
|
return "goose"
|
||||||
|
|
||||||
|
def _get_user_ids(self) -> tuple[int, int]:
|
||||||
|
"""Get the cubbi user and group IDs from environment"""
|
||||||
|
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
|
||||||
|
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
|
||||||
|
return user_id, group_id
|
||||||
|
|
||||||
|
def _set_ownership(self, path: Path) -> None:
|
||||||
|
"""Set ownership of a path to the cubbi user"""
|
||||||
|
user_id, group_id = self._get_user_ids()
|
||||||
|
try:
|
||||||
|
os.chown(path, user_id, group_id)
|
||||||
|
except OSError as e:
|
||||||
|
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
|
||||||
|
|
||||||
|
def _get_user_config_path(self) -> Path:
|
||||||
|
"""Get the correct config path for the cubbi user"""
|
||||||
|
return Path("/home/cubbi/.config/goose")
|
||||||
|
|
||||||
|
def _ensure_user_config_dir(self) -> Path:
|
||||||
|
"""Ensure config directory exists with correct ownership"""
|
||||||
|
config_dir = self._get_user_config_path()
|
||||||
|
|
||||||
|
# Create the full directory path
|
||||||
|
try:
|
||||||
|
config_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
except FileExistsError:
|
||||||
|
# Directory already exists, which is fine
|
||||||
|
pass
|
||||||
|
except OSError as e:
|
||||||
|
self.status.log(
|
||||||
|
f"Failed to create config directory {config_dir}: {e}", "ERROR"
|
||||||
|
)
|
||||||
|
return config_dir
|
||||||
|
|
||||||
|
# Set ownership for the directories
|
||||||
|
config_parent = config_dir.parent
|
||||||
|
if config_parent.exists():
|
||||||
|
self._set_ownership(config_parent)
|
||||||
|
|
||||||
|
if config_dir.exists():
|
||||||
|
self._set_ownership(config_dir)
|
||||||
|
|
||||||
|
return config_dir
|
||||||
|
|
||||||
|
def initialize(self) -> bool:
|
||||||
|
"""Initialize Goose configuration"""
|
||||||
|
self._ensure_user_config_dir()
|
||||||
|
return self.setup_tool_configuration()
|
||||||
|
|
||||||
|
def setup_tool_configuration(self) -> bool:
|
||||||
|
"""Set up Goose configuration file"""
|
||||||
|
# Ensure directory exists before writing
|
||||||
|
config_dir = self._ensure_user_config_dir()
|
||||||
|
if not config_dir.exists():
|
||||||
|
self.status.log(
|
||||||
|
f"Config directory {config_dir} does not exist and could not be created",
|
||||||
|
"ERROR",
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
config_file = config_dir / "config.yaml"
|
||||||
|
yaml = YAML(typ="safe")
|
||||||
|
|
||||||
|
# Load or initialize configuration
|
||||||
|
if config_file.exists():
|
||||||
|
with config_file.open("r") as f:
|
||||||
|
config_data = yaml.load(f) or {}
|
||||||
|
else:
|
||||||
|
config_data = {}
|
||||||
|
|
||||||
|
if "extensions" not in config_data:
|
||||||
|
config_data["extensions"] = {}
|
||||||
|
|
||||||
|
# Add default developer extension
|
||||||
|
config_data["extensions"]["developer"] = {
|
||||||
|
"enabled": True,
|
||||||
|
"name": "developer",
|
||||||
|
"timeout": 300,
|
||||||
|
"type": "builtin",
|
||||||
|
}
|
||||||
|
|
||||||
|
# Update with environment variables
|
||||||
|
goose_model = os.environ.get("CUBBI_MODEL")
|
||||||
|
goose_provider = os.environ.get("CUBBI_PROVIDER")
|
||||||
|
|
||||||
|
if goose_model:
|
||||||
|
config_data["GOOSE_MODEL"] = goose_model
|
||||||
|
self.status.log(f"Set GOOSE_MODEL to {goose_model}")
|
||||||
|
|
||||||
|
if goose_provider:
|
||||||
|
config_data["GOOSE_PROVIDER"] = goose_provider
|
||||||
|
self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")
|
||||||
|
|
||||||
|
# If provider is OpenAI and OPENAI_URL is set, configure OPENAI_HOST
|
||||||
|
if goose_provider.lower() == "openai":
|
||||||
|
openai_url = os.environ.get("OPENAI_URL")
|
||||||
|
if openai_url:
|
||||||
|
config_data["OPENAI_HOST"] = openai_url
|
||||||
|
self.status.log(f"Set OPENAI_HOST to {openai_url}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
with config_file.open("w") as f:
|
||||||
|
yaml.dump(config_data, f)
|
||||||
|
|
||||||
|
# Set ownership of the config file to cubbi user
|
||||||
|
self._set_ownership(config_file)
|
||||||
|
|
||||||
|
self.status.log(f"Updated Goose configuration at {config_file}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to write Goose configuration: {e}", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
|
||||||
|
"""Integrate Goose with available MCP servers"""
|
||||||
|
if mcp_config["count"] == 0:
|
||||||
|
self.status.log("No MCP servers to integrate")
|
||||||
|
return True
|
||||||
|
|
||||||
|
# Ensure directory exists before writing
|
||||||
|
config_dir = self._ensure_user_config_dir()
|
||||||
|
if not config_dir.exists():
|
||||||
|
self.status.log(
|
||||||
|
f"Config directory {config_dir} does not exist and could not be created",
|
||||||
|
"ERROR",
|
||||||
|
)
|
||||||
|
return False
|
||||||
|
|
||||||
|
config_file = config_dir / "config.yaml"
|
||||||
|
yaml = YAML(typ="safe")
|
||||||
|
|
||||||
|
if config_file.exists():
|
||||||
|
with config_file.open("r") as f:
|
||||||
|
config_data = yaml.load(f) or {}
|
||||||
|
else:
|
||||||
|
config_data = {"extensions": {}}
|
||||||
|
|
||||||
|
if "extensions" not in config_data:
|
||||||
|
config_data["extensions"] = {}
|
||||||
|
|
||||||
|
for server in mcp_config["servers"]:
|
||||||
|
server_name = server["name"]
|
||||||
|
server_host = server["host"]
|
||||||
|
server_url = server["url"]
|
||||||
|
|
||||||
|
if server_name and server_host:
|
||||||
|
mcp_url = f"http://{server_host}:8080/sse"
|
||||||
|
self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
|
||||||
|
|
||||||
|
config_data["extensions"][server_name] = {
|
||||||
|
"enabled": True,
|
||||||
|
"name": server_name,
|
||||||
|
"timeout": 60,
|
||||||
|
"type": server.get("type", "sse"),
|
||||||
|
"uri": mcp_url,
|
||||||
|
"envs": {},
|
||||||
|
}
|
||||||
|
elif server_name and server_url:
|
||||||
|
self.status.log(
|
||||||
|
f"Adding remote MCP extension: {server_name} - {server_url}"
|
||||||
|
)
|
||||||
|
|
||||||
|
config_data["extensions"][server_name] = {
|
||||||
|
"enabled": True,
|
||||||
|
"name": server_name,
|
||||||
|
"timeout": 60,
|
||||||
|
"type": server.get("type", "sse"),
|
||||||
|
"uri": server_url,
|
||||||
|
"envs": {},
|
||||||
|
}
|
||||||
|
|
||||||
|
try:
|
||||||
|
with config_file.open("w") as f:
|
||||||
|
yaml.dump(config_data, f)
|
||||||
|
|
||||||
|
# Set ownership of the config file to cubbi user
|
||||||
|
self._set_ownership(config_file)
|
||||||
|
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
|
||||||
|
return False
|
||||||
@@ -1,106 +0,0 @@
|
|||||||
#!/usr/bin/env -S uv run --script
|
|
||||||
# /// script
|
|
||||||
# dependencies = ["ruamel.yaml"]
|
|
||||||
# ///
|
|
||||||
import json
|
|
||||||
import os
|
|
||||||
from pathlib import Path
|
|
||||||
|
|
||||||
from ruamel.yaml import YAML
|
|
||||||
|
|
||||||
# Path to goose config
|
|
||||||
GOOSE_CONFIG = Path.home() / ".config/goose/config.yaml"
|
|
||||||
CONFIG_DIR = GOOSE_CONFIG.parent
|
|
||||||
|
|
||||||
# Create config directory if it doesn't exist
|
|
||||||
CONFIG_DIR.mkdir(parents=True, exist_ok=True)
|
|
||||||
|
|
||||||
|
|
||||||
def update_config():
|
|
||||||
"""Update Goose configuration based on environment variables and config file"""
|
|
||||||
|
|
||||||
yaml = YAML()
|
|
||||||
|
|
||||||
# Load or initialize the YAML configuration
|
|
||||||
if not GOOSE_CONFIG.exists():
|
|
||||||
config_data = {"extensions": {}}
|
|
||||||
else:
|
|
||||||
with GOOSE_CONFIG.open("r") as f:
|
|
||||||
config_data = yaml.load(f)
|
|
||||||
if "extensions" not in config_data:
|
|
||||||
config_data["extensions"] = {}
|
|
||||||
|
|
||||||
# Add default developer extension
|
|
||||||
config_data["extensions"]["developer"] = {
|
|
||||||
"enabled": True,
|
|
||||||
"name": "developer",
|
|
||||||
"timeout": 300,
|
|
||||||
"type": "builtin",
|
|
||||||
}
|
|
||||||
|
|
||||||
# Update goose configuration with model and provider from environment variables
|
|
||||||
goose_model = os.environ.get("CUBBI_MODEL")
|
|
||||||
goose_provider = os.environ.get("CUBBI_PROVIDER")
|
|
||||||
|
|
||||||
if goose_model:
|
|
||||||
config_data["GOOSE_MODEL"] = goose_model
|
|
||||||
print(f"Set GOOSE_MODEL to {goose_model}")
|
|
||||||
|
|
||||||
if goose_provider:
|
|
||||||
config_data["GOOSE_PROVIDER"] = goose_provider
|
|
||||||
print(f"Set GOOSE_PROVIDER to {goose_provider}")
|
|
||||||
|
|
||||||
# Get MCP information from environment variables
|
|
||||||
mcp_count = int(os.environ.get("MCP_COUNT", "0"))
|
|
||||||
mcp_names_str = os.environ.get("MCP_NAMES", "[]")
|
|
||||||
|
|
||||||
try:
|
|
||||||
mcp_names = json.loads(mcp_names_str)
|
|
||||||
print(f"Found {mcp_count} MCP servers: {', '.join(mcp_names)}")
|
|
||||||
except json.JSONDecodeError:
|
|
||||||
mcp_names = []
|
|
||||||
print("Error parsing MCP_NAMES environment variable")
|
|
||||||
|
|
||||||
# Process each MCP - collect the MCP configs to add or update
|
|
||||||
for idx in range(mcp_count):
|
|
||||||
mcp_name = os.environ.get(f"MCP_{idx}_NAME")
|
|
||||||
mcp_type = os.environ.get(f"MCP_{idx}_TYPE")
|
|
||||||
mcp_host = os.environ.get(f"MCP_{idx}_HOST")
|
|
||||||
|
|
||||||
# Always use container's SSE port (8080) not the host-bound port
|
|
||||||
if mcp_name and mcp_host:
|
|
||||||
# Use standard MCP SSE port (8080)
|
|
||||||
mcp_url = f"http://{mcp_host}:8080/sse"
|
|
||||||
print(f"Processing MCP extension: {mcp_name} ({mcp_type}) - {mcp_url}")
|
|
||||||
config_data["extensions"][mcp_name] = {
|
|
||||||
"enabled": True,
|
|
||||||
"name": mcp_name,
|
|
||||||
"timeout": 60,
|
|
||||||
"type": "sse",
|
|
||||||
"uri": mcp_url,
|
|
||||||
"envs": {},
|
|
||||||
}
|
|
||||||
elif mcp_name and os.environ.get(f"MCP_{idx}_URL"):
|
|
||||||
# For remote MCPs, use the URL provided in environment
|
|
||||||
mcp_url = os.environ.get(f"MCP_{idx}_URL")
|
|
||||||
print(
|
|
||||||
f"Processing remote MCP extension: {mcp_name} ({mcp_type}) - {mcp_url}"
|
|
||||||
)
|
|
||||||
config_data["extensions"][mcp_name] = {
|
|
||||||
"enabled": True,
|
|
||||||
"name": mcp_name,
|
|
||||||
"timeout": 60,
|
|
||||||
"type": "sse",
|
|
||||||
"uri": mcp_url,
|
|
||||||
"envs": {},
|
|
||||||
}
|
|
||||||
|
|
||||||
# Write the updated configuration back to the file
|
|
||||||
with GOOSE_CONFIG.open("w") as f:
|
|
||||||
yaml.dump(config_data, f)
|
|
||||||
|
|
||||||
print(f"Updated Goose configuration at {GOOSE_CONFIG}")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
update_config()
|
|
||||||
13
cubbi/images/goose/init-status.sh → cubbi/images/init-status.sh
Normal file → Executable file
13
cubbi/images/goose/init-status.sh → cubbi/images/init-status.sh
Normal file → Executable file
@@ -6,17 +6,20 @@ if [ "$(id -u)" != "0" ]; then
     exit 0
 fi
 
+# Ensure files exist before checking them
+touch /cubbi/init.status /cubbi/init.log
+
 # Quick check instead of full logic
-if ! grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
+if ! grep -q "INIT_COMPLETE=true" "/cubbi/init.status" 2>/dev/null; then
     # Only follow logs if initialization is incomplete
-    if [ -f "/init.log" ]; then
+    if [ -f "/cubbi/init.log" ]; then
         echo "----------------------------------------"
-        tail -f /init.log &
+        tail -f /cubbi/init.log &
         tail_pid=$!
 
         # Check every second if initialization has completed
         while true; do
-            if grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
+            if grep -q "INIT_COMPLETE=true" "/cubbi/init.status" 2>/dev/null; then
                 kill $tail_pid 2>/dev/null
                 echo "----------------------------------------"
                 break
@@ -28,4 +31,4 @@ if ! grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
     fi
 fi
 
-exec gosu cubbi /bin/bash -il
+exec gosu cubbi /bin/bash -i
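For context, the flag this script polls for is a plain `INIT_COMPLETE=true` line in `/cubbi/init.status`. A minimal Python sketch of how an initializer could set it — the helper name is hypothetical; the path and flag string come from the script above:

```python
from pathlib import Path

STATUS_FILE = Path("/cubbi/init.status")  # same file that init-status.sh greps


def mark_init_complete() -> None:
    # Append the flag that init-status.sh waits for before handing over the shell
    with STATUS_FILE.open("a") as f:
        f.write("INIT_COMPLETE=true\n")
```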

78  cubbi/images/opencode/Dockerfile  (Normal file)
@@ -0,0 +1,78 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Opencode for Cubbi"

# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install deps
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Node.js
ARG NODE_VERSION=v22.16.0
RUN mkdir -p /opt/node && \
    ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then \
        NODE_ARCH=linux-x64; \
    elif [ "$ARCH" = "aarch64" ]; then \
        NODE_ARCH=linux-arm64; \
    else \
        echo "Unsupported architecture"; exit 1; \
    fi && \
    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
    tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
    rm node.tar.gz

ENV PATH="/opt/node/bin:$PATH"
RUN npm i -g yarn
RUN npm i -g opencode-ai

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY opencode_plugin.py /cubbi/opencode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

55  cubbi/images/opencode/README.md  (Normal file)
@@ -0,0 +1,55 @@
# Opencode Image for Cubbi

This image provides a containerized environment for running [Opencode](https://opencode.ai).

## Features

- Pre-configured environment for Opencode AI
- Langfuse logging support

## Environment Variables

### Opencode Configuration

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_MODEL` | Model to use with Opencode | No | - |
| `CUBBI_PROVIDER` | Provider to use with Opencode | No | - |

### Cubbi Core Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |

### MCP Integration Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `MCP_COUNT` | Number of available MCP servers | No |
| `MCP_NAMES` | JSON array of MCP server names | No |
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
| `MCP_{idx}_TYPE` | Type of MCP server | No |
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |

## Build

To build this image:

```bash
cd cubbi/images/opencode
docker build -t monadical/cubbi-opencode:latest .
```

## Usage

```bash
# Create a new session with this image
cubbix -i opencode
```

22  cubbi/images/opencode/cubbi_image.yaml  (Normal file)
@@ -0,0 +1,22 @@
name: opencode
description: Opencode AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-opencode:latest

init:
  pre_command: /cubbi-init.sh
  command: /entrypoint.sh

environment: []
ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.config/opencode"
    target: "/cubbi-config/config-opencode"
    type: "directory"
    description: "Opencode configuration"

265  cubbi/images/opencode/opencode_plugin.py  (Normal file)
@@ -0,0 +1,265 @@
#!/usr/bin/env python3
"""
Opencode-specific plugin for Cubbi initialization
"""

import json
import os
from pathlib import Path
from typing import Any, Dict

from cubbi_init import ToolPlugin

# Map of environment variables to provider names in auth.json
API_KEY_MAPPINGS = {
    "ANTHROPIC_API_KEY": "anthropic",
    "GOOGLE_API_KEY": "google",
    "OPENAI_API_KEY": "openai",
    "OPENROUTER_API_KEY": "openrouter",
}


class OpencodePlugin(ToolPlugin):
    """Plugin for Opencode AI tool initialization"""

    @property
    def tool_name(self) -> str:
        return "opencode"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_user_config_path(self) -> Path:
        """Get the correct config path for the cubbi user"""
        return Path("/home/cubbi/.config/opencode")

    def _get_user_data_path(self) -> Path:
        """Get the correct data path for the cubbi user"""
        return Path("/home/cubbi/.local/share/opencode")

    def _ensure_user_config_dir(self) -> Path:
        """Ensure config directory exists with correct ownership"""
        config_dir = self._get_user_config_path()

        # Create the full directory path
        try:
            config_dir.mkdir(parents=True, exist_ok=True)
        except FileExistsError:
            # Directory already exists, which is fine
            pass
        except OSError as e:
            self.status.log(
                f"Failed to create config directory {config_dir}: {e}", "ERROR"
            )
            return config_dir

        # Set ownership for the directories
        config_parent = config_dir.parent
        if config_parent.exists():
            self._set_ownership(config_parent)

        if config_dir.exists():
            self._set_ownership(config_dir)

        return config_dir

    def _ensure_user_data_dir(self) -> Path:
        """Ensure data directory exists with correct ownership"""
        data_dir = self._get_user_data_path()

        # Create the full directory path
        try:
            data_dir.mkdir(parents=True, exist_ok=True)
        except FileExistsError:
            # Directory already exists, which is fine
            pass
        except OSError as e:
            self.status.log(f"Failed to create data directory {data_dir}: {e}", "ERROR")
            return data_dir

        # Set ownership for the directories
        data_parent = data_dir.parent
        if data_parent.exists():
            self._set_ownership(data_parent)

        if data_dir.exists():
            self._set_ownership(data_dir)

        return data_dir

    def _create_auth_file(self) -> bool:
        """Create auth.json file with configured API keys"""
        # Ensure data directory exists
        data_dir = self._ensure_user_data_dir()
        if not data_dir.exists():
            self.status.log(
                f"Data directory {data_dir} does not exist and could not be created",
                "ERROR",
            )
            return False

        auth_file = data_dir / "auth.json"
        auth_data = {}

        # Check each API key and add to auth data if present
        for env_var, provider in API_KEY_MAPPINGS.items():
            api_key = os.environ.get(env_var)
            if api_key:
                auth_data[provider] = {"type": "api", "key": api_key}

                # Add custom endpoint URL for OpenAI if available
                if provider == "openai":
                    openai_url = os.environ.get("OPENAI_URL")
                    if openai_url:
                        auth_data[provider]["baseURL"] = openai_url
                        self.status.log(
                            f"Added OpenAI custom endpoint URL: {openai_url}"
                        )

                self.status.log(f"Added {provider} API key to auth configuration")

        # Only write file if we have at least one API key
        if not auth_data:
            self.status.log("No API keys found, skipping auth.json creation")
            return True

        try:
            with auth_file.open("w") as f:
                json.dump(auth_data, f, indent=2)

            # Set ownership of the auth file to cubbi user
            self._set_ownership(auth_file)

            # Set secure permissions (readable only by owner)
            auth_file.chmod(0o600)

            self.status.log(f"Created OpenCode auth configuration at {auth_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to create auth configuration: {e}", "ERROR")
            return False

    def initialize(self) -> bool:
        """Initialize Opencode configuration"""
        self._ensure_user_config_dir()

        # Create auth.json file with API keys
        auth_success = self._create_auth_file()

        # Set up tool configuration
        config_success = self.setup_tool_configuration()

        return auth_success and config_success

    def setup_tool_configuration(self) -> bool:
        """Set up Opencode configuration file"""
        # Ensure directory exists before writing
        config_dir = self._ensure_user_config_dir()
        if not config_dir.exists():
            self.status.log(
                f"Config directory {config_dir} does not exist and could not be created",
                "ERROR",
            )
            return False

        config_file = config_dir / "config.json"

        # Load or initialize configuration
        if config_file.exists():
            with config_file.open("r") as f:
                config_data = json.load(f) or {}
        else:
            config_data = {}

        # Update with environment variables
        opencode_model = os.environ.get("CUBBI_MODEL")
        opencode_provider = os.environ.get("CUBBI_PROVIDER")

        if opencode_model and opencode_provider:
            config_data["model"] = f"{opencode_provider}/{opencode_model}"
            self.status.log(f"Set model to {config_data['model']}")

        try:
            with config_file.open("w") as f:
                json.dump(config_data, f, indent=2)

            # Set ownership of the config file to cubbi user
            self._set_ownership(config_file)

            self.status.log(f"Updated Opencode configuration at {config_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Opencode configuration: {e}", "ERROR")
            return False

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Opencode with available MCP servers"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Ensure directory exists before writing
        config_dir = self._ensure_user_config_dir()
        if not config_dir.exists():
            self.status.log(
                f"Config directory {config_dir} does not exist and could not be created",
                "ERROR",
            )
            return False

        config_file = config_dir / "config.json"

        if config_file.exists():
            with config_file.open("r") as f:
                config_data = json.load(f) or {}
        else:
            config_data = {}

        if "mcp" not in config_data:
            config_data["mcp"] = {}

        for server in mcp_config["servers"]:
            server_name = server["name"]
            server_host = server.get("host")
            server_url = server.get("url")

            if server_name and server_host:
                mcp_url = f"http://{server_host}:8080/sse"
                self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")

                config_data["mcp"][server_name] = {
                    "type": "remote",
                    "url": mcp_url,
                }
            elif server_name and server_url:
                self.status.log(
                    f"Adding remote MCP extension: {server_name} - {server_url}"
                )

                config_data["mcp"][server_name] = {
                    "type": "remote",
                    "url": server_url,
                }

        try:
            with config_file.open("w") as f:
                json.dump(config_data, f, indent=2)

            # Set ownership of the config file to cubbi user
            self._set_ownership(config_file)

            return True
        except Exception as e:
            self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
            return False
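As a rough illustration of what this plugin produces, assume (hypothetically) that `ANTHROPIC_API_KEY` is set together with `CUBBI_PROVIDER=anthropic` and `CUBBI_MODEL=claude-sonnet` (placeholder values). The auth file at `/home/cubbi/.local/share/opencode/auth.json` would then look roughly like:

```json
{
  "anthropic": {
    "type": "api",
    "key": "sk-ant-placeholder"
  }
}
```

and `/home/cubbi/.config/opencode/config.json` roughly like:

```json
{
  "model": "anthropic/claude-sonnet"
}
```

Both files are chowned to the cubbi user, and auth.json is additionally restricted to mode 0600.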
@@ -79,6 +79,7 @@ class MCPManager:
         name: str,
         url: str,
         headers: Dict[str, str] = None,
+        mcp_type: Optional[str] = None,
         add_as_default: bool = True,
     ) -> Dict[str, Any]:
         """Add a remote MCP server.
@@ -97,6 +98,7 @@ class MCPManager:
             name=name,
             url=url,
             headers=headers or {},
+            mcp_type=mcp_type,
         )

         # Add to the configuration
@@ -61,6 +61,7 @@ class RemoteMCP(BaseModel):
     type: str = "remote"
     url: str
     headers: Dict[str, str] = Field(default_factory=dict)
+    mcp_type: Optional[str] = None


 class DockerMCP(BaseModel):
@@ -102,6 +103,7 @@ class Session(BaseModel):
     status: SessionStatus
     container_id: Optional[str] = None
     ports: Dict[int, int] = Field(default_factory=dict)
+    mcps: List[str] = Field(default_factory=list)


 class Config(BaseModel):
@@ -109,5 +111,5 @@ class Config(BaseModel):
     images: Dict[str, Image] = Field(default_factory=dict)
     defaults: Dict[str, object] = Field(
         default_factory=dict
-    )  # Can store strings, booleans, or other values
+    )  # Can store strings, booleans, lists, or other values
     mcps: List[Dict[str, Any]] = Field(default_factory=list)
|||||||
"services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
|
"services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
|
||||||
"services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
|
"services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
|
||||||
"services.openai.api_key": "OPENAI_API_KEY",
|
"services.openai.api_key": "OPENAI_API_KEY",
|
||||||
|
"services.openai.url": "OPENAI_URL",
|
||||||
"services.anthropic.api_key": "ANTHROPIC_API_KEY",
|
"services.anthropic.api_key": "ANTHROPIC_API_KEY",
|
||||||
"services.openrouter.api_key": "OPENROUTER_API_KEY",
|
"services.openrouter.api_key": "OPENROUTER_API_KEY",
|
||||||
"services.google.api_key": "GOOGLE_API_KEY",
|
"services.google.api_key": "GOOGLE_API_KEY",
|
||||||
|
|||||||
@@ -387,7 +387,7 @@ Cubbi provides persistent storage for project-specific configurations that need
 
 2. **Image Configuration**:
    - Each image can specify configuration files/directories that should persist across sessions
-   - These are defined in the image's `cubbi-image.yaml` file in the `persistent_configs` section
+   - These are defined in the image's `cubbi_image.yaml` file in the `persistent_configs` section
    - Example for Goose image:
      ```yaml
      persistent_configs:
@@ -458,7 +458,7 @@ Each image is a Docker container with a standardized structure:
 /
 ├── entrypoint.sh      # Container initialization
 ├── cubbi-init.sh      # Standardized initialization script
-├── cubbi-image.yaml   # Image metadata and configuration
+├── cubbi_image.yaml   # Image metadata and configuration
 ├── tool/              # AI tool installation
 └── ssh/               # SSH server configuration
 ```
@@ -500,7 +500,7 @@ fi
 # Image-specific initialization continues...
 ```
 
-### Image Configuration (cubbi-image.yaml)
+### Image Configuration (cubbi_image.yaml)
 
 ```yaml
 name: goose

327  docs/specs/3_IMAGE.md  (Normal file)
@@ -0,0 +1,327 @@
# Cubbi Image Specifications

## Overview

This document defines the specifications and requirements for building Cubbi-compatible container images. These images serve as isolated development environments for AI tools within the Cubbi platform.

## Architecture

Cubbi images use a Python-based initialization system with a plugin architecture that separates core Cubbi functionality from tool-specific configuration.

### Core Components

1. **Image Metadata File** (`cubbi_image.yaml`) - *Tool-specific*
2. **Container Definition** (`Dockerfile`) - *Tool-specific*
3. **Python Initialization Script** (`cubbi_init.py`) - *Shared across all images*
4. **Tool-specific Plugins** (e.g., `goose_plugin.py`) - *Tool-specific*
5. **Status Tracking Scripts** (`init-status.sh`) - *Shared across all images*

## Image Metadata Specification

### Required Fields

```yaml
name: string            # Unique identifier for the image
description: string     # Human-readable description
version: string         # Semantic version (e.g., "1.0.0")
maintainer: string      # Contact information
image: string           # Docker image name and tag
```

### Environment Variables

```yaml
environment:
  - name: string          # Variable name
    description: string   # Human-readable description
    required: boolean     # Whether variable is mandatory
    sensitive: boolean    # Whether variable contains secrets
    default: string       # Default value (optional)
```

#### Standard Environment Variables

All images MUST support these standard environment variables:

- `CUBBI_USER_ID`: UID for the container user (default: 1000)
- `CUBBI_GROUP_ID`: GID for the container user (default: 1000)
- `CUBBI_RUN_COMMAND`: Command to execute after initialization
- `CUBBI_NO_SHELL`: Exit after command execution ("true"/"false")
- `CUBBI_CONFIG_DIR`: Directory for persistent configurations (default: "/cubbi-config")
- `CUBBI_MODEL`: Model to use for the tool
- `CUBBI_PROVIDER`: Provider to use for the tool

#### MCP Integration Variables

For MCP (Model Context Protocol) integration:

- `MCP_COUNT`: Number of available MCP servers
- `MCP_{idx}_NAME`: Name of MCP server at index
- `MCP_{idx}_TYPE`: Type of MCP server
- `MCP_{idx}_HOST`: Hostname of MCP server
- `MCP_{idx}_URL`: Full URL for remote MCP servers

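As an illustration (not normative), a tool plugin can read the variables above roughly as follows; the variable names match the list, everything else is a sketch:

```python
import json
import os

# Collect the MCP servers advertised to the container via environment variables
count = int(os.environ.get("MCP_COUNT", "0"))
names = json.loads(os.environ.get("MCP_NAMES", "[]"))

servers = []
for idx in range(count):
    servers.append(
        {
            "name": os.environ.get(f"MCP_{idx}_NAME"),
            "type": os.environ.get(f"MCP_{idx}_TYPE"),
            "host": os.environ.get(f"MCP_{idx}_HOST"),   # set for proxied/local servers
            "url": os.environ.get(f"MCP_{idx}_URL"),     # set for remote servers
        }
    )
```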
### Network Configuration

```yaml
ports:
  - number    # Port to expose (e.g., 8000)
```

### Storage Configuration

```yaml
volumes:
  - mountPath: string     # Path inside container
    description: string   # Purpose of the volume

persistent_configs:
  - source: string        # Path inside container
    target: string        # Path in persistent storage
    type: string          # "file" or "directory"
    description: string   # Purpose of the configuration
```

## Container Requirements

### Base System Dependencies

All images MUST include:

- `python3` - For the initialization system
- `gosu` - For secure user switching
- `bash` - For script execution

### Python Dependencies

The Cubbi initialization system requires:

- `ruamel.yaml` - For YAML configuration parsing

### User Management

Images MUST:

1. Run as root initially for setup
2. Create a non-root user (`cubbi`) with configurable UID/GID
3. Switch to the non-root user for tool execution
4. Handle user ID mapping for volume permissions

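A hedged sketch of this root-to-cubbi handoff; the command names (`groupadd`, `useradd`, `gosu`) come from the dependency list above, while the Python wrapper itself is illustrative and not the actual `cubbi_init.py` code:

```python
import os
import subprocess


def create_user_and_drop_privileges(command: list[str]) -> None:
    uid = int(os.environ.get("CUBBI_USER_ID", "1000"))
    gid = int(os.environ.get("CUBBI_GROUP_ID", "1000"))

    # Steps 1-2: still running as root, create the cubbi group/user with the requested IDs
    subprocess.run(["groupadd", "-g", str(gid), "cubbi"], check=False)
    subprocess.run(
        ["useradd", "-u", str(uid), "-g", str(gid), "-m", "-s", "/bin/bash", "cubbi"],
        check=False,
    )

    # Step 4: make the working directory writable by the mapped user
    os.chown("/app", uid, gid)

    # Step 3: hand off to the non-root user via gosu for the actual tool process
    os.execvp("gosu", ["gosu", "cubbi"] + command)
```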
### Directory Structure

Standard directories:

- `/app` - Primary working directory (owned by cubbi user)
- `/home/cubbi` - User home directory
- `/cubbi-config` - Persistent configuration storage
- `/cubbi/init.log` - Initialization log file
- `/cubbi/init.status` - Initialization status tracking
- `/cubbi/cubbi_image.yaml` - Image configuration

## Initialization System

### Shared Scripts

The following scripts are **shared across all Cubbi images** and should be copied from the main Cubbi repository:

#### Main Script (`cubbi_init.py`) - *Shared*

The standalone initialization script that:

1. Sets up user and group with proper IDs
2. Creates standard directories with correct permissions
3. Sets up persistent configuration symlinks
4. Runs tool-specific initialization
5. Executes user commands or starts interactive shell

The script supports:
- `--help` for usage information
- Argument passing to final command
- Environment variable configuration
- Plugin-based tool initialization

#### Status Tracking Script (`init-status.sh`) - *Shared*

A bash script that:
- Monitors initialization progress
- Displays logs during setup
- Ensures files exist before operations
- Switches to user shell when complete

### Tool-Specific Components

#### Tool Plugins (`{tool}_plugin.py`) - *Tool-specific*

Each tool MUST provide a plugin (`{tool}_plugin.py`) implementing:

```python
from cubbi_init import ToolPlugin

class MyToolPlugin(ToolPlugin):
    @property
    def tool_name(self) -> str:
        return "mytool"

    def initialize(self) -> bool:
        """Main tool initialization logic"""
        # Tool-specific setup
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate with available MCP servers"""
        # MCP integration logic
        return True
```

#### Image Configuration (`cubbi_image.yaml`) - *Tool-specific*

Each tool provides its own metadata file defining:
- Tool-specific environment variables
- Port configurations
- Volume mounts
- Persistent configuration mappings

## Plugin Architecture

### Plugin Discovery

Plugins are automatically discovered by:

1. Looking for `{image_name}_plugin.py` in the same directory as `cubbi_init.py`
2. Loading classes that inherit from `ToolPlugin`
3. Executing initialization and MCP integration
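
The actual loader lives in the shared `cubbi_init.py`; as a rough, hypothetical sketch of the discovery pattern described above (the helper name and details are illustrative, not the real implementation):

```python
import importlib.util
import inspect
from pathlib import Path

from cubbi_init import ToolPlugin  # shared base class


def discover_plugin(image_name: str, search_dir: Path):
    """Illustrative only: load {image_name}_plugin.py and return its ToolPlugin subclass."""
    plugin_path = search_dir / f"{image_name}_plugin.py"
    if not plugin_path.exists():
        return None

    spec = importlib.util.spec_from_file_location(f"{image_name}_plugin", plugin_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    # Return the first class defined in the module that subclasses ToolPlugin
    for _, obj in inspect.getmembers(module, inspect.isclass):
        if issubclass(obj, ToolPlugin) and obj is not ToolPlugin:
            return obj
    return None
```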

### Plugin Requirements

Tool plugins MUST:
- Inherit from `ToolPlugin` base class
- Implement `tool_name` property
- Implement `initialize()` method
- Optionally implement `integrate_mcp_servers()` method
- Use ruamel.yaml for configuration file operations

## Security Requirements

### User Isolation

- Container MUST NOT run processes as root after initialization
- All user processes MUST run as the `cubbi` user
- Proper file ownership and permissions MUST be maintained

### Secrets Management

- Sensitive environment variables MUST be marked as `sensitive: true`
- SSH keys and tokens MUST have restricted permissions (600)
- No secrets SHOULD be logged or exposed in configuration files

### Network Security

- Only necessary ports SHOULD be exposed
- Network services should be properly configured and secured

## Integration Requirements

### MCP Server Integration

Images MUST support dynamic MCP server discovery and configuration through:

1. Environment variable parsing for server count and details
2. Automatic tool configuration updates
3. Standard MCP communication protocols

### Persistent Configuration

Images MUST support:

1. Configuration persistence through volume mounts
2. Symlink creation for tool configuration directories
3. Proper ownership and permission handling

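For illustration only, a minimal sketch of the symlink setup described above; the paths and ownership defaults follow the conventions in this spec, while the helper itself is hypothetical rather than the cubbi_init implementation:

```python
import os
import shutil
from pathlib import Path


def link_persistent_config(source: str, target: str, uid: int = 1000, gid: int = 1000) -> None:
    """Replace `source` inside the container with a symlink to `target` under /cubbi-config."""
    src, dst = Path(source), Path(target)
    dst.mkdir(parents=True, exist_ok=True)

    # Move aside anything that already exists at the source path
    if src.exists() and not src.is_symlink():
        if src.is_dir():
            shutil.rmtree(src)
        else:
            src.unlink()

    if not src.is_symlink():
        src.parent.mkdir(parents=True, exist_ok=True)
        src.symlink_to(dst)

    # Keep ownership aligned with the cubbi user so the tool can write its config
    os.chown(dst, uid, gid)
    os.chown(src, uid, gid, follow_symlinks=False)
```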

## Docker Integration

### Dockerfile Requirements

```dockerfile
# Copy shared scripts from main Cubbi repository
COPY cubbi_init.py /cubbi_init.py               # Shared
COPY init-status.sh /init-status.sh             # Shared

# Copy tool-specific files
COPY {tool}_plugin.py /{tool}_plugin.py         # Tool-specific
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml   # Tool-specific

# Install Python dependencies
RUN pip install ruamel.yaml

# Make scripts executable
RUN chmod +x /cubbi_init.py /init-status.sh

# Set entrypoint
ENTRYPOINT ["/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
```

### Init Container Support

For complex initialization, use:

```dockerfile
# Use init-status.sh as entrypoint for monitoring
ENTRYPOINT ["/init-status.sh"]
```

## Best Practices

### Performance

- Use multi-stage builds to minimize image size
- Clean up package caches and temporary files
- Use specific base image versions for reproducibility

### Maintainability

- Follow consistent naming conventions
- Include comprehensive documentation
- Use semantic versioning for image releases
- Provide clear error messages and logging

### Compatibility

- Support common development workflows
- Maintain backward compatibility when possible
- Test with various project types and configurations

## Validation Checklist

Before releasing a Cubbi image, verify:

- [ ] All required metadata fields are present in `cubbi_image.yaml`
- [ ] Standard environment variables are supported
- [ ] `cubbi_init.py` script is properly installed and executable
- [ ] Tool plugin is discovered and loads correctly
- [ ] User management works correctly
- [ ] Persistent configurations are properly handled
- [ ] MCP integration functions (if applicable)
- [ ] Tool-specific functionality works as expected
- [ ] Security requirements are met
- [ ] Python dependencies are satisfied
- [ ] Status tracking works correctly
- [ ] Documentation is complete and accurate

## Examples

### Complete Goose Example

See the `/cubbi/images/goose/` directory for a complete implementation including:
- `Dockerfile` - Container definition
- `cubbi_image.yaml` - Image metadata
- `goose_plugin.py` - Tool-specific initialization
- `README.md` - Tool-specific documentation

### Migration Notes

The current Python-based system uses:
- `cubbi_init.py` - Standalone initialization script with plugin support
- `{tool}_plugin.py` - Tool-specific configuration and MCP integration
- `init-status.sh` - Status monitoring and log display
- `cubbi_image.yaml` - Image metadata and configuration
@@ -1,6 +1,6 @@
 [project]
 name = "cubbi"
-version = "0.2.0"
+version = "0.3.0"
 description = "Cubbi Container Tool"
 readme = "README.md"
 requires-python = ">=3.12"
@@ -93,21 +93,212 @@ def test_mcp_remove(cli_runner, patched_config_manager):
         ],
     )
 
-    # Mock the get_mcp and remove_mcp methods
-    with patch("cubbi.cli.mcp_manager.get_mcp") as mock_get_mcp:
-        # First make get_mcp return our MCP
-        mock_get_mcp.return_value = {
-            "name": "test-mcp",
-            "type": "remote",
-            "url": "http://test-server.com/sse",
-            "headers": {"Authorization": "Bearer test-token"},
-        }
-
-        # Remove the MCP server
-        result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
-
-        # Just check it ran successfully with exit code 0
-        assert result.exit_code == 0
+    # Mock the container_manager.list_sessions to return sessions without MCPs
+    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
+        mock_list_sessions.return_value = []
+
+        # Mock the remove_mcp method
+        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
+            # Make remove_mcp return True (successful removal)
+            mock_remove_mcp.return_value = True
+
+            # Remove the MCP server
+            result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
+
+            # Just check it ran successfully with exit code 0
+            assert result.exit_code == 0
+            assert "Removed MCP server 'test-mcp'" in result.stdout
+
+
+def test_mcp_remove_with_active_sessions(cli_runner, patched_config_manager):
+    """Test removing an MCP server that is used by active sessions."""
+    from cubbi.models import Session, SessionStatus
+
+    # Add a remote MCP server
+    patched_config_manager.set(
+        "mcps",
+        [
+            {
+                "name": "test-mcp",
+                "type": "remote",
+                "url": "http://test-server.com/sse",
+                "headers": {"Authorization": "Bearer test-token"},
+            }
+        ],
+    )
+
+    # Create mock sessions that use the MCP
+    mock_sessions = [
+        Session(
+            id="session-1",
+            name="test-session-1",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-1",
+            mcps=["test-mcp", "other-mcp"],
+        ),
+        Session(
+            id="session-2",
+            name="test-session-2",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-2",
+            mcps=["other-mcp"],  # This one doesn't use test-mcp
+        ),
+        Session(
+            id="session-3",
+            name="test-session-3",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-3",
+            mcps=["test-mcp"],  # This one uses test-mcp
+        ),
+    ]
+
+    # Mock the container_manager.list_sessions to return our sessions
+    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
+        mock_list_sessions.return_value = mock_sessions
+
+        # Mock the remove_mcp method
+        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
+            # Make remove_mcp return True (successful removal)
+            mock_remove_mcp.return_value = True
+
+            # Remove the MCP server
+            result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
+
+            # Check it ran successfully with exit code 0
+            assert result.exit_code == 0
+            assert "Removed MCP server 'test-mcp'" in result.stdout
+            # Check warning about affected sessions
+            assert (
+                "Warning: Found 2 active sessions using MCP 'test-mcp'" in result.stdout
+            )
+            assert "session-1" in result.stdout
+            assert "session-3" in result.stdout
+            # session-2 should not be mentioned since it doesn't use test-mcp
+            assert "session-2" not in result.stdout
+
+
+def test_mcp_remove_nonexistent(cli_runner, patched_config_manager):
+    """Test removing a non-existent MCP server."""
+    # No MCPs configured
+    patched_config_manager.set("mcps", [])
+
+    # Mock the container_manager.list_sessions to return empty list
+    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
+        mock_list_sessions.return_value = []
+
+        # Mock the remove_mcp method to return False (MCP not found)
+        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
+            mock_remove_mcp.return_value = False
+
+            # Try to remove a non-existent MCP server
+            result = cli_runner.invoke(app, ["mcp", "remove", "nonexistent-mcp"])
+
+            # Check it ran successfully but reported not found
+            assert result.exit_code == 0
+            assert "MCP server 'nonexistent-mcp' not found" in result.stdout
+
+
+def test_session_mcps_attribute():
+    """Test that Session model has mcps attribute and can be populated correctly."""
+    from cubbi.models import Session, SessionStatus
+
+    # Test that Session can be created with mcps attribute
+    session = Session(
+        id="test-session",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        container_id="test-container",
+        mcps=["mcp1", "mcp2"],
+    )
+
+    assert session.mcps == ["mcp1", "mcp2"]
+
+    # Test that Session can be created with empty mcps list
+    session_empty = Session(
+        id="test-session-2",
+        name="test-session-2",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        container_id="test-container-2",
+    )
+
+    assert session_empty.mcps == []  # Should default to empty list
+
+
+def test_session_mcps_from_container_labels():
+    """Test that Session mcps are correctly populated from container labels."""
+    from unittest.mock import Mock
+    from cubbi.container import ContainerManager
+
+    # Mock a container with MCP labels
+    mock_container = Mock()
+    mock_container.id = "test-container-id"
+    mock_container.status = "running"
+    mock_container.labels = {
+        "cubbi.session": "true",
+        "cubbi.session.id": "test-session",
+        "cubbi.session.name": "test-session-name",
+        "cubbi.image": "goose",
+        "cubbi.mcps": "mcp1,mcp2,mcp3",  # Test with multiple MCPs
+    }
+    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
+
+    # Mock Docker client
+    mock_client = Mock()
+    mock_client.containers.list.return_value = [mock_container]
+
+    # Create container manager with mocked client
+    with patch("cubbi.container.docker.from_env") as mock_docker:
+        mock_docker.return_value = mock_client
+        mock_client.ping.return_value = True
+
+        container_manager = ContainerManager()
+        sessions = container_manager.list_sessions()
+
+        assert len(sessions) == 1
+        session = sessions[0]
+        assert session.id == "test-session"
+        assert session.mcps == ["mcp1", "mcp2", "mcp3"]
+
+
+def test_session_mcps_from_empty_container_labels():
+    """Test that Session mcps are correctly handled when container has no MCP labels."""
+    from unittest.mock import Mock
+    from cubbi.container import ContainerManager
+
+    # Mock a container without MCP labels
+    mock_container = Mock()
+    mock_container.id = "test-container-id"
+    mock_container.status = "running"
+    mock_container.labels = {
+        "cubbi.session": "true",
+        "cubbi.session.id": "test-session",
+        "cubbi.session.name": "test-session-name",
+        "cubbi.image": "goose",
+        # No cubbi.mcps label
+    }
+    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
+
+    # Mock Docker client
+    mock_client = Mock()
+    mock_client.containers.list.return_value = [mock_container]
+
+    # Create container manager with mocked client
+    with patch("cubbi.container.docker.from_env") as mock_docker:
+        mock_docker.return_value = mock_client
+        mock_client.ping.return_value = True
+
+        container_manager = ContainerManager()
+        sessions = container_manager.list_sessions()
+
+        assert len(sessions) == 1
+        session = sessions[0]
+        assert session.id == "test-session"
+        assert session.mcps == []  # Should be empty list when no MCPs
+
+
 @pytest.mark.requires_docker

4  uv.lock  (generated)

@@ -1,5 +1,5 @@
 version = 1
-revision = 2
+revision = 3
 requires-python = ">=3.12"
 
 [[package]]
@@ -78,7 +78,7 @@ wheels = [
 
 [[package]]
 name = "cubbi"
-version = "0.2.0"
+version = "0.3.0"
 source = { editable = "." }
 dependencies = [
     { name = "docker" },