Mirror of https://github.com/Monadical-SAS/cubbi.git, synced 2025-12-21 12:49:07 +00:00

Compare commits: v0.3.0 ... feature/ge (3 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | b173bcd08c |  |
|  | 96243a99e4 |  |
|  | ae20e6a455 |  |
.github/workflows/conventional_commit_pr.yml (vendored, new file, 17 lines)
@@ -0,0 +1,17 @@

```yaml
name: Conventional commit PR

on: [pull_request]

jobs:
  cog_check_job:
    runs-on: ubuntu-latest
    name: check conventional commit compliance
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          # pick the pr HEAD instead of the merge commit
          ref: ${{ github.event.pull_request.head.sha }}

      - name: Conventional commit check
        uses: cocogitto/cocogitto-action@v3
```
.github/workflows/pytests.yml (vendored, 2 changes)
@@ -34,7 +34,7 @@ jobs:

```yaml
      run: |
        uv tool install --with-editable . .
        cubbi image build goose
        cubbi image build aider
        cubbi image build gemini-cli

    - name: Tests
      run: |
```
CHANGELOG.md (152 changes)
@@ -1,158 +1,6 @@

# CHANGELOG

## v0.3.0 (2025-07-31)

### Bug Fixes

- Claudecode and opencode arm64 images ([#21](https://github.com/Monadical-SAS/cubbi/pull/21), [`dba7a7c`](https://github.com/Monadical-SAS/cubbi/commit/dba7a7c1efcc04570a92ecbc4eee39eb6353aaea))
- Update readme ([`4958b07`](https://github.com/Monadical-SAS/cubbi/commit/4958b07401550fb5a6751b99a257eda6c4558ea4))

### Continuous Integration

- Remove conventional commit, as only PR is required ([`afae8a1`](https://github.com/Monadical-SAS/cubbi/commit/afae8a13e1ea02801b2e5c9d5c84aa65a32d637c))

### Features

- Add `--mcp-type` option for remote MCP servers ([`d41faf6`](https://github.com/Monadical-SAS/cubbi/commit/d41faf6b3072d4f8bdb2adc896125c7fd0d6117d))

  Auto-detects the connection type from the URL (`/sse` -> `sse`, `/mcp` -> `streamable_http`) or allows manual specification. Updates the goose plugin to use the actual MCP type instead of a hardcoded `sse`.

- Add Claude Code image support ([#16](https://github.com/Monadical-SAS/cubbi/pull/16), [`b28c2bd`](https://github.com/Monadical-SAS/cubbi/commit/b28c2bd63e324f875b2d862be9e0afa4a7a17ffc))

  Adds a new Cubbi image for Claude Code (Anthropic's official CLI) with: full Claude Code CLI functionality via the NPM package; secure API key management with multiple authentication options; enterprise support (Bedrock, Vertex AI, proxy configuration); persistent configuration and cache directories; a comprehensive test suite and documentation. The image allows users to run Claude Code in containers with proper isolation, persistent settings, and seamless Cubbi integration, and gracefully handles missing API keys to allow flexible authentication. Also adds optional Claude Code API keys to `container.py` for enterprise deployments, plus pre-commit fixes.

- Add configuration override in session create with `--config`/`-c` ([`672b8a8`](https://github.com/Monadical-SAS/cubbi/commit/672b8a8e315598d98f40d269dfcfbde6203cbb57))

- Add MCP tracking to sessions ([#19](https://github.com/Monadical-SAS/cubbi/pull/19), [`d750e64`](https://github.com/Monadical-SAS/cubbi/commit/d750e64608998f6f3a03928bba18428f576b412f))

  Adds an `mcps` field to the Session model to track active MCP servers and populates it from container labels in `ContainerManager`. Enhances the MCP remove command to warn when removing servers used by active sessions.

- Add network filtering with domain restrictions ([#22](https://github.com/Monadical-SAS/cubbi/pull/22), [`2eb15a3`](https://github.com/Monadical-SAS/cubbi/commit/2eb15a31f8bb97f93461bea5e567cc2ccde3f86c))

  Removes config override logging to prevent API key exposure. Adds a `--domains` flag to restrict container network access to specific domains/ports, integrates the `monadicalsas/network-filter` container for network isolation, supports domain patterns like `example.com:443`, adds a `defaults.domains` configuration option, automatically handles the network-filter container lifecycle, and prevents conflicts between `--domains` and `--network`. Adds the `--domains` option to the README usage examples; the wildcard domain example was removed from the `--domains` help, as wildcard domains are not currently supported by network-filter.

- Add ripgrep and openssh-client in images ([#15](https://github.com/Monadical-SAS/cubbi/pull/15), [`e70ec35`](https://github.com/Monadical-SAS/cubbi/commit/e70ec3538ba4e02a60afedca583da1c35b7b6d7a))

- Add sudo and sudoers ([#20](https://github.com/Monadical-SAS/cubbi/pull/20), [`9c8ddbb`](https://github.com/Monadical-SAS/cubbi/commit/9c8ddbb3f3f2fc97db9283898b6a85aee7235fae))

  Also updates `cubbi/images/cubbi_init.py`.

- Implement Aider AI pair programming support ([#17](https://github.com/Monadical-SAS/cubbi/pull/17), [`fc0d6b5`](https://github.com/Monadical-SAS/cubbi/commit/fc0d6b51af12ddb0bd8655309209dd88e7e4d6f1))

  Adds a comprehensive Aider Docker image with Python 3.12 and system pip installation; implements `aider_plugin.py` for secure API key management and environment configuration; supports multiple LLM providers (OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter); adds persistent configuration for `~/.aider/` and `~/.cache/aider/`; creates documentation with usage examples and troubleshooting; includes an automated test suite with 6 test categories covering all functionality; updates `container.py` to support `DEEPSEEK_API_KEY` and `GEMINI_API_KEY`; and integrates with the Cubbi CLI for seamless session management. Follow-up commits fix pytest for aider and pre-commit.

- Include new image opencode ([#14](https://github.com/Monadical-SAS/cubbi/pull/14), [`5fca51e`](https://github.com/Monadical-SAS/cubbi/commit/5fca51e5152dcf7503781eb707fa04414cf33c05))

  Also updates the readme.

- Support config `openai.url` for goose/opencode/aider ([`da5937e`](https://github.com/Monadical-SAS/cubbi/commit/da5937e70829b88a66f96c3ce7be7dacfc98facb))

### Refactoring

- New image layout and organization ([#13](https://github.com/Monadical-SAS/cubbi/pull/13), [`e5121dd`](https://github.com/Monadical-SAS/cubbi/commit/e5121ddea4230e78a05a85c4ce668e0c169b5ace))

  Reworks how images are defined in order to create wrappers for other tools, fixes issues with ownership, lets images share information with other image types, and updates the readme.

## v0.2.0 (2025-05-21)

### Continuous Integration
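The `/sse` and `/mcp` URL suffix convention described in the `--mcp-type` changelog entry can be sketched as a small standalone function. This is a simplified illustration of that convention, not the exact cubbi implementation; `detect_mcp_type` is a hypothetical name:

```python
def detect_mcp_type(url: str, mcp_type: str = "auto") -> str:
    """Resolve an MCP connection type, auto-detecting from the URL suffix
    (/sse -> sse, /mcp -> streamable_http) when 'auto' is requested."""
    valid = ("sse", "streamable_http", "stdio")
    if mcp_type != "auto":
        if mcp_type not in valid:
            raise ValueError(f"Invalid MCP type '{mcp_type}'")
        return mcp_type
    if url.endswith("/sse"):
        return "sse"
    if url.endswith("/mcp"):
        return "streamable_http"
    raise ValueError(f"Cannot auto-detect MCP type from URL '{url}'")

print(detect_mcp_type("https://example.com/sse"))  # sse
print(detect_mcp_type("https://example.com/mcp"))  # streamable_http
```

A URL that matches neither suffix raises instead of guessing, mirroring the CLI's behavior of asking the user to pass the type explicitly.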
README.md (16 changes)
````diff
@@ -2,7 +2,7 @@

 # Cubbi - Container Tool

-Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments, with support for MCP servers.
+Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments. It works with both local Docker and a dedicated remote web service that manages containers in a Docker-in-Docker (DinD) environment. Cubbi also supports connecting to MCP (Model Control Protocol) servers to extend AI tools with additional capabilities.

 
 
@@ -98,9 +98,6 @@ cubbix /path/to/project
 # Connect to external Docker networks
 cubbix --network teamnet --network dbnet

-# Restrict network access to specific domains
-cubbix --domains github.com --domains "api.example.com:443"
-
 # Connect to MCP servers for extended capabilities
 cubbix --mcp github --mcp jira

@@ -128,16 +125,7 @@ cubbix --ssh

 ## 🖼️ Image Management

-Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools
-
-**Supported Images**
-
-| Image Name | Langtrace Support |
-|------------|-------------------|
-| goose | yes |
-| opencode | no |
-| claudecode | no |
-| aider | no |
+Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools:

 ```bash
 # List available images
````
cubbi/cli.py (130 changes)
```diff
@@ -173,17 +173,6 @@ def create_session(
         None, "--provider", "-p", help="Provider to use"
     ),
     ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
-    config: List[str] = typer.Option(
-        [],
-        "--config",
-        "-c",
-        help="Override configuration values (KEY=VALUE) for this session only",
-    ),
-    domains: List[str] = typer.Option(
-        [],
-        "--domains",
-        help="Restrict network access to specified domains/ports (e.g., 'example.com:443', 'api.github.com')",
-    ),
     verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
 ) -> None:
     """Create a new Cubbi session
```
```diff
@@ -200,66 +189,16 @@ def create_session(
     target_gid = gid if gid is not None else os.getgid()
     console.print(f"Using UID: {target_uid}, GID: {target_gid}")

-    # Create a temporary user config manager with overrides
-    temp_user_config = UserConfigManager()
-
-    # Parse and apply config overrides
-    config_overrides = {}
-    for config_item in config:
-        if "=" in config_item:
-            key, value = config_item.split("=", 1)
-            # Convert string value to appropriate type
-            if value.lower() == "true":
-                typed_value = True
-            elif value.lower() == "false":
-                typed_value = False
-            elif value.isdigit():
-                typed_value = int(value)
-            else:
-                typed_value = value
-            config_overrides[key] = typed_value
-        else:
-            console.print(
-                f"[yellow]Warning: Ignoring invalid config format: {config_item}. Use KEY=VALUE.[/yellow]"
-            )
-
-    # Apply overrides to temp config (without saving)
-    for key, value in config_overrides.items():
-        # Handle shorthand service paths (e.g., "langfuse.url")
-        if (
-            "." in key
-            and not key.startswith("services.")
-            and not any(
-                key.startswith(section + ".")
-                for section in ["defaults", "docker", "remote", "ui"]
-            )
-        ):
-            service, setting = key.split(".", 1)
-            key = f"services.{service}.{setting}"
-
-        # Split the key path and navigate to set the value
-        parts = key.split(".")
-        config_dict = temp_user_config.config
-
-        # Navigate to the containing dictionary
-        for part in parts[:-1]:
-            if part not in config_dict:
-                config_dict[part] = {}
-            config_dict = config_dict[part]
-
-        # Set the value without saving
-        config_dict[parts[-1]] = value
-
-    # Use default image from user configuration (with overrides applied)
+    # Use default image from user configuration
     if not image:
-        image_name = temp_user_config.get(
+        image_name = user_config.get(
             "defaults.image", config_manager.config.defaults.get("image", "goose")
         )
     else:
         image_name = image

-    # Start with environment variables from user configuration (with overrides applied)
-    environment = temp_user_config.get_environment_variables()
+    # Start with environment variables from user configuration
+    environment = user_config.get_environment_variables()

     # Override with environment variables from command line
     for var in env:
```
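The KEY=VALUE type conversion used by the `--config` override parsing in the hunk above can be sketched in isolation. This is a simplified illustration of the same conversion rules (true/false to bool, digits to int, everything else kept as a string); `parse_overrides` is a hypothetical name for this sketch:

```python
def parse_overrides(items):
    """Parse KEY=VALUE strings into a dict, converting booleans and
    integers the same way the --config parsing above does."""
    overrides = {}
    for item in items:
        if "=" not in item:
            continue  # the CLI warns about and skips invalid entries
        key, value = item.split("=", 1)
        if value.lower() == "true":
            typed = True
        elif value.lower() == "false":
            typed = False
        elif value.isdigit():
            typed = int(value)
        else:
            typed = value
        overrides[key] = typed
    return overrides

print(parse_overrides(["defaults.connect=false", "ui.width=120", "langfuse.url=http://localhost"]))
```

Splitting on the first `=` only (`split("=", 1)`) matters: values such as URLs with query strings may themselves contain `=`.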
```diff
@@ -275,7 +214,7 @@ def create_session(
     volume_mounts = {}

     # Get default volumes from user config
-    default_volumes = temp_user_config.get("defaults.volumes", [])
+    default_volumes = user_config.get("defaults.volumes", [])

     # Combine default volumes with user-specified volumes
     all_volumes = default_volumes + list(volume)

@@ -302,27 +241,15 @@ def create_session(
     )

     # Get default networks from user config
-    default_networks = temp_user_config.get("defaults.networks", [])
+    default_networks = user_config.get("defaults.networks", [])

     # Combine default networks with user-specified networks, removing duplicates
     all_networks = list(set(default_networks + network))

-    # Get default domains from user config
-    default_domains = temp_user_config.get("defaults.domains", [])
-
-    # Combine default domains with user-specified domains
-    all_domains = default_domains + list(domains)
-
-    # Check for conflict between network and domains
-    if all_domains and all_networks:
-        console.print(
-            "[yellow]Warning: --domains cannot be used with --network. Network restrictions will take precedence.[/yellow]"
-        )
-
     # Get default MCPs from user config if none specified
     all_mcps = mcp if isinstance(mcp, list) else []
     if not all_mcps:
-        default_mcps = temp_user_config.get("defaults.mcps", [])
+        default_mcps = user_config.get("defaults.mcps", [])
         all_mcps = default_mcps

     if default_mcps:

@@ -331,9 +258,6 @@ def create_session(
     if all_networks:
         console.print(f"Networks: {', '.join(all_networks)}")

-    if all_domains:
-        console.print(f"Domain restrictions: {', '.join(all_domains)}")
-
     # Show volumes that will be mounted
     if volume_mounts:
         console.print("Volumes:")

@@ -353,16 +277,6 @@ def create_session(
             "[yellow]Warning: --no-shell is ignored without --run[/yellow]"
         )

-    # Use model and provider from config overrides if not explicitly provided
-    final_model = (
-        model if model is not None else temp_user_config.get("defaults.model")
-    )
-    final_provider = (
-        provider
-        if provider is not None
-        else temp_user_config.get("defaults.provider")
-    )
-
     session = container_manager.create_session(
         image_name=image_name,
         project=path_or_url,

@@ -378,9 +292,8 @@ def create_session(
         uid=target_uid,
         gid=target_gid,
         ssh=ssh,
-        model=final_model,
-        provider=final_provider,
-        domains=all_domains,
+        model=model,
+        provider=provider,
     )

     if session:

@@ -394,7 +307,7 @@ def create_session(
             console.print(f" {container_port} -> {host_port}")

     # Auto-connect based on user config, unless overridden by --no-connect flag or --no-shell
-    auto_connect = temp_user_config.get("defaults.connect", True)
+    auto_connect = user_config.get("defaults.connect", True)

     # When --no-shell is used with --run, show logs instead of connecting
     if no_shell and run_command:
```
```diff
@@ -1593,11 +1506,6 @@ def add_mcp(
 def add_remote_mcp(
     name: str = typer.Argument(..., help="MCP server name"),
     url: str = typer.Argument(..., help="URL of the remote MCP server"),
-    mcp_type: str = typer.Option(
-        "auto",
-        "--mcp-type",
-        help="MCP connection type: sse, streamable_http, stdio, or auto (default: auto)",
-    ),
     header: List[str] = typer.Option(
         [], "--header", "-H", help="HTTP headers (format: KEY=VALUE)"
     ),

@@ -1606,22 +1514,6 @@ def add_remote_mcp(
     ),
 ) -> None:
     """Add a remote MCP server"""
-    if mcp_type == "auto":
-        if url.endswith("/sse"):
-            mcp_type = "sse"
-        elif url.endswith("/mcp"):
-            mcp_type = "streamable_http"
-        else:
-            console.print(
-                f"[red]Cannot auto-detect MCP type from URL '{url}'. Please specify --mcp-type (sse, streamable_http, or stdio)[/red]"
-            )
-            return
-    elif mcp_type not in ["sse", "streamable_http", "stdio"]:
-        console.print(
-            f"[red]Invalid MCP type '{mcp_type}'. Must be: sse, streamable_http, stdio, or auto[/red]"
-        )
-        return
-
     # Parse headers
     headers = {}
     for h in header:

@@ -1636,7 +1528,7 @@ def add_remote_mcp(
     try:
         with console.status(f"Adding remote MCP server '{name}'..."):
             mcp_manager.add_remote_mcp(
-                name, url, headers, mcp_type=mcp_type, add_as_default=not no_default
+                name, url, headers, add_as_default=not no_default
             )

     console.print(f"[green]Added remote MCP server '{name}'[/green]")
```
```diff
@@ -64,7 +64,6 @@ class ConfigManager:
         },
         defaults={
             "image": "goose",
-            "domains": [],
         },
     )
```
```diff
@@ -12,7 +12,7 @@ from docker.errors import DockerException, ImageNotFound

 from .config import ConfigManager
 from .mcp import MCPManager
-from .models import Image, Session, SessionStatus
+from .models import Session, SessionStatus
 from .session import SessionManager
 from .user_config import UserConfigManager
```
```diff
@@ -107,21 +107,12 @@ class ContainerManager:
         elif container.status == "created":
             status = SessionStatus.CREATING

-        # Get MCP list from container labels
-        mcps_str = labels.get("cubbi.mcps", "")
-        mcps = (
-            [mcp.strip() for mcp in mcps_str.split(",") if mcp.strip()]
-            if mcps_str
-            else []
-        )
-
         session = Session(
             id=session_id,
             name=labels.get("cubbi.session.name", f"cubbi-{session_id}"),
             image=labels.get("cubbi.image", "unknown"),
             status=status,
             container_id=container_id,
-            mcps=mcps,
         )

         # Get port mappings
```
```diff
@@ -162,7 +153,6 @@ class ContainerManager:
         model: Optional[str] = None,
         provider: Optional[str] = None,
         ssh: bool = False,
-        domains: Optional[List[str]] = None,
     ) -> Optional[Session]:
         """Create a new Cubbi session
```
```diff
@@ -183,26 +173,13 @@ class ContainerManager:
             model: Optional model to use
             provider: Optional provider to use
             ssh: Whether to start the SSH server in the container (default: False)
-            domains: Optional list of domains to restrict network access to (uses network-filter)
         """
         try:
-            # Try to get image from config first
+            # Validate image exists
             image = self.config_manager.get_image(image_name)
             if not image:
-                # If not found in config, treat it as a Docker image name
-                print(
-                    f"Image '{image_name}' not found in Cubbi config, using as Docker image..."
-                )
-                image = Image(
-                    name=image_name,
-                    description=f"Docker image: {image_name}",
-                    version="latest",
-                    maintainer="unknown",
-                    image=image_name,
-                    ports=[],
-                    volumes=[],
-                    persistent_configs=[],
-                )
+                print(f"Image '{image_name}' not found")
+                return None

             # Generate session ID and name
             session_id = self._generate_session_id()
```
```diff
@@ -222,20 +199,17 @@ class ContainerManager:
             # Set SSH environment variable
             env_vars["CUBBI_SSH_ENABLED"] = "true" if ssh else "false"

-            # Pass some environment from host environment to container for local development
-            keys = [
+            # Pass API keys from host environment to container for local development
+            api_keys = [
                 "OPENAI_API_KEY",
                 "OPENAI_URL",
                 "ANTHROPIC_API_KEY",
-                "ANTHROPIC_AUTH_TOKEN",
-                "ANTHROPIC_CUSTOM_HEADERS",
                 "OPENROUTER_API_KEY",
                 "GOOGLE_API_KEY",
                 "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
                 "LANGFUSE_INIT_PROJECT_SECRET_KEY",
                 "LANGFUSE_URL",
             ]
-            for key in keys:
+            for key in api_keys:
                 if key in os.environ and key not in env_vars:
                     env_vars[key] = os.environ[key]
```
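The host-to-container environment pass-through in the hunk above follows a simple copy-if-absent rule: a host variable is forwarded only when it exists on the host and has not already been set for the container. A minimal standalone sketch, with `pass_through` as a hypothetical name:

```python
import os

def pass_through(env_vars, keys):
    """Copy selected variables from the host environment into the container
    environment dict, without overwriting values that are already set."""
    for key in keys:
        if key in os.environ and key not in env_vars:
            env_vars[key] = os.environ[key]
    return env_vars

os.environ["OPENAI_API_KEY"] = "sk-demo"
print(pass_through({}, ["OPENAI_API_KEY", "MISSING_KEY"]))  # {'OPENAI_API_KEY': 'sk-demo'}
```

The "not in env_vars" guard is what lets explicit per-session settings win over ambient host variables.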
```diff
@@ -457,7 +431,7 @@ class ContainerManager:
             )

             # Set type-specific information
-            env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("mcp_type", "sse")
+            env_vars[f"MCP_{idx}_TYPE"] = "remote"
             env_vars[f"MCP_{idx}_NAME"] = mcp_name

             # Set environment variables for MCP count if we have any
```
```diff
@@ -531,99 +505,17 @@ class ContainerManager:
                 "defaults.provider", ""
             )

-            # Handle network-filter if domains are specified
-            network_filter_container = None
-            network_mode = None
-
-            if domains:
-                # Check for conflicts
-                if networks:
-                    print(
-                        "[yellow]Warning: Cannot use --domains with --network. Using domain restrictions only.[/yellow]"
-                    )
-                    networks = []
-                    network_list = [default_network]
-
-                # Create network-filter container
-                network_filter_name = f"cubbi-network-filter-{session_id}"
-
-                # Pull network-filter image if needed
-                network_filter_image = "monadicalsas/network-filter:latest"
-                try:
-                    self.client.images.get(network_filter_image)
-                except ImageNotFound:
-                    print(f"Pulling network-filter image {network_filter_image}...")
-                    self.client.images.pull(network_filter_image)
-
-                # Create and start network-filter container
-                print("Creating network-filter container for domain restrictions...")
-                try:
-                    # First check if a network-filter container already exists with this name
-                    try:
-                        existing = self.client.containers.get(network_filter_name)
-                        print(
-                            f"Removing existing network-filter container {network_filter_name}"
-                        )
-                        existing.stop()
-                        existing.remove()
-                    except DockerException:
-                        pass  # Container doesn't exist, which is fine
-
-                    network_filter_container = self.client.containers.run(
-                        image=network_filter_image,
-                        name=network_filter_name,
-                        hostname=network_filter_name,
-                        detach=True,
-                        environment={"ALLOWED_DOMAINS": ",".join(domains)},
-                        labels={
-                            "cubbi.network-filter": "true",
-                            "cubbi.session.id": session_id,
-                            "cubbi.session.name": session_name,
-                        },
-                        cap_add=["NET_ADMIN"],  # Required for iptables
-                        remove=False,  # Don't auto-remove on stop
-                    )
-
-                    # Wait for container to be running
-                    import time
-
-                    for i in range(10):  # Wait up to 10 seconds
-                        network_filter_container.reload()
-                        if network_filter_container.status == "running":
-                            break
-                        time.sleep(1)
-                    else:
-                        raise Exception(
-                            f"Network-filter container failed to start. Status: {network_filter_container.status}"
-                        )
-
-                    # Use container ID instead of name for network_mode
-                    network_mode = f"container:{network_filter_container.id}"
-                    print(
-                        f"Network restrictions enabled for domains: {', '.join(domains)}"
-                    )
-                    print(f"Using network mode: {network_mode}")
-
-                except Exception as e:
-                    print(f"[red]Error creating network-filter container: {e}[/red]")
-                    raise
-
-            # Warn about MCP limitations when using network-filter
-            if mcp_names:
-                print(
-                    "[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]"
-                )
-
             # Create container
-            container_params = {
-                "image": image.image,
-                "name": session_name,
-                "detach": True,
-                "tty": True,
-                "stdin_open": True,
-                "environment": env_vars,
-                "volumes": session_volumes,
-                "labels": {
+            container = self.client.containers.create(
+                image=image.image,
+                name=session_name,
+                hostname=session_name,
+                detach=True,
+                tty=True,
+                stdin_open=True,
+                environment=env_vars,
+                volumes=session_volumes,
+                labels={
                     "cubbi.session": "true",
                     "cubbi.session.id": session_id,
                     "cubbi.session.name": session_name,
```
```diff
@@ -632,29 +524,17 @@ class ContainerManager:
                     "cubbi.project_name": project_name or "",
                     "cubbi.mcps": ",".join(mcp_names) if mcp_names else "",
                 },
-                "command": container_command,  # Set the command
-                "entrypoint": entrypoint,  # Set the entrypoint (might be None)
-                "ports": {f"{port}/tcp": None for port in image.ports},
-            }
-
-            # Use network_mode if domains are specified, otherwise use regular network
-            if network_mode:
-                container_params["network_mode"] = network_mode
-                # Cannot set hostname when using network_mode
-            else:
-                container_params["hostname"] = session_name
-                container_params["network"] = network_list[
-                    0
-                ]  # Connect to the first network initially
-
-            container = self.client.containers.create(**container_params)
+                network=network_list[0],  # Connect to the first network initially
+                command=container_command,  # Set the command
+                entrypoint=entrypoint,  # Set the entrypoint (might be None)
+                ports={f"{port}/tcp": None for port in image.ports},
+            )

             # Start container
             container.start()

             # Connect to additional networks (after the first one in network_list)
-            # Note: Cannot connect to networks when using network_mode
-            if len(network_list) > 1 and not network_mode:
+            if len(network_list) > 1:
                 for network_name in network_list[1:]:
                     try:
                         # Get or create the network

@@ -675,35 +555,32 @@ class ContainerManager:
                     container.reload()

             # Connect directly to each MCP's dedicated network
-            # Note: Cannot connect to networks when using network_mode
-            if not network_mode:
-                for mcp_name in mcp_names:
-                    try:
-                        # Get the dedicated network for this MCP
-                        dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
-
-                        try:
-                            network = self.client.networks.get(dedicated_network_name)
-                            # Connect the session container to the MCP's dedicated network
-                            network.connect(container, aliases=[session_name])
-                            print(
-                                f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
-                            )
-                        except DockerException:
-                            # print(
-                            #     f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
-                            # )
-                            # commented out, may be accessible through another attached network, it's
-                            # not mandatory here.
-                            pass
-
-                    except Exception as e:
-                        print(f"Error connecting session to MCP '{mcp_name}': {e}")
+            for mcp_name in mcp_names:
+                try:
+                    # Get the dedicated network for this MCP
+                    dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
+
+                    try:
+                        network = self.client.networks.get(dedicated_network_name)
+                        # Connect the session container to the MCP's dedicated network
+                        network.connect(container, aliases=[session_name])
+                        print(
+                            f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
+                        )
+                    except DockerException:
+                        # print(
+                        #     f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
+                        # )
+                        # commented out, may be accessible through another attached network, it's
+                        # not mandatory here.
+                        pass
+
+                except Exception as e:
+                    print(f"Error connecting session to MCP '{mcp_name}': {e}")

             # Connect to additional user-specified networks
-            # Note: Cannot connect to networks when using network_mode
-            if networks and not network_mode:
+            if networks:
                 for network_name in networks:
                     # Check if already connected to this network
                     # NetworkSettings.Networks contains a dict where keys are network names
```
```diff
@@ -762,15 +639,6 @@ class ContainerManager:

         except DockerException as e:
             print(f"Error creating session: {e}")
-
-            # Clean up network-filter container if it was created
-            if network_filter_container:
-                try:
-                    network_filter_container.stop()
-                    network_filter_container.remove()
-                except Exception:
-                    pass
-
             return None

     def close_session(self, session_id: str) -> bool:

@@ -869,24 +737,9 @@ class ContainerManager:
             return False

         try:
-            # First, close the main session container
             container = self.client.containers.get(session.container_id)
             container.stop()
             container.remove()
-
-            # Check for and close any associated network-filter container
-            network_filter_name = f"cubbi-network-filter-{session.id}"
-            try:
-                network_filter_container = self.client.containers.get(
-                    network_filter_name
-                )
-                logger.info(f"Stopping network-filter container {network_filter_name}")
-                network_filter_container.stop()
-                network_filter_container.remove()
-            except DockerException:
-                # Network-filter container might not exist, which is fine
-                pass
-
             self.session_manager.remove_session(session.id)
             return True
         except DockerException as e:

@@ -920,19 +773,6 @@ class ContainerManager:
             # Stop and remove container
             container.stop()
             container.remove()
-
-            # Check for and close any associated network-filter container
-            network_filter_name = f"cubbi-network-filter-{session.id}"
-            try:
-                network_filter_container = self.client.containers.get(
-                    network_filter_name
-                )
-                network_filter_container.stop()
-                network_filter_container.remove()
-            except DockerException:
-                # Network-filter container might not exist, which is fine
-                pass
-
             # Remove from session storage
             self.session_manager.remove_session(session.id)
```
@@ -1,277 +0,0 @@
# Aider for Cubbi

This image provides Aider (AI pair programming) in a Cubbi container environment.

## Overview

Aider is an AI pair programming tool that works in your terminal. This Cubbi image integrates Aider with secure API key management, persistent configuration, and support for multiple LLM providers.

## Features

- **Multiple LLM Support**: Works with OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter, and more
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Git Integration**: Automatic commits and git awareness
- **Multi-Language Support**: Works with 100+ programming languages

## Quick Start

### 1. Set up API Key

```bash
# For OpenAI (GPT models)
uv run -m cubbi.cli config set services.openai.api_key "your-openai-key"

# For Anthropic (Claude models)
uv run -m cubbi.cli config set services.anthropic.api_key "your-anthropic-key"

# For DeepSeek (recommended for cost-effectiveness)
uv run -m cubbi.cli config set services.deepseek.api_key "your-deepseek-key"
```

### 2. Run Aider Environment

```bash
# Start Aider container with your project
uv run -m cubbi.cli session create --image aider /path/to/your/project

# Or without a project
uv run -m cubbi.cli session create --image aider
```

### 3. Use Aider

```bash
# Basic usage
aider

# With specific model
aider --model sonnet

# With specific files
aider main.py utils.py

# One-shot request
aider --message "Add error handling to the login function"
```

## Configuration

### Supported API Keys

- `OPENAI_API_KEY`: OpenAI GPT models (GPT-4, GPT-4o, etc.)
- `ANTHROPIC_API_KEY`: Anthropic Claude models (Sonnet, Haiku, etc.)
- `DEEPSEEK_API_KEY`: DeepSeek models (cost-effective option)
- `GEMINI_API_KEY`: Google Gemini models
- `OPENROUTER_API_KEY`: OpenRouter (access to many models)

### Additional Configuration

- `AIDER_MODEL`: Default model to use (e.g., "sonnet", "o3-mini", "deepseek")
- `AIDER_AUTO_COMMITS`: Enable automatic git commits (default: true)
- `AIDER_DARK_MODE`: Enable dark mode interface (default: false)
- `AIDER_API_KEYS`: Additional API keys in format "provider1=key1,provider2=key2"

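As a rough illustration of the `AIDER_API_KEYS` format, here is a minimal sketch of how the image's init plugin expands such a value into per-provider environment variables (it assumes simple `provider=key` pairs with no commas inside keys; `parse_aider_api_keys` is an illustrative name, not part of the image):

```python
def parse_aider_api_keys(value: str) -> dict[str, str]:
    """Expand "provider1=key1,provider2=key2" into *_API_KEY variables."""
    env_vars = {}
    for pair in value.split(","):
        if "=" in pair:
            # split on the first "=" only, so keys may contain "="
            provider, key = pair.strip().split("=", 1)
            env_vars[f"{provider.upper()}_API_KEY"] = key
    return env_vars

print(parse_aider_api_keys("deepseek=abc,openrouter=xyz"))
```
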
### Network Configuration

- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL

## Usage Examples

### Basic AI Pair Programming

```bash
# Start Aider with your project
uv run -m cubbi.cli session create --image aider /path/to/project

# Inside the container:
aider                         # Start interactive session
aider main.py                 # Work on specific file
aider --message "Add tests"   # One-shot request
```

### Model Selection

```bash
# Use Claude Sonnet
aider --model sonnet

# Use GPT-4o
aider --model gpt-4o

# Use DeepSeek (cost-effective)
aider --model deepseek

# Use OpenRouter
aider --model openrouter/anthropic/claude-3.5-sonnet
```

### Advanced Features

```bash
# Work with multiple files
aider src/main.py tests/test_main.py

# Auto-commit changes
aider --auto-commits

# Read-only mode (won't edit files)
aider --read

# Apply a specific change
aider --message "Refactor the database connection code to use connection pooling"
```

### Enterprise/Proxy Setup

```bash
# With proxy
uv run -m cubbi.cli session create --image aider \
  --env HTTPS_PROXY="https://proxy.company.com:8080" \
  /path/to/project

# With custom model
uv run -m cubbi.cli session create --image aider \
  --env AIDER_MODEL="sonnet" \
  /path/to/project
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.aider/`: Aider configuration and chat history
- `~/.cache/aider/`: Model cache and temporary files

Configuration files are maintained across container restarts, ensuring your preferences and chat history are preserved.

## Model Recommendations

### Best Overall Performance
- **Claude 3.5 Sonnet**: Excellent code understanding and generation
- **OpenAI GPT-4o**: Strong performance across languages
- **Gemini 2.5 Pro**: Good balance of quality and speed

### Cost-Effective Options
- **DeepSeek V3**: Very cost-effective, good quality
- **OpenRouter**: Access to multiple models with competitive pricing

### Free Options
- **Gemini 2.5 Pro Exp**: Free tier available
- **OpenRouter**: Some free models available

## File Structure

```
cubbi/images/aider/
├── Dockerfile           # Container image definition
├── cubbi_image.yaml     # Cubbi image configuration
├── aider_plugin.py      # Authentication and setup plugin
└── README.md            # This documentation
```

## Authentication Flow

1. **Environment Variables**: API keys passed from Cubbi configuration
2. **Plugin Setup**: `aider_plugin.py` creates environment configuration
3. **Environment File**: Creates `~/.aider/.env` with API keys
4. **Ready**: Aider is ready for use with configured authentication

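To make the flow above concrete, the generated `~/.aider/.env` might look like the following (placeholder values shown here for illustration, not real keys; exact contents depend on which variables are set):

```
OPENAI_API_KEY=sk-placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
AIDER_MODEL=sonnet
AIDER_AUTO_COMMITS=true
AIDER_DARK_MODE=false
```

Aider reads this file on startup, so no keys need to be typed inside the container.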
## Troubleshooting

### Common Issues

**No API Key Found**
```
ℹ️ No API keys found - Aider will run without pre-configuration
```
**Solution**: Set API key in Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.openai.api_key "your-key"
```

**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Check available models for your provider:
```bash
aider --models   # List available models
```

**Git Issues**
```
Git repository not found
```
**Solution**: Initialize git in your project or mount a git repository:
```bash
git init
# or
uv run -m cubbi.cli session create --image aider /path/to/git/project
```

**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```

### Debug Mode

```bash
# Check Aider version
aider --version

# List available models
aider --models

# Check configuration
cat ~/.aider/.env

# Verbose output
aider --verbose
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Git Integration**: Respects .gitignore and git configurations
- **Code Safety**: Always review changes before accepting

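A quick sketch of what the 0o600 claim means in practice: it is the mode produced by the owner-read plus owner-write `stat` flags (the same flags the image's init plugin passes to `os.chmod` for `~/.aider/.env`), i.e. `rw-------`, readable and writable by the owner only.

```python
import stat

# Owner read + owner write is exactly mode 0o600 (rw-------).
mode = stat.S_IRUSR | stat.S_IWUSR
print(oct(mode))
```
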
## Advanced Configuration

### Custom Model Configuration

```bash
# Use with custom API endpoint
uv run -m cubbi.cli session create --image aider \
  --env OPENAI_API_BASE="https://api.custom-provider.com/v1" \
  --env OPENAI_API_KEY="your-key"
```

### Multiple API Keys

```bash
# Configure multiple providers
uv run -m cubbi.cli session create --image aider \
  --env OPENAI_API_KEY="openai-key" \
  --env ANTHROPIC_API_KEY="anthropic-key" \
  --env AIDER_API_KEYS="provider1=key1,provider2=key2"
```

## Support

For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Aider Functionality**: Visit [Aider documentation](https://aider.chat/)
- **Model Configuration**: Check [LLM documentation](https://aider.chat/docs/llms.html)
- **API Keys**: Visit provider documentation (OpenAI, Anthropic, etc.)

## License

This image configuration is provided under the same license as the Cubbi project. Aider is licensed separately under Apache 2.0.

@@ -1,192 +0,0 @@
#!/usr/bin/env python3
"""
Aider Plugin for Cubbi
Handles authentication setup and configuration for Aider AI pair programming
"""

import os
import stat
from pathlib import Path
from typing import Any, Dict

from cubbi_init import ToolPlugin


class AiderPlugin(ToolPlugin):
    """Plugin for setting up Aider authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "aider"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_aider_config_dir(self) -> Path:
        """Get the Aider configuration directory"""
        return Path("/home/cubbi/.aider")

    def _get_aider_cache_dir(self) -> Path:
        """Get the Aider cache directory"""
        return Path("/home/cubbi/.cache/aider")

    def _ensure_aider_dirs(self) -> tuple[Path, Path]:
        """Ensure Aider directories exist with correct ownership"""
        config_dir = self._get_aider_config_dir()
        cache_dir = self._get_aider_cache_dir()

        # Create directories
        for directory in [config_dir, cache_dir]:
            try:
                directory.mkdir(mode=0o755, parents=True, exist_ok=True)
                self._set_ownership(directory)
            except OSError as e:
                self.status.log(
                    f"Failed to create Aider directory {directory}: {e}", "ERROR"
                )

        return config_dir, cache_dir

    def initialize(self) -> bool:
        """Initialize Aider configuration"""
        self.status.log("Setting up Aider configuration...")

        # Ensure Aider directories exist
        config_dir, cache_dir = self._ensure_aider_dirs()

        # Set up environment variables for the session
        env_vars = self._create_environment_config()

        # Create .env file if we have API keys
        if env_vars:
            env_file = config_dir / ".env"
            success = self._write_env_file(env_file, env_vars)
            if success:
                self.status.log("✅ Aider environment configured successfully")
            else:
                self.status.log("⚠️ Failed to write Aider environment file", "WARNING")
        else:
            self.status.log(
                "ℹ️ No API keys found - Aider will run without pre-configuration", "INFO"
            )
            self.status.log(
                "   You can configure API keys later using environment variables",
                "INFO",
            )

        # Always return True to allow container to start
        return True

    def _create_environment_config(self) -> Dict[str, str]:
        """Create environment variable configuration for Aider"""
        env_vars = {}

        # Map environment variables to Aider configuration
        api_key_mappings = {
            "OPENAI_API_KEY": "OPENAI_API_KEY",
            "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY",
            "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
            "GEMINI_API_KEY": "GEMINI_API_KEY",
            "OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
        }

        # Check for OpenAI API base URL
        openai_url = os.environ.get("OPENAI_URL")
        if openai_url:
            env_vars["OPENAI_API_BASE"] = openai_url
            self.status.log(f"Set OpenAI API base URL to {openai_url}")

        # Check for standard API keys
        for env_var, aider_var in api_key_mappings.items():
            value = os.environ.get(env_var)
            if value:
                env_vars[aider_var] = value
                provider = env_var.replace("_API_KEY", "").lower()
                self.status.log(f"Added {provider} API key")

        # Handle additional API keys from AIDER_API_KEYS
        additional_keys = os.environ.get("AIDER_API_KEYS")
        if additional_keys:
            try:
                # Parse format: "provider1=key1,provider2=key2"
                for pair in additional_keys.split(","):
                    if "=" in pair:
                        provider, key = pair.strip().split("=", 1)
                        env_var_name = f"{provider.upper()}_API_KEY"
                        env_vars[env_var_name] = key
                        self.status.log(f"Added {provider} API key from AIDER_API_KEYS")
            except Exception as e:
                self.status.log(f"Failed to parse AIDER_API_KEYS: {e}", "WARNING")

        # Add model configuration
        model = os.environ.get("AIDER_MODEL")
        if model:
            env_vars["AIDER_MODEL"] = model
            self.status.log(f"Set default model to {model}")

        # Add git configuration
        auto_commits = os.environ.get("AIDER_AUTO_COMMITS", "true")
        if auto_commits.lower() in ["true", "false"]:
            env_vars["AIDER_AUTO_COMMITS"] = auto_commits

        # Add dark mode setting
        dark_mode = os.environ.get("AIDER_DARK_MODE", "false")
        if dark_mode.lower() in ["true", "false"]:
            env_vars["AIDER_DARK_MODE"] = dark_mode

        # Add proxy settings
        for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
            value = os.environ.get(proxy_var)
            if value:
                env_vars[proxy_var] = value
                self.status.log(f"Added proxy configuration: {proxy_var}")

        return env_vars

    def _write_env_file(self, env_file: Path, env_vars: Dict[str, str]) -> bool:
        """Write environment variables to .env file"""
        try:
            content = "\n".join(f"{key}={value}" for key, value in env_vars.items())

            with open(env_file, "w") as f:
                f.write(content)
                f.write("\n")

            # Set ownership and secure file permissions (read/write for owner only)
            self._set_ownership(env_file)
            os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Aider environment file at {env_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Aider environment file: {e}", "ERROR")
            return False

    def setup_tool_configuration(self) -> bool:
        """Set up Aider configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Aider with available MCP servers if applicable"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Aider doesn't have native MCP support like Claude Code,
        # but we could potentially add custom integrations here
        self.status.log(
            f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
        )
        return True
@@ -1,88 +0,0 @@
name: aider
description: Aider AI pair programming environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-aider:latest

init:
  pre_command: /cubbi-init.sh
  command: /entrypoint.sh

environment:
  # OpenAI Configuration
  - name: OPENAI_API_KEY
    description: OpenAI API key for GPT models
    required: false
    sensitive: true

  # Anthropic Configuration
  - name: ANTHROPIC_API_KEY
    description: Anthropic API key for Claude models
    required: false
    sensitive: true

  # DeepSeek Configuration
  - name: DEEPSEEK_API_KEY
    description: DeepSeek API key for DeepSeek models
    required: false
    sensitive: true

  # Gemini Configuration
  - name: GEMINI_API_KEY
    description: Google Gemini API key
    required: false
    sensitive: true

  # OpenRouter Configuration
  - name: OPENROUTER_API_KEY
    description: OpenRouter API key for various models
    required: false
    sensitive: true

  # Generic provider API keys
  - name: AIDER_API_KEYS
    description: Additional API keys in format "provider1=key1,provider2=key2"
    required: false
    sensitive: true

  # Model Configuration
  - name: AIDER_MODEL
    description: Default model to use (e.g., sonnet, o3-mini, deepseek)
    required: false

  # Git Configuration
  - name: AIDER_AUTO_COMMITS
    description: Enable automatic commits (true/false)
    required: false
    default: "true"

  - name: AIDER_DARK_MODE
    description: Enable dark mode (true/false)
    required: false
    default: "false"

  # Proxy Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.aider"
    target: "/cubbi-config/aider-settings"
    type: "directory"
    description: "Aider configuration and history"

  - source: "/home/cubbi/.cache/aider"
    target: "/cubbi-config/aider-cache"
    type: "directory"
    description: "Aider cache directory"
@@ -1,274 +0,0 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Aider Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""

import subprocess
import sys
import tempfile
import re


def run_command(cmd, description="", check=True):
    """Run a shell command and return result"""
    print(f"\n🔍 {description}")
    print(f"Running: {cmd}")

    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, check=check
        )

        if result.stdout:
            print("STDOUT:")
            print(result.stdout)

        if result.stderr:
            print("STDERR:")
            print(result.stderr)

        return result
    except subprocess.CalledProcessError as e:
        print(f"❌ Command failed with exit code {e.returncode}")
        if e.stdout:
            print("STDOUT:")
            print(e.stdout)
        if e.stderr:
            print("STDERR:")
            print(e.stderr)
        if check:
            raise
        return e


def test_docker_image_exists():
    """Test if the Aider Docker image exists"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Existence")
    print("=" * 60)

    result = run_command(
        "docker images monadical/cubbi-aider:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
        "Checking if Aider Docker image exists",
    )

    if "monadical/cubbi-aider" in result.stdout:
        print("✅ Aider Docker image exists")
    else:
        print("❌ Aider Docker image not found")
        assert False, "Aider Docker image not found"


def test_aider_version():
    """Test basic Aider functionality in container"""
    print("\n" + "=" * 60)
    print("🧪 Testing Aider Version")
    print("=" * 60)

    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'aider --version'",
        "Testing Aider version command",
    )

    assert (
        "aider" in result.stdout and result.returncode == 0
    ), "Aider version command failed"
    print("✅ Aider version command works")


def test_api_key_configuration():
    """Test API key configuration and environment setup"""
    print("\n" + "=" * 60)
    print("🧪 Testing API Key Configuration")
    print("=" * 60)

    # Test with multiple API keys
    test_keys = {
        "OPENAI_API_KEY": "test-openai-key",
        "ANTHROPIC_API_KEY": "test-anthropic-key",
        "DEEPSEEK_API_KEY": "test-deepseek-key",
        "GEMINI_API_KEY": "test-gemini-key",
        "OPENROUTER_API_KEY": "test-openrouter-key",
    }

    env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])

    result = run_command(
        f"docker run --rm {env_flags} monadical/cubbi-aider:latest bash -c 'cat ~/.aider/.env'",
        "Testing API key configuration in .env file",
    )

    success = True
    for key, value in test_keys.items():
        if f"{key}={value}" not in result.stdout:
            print(f"❌ {key} not found in .env file")
            success = False
        else:
            print(f"✅ {key} configured correctly")

    # Test default configuration values
    if "AIDER_AUTO_COMMITS=true" in result.stdout:
        print("✅ Default AIDER_AUTO_COMMITS configured")
    else:
        print("❌ Default AIDER_AUTO_COMMITS not found")
        success = False

    if "AIDER_DARK_MODE=false" in result.stdout:
        print("✅ Default AIDER_DARK_MODE configured")
    else:
        print("❌ Default AIDER_DARK_MODE not found")
        success = False

    assert success, "API key configuration test failed"


def test_cubbi_cli_integration():
    """Test Cubbi CLI integration"""
    print("\n" + "=" * 60)
    print("🧪 Testing Cubbi CLI Integration")
    print("=" * 60)

    # Test image listing
    result = run_command(
        "uv run -m cubbi.cli image list | grep aider",
        "Testing Cubbi CLI can see Aider image",
    )

    if "aider" in result.stdout and "Aider AI pair" in result.stdout:
        print("✅ Cubbi CLI can list Aider image")
    else:
        print("❌ Cubbi CLI cannot see Aider image")
        assert False, "Cubbi CLI cannot see Aider image"

    # Test session creation with test command
    with tempfile.TemporaryDirectory() as temp_dir:
        test_env = {
            "OPENAI_API_KEY": "test-session-key",
            "ANTHROPIC_API_KEY": "test-anthropic-session-key",
        }

        env_vars = " ".join([f"{k}={v}" for k, v in test_env.items()])

        result = run_command(
            f"{env_vars} uv run -m cubbi.cli session create --image aider {temp_dir} --no-shell --run \"aider --version && echo 'Cubbi CLI test successful'\"",
            "Testing Cubbi CLI session creation with Aider",
        )

        assert (
            result.returncode == 0
            and re.search(r"aider \d+\.\d+\.\d+", result.stdout)
            and "Cubbi CLI test successful" in result.stdout
        ), "Cubbi CLI session creation failed"
        print("✅ Cubbi CLI session creation works")


def test_persistent_configuration():
    """Test persistent configuration directories"""
    print("\n" + "=" * 60)
    print("🧪 Testing Persistent Configuration")
    print("=" * 60)

    # Test that persistent directories are created
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-key' monadical/cubbi-aider:latest bash -c 'ls -la /home/cubbi/.aider/ && ls -la /home/cubbi/.cache/'",
        "Testing persistent configuration directories",
    )

    success = True

    if ".env" in result.stdout:
        print("✅ .env file created in ~/.aider/")
    else:
        print("❌ .env file not found in ~/.aider/")
        success = False

    if "aider" in result.stdout:
        print("✅ ~/.cache/aider directory exists")
    else:
        print("❌ ~/.cache/aider directory not found")
        success = False

    assert success, "Persistent configuration test failed"


def test_plugin_functionality():
    """Test the Aider plugin functionality"""
    print("\n" + "=" * 60)
    print("🧪 Testing Plugin Functionality")
    print("=" * 60)

    # Test plugin without API keys (should still work)
    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'echo \"Plugin test without API keys\"'",
        "Testing plugin functionality without API keys",
    )

    if "No API keys found - Aider will run without pre-configuration" in result.stdout:
        print("✅ Plugin handles missing API keys gracefully")
    else:
        # This might be in stderr or initialization might have changed
        print("ℹ️ Plugin API key handling test - check output above")

    # Test plugin with API keys
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-plugin-key' monadical/cubbi-aider:latest bash -c 'echo \"Plugin test with API keys\"'",
        "Testing plugin functionality with API keys",
    )

    if "Aider environment configured successfully" in result.stdout:
        print("✅ Plugin configures environment successfully")
    else:
        print("❌ Plugin environment configuration failed")
        assert False, "Plugin environment configuration failed"


def main():
    """Run all tests"""
    print("🚀 Starting Aider Cubbi Image Tests")
    print("=" * 60)

    tests = [
        ("Docker Image Exists", test_docker_image_exists),
        ("Aider Version", test_aider_version),
        ("API Key Configuration", test_api_key_configuration),
        ("Persistent Configuration", test_persistent_configuration),
        ("Plugin Functionality", test_plugin_functionality),
        ("Cubbi CLI Integration", test_cubbi_cli_integration),
    ]

    results = {}

    for test_name, test_func in tests:
        try:
            test_func()
            results[test_name] = True
        except Exception as e:
            print(f"❌ Test '{test_name}' failed with exception: {e}")
            results[test_name] = False

    # Print summary
    print("\n" + "=" * 60)
    print("📊 TEST SUMMARY")
    print("=" * 60)

    total_tests = len(tests)
    passed_tests = sum(1 for result in results.values() if result)
    failed_tests = total_tests - passed_tests

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")

    if failed_tests == 0:
        print("\n🎉 All tests passed! Aider image is ready for use.")
        return 0
    else:
        print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
        return 1


if __name__ == "__main__":
    sys.exit(main())
@@ -1,82 +0,0 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Claude Code for Cubbi"

# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install uv (Python package manager)
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Node.js (for Claude Code NPM package)
ARG NODE_VERSION=v22.16.0
RUN mkdir -p /opt/node && \
    ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then \
        NODE_ARCH=linux-x64; \
    elif [ "$ARCH" = "aarch64" ]; then \
        NODE_ARCH=linux-arm64; \
    else \
        echo "Unsupported architecture"; exit 1; \
    fi && \
    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
    tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
    rm node.tar.gz

ENV PATH="/opt/node/bin:$PATH"

# Install Claude Code globally
RUN npm install -g @anthropic-ai/claude-code

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY claudecode_plugin.py /cubbi/claudecode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh

# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh

# Add Node.js to PATH in bashrc and init status check
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
@@ -1,222 +0,0 @@
|
||||
# Claude Code for Cubbi
|
||||
|
||||
This image provides Claude Code (Anthropic's official CLI for Claude) in a Cubbi container environment.
|
||||
|
||||
## Overview
|
||||
|
||||
Claude Code is an interactive CLI tool that helps with software engineering tasks. This Cubbi image integrates Claude Code with secure API key management, persistent configuration, and enterprise features.
|
||||
|
||||
## Features
|
||||
|
||||
- **Claude Code CLI**: Full access to Claude's coding capabilities
|
||||
- **Secure Authentication**: API key management through Cubbi's secure environment system
|
||||
- **Persistent Configuration**: Settings and cache preserved across container restarts
|
||||
- **Enterprise Support**: Bedrock and Vertex AI integration
|
||||
- **Network Support**: Proxy configuration for corporate environments
|
||||
- **Tool Permissions**: Pre-configured permissions for all Claude Code tools
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Set up API Key
|
||||
|
||||
```bash
|
||||
# Set your Anthropic API key in Cubbi configuration
|
||||
cubbi config set services.anthropic.api_key "your-api-key-here"
|
||||
```
|
||||
|
||||
### 2. Run Claude Code Environment
|
||||
|
||||
```bash
|
||||
# Start Claude Code container
|
||||
cubbi run claudecode
|
||||
|
||||
# Execute Claude Code commands
|
||||
cubbi exec claudecode "claude 'help me write a Python function'"
|
||||
|
||||
# Start interactive session
|
||||
cubbi exec claudecode "claude"
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Required Environment Variables
|
||||
|
||||
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
|
||||
|
||||
### Optional Environment Variables
|
||||
|
||||
- `ANTHROPIC_AUTH_TOKEN`: Custom authorization token for enterprise deployments
|
||||
- `ANTHROPIC_CUSTOM_HEADERS`: Additional HTTP headers (JSON format)
|
||||
- `CLAUDE_CODE_USE_BEDROCK`: Set to "true" to use Amazon Bedrock
|
||||
- `CLAUDE_CODE_USE_VERTEX`: Set to "true" to use Google Vertex AI
|
||||
- `HTTP_PROXY`: HTTP proxy server URL
|
||||
- `HTTPS_PROXY`: HTTPS proxy server URL
|
||||
- `DISABLE_TELEMETRY`: Set to "true" to disable telemetry

### Advanced Configuration

```bash
# Enterprise deployment with Bedrock
cubbi config set environment.claude_code_use_bedrock true
cubbi run claudecode

# With a custom proxy
cubbi config set network.https_proxy "https://proxy.company.com:8080"
cubbi run claudecode

# Disable telemetry
cubbi config set environment.disable_telemetry true
cubbi run claudecode
```

## Usage Examples

### Basic Usage

```bash
# Get help
cubbi exec claudecode "claude --help"

# One-off task
cubbi exec claudecode "claude 'write a unit test for this function'"

# Interactive mode
cubbi exec claudecode "claude"
```

### Working with Projects

```bash
# Start Claude Code in your project directory
cubbi run claudecode --mount /path/to/your/project:/app
cubbi exec claudecode "cd /app && claude"

# Create a commit
cubbi exec claudecode "cd /app && claude commit"
```

### Advanced Features

```bash
# Run with a specific model configuration
cubbi exec claudecode "claude -m claude-3-5-sonnet-20241022 'analyze this code'"

# Print mode (one-shot, non-interactive)
cubbi exec claudecode "claude -p 'refactor this function'"
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.claude/`: Claude Code settings and configuration
- `~/.cache/claude/`: Claude Code cache and temporary files

Configuration files are maintained across container restarts, ensuring your settings and preferences are preserved.

## File Structure

```
cubbi/images/claudecode/
├── Dockerfile           # Container image definition
├── cubbi_image.yaml     # Cubbi image configuration
├── claudecode_plugin.py # Authentication and setup plugin
├── cubbi_init.py        # Initialization script (shared)
├── init-status.sh       # Status check script (shared)
└── README.md            # This documentation
```

## Authentication Flow

1. **Environment Variables**: API key passed in from Cubbi configuration
2. **Plugin Setup**: `claudecode_plugin.py` creates `~/.claude/settings.json`
3. **Verification**: Plugin verifies the Claude Code installation and configuration
4. **Ready**: Claude Code is ready for use with the configured authentication
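
The steps above can be illustrated with the approximate shape of the `settings.json` the plugin writes when only `ANTHROPIC_API_KEY` is set (a sketch, not the authoritative schema; the key value is a placeholder):

```python
import json

# Illustrative only: rough shape of ~/.claude/settings.json as produced by
# claudecode_plugin.py with just an API key and the default tool permissions.
settings = {
    "apiKey": "your-api-key-here",
    "permissions": {
        "tools": {
            tool: {"allowed": True}
            for tool in ["read", "write", "edit", "bash", "webfetch", "websearch"]
        }
    },
}
print(json.dumps(settings, indent=2))
```

Optional fields (`authToken`, `customHeaders`, `provider`, `proxy`, `telemetry`) are added only when the corresponding environment variables are present.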

## Troubleshooting

### Common Issues

**API Key Not Set**
```
⚠️ No authentication configuration found
Please set ANTHROPIC_API_KEY environment variable
```
**Solution**: Set the API key in your Cubbi configuration:
```bash
cubbi config set services.anthropic.api_key "your-api-key-here"
```

**Claude Code Not Found**
```
❌ Claude Code not properly installed
```
**Solution**: Rebuild the container image:
```bash
docker build -t cubbi-claudecode:latest cubbi/images/claudecode/
```

**Network Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
cubbi config set network.https_proxy "your-proxy-url"
```

### Debug Mode

Enable verbose output for debugging:

```bash
# Check configuration
cubbi exec claudecode "cat ~/.claude/settings.json"

# Verify installation
cubbi exec claudecode "claude --version"
cubbi exec claudecode "which claude"
cubbi exec claudecode "node --version"
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Configuration**: Settings files have restricted access
- **Environment**: Isolated container environment
- **Telemetry**: Can be disabled for privacy

## Development

### Building the Image

```bash
# Build locally
docker build -t cubbi-claudecode:test cubbi/images/claudecode/

# Test basic functionality
docker run --rm -it \
  -e ANTHROPIC_API_KEY="your-api-key" \
  cubbi-claudecode:test \
  bash -c "claude --version"
```

### Testing

```bash
# Run through Cubbi
cubbi run claudecode --name test-claude
cubbi exec test-claude "claude --version"
cubbi stop test-claude
```

## Support

For issues related to:
- **Cubbi Integration**: Check the Cubbi documentation or open an issue
- **Claude Code**: Visit the [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code)
- **API Keys**: Visit the [Anthropic Console](https://console.anthropic.com/)

## License

This image configuration is provided under the same license as the Cubbi project. Claude Code is licensed separately by Anthropic.

@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
Claude Code Plugin for Cubbi
Handles authentication setup and configuration for Claude Code
"""

import json
import os
import stat
from pathlib import Path
from typing import Any, Dict, Optional

from cubbi_init import ToolPlugin

# API key mappings from environment variables to Claude Code configuration
API_KEY_MAPPINGS = {
    "ANTHROPIC_API_KEY": "api_key",
    "ANTHROPIC_AUTH_TOKEN": "auth_token",
    "ANTHROPIC_CUSTOM_HEADERS": "custom_headers",
}

# Enterprise integration environment variables
ENTERPRISE_MAPPINGS = {
    "CLAUDE_CODE_USE_BEDROCK": "use_bedrock",
    "CLAUDE_CODE_USE_VERTEX": "use_vertex",
    "HTTP_PROXY": "http_proxy",
    "HTTPS_PROXY": "https_proxy",
    "DISABLE_TELEMETRY": "disable_telemetry",
}


class ClaudeCodePlugin(ToolPlugin):
    """Plugin for setting up Claude Code authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "claudecode"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_claude_dir(self) -> Path:
        """Get the Claude Code configuration directory"""
        return Path("/home/cubbi/.claude")

    def _ensure_claude_dir(self) -> Path:
        """Ensure Claude directory exists with correct ownership"""
        claude_dir = self._get_claude_dir()

        try:
            claude_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
            self._set_ownership(claude_dir)
        except OSError as e:
            self.status.log(
                f"Failed to create Claude directory {claude_dir}: {e}", "ERROR"
            )

        return claude_dir

    def initialize(self) -> bool:
        """Initialize Claude Code configuration"""
        self.status.log("Setting up Claude Code authentication...")

        # Ensure Claude directory exists
        claude_dir = self._ensure_claude_dir()

        # Create settings configuration
        settings = self._create_settings()

        if settings:
            settings_file = claude_dir / "settings.json"
            success = self._write_settings(settings_file, settings)
            if success:
                self.status.log("✅ Claude Code authentication configured successfully")
                return True
            else:
                return False
        else:
            self.status.log("⚠️ No authentication configuration found", "WARNING")
            self.status.log(
                "   Please set ANTHROPIC_API_KEY environment variable", "WARNING"
            )
            self.status.log("   Claude Code will run without authentication", "INFO")
            # Return True to allow container to start without API key
            # Users can still use Claude Code with their own authentication methods
            return True

    def _create_settings(self) -> Optional[Dict]:
        """Create Claude Code settings configuration"""
        settings = {}

        # Core authentication
        api_key = os.environ.get("ANTHROPIC_API_KEY")
        if not api_key:
            return None

        # Basic authentication setup
        settings["apiKey"] = api_key

        # Custom authorization token (optional)
        auth_token = os.environ.get("ANTHROPIC_AUTH_TOKEN")
        if auth_token:
            settings["authToken"] = auth_token

        # Custom headers (optional)
        custom_headers = os.environ.get("ANTHROPIC_CUSTOM_HEADERS")
        if custom_headers:
            try:
                # Expect JSON string format
                settings["customHeaders"] = json.loads(custom_headers)
            except json.JSONDecodeError:
                self.status.log(
                    "⚠️ Invalid ANTHROPIC_CUSTOM_HEADERS format, skipping", "WARNING"
                )

        # Enterprise integration settings
        if os.environ.get("CLAUDE_CODE_USE_BEDROCK") == "true":
            settings["provider"] = "bedrock"

        if os.environ.get("CLAUDE_CODE_USE_VERTEX") == "true":
            settings["provider"] = "vertex"

        # Network proxy settings
        http_proxy = os.environ.get("HTTP_PROXY")
        https_proxy = os.environ.get("HTTPS_PROXY")
        if http_proxy or https_proxy:
            settings["proxy"] = {}
            if http_proxy:
                settings["proxy"]["http"] = http_proxy
            if https_proxy:
                settings["proxy"]["https"] = https_proxy

        # Telemetry settings
        if os.environ.get("DISABLE_TELEMETRY") == "true":
            settings["telemetry"] = {"enabled": False}

        # Tool permissions (allow all by default in Cubbi environment)
        settings["permissions"] = {
            "tools": {
                "read": {"allowed": True},
                "write": {"allowed": True},
                "edit": {"allowed": True},
                "bash": {"allowed": True},
                "webfetch": {"allowed": True},
                "websearch": {"allowed": True},
            }
        }

        return settings

    def _write_settings(self, settings_file: Path, settings: Dict) -> bool:
        """Write settings to Claude Code configuration file"""
        try:
            # Write settings with secure permissions
            with open(settings_file, "w") as f:
                json.dump(settings, f, indent=2)

            # Set ownership and secure file permissions (read/write for owner only)
            self._set_ownership(settings_file)
            os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Claude Code settings at {settings_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Claude Code settings: {e}", "ERROR")
            return False

    def setup_tool_configuration(self) -> bool:
        """Set up Claude Code configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Claude Code with available MCP servers"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Claude Code has built-in MCP support, so we can potentially
        # configure MCP servers in the settings if needed
        self.status.log("MCP server integration available for Claude Code")
        return True

@@ -1,68 +0,0 @@
name: claudecode
description: Claude Code AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-claudecode:latest

init:
  pre_command: /cubbi-init.sh
  command: /entrypoint.sh

environment:
  # Core Anthropic Authentication
  - name: ANTHROPIC_API_KEY
    description: Anthropic API key for Claude
    required: true
    sensitive: true

  # Optional Enterprise Integration
  - name: ANTHROPIC_AUTH_TOKEN
    description: Custom authorization token for Claude
    required: false
    sensitive: true

  - name: ANTHROPIC_CUSTOM_HEADERS
    description: Additional HTTP headers for Claude API requests
    required: false
    sensitive: true

  # Enterprise Deployment Options
  - name: CLAUDE_CODE_USE_BEDROCK
    description: Use Amazon Bedrock instead of direct API
    required: false

  - name: CLAUDE_CODE_USE_VERTEX
    description: Use Google Vertex AI instead of direct API
    required: false

  # Network Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

  # Optional Telemetry Control
  - name: DISABLE_TELEMETRY
    description: Disable Claude Code telemetry
    required: false
    default: "false"

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.claude"
    target: "/cubbi-config/claude-settings"
    type: "directory"
    description: "Claude Code settings and configuration"

  - source: "/home/cubbi/.cache/claude"
    target: "/cubbi-config/claude-cache"
    type: "directory"
    description: "Claude Code cache directory"
@@ -1,251 +0,0 @@
#!/usr/bin/env python3
"""
Automated test suite for Claude Code Cubbi integration
"""

import subprocess


def run_test(description: str, command: list, timeout: int = 30) -> bool:
    """Run a test command and return success status"""
    print(f"🧪 Testing: {description}")
    try:
        result = subprocess.run(
            command, capture_output=True, text=True, timeout=timeout
        )
        if result.returncode == 0:
            print("   ✅ PASS")
            return True
        else:
            print(f"   ❌ FAIL: {result.stderr}")
            if result.stdout:
                print(f"   📋 stdout: {result.stdout}")
            return False
    except subprocess.TimeoutExpired:
        print(f"   ⏰ TIMEOUT: Command exceeded {timeout}s")
        return False
    except Exception as e:
        print(f"   ❌ ERROR: {e}")
        return False


def test_suite():
    """Run complete test suite"""
    tests_passed = 0
    total_tests = 0

    print("🚀 Starting Claude Code Cubbi Integration Test Suite")
    print("=" * 60)

    # Test 1: Build image
    total_tests += 1
    if run_test(
        "Build Claude Code image",
        ["docker", "build", "-t", "cubbi-claudecode:test", "cubbi/images/claudecode/"],
        timeout=180,
    ):
        tests_passed += 1

    # Test 2: Tag image for Cubbi
    total_tests += 1
    if run_test(
        "Tag image for Cubbi",
        ["docker", "tag", "cubbi-claudecode:test", "monadical/cubbi-claudecode:latest"],
    ):
        tests_passed += 1

    # Test 3: Basic container startup
    total_tests += 1
    if run_test(
        "Container startup with test API key",
        [
            "docker",
            "run",
            "--rm",
            "-e",
            "ANTHROPIC_API_KEY=test-key",
            "cubbi-claudecode:test",
            "bash",
            "-c",
            "claude --version",
        ],
    ):
        tests_passed += 1

    # Test 4: Cubbi image list
    total_tests += 1
    if run_test(
        "Cubbi image list includes claudecode",
        ["uv", "run", "-m", "cubbi.cli", "image", "list"],
    ):
        tests_passed += 1

    # Test 5: Cubbi session creation
    total_tests += 1
    session_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if session_result.returncode == 0:
        print("🧪 Testing: Cubbi session creation")
        print("   ✅ PASS")
        tests_passed += 1

        # Extract session ID for cleanup
        session_id = None
        for line in session_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            # Test 6: Session cleanup
            total_tests += 1
            if run_test(
                "Clean up test session",
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
            ):
                tests_passed += 1
        else:
            print("🧪 Testing: Clean up test session")
            print("   ⚠️ SKIP: Could not extract session ID")
            total_tests += 1
    else:
        print("🧪 Testing: Cubbi session creation")
        print(f"   ❌ FAIL: {session_result.stderr}")
        total_tests += 2  # This test and cleanup test both fail

    # Test 7: Session without API key
    total_tests += 1
    no_key_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-no-key",
            "--no-connect",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if no_key_result.returncode == 0:
        print("🧪 Testing: Session without API key")
        print("   ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in no_key_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Session without API key")
        print(f"   ❌ FAIL: {no_key_result.stderr}")

    # Test 8: Persistent configuration test
    total_tests += 1
    persist_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-persist-auto",
            "--project",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "echo 'automation test' > ~/.claude/automation.txt && cat ~/.claude/automation.txt",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if persist_result.returncode == 0:
        print("🧪 Testing: Persistent configuration")
        print("   ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in persist_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Persistent configuration")
        print(f"   ❌ FAIL: {persist_result.stderr}")

    print("=" * 60)
    print(f"📊 Test Results: {tests_passed}/{total_tests} tests passed")

    if tests_passed == total_tests:
        print("🎉 All tests passed! Claude Code integration is working correctly.")
        return True
    else:
        print(
            f"❌ {total_tests - tests_passed} test(s) failed. Please check the output above."
        )
        return False


def main():
    """Main test entry point"""
    success = test_suite()
    exit(0 if success else 1)


if __name__ == "__main__":
    main()
@@ -222,16 +222,6 @@ class UserManager:
        ):
            return False

        # Create the sudoers file entry for the 'cubbi' user
        sudoers_command = [
            "sh",
            "-c",
            "echo 'cubbi ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cubbi && chmod 0440 /etc/sudoers.d/cubbi",
        ]
        if not self._run_command(sudoers_command):
            self.status.log("Failed to create sudoers entry for cubbi", "ERROR")
            return False

        return True
@@ -1,12 +1,11 @@
FROM python:3.12-slim
FROM node:20-slim

LABEL maintainer="team@monadical.com"
LABEL description="Aider AI pair programming for Cubbi"
LABEL description="Google Gemini CLI for Cubbi"

# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
@@ -21,9 +20,11 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
    ripgrep \
    openssh-client \
    vim \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install uv (Python package manager)
# Install uv (Python package manager) for cubbi_init.py compatibility
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
@@ -31,26 +32,25 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Aider using pip in system Python (more compatible with user switching)
RUN python -m pip install aider-chat
# Install Gemini CLI globally
RUN npm install -g @google/gemini-cli

# Make sure aider is in PATH
ENV PATH="/root/.local/bin:$PATH"
# Verify installation
RUN gemini --version

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY aider_plugin.py /cubbi/aider_plugin.py
COPY gemini_cli_plugin.py /cubbi/gemini_cli_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh

# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh

# Add aider to PATH in bashrc and init status check
RUN echo 'PATH="/root/.local/bin:$PATH"' >> /etc/bash.bashrc
# Add init status check to bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
@@ -65,4 +65,4 @@ RUN /cubbi/cubbi_init.py --help
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
CMD ["tail", "-f", "/dev/null"]
339 cubbi/images/gemini-cli/README.md (new file)
@@ -0,0 +1,339 @@
# Google Gemini CLI for Cubbi

This image provides the Google Gemini CLI in a Cubbi container environment.

## Overview

Google Gemini CLI is an AI-powered development tool that lets you query and edit large codebases, generate applications from PDFs or sketches, automate operational tasks, and integrate with media generation tools using Google's Gemini models.

## Features

- **Advanced AI Models**: Access to Gemini 1.5 Pro, Flash, and other Google AI models
- **Codebase Analysis**: Query and edit large codebases intelligently
- **Multi-modal Support**: Work with text, images, PDFs, and sketches
- **Google Search Grounding**: Ground queries using Google Search for up-to-date information
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Project Integration**: Seamless integration with existing projects

## Quick Start

### 1. Set up API Key

```bash
# For Google AI (recommended)
uv run -m cubbi.cli config set services.google.api_key "your-gemini-api-key"

# Alternative using GEMINI_API_KEY
uv run -m cubbi.cli config set services.gemini.api_key "your-gemini-api-key"
```

Get your API key from [Google AI Studio](https://aistudio.google.com/apikey).

### 2. Run Gemini CLI Environment

```bash
# Start a Gemini CLI container with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/your/project

# Or without a project
uv run -m cubbi.cli session create --image gemini-cli
```

### 3. Use Gemini CLI

```bash
# Basic usage
gemini

# Interactive mode with a specific query
gemini
> Write me a Discord bot that answers questions using a FAQ.md file

# Analyze an existing project
gemini
> Give me a summary of all changes that went in yesterday

# Generate from a sketch/PDF
gemini
> Create a web app based on this wireframe.png
```

## Configuration

### Supported API Keys

- `GEMINI_API_KEY`: Google AI API key for Gemini models
- `GOOGLE_API_KEY`: Alternative Google API key (compatibility)
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to a Google Cloud service account JSON file

### Model Configuration

- `GEMINI_MODEL`: Default model (default: "gemini-1.5-pro")
  - Available: "gemini-1.5-pro", "gemini-1.5-flash", "gemini-1.0-pro"
- `GEMINI_TEMPERATURE`: Model temperature 0.0-2.0 (default: 0.7)
- `GEMINI_MAX_TOKENS`: Maximum tokens in a response

### Advanced Configuration

- `GEMINI_SEARCH_ENABLED`: Enable Google Search grounding (true/false, default: false)
- `GEMINI_DEBUG`: Enable debug mode (true/false, default: false)
- `GCLOUD_PROJECT`: Google Cloud project ID

### Network Configuration

- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL

## Usage Examples

### Basic AI Development

```bash
# Start Gemini CLI with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/project

# Inside the container:
gemini  # Start an interactive session
```

### Codebase Analysis

```bash
# Analyze changes
gemini
> What are the main functions in src/main.py?

# Code generation
gemini
> Add error handling to the authentication module

# Documentation
gemini
> Generate README documentation for this project
```

### Multi-modal Development

```bash
# Work with images
gemini
> Analyze this architecture diagram and suggest improvements

# PDF processing
gemini
> Convert this API specification PDF to OpenAPI YAML

# Sketch to code
gemini
> Create a React component based on this UI mockup
```

### Advanced Features

```bash
# With Google Search grounding
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_SEARCH_ENABLED="true" \
  /path/to/project

# With a specific model
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_MODEL="gemini-1.5-flash" \
  --env GEMINI_TEMPERATURE="0.3" \
  /path/to/project

# Debug mode
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_DEBUG="true" \
  /path/to/project
```

### Enterprise/Proxy Setup

```bash
# With a proxy
uv run -m cubbi.cli session create --image gemini-cli \
  --env HTTPS_PROXY="https://proxy.company.com:8080" \
  /path/to/project

# With Google Cloud authentication
uv run -m cubbi.cli session create --image gemini-cli \
  --env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
  --env GCLOUD_PROJECT="your-project-id" \
  /path/to/project
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.config/gemini/`: Gemini CLI configuration files
- `~/.cache/gemini/`: Model cache and temporary files

Configuration files are maintained across container restarts, ensuring your preferences and session history are preserved.

## Model Recommendations

### Best Overall Performance
- **Gemini 1.5 Pro**: Excellent reasoning and code understanding
- **Gemini 1.5 Flash**: Faster responses, good for iterative development

### Cost-Effective Options
- **Gemini 1.5 Flash**: Lower cost, high speed
- **Gemini 1.0 Pro**: Basic model for simple tasks

### Specialized Use Cases
- **Code Analysis**: Gemini 1.5 Pro
- **Quick Iterations**: Gemini 1.5 Flash
- **Multi-modal Tasks**: Gemini 1.5 Pro (supports images, PDFs)

## File Structure

```
cubbi/images/gemini-cli/
├── Dockerfile           # Container image definition
├── cubbi_image.yaml     # Cubbi image configuration
├── gemini_plugin.py     # Authentication and setup plugin
└── README.md            # This documentation
```

## Authentication Flow

1. **API Key Setup**: API key configured via Cubbi configuration or environment variables
2. **Plugin Initialization**: `gemini_plugin.py` creates configuration files
3. **Environment File**: Creates `~/.config/gemini/.env` with the API key
4. **Configuration**: Creates `~/.config/gemini/config.json` with settings
5. **Ready**: Gemini CLI is ready for use with the configured authentication
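
The environment-file step can be sketched as follows. This is illustrative only: the directory path stands in for `~/.config/gemini`, and the `GEMINI_API_KEY` value is a placeholder. The key point is the `0o600` permission on the written file.

```python
import os
import stat
from pathlib import Path

# Stand-in for ~/.config/gemini (hypothetical demo path)
config_dir = Path("/tmp/demo-gemini-config")
config_dir.mkdir(mode=0o700, parents=True, exist_ok=True)

# Write KEY=VALUE lines, then restrict to owner read/write (0o600)
env_file = config_dir / ".env"
env_file.write_text("GEMINI_API_KEY=your-gemini-api-key\n")
os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)

print(oct(env_file.stat().st_mode & 0o777))
```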
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
**No API Key Found**
|
||||
```
|
||||
ℹ️ No API key found - Gemini CLI will require authentication
|
||||
```
|
||||
**Solution**: Set API key in Cubbi configuration:
|
||||
```bash
|
||||
uv run -m cubbi.cli config set services.google.api_key "your-key"
|
||||
```
|
||||
|
||||
**Authentication Failed**
|
||||
```
|
||||
Error: Invalid API key or authentication failed
|
||||
```
|
||||
**Solution**: Verify your API key at [Google AI Studio](https://aistudio.google.com/apikey):
|
||||
```bash
|
||||
# Test your API key
|
||||
curl -H "Content-Type: application/json" \
|
||||
-d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
|
||||
"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"
|
||||
```
|
||||
|
||||
**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Use supported models:
```bash
# List available models (inside container)
curl -H "Content-Type: application/json" \
  "https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_API_KEY"
```

**Rate Limit Exceeded**
```
Error: Quota exceeded
```
**Solution**: Google AI provides:
- 60 requests per minute
- 1,000 requests per day
- Consider upgrading to Google Cloud for higher limits

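To stay under the 60-requests-per-minute free-tier limit, client code can throttle itself. A minimal sliding-window sketch (not part of Gemini CLI or Cubbi, just an illustration):

```python
import time


class RateLimiter:
    """Allow at most max_calls calls per period seconds (sliding window)."""

    def __init__(self, max_calls: int = 60, period: float = 60.0) -> None:
        self.max_calls = max_calls
        self.period = period
        self.calls: list[float] = []

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have left the window
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call exits the window, then forget it
            time.sleep(self.period - (now - self.calls[0]))
            self.calls = self.calls[1:]
        self.calls.append(time.monotonic())


limiter = RateLimiter(max_calls=60, period=60.0)
limiter.acquire()  # returns immediately while under the limit
```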
**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```

### Debug Mode

```bash
# Enable debug output
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_DEBUG="true"

# Check configuration
cat ~/.config/gemini/config.json

# Check environment
cat ~/.config/gemini/.env

# Test CLI directly
gemini --help
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Configuration**: Secure file permissions for config files
- **Google Cloud**: Supports service account authentication for enterprise use

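The 0o600 guarantee for key files can be checked directly. A small sketch (the path here is a throwaway temp file, not the real key file):

```python
import os
import stat
import tempfile


def is_owner_only(path: str) -> bool:
    """True when only the owning user can read/write the file (mode 0o600)."""
    return stat.S_IMODE(os.stat(path).st_mode) == 0o600


fd, key_file = tempfile.mkstemp()
os.close(fd)
os.chmod(key_file, 0o600)
print(is_owner_only(key_file))  # True
os.remove(key_file)
```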
## Advanced Configuration

### Custom Model Configuration

```bash
# Use specific model with custom settings
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_MODEL="gemini-1.5-flash" \
  --env GEMINI_TEMPERATURE="0.2" \
  --env GEMINI_MAX_TOKENS="8192"
```

### Google Search Integration

```bash
# Enable Google Search grounding for up-to-date information
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_SEARCH_ENABLED="true"
```

### Google Cloud Integration

```bash
# Use with Google Cloud service account
uv run -m cubbi.cli session create --image gemini-cli \
  --env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
  --env GCLOUD_PROJECT="your-project-id"
```

## API Limits and Pricing

### Free Tier (Google AI)
- 60 requests per minute
- 1,000 requests per day
- Personal Google account required

### Paid Tier (Google Cloud)
- Higher rate limits
- Enterprise features
- Service account authentication
- Custom quotas available

## Support

For issues related to:
- **Cubbi Integration**: Check the Cubbi documentation or open an issue
- **Gemini CLI Functionality**: Visit the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli)
- **Google AI Platform**: Visit the [Google AI documentation](https://ai.google.dev/)
- **API Keys**: Visit [Google AI Studio](https://aistudio.google.com/)

## License

This image configuration is provided under the same license as the Cubbi project. Google Gemini CLI is licensed separately by Google.

cubbi/images/gemini-cli/cubbi_image.yaml (new file, 80 lines)
@@ -0,0 +1,80 @@
name: gemini-cli
description: Google Gemini CLI environment for AI-powered development
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-gemini-cli:latest

environment:
  # Google AI Configuration
  - name: GEMINI_API_KEY
    description: Google AI API key for Gemini models
    required: false
    sensitive: true

  - name: GOOGLE_API_KEY
    description: Alternative Google API key (compatibility)
    required: false
    sensitive: true

  # Google Cloud Configuration
  - name: GOOGLE_APPLICATION_CREDENTIALS
    description: Path to Google Cloud service account JSON file
    required: false
    sensitive: true

  - name: GCLOUD_PROJECT
    description: Google Cloud project ID
    required: false

  # Model Configuration
  - name: GEMINI_MODEL
    description: Default Gemini model (e.g., gemini-1.5-pro, gemini-1.5-flash)
    required: false
    default: "gemini-1.5-pro"

  - name: GEMINI_TEMPERATURE
    description: Model temperature (0.0-2.0)
    required: false
    default: "0.7"

  - name: GEMINI_MAX_TOKENS
    description: Maximum tokens in response
    required: false

  # Search Configuration
  - name: GEMINI_SEARCH_ENABLED
    description: Enable Google Search grounding (true/false)
    required: false
    default: "false"

  # Proxy Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

  # Debug Configuration
  - name: GEMINI_DEBUG
    description: Enable debug mode (true/false)
    required: false
    default: "false"

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.config/gemini"
    target: "/cubbi-config/gemini-settings"
    type: "directory"
    description: "Gemini CLI configuration and history"

  - source: "/home/cubbi/.cache/gemini"
    target: "/cubbi-config/gemini-cache"
    type: "directory"
    description: "Gemini CLI cache directory"
cubbi/images/gemini-cli/gemini_cli_plugin.py (new file, 241 lines)
@@ -0,0 +1,241 @@
#!/usr/bin/env python3
"""
Gemini CLI Plugin for Cubbi
Handles authentication setup and configuration for Google Gemini CLI
"""

import json
import os
import stat
from pathlib import Path
from typing import Any, Dict

from cubbi_init import ToolPlugin


class GeminiCliPlugin(ToolPlugin):
    """Plugin for setting up Gemini CLI authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "gemini-cli"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_gemini_config_dir(self) -> Path:
        """Get the Gemini configuration directory"""
        # Get the actual username from the config if available
        username = self.config.get("username", "cubbi")
        return Path(f"/home/{username}/.config/gemini")

    def _get_gemini_cache_dir(self) -> Path:
        """Get the Gemini cache directory"""
        # Get the actual username from the config if available
        username = self.config.get("username", "cubbi")
        return Path(f"/home/{username}/.cache/gemini")

    def _ensure_gemini_dirs(self) -> tuple[Path, Path]:
        """Ensure Gemini directories exist with correct ownership"""
        config_dir = self._get_gemini_config_dir()
        cache_dir = self._get_gemini_cache_dir()

        # Create directories
        for directory in [config_dir, cache_dir]:
            try:
                directory.mkdir(mode=0o755, parents=True, exist_ok=True)
                self._set_ownership(directory)
            except OSError as e:
                self.status.log(
                    f"Failed to create Gemini directory {directory}: {e}", "ERROR"
                )

        return config_dir, cache_dir

    def initialize(self) -> bool:
        """Initialize Gemini CLI configuration"""
        self.status.log("Setting up Gemini CLI configuration...")

        # Ensure Gemini directories exist
        config_dir, cache_dir = self._ensure_gemini_dirs()

        # Set up authentication and configuration
        auth_configured = self._setup_authentication(config_dir)
        config_created = self._create_configuration_file(config_dir)

        if auth_configured or config_created:
            self.status.log("✅ Gemini CLI configured successfully")
        else:
            self.status.log(
                "ℹ️ No API key found - Gemini CLI will require authentication",
                "INFO",
            )
            self.status.log(
                "   You can configure API keys using environment variables", "INFO"
            )

        # Always return True to allow container to start
        return True

    def _setup_authentication(self, config_dir: Path) -> bool:
        """Set up Gemini authentication"""
        api_key = self._get_api_key()

        if not api_key:
            return False

        # Create environment file for API key
        env_file = config_dir / ".env"
        try:
            with open(env_file, "w") as f:
                f.write(f"GEMINI_API_KEY={api_key}\n")

            # Set ownership and secure file permissions
            self._set_ownership(env_file)
            os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Gemini environment file at {env_file}")
            self.status.log("Added Gemini API key")
            return True

        except Exception as e:
            self.status.log(f"Failed to create environment file: {e}", "ERROR")
            return False

    def _get_api_key(self) -> str:
        """Get the Gemini API key from environment variables"""
        # Check multiple possible environment variable names
        for key_name in ["GEMINI_API_KEY", "GOOGLE_API_KEY"]:
            api_key = os.environ.get(key_name)
            if api_key:
                return api_key
        return ""

    def _create_configuration_file(self, config_dir: Path) -> bool:
        """Create Gemini CLI configuration file"""
        try:
            config = self._build_configuration()

            if not config:
                return False

            config_file = config_dir / "config.json"
            with open(config_file, "w") as f:
                json.dump(config, f, indent=2)

            # Set ownership and permissions
            self._set_ownership(config_file)
            os.chmod(config_file, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

            self.status.log(f"Created Gemini configuration at {config_file}")
            return True

        except Exception as e:
            self.status.log(f"Failed to create configuration file: {e}", "ERROR")
            return False

    def _build_configuration(self) -> Dict[str, Any]:
        """Build Gemini CLI configuration from environment variables"""
        config = {}

        # Model configuration
        model = os.environ.get("GEMINI_MODEL", "gemini-1.5-pro")
        if model:
            config["defaultModel"] = model
            self.status.log(f"Set default model to {model}")

        # Temperature setting
        temperature = os.environ.get("GEMINI_TEMPERATURE")
        if temperature:
            try:
                temp_value = float(temperature)
                if 0.0 <= temp_value <= 2.0:
                    config["temperature"] = temp_value
                    self.status.log(f"Set temperature to {temp_value}")
                else:
                    self.status.log(
                        f"Invalid temperature value {temperature}, using default",
                        "WARNING",
                    )
            except ValueError:
                self.status.log(
                    f"Invalid temperature format {temperature}, using default",
                    "WARNING",
                )

        # Max tokens setting
        max_tokens = os.environ.get("GEMINI_MAX_TOKENS")
        if max_tokens:
            try:
                tokens_value = int(max_tokens)
                if tokens_value > 0:
                    config["maxTokens"] = tokens_value
                    self.status.log(f"Set max tokens to {tokens_value}")
                else:
                    self.status.log(
                        f"Invalid max tokens value {max_tokens}, using default",
                        "WARNING",
                    )
            except ValueError:
                self.status.log(
                    f"Invalid max tokens format {max_tokens}, using default",
                    "WARNING",
                )

        # Search configuration
        search_enabled = os.environ.get("GEMINI_SEARCH_ENABLED", "false")
        if search_enabled.lower() in ["true", "false"]:
            config["searchEnabled"] = search_enabled.lower() == "true"
            if config["searchEnabled"]:
                self.status.log("Enabled Google Search grounding")

        # Debug mode
        debug_mode = os.environ.get("GEMINI_DEBUG", "false")
        if debug_mode.lower() in ["true", "false"]:
            config["debug"] = debug_mode.lower() == "true"
            if config["debug"]:
                self.status.log("Enabled debug mode")

        # Proxy settings
        for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
            proxy_value = os.environ.get(proxy_var)
            if proxy_value:
                config[proxy_var.lower()] = proxy_value
                self.status.log(f"Added proxy configuration: {proxy_var}")

        # Google Cloud project
        project = os.environ.get("GCLOUD_PROJECT")
        if project:
            config["project"] = project
            self.status.log(f"Set Google Cloud project to {project}")

        return config

    def setup_tool_configuration(self) -> bool:
        """Set up Gemini CLI configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Gemini CLI with available MCP servers if applicable"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Gemini CLI doesn't have native MCP support,
        # but we could potentially add custom integrations here
        self.status.log(
            f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
        )
        return True
cubbi/images/gemini-cli/test_gemini.py (new file, 312 lines)
@@ -0,0 +1,312 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Gemini CLI Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""

import os
import subprocess
import sys


def run_command(cmd, description="", check=True):
    """Run a shell command and return the result"""
    print(f"\n🔍 {description}")
    print(f"Running: {cmd}")

    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, check=check
        )

        if result.stdout:
            print("STDOUT:")
            print(result.stdout)

        if result.stderr:
            print("STDERR:")
            print(result.stderr)

        return result
    except subprocess.CalledProcessError as e:
        print(f"❌ Command failed with exit code {e.returncode}")
        if e.stdout:
            print("STDOUT:")
            print(e.stdout)
        if e.stderr:
            print("STDERR:")
            print(e.stderr)
        if check:
            raise
        return e


def test_docker_build():
    """Test Docker image build"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Build")
    print("=" * 60)

    # Get the directory containing this test file
    test_dir = os.path.dirname(os.path.abspath(__file__))

    result = run_command(
        f"cd {test_dir} && docker build -t monadical/cubbi-gemini-cli:latest .",
        "Building Gemini CLI Docker image",
    )

    if result.returncode == 0:
        print("✅ Gemini CLI Docker image built successfully")
        return True
    else:
        print("❌ Gemini CLI Docker image build failed")
        return False


def test_docker_image_exists():
    """Test if the Gemini CLI Docker image exists"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Existence")
    print("=" * 60)

    result = run_command(
        "docker images monadical/cubbi-gemini-cli:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
        "Checking if Gemini CLI Docker image exists",
    )

    if "monadical/cubbi-gemini-cli" in result.stdout:
        print("✅ Gemini CLI Docker image exists")
        return True
    else:
        print("❌ Gemini CLI Docker image not found")
        return False


def test_gemini_version():
    """Test basic Gemini CLI functionality in container"""
    print("\n" + "=" * 60)
    print("🧪 Testing Gemini CLI Version")
    print("=" * 60)

    result = run_command(
        "docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'gemini --version'",
        "Testing Gemini CLI version command",
    )

    if result.returncode == 0 and (
        "gemini" in result.stdout.lower() or "version" in result.stdout.lower()
    ):
        print("✅ Gemini CLI version command works")
        return True
    else:
        print("❌ Gemini CLI version command failed")
        return False


def test_api_key_configuration():
    """Test API key configuration and environment setup"""
    print("\n" + "=" * 60)
    print("🧪 Testing API Key Configuration")
    print("=" * 60)

    # Test with multiple API keys
    test_keys = {
        "GEMINI_API_KEY": "test-gemini-key",
        "GOOGLE_API_KEY": "test-google-key",
    }

    env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])

    result = run_command(
        f"docker run --rm {env_flags} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/.env 2>/dev/null || echo \"No .env file found\"'",
        "Testing API key configuration in .env file",
    )

    success = True
    if "test-gemini-key" in result.stdout:
        print("✅ GEMINI_API_KEY configured correctly")
    else:
        print("❌ GEMINI_API_KEY not found in configuration")
        success = False

    return success


def test_configuration_file():
    """Test Gemini CLI configuration file creation"""
    print("\n" + "=" * 60)
    print("🧪 Testing Configuration File")
    print("=" * 60)

    env_vars = "-e GEMINI_API_KEY='test-key' -e GEMINI_MODEL='gemini-1.5-pro' -e GEMINI_TEMPERATURE='0.5'"

    result = run_command(
        f"docker run --rm {env_vars} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/config.json 2>/dev/null || echo \"No config file found\"'",
        "Testing configuration file creation",
    )

    success = True
    if "gemini-1.5-pro" in result.stdout:
        print("✅ Default model configured correctly")
    else:
        print("❌ Default model not found in configuration")
        success = False

    if "0.5" in result.stdout:
        print("✅ Temperature configured correctly")
    else:
        print("❌ Temperature not found in configuration")
        success = False

    return success


def test_cubbi_cli_integration():
    """Test Cubbi CLI integration"""
    print("\n" + "=" * 60)
    print("🧪 Testing Cubbi CLI Integration")
    print("=" * 60)

    # Change to project root for cubbi commands
    project_root = os.path.dirname(
        os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    )

    # Test image listing
    result = run_command(
        f"cd {project_root} && uv run -m cubbi.cli image list",
        "Testing Cubbi CLI can see images",
        check=False,
    )

    if "gemini-cli" in result.stdout:
        print("✅ Cubbi CLI can list Gemini CLI image")
    else:
        print(
            "ℹ️ Gemini CLI image not yet registered with Cubbi CLI - this is expected during development"
        )

    # Test basic cubbi CLI works
    result = run_command(
        f"cd {project_root} && uv run -m cubbi.cli --help",
        "Testing basic Cubbi CLI functionality",
    )

    if result.returncode == 0 and "cubbi" in result.stdout.lower():
        print("✅ Cubbi CLI basic functionality works")
        return True
    else:
        print("❌ Cubbi CLI basic functionality failed")
        return False


def test_persistent_configuration():
    """Test persistent configuration directories"""
    print("\n" + "=" * 60)
    print("🧪 Testing Persistent Configuration")
    print("=" * 60)

    # Test that persistent directories are created
    result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-key' monadical/cubbi-gemini-cli:latest bash -c 'ls -la ~/.config/ && ls -la ~/.cache/'",
        "Testing persistent configuration directories",
    )

    success = True

    if "gemini" in result.stdout:
        print("✅ ~/.config/gemini directory exists")
    else:
        print("❌ ~/.config/gemini directory not found")
        success = False

    if "gemini" in result.stdout:
        print("✅ ~/.cache/gemini directory exists")
    else:
        print("❌ ~/.cache/gemini directory not found")
        success = False

    return success


def test_plugin_functionality():
    """Test the Gemini CLI plugin functionality"""
    print("\n" + "=" * 60)
    print("🧪 Testing Plugin Functionality")
    print("=" * 60)

    # Test plugin without API keys (should still work)
    result = run_command(
        "docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test without API keys\"'",
        "Testing plugin functionality without API keys",
    )

    if "No API key found - Gemini CLI will require authentication" in result.stdout:
        print("✅ Plugin handles missing API keys gracefully")
    else:
        print("ℹ️ Plugin API key handling test - check output above")

    # Test plugin with API keys
    result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-plugin-key' monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test with API keys\"'",
        "Testing plugin functionality with API keys",
    )

    if "Gemini CLI configured successfully" in result.stdout:
        print("✅ Plugin configures environment successfully")
        return True
    else:
        print("❌ Plugin environment configuration failed")
        return False


def main():
    """Run all tests"""
    print("🚀 Starting Gemini CLI Cubbi Image Tests")
    print("=" * 60)

    tests = [
        ("Docker Image Build", test_docker_build),
        ("Docker Image Exists", test_docker_image_exists),
        ("Cubbi CLI Integration", test_cubbi_cli_integration),
        ("Gemini CLI Version", test_gemini_version),
        ("API Key Configuration", test_api_key_configuration),
        ("Configuration File", test_configuration_file),
        ("Persistent Configuration", test_persistent_configuration),
        ("Plugin Functionality", test_plugin_functionality),
    ]

    results = {}

    for test_name, test_func in tests:
        try:
            results[test_name] = test_func()
        except Exception as e:
            print(f"❌ Test '{test_name}' failed with exception: {e}")
            results[test_name] = False

    # Print summary
    print("\n" + "=" * 60)
    print("📊 TEST SUMMARY")
    print("=" * 60)

    total_tests = len(tests)
    passed_tests = sum(1 for result in results.values() if result)
    failed_tests = total_tests - passed_tests

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")

    if failed_tests == 0:
        print("\n🎉 All tests passed! Gemini CLI image is ready for use.")
        return 0
    else:
        print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
        return 1


if __name__ == "__main__":
    sys.exit(main())
@@ -6,7 +6,6 @@ LABEL description="Goose for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
@@ -111,13 +111,6 @@ class GoosePlugin(ToolPlugin):
            config_data["GOOSE_PROVIDER"] = goose_provider
            self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")

            # If provider is OpenAI and OPENAI_URL is set, configure OPENAI_HOST
            if goose_provider.lower() == "openai":
                openai_url = os.environ.get("OPENAI_URL")
                if openai_url:
                    config_data["OPENAI_HOST"] = openai_url
                    self.status.log(f"Set OPENAI_HOST to {openai_url}")

        try:
            with config_file.open("w") as f:
                yaml.dump(config_data, f)
@@ -171,7 +164,7 @@ class GoosePlugin(ToolPlugin):
                "enabled": True,
                "name": server_name,
                "timeout": 60,
                "type": server.get("type", "sse"),
                "type": "sse",
                "uri": mcp_url,
                "envs": {},
            }
@@ -184,7 +177,7 @@ class GoosePlugin(ToolPlugin):
                "enabled": True,
                "name": server_name,
                "timeout": 60,
                "type": server.get("type", "sse"),
                "type": "sse",
                "uri": server_url,
                "envs": {},
            }
@@ -6,7 +6,6 @@ LABEL description="Opencode for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
@@ -31,22 +30,12 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Node.js
ARG NODE_VERSION=v22.16.0
# Install opencode-ai
RUN mkdir -p /opt/node && \
    ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then \
        NODE_ARCH=linux-x64; \
    elif [ "$ARCH" = "aarch64" ]; then \
        NODE_ARCH=linux-arm64; \
    else \
        echo "Unsupported architecture"; exit 1; \
    fi && \
    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
    curl -fsSL https://nodejs.org/dist/v22.16.0/node-v22.16.0-linux-x64.tar.gz -o node.tar.gz && \
    tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
    rm node.tar.gz

ENV PATH="/opt/node/bin:$PATH"
RUN npm i -g yarn
RUN npm i -g opencode-ai
@@ -15,8 +15,4 @@ volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.config/opencode"
    target: "/cubbi-config/config-opencode"
    type: "directory"
    description: "Opencode configuration"
persistent_configs: []
@@ -117,16 +117,6 @@ class OpencodePlugin(ToolPlugin):
                api_key = os.environ.get(env_var)
                if api_key:
                    auth_data[provider] = {"type": "api", "key": api_key}

                    # Add custom endpoint URL for OpenAI if available
                    if provider == "openai":
                        openai_url = os.environ.get("OPENAI_URL")
                        if openai_url:
                            auth_data[provider]["baseURL"] = openai_url
                            self.status.log(
                                f"Added OpenAI custom endpoint URL: {openai_url}"
                            )

                    self.status.log(f"Added {provider} API key to auth configuration")

        # Only write file if we have at least one API key
@@ -79,7 +79,6 @@ class MCPManager:
        name: str,
        url: str,
        headers: Dict[str, str] = None,
        mcp_type: Optional[str] = None,
        add_as_default: bool = True,
    ) -> Dict[str, Any]:
        """Add a remote MCP server.
@@ -98,7 +97,6 @@ class MCPManager:
            name=name,
            url=url,
            headers=headers or {},
            mcp_type=mcp_type,
        )

        # Add to the configuration
@@ -61,7 +61,6 @@ class RemoteMCP(BaseModel):
    type: str = "remote"
    url: str
    headers: Dict[str, str] = Field(default_factory=dict)
    mcp_type: Optional[str] = None


class DockerMCP(BaseModel):
@@ -103,7 +102,6 @@ class Session(BaseModel):
    status: SessionStatus
    container_id: Optional[str] = None
    ports: Dict[int, int] = Field(default_factory=dict)
    mcps: List[str] = Field(default_factory=list)


class Config(BaseModel):
@@ -111,5 +109,5 @@ class Config(BaseModel):
    images: Dict[str, Image] = Field(default_factory=dict)
    defaults: Dict[str, object] = Field(
        default_factory=dict
    )  # Can store strings, booleans, lists, or other values
    )  # Can store strings, booleans, or other values
    mcps: List[Dict[str, Any]] = Field(default_factory=list)
@@ -14,7 +14,6 @@ ENV_MAPPINGS = {
    "services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
    "services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
    "services.openai.api_key": "OPENAI_API_KEY",
    "services.openai.url": "OPENAI_URL",
    "services.anthropic.api_key": "ANTHROPIC_API_KEY",
    "services.openrouter.api_key": "OPENROUTER_API_KEY",
    "services.google.api_key": "GOOGLE_API_KEY",
@@ -1,6 +1,6 @@
[project]
name = "cubbi"
version = "0.3.0"
version = "0.2.0"
description = "Cubbi Container Tool"
readme = "README.md"
requires-python = ">=3.12"
@@ -93,212 +93,21 @@ def test_mcp_remove(cli_runner, patched_config_manager):
|
||||
],
|
||||
)
|
||||
|
||||
# Mock the container_manager.list_sessions to return sessions without MCPs
|
||||
with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
|
||||
mock_list_sessions.return_value = []
|
||||
# Mock the get_mcp and remove_mcp methods
|
||||
with patch("cubbi.cli.mcp_manager.get_mcp") as mock_get_mcp:
|
||||
# First make get_mcp return our MCP
|
||||
mock_get_mcp.return_value = {
|
||||
"name": "test-mcp",
|
||||
"type": "remote",
|
||||
"url": "http://test-server.com/sse",
|
||||
"headers": {"Authorization": "Bearer test-token"},
|
||||
}
|
||||
|
||||
# Mock the remove_mcp method
|
||||
with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
|
||||
# Make remove_mcp return True (successful removal)
|
||||
mock_remove_mcp.return_value = True
|
||||
# Remove the MCP server
|
||||
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
|
||||
|
||||
# Remove the MCP server
|
||||
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
|
||||
|
||||
# Just check it ran successfully with exit code 0
|
||||
assert result.exit_code == 0
|
||||
assert "Removed MCP server 'test-mcp'" in result.stdout
|
||||
|
||||
|
||||
def test_mcp_remove_with_active_sessions(cli_runner, patched_config_manager):
    """Test removing an MCP server that is used by active sessions."""
    from cubbi.models import Session, SessionStatus

    # Add a remote MCP server
    patched_config_manager.set(
        "mcps",
        [
            {
                "name": "test-mcp",
                "type": "remote",
                "url": "http://test-server.com/sse",
                "headers": {"Authorization": "Bearer test-token"},
            }
        ],
    )

    # Create mock sessions that use the MCP
    mock_sessions = [
        Session(
            id="session-1",
            name="test-session-1",
            image="goose",
            status=SessionStatus.RUNNING,
            container_id="container-1",
            mcps=["test-mcp", "other-mcp"],
        ),
        Session(
            id="session-2",
            name="test-session-2",
            image="goose",
            status=SessionStatus.RUNNING,
            container_id="container-2",
            mcps=["other-mcp"],  # This one doesn't use test-mcp
        ),
        Session(
            id="session-3",
            name="test-session-3",
            image="goose",
            status=SessionStatus.RUNNING,
            container_id="container-3",
            mcps=["test-mcp"],  # This one uses test-mcp
        ),
    ]

    # Mock the container_manager.list_sessions to return our sessions
    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
        mock_list_sessions.return_value = mock_sessions

        # Mock the remove_mcp method
        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
            # Make remove_mcp return True (successful removal)
            mock_remove_mcp.return_value = True

            # Remove the MCP server
            result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])

            # Check it ran successfully with exit code 0
            assert result.exit_code == 0
            assert "Removed MCP server 'test-mcp'" in result.stdout
            # Check warning about affected sessions
            assert (
                "Warning: Found 2 active sessions using MCP 'test-mcp'" in result.stdout
            )
            assert "session-1" in result.stdout
            assert "session-3" in result.stdout
            # session-2 should not be mentioned since it doesn't use test-mcp
            assert "session-2" not in result.stdout


def test_mcp_remove_nonexistent(cli_runner, patched_config_manager):
    """Test removing a non-existent MCP server."""
    # No MCPs configured
    patched_config_manager.set("mcps", [])

    # Mock the container_manager.list_sessions to return empty list
    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
        mock_list_sessions.return_value = []

        # Mock the remove_mcp method to return False (MCP not found)
        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
            mock_remove_mcp.return_value = False

            # Try to remove a non-existent MCP server
            result = cli_runner.invoke(app, ["mcp", "remove", "nonexistent-mcp"])

            # Check it ran successfully but reported not found
            assert result.exit_code == 0
            assert "MCP server 'nonexistent-mcp' not found" in result.stdout


def test_session_mcps_attribute():
    """Test that Session model has mcps attribute and can be populated correctly."""
    from cubbi.models import Session, SessionStatus

    # Test that Session can be created with mcps attribute
    session = Session(
        id="test-session",
        name="test-session",
        image="goose",
        status=SessionStatus.RUNNING,
        container_id="test-container",
        mcps=["mcp1", "mcp2"],
    )

    assert session.mcps == ["mcp1", "mcp2"]

    # Test that Session can be created with empty mcps list
    session_empty = Session(
        id="test-session-2",
        name="test-session-2",
        image="goose",
        status=SessionStatus.RUNNING,
        container_id="test-container-2",
    )

    assert session_empty.mcps == []  # Should default to empty list


def test_session_mcps_from_container_labels():
    """Test that Session mcps are correctly populated from container labels."""
    from unittest.mock import Mock

    from cubbi.container import ContainerManager

    # Mock a container with MCP labels
    mock_container = Mock()
    mock_container.id = "test-container-id"
    mock_container.status = "running"
    mock_container.labels = {
        "cubbi.session": "true",
        "cubbi.session.id": "test-session",
        "cubbi.session.name": "test-session-name",
        "cubbi.image": "goose",
        "cubbi.mcps": "mcp1,mcp2,mcp3",  # Test with multiple MCPs
    }
    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}

    # Mock Docker client
    mock_client = Mock()
    mock_client.containers.list.return_value = [mock_container]

    # Create container manager with mocked client
    with patch("cubbi.container.docker.from_env") as mock_docker:
        mock_docker.return_value = mock_client
        mock_client.ping.return_value = True

        container_manager = ContainerManager()
        sessions = container_manager.list_sessions()

        assert len(sessions) == 1
        session = sessions[0]
        assert session.id == "test-session"
        assert session.mcps == ["mcp1", "mcp2", "mcp3"]


def test_session_mcps_from_empty_container_labels():
    """Test that Session mcps are correctly handled when container has no MCP labels."""
    from unittest.mock import Mock

    from cubbi.container import ContainerManager

    # Mock a container without MCP labels
    mock_container = Mock()
    mock_container.id = "test-container-id"
    mock_container.status = "running"
    mock_container.labels = {
        "cubbi.session": "true",
        "cubbi.session.id": "test-session",
        "cubbi.session.name": "test-session-name",
        "cubbi.image": "goose",
        # No cubbi.mcps label
    }
    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}

    # Mock Docker client
    mock_client = Mock()
    mock_client.containers.list.return_value = [mock_container]

    # Create container manager with mocked client
    with patch("cubbi.container.docker.from_env") as mock_docker:
        mock_docker.return_value = mock_client
        mock_client.ping.return_value = True

        container_manager = ContainerManager()
        sessions = container_manager.list_sessions()

        assert len(sessions) == 1
        session = sessions[0]
        assert session.id == "test-session"
        assert session.mcps == []  # Should be empty list when no MCPs
        # Just check it ran successfully with exit code 0
        assert result.exit_code == 0


@pytest.mark.requires_docker
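The two label-based tests above rely on MCP names round-tripping through a single comma-separated `cubbi.mcps` container label, with a missing label yielding an empty list. A minimal sketch of that parsing rule, using a hypothetical `mcps_from_labels` helper rather than the project's actual implementation:

```python
def mcps_from_labels(labels: dict) -> list:
    """Parse the comma-separated `cubbi.mcps` label into a list of names.

    A missing or empty label yields an empty list, matching the behaviour
    the tests above expect.
    """
    raw = labels.get("cubbi.mcps", "")
    # split(",") on an empty string returns [""], so filter out empty names
    return [name for name in raw.split(",") if name]
```

Filtering empty names also keeps stray trailing commas in the label from producing phantom MCP entries.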