3 Commits

Author SHA1 Message Date
Your Name
b173bcd08c Add missing image build 2025-06-26 16:30:51 -04:00
Your Name
96243a99e4 fix: resolve gemini-cli container initialization and testing issues
- Fix hardcoded paths in tests to use dynamic path resolution
- Update gemini-cli plugin to use actual username instead of hardcoded "cubbi"
- Simplify persistent configuration test to use ~ instead of absolute paths
- Remove unused imports and improve test reliability
- Ensure configuration files are created in correct user directories

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-26 16:24:11 -04:00
Your Name
ae20e6a455 feat: add Gemini CLI image with fixed user/group handling
- Add complete Gemini CLI container image for AI-powered development
- Support Google Gemini models (1.5 Pro, Flash) with configurable settings
- Include comprehensive plugin system for authentication and configuration
- Fix user/group creation conflicts with existing base image users
- Dynamic username handling for compatibility with node:20-slim base
- Persistent configuration for .config/gemini and .cache/gemini
- Test suite for Docker build, API key setup, and Cubbi integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-26 15:13:32 -04:00
44 changed files with 1238 additions and 3827 deletions

View File

@@ -0,0 +1,17 @@
name: Conventional commit PR
on: [pull_request]
jobs:
  cog_check_job:
    runs-on: ubuntu-latest
    name: check conventional commit compliance
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          # pick the pr HEAD instead of the merge commit
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Conventional commit check
        uses: cocogitto/cocogitto-action@v3

View File

@@ -34,7 +34,7 @@ jobs:
run: |
uv tool install --with-editable . .
cubbi image build goose
cubbi image build aider
cubbi image build gemini-cli
- name: Tests
run: |

View File

@@ -1,187 +1,6 @@
# CHANGELOG
## v0.4.0 (2025-08-06)
### Documentation
- Update readme ([#25](https://github.com/Monadical-SAS/cubbi/pull/25),
[`9dc1158`](https://github.com/Monadical-SAS/cubbi/commit/9dc11582a21371a069d407390308340a87358a9f))
doc: update readme
### Features
- Add user port support ([#26](https://github.com/Monadical-SAS/cubbi/pull/26),
[`75c9849`](https://github.com/Monadical-SAS/cubbi/commit/75c9849315aebb41ffbd5ac942c7eb3c4a151663))
* feat: add user port support
* fix: fix unit test and improve isolation
* refactor: remove some fixture
- Make opencode beautiful by default ([#24](https://github.com/Monadical-SAS/cubbi/pull/24),
[`b8ecad6`](https://github.com/Monadical-SAS/cubbi/commit/b8ecad6227f6a328517edfc442cd9bcf4d3361dc))
opencode: try having compatible default theme
- Support for crush ([#23](https://github.com/Monadical-SAS/cubbi/pull/23),
[`472f030`](https://github.com/Monadical-SAS/cubbi/commit/472f030924e58973dea0a41188950540550c125d))
## v0.3.0 (2025-07-31)
### Bug Fixes
- Claudecode and opencode arm64 images ([#21](https://github.com/Monadical-SAS/cubbi/pull/21),
[`dba7a7c`](https://github.com/Monadical-SAS/cubbi/commit/dba7a7c1efcc04570a92ecbc4eee39eb6353aaea))
- Update readme
([`4958b07`](https://github.com/Monadical-SAS/cubbi/commit/4958b07401550fb5a6751b99a257eda6c4558ea4))
### Continuous Integration
- Remove conventional commit, as only PR is required
([`afae8a1`](https://github.com/Monadical-SAS/cubbi/commit/afae8a13e1ea02801b2e5c9d5c84aa65a32d637c))
### Features
- Add --mcp-type option for remote MCP servers
([`d41faf6`](https://github.com/Monadical-SAS/cubbi/commit/d41faf6b3072d4f8bdb2adc896125c7fd0d6117d))
Auto-detects connection type from URL (/sse -> sse, /mcp -> streamable_http) or allows manual
specification. Updates goose plugin to use actual MCP type instead of hardcoded sse.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add Claude Code image support ([#16](https://github.com/Monadical-SAS/cubbi/pull/16),
[`b28c2bd`](https://github.com/Monadical-SAS/cubbi/commit/b28c2bd63e324f875b2d862be9e0afa4a7a17ffc))
* feat: add Claude Code image support
Add a new Cubbi image for Claude Code (Anthropic's official CLI) with:
- Full Claude Code CLI functionality via NPM package
- Secure API key management with multiple authentication options
- Enterprise support (Bedrock, Vertex AI, proxy configuration)
- Persistent configuration and cache directories
- Comprehensive test suite and documentation
The image allows users to run Claude Code in containers with proper isolation, persistent settings,
and seamless Cubbi integration. It gracefully handles missing API keys to allow flexible
authentication.
Also adds optional Claude Code API keys to container.py for enterprise deployments.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Pre-commit fixes
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Your Name <you@example.com>
- Add configuration override in session create with --config/-c
([`672b8a8`](https://github.com/Monadical-SAS/cubbi/commit/672b8a8e315598d98f40d269dfcfbde6203cbb57))
- Add MCP tracking to sessions ([#19](https://github.com/Monadical-SAS/cubbi/pull/19),
[`d750e64`](https://github.com/Monadical-SAS/cubbi/commit/d750e64608998f6f3a03928bba18428f576b412f))
Add mcps field to Session model to track active MCP servers and populate it from container labels in
ContainerManager. Enhance MCP remove command to warn when removing servers used by active
sessions.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
- Add network filtering with domain restrictions
([#22](https://github.com/Monadical-SAS/cubbi/pull/22),
[`2eb15a3`](https://github.com/Monadical-SAS/cubbi/commit/2eb15a31f8bb97f93461bea5e567cc2ccde3f86c))
* fix: remove config override logging to prevent API key exposure
* feat: add network filtering with domain restrictions
- Add --domains flag to restrict container network access to specific domains/ports
- Integrate monadicalsas/network-filter container for network isolation
- Support domain patterns like 'example.com:443', '*.api.com'
- Add defaults.domains configuration option
- Automatically handle network-filter container lifecycle
- Prevent conflicts between --domains and --network options
* docs: add --domains option to README usage examples
* docs: remove wildcard domain example from --domains help
Wildcard domains are not currently supported by network-filter
- Add ripgrep and openssh-client in images ([#15](https://github.com/Monadical-SAS/cubbi/pull/15),
[`e70ec35`](https://github.com/Monadical-SAS/cubbi/commit/e70ec3538ba4e02a60afedca583da1c35b7b6d7a))
- Add sudo and sudoers ([#20](https://github.com/Monadical-SAS/cubbi/pull/20),
[`9c8ddbb`](https://github.com/Monadical-SAS/cubbi/commit/9c8ddbb3f3f2fc97db9283898b6a85aee7235fae))
* feat: add sudo and sudoers
* Update cubbi/images/cubbi_init.py
Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>
---------
- Implement Aider AI pair programming support
([#17](https://github.com/Monadical-SAS/cubbi/pull/17),
[`fc0d6b5`](https://github.com/Monadical-SAS/cubbi/commit/fc0d6b51af12ddb0bd8655309209dd88e7e4d6f1))
* feat: implement Aider AI pair programming support
- Add comprehensive Aider Docker image with Python 3.12 and system pip installation
- Implement aider_plugin.py for secure API key management and environment configuration
- Support multiple LLM providers: OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter
- Add persistent configuration for ~/.aider/ and ~/.cache/aider/ directories
- Create comprehensive documentation with usage examples and troubleshooting
- Include automated test suite with 6 test categories covering all functionality
- Update container.py to support DEEPSEEK_API_KEY and GEMINI_API_KEY
- Integrate with Cubbi CLI for seamless session management
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Fix pytest for aider
* Fix pre-commit
---------
Co-authored-by: Your Name <you@example.com>
- Include new image opencode ([#14](https://github.com/Monadical-SAS/cubbi/pull/14),
[`5fca51e`](https://github.com/Monadical-SAS/cubbi/commit/5fca51e5152dcf7503781eb707fa04414cf33c05))
* feat: include new image opencode
* docs: update readme
- Support config `openai.url` for goose/opencode/aider
([`da5937e`](https://github.com/Monadical-SAS/cubbi/commit/da5937e70829b88a66f96c3ce7be7dacfc98facb))
### Refactoring
- New image layout and organization ([#13](https://github.com/Monadical-SAS/cubbi/pull/13),
[`e5121dd`](https://github.com/Monadical-SAS/cubbi/commit/e5121ddea4230e78a05a85c4ce668e0c169b5ace))
* refactor: rework how images are defined, in order to create other wrappers for other tools
* refactor: fix issues with ownership
* refactor: images now share information with other image types
* fix: update readme
## v0.2.0 (2025-05-21)
### Continuous Integration

View File

@@ -48,15 +48,3 @@ Use uv instead:
- **Configuration**: Use environment variables with YAML for configuration
Refer to SPECIFICATIONS.md for detailed architecture and implementation guidance.
## Cubbi images
A cubbi image is a flavored Docker image that wraps a tool (say, goose) and dynamically configures that tool when the container starts. All cubbi images are defined in the `cubbi/images` directory.
Each image must have (taking the goose image as an example):
- `goose/cubbi_image.yaml`, the image metadata: list of persistent paths, etc.
- `goose/Dockerfile`, used to build the cubbi image with the cubbi tooling
- `goose/goose_plugin.py`, a plugin file named after the image and specific to it, which dynamically configures the container at startup with the user's preferences (passed via environment variables). All plugins import `cubbi_init.py`, but that file is shared across all images, so it is normal for the plugin's import to fail in the source tree: the build system copies the file into place during the build.
- `goose/README.md`, a short readme about the image
If you are creating a new image, look at the existing images (goose, opencode) for reference.
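As a rough sketch of what such a plugin looks like, here is a minimal hypothetical `mytool/mytool_plugin.py`, modeled on the `aider_plugin.py` shipped in this repository (the `ToolPlugin` base class and `self.status.log` come from `cubbi_init.py`; the `mytool` image and its `MYTOOL_API_KEY` variable are invented for illustration):

```python
# Hypothetical mytool/mytool_plugin.py - a minimal sketch only.
# The import below resolves only after the build system has copied
# cubbi_init.py next to this file during the image build.
import os
from pathlib import Path

from cubbi_init import ToolPlugin


class MyToolPlugin(ToolPlugin):
    """Write a small config file for 'mytool' when the container starts."""

    @property
    def tool_name(self) -> str:
        return "mytool"

    def initialize(self) -> bool:
        config_dir = Path("/home/cubbi/.config/mytool")
        config_dir.mkdir(parents=True, exist_ok=True)

        # Pick up a user preference passed in as an environment variable
        api_key = os.environ.get("MYTOOL_API_KEY")  # invented for this sketch
        if api_key:
            (config_dir / "config").write_text(f"api_key={api_key}\n")
            self.status.log("mytool configured")
        else:
            self.status.log("No MYTOOL_API_KEY set - starting unconfigured")

        # Always return True so a missing key never blocks container startup
        return True
```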

View File

@@ -2,7 +2,7 @@
# Cubbi - Container Tool
Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments, with support for MCP servers. It supports [Aider](https://github.com/Aider-AI/aider), [Crush](https://github.com/charmbracelet/crush), [Claude Code](https://github.com/anthropics/claude-code), [Goose](https://github.com/block/goose), [Opencode](https://github.com/sst/opencode).
Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments. It works with both local Docker and a dedicated remote web service that manages containers in a Docker-in-Docker (DinD) environment. Cubbi also supports connecting to MCP (Model Context Protocol) servers to extend AI tools with additional capabilities.
![PyPI - Version](https://img.shields.io/pypi/v/cubbi)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cubbi)
@@ -17,6 +17,7 @@ Cubbi is a command-line tool for managing ephemeral containers that run AI tools
- `cubbix` - Shortcut for `cubbi session create`
- `cubbix .` - Mount the current directory
- `cubbix /path/to/dir` - Mount a specific directory
- `cubbix https://github.com/user/repo` - Clone a repository
## 📋 Requirements
@@ -26,6 +27,9 @@ Cubbi is a command-line tool for managing ephemeral containers that run AI tools
## 📥 Installation
```bash
# Via pip
pip install cubbi
# Via uv
uv tool install cubbi
@@ -39,7 +43,6 @@ Then compile your first image:
```bash
cubbi image build goose
cubbi image build opencode
cubbi image build crush
```
### For Developers
@@ -77,19 +80,9 @@ cubbi session connect SESSION_ID
# Close a session when done
cubbi session close SESSION_ID
# Close a session quickly (kill instead of graceful stop)
cubbi session close SESSION_ID --kill
# Close all sessions at once
cubbi session close --all
# Close all sessions quickly
cubbi session close --all --kill
# Create a session with a specific image
cubbix --image goose
cubbix --image opencode
cubbix --image crush
# Create a session with environment variables
cubbix -e VAR1=value1 -e VAR2=value2
@@ -102,17 +95,9 @@ cubbix -v ~/data:/data -v ./configs:/etc/app/config
cubbix .
cubbix /path/to/project
# Forward ports from container to host
cubbix --port 8000 # Forward port 8000
cubbix --port 8000,3000,5173 # Forward multiple ports (comma-separated)
cubbix --port 8000 --port 3000 # Forward multiple ports (repeated flag)
# Connect to external Docker networks
cubbix --network teamnet --network dbnet
# Restrict network access to specific domains
cubbix --domains github.com --domains "api.example.com:443"
# Connect to MCP servers for extended capabilities
cubbix --mcp github --mcp jira
@@ -140,17 +125,7 @@ cubbix --ssh
## 🖼️ Image Management
Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools
**Supported Images**
| Image Name | Langtrace Support |
|------------|-------------------|
| goose | yes |
| opencode | no |
| claudecode | no |
| aider | no |
| crush | no |
Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools:
```bash
# List available images
@@ -159,12 +134,10 @@ cubbi image list
# Get detailed information about an image
cubbi image info goose
cubbi image info opencode
cubbi image info crush
# Build an image
cubbi image build goose
cubbi image build opencode
cubbi image build crush
```
Images are defined in the `cubbi/images/` directory, with each subdirectory containing:
@@ -249,26 +222,6 @@ cubbi config volume remove /local/path
Default volumes will be combined with any volumes specified using the `-v` flag when creating a session.
### Default Ports Configuration
You can configure default ports that will be automatically forwarded in every new session:
```bash
# List default ports
cubbi config port list
# Add a single port to defaults
cubbi config port add 8000
# Add multiple ports to defaults (comma-separated)
cubbi config port add 8000,3000,5173
# Remove a port from defaults
cubbi config port remove 8000
```
Default ports will be combined with any ports specified using the `--port` flag when creating a session.
### Default MCP Servers Configuration
You can configure default MCP servers that sessions will automatically connect to:

View File

@@ -142,11 +142,6 @@ def create_session(
network: List[str] = typer.Option(
[], "--network", "-N", help="Connect to additional Docker networks"
),
port: List[str] = typer.Option(
[],
"--port",
help="Forward ports (e.g., '8000' or '8000,3000' or multiple --port flags)",
),
name: Optional[str] = typer.Option(None, "--name", "-n", help="Session name"),
run_command: Optional[str] = typer.Option(
None,
@@ -178,17 +173,6 @@ def create_session(
None, "--provider", "-p", help="Provider to use"
),
ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
config: List[str] = typer.Option(
[],
"--config",
"-c",
help="Override configuration values (KEY=VALUE) for this session only",
),
domains: List[str] = typer.Option(
[],
"--domains",
help="Restrict network access to specified domains/ports (e.g., 'example.com:443', 'api.github.com')",
),
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
"""Create a new Cubbi session
@@ -205,66 +189,16 @@ def create_session(
target_gid = gid if gid is not None else os.getgid()
console.print(f"Using UID: {target_uid}, GID: {target_gid}")
# Create a temporary user config manager with overrides
temp_user_config = UserConfigManager()
# Parse and apply config overrides
config_overrides = {}
for config_item in config:
if "=" in config_item:
key, value = config_item.split("=", 1)
# Convert string value to appropriate type
if value.lower() == "true":
typed_value = True
elif value.lower() == "false":
typed_value = False
elif value.isdigit():
typed_value = int(value)
else:
typed_value = value
config_overrides[key] = typed_value
else:
console.print(
f"[yellow]Warning: Ignoring invalid config format: {config_item}. Use KEY=VALUE.[/yellow]"
)
# Apply overrides to temp config (without saving)
for key, value in config_overrides.items():
# Handle shorthand service paths (e.g., "langfuse.url")
if (
"." in key
and not key.startswith("services.")
and not any(
key.startswith(section + ".")
for section in ["defaults", "docker", "remote", "ui"]
)
):
service, setting = key.split(".", 1)
key = f"services.{service}.{setting}"
# Split the key path and navigate to set the value
parts = key.split(".")
config_dict = temp_user_config.config
# Navigate to the containing dictionary
for part in parts[:-1]:
if part not in config_dict:
config_dict[part] = {}
config_dict = config_dict[part]
# Set the value without saving
config_dict[parts[-1]] = value
# Use default image from user configuration (with overrides applied)
# Use default image from user configuration
if not image:
image_name = temp_user_config.get(
image_name = user_config.get(
"defaults.image", config_manager.config.defaults.get("image", "goose")
)
else:
image_name = image
# Start with environment variables from user configuration (with overrides applied)
environment = temp_user_config.get_environment_variables()
# Start with environment variables from user configuration
environment = user_config.get_environment_variables()
# Override with environment variables from command line
for var in env:
@@ -280,7 +214,7 @@ def create_session(
volume_mounts = {}
# Get default volumes from user config
default_volumes = temp_user_config.get("defaults.volumes", [])
default_volumes = user_config.get("defaults.volumes", [])
# Combine default volumes with user-specified volumes
all_volumes = default_volumes + list(volume)
@@ -307,56 +241,15 @@ def create_session(
)
# Get default networks from user config
default_networks = temp_user_config.get("defaults.networks", [])
default_networks = user_config.get("defaults.networks", [])
# Combine default networks with user-specified networks, removing duplicates
all_networks = list(set(default_networks + network))
# Get default domains from user config
default_domains = temp_user_config.get("defaults.domains", [])
# Combine default domains with user-specified domains
all_domains = default_domains + list(domains)
# Check for conflict between network and domains
if all_domains and all_networks:
console.print(
"[yellow]Warning: --domains cannot be used with --network. Network restrictions will take precedence.[/yellow]"
)
# Get default ports from user config
default_ports = temp_user_config.get("defaults.ports", [])
# Parse and combine ports from command line
session_ports = []
for port_arg in port:
try:
parsed_ports = [int(p.strip()) for p in port_arg.split(",")]
# Validate port ranges
invalid_ports = [p for p in parsed_ports if not (1 <= p <= 65535)]
if invalid_ports:
console.print(
f"[red]Error: Invalid ports {invalid_ports}. Ports must be between 1 and 65535[/red]"
)
return
session_ports.extend(parsed_ports)
except ValueError:
console.print(
f"[yellow]Warning: Ignoring invalid port format: {port_arg}. Use integers only.[/yellow]"
)
# Combine default ports with session ports, removing duplicates
all_ports = list(set(default_ports + session_ports))
if all_ports:
console.print(f"Forwarding ports: {', '.join(map(str, all_ports))}")
# Get default MCPs from user config if none specified
all_mcps = mcp if isinstance(mcp, list) else []
if not all_mcps:
default_mcps = temp_user_config.get("defaults.mcps", [])
default_mcps = user_config.get("defaults.mcps", [])
all_mcps = default_mcps
if default_mcps:
@@ -365,9 +258,6 @@ def create_session(
if all_networks:
console.print(f"Networks: {', '.join(all_networks)}")
if all_domains:
console.print(f"Domain restrictions: {', '.join(all_domains)}")
# Show volumes that will be mounted
if volume_mounts:
console.print("Volumes:")
@@ -387,16 +277,6 @@ def create_session(
"[yellow]Warning: --no-shell is ignored without --run[/yellow]"
)
# Use model and provider from config overrides if not explicitly provided
final_model = (
model if model is not None else temp_user_config.get("defaults.model")
)
final_provider = (
provider
if provider is not None
else temp_user_config.get("defaults.provider")
)
session = container_manager.create_session(
image_name=image_name,
project=path_or_url,
@@ -406,16 +286,14 @@ def create_session(
mount_local=mount_local,
volumes=volume_mounts,
networks=all_networks,
ports=all_ports,
mcp=all_mcps,
run_command=run_command,
no_shell=no_shell,
uid=target_uid,
gid=target_gid,
ssh=ssh,
model=final_model,
provider=final_provider,
domains=all_domains,
model=model,
provider=provider,
)
if session:
@@ -429,7 +307,7 @@ def create_session(
console.print(f" {container_port} -> {host_port}")
# Auto-connect based on user config, unless overridden by --no-connect flag or --no-shell
auto_connect = temp_user_config.get("defaults.connect", True)
auto_connect = user_config.get("defaults.connect", True)
# When --no-shell is used with --run, show logs instead of connecting
if no_shell and run_command:
@@ -492,9 +370,6 @@ def create_session(
def close_session(
session_id: Optional[str] = typer.Argument(None, help="Session ID to close"),
all_sessions: bool = typer.Option(False, "--all", help="Close all active sessions"),
kill: bool = typer.Option(
False, "--kill", help="Forcefully kill containers instead of graceful stop"
),
) -> None:
"""Close a Cubbi session or all sessions"""
if all_sessions:
@@ -518,9 +393,7 @@ def close_session(
)
# Start closing sessions with progress updates
count, success = container_manager.close_all_sessions(
update_progress, kill=kill
)
count, success = container_manager.close_all_sessions(update_progress)
# Final result
if success:
@@ -529,7 +402,7 @@ def close_session(
console.print("[red]Failed to close all sessions[/red]")
elif session_id:
with console.status(f"Closing session {session_id}..."):
success = container_manager.close_session(session_id, kill=kill)
success = container_manager.close_session(session_id)
if success:
console.print(f"[green]Session {session_id} closed successfully[/green]")
@@ -751,10 +624,6 @@ config_app.add_typer(network_app, name="network", no_args_is_help=True)
volume_app = typer.Typer(help="Manage default volumes")
config_app.add_typer(volume_app, name="volume", no_args_is_help=True)
# Create a port subcommand for config
port_app = typer.Typer(help="Manage default ports")
config_app.add_typer(port_app, name="port", no_args_is_help=True)
# Create an MCP subcommand for config
config_mcp_app = typer.Typer(help="Manage default MCP servers")
config_app.add_typer(config_mcp_app, name="mcp", no_args_is_help=True)
@@ -1065,91 +934,6 @@ def remove_volume(
console.print(f"[green]Removed volume '{volume_to_remove}' from defaults[/green]")
# Port configuration commands
@port_app.command("list")
def list_ports() -> None:
"""List all default ports"""
ports = user_config.get("defaults.ports", [])
if not ports:
console.print("No default ports configured")
return
table = Table(show_header=True, header_style="bold")
table.add_column("Port")
for port in ports:
table.add_row(str(port))
console.print(table)
@port_app.command("add")
def add_port(
ports_arg: str = typer.Argument(
..., help="Port(s) to add to defaults (e.g., '8000' or '8000,3000,5173')"
),
) -> None:
"""Add port(s) to default ports"""
current_ports = user_config.get("defaults.ports", [])
# Parse ports (support comma-separated)
try:
if "," in ports_arg:
new_ports = [int(p.strip()) for p in ports_arg.split(",")]
else:
new_ports = [int(ports_arg)]
except ValueError:
console.print(
"[red]Error: Invalid port format. Use integers only (e.g., '8000' or '8000,3000')[/red]"
)
return
# Validate port ranges
invalid_ports = [p for p in new_ports if not (1 <= p <= 65535)]
if invalid_ports:
console.print(
f"[red]Error: Invalid ports {invalid_ports}. Ports must be between 1 and 65535[/red]"
)
return
# Add new ports, avoiding duplicates
added_ports = []
for port in new_ports:
if port not in current_ports:
current_ports.append(port)
added_ports.append(port)
if not added_ports:
if len(new_ports) == 1:
console.print(f"Port {new_ports[0]} is already in defaults")
else:
console.print(f"All ports {new_ports} are already in defaults")
return
user_config.set("defaults.ports", current_ports)
if len(added_ports) == 1:
console.print(f"[green]Added port {added_ports[0]} to defaults[/green]")
else:
console.print(f"[green]Added ports {added_ports} to defaults[/green]")
@port_app.command("remove")
def remove_port(
port: int = typer.Argument(..., help="Port to remove from defaults"),
) -> None:
"""Remove a port from default ports"""
ports = user_config.get("defaults.ports", [])
if port not in ports:
console.print(f"Port {port} is not in defaults")
return
ports.remove(port)
user_config.set("defaults.ports", ports)
console.print(f"[green]Removed port {port} from defaults[/green]")
# MCP Management Commands
@@ -1722,11 +1506,6 @@ def add_mcp(
def add_remote_mcp(
name: str = typer.Argument(..., help="MCP server name"),
url: str = typer.Argument(..., help="URL of the remote MCP server"),
mcp_type: str = typer.Option(
"auto",
"--mcp-type",
help="MCP connection type: sse, streamable_http, stdio, or auto (default: auto)",
),
header: List[str] = typer.Option(
[], "--header", "-H", help="HTTP headers (format: KEY=VALUE)"
),
@@ -1735,22 +1514,6 @@ def add_remote_mcp(
),
) -> None:
"""Add a remote MCP server"""
if mcp_type == "auto":
if url.endswith("/sse"):
mcp_type = "sse"
elif url.endswith("/mcp"):
mcp_type = "streamable_http"
else:
console.print(
f"[red]Cannot auto-detect MCP type from URL '{url}'. Please specify --mcp-type (sse, streamable_http, or stdio)[/red]"
)
return
elif mcp_type not in ["sse", "streamable_http", "stdio"]:
console.print(
f"[red]Invalid MCP type '{mcp_type}'. Must be: sse, streamable_http, stdio, or auto[/red]"
)
return
# Parse headers
headers = {}
for h in header:
@@ -1765,7 +1528,7 @@ def add_remote_mcp(
try:
with console.status(f"Adding remote MCP server '{name}'..."):
mcp_manager.add_remote_mcp(
name, url, headers, mcp_type=mcp_type, add_as_default=not no_default
name, url, headers, add_as_default=not no_default
)
console.print(f"[green]Added remote MCP server '{name}'[/green]")

View File

@@ -64,7 +64,6 @@ class ConfigManager:
},
defaults={
"image": "goose",
"domains": [],
},
)

View File

@@ -12,7 +12,7 @@ from docker.errors import DockerException, ImageNotFound
from .config import ConfigManager
from .mcp import MCPManager
from .models import Image, Session, SessionStatus
from .models import Session, SessionStatus
from .session import SessionManager
from .user_config import UserConfigManager
@@ -107,21 +107,12 @@ class ContainerManager:
elif container.status == "created":
status = SessionStatus.CREATING
# Get MCP list from container labels
mcps_str = labels.get("cubbi.mcps", "")
mcps = (
[mcp.strip() for mcp in mcps_str.split(",") if mcp.strip()]
if mcps_str
else []
)
session = Session(
id=session_id,
name=labels.get("cubbi.session.name", f"cubbi-{session_id}"),
image=labels.get("cubbi.image", "unknown"),
status=status,
container_id=container_id,
mcps=mcps,
)
# Get port mappings
@@ -154,7 +145,6 @@ class ContainerManager:
mount_local: bool = False,
volumes: Optional[Dict[str, Dict[str, str]]] = None,
networks: Optional[List[str]] = None,
ports: Optional[List[int]] = None,
mcp: Optional[List[str]] = None,
run_command: Optional[str] = None,
no_shell: bool = False,
@@ -163,7 +153,6 @@ class ContainerManager:
model: Optional[str] = None,
provider: Optional[str] = None,
ssh: bool = False,
domains: Optional[List[str]] = None,
) -> Optional[Session]:
"""Create a new Cubbi session
@@ -184,26 +173,13 @@ class ContainerManager:
model: Optional model to use
provider: Optional provider to use
ssh: Whether to start the SSH server in the container (default: False)
domains: Optional list of domains to restrict network access to (uses network-filter)
"""
try:
# Try to get image from config first
# Validate image exists
image = self.config_manager.get_image(image_name)
if not image:
# If not found in config, treat it as a Docker image name
print(
f"Image '{image_name}' not found in Cubbi config, using as Docker image..."
)
image = Image(
name=image_name,
description=f"Docker image: {image_name}",
version="latest",
maintainer="unknown",
image=image_name,
ports=[],
volumes=[],
persistent_configs=[],
)
print(f"Image '{image_name}' not found")
return None
# Generate session ID and name
session_id = self._generate_session_id()
@@ -223,20 +199,17 @@ class ContainerManager:
# Set SSH environment variable
env_vars["CUBBI_SSH_ENABLED"] = "true" if ssh else "false"
# Pass some environment from host environment to container for local development
keys = [
# Pass API keys from host environment to container for local development
api_keys = [
"OPENAI_API_KEY",
"OPENAI_URL",
"ANTHROPIC_API_KEY",
"ANTHROPIC_AUTH_TOKEN",
"ANTHROPIC_CUSTOM_HEADERS",
"OPENROUTER_API_KEY",
"GOOGLE_API_KEY",
"LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
"LANGFUSE_INIT_PROJECT_SECRET_KEY",
"LANGFUSE_URL",
]
for key in keys:
for key in api_keys:
if key in os.environ and key not in env_vars:
env_vars[key] = os.environ[key]
@@ -458,7 +431,7 @@ class ContainerManager:
)
# Set type-specific information
env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("mcp_type", "sse")
env_vars[f"MCP_{idx}_TYPE"] = "remote"
env_vars[f"MCP_{idx}_NAME"] = mcp_name
# Set environment variables for MCP count if we have any
@@ -532,99 +505,17 @@ class ContainerManager:
"defaults.provider", ""
)
# Handle network-filter if domains are specified
network_filter_container = None
network_mode = None
if domains:
# Check for conflicts
if networks:
print(
"[yellow]Warning: Cannot use --domains with --network. Using domain restrictions only.[/yellow]"
)
networks = []
network_list = [default_network]
# Create network-filter container
network_filter_name = f"cubbi-network-filter-{session_id}"
# Pull network-filter image if needed
network_filter_image = "monadicalsas/network-filter:latest"
try:
self.client.images.get(network_filter_image)
except ImageNotFound:
print(f"Pulling network-filter image {network_filter_image}...")
self.client.images.pull(network_filter_image)
# Create and start network-filter container
print("Creating network-filter container for domain restrictions...")
try:
# First check if a network-filter container already exists with this name
try:
existing = self.client.containers.get(network_filter_name)
print(
f"Removing existing network-filter container {network_filter_name}"
)
existing.stop()
existing.remove()
except DockerException:
pass # Container doesn't exist, which is fine
network_filter_container = self.client.containers.run(
image=network_filter_image,
name=network_filter_name,
hostname=network_filter_name,
detach=True,
environment={"ALLOWED_DOMAINS": ",".join(domains)},
labels={
"cubbi.network-filter": "true",
"cubbi.session.id": session_id,
"cubbi.session.name": session_name,
},
cap_add=["NET_ADMIN"], # Required for iptables
remove=False, # Don't auto-remove on stop
)
# Wait for container to be running
import time
for i in range(10): # Wait up to 10 seconds
network_filter_container.reload()
if network_filter_container.status == "running":
break
time.sleep(1)
else:
raise Exception(
f"Network-filter container failed to start. Status: {network_filter_container.status}"
)
# Use container ID instead of name for network_mode
network_mode = f"container:{network_filter_container.id}"
print(
f"Network restrictions enabled for domains: {', '.join(domains)}"
)
print(f"Using network mode: {network_mode}")
except Exception as e:
print(f"[red]Error creating network-filter container: {e}[/red]")
raise
# Warn about MCP limitations when using network-filter
if mcp_names:
print(
"[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]"
)
# Create container
container_params = {
"image": image.image,
"name": session_name,
"detach": True,
"tty": True,
"stdin_open": True,
"environment": env_vars,
"volumes": session_volumes,
"labels": {
container = self.client.containers.create(
image=image.image,
name=session_name,
hostname=session_name,
detach=True,
tty=True,
stdin_open=True,
environment=env_vars,
volumes=session_volumes,
labels={
"cubbi.session": "true",
"cubbi.session.id": session_id,
"cubbi.session.name": session_name,
@@ -633,32 +524,17 @@ class ContainerManager:
"cubbi.project_name": project_name or "",
"cubbi.mcps": ",".join(mcp_names) if mcp_names else "",
},
"command": container_command, # Set the command
"entrypoint": entrypoint, # Set the entrypoint (might be None)
}
# Add port forwarding if ports are specified
if ports:
container_params["ports"] = {f"{port}/tcp": None for port in ports}
# Use network_mode if domains are specified, otherwise use regular network
if network_mode:
container_params["network_mode"] = network_mode
# Cannot set hostname when using network_mode
else:
container_params["hostname"] = session_name
container_params["network"] = network_list[
0
] # Connect to the first network initially
container = self.client.containers.create(**container_params)
network=network_list[0], # Connect to the first network initially
command=container_command, # Set the command
entrypoint=entrypoint, # Set the entrypoint (might be None)
ports={f"{port}/tcp": None for port in image.ports},
)
# Start container
container.start()
# Connect to additional networks (after the first one in network_list)
# Note: Cannot connect to networks when using network_mode
if len(network_list) > 1 and not network_mode:
if len(network_list) > 1:
for network_name in network_list[1:]:
try:
# Get or create the network
@@ -679,8 +555,6 @@ class ContainerManager:
container.reload()
# Connect directly to each MCP's dedicated network
# Note: Cannot connect to networks when using network_mode
if not network_mode:
for mcp_name in mcp_names:
try:
# Get the dedicated network for this MCP
@@ -706,8 +580,7 @@ class ContainerManager:
print(f"Error connecting session to MCP '{mcp_name}': {e}")
# Connect to additional user-specified networks
# Note: Cannot connect to networks when using network_mode
if networks and not network_mode:
if networks:
for network_name in networks:
# Check if already connected to this network
# NetworkSettings.Networks contains a dict where keys are network names
@@ -766,29 +639,15 @@ class ContainerManager:
except DockerException as e:
print(f"Error creating session: {e}")
# Clean up network-filter container if it was created
if network_filter_container:
try:
network_filter_container.stop()
network_filter_container.remove()
except Exception:
pass
return None
def close_session(self, session_id: str, kill: bool = False) -> bool:
"""Close a Cubbi session
Args:
session_id: The ID of the session to close
kill: If True, forcefully kill the container instead of graceful stop
"""
def close_session(self, session_id: str) -> bool:
"""Close a Cubbi session"""
try:
sessions = self.list_sessions()
for session in sessions:
if session.id == session_id:
return self._close_single_session(session, kill=kill)
return self._close_single_session(session)
print(f"Session '{session_id}' not found")
return False
@@ -865,12 +724,11 @@ class ContainerManager:
print(f"Error connecting to session: {e}")
return False
def _close_single_session(self, session: Session, kill: bool = False) -> bool:
def _close_single_session(self, session: Session) -> bool:
"""Close a single session (helper for parallel processing)
Args:
session: The session to close
kill: If True, forcefully kill the container instead of graceful stop
Returns:
bool: Whether the session was successfully closed
@@ -879,45 +737,21 @@ class ContainerManager:
return False
try:
# First, close the main session container
container = self.client.containers.get(session.container_id)
if kill:
container.kill()
else:
container.stop()
container.remove()
# Check for and close any associated network-filter container
network_filter_name = f"cubbi-network-filter-{session.id}"
try:
network_filter_container = self.client.containers.get(
network_filter_name
)
logger.info(f"Stopping network-filter container {network_filter_name}")
if kill:
network_filter_container.kill()
else:
network_filter_container.stop()
network_filter_container.remove()
except DockerException:
# Network-filter container might not exist, which is fine
pass
self.session_manager.remove_session(session.id)
return True
except DockerException as e:
print(f"Error closing session {session.id}: {e}")
return False
def close_all_sessions(
self, progress_callback=None, kill: bool = False
) -> Tuple[int, bool]:
def close_all_sessions(self, progress_callback=None) -> Tuple[int, bool]:
"""Close all Cubbi sessions with parallel processing and progress reporting
Args:
progress_callback: Optional callback function to report progress
The callback should accept (session_id, status, message)
kill: If True, forcefully kill containers instead of graceful stop
Returns:
tuple: (number of sessions closed, success)
@@ -937,27 +771,8 @@ class ContainerManager:
try:
container = self.client.containers.get(session.container_id)
# Stop and remove container
if kill:
container.kill()
else:
container.stop()
container.remove()
# Check for and close any associated network-filter container
network_filter_name = f"cubbi-network-filter-{session.id}"
try:
network_filter_container = self.client.containers.get(
network_filter_name
)
if kill:
network_filter_container.kill()
else:
network_filter_container.stop()
network_filter_container.remove()
except DockerException:
# Network-filter container might not exist, which is fine
pass
# Remove from session storage
self.session_manager.remove_session(session.id)

View File

@@ -1,277 +0,0 @@
# Aider for Cubbi
This image provides Aider (AI pair programming) in a Cubbi container environment.
## Overview
Aider is an AI pair programming tool that works in your terminal. This Cubbi image integrates Aider with secure API key management, persistent configuration, and support for multiple LLM providers.
## Features
- **Multiple LLM Support**: Works with OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter, and more
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Git Integration**: Automatic commits and git awareness
- **Multi-Language Support**: Works with 100+ programming languages
## Quick Start
### 1. Set up API Key
```bash
# For OpenAI (GPT models)
uv run -m cubbi.cli config set services.openai.api_key "your-openai-key"
# For Anthropic (Claude models)
uv run -m cubbi.cli config set services.anthropic.api_key "your-anthropic-key"
# For DeepSeek (recommended for cost-effectiveness)
uv run -m cubbi.cli config set services.deepseek.api_key "your-deepseek-key"
```
### 2. Run Aider Environment
```bash
# Start Aider container with your project
uv run -m cubbi.cli session create --image aider /path/to/your/project
# Or without a project
uv run -m cubbi.cli session create --image aider
```
### 3. Use Aider
```bash
# Basic usage
aider
# With specific model
aider --model sonnet
# With specific files
aider main.py utils.py
# One-shot request
aider --message "Add error handling to the login function"
```
## Configuration
### Supported API Keys
- `OPENAI_API_KEY`: OpenAI GPT models (GPT-4, GPT-4o, etc.)
- `ANTHROPIC_API_KEY`: Anthropic Claude models (Sonnet, Haiku, etc.)
- `DEEPSEEK_API_KEY`: DeepSeek models (cost-effective option)
- `GEMINI_API_KEY`: Google Gemini models
- `OPENROUTER_API_KEY`: OpenRouter (access to many models)
### Additional Configuration
- `AIDER_MODEL`: Default model to use (e.g., "sonnet", "o3-mini", "deepseek")
- `AIDER_AUTO_COMMITS`: Enable automatic git commits (default: true)
- `AIDER_DARK_MODE`: Enable dark mode interface (default: false)
- `AIDER_API_KEYS`: Additional API keys in format "provider1=key1,provider2=key2"
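For instance, the plugin expands each `provider=key` pair into a `PROVIDER_API_KEY` environment variable, per the `aider_plugin.py` source shown later on this page (the provider names below are illustrative placeholders):

```bash
# Illustrative only: "groq" and "cohere" stand in for any provider name
AIDER_API_KEYS="groq=gsk-placeholder,cohere=co-placeholder"
# -> the plugin writes GROQ_API_KEY and COHERE_API_KEY into ~/.aider/.env
```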
### Network Configuration
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
## Usage Examples
### Basic AI Pair Programming
```bash
# Start Aider with your project
uv run -m cubbi.cli session create --image aider /path/to/project
# Inside the container:
aider # Start interactive session
aider main.py # Work on specific file
aider --message "Add tests" # One-shot request
```
### Model Selection
```bash
# Use Claude Sonnet
aider --model sonnet
# Use GPT-4o
aider --model gpt-4o
# Use DeepSeek (cost-effective)
aider --model deepseek
# Use OpenRouter
aider --model openrouter/anthropic/claude-3.5-sonnet
```
### Advanced Features
```bash
# Work with multiple files
aider src/main.py tests/test_main.py
# Auto-commit changes
aider --auto-commits
# Read-only mode (won't edit files)
aider --read
# Apply a specific change
aider --message "Refactor the database connection code to use connection pooling"
```
### Enterprise/Proxy Setup
```bash
# With proxy
uv run -m cubbi.cli session create --image aider \
--env HTTPS_PROXY="https://proxy.company.com:8080" \
/path/to/project
# With custom model
uv run -m cubbi.cli session create --image aider \
--env AIDER_MODEL="sonnet" \
/path/to/project
```
## Persistent Configuration
The following directories are automatically persisted:
- `~/.aider/`: Aider configuration and chat history
- `~/.cache/aider/`: Model cache and temporary files
Configuration files are maintained across container restarts, ensuring your preferences and chat history are preserved.
## Model Recommendations
### Best Overall Performance
- **Claude 3.5 Sonnet**: Excellent code understanding and generation
- **OpenAI GPT-4o**: Strong performance across languages
- **Gemini 2.5 Pro**: Good balance of quality and speed
### Cost-Effective Options
- **DeepSeek V3**: Very cost-effective, good quality
- **OpenRouter**: Access to multiple models with competitive pricing
### Free Options
- **Gemini 2.5 Pro Exp**: Free tier available
- **OpenRouter**: Some free models available
## File Structure
```
cubbi/images/aider/
├── Dockerfile # Container image definition
├── cubbi_image.yaml # Cubbi image configuration
├── aider_plugin.py # Authentication and setup plugin
└── README.md # This documentation
```
## Authentication Flow
1. **Environment Variables**: API keys passed from Cubbi configuration
2. **Plugin Setup**: `aider_plugin.py` creates environment configuration
3. **Environment File**: Creates `~/.aider/.env` with API keys
4. **Ready**: Aider is ready for use with configured authentication
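For example, with an OpenAI key configured and a default model set, the generated `~/.aider/.env` would contain something like the following (the key value is a placeholder; the exact lines follow from `aider_plugin.py` shown later on this page):

```
# ~/.aider/.env - written by aider_plugin.py with 0600 permissions
OPENAI_API_KEY=sk-placeholder
AIDER_MODEL=sonnet
AIDER_AUTO_COMMITS=true
AIDER_DARK_MODE=false
```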
## Troubleshooting
### Common Issues
**No API Key Found**
```
No API keys found - Aider will run without pre-configuration
```
**Solution**: Set API key in Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.openai.api_key "your-key"
```
**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Check available models for your provider:
```bash
aider --models # List available models
```
**Git Issues**
```
Git repository not found
```
**Solution**: Initialize git in your project or mount a git repository:
```bash
git init
# or
uv run -m cubbi.cli session create --image aider /path/to/git/project
```
**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```
### Debug Mode
```bash
# Check Aider version
aider --version
# List available models
aider --models
# Check configuration
cat ~/.aider/.env
# Verbose output
aider --verbose
```
## Security Considerations
- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Git Integration**: Respects .gitignore and git configurations
- **Code Safety**: Always review changes before accepting
## Advanced Configuration
### Custom Model Configuration
```bash
# Use with custom API endpoint
uv run -m cubbi.cli session create --image aider \
--env OPENAI_API_BASE="https://api.custom-provider.com/v1" \
--env OPENAI_API_KEY="your-key"
```
### Multiple API Keys
```bash
# Configure multiple providers
uv run -m cubbi.cli session create --image aider \
--env OPENAI_API_KEY="openai-key" \
--env ANTHROPIC_API_KEY="anthropic-key" \
--env AIDER_API_KEYS="provider1=key1,provider2=key2"
```
## Support
For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Aider Functionality**: Visit [Aider documentation](https://aider.chat/)
- **Model Configuration**: Check [LLM documentation](https://aider.chat/docs/llms.html)
- **API Keys**: Visit provider documentation (OpenAI, Anthropic, etc.)
## License
This image configuration is provided under the same license as the Cubbi project. Aider is licensed separately under Apache 2.0.

View File

@@ -1,192 +0,0 @@
#!/usr/bin/env python3
"""
Aider Plugin for Cubbi
Handles authentication setup and configuration for Aider AI pair programming
"""
import os
import stat
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
class AiderPlugin(ToolPlugin):
"""Plugin for setting up Aider authentication and configuration"""
@property
def tool_name(self) -> str:
return "aider"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_aider_config_dir(self) -> Path:
"""Get the Aider configuration directory"""
return Path("/home/cubbi/.aider")
def _get_aider_cache_dir(self) -> Path:
"""Get the Aider cache directory"""
return Path("/home/cubbi/.cache/aider")
def _ensure_aider_dirs(self) -> tuple[Path, Path]:
"""Ensure Aider directories exist with correct ownership"""
config_dir = self._get_aider_config_dir()
cache_dir = self._get_aider_cache_dir()
# Create directories
for directory in [config_dir, cache_dir]:
try:
directory.mkdir(mode=0o755, parents=True, exist_ok=True)
self._set_ownership(directory)
except OSError as e:
self.status.log(
f"Failed to create Aider directory {directory}: {e}", "ERROR"
)
return config_dir, cache_dir
def initialize(self) -> bool:
"""Initialize Aider configuration"""
self.status.log("Setting up Aider configuration...")
# Ensure Aider directories exist
config_dir, cache_dir = self._ensure_aider_dirs()
# Set up environment variables for the session
env_vars = self._create_environment_config()
# Create .env file if we have API keys
if env_vars:
env_file = config_dir / ".env"
success = self._write_env_file(env_file, env_vars)
if success:
self.status.log("✅ Aider environment configured successfully")
else:
self.status.log("⚠️ Failed to write Aider environment file", "WARNING")
else:
self.status.log(
" No API keys found - Aider will run without pre-configuration", "INFO"
)
self.status.log(
" You can configure API keys later using environment variables",
"INFO",
)
# Always return True to allow container to start
return True
def _create_environment_config(self) -> Dict[str, str]:
"""Create environment variable configuration for Aider"""
env_vars = {}
# Map environment variables to Aider configuration
api_key_mappings = {
"OPENAI_API_KEY": "OPENAI_API_KEY",
"ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY",
"DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
"GEMINI_API_KEY": "GEMINI_API_KEY",
"OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
}
# Check for OpenAI API base URL
openai_url = os.environ.get("OPENAI_URL")
if openai_url:
env_vars["OPENAI_API_BASE"] = openai_url
self.status.log(f"Set OpenAI API base URL to {openai_url}")
# Check for standard API keys
for env_var, aider_var in api_key_mappings.items():
value = os.environ.get(env_var)
if value:
env_vars[aider_var] = value
provider = env_var.replace("_API_KEY", "").lower()
self.status.log(f"Added {provider} API key")
# Handle additional API keys from AIDER_API_KEYS
additional_keys = os.environ.get("AIDER_API_KEYS")
if additional_keys:
try:
# Parse format: "provider1=key1,provider2=key2"
for pair in additional_keys.split(","):
if "=" in pair:
provider, key = pair.strip().split("=", 1)
env_var_name = f"{provider.upper()}_API_KEY"
env_vars[env_var_name] = key
self.status.log(f"Added {provider} API key from AIDER_API_KEYS")
except Exception as e:
self.status.log(f"Failed to parse AIDER_API_KEYS: {e}", "WARNING")
# Add model configuration
model = os.environ.get("AIDER_MODEL")
if model:
env_vars["AIDER_MODEL"] = model
self.status.log(f"Set default model to {model}")
# Add git configuration
auto_commits = os.environ.get("AIDER_AUTO_COMMITS", "true")
if auto_commits.lower() in ["true", "false"]:
env_vars["AIDER_AUTO_COMMITS"] = auto_commits
# Add dark mode setting
dark_mode = os.environ.get("AIDER_DARK_MODE", "false")
if dark_mode.lower() in ["true", "false"]:
env_vars["AIDER_DARK_MODE"] = dark_mode
# Add proxy settings
for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
value = os.environ.get(proxy_var)
if value:
env_vars[proxy_var] = value
self.status.log(f"Added proxy configuration: {proxy_var}")
return env_vars
def _write_env_file(self, env_file: Path, env_vars: Dict[str, str]) -> bool:
"""Write environment variables to .env file"""
try:
content = "\n".join(f"{key}={value}" for key, value in env_vars.items())
with open(env_file, "w") as f:
f.write(content)
f.write("\n")
# Set ownership and secure file permissions (read/write for owner only)
self._set_ownership(env_file)
os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)
self.status.log(f"Created Aider environment file at {env_file}")
return True
except Exception as e:
self.status.log(f"Failed to write Aider environment file: {e}", "ERROR")
return False
def setup_tool_configuration(self) -> bool:
"""Set up Aider configuration - called by base class"""
# Additional tool configuration can be added here if needed
return True
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Aider with available MCP servers if applicable"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Aider doesn't have native MCP support like Claude Code,
# but we could potentially add custom integrations here
self.status.log(
f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
)
return True

View File

@@ -1,86 +0,0 @@
name: aider
description: Aider AI pair programming environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-aider:latest
init:
pre_command: /cubbi-init.sh
command: /entrypoint.sh
environment:
# OpenAI Configuration
- name: OPENAI_API_KEY
description: OpenAI API key for GPT models
required: false
sensitive: true
# Anthropic Configuration
- name: ANTHROPIC_API_KEY
description: Anthropic API key for Claude models
required: false
sensitive: true
# DeepSeek Configuration
- name: DEEPSEEK_API_KEY
description: DeepSeek API key for DeepSeek models
required: false
sensitive: true
# Gemini Configuration
- name: GEMINI_API_KEY
description: Google Gemini API key
required: false
sensitive: true
# OpenRouter Configuration
- name: OPENROUTER_API_KEY
description: OpenRouter API key for various models
required: false
sensitive: true
# Generic provider API keys
- name: AIDER_API_KEYS
description: Additional API keys in format "provider1=key1,provider2=key2"
required: false
sensitive: true
# Model Configuration
- name: AIDER_MODEL
description: Default model to use (e.g., sonnet, o3-mini, deepseek)
required: false
# Git Configuration
- name: AIDER_AUTO_COMMITS
description: Enable automatic commits (true/false)
required: false
default: "true"
- name: AIDER_DARK_MODE
description: Enable dark mode (true/false)
required: false
default: "false"
# Proxy Configuration
- name: HTTP_PROXY
description: HTTP proxy server URL
required: false
- name: HTTPS_PROXY
description: HTTPS proxy server URL
required: false
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.aider"
target: "/cubbi-config/aider-settings"
type: "directory"
description: "Aider configuration and history"
- source: "/home/cubbi/.cache/aider"
target: "/cubbi-config/aider-cache"
type: "directory"
description: "Aider cache directory"

View File

@@ -1,274 +0,0 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Aider Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""
import subprocess
import sys
import tempfile
import re
def run_command(cmd, description="", check=True):
"""Run a shell command and return result"""
print(f"\n🔍 {description}")
print(f"Running: {cmd}")
try:
result = subprocess.run(
cmd, shell=True, capture_output=True, text=True, check=check
)
if result.stdout:
print("STDOUT:")
print(result.stdout)
if result.stderr:
print("STDERR:")
print(result.stderr)
return result
except subprocess.CalledProcessError as e:
print(f"❌ Command failed with exit code {e.returncode}")
if e.stdout:
print("STDOUT:")
print(e.stdout)
if e.stderr:
print("STDERR:")
print(e.stderr)
if check:
raise
return e
def test_docker_image_exists():
"""Test if the Aider Docker image exists"""
print("\n" + "=" * 60)
print("🧪 Testing Docker Image Existence")
print("=" * 60)
result = run_command(
"docker images monadical/cubbi-aider:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
"Checking if Aider Docker image exists",
)
if "monadical/cubbi-aider" in result.stdout:
print("✅ Aider Docker image exists")
else:
print("❌ Aider Docker image not found")
assert False, "Aider Docker image not found"
def test_aider_version():
"""Test basic Aider functionality in container"""
print("\n" + "=" * 60)
print("🧪 Testing Aider Version")
print("=" * 60)
result = run_command(
"docker run --rm monadical/cubbi-aider:latest bash -c 'aider --version'",
"Testing Aider version command",
)
assert (
"aider" in result.stdout and result.returncode == 0
), "Aider version command failed"
print("✅ Aider version command works")
def test_api_key_configuration():
"""Test API key configuration and environment setup"""
print("\n" + "=" * 60)
print("🧪 Testing API Key Configuration")
print("=" * 60)
# Test with multiple API keys
test_keys = {
"OPENAI_API_KEY": "test-openai-key",
"ANTHROPIC_API_KEY": "test-anthropic-key",
"DEEPSEEK_API_KEY": "test-deepseek-key",
"GEMINI_API_KEY": "test-gemini-key",
"OPENROUTER_API_KEY": "test-openrouter-key",
}
env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])
result = run_command(
f"docker run --rm {env_flags} monadical/cubbi-aider:latest bash -c 'cat ~/.aider/.env'",
"Testing API key configuration in .env file",
)
success = True
for key, value in test_keys.items():
if f"{key}={value}" not in result.stdout:
print(f"{key} not found in .env file")
success = False
else:
print(f"{key} configured correctly")
# Test default configuration values
if "AIDER_AUTO_COMMITS=true" in result.stdout:
print("✅ Default AIDER_AUTO_COMMITS configured")
else:
print("❌ Default AIDER_AUTO_COMMITS not found")
success = False
if "AIDER_DARK_MODE=false" in result.stdout:
print("✅ Default AIDER_DARK_MODE configured")
else:
print("❌ Default AIDER_DARK_MODE not found")
success = False
assert success, "API key configuration test failed"
def test_cubbi_cli_integration():
"""Test Cubbi CLI integration"""
print("\n" + "=" * 60)
print("🧪 Testing Cubbi CLI Integration")
print("=" * 60)
# Test image listing
result = run_command(
"uv run -m cubbi.cli image list | grep aider",
"Testing Cubbi CLI can see Aider image",
)
if "aider" in result.stdout and "Aider AI pair" in result.stdout:
print("✅ Cubbi CLI can list Aider image")
else:
print("❌ Cubbi CLI cannot see Aider image")
assert False, "Cubbi CLI cannot see Aider image"
# Test session creation with test command
with tempfile.TemporaryDirectory() as temp_dir:
test_env = {
"OPENAI_API_KEY": "test-session-key",
"ANTHROPIC_API_KEY": "test-anthropic-session-key",
}
env_vars = " ".join([f"{k}={v}" for k, v in test_env.items()])
result = run_command(
f"{env_vars} uv run -m cubbi.cli session create --image aider {temp_dir} --no-shell --run \"aider --version && echo 'Cubbi CLI test successful'\"",
"Testing Cubbi CLI session creation with Aider",
)
assert (
result.returncode == 0
and re.search(r"aider \d+\.\d+\.\d+", result.stdout)
and "Cubbi CLI test successful" in result.stdout
), "Cubbi CLI session creation failed"
print("✅ Cubbi CLI session creation works")
def test_persistent_configuration():
"""Test persistent configuration directories"""
print("\n" + "=" * 60)
print("🧪 Testing Persistent Configuration")
print("=" * 60)
# Test that persistent directories are created
result = run_command(
"docker run --rm -e OPENAI_API_KEY='test-key' monadical/cubbi-aider:latest bash -c 'ls -la /home/cubbi/.aider/ && ls -la /home/cubbi/.cache/'",
"Testing persistent configuration directories",
)
success = True
if ".env" in result.stdout:
print("✅ .env file created in ~/.aider/")
else:
print("❌ .env file not found in ~/.aider/")
success = False
if "aider" in result.stdout:
print("✅ ~/.cache/aider directory exists")
else:
print("❌ ~/.cache/aider directory not found")
success = False
assert success, "API key configuration test failed"
def test_plugin_functionality():
"""Test the Aider plugin functionality"""
print("\n" + "=" * 60)
print("🧪 Testing Plugin Functionality")
print("=" * 60)
# Test plugin without API keys (should still work)
result = run_command(
"docker run --rm monadical/cubbi-aider:latest bash -c 'echo \"Plugin test without API keys\"'",
"Testing plugin functionality without API keys",
)
if "No API keys found - Aider will run without pre-configuration" in result.stdout:
print("✅ Plugin handles missing API keys gracefully")
else:
# This might be in stderr or initialization might have changed
print(" Plugin API key handling test - check output above")
# Test plugin with API keys
result = run_command(
"docker run --rm -e OPENAI_API_KEY='test-plugin-key' monadical/cubbi-aider:latest bash -c 'echo \"Plugin test with API keys\"'",
"Testing plugin functionality with API keys",
)
if "Aider environment configured successfully" in result.stdout:
print("✅ Plugin configures environment successfully")
else:
print("❌ Plugin environment configuration failed")
assert False, "Plugin environment configuration failed"
def main():
"""Run all tests"""
print("🚀 Starting Aider Cubbi Image Tests")
print("=" * 60)
tests = [
("Docker Image Exists", test_docker_image_exists),
("Aider Version", test_aider_version),
("API Key Configuration", test_api_key_configuration),
("Persistent Configuration", test_persistent_configuration),
("Plugin Functionality", test_plugin_functionality),
("Cubbi CLI Integration", test_cubbi_cli_integration),
]
results = {}
for test_name, test_func in tests:
try:
test_func()
results[test_name] = True
except Exception as e:
print(f"❌ Test '{test_name}' failed with exception: {e}")
results[test_name] = False
# Print summary
print("\n" + "=" * 60)
print("📊 TEST SUMMARY")
print("=" * 60)
total_tests = len(tests)
passed_tests = sum(1 for result in results.values() if result)
failed_tests = total_tests - passed_tests
for test_name, result in results.items():
status = "✅ PASS" if result else "❌ FAIL"
print(f"{status} {test_name}")
print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")
if failed_tests == 0:
print("\n🎉 All tests passed! Aider image is ready for use.")
return 0
else:
print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,82 +0,0 @@
FROM python:3.12-slim
LABEL maintainer="team@monadical.com"
LABEL description="Claude Code for Cubbi"
# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
sudo \
passwd \
bash \
curl \
bzip2 \
iputils-ping \
iproute2 \
libxcb1 \
libdbus-1-3 \
nano \
tmux \
git-core \
ripgrep \
openssh-client \
vim \
&& rm -rf /var/lib/apt/lists/*
# Install uv (Python package manager)
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
sh install.sh && \
mv /root/.local/bin/uv /usr/local/bin/uv && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install Node.js (for Claude Code NPM package)
ARG NODE_VERSION=v22.16.0
RUN mkdir -p /opt/node && \
ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
NODE_ARCH=linux-x64; \
elif [ "$ARCH" = "aarch64" ]; then \
NODE_ARCH=linux-arm64; \
else \
echo "Unsupported architecture"; exit 1; \
fi && \
curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
rm node.tar.gz
ENV PATH="/opt/node/bin:$PATH"
# Install Claude Code globally
RUN npm install -g @anthropic-ai/claude-code
# Create app directory
WORKDIR /app
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY claudecode_plugin.py /cubbi/claudecode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
# Add Node.js to PATH in bashrc and init status check
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help
# Set WORKDIR to /app
WORKDIR /app
ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

View File

@@ -1,222 +0,0 @@
# Claude Code for Cubbi
This image provides Claude Code (Anthropic's official CLI for Claude) in a Cubbi container environment.
## Overview
Claude Code is an interactive CLI tool that helps with software engineering tasks. This Cubbi image integrates Claude Code with secure API key management, persistent configuration, and enterprise features.
## Features
- **Claude Code CLI**: Full access to Claude's coding capabilities
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and cache preserved across container restarts
- **Enterprise Support**: Bedrock and Vertex AI integration
- **Network Support**: Proxy configuration for corporate environments
- **Tool Permissions**: Pre-configured permissions for all Claude Code tools
## Quick Start
### 1. Set up API Key
```bash
# Set your Anthropic API key in Cubbi configuration
cubbi config set services.anthropic.api_key "your-api-key-here"
```
### 2. Run Claude Code Environment
```bash
# Start Claude Code container
cubbi run claudecode
# Execute Claude Code commands
cubbi exec claudecode "claude 'help me write a Python function'"
# Start interactive session
cubbi exec claudecode "claude"
```
## Configuration
### Required Environment Variables
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
### Optional Environment Variables
- `ANTHROPIC_AUTH_TOKEN`: Custom authorization token for enterprise deployments
- `ANTHROPIC_CUSTOM_HEADERS`: Additional HTTP headers (JSON format)
- `CLAUDE_CODE_USE_BEDROCK`: Set to "true" to use Amazon Bedrock
- `CLAUDE_CODE_USE_VERTEX`: Set to "true" to use Google Vertex AI
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
- `DISABLE_TELEMETRY`: Set to "true" to disable telemetry
### Advanced Configuration
```bash
# Enterprise deployment with Bedrock
cubbi config set environment.claude_code_use_bedrock true
cubbi run claudecode
# With custom proxy
cubbi config set network.https_proxy "https://proxy.company.com:8080"
cubbi run claudecode
# Disable telemetry
cubbi config set environment.disable_telemetry true
cubbi run claudecode
```
## Usage Examples
### Basic Usage
```bash
# Get help
cubbi exec claudecode "claude --help"
# One-time task
cubbi exec claudecode "claude 'write a unit test for this function'"
# Interactive mode
cubbi exec claudecode "claude"
```
### Working with Projects
```bash
# Start Claude Code in your project directory
cubbi run claudecode --mount /path/to/your/project:/app
cubbi exec claudecode "cd /app && claude"
# Create a commit
cubbi exec claudecode "cd /app && claude commit"
```
### Advanced Features
```bash
# Run with specific model configuration
cubbi exec claudecode "claude -m claude-3-5-sonnet-20241022 'analyze this code'"
# Use with plan mode
cubbi exec claudecode "claude -p 'refactor this function'"
```
## Persistent Configuration
The following directories are automatically persisted:
- `~/.claude/`: Claude Code settings and configuration
- `~/.cache/claude/`: Claude Code cache and temporary files
Configuration files are maintained across container restarts, ensuring your settings and preferences are preserved.
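A quick way to verify this (a sketch; the marker file name is illustrative):
```bash
# Write a marker into the persisted directory, restart, and check it survived
cubbi exec claudecode "date > ~/.claude/persist-check.txt"
cubbi stop claudecode
cubbi run claudecode
cubbi exec claudecode "cat ~/.claude/persist-check.txt"
```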
## File Structure
```
cubbi/images/claudecode/
├── Dockerfile # Container image definition
├── cubbi_image.yaml # Cubbi image configuration
├── claudecode_plugin.py # Authentication and setup plugin
├── cubbi_init.py # Initialization script (shared)
├── init-status.sh # Status check script (shared)
└── README.md # This documentation
```
## Authentication Flow
1. **Environment Variables**: API key passed from Cubbi configuration
2. **Plugin Setup**: `claudecode_plugin.py` creates `~/.claude/settings.json`
3. **Verification**: Plugin verifies Claude Code installation and configuration
4. **Ready**: Claude Code is ready for use with configured authentication
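The result of this flow can be inspected directly (key names below follow `claudecode_plugin.py`; values are placeholders):
```bash
cubbi exec claudecode "cat ~/.claude/settings.json"
# Expect keys such as:
#   apiKey      - from ANTHROPIC_API_KEY
#   authToken   - from ANTHROPIC_AUTH_TOKEN, if set
#   provider    - "bedrock" or "vertex" when the enterprise flags are "true"
#   proxy       - http/https entries from HTTP_PROXY / HTTPS_PROXY
#   telemetry   - {"enabled": false} when DISABLE_TELEMETRY=true
#   permissions - tool permissions (all allowed by default)
```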
## Troubleshooting
### Common Issues
**API Key Not Set**
```
⚠️ No authentication configuration found
Please set ANTHROPIC_API_KEY environment variable
```
**Solution**: Set API key in Cubbi configuration:
```bash
cubbi config set services.anthropic.api_key "your-api-key-here"
```
**Claude Code Not Found**
```
❌ Claude Code not properly installed
```
**Solution**: Rebuild the container image:
```bash
docker build -t cubbi-claudecode:latest cubbi/images/claudecode/
```
**Network Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
cubbi config set network.https_proxy "your-proxy-url"
```
### Debug Mode
Enable verbose output for debugging:
```bash
# Check configuration
cubbi exec claudecode "cat ~/.claude/settings.json"
# Verify installation
cubbi exec claudecode "claude --version"
cubbi exec claudecode "which claude"
cubbi exec claudecode "node --version"
```
## Security Considerations
- **API Keys**: Stored securely with 0o600 permissions
- **Configuration**: Settings files have restricted access
- **Environment**: Isolated container environment
- **Telemetry**: Can be disabled for privacy
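These properties can be checked from a running session (a sketch using GNU coreutils `stat`, which the base image provides):
```bash
cubbi exec claudecode "stat -c '%a %U %n' ~/.claude/settings.json"
# Expected: mode 600, owned by the cubbi user
```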
## Development
### Building the Image
```bash
# Build locally
docker build -t cubbi-claudecode:test cubbi/images/claudecode/
# Test basic functionality
docker run --rm -it \
-e ANTHROPIC_API_KEY="your-api-key" \
cubbi-claudecode:test \
bash -c "claude --version"
```
### Testing
```bash
# Run through Cubbi
cubbi run claudecode --name test-claude
cubbi exec test-claude "claude --version"
cubbi stop test-claude
```
## Support
For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Claude Code**: Visit [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code)
- **API Keys**: Visit [Anthropic Console](https://console.anthropic.com/)
## License
This image configuration is provided under the same license as the Cubbi project. Claude Code is licensed separately by Anthropic.

View File

@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
Claude Code Plugin for Cubbi
Handles authentication setup and configuration for Claude Code
"""
import json
import os
import stat
from pathlib import Path
from typing import Any, Dict, Optional
from cubbi_init import ToolPlugin
# API key mappings from environment variables to Claude Code configuration
API_KEY_MAPPINGS = {
"ANTHROPIC_API_KEY": "api_key",
"ANTHROPIC_AUTH_TOKEN": "auth_token",
"ANTHROPIC_CUSTOM_HEADERS": "custom_headers",
}
# Enterprise integration environment variables
ENTERPRISE_MAPPINGS = {
"CLAUDE_CODE_USE_BEDROCK": "use_bedrock",
"CLAUDE_CODE_USE_VERTEX": "use_vertex",
"HTTP_PROXY": "http_proxy",
"HTTPS_PROXY": "https_proxy",
"DISABLE_TELEMETRY": "disable_telemetry",
}
class ClaudeCodePlugin(ToolPlugin):
"""Plugin for setting up Claude Code authentication and configuration"""
@property
def tool_name(self) -> str:
return "claudecode"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_claude_dir(self) -> Path:
"""Get the Claude Code configuration directory"""
return Path("/home/cubbi/.claude")
def _ensure_claude_dir(self) -> Path:
"""Ensure Claude directory exists with correct ownership"""
claude_dir = self._get_claude_dir()
try:
claude_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
self._set_ownership(claude_dir)
except OSError as e:
self.status.log(
f"Failed to create Claude directory {claude_dir}: {e}", "ERROR"
)
return claude_dir
def initialize(self) -> bool:
"""Initialize Claude Code configuration"""
self.status.log("Setting up Claude Code authentication...")
# Ensure Claude directory exists
claude_dir = self._ensure_claude_dir()
# Create settings configuration
settings = self._create_settings()
if settings:
settings_file = claude_dir / "settings.json"
success = self._write_settings(settings_file, settings)
if success:
self.status.log("✅ Claude Code authentication configured successfully")
return True
else:
return False
else:
self.status.log("⚠️ No authentication configuration found", "WARNING")
self.status.log(
" Please set ANTHROPIC_API_KEY environment variable", "WARNING"
)
self.status.log(" Claude Code will run without authentication", "INFO")
# Return True to allow container to start without API key
# Users can still use Claude Code with their own authentication methods
return True
def _create_settings(self) -> Optional[Dict]:
"""Create Claude Code settings configuration"""
settings = {}
# Core authentication
api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
return None
# Basic authentication setup
settings["apiKey"] = api_key
# Custom authorization token (optional)
auth_token = os.environ.get("ANTHROPIC_AUTH_TOKEN")
if auth_token:
settings["authToken"] = auth_token
# Custom headers (optional)
custom_headers = os.environ.get("ANTHROPIC_CUSTOM_HEADERS")
if custom_headers:
try:
# Expect JSON string format
settings["customHeaders"] = json.loads(custom_headers)
except json.JSONDecodeError:
self.status.log(
"⚠️ Invalid ANTHROPIC_CUSTOM_HEADERS format, skipping", "WARNING"
)
# Enterprise integration settings
if os.environ.get("CLAUDE_CODE_USE_BEDROCK") == "true":
settings["provider"] = "bedrock"
if os.environ.get("CLAUDE_CODE_USE_VERTEX") == "true":
settings["provider"] = "vertex"
# Network proxy settings
http_proxy = os.environ.get("HTTP_PROXY")
https_proxy = os.environ.get("HTTPS_PROXY")
if http_proxy or https_proxy:
settings["proxy"] = {}
if http_proxy:
settings["proxy"]["http"] = http_proxy
if https_proxy:
settings["proxy"]["https"] = https_proxy
# Telemetry settings
if os.environ.get("DISABLE_TELEMETRY") == "true":
settings["telemetry"] = {"enabled": False}
# Tool permissions (allow all by default in Cubbi environment)
settings["permissions"] = {
"tools": {
"read": {"allowed": True},
"write": {"allowed": True},
"edit": {"allowed": True},
"bash": {"allowed": True},
"webfetch": {"allowed": True},
"websearch": {"allowed": True},
}
}
return settings
def _write_settings(self, settings_file: Path, settings: Dict) -> bool:
"""Write settings to Claude Code configuration file"""
try:
# Write settings with secure permissions
with open(settings_file, "w") as f:
json.dump(settings, f, indent=2)
# Set ownership and secure file permissions (read/write for owner only)
self._set_ownership(settings_file)
os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)
self.status.log(f"Created Claude Code settings at {settings_file}")
return True
except Exception as e:
self.status.log(f"Failed to write Claude Code settings: {e}", "ERROR")
return False
def setup_tool_configuration(self) -> bool:
"""Set up Claude Code configuration - called by base class"""
# Additional tool configuration can be added here if needed
return True
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Claude Code with available MCP servers"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Claude Code has built-in MCP support, so we can potentially
# configure MCP servers in the settings if needed
self.status.log("MCP server integration available for Claude Code")
return True

View File

@@ -1,66 +0,0 @@
name: claudecode
description: Claude Code AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-claudecode:latest
init:
pre_command: /cubbi-init.sh
command: /entrypoint.sh
environment:
# Core Anthropic Authentication
- name: ANTHROPIC_API_KEY
description: Anthropic API key for Claude
required: true
sensitive: true
# Optional Enterprise Integration
- name: ANTHROPIC_AUTH_TOKEN
description: Custom authorization token for Claude
required: false
sensitive: true
- name: ANTHROPIC_CUSTOM_HEADERS
description: Additional HTTP headers for Claude API requests
required: false
sensitive: true
# Enterprise Deployment Options
- name: CLAUDE_CODE_USE_BEDROCK
description: Use Amazon Bedrock instead of direct API
required: false
- name: CLAUDE_CODE_USE_VERTEX
description: Use Google Vertex AI instead of direct API
required: false
# Network Configuration
- name: HTTP_PROXY
description: HTTP proxy server URL
required: false
- name: HTTPS_PROXY
description: HTTPS proxy server URL
required: false
# Optional Telemetry Control
- name: DISABLE_TELEMETRY
description: Disable Claude Code telemetry
required: false
default: "false"
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.claude"
target: "/cubbi-config/claude-settings"
type: "directory"
description: "Claude Code settings and configuration"
- source: "/home/cubbi/.cache/claude"
target: "/cubbi-config/claude-cache"
type: "directory"
description: "Claude Code cache directory"

View File

@@ -1,251 +0,0 @@
#!/usr/bin/env python3
"""
Automated test suite for Claude Code Cubbi integration
"""
import subprocess
def run_test(description: str, command: list, timeout: int = 30) -> bool:
"""Run a test command and return success status"""
print(f"🧪 Testing: {description}")
try:
result = subprocess.run(
command, capture_output=True, text=True, timeout=timeout
)
if result.returncode == 0:
print(" ✅ PASS")
return True
else:
print(f" ❌ FAIL: {result.stderr}")
if result.stdout:
print(f" 📋 stdout: {result.stdout}")
return False
except subprocess.TimeoutExpired:
print(f" ⏰ TIMEOUT: Command exceeded {timeout}s")
return False
except Exception as e:
print(f" ❌ ERROR: {e}")
return False
def test_suite():
"""Run complete test suite"""
tests_passed = 0
total_tests = 0
print("🚀 Starting Claude Code Cubbi Integration Test Suite")
print("=" * 60)
# Test 1: Build image
total_tests += 1
if run_test(
"Build Claude Code image",
["docker", "build", "-t", "cubbi-claudecode:test", "cubbi/images/claudecode/"],
timeout=180,
):
tests_passed += 1
# Test 2: Tag image for Cubbi
total_tests += 1
if run_test(
"Tag image for Cubbi",
["docker", "tag", "cubbi-claudecode:test", "monadical/cubbi-claudecode:latest"],
):
tests_passed += 1
# Test 3: Basic container startup
total_tests += 1
if run_test(
"Container startup with test API key",
[
"docker",
"run",
"--rm",
"-e",
"ANTHROPIC_API_KEY=test-key",
"cubbi-claudecode:test",
"bash",
"-c",
"claude --version",
],
):
tests_passed += 1
# Test 4: Cubbi image list
total_tests += 1
if run_test(
"Cubbi image list includes claudecode",
["uv", "run", "-m", "cubbi.cli", "image", "list"],
):
tests_passed += 1
# Test 5: Cubbi session creation
total_tests += 1
session_result = subprocess.run(
[
"uv",
"run",
"-m",
"cubbi.cli",
"session",
"create",
"--image",
"claudecode",
"--name",
"test-automation",
"--no-connect",
"--env",
"ANTHROPIC_API_KEY=test-key",
"--run",
"claude --version",
],
capture_output=True,
text=True,
timeout=60,
)
if session_result.returncode == 0:
print("🧪 Testing: Cubbi session creation")
print(" ✅ PASS")
tests_passed += 1
# Extract session ID for cleanup
session_id = None
for line in session_result.stdout.split("\n"):
if "Session ID:" in line:
session_id = line.split("Session ID: ")[1].strip()
break
if session_id:
# Test 6: Session cleanup
total_tests += 1
if run_test(
"Clean up test session",
["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
):
tests_passed += 1
else:
print("🧪 Testing: Clean up test session")
print(" ⚠️ SKIP: Could not extract session ID")
total_tests += 1
else:
print("🧪 Testing: Cubbi session creation")
print(f" ❌ FAIL: {session_result.stderr}")
total_tests += 1  # the cleanup test also counts as failed; this test was already counted above
# Test 7: Session without API key
total_tests += 1
no_key_result = subprocess.run(
[
"uv",
"run",
"-m",
"cubbi.cli",
"session",
"create",
"--image",
"claudecode",
"--name",
"test-no-key",
"--no-connect",
"--run",
"claude --version",
],
capture_output=True,
text=True,
timeout=60,
)
if no_key_result.returncode == 0:
print("🧪 Testing: Session without API key")
print(" ✅ PASS")
tests_passed += 1
# Extract session ID and close
session_id = None
for line in no_key_result.stdout.split("\n"):
if "Session ID:" in line:
session_id = line.split("Session ID: ")[1].strip()
break
if session_id:
subprocess.run(
["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
capture_output=True,
timeout=30,
)
else:
print("🧪 Testing: Session without API key")
print(f" ❌ FAIL: {no_key_result.stderr}")
# Test 8: Persistent configuration test
total_tests += 1
persist_result = subprocess.run(
[
"uv",
"run",
"-m",
"cubbi.cli",
"session",
"create",
"--image",
"claudecode",
"--name",
"test-persist-auto",
"--project",
"test-automation",
"--no-connect",
"--env",
"ANTHROPIC_API_KEY=test-key",
"--run",
"echo 'automation test' > ~/.claude/automation.txt && cat ~/.claude/automation.txt",
],
capture_output=True,
text=True,
timeout=60,
)
if persist_result.returncode == 0:
print("🧪 Testing: Persistent configuration")
print(" ✅ PASS")
tests_passed += 1
# Extract session ID and close
session_id = None
for line in persist_result.stdout.split("\n"):
if "Session ID:" in line:
session_id = line.split("Session ID: ")[1].strip()
break
if session_id:
subprocess.run(
["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
capture_output=True,
timeout=30,
)
else:
print("🧪 Testing: Persistent configuration")
print(f" ❌ FAIL: {persist_result.stderr}")
print("=" * 60)
print(f"📊 Test Results: {tests_passed}/{total_tests} tests passed")
if tests_passed == total_tests:
print("🎉 All tests passed! Claude Code integration is working correctly.")
return True
else:
print(
f"{total_tests - tests_passed} test(s) failed. Please check the output above."
)
return False
def main():
"""Main test entry point"""
success = test_suite()
exit(0 if success else 1)
if __name__ == "__main__":
main()

View File

@@ -1,62 +0,0 @@
FROM python:3.12-slim
LABEL maintainer="team@monadical.com"
LABEL description="Crush AI coding assistant for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
sudo \
passwd \
bash \
curl \
bzip2 \
iputils-ping \
iproute2 \
libxcb1 \
libdbus-1-3 \
nano \
tmux \
git-core \
ripgrep \
openssh-client \
vim \
nodejs \
npm \
&& rm -rf /var/lib/apt/lists/*
# Install deps
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
sh install.sh && \
mv /root/.local/bin/uv /usr/local/bin/uv && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install crush via npm
RUN npm install -g @charmland/crush
# Create app directory
WORKDIR /app
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY crush_plugin.py /cubbi/crush_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help
# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
WORKDIR /app
ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

View File

@@ -1,77 +0,0 @@
# Crush Image for Cubbi
This image provides a containerized environment for running [Crush](https://github.com/charmbracelet/crush), a terminal-based AI coding assistant.
## Features
- Pre-configured environment for Crush AI coding assistant
- Multi-model support (OpenAI, Anthropic, Groq)
- JSON-based configuration
- MCP server integration support
- Session preservation across runs
## Environment Variables
### AI Provider Configuration
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `OPENAI_API_KEY` | OpenAI API key for crush | No | - |
| `ANTHROPIC_API_KEY` | Anthropic API key for crush | No | - |
| `GROQ_API_KEY` | Groq API key for crush | No | - |
| `OPENAI_URL` | Custom OpenAI-compatible API URL | No | - |
| `CUBBI_MODEL` | AI model to use with crush | No | - |
| `CUBBI_PROVIDER` | AI provider to use with crush | No | - |
### Cubbi Core Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |
### MCP Integration Variables
| Variable | Description | Required |
|----------|-------------|----------|
| `MCP_COUNT` | Number of available MCP servers | No |
| `MCP_NAMES` | JSON array of MCP server names | No |
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
| `MCP_{idx}_TYPE` | Type of MCP server | No |
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |
## Build
To build this image:
```bash
cd cubbi/images/crush
docker build -t monadical/cubbi-crush:latest .
```
## Usage
```bash
# Create a new session with this image
cubbix -i crush
# Run crush with specific provider
cubbix -i crush -e CUBBI_PROVIDER=openai -e CUBBI_MODEL=gpt-4
# Test crush installation
cubbix -i crush --no-shell --run "crush --help"
```
## Configuration
Crush uses JSON configuration stored in `/home/cubbi/.config/crush/config.json`. The plugin automatically configures:
- AI providers based on available API keys
- Default models and providers from environment variables
- Session preservation settings
- MCP server integrations
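For reference, an MCP entry written by the plugin looks roughly like this (a sketch; server names, hosts, and URLs come from the `MCP_*` variables described above):
```bash
cat /home/cubbi/.config/crush/config.json
# A local MCP server is registered as:
#   {"mcp_servers": {"<name>": {"uri": "http://<host>:8080/sse", "type": "sse", "enabled": true}}}
# Remote servers use their MCP_{idx}_URL as the "uri" instead.
```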

View File

@@ -1,177 +0,0 @@
#!/usr/bin/env python3
"""
Crush-specific plugin for Cubbi initialization
"""
import json
import os
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
class CrushPlugin(ToolPlugin):
"""Plugin for Crush AI coding assistant initialization"""
@property
def tool_name(self) -> str:
return "crush"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_user_config_path(self) -> Path:
"""Get the correct config path for the cubbi user"""
return Path("/home/cubbi/.config/crush")
def _ensure_user_config_dir(self) -> Path:
"""Ensure config directory exists with correct ownership"""
config_dir = self._get_user_config_path()
# Create the full directory path
try:
config_dir.mkdir(parents=True, exist_ok=True)
except FileExistsError:
# Directory already exists, which is fine
pass
except OSError as e:
self.status.log(
f"Failed to create config directory {config_dir}: {e}", "ERROR"
)
return config_dir
# Set ownership for the directories
config_parent = config_dir.parent
if config_parent.exists():
self._set_ownership(config_parent)
if config_dir.exists():
self._set_ownership(config_dir)
return config_dir
def initialize(self) -> bool:
"""Initialize Crush configuration"""
self._ensure_user_config_dir()
return self.setup_tool_configuration()
def setup_tool_configuration(self) -> bool:
"""Set up Crush configuration file"""
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.json"
# Load or initialize configuration
if config_file.exists():
try:
with config_file.open("r") as f:
config_data = json.load(f)
except (json.JSONDecodeError, OSError) as e:
self.status.log(f"Failed to load existing config: {e}", "WARNING")
config_data = {}
else:
config_data = {}
# Set default model and provider if specified
# cubbi_model = os.environ.get("CUBBI_MODEL")
# cubbi_provider = os.environ.get("CUBBI_PROVIDER")
# XXX: the configuration file format isn't fully understood yet; TBD later.
try:
with config_file.open("w") as f:
json.dump(config_data, f, indent=2)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
self.status.log(f"Updated Crush configuration at {config_file}")
return True
except Exception as e:
self.status.log(f"Failed to write Crush configuration: {e}", "ERROR")
return False
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Crush with available MCP servers"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.json"
if config_file.exists():
try:
with config_file.open("r") as f:
config_data = json.load(f)
except (json.JSONDecodeError, OSError) as e:
self.status.log(f"Failed to load existing config: {e}", "WARNING")
config_data = {}
else:
config_data = {}
if "mcp_servers" not in config_data:
config_data["mcp_servers"] = {}
for server in mcp_config["servers"]:
server_name = server["name"]
server_host = server["host"]
server_url = server["url"]
if server_name and server_host:
mcp_url = f"http://{server_host}:8080/sse"
self.status.log(f"Adding MCP server: {server_name} - {mcp_url}")
config_data["mcp_servers"][server_name] = {
"uri": mcp_url,
"type": server.get("type", "sse"),
"enabled": True,
}
elif server_name and server_url:
self.status.log(
f"Adding remote MCP server: {server_name} - {server_url}"
)
config_data["mcp_servers"][server_name] = {
"uri": server_url,
"type": server.get("type", "sse"),
"enabled": True,
}
try:
with config_file.open("w") as f:
json.dump(config_data, f, indent=2)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
return True
except Exception as e:
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
return False

View File

@@ -1,51 +0,0 @@
name: crush
description: Crush AI coding assistant environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-crush:latest
init:
pre_command: /cubbi-init.sh
command: /entrypoint.sh
environment:
- name: OPENAI_API_KEY
description: OpenAI API key for crush
required: false
sensitive: true
- name: ANTHROPIC_API_KEY
description: Anthropic API key for crush
required: false
sensitive: true
- name: GROQ_API_KEY
description: Groq API key for crush
required: false
sensitive: true
- name: OPENAI_URL
description: Custom OpenAI-compatible API URL
required: false
- name: CUBBI_MODEL
description: AI model to use with crush
required: false
- name: CUBBI_PROVIDER
description: AI provider to use with crush
required: false
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.config/crush"
target: "/cubbi-config/crush-config"
type: "directory"
description: "Crush configuration directory"
- source: "/app/.crush"
target: "/cubbi-config/crush-app"
type: "directory"
description: "Crush application data and sessions"

View File

@@ -222,16 +222,6 @@ class UserManager:
):
return False
# Create the sudoers file entry for the 'cubbi' user
sudoers_command = [
"sh",
"-c",
"echo 'cubbi ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cubbi && chmod 0440 /etc/sudoers.d/cubbi",
]
if not self._run_command(sudoers_command):
self.status.log("Failed to create sudoers entry for cubbi", "ERROR")
return False
return True

View File

@@ -1,12 +1,11 @@
FROM python:3.12-slim
FROM node:20-slim
LABEL maintainer="team@monadical.com"
LABEL description="Aider AI pair programming for Cubbi"
LABEL description="Google Gemini CLI for Cubbi"
# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
sudo \
passwd \
bash \
curl \
@@ -21,9 +20,11 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
ripgrep \
openssh-client \
vim \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Install uv (Python package manager)
# Install uv (Python package manager) for cubbi_init.py compatibility
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
sh install.sh && \
@@ -31,26 +32,25 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install Aider using pip in system Python (more compatible with user switching)
RUN python -m pip install aider-chat
# Install Gemini CLI globally
RUN npm install -g @google/gemini-cli
# Make sure aider is in PATH
ENV PATH="/root/.local/bin:$PATH"
# Verify installation
RUN gemini --version
# Create app directory
WORKDIR /app
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY aider_plugin.py /cubbi/aider_plugin.py
COPY gemini_cli_plugin.py /cubbi/gemini_cli_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
# Add aider to PATH in bashrc and init status check
RUN echo 'PATH="/root/.local/bin:$PATH"' >> /etc/bash.bashrc
# Add init status check to bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment

View File

@@ -0,0 +1,339 @@
# Google Gemini CLI for Cubbi
This image provides Google Gemini CLI in a Cubbi container environment.
## Overview
Google Gemini CLI is an AI-powered development tool that allows you to query and edit large codebases, generate applications from PDFs/sketches, automate operational tasks, and integrate with media generation tools using Google's Gemini models.
## Features
- **Advanced AI Models**: Access to Gemini 1.5 Pro, Flash, and other Google AI models
- **Codebase Analysis**: Query and edit large codebases intelligently
- **Multi-modal Support**: Work with text, images, PDFs, and sketches
- **Google Search Grounding**: Ground queries using Google Search for up-to-date information
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Project Integration**: Seamless integration with existing projects
## Quick Start
### 1. Set up API Key
```bash
# For Google AI (recommended)
uv run -m cubbi.cli config set services.google.api_key "your-gemini-api-key"
# Alternative using GEMINI_API_KEY
uv run -m cubbi.cli config set services.gemini.api_key "your-gemini-api-key"
```
Get your API key from [Google AI Studio](https://aistudio.google.com/apikey).
### 2. Run Gemini CLI Environment
```bash
# Start Gemini CLI container with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/your/project
# Or without a project
uv run -m cubbi.cli session create --image gemini-cli
```
### 3. Use Gemini CLI
```bash
# Basic usage
gemini
# Interactive mode with specific query
gemini
> Write me a Discord bot that answers questions using a FAQ.md file
# Analyze existing project
gemini
> Give me a summary of all changes that went in yesterday
# Generate from sketch/PDF
gemini
> Create a web app based on this wireframe.png
```
## Configuration
### Supported API Keys
- `GEMINI_API_KEY`: Google AI API key for Gemini models
- `GOOGLE_API_KEY`: Alternative Google API key (compatibility)
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to Google Cloud service account JSON file
### Model Configuration
- `GEMINI_MODEL`: Default model (default: "gemini-1.5-pro")
- Available: "gemini-1.5-pro", "gemini-1.5-flash", "gemini-1.0-pro"
- `GEMINI_TEMPERATURE`: Model temperature 0.0-2.0 (default: 0.7)
- `GEMINI_MAX_TOKENS`: Maximum tokens in response
### Advanced Configuration
- `GEMINI_SEARCH_ENABLED`: Enable Google Search grounding (true/false, default: false)
- `GEMINI_DEBUG`: Enable debug mode (true/false, default: false)
- `GCLOUD_PROJECT`: Google Cloud project ID
### Network Configuration
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
## Usage Examples
### Basic AI Development
```bash
# Start Gemini CLI with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/project
# Inside the container:
gemini # Start interactive session
```
### Codebase Analysis
```bash
# Analyze changes
gemini
> What are the main functions in src/main.py?
# Code generation
gemini
> Add error handling to the authentication module
# Documentation
gemini
> Generate README documentation for this project
```
### Multi-modal Development
```bash
# Work with images
gemini
> Analyze this architecture diagram and suggest improvements
# PDF processing
gemini
> Convert this API specification PDF to OpenAPI YAML
# Sketch to code
gemini
> Create a React component based on this UI mockup
```
### Advanced Features
```bash
# With Google Search grounding
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_SEARCH_ENABLED="true" \
/path/to/project
# With specific model
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_MODEL="gemini-1.5-flash" \
--env GEMINI_TEMPERATURE="0.3" \
/path/to/project
# Debug mode
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_DEBUG="true" \
/path/to/project
```
### Enterprise/Proxy Setup
```bash
# With proxy
uv run -m cubbi.cli session create --image gemini-cli \
--env HTTPS_PROXY="https://proxy.company.com:8080" \
/path/to/project
# With Google Cloud authentication
uv run -m cubbi.cli session create --image gemini-cli \
--env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
--env GCLOUD_PROJECT="your-project-id" \
/path/to/project
```
## Persistent Configuration
The following directories are automatically persisted:
- `~/.config/gemini/`: Gemini CLI configuration files
- `~/.cache/gemini/`: Model cache and temporary files
Configuration files are maintained across container restarts, ensuring your preferences and session history are preserved.
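One way to confirm this (a sketch; the project name and marker file are illustrative):
```bash
# Create a marker inside the persisted config directory
uv run -m cubbi.cli session create --image gemini-cli --project demo --no-connect \
--run "echo persisted > ~/.config/gemini/marker.txt"
# A later session for the same project should still see it
uv run -m cubbi.cli session create --image gemini-cli --project demo --no-connect \
--run "cat ~/.config/gemini/marker.txt"
```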
## Model Recommendations
### Best Overall Performance
- **Gemini 1.5 Pro**: Excellent reasoning and code understanding
- **Gemini 1.5 Flash**: Faster responses, good for iterative development
### Cost-Effective Options
- **Gemini 1.5 Flash**: Lower cost, high speed
- **Gemini 1.0 Pro**: Basic model for simple tasks
### Specialized Use Cases
- **Code Analysis**: Gemini 1.5 Pro
- **Quick Iterations**: Gemini 1.5 Flash
- **Multi-modal Tasks**: Gemini 1.5 Pro (supports images, PDFs)
## File Structure
```
cubbi/images/gemini-cli/
├── Dockerfile # Container image definition
├── cubbi_image.yaml # Cubbi image configuration
├── gemini_plugin.py # Authentication and setup plugin
└── README.md # This documentation
```
## Authentication Flow
1. **API Key Setup**: API key configured via Cubbi configuration or environment variables
2. **Plugin Initialization**: `gemini_plugin.py` creates configuration files
3. **Environment File**: Creates `~/.config/gemini/.env` with API key
4. **Configuration**: Creates `~/.config/gemini/config.json` with settings
5. **Ready**: Gemini CLI is ready for use with configured authentication
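The generated `config.json` looks roughly like this (a sketch; keys follow `gemini_cli_plugin.py` and values depend on your environment variables):
```bash
cat ~/.config/gemini/config.json
# {
#   "defaultModel": "gemini-1.5-pro",
#   "temperature": 0.7,
#   "searchEnabled": false,
#   "debug": false
# }
```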
## Troubleshooting
### Common Issues
**No API Key Found**
```
No API key found - Gemini CLI will require authentication
```
**Solution**: Set API key in Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.google.api_key "your-key"
```
**Authentication Failed**
```
Error: Invalid API key or authentication failed
```
**Solution**: Verify your API key at [Google AI Studio](https://aistudio.google.com/apikey):
```bash
# Test your API key
curl -H "Content-Type: application/json" \
-d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"
```
**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Use supported models:
```bash
# List available models (inside container)
curl -H "Content-Type: application/json" \
"https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_API_KEY"
```
**Rate Limit Exceeded**
```
Error: Quota exceeded
```
**Solution**: Google AI provides:
- 60 requests per minute
- 1,000 requests per day
- Consider upgrading to Google Cloud for higher limits
**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```
### Debug Mode
```bash
# Enable debug output
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_DEBUG="true"
# Check configuration
cat ~/.config/gemini/config.json
# Check environment
cat ~/.config/gemini/.env
# Test CLI directly
gemini --help
```
## Security Considerations
- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Configuration**: Secure file permissions for config files
- **Google Cloud**: Supports service account authentication for enterprise use
## Advanced Configuration
### Custom Model Configuration
```bash
# Use specific model with custom settings
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_MODEL="gemini-1.5-flash" \
--env GEMINI_TEMPERATURE="0.2" \
--env GEMINI_MAX_TOKENS="8192"
```
### Google Search Integration
```bash
# Enable Google Search grounding for up-to-date information
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_SEARCH_ENABLED="true"
```
### Google Cloud Integration
```bash
# Use with Google Cloud service account
uv run -m cubbi.cli session create --image gemini-cli \
--env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
--env GCLOUD_PROJECT="your-project-id"
```
## API Limits and Pricing
### Free Tier (Google AI)
- 60 requests per minute
- 1,000 requests per day
- Personal Google account required
### Paid Tier (Google Cloud)
- Higher rate limits
- Enterprise features
- Service account authentication
- Custom quotas available
## Support
For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Gemini CLI Functionality**: Visit [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli)
- **Google AI Platform**: Visit [Google AI documentation](https://ai.google.dev/)
- **API Keys**: Visit [Google AI Studio](https://aistudio.google.com/)
## License
This image configuration is provided under the same license as the Cubbi project. Google Gemini CLI is licensed separately by Google.

View File

@@ -0,0 +1,80 @@
name: gemini-cli
description: Google Gemini CLI environment for AI-powered development
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-gemini-cli:latest
environment:
# Google AI Configuration
- name: GEMINI_API_KEY
description: Google AI API key for Gemini models
required: false
sensitive: true
- name: GOOGLE_API_KEY
description: Alternative Google API key (compatibility)
required: false
sensitive: true
# Google Cloud Configuration
- name: GOOGLE_APPLICATION_CREDENTIALS
description: Path to Google Cloud service account JSON file
required: false
sensitive: true
- name: GCLOUD_PROJECT
description: Google Cloud project ID
required: false
# Model Configuration
- name: GEMINI_MODEL
description: Default Gemini model (e.g., gemini-1.5-pro, gemini-1.5-flash)
required: false
default: "gemini-1.5-pro"
- name: GEMINI_TEMPERATURE
description: Model temperature (0.0-2.0)
required: false
default: "0.7"
- name: GEMINI_MAX_TOKENS
description: Maximum tokens in response
required: false
# Search Configuration
- name: GEMINI_SEARCH_ENABLED
description: Enable Google Search grounding (true/false)
required: false
default: "false"
# Proxy Configuration
- name: HTTP_PROXY
description: HTTP proxy server URL
required: false
- name: HTTPS_PROXY
description: HTTPS proxy server URL
required: false
# Debug Configuration
- name: GEMINI_DEBUG
description: Enable debug mode (true/false)
required: false
default: "false"
ports: []
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.config/gemini"
target: "/cubbi-config/gemini-settings"
type: "directory"
description: "Gemini CLI configuration and history"
- source: "/home/cubbi/.cache/gemini"
target: "/cubbi-config/gemini-cache"
type: "directory"
description: "Gemini CLI cache directory"

View File

@@ -0,0 +1,241 @@
#!/usr/bin/env python3
"""
Gemini CLI Plugin for Cubbi
Handles authentication setup and configuration for Google Gemini CLI
"""
import json
import os
import stat
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
class GeminiCliPlugin(ToolPlugin):
"""Plugin for setting up Gemini CLI authentication and configuration"""
@property
def tool_name(self) -> str:
return "gemini-cli"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_gemini_config_dir(self) -> Path:
"""Get the Gemini configuration directory"""
# Get the actual username from the config if available
username = self.config.get("username", "cubbi")
return Path(f"/home/{username}/.config/gemini")
def _get_gemini_cache_dir(self) -> Path:
"""Get the Gemini cache directory"""
# Get the actual username from the config if available
username = self.config.get("username", "cubbi")
return Path(f"/home/{username}/.cache/gemini")
def _ensure_gemini_dirs(self) -> tuple[Path, Path]:
"""Ensure Gemini directories exist with correct ownership"""
config_dir = self._get_gemini_config_dir()
cache_dir = self._get_gemini_cache_dir()
# Create directories
for directory in [config_dir, cache_dir]:
try:
directory.mkdir(mode=0o755, parents=True, exist_ok=True)
self._set_ownership(directory)
except OSError as e:
self.status.log(
f"Failed to create Gemini directory {directory}: {e}", "ERROR"
)
return config_dir, cache_dir
def initialize(self) -> bool:
"""Initialize Gemini CLI configuration"""
self.status.log("Setting up Gemini CLI configuration...")
# Ensure Gemini directories exist
config_dir, cache_dir = self._ensure_gemini_dirs()
# Set up authentication and configuration
auth_configured = self._setup_authentication(config_dir)
config_created = self._create_configuration_file(config_dir)
if auth_configured or config_created:
self.status.log("✅ Gemini CLI configured successfully")
# A default config file is always written, so surface the missing-key
# notice whenever authentication itself could not be set up
if not auth_configured:
self.status.log(
" No API key found - Gemini CLI will require authentication",
"INFO",
)
self.status.log(
" You can configure API keys using environment variables", "INFO"
)
# Always return True to allow container to start
return True
def _setup_authentication(self, config_dir: Path) -> bool:
"""Set up Gemini authentication"""
api_key = self._get_api_key()
if not api_key:
return False
# Create environment file for API key
env_file = config_dir / ".env"
try:
with open(env_file, "w") as f:
f.write(f"GEMINI_API_KEY={api_key}\n")
# Set ownership and secure file permissions
self._set_ownership(env_file)
os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)
self.status.log(f"Created Gemini environment file at {env_file}")
self.status.log("Added Gemini API key")
return True
except Exception as e:
self.status.log(f"Failed to create environment file: {e}", "ERROR")
return False
def _get_api_key(self) -> str:
"""Get the Gemini API key from environment variables"""
# Check multiple possible environment variable names
for key_name in ["GEMINI_API_KEY", "GOOGLE_API_KEY"]:
api_key = os.environ.get(key_name)
if api_key:
return api_key
return ""
def _create_configuration_file(self, config_dir: Path) -> bool:
"""Create Gemini CLI configuration file"""
try:
config = self._build_configuration()
if not config:
return False
config_file = config_dir / "config.json"
with open(config_file, "w") as f:
json.dump(config, f, indent=2)
# Set ownership and permissions
self._set_ownership(config_file)
os.chmod(config_file, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
self.status.log(f"Created Gemini configuration at {config_file}")
return True
except Exception as e:
self.status.log(f"Failed to create configuration file: {e}", "ERROR")
return False
def _build_configuration(self) -> Dict[str, Any]:
"""Build Gemini CLI configuration from environment variables"""
config = {}
# Model configuration
model = os.environ.get("GEMINI_MODEL", "gemini-1.5-pro")
if model:
config["defaultModel"] = model
self.status.log(f"Set default model to {model}")
# Temperature setting
temperature = os.environ.get("GEMINI_TEMPERATURE")
if temperature:
try:
temp_value = float(temperature)
if 0.0 <= temp_value <= 2.0:
config["temperature"] = temp_value
self.status.log(f"Set temperature to {temp_value}")
else:
self.status.log(
f"Invalid temperature value {temperature}, using default",
"WARNING",
)
except ValueError:
self.status.log(
f"Invalid temperature format {temperature}, using default",
"WARNING",
)
# Max tokens setting
max_tokens = os.environ.get("GEMINI_MAX_TOKENS")
if max_tokens:
try:
tokens_value = int(max_tokens)
if tokens_value > 0:
config["maxTokens"] = tokens_value
self.status.log(f"Set max tokens to {tokens_value}")
else:
self.status.log(
f"Invalid max tokens value {max_tokens}, using default",
"WARNING",
)
except ValueError:
self.status.log(
f"Invalid max tokens format {max_tokens}, using default",
"WARNING",
)
# Search configuration
search_enabled = os.environ.get("GEMINI_SEARCH_ENABLED", "false")
if search_enabled.lower() in ["true", "false"]:
config["searchEnabled"] = search_enabled.lower() == "true"
if config["searchEnabled"]:
self.status.log("Enabled Google Search grounding")
# Debug mode
debug_mode = os.environ.get("GEMINI_DEBUG", "false")
if debug_mode.lower() in ["true", "false"]:
config["debug"] = debug_mode.lower() == "true"
if config["debug"]:
self.status.log("Enabled debug mode")
# Proxy settings
for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
proxy_value = os.environ.get(proxy_var)
if proxy_value:
config[proxy_var.lower()] = proxy_value
self.status.log(f"Added proxy configuration: {proxy_var}")
# Google Cloud project
project = os.environ.get("GCLOUD_PROJECT")
if project:
config["project"] = project
self.status.log(f"Set Google Cloud project to {project}")
return config
def setup_tool_configuration(self) -> bool:
"""Set up Gemini CLI configuration - called by base class"""
# Additional tool configuration can be added here if needed
return True
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Gemini CLI with available MCP servers if applicable"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Gemini CLI doesn't have native MCP support,
# but we could potentially add custom integrations here
self.status.log(
f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
)
return True

View File

@@ -0,0 +1,312 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Gemini CLI Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""
import subprocess
import sys
import os
def run_command(cmd, description="", check=True):
"""Run a shell command and return result"""
print(f"\n🔍 {description}")
print(f"Running: {cmd}")
try:
result = subprocess.run(
cmd, shell=True, capture_output=True, text=True, check=check
)
if result.stdout:
print("STDOUT:")
print(result.stdout)
if result.stderr:
print("STDERR:")
print(result.stderr)
return result
except subprocess.CalledProcessError as e:
print(f"❌ Command failed with exit code {e.returncode}")
if e.stdout:
print("STDOUT:")
print(e.stdout)
if e.stderr:
print("STDERR:")
print(e.stderr)
if check:
raise
return e
def test_docker_build():
"""Test Docker image build"""
print("\n" + "=" * 60)
print("🧪 Testing Docker Image Build")
print("=" * 60)
# Get the directory containing this test file
test_dir = os.path.dirname(os.path.abspath(__file__))
result = run_command(
f"cd {test_dir} && docker build -t monadical/cubbi-gemini-cli:latest .",
"Building Gemini CLI Docker image",
)
if result.returncode == 0:
print("✅ Gemini CLI Docker image built successfully")
return True
else:
print("❌ Gemini CLI Docker image build failed")
return False
def test_docker_image_exists():
"""Test if the Gemini CLI Docker image exists"""
print("\n" + "=" * 60)
print("🧪 Testing Docker Image Existence")
print("=" * 60)
result = run_command(
"docker images monadical/cubbi-gemini-cli:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
"Checking if Gemini CLI Docker image exists",
)
if "monadical/cubbi-gemini-cli" in result.stdout:
print("✅ Gemini CLI Docker image exists")
return True
else:
print("❌ Gemini CLI Docker image not found")
return False
def test_gemini_version():
"""Test basic Gemini CLI functionality in container"""
print("\n" + "=" * 60)
print("🧪 Testing Gemini CLI Version")
print("=" * 60)
result = run_command(
"docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'gemini --version'",
"Testing Gemini CLI version command",
)
if result.returncode == 0 and (
"gemini" in result.stdout.lower() or "version" in result.stdout.lower()
):
print("✅ Gemini CLI version command works")
return True
else:
print("❌ Gemini CLI version command failed")
return False
def test_api_key_configuration():
"""Test API key configuration and environment setup"""
print("\n" + "=" * 60)
print("🧪 Testing API Key Configuration")
print("=" * 60)
# Test with multiple API keys
test_keys = {
"GEMINI_API_KEY": "test-gemini-key",
"GOOGLE_API_KEY": "test-google-key",
}
env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])
result = run_command(
f"docker run --rm {env_flags} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/.env 2>/dev/null || echo \"No .env file found\"'",
"Testing API key configuration in .env file",
)
success = True
if "test-gemini-key" in result.stdout:
print("✅ GEMINI_API_KEY configured correctly")
else:
print("❌ GEMINI_API_KEY not found in configuration")
success = False
return success
def test_configuration_file():
"""Test Gemini CLI configuration file creation"""
print("\n" + "=" * 60)
print("🧪 Testing Configuration File")
print("=" * 60)
env_vars = "-e GEMINI_API_KEY='test-key' -e GEMINI_MODEL='gemini-1.5-pro' -e GEMINI_TEMPERATURE='0.5'"
result = run_command(
f"docker run --rm {env_vars} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/config.json 2>/dev/null || echo \"No config file found\"'",
"Testing configuration file creation",
)
success = True
if "gemini-1.5-pro" in result.stdout:
print("✅ Default model configured correctly")
else:
print("❌ Default model not found in configuration")
success = False
if "0.5" in result.stdout:
print("✅ Temperature configured correctly")
else:
print("❌ Temperature not found in configuration")
success = False
return success
def test_cubbi_cli_integration():
"""Test Cubbi CLI integration"""
print("\n" + "=" * 60)
print("🧪 Testing Cubbi CLI Integration")
print("=" * 60)
# Change to project root for cubbi commands
project_root = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
# Test image listing
result = run_command(
f"cd {project_root} && uv run -m cubbi.cli image list",
"Testing Cubbi CLI can see images",
check=False,
)
if "gemini-cli" in result.stdout:
print("✅ Cubbi CLI can list Gemini CLI image")
else:
        print(
            "⚠️ Gemini CLI image not yet registered with Cubbi CLI - this is expected during development"
        )
# Test basic cubbi CLI works
result = run_command(
f"cd {project_root} && uv run -m cubbi.cli --help",
"Testing basic Cubbi CLI functionality",
)
if result.returncode == 0 and "cubbi" in result.stdout.lower():
print("✅ Cubbi CLI basic functionality works")
return True
else:
print("❌ Cubbi CLI basic functionality failed")
return False
def test_persistent_configuration():
"""Test persistent configuration directories"""
print("\n" + "=" * 60)
print("🧪 Testing Persistent Configuration")
print("=" * 60)
    # Check each persistent directory separately so a match in one listing
    # cannot mask a missing directory in the other
    config_result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-key' monadical/cubbi-gemini-cli:latest bash -c 'ls ~/.config/'",
        "Testing persistent ~/.config/gemini directory",
    )
    cache_result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-key' monadical/cubbi-gemini-cli:latest bash -c 'ls ~/.cache/'",
        "Testing persistent ~/.cache/gemini directory",
    )
    success = True
    if "gemini" in config_result.stdout:
        print("✅ ~/.config/gemini directory exists")
    else:
        print("❌ ~/.config/gemini directory not found")
        success = False
    if "gemini" in cache_result.stdout:
        print("✅ ~/.cache/gemini directory exists")
    else:
        print("❌ ~/.cache/gemini directory not found")
        success = False
    return success
def test_plugin_functionality():
"""Test the Gemini CLI plugin functionality"""
print("\n" + "=" * 60)
print("🧪 Testing Plugin Functionality")
print("=" * 60)
# Test plugin without API keys (should still work)
result = run_command(
"docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test without API keys\"'",
"Testing plugin functionality without API keys",
)
if "No API key found - Gemini CLI will require authentication" in result.stdout:
print("✅ Plugin handles missing API keys gracefully")
else:
print(" Plugin API key handling test - check output above")
# Test plugin with API keys
result = run_command(
"docker run --rm -e GEMINI_API_KEY='test-plugin-key' monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test with API keys\"'",
"Testing plugin functionality with API keys",
)
if "Gemini CLI configured successfully" in result.stdout:
print("✅ Plugin configures environment successfully")
return True
else:
print("❌ Plugin environment configuration failed")
return False
def main():
"""Run all tests"""
print("🚀 Starting Gemini CLI Cubbi Image Tests")
print("=" * 60)
tests = [
("Docker Image Build", test_docker_build),
("Docker Image Exists", test_docker_image_exists),
("Cubbi CLI Integration", test_cubbi_cli_integration),
("Gemini CLI Version", test_gemini_version),
("API Key Configuration", test_api_key_configuration),
("Configuration File", test_configuration_file),
("Persistent Configuration", test_persistent_configuration),
("Plugin Functionality", test_plugin_functionality),
]
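    # Order matters here: the build test runs first so that every docker-run
    # based check below has a local monadical/cubbi-gemini-cli:latest image.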
results = {}
for test_name, test_func in tests:
try:
results[test_name] = test_func()
except Exception as e:
print(f"❌ Test '{test_name}' failed with exception: {e}")
results[test_name] = False
# Print summary
print("\n" + "=" * 60)
print("📊 TEST SUMMARY")
print("=" * 60)
total_tests = len(tests)
passed_tests = sum(1 for result in results.values() if result)
failed_tests = total_tests - passed_tests
for test_name, result in results.items():
status = "✅ PASS" if result else "❌ FAIL"
print(f"{status} {test_name}")
print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")
if failed_tests == 0:
print("\n🎉 All tests passed! Gemini CLI image is ready for use.")
return 0
else:
print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
return 1
if __name__ == "__main__":
sys.exit(main())
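# Typical local invocation (the script path is an assumption; adjust to where
# this file lives under the gemini-cli image directory):
#   python test.py
# Docker must be running, and the Cubbi CLI integration test additionally
# expects `uv run -m cubbi.cli` to work from the project root.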

View File

@@ -6,7 +6,6 @@ LABEL description="Goose for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
sudo \
passwd \
bash \
curl \

View File

@@ -24,6 +24,9 @@ environment:
required: false
default: https://cloud.langfuse.com
ports:
- 8000
volumes:
- mountPath: /app
description: Application directory

View File

@@ -111,13 +111,6 @@ class GoosePlugin(ToolPlugin):
config_data["GOOSE_PROVIDER"] = goose_provider
self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")
# If provider is OpenAI and OPENAI_URL is set, configure OPENAI_HOST
if goose_provider.lower() == "openai":
openai_url = os.environ.get("OPENAI_URL")
if openai_url:
config_data["OPENAI_HOST"] = openai_url
self.status.log(f"Set OPENAI_HOST to {openai_url}")
try:
with config_file.open("w") as f:
yaml.dump(config_data, f)
@@ -171,7 +164,7 @@ class GoosePlugin(ToolPlugin):
"enabled": True,
"name": server_name,
"timeout": 60,
"type": server.get("type", "sse"),
"type": "sse",
"uri": mcp_url,
"envs": {},
}
@@ -184,7 +177,7 @@ class GoosePlugin(ToolPlugin):
"enabled": True,
"name": server_name,
"timeout": 60,
"type": server.get("type", "sse"),
"type": "sse",
"uri": server_url,
"envs": {},
}

View File

@@ -6,7 +6,6 @@ LABEL description="Opencode for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
sudo \
passwd \
bash \
curl \
@@ -31,22 +30,12 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install Node.js
ARG NODE_VERSION=v22.16.0
# Install opencode-ai
RUN mkdir -p /opt/node && \
ARCH=$(uname -m) && \
if [ "$ARCH" = "x86_64" ]; then \
NODE_ARCH=linux-x64; \
elif [ "$ARCH" = "aarch64" ]; then \
NODE_ARCH=linux-arm64; \
else \
echo "Unsupported architecture"; exit 1; \
fi && \
curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
curl -fsSL https://nodejs.org/dist/v22.16.0/node-v22.16.0-linux-x64.tar.gz -o node.tar.gz && \
tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
rm node.tar.gz
ENV PATH="/opt/node/bin:$PATH"
RUN npm i -g yarn
RUN npm i -g opencode-ai
@@ -67,7 +56,6 @@ RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.ba
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
ENV COLORTERM=truecolor
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

View File

@@ -9,12 +9,10 @@ init:
command: /entrypoint.sh
environment: []
ports: []
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.config/opencode"
target: "/cubbi-config/config-opencode"
type: "directory"
description: "Opencode configuration"
persistent_configs: []

View File

@@ -117,16 +117,6 @@ class OpencodePlugin(ToolPlugin):
api_key = os.environ.get(env_var)
if api_key:
auth_data[provider] = {"type": "api", "key": api_key}
# Add custom endpoint URL for OpenAI if available
if provider == "openai":
openai_url = os.environ.get("OPENAI_URL")
if openai_url:
auth_data[provider]["baseURL"] = openai_url
self.status.log(
f"Added OpenAI custom endpoint URL: {openai_url}"
)
self.status.log(f"Added {provider} API key to auth configuration")
# Only write file if we have at least one API key
@@ -182,9 +172,6 @@ class OpencodePlugin(ToolPlugin):
else:
config_data = {}
# Set default theme to system
config_data.setdefault("theme", "system")
# Update with environment variables
opencode_model = os.environ.get("CUBBI_MODEL")
opencode_provider = os.environ.get("CUBBI_PROVIDER")

View File

@@ -79,7 +79,6 @@ class MCPManager:
name: str,
url: str,
headers: Dict[str, str] = None,
mcp_type: Optional[str] = None,
add_as_default: bool = True,
) -> Dict[str, Any]:
"""Add a remote MCP server.
@@ -98,7 +97,6 @@ class MCPManager:
name=name,
url=url,
headers=headers or {},
mcp_type=mcp_type,
)
# Add to the configuration

View File

@@ -51,6 +51,7 @@ class Image(BaseModel):
image: str
init: Optional[ImageInit] = None
environment: List[ImageEnvironmentVariable] = []
ports: List[int] = []
volumes: List[VolumeMount] = []
persistent_configs: List[PersistentConfig] = []
@@ -60,7 +61,6 @@ class RemoteMCP(BaseModel):
type: str = "remote"
url: str
headers: Dict[str, str] = Field(default_factory=dict)
mcp_type: Optional[str] = None
class DockerMCP(BaseModel):
@@ -102,7 +102,6 @@ class Session(BaseModel):
status: SessionStatus
container_id: Optional[str] = None
ports: Dict[int, int] = Field(default_factory=dict)
mcps: List[str] = Field(default_factory=list)
class Config(BaseModel):
@@ -110,5 +109,5 @@ class Config(BaseModel):
images: Dict[str, Image] = Field(default_factory=dict)
defaults: Dict[str, object] = Field(
default_factory=dict
) # Can store strings, booleans, lists, or other values
) # Can store strings, booleans, or other values
mcps: List[Dict[str, Any]] = Field(default_factory=list)

View File

@@ -14,7 +14,6 @@ ENV_MAPPINGS = {
"services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
"services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
"services.openai.api_key": "OPENAI_API_KEY",
"services.openai.url": "OPENAI_URL",
"services.anthropic.api_key": "ANTHROPIC_API_KEY",
"services.openrouter.api_key": "OPENROUTER_API_KEY",
"services.google.api_key": "GOOGLE_API_KEY",
@@ -96,7 +95,6 @@ class UserConfigManager:
"mount_local": True,
"networks": [], # Default networks to connect to (besides cubbi-network)
"volumes": [], # Default volumes to mount, format: "source:dest"
"ports": [], # Default ports to forward, format: list of integers
"mcps": [], # Default MCP servers to connect to
"model": "claude-3-5-sonnet-latest", # Default LLM model to use
"provider": "anthropic", # Default LLM provider to use

View File

@@ -1,6 +1,6 @@
[project]
name = "cubbi"
version = "0.4.0"
version = "0.2.0"
description = "Cubbi Container Tool"
readme = "README.md"
requires-python = ">=3.12"

View File

@@ -2,18 +2,17 @@
Common test fixtures for Cubbi Container tests.
"""
import tempfile
import uuid
import tempfile
import pytest
import docker
from pathlib import Path
from unittest.mock import patch
import docker
import pytest
from cubbi.config import ConfigManager
from cubbi.container import ContainerManager
from cubbi.models import Session, SessionStatus
from cubbi.session import SessionManager
from cubbi.config import ConfigManager
from cubbi.models import Session, SessionStatus
from cubbi.user_config import UserConfigManager
@@ -42,6 +41,13 @@ requires_docker = pytest.mark.skipif(
)
@pytest.fixture
def temp_dir():
"""Create a temporary directory for test files."""
with tempfile.TemporaryDirectory() as tmp_dir:
yield Path(tmp_dir)
@pytest.fixture
def temp_config_dir():
"""Create a temporary directory for configuration files."""
@@ -50,26 +56,76 @@ def temp_config_dir():
@pytest.fixture
def mock_container_manager(isolate_cubbi_config):
"""Mock the ContainerManager class with proper behaviors for testing."""
def isolated_config(temp_config_dir):
"""Provide an isolated UserConfigManager instance."""
config_path = temp_config_dir / "config.yaml"
config_path.parent.mkdir(parents=True, exist_ok=True)
return UserConfigManager(str(config_path))
@pytest.fixture
def isolated_session_manager(temp_config_dir):
"""Create an isolated session manager for testing."""
sessions_path = temp_config_dir / "sessions.yaml"
return SessionManager(sessions_path)
@pytest.fixture
def isolated_config_manager():
"""Create an isolated config manager for testing."""
config_manager = ConfigManager()
# Ensure we're using the built-in images, not trying to load from user config
return config_manager
@pytest.fixture
def mock_session_manager():
"""Mock the SessionManager class."""
with patch("cubbi.cli.session_manager") as mock_manager:
yield mock_manager
@pytest.fixture
def mock_container_manager():
"""Mock the ContainerManager class with proper initialization."""
mock_session = Session(
id="test-session-id",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
ports={8080: 32768},
ports={"8080": "8080"},
)
container_manager = isolate_cubbi_config["container_manager"]
with patch("cubbi.cli.container_manager") as mock_manager:
# Set behaviors to avoid TypeErrors
mock_manager.list_sessions.return_value = []
mock_manager.create_session.return_value = mock_session
mock_manager.close_session.return_value = True
mock_manager.close_all_sessions.return_value = (3, True)
# MCP-related mocks
mock_manager.get_mcp_status.return_value = {
"status": "running",
"container_id": "test-id",
}
mock_manager.start_mcp.return_value = {
"status": "running",
"container_id": "test-id",
}
mock_manager.stop_mcp.return_value = True
mock_manager.restart_mcp.return_value = {
"status": "running",
"container_id": "test-id",
}
mock_manager.get_mcp_logs.return_value = "Test log output"
yield mock_manager
# Patch the container manager methods for mocking
with (
patch.object(container_manager, "list_sessions", return_value=[]),
patch.object(container_manager, "create_session", return_value=mock_session),
patch.object(container_manager, "close_session", return_value=True),
patch.object(container_manager, "close_all_sessions", return_value=(3, True)),
):
yield container_manager
@pytest.fixture
def container_manager(isolated_session_manager, isolated_config_manager):
"""Create a container manager with isolated components."""
return ContainerManager(
config_manager=isolated_config_manager, session_manager=isolated_session_manager
)
@pytest.fixture
@@ -81,23 +137,28 @@ def cli_runner():
@pytest.fixture
def test_file_content(temp_config_dir):
"""Create a test file with content in a temporary directory."""
def test_file_content(temp_dir):
"""Create a test file with content in the temporary directory."""
test_content = "This is a test file for volume mounting"
test_file = temp_config_dir / "test_volume_file.txt"
test_file = temp_dir / "test_volume_file.txt"
with open(test_file, "w") as f:
f.write(test_content)
return test_file, test_content
@pytest.fixture
def docker_test_network():
def test_network_name():
"""Generate a unique network name for testing."""
return f"cubbi-test-network-{uuid.uuid4().hex[:8]}"
@pytest.fixture
def docker_test_network(test_network_name):
"""Create a Docker network for testing and clean it up after."""
if not is_docker_available():
pytest.skip("Docker is not available")
return None
test_network_name = f"cubbi-test-network-{uuid.uuid4().hex[:8]}"
client = docker.from_env()
network = client.networks.create(test_network_name, driver="bridge")
@@ -111,59 +172,8 @@ def docker_test_network():
pass
@pytest.fixture(autouse=True, scope="function")
def isolate_cubbi_config(temp_config_dir):
"""
Automatically isolate all Cubbi configuration for every test.
This fixture ensures that tests never touch the user's real configuration
by patching both ConfigManager and UserConfigManager in cli.py to use
temporary directories.
"""
# Create isolated config instances with temporary paths
config_path = temp_config_dir / "config.yaml"
user_config_path = temp_config_dir / "user_config.yaml"
# Create the ConfigManager with a custom config path
isolated_config_manager = ConfigManager(config_path)
# Create the UserConfigManager with a custom config path
isolated_user_config = UserConfigManager(str(user_config_path))
# Create isolated session manager
sessions_path = temp_config_dir / "sessions.yaml"
isolated_session_manager = SessionManager(sessions_path)
# Create isolated container manager
isolated_container_manager = ContainerManager(
isolated_config_manager, isolated_session_manager, isolated_user_config
)
# Patch all the global instances in cli.py and the UserConfigManager class
with (
patch("cubbi.cli.config_manager", isolated_config_manager),
patch("cubbi.cli.user_config", isolated_user_config),
patch("cubbi.cli.session_manager", isolated_session_manager),
patch("cubbi.cli.container_manager", isolated_container_manager),
patch("cubbi.cli.UserConfigManager", return_value=isolated_user_config),
):
# Create isolated MCP manager with isolated user config
from cubbi.mcp import MCPManager
isolated_mcp_manager = MCPManager(config_manager=isolated_user_config)
# Patch the global mcp_manager instance
with patch("cubbi.cli.mcp_manager", isolated_mcp_manager):
yield {
"config_manager": isolated_config_manager,
"user_config": isolated_user_config,
"session_manager": isolated_session_manager,
"container_manager": isolated_container_manager,
"mcp_manager": isolated_mcp_manager,
}
@pytest.fixture
def patched_config_manager(isolate_cubbi_config):
"""Compatibility fixture - returns the isolated user config."""
return isolate_cubbi_config["user_config"]
def patched_config_manager(isolated_config):
"""Patch the UserConfigManager in cli.py to use our isolated instance."""
with patch("cubbi.cli.user_config", isolated_config):
yield isolated_config

View File

@@ -189,103 +189,4 @@ def test_config_reset(cli_runner, patched_config_manager, monkeypatch):
assert patched_config_manager.get("defaults.image") == "goose"
def test_port_list_empty(cli_runner, patched_config_manager):
"""Test listing ports when none are configured."""
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "No default ports configured" in result.stdout
def test_port_add_single(cli_runner, patched_config_manager):
"""Test adding a single port."""
result = cli_runner.invoke(app, ["config", "port", "add", "8000"])
assert result.exit_code == 0
assert "Added port 8000 to defaults" in result.stdout
# Verify it was added
ports = patched_config_manager.get("defaults.ports")
assert 8000 in ports
def test_port_add_multiple(cli_runner, patched_config_manager):
"""Test adding multiple ports with comma separation."""
result = cli_runner.invoke(app, ["config", "port", "add", "8000,3000,5173"])
assert result.exit_code == 0
assert "Added ports [8000, 3000, 5173] to defaults" in result.stdout
# Verify they were added
ports = patched_config_manager.get("defaults.ports")
assert 8000 in ports
assert 3000 in ports
assert 5173 in ports
def test_port_add_duplicate(cli_runner, patched_config_manager):
"""Test adding a port that already exists."""
# Add a port first
patched_config_manager.set("defaults.ports", [8000])
# Try to add the same port again
result = cli_runner.invoke(app, ["config", "port", "add", "8000"])
assert result.exit_code == 0
assert "Port 8000 is already in defaults" in result.stdout
def test_port_add_invalid_format(cli_runner, patched_config_manager):
"""Test adding an invalid port format."""
result = cli_runner.invoke(app, ["config", "port", "add", "invalid"])
assert result.exit_code == 0
assert "Error: Invalid port format" in result.stdout
def test_port_add_invalid_range(cli_runner, patched_config_manager):
"""Test adding a port outside valid range."""
result = cli_runner.invoke(app, ["config", "port", "add", "70000"])
assert result.exit_code == 0
assert "Error: Invalid ports [70000]" in result.stdout
def test_port_list_with_ports(cli_runner, patched_config_manager):
"""Test listing ports when some are configured."""
# Add some ports
patched_config_manager.set("defaults.ports", [8000, 3000])
# List ports
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "8000" in result.stdout
assert "3000" in result.stdout
def test_port_remove(cli_runner, patched_config_manager):
"""Test removing a port."""
# Add a port first
patched_config_manager.set("defaults.ports", [8000])
# Remove the port
result = cli_runner.invoke(app, ["config", "port", "remove", "8000"])
assert result.exit_code == 0
assert "Removed port 8000 from defaults" in result.stdout
# Verify it's gone
ports = patched_config_manager.get("defaults.ports")
assert 8000 not in ports
def test_port_remove_not_found(cli_runner, patched_config_manager):
"""Test removing a port that doesn't exist."""
result = cli_runner.invoke(app, ["config", "port", "remove", "8000"])
assert result.exit_code == 0
assert "Port 8000 is not in defaults" in result.stdout
# patched_config_manager fixture is now in conftest.py

View File

@@ -1,90 +0,0 @@
"""
Test that configuration isolation works correctly and doesn't touch user's real config.
"""
from pathlib import Path
from cubbi.cli import app
def test_config_isolation_preserves_user_config(cli_runner, isolate_cubbi_config):
"""Test that test isolation doesn't affect user's real configuration."""
# Get the user's real config path
real_config_path = Path.home() / ".config" / "cubbi" / "config.yaml"
# If the user has a real config, store its content before test
original_content = None
if real_config_path.exists():
with open(real_config_path, "r") as f:
original_content = f.read()
# Run some config modification commands in the test
result = cli_runner.invoke(app, ["config", "port", "add", "9999"])
assert result.exit_code == 0
result = cli_runner.invoke(app, ["config", "set", "defaults.image", "test-image"])
assert result.exit_code == 0
# Verify the user's real config is unchanged
if original_content is not None:
with open(real_config_path, "r") as f:
current_content = f.read()
assert current_content == original_content
else:
# If no real config existed, it should still not exist
assert not real_config_path.exists()
def test_isolated_config_works_independently(cli_runner, isolate_cubbi_config):
"""Test that the isolated config works correctly for tests."""
# Add a port to isolated config
result = cli_runner.invoke(app, ["config", "port", "add", "8888"])
assert result.exit_code == 0
assert "Added port 8888 to defaults" in result.stdout
# Verify it appears in the list
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "8888" in result.stdout
# Remove the port
result = cli_runner.invoke(app, ["config", "port", "remove", "8888"])
assert result.exit_code == 0
assert "Removed port 8888 from defaults" in result.stdout
# Verify it's gone
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "No default ports configured" in result.stdout
def test_each_test_gets_fresh_config(cli_runner, isolate_cubbi_config):
"""Test that each test gets a fresh, isolated configuration."""
# This test should start with empty ports (fresh config)
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "No default ports configured" in result.stdout
# Add a port
result = cli_runner.invoke(app, ["config", "port", "add", "7777"])
assert result.exit_code == 0
# Verify it's there
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "7777" in result.stdout
def test_another_fresh_config_test(cli_runner, isolate_cubbi_config):
"""Another test to verify each test gets a completely fresh config."""
# This test should also start with empty ports (independent of previous test)
result = cli_runner.invoke(app, ["config", "port", "list"])
assert result.exit_code == 0
assert "No default ports configured" in result.stdout
# The port from the previous test should not be here
result = cli_runner.invoke(app, ["config", "port", "list"])
assert "7777" not in result.stdout

View File

@@ -6,8 +6,6 @@ These tests require Docker to be running.
import subprocess
import time
import uuid
import docker
# Import the requires_docker decorator from conftest
from conftest import requires_docker
@@ -23,56 +21,13 @@ def execute_command_in_container(container_id, command):
return result.stdout.strip()
def wait_for_container_init(container_id, timeout=5.0, poll_interval=0.1):
"""
Wait for a Cubbi container to complete initialization by polling /cubbi/init.status.
Args:
container_id: Docker container ID
timeout: Maximum time to wait in seconds (default: 5.0)
poll_interval: Time between polls in seconds (default: 0.1)
Returns:
bool: True if initialization completed, False if timed out
Raises:
subprocess.CalledProcessError: If docker exec command fails
"""
start_time = time.time()
while time.time() - start_time < timeout:
try:
# Check if /cubbi/init.status contains INIT_COMPLETE=true
result = execute_command_in_container(
container_id,
"grep -q 'INIT_COMPLETE=true' /cubbi/init.status 2>/dev/null && echo 'COMPLETE' || echo 'PENDING'",
)
if result == "COMPLETE":
return True
except subprocess.CalledProcessError:
# File might not exist yet or container not ready, continue polling
pass
time.sleep(poll_interval)
# Timeout reached
return False
@requires_docker
def test_integration_session_create_with_volumes(
isolate_cubbi_config, test_file_content
):
def test_integration_session_create_with_volumes(container_manager, test_file_content):
"""Test creating a session with a volume mount."""
test_file, test_content = test_file_content
session = None
try:
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create a session with a volume mount
session = container_manager.create_session(
image_name="goose",
@@ -84,9 +39,8 @@ def test_integration_session_create_with_volumes(
assert session is not None
assert session.status == "running"
# Wait for container initialization to complete
init_success = wait_for_container_init(session.container_id)
assert init_success, "Container initialization timed out"
# Give container time to fully start
time.sleep(2)
# Verify the file exists in the container and has correct content
container_content = execute_command_in_container(
@@ -96,22 +50,19 @@ def test_integration_session_create_with_volumes(
assert container_content == test_content
finally:
# Clean up the container (use kill for faster test cleanup)
# Clean up the container
if session and session.container_id:
container_manager.close_session(session.id, kill=True)
container_manager.close_session(session.id)
@requires_docker
def test_integration_session_create_with_networks(
isolate_cubbi_config, docker_test_network
container_manager, docker_test_network
):
"""Test creating a session connected to a custom network."""
session = None
try:
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create a session with the test network
session = container_manager.create_session(
image_name="goose",
@@ -123,9 +74,8 @@ def test_integration_session_create_with_networks(
assert session is not None
assert session.status == "running"
# Wait for container initialization to complete
init_success = wait_for_container_init(session.container_id)
assert init_success, "Container initialization timed out"
# Give container time to fully start
time.sleep(2)
# Verify the container is connected to the test network
# Use inspect to check network connections
@@ -147,240 +97,6 @@ def test_integration_session_create_with_networks(
assert int(network_interfaces) >= 2
finally:
# Clean up the container (use kill for faster test cleanup)
# Clean up the container
if session and session.container_id:
container_manager.close_session(session.id, kill=True)
@requires_docker
def test_integration_session_create_with_ports(isolate_cubbi_config):
"""Test creating a session with port forwarding."""
session = None
try:
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create a session with port forwarding
session = container_manager.create_session(
image_name="goose",
session_name=f"cubbi-test-ports-{uuid.uuid4().hex[:8]}",
mount_local=False, # Don't mount current directory
ports=[8080, 9000], # Forward these ports
)
assert session is not None
assert session.status == "running"
# Verify ports are mapped
assert isinstance(session.ports, dict)
assert 8080 in session.ports
assert 9000 in session.ports
# Verify port mappings are valid (host ports should be assigned)
assert isinstance(session.ports[8080], int)
assert isinstance(session.ports[9000], int)
assert session.ports[8080] > 0
assert session.ports[9000] > 0
# Wait for container initialization to complete
init_success = wait_for_container_init(session.container_id)
assert init_success, "Container initialization timed out"
# Verify Docker port mappings using Docker client
import docker
client = docker.from_env()
container = client.containers.get(session.container_id)
container_ports = container.attrs["NetworkSettings"]["Ports"]
# Verify both ports are exposed
assert "8080/tcp" in container_ports
assert "9000/tcp" in container_ports
# Verify host port bindings exist
assert container_ports["8080/tcp"] is not None
assert container_ports["9000/tcp"] is not None
assert len(container_ports["8080/tcp"]) > 0
assert len(container_ports["9000/tcp"]) > 0
# Verify host ports match session.ports
host_port_8080 = int(container_ports["8080/tcp"][0]["HostPort"])
host_port_9000 = int(container_ports["9000/tcp"][0]["HostPort"])
assert session.ports[8080] == host_port_8080
assert session.ports[9000] == host_port_9000
finally:
# Clean up the container (use kill for faster test cleanup)
if session and session.container_id:
container_manager.close_session(session.id, kill=True)
@requires_docker
def test_integration_session_create_no_ports(isolate_cubbi_config):
"""Test creating a session without port forwarding."""
session = None
try:
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create a session without ports
session = container_manager.create_session(
image_name="goose",
session_name=f"cubbi-test-no-ports-{uuid.uuid4().hex[:8]}",
mount_local=False, # Don't mount current directory
ports=[], # No ports
)
assert session is not None
assert session.status == "running"
# Verify no ports are mapped
assert isinstance(session.ports, dict)
assert len(session.ports) == 0
# Wait for container initialization to complete
init_success = wait_for_container_init(session.container_id)
assert init_success, "Container initialization timed out"
# Verify Docker has no port mappings
import docker
client = docker.from_env()
container = client.containers.get(session.container_id)
container_ports = container.attrs["NetworkSettings"]["Ports"]
# Should have no port mappings (empty dict or None values)
for port_spec, bindings in container_ports.items():
assert bindings is None or len(bindings) == 0
finally:
# Clean up the container (use kill for faster test cleanup)
if session and session.container_id:
container_manager.close_session(session.id, kill=True)
@requires_docker
def test_integration_session_create_with_single_port(isolate_cubbi_config):
"""Test creating a session with a single port forward."""
session = None
try:
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create a session with single port
session = container_manager.create_session(
image_name="goose",
session_name=f"cubbi-test-single-port-{uuid.uuid4().hex[:8]}",
mount_local=False, # Don't mount current directory
ports=[3000], # Single port
)
assert session is not None
assert session.status == "running"
# Verify single port is mapped
assert isinstance(session.ports, dict)
assert len(session.ports) == 1
assert 3000 in session.ports
assert isinstance(session.ports[3000], int)
assert session.ports[3000] > 0
# Wait for container initialization to complete
init_success = wait_for_container_init(session.container_id)
assert init_success, "Container initialization timed out"
client = docker.from_env()
container = client.containers.get(session.container_id)
container_ports = container.attrs["NetworkSettings"]["Ports"]
# Should have exactly one port mapping
port_mappings = {
k: v for k, v in container_ports.items() if v is not None and len(v) > 0
}
assert len(port_mappings) == 1
assert "3000/tcp" in port_mappings
finally:
# Clean up the container (use kill for faster test cleanup)
if session and session.container_id:
container_manager.close_session(session.id, kill=True)
@requires_docker
def test_integration_kill_vs_stop_speed(isolate_cubbi_config):
"""Test that kill is faster than stop for container termination."""
import time
# Get the isolated container manager
container_manager = isolate_cubbi_config["container_manager"]
# Create two identical sessions for comparison
session_stop = None
session_kill = None
try:
# Create first session (will be stopped gracefully)
session_stop = container_manager.create_session(
image_name="goose",
session_name=f"cubbi-test-stop-{uuid.uuid4().hex[:8]}",
mount_local=False,
ports=[],
)
# Create second session (will be killed)
session_kill = container_manager.create_session(
image_name="goose",
session_name=f"cubbi-test-kill-{uuid.uuid4().hex[:8]}",
mount_local=False,
ports=[],
)
assert session_stop is not None
assert session_kill is not None
# Wait for both containers to initialize
init_success_stop = wait_for_container_init(session_stop.container_id)
init_success_kill = wait_for_container_init(session_kill.container_id)
assert init_success_stop, "Stop test container initialization timed out"
assert init_success_kill, "Kill test container initialization timed out"
# Time graceful stop
start_time = time.time()
container_manager.close_session(session_stop.id, kill=False)
stop_time = time.time() - start_time
session_stop = None # Mark as cleaned up
# Time kill
start_time = time.time()
container_manager.close_session(session_kill.id, kill=True)
kill_time = time.time() - start_time
session_kill = None # Mark as cleaned up
# Kill should be faster than stop (usually by several seconds)
# We use a generous threshold since system performance can vary
assert (
kill_time < stop_time
), f"Kill ({kill_time:.2f}s) should be faster than stop ({stop_time:.2f}s)"
# Verify both methods successfully closed the containers
# (containers should no longer be in the session list)
remaining_sessions = container_manager.list_sessions()
session_ids = [s.id for s in remaining_sessions]
        assert (session_stop.id if session_stop else "stop-session") not in session_ids
        assert (session_kill.id if session_kill else "kill-session") not in session_ids
finally:
# Clean up any remaining containers
if session_stop and session_stop.container_id:
try:
container_manager.close_session(session_stop.id, kill=True)
except Exception:
pass
if session_kill and session_kill.container_id:
try:
container_manager.close_session(session_kill.id, kill=True)
except Exception:
pass
container_manager.close_session(session.id)

View File

@@ -21,7 +21,7 @@ def test_mcp_list_empty(cli_runner, patched_config_manager):
assert "No MCP servers configured" in result.stdout
def test_mcp_add_remote(cli_runner, isolate_cubbi_config):
def test_mcp_add_remote(cli_runner, patched_config_manager):
"""Test adding a remote MCP server and listing it."""
# Add a remote MCP server
result = cli_runner.invoke(
@@ -49,7 +49,7 @@ def test_mcp_add_remote(cli_runner, isolate_cubbi_config):
assert "http://mcp-se" in result.stdout # Truncated in table view
def test_mcp_add(cli_runner, isolate_cubbi_config):
def test_mcp_add(cli_runner, patched_config_manager):
"""Test adding a proxy-based MCP server and listing it."""
# Add a Docker MCP server
result = cli_runner.invoke(
@@ -93,212 +93,21 @@ def test_mcp_remove(cli_runner, patched_config_manager):
],
)
# Mock the container_manager.list_sessions to return sessions without MCPs
with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
mock_list_sessions.return_value = []
# Mock the remove_mcp method
with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
# Make remove_mcp return True (successful removal)
mock_remove_mcp.return_value = True
# Mock the get_mcp and remove_mcp methods
with patch("cubbi.cli.mcp_manager.get_mcp") as mock_get_mcp:
# First make get_mcp return our MCP
mock_get_mcp.return_value = {
"name": "test-mcp",
"type": "remote",
"url": "http://test-server.com/sse",
"headers": {"Authorization": "Bearer test-token"},
}
# Remove the MCP server
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
# Just check it ran successfully with exit code 0
assert result.exit_code == 0
assert "Removed MCP server 'test-mcp'" in result.stdout
def test_mcp_remove_with_active_sessions(cli_runner, patched_config_manager):
"""Test removing an MCP server that is used by active sessions."""
from cubbi.models import Session, SessionStatus
# Add a remote MCP server
patched_config_manager.set(
"mcps",
[
{
"name": "test-mcp",
"type": "remote",
"url": "http://test-server.com/sse",
"headers": {"Authorization": "Bearer test-token"},
}
],
)
# Create mock sessions that use the MCP
mock_sessions = [
Session(
id="session-1",
name="test-session-1",
image="goose",
status=SessionStatus.RUNNING,
container_id="container-1",
mcps=["test-mcp", "other-mcp"],
),
Session(
id="session-2",
name="test-session-2",
image="goose",
status=SessionStatus.RUNNING,
container_id="container-2",
mcps=["other-mcp"], # This one doesn't use test-mcp
),
Session(
id="session-3",
name="test-session-3",
image="goose",
status=SessionStatus.RUNNING,
container_id="container-3",
mcps=["test-mcp"], # This one uses test-mcp
),
]
# Mock the container_manager.list_sessions to return our sessions
with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
mock_list_sessions.return_value = mock_sessions
# Mock the remove_mcp method
with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
# Make remove_mcp return True (successful removal)
mock_remove_mcp.return_value = True
# Remove the MCP server
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
# Check it ran successfully with exit code 0
assert result.exit_code == 0
assert "Removed MCP server 'test-mcp'" in result.stdout
# Check warning about affected sessions
assert (
"Warning: Found 2 active sessions using MCP 'test-mcp'" in result.stdout
)
assert "session-1" in result.stdout
assert "session-3" in result.stdout
# session-2 should not be mentioned since it doesn't use test-mcp
assert "session-2" not in result.stdout
def test_mcp_remove_nonexistent(cli_runner, patched_config_manager):
"""Test removing a non-existent MCP server."""
# No MCPs configured
patched_config_manager.set("mcps", [])
# Mock the container_manager.list_sessions to return empty list
with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
mock_list_sessions.return_value = []
# Mock the remove_mcp method to return False (MCP not found)
with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
mock_remove_mcp.return_value = False
# Try to remove a non-existent MCP server
result = cli_runner.invoke(app, ["mcp", "remove", "nonexistent-mcp"])
# Check it ran successfully but reported not found
assert result.exit_code == 0
assert "MCP server 'nonexistent-mcp' not found" in result.stdout
def test_session_mcps_attribute():
"""Test that Session model has mcps attribute and can be populated correctly."""
from cubbi.models import Session, SessionStatus
# Test that Session can be created with mcps attribute
session = Session(
id="test-session",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
container_id="test-container",
mcps=["mcp1", "mcp2"],
)
assert session.mcps == ["mcp1", "mcp2"]
# Test that Session can be created with empty mcps list
session_empty = Session(
id="test-session-2",
name="test-session-2",
image="goose",
status=SessionStatus.RUNNING,
container_id="test-container-2",
)
assert session_empty.mcps == [] # Should default to empty list
def test_session_mcps_from_container_labels():
"""Test that Session mcps are correctly populated from container labels."""
from unittest.mock import Mock
from cubbi.container import ContainerManager
# Mock a container with MCP labels
mock_container = Mock()
mock_container.id = "test-container-id"
mock_container.status = "running"
mock_container.labels = {
"cubbi.session": "true",
"cubbi.session.id": "test-session",
"cubbi.session.name": "test-session-name",
"cubbi.image": "goose",
"cubbi.mcps": "mcp1,mcp2,mcp3", # Test with multiple MCPs
}
mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
# Mock Docker client
mock_client = Mock()
mock_client.containers.list.return_value = [mock_container]
# Create container manager with mocked client
with patch("cubbi.container.docker.from_env") as mock_docker:
mock_docker.return_value = mock_client
mock_client.ping.return_value = True
container_manager = ContainerManager()
sessions = container_manager.list_sessions()
assert len(sessions) == 1
session = sessions[0]
assert session.id == "test-session"
assert session.mcps == ["mcp1", "mcp2", "mcp3"]
def test_session_mcps_from_empty_container_labels():
"""Test that Session mcps are correctly handled when container has no MCP labels."""
from unittest.mock import Mock
from cubbi.container import ContainerManager
# Mock a container without MCP labels
mock_container = Mock()
mock_container.id = "test-container-id"
mock_container.status = "running"
mock_container.labels = {
"cubbi.session": "true",
"cubbi.session.id": "test-session",
"cubbi.session.name": "test-session-name",
"cubbi.image": "goose",
# No cubbi.mcps label
}
mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
# Mock Docker client
mock_client = Mock()
mock_client.containers.list.return_value = [mock_container]
# Create container manager with mocked client
with patch("cubbi.container.docker.from_env") as mock_docker:
mock_docker.return_value = mock_client
mock_client.ping.return_value = True
container_manager = ContainerManager()
sessions = container_manager.list_sessions()
assert len(sessions) == 1
session = sessions[0]
assert session.id == "test-session"
assert session.mcps == [] # Should be empty list when no MCPs
@pytest.mark.requires_docker
@@ -350,12 +159,10 @@ def test_mcp_status(cli_runner, patched_config_manager, mock_container_manager):
@pytest.mark.requires_docker
def test_mcp_start(cli_runner, isolate_cubbi_config):
def test_mcp_start(cli_runner, patched_config_manager, mock_container_manager):
"""Test starting an MCP server."""
mcp_manager = isolate_cubbi_config["mcp_manager"]
# Add a Docker MCP
isolate_cubbi_config["user_config"].set(
patched_config_manager.set(
"mcps",
[
{
@@ -367,15 +174,12 @@ def test_mcp_start(cli_runner, isolate_cubbi_config):
],
)
# Mock the start_mcp method to avoid actual Docker operations
with patch.object(
mcp_manager,
"start_mcp",
return_value={
# Mock the start operation
mock_container_manager.start_mcp.return_value = {
"container_id": "test-container-id",
"status": "running",
},
):
}
# Start the MCP
result = cli_runner.invoke(app, ["mcp", "start", "test-docker-mcp"])
@@ -385,12 +189,10 @@ def test_mcp_start(cli_runner, isolate_cubbi_config):
@pytest.mark.requires_docker
def test_mcp_stop(cli_runner, isolate_cubbi_config):
def test_mcp_stop(cli_runner, patched_config_manager, mock_container_manager):
"""Test stopping an MCP server."""
mcp_manager = isolate_cubbi_config["mcp_manager"]
# Add a Docker MCP
isolate_cubbi_config["user_config"].set(
patched_config_manager.set(
"mcps",
[
{
@@ -402,8 +204,9 @@ def test_mcp_stop(cli_runner, isolate_cubbi_config):
],
)
# Mock the stop_mcp method to avoid actual Docker operations
with patch.object(mcp_manager, "stop_mcp", return_value=True):
# Mock the stop operation
mock_container_manager.stop_mcp.return_value = True
# Stop the MCP
result = cli_runner.invoke(app, ["mcp", "stop", "test-docker-mcp"])
@@ -413,12 +216,10 @@ def test_mcp_stop(cli_runner, isolate_cubbi_config):
@pytest.mark.requires_docker
def test_mcp_restart(cli_runner, isolate_cubbi_config):
def test_mcp_restart(cli_runner, patched_config_manager, mock_container_manager):
"""Test restarting an MCP server."""
mcp_manager = isolate_cubbi_config["mcp_manager"]
# Add a Docker MCP
isolate_cubbi_config["user_config"].set(
patched_config_manager.set(
"mcps",
[
{
@@ -430,15 +231,12 @@ def test_mcp_restart(cli_runner, isolate_cubbi_config):
],
)
# Mock the restart_mcp method to avoid actual Docker operations
with patch.object(
mcp_manager,
"restart_mcp",
return_value={
# Mock the restart operation
mock_container_manager.restart_mcp.return_value = {
"container_id": "test-container-id",
"status": "running",
},
):
}
# Restart the MCP
result = cli_runner.invoke(app, ["mcp", "restart", "test-docker-mcp"])

View File

@@ -83,9 +83,7 @@ def test_session_close(cli_runner, mock_container_manager):
assert result.exit_code == 0
assert "closed successfully" in result.stdout
mock_container_manager.close_session.assert_called_once_with(
"test-session-id", kill=False
)
mock_container_manager.close_session.assert_called_once_with("test-session-id")
def test_session_close_all(cli_runner, mock_container_manager):
@@ -115,197 +113,6 @@ def test_session_close_all(cli_runner, mock_container_manager):
mock_container_manager.close_all_sessions.assert_called_once()
def test_session_create_with_ports(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session creation with port forwarding."""
from cubbi.models import Session, SessionStatus
# Mock the create_session to return a session with ports
mock_session = Session(
id="test-session-id",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
ports={8000: 32768, 3000: 32769},
)
mock_container_manager.create_session.return_value = mock_session
result = cli_runner.invoke(app, ["session", "create", "--port", "8000,3000"])
assert result.exit_code == 0
assert "Session created successfully" in result.stdout
assert "Forwarding ports: 8000, 3000" in result.stdout
# Verify create_session was called with correct ports
mock_container_manager.create_session.assert_called_once()
call_args = mock_container_manager.create_session.call_args
assert call_args.kwargs["ports"] == [8000, 3000]
def test_session_create_with_default_ports(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session creation using default ports."""
from cubbi.models import Session, SessionStatus
# Set up default ports
patched_config_manager.set("defaults.ports", [8080, 9000])
# Mock the create_session to return a session with ports
mock_session = Session(
id="test-session-id",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
ports={8080: 32768, 9000: 32769},
)
mock_container_manager.create_session.return_value = mock_session
result = cli_runner.invoke(app, ["session", "create"])
assert result.exit_code == 0
assert "Session created successfully" in result.stdout
assert "Forwarding ports: 8080, 9000" in result.stdout
# Verify create_session was called with default ports
mock_container_manager.create_session.assert_called_once()
call_args = mock_container_manager.create_session.call_args
assert call_args.kwargs["ports"] == [8080, 9000]
def test_session_create_combine_default_and_custom_ports(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session creation combining default and custom ports."""
from cubbi.models import Session, SessionStatus
# Set up default ports
patched_config_manager.set("defaults.ports", [8080])
# Mock the create_session to return a session with combined ports
mock_session = Session(
id="test-session-id",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
ports={8080: 32768, 3000: 32769},
)
mock_container_manager.create_session.return_value = mock_session
result = cli_runner.invoke(app, ["session", "create", "--port", "3000"])
assert result.exit_code == 0
assert "Session created successfully" in result.stdout
# Ports should be combined and deduplicated
assert "Forwarding ports:" in result.stdout
# Verify create_session was called with combined ports
mock_container_manager.create_session.assert_called_once()
call_args = mock_container_manager.create_session.call_args
# Should contain both default (8080) and custom (3000) ports
assert set(call_args.kwargs["ports"]) == {8080, 3000}
def test_session_create_invalid_port_format(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session creation with invalid port format."""
result = cli_runner.invoke(app, ["session", "create", "--port", "invalid"])
assert result.exit_code == 0
assert "Warning: Ignoring invalid port format" in result.stdout
# Session creation should continue with empty ports list (invalid port ignored)
mock_container_manager.create_session.assert_called_once()
call_args = mock_container_manager.create_session.call_args
assert call_args.kwargs["ports"] == [] # Invalid port should be ignored
def test_session_create_invalid_port_range(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session creation with port outside valid range."""
result = cli_runner.invoke(app, ["session", "create", "--port", "70000"])
assert result.exit_code == 0
assert "Error: Invalid ports [70000]" in result.stdout
# Session creation should not happen due to early return
mock_container_manager.create_session.assert_not_called()
def test_session_list_shows_ports(cli_runner, mock_container_manager):
"""Test that session list shows port mappings."""
from cubbi.models import Session, SessionStatus
mock_session = Session(
id="test-session-id",
name="test-session",
image="goose",
status=SessionStatus.RUNNING,
ports={8000: 32768, 3000: 32769},
)
mock_container_manager.list_sessions.return_value = [mock_session]
result = cli_runner.invoke(app, ["session", "list"])
assert result.exit_code == 0
assert "8000:32768" in result.stdout
assert "3000:32769" in result.stdout
def test_session_close_with_kill_flag(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session close with --kill flag."""
result = cli_runner.invoke(app, ["session", "close", "test-session-id", "--kill"])
assert result.exit_code == 0
# Verify close_session was called with kill=True
mock_container_manager.close_session.assert_called_once_with(
"test-session-id", kill=True
)
def test_session_close_all_with_kill_flag(
cli_runner, mock_container_manager, patched_config_manager
):
"""Test session close --all with --kill flag."""
from cubbi.models import Session, SessionStatus
# Mock some sessions to close
mock_sessions = [
Session(
id="session-1",
name="Session 1",
image="goose",
status=SessionStatus.RUNNING,
ports={},
),
Session(
id="session-2",
name="Session 2",
image="goose",
status=SessionStatus.RUNNING,
ports={},
),
]
mock_container_manager.list_sessions.return_value = mock_sessions
mock_container_manager.close_all_sessions.return_value = (2, True)
result = cli_runner.invoke(app, ["session", "close", "--all", "--kill"])
assert result.exit_code == 0
assert "2 sessions closed successfully" in result.stdout
# Verify close_all_sessions was called with kill=True
mock_container_manager.close_all_sessions.assert_called_once()
call_args = mock_container_manager.close_all_sessions.call_args
assert call_args.kwargs["kill"] is True
# For more complex tests that need actual Docker,
# we've implemented them in test_integration_docker.py
# They will run automatically if Docker is available

uv.lock generated
View File

@@ -1,5 +1,5 @@
version = 1
revision = 3
revision = 2
requires-python = ">=3.12"
[[package]]
@@ -78,7 +78,7 @@ wheels = [
[[package]]
name = "cubbi"
version = "0.4.0"
version = "0.2.0"
source = { editable = "." }
dependencies = [
{ name = "docker" },