6 Commits

Author SHA1 Message Date
Your Name
b173bcd08c Add missing image build 2025-06-26 16:30:51 -04:00
Your Name
96243a99e4 fix: resolve gemini-cli container initialization and testing issues
- Fix hardcoded paths in tests to use dynamic path resolution
- Update gemini-cli plugin to use actual username instead of hardcoded "cubbi"
- Simplify persistent configuration test to use ~ instead of absolute paths
- Remove unused imports and improve test reliability
- Ensure configuration files are created in correct user directories

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-26 16:24:11 -04:00
Your Name
ae20e6a455 feat: add Gemini CLI image with fixed user/group handling
- Add complete Gemini CLI container image for AI-powered development
- Support Google Gemini models (1.5 Pro, Flash) with configurable settings
- Include comprehensive plugin system for authentication and configuration
- Fix user/group creation conflicts with existing base image users
- Dynamic username handling for compatibility with node:20-slim base
- Persistent configuration for .config/gemini and .cache/gemini
- Test suite for Docker build, API key setup, and Cubbi integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-26 15:13:32 -04:00
e70ec3538b feat: add ripgrep and openssh-client in images (#15) 2025-06-24 19:05:30 -06:00
5fca51e515 feat: include new image opencode (#14)
* feat: include new image opencode

* docs: update readme
2025-06-20 02:09:12 +02:00
e5121ddea4 refactor: new image layout and organization (#13)
* refactor: rework how images are defined, so that wrappers for other tools can be created

* refactor: fix issues with ownership

* refactor: images now share information with other image types

* fix: update readme
2025-06-20 02:04:31 +02:00
26 changed files with 2792 additions and 410 deletions

View File

@@ -30,10 +30,11 @@ jobs:
- name: Install all dependencies
run: uv sync --frozen --all-extras --all-groups
- name: Build goose image
- name: Build required images
run: |
uv tool install --with-editable . .
cubbi image build goose
cubbi image build gemini-cli
- name: Tests
run: |

View File

@@ -42,6 +42,7 @@ Then compile your first image:
```bash
cubbi image build goose
cubbi image build opencode
```
### For Developers
@@ -81,6 +82,7 @@ cubbi session close SESSION_ID
# Create a session with a specific image
cubbix --image goose
cubbix --image opencode
# Create a session with environment variables
cubbix -e VAR1=value1 -e VAR2=value2
@@ -131,12 +133,11 @@ cubbi image list
# Get detailed information about an image
cubbi image info goose
cubbi image info opencode
# Build an image
cubbi image build goose
# Build and push an image
cubbi image build goose --push
cubbi image build opencode
```
Images are defined in the `cubbi/images/` directory, with each subdirectory containing:
@@ -144,7 +145,7 @@ Images are defined in the `cubbi/images/` directory, with each subdirectory cont
- `Dockerfile`: Docker image definition
- `entrypoint.sh`: Container entrypoint script
- `cubbi-init.sh`: Standardized initialization script
- `cubbi-image.yaml`: Image metadata and configuration
- `cubbi_image.yaml`: Image metadata and configuration
- `README.md`: Image documentation
Cubbi automatically discovers and loads image definitions from the YAML files.
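
The discovery step is small enough to sketch: each subdirectory of `cubbi/images/` is checked for a `cubbi_image.yaml`, which is parsed and registered. The loop below is an illustration only (function and path names are assumptions, not the actual `ConfigManager` API), using `ruamel.yaml` as `cubbi_init.py` does:

```python
# Illustrative discovery loop -- not the real ConfigManager implementation.
from pathlib import Path

from ruamel.yaml import YAML  # same YAML library cubbi_init.py depends on

IMAGES_DIR = Path("cubbi/images")  # assumed location of built-in images


def discover_images(images_dir: Path = IMAGES_DIR) -> dict:
    """Return {image name: parsed cubbi_image.yaml} for every image directory."""
    yaml = YAML(typ="safe")
    images = {}
    for image_dir in images_dir.iterdir():
        config_path = image_dir / "cubbi_image.yaml"
        if image_dir.is_dir() and config_path.exists():
            with open(config_path) as f:
                data = yaml.load(f)
            images[data["name"]] = data
    return images


if __name__ == "__main__":
    for name, meta in discover_images().items():
        print(f"{name}: {meta.get('description', '')}")
```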

View File

@@ -4,6 +4,9 @@ CLI for Cubbi Container Tool.
import logging
import os
import shutil
import tempfile
from pathlib import Path
from typing import List, Optional
import typer
@@ -45,9 +48,7 @@ mcp_manager = MCPManager(config_manager=user_config)
@app.callback()
def main(
ctx: typer.Context,
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
"""Cubbi Container Tool
@@ -167,14 +168,12 @@ def create_session(
gid: Optional[int] = typer.Option(
None, "--gid", help="Group ID to run the container as (defaults to host user)"
),
model: Optional[str] = typer.Option(None, "--model", "-m", help="Model to use"),
model: Optional[str] = typer.Option(None, "--model", help="Model to use"),
provider: Optional[str] = typer.Option(
None, "--provider", "-p", help="Provider to use"
),
ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
"""Create a new Cubbi session
@@ -510,9 +509,60 @@ def build_image(
# Build image name
docker_image_name = f"monadical/cubbi-{image_name}:{tag}"
# Build the image
# Create temporary build directory
with tempfile.TemporaryDirectory() as temp_dir:
temp_path = Path(temp_dir)
console.print(f"Using temporary build directory: {temp_path}")
try:
# Copy all files from the image directory to temp directory
for item in image_path.iterdir():
if item.is_file():
shutil.copy2(item, temp_path / item.name)
elif item.is_dir():
shutil.copytree(item, temp_path / item.name)
# Copy shared cubbi_init.py to temp directory
shared_init_path = Path(__file__).parent / "images" / "cubbi_init.py"
if shared_init_path.exists():
shutil.copy2(shared_init_path, temp_path / "cubbi_init.py")
console.print("Copied shared cubbi_init.py to build context")
else:
console.print(
f"[yellow]Warning: Shared cubbi_init.py not found at {shared_init_path}[/yellow]"
)
# Copy shared init-status.sh to temp directory
shared_status_path = Path(__file__).parent / "images" / "init-status.sh"
if shared_status_path.exists():
shutil.copy2(shared_status_path, temp_path / "init-status.sh")
console.print("Copied shared init-status.sh to build context")
else:
console.print(
f"[yellow]Warning: Shared init-status.sh not found at {shared_status_path}[/yellow]"
)
# Copy image-specific plugin if it exists
plugin_path = image_path / f"{image_name.lower()}_plugin.py"
if plugin_path.exists():
shutil.copy2(plugin_path, temp_path / f"{image_name.lower()}_plugin.py")
console.print(f"Copied {image_name.lower()}_plugin.py to build context")
# Copy init-status.sh if it exists (for backward compatibility with shell connection)
init_status_path = image_path / "init-status.sh"
if init_status_path.exists():
shutil.copy2(init_status_path, temp_path / "init-status.sh")
console.print("Copied init-status.sh to build context")
# Build the image from temporary directory
with console.status(f"Building image {docker_image_name}..."):
result = os.system(f"cd {image_path} && docker build -t {docker_image_name} .")
result = os.system(
f"cd {temp_path} && docker build -t {docker_image_name} ."
)
except Exception as e:
console.print(f"[red]Error preparing build context: {e}[/red]")
return
if result != 0:
console.print("[red]Failed to build image[/red]")
@@ -1061,9 +1111,7 @@ def mcp_status(name: str = typer.Argument(..., help="MCP server name")) -> None:
def start_mcp(
name: Optional[str] = typer.Argument(None, help="MCP server name"),
all_servers: bool = typer.Option(False, "--all", help="Start all MCP servers"),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
) -> None:
"""Start an MCP server or all servers"""
# Set log level based on verbose flag

View File

@@ -108,7 +108,7 @@ class ConfigManager:
def load_image_from_dir(self, image_dir: Path) -> Optional[Image]:
"""Load an image configuration from a directory"""
# Check for image config file
yaml_path = image_dir / "cubbi-image.yaml"
yaml_path = image_dir / "cubbi_image.yaml"
if not yaml_path.exists():
return None
@@ -150,7 +150,7 @@ class ConfigManager:
if not BUILTIN_IMAGES_DIR.exists():
return images
# Search for cubbi-image.yaml files in each subdirectory
# Search for cubbi_image.yaml files in each subdirectory
for image_dir in BUILTIN_IMAGES_DIR.iterdir():
if image_dir.is_dir():
image = self.load_image_from_dir(image_dir)

View File

@@ -548,9 +548,6 @@ class ContainerManager:
# Connect the container to the network with session name as an alias
network.connect(container, aliases=[session_name])
print(
f"Connected to network: {network_name} with alias: {session_name}"
)
except DockerException as e:
print(f"Error connecting to network {network_name}: {e}")
@@ -571,10 +568,13 @@ class ContainerManager:
print(
f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
)
except DockerException as e:
print(
f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
)
except DockerException:
# print(
# f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
# )
# commented out, may be accessible through another attached network, it's
# not mandatory here.
pass
except Exception as e:
print(f"Error connecting session to MCP '{mcp_name}': {e}")
@@ -604,9 +604,6 @@ class ContainerManager:
# Connect the container to the network with session name as an alias
network.connect(container, aliases=[session_name])
print(
f"Connected to network: {network_name} with alias: {session_name}"
)
except DockerException as e:
print(f"Error connecting to network {network_name}: {e}")

View File

@@ -1,3 +0,0 @@
"""
MAI container image management
"""

cubbi/images/cubbi_init.py Executable file
View File

@@ -0,0 +1,692 @@
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = ["ruamel.yaml"]
# ///
"""
Standalone Cubbi initialization script
This is a self-contained script that includes all the necessary initialization
logic without requiring the full cubbi package to be installed.
"""
import grp
import importlib.util
import os
import pwd
import shutil
import subprocess
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List
from ruamel.yaml import YAML
# Status Management
class StatusManager:
"""Manages initialization status and logging"""
def __init__(
self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status"
):
self.log_file = Path(log_file)
self.status_file = Path(status_file)
self._setup_logging()
def _setup_logging(self) -> None:
"""Set up logging to both stdout and log file"""
self.log_file.touch(exist_ok=True)
self.set_status(False)
def log(self, message: str, level: str = "INFO") -> None:
"""Log a message with timestamp"""
print(message)
sys.stdout.flush()
with open(self.log_file, "a") as f:
f.write(message + "\n")
f.flush()
def set_status(self, complete: bool) -> None:
"""Set initialization completion status"""
status = "true" if complete else "false"
with open(self.status_file, "w") as f:
f.write(f"INIT_COMPLETE={status}\n")
def start_initialization(self) -> None:
"""Mark initialization as started"""
self.set_status(False)
def complete_initialization(self) -> None:
"""Mark initialization as completed"""
self.set_status(True)
# Configuration Management
@dataclass
class PersistentConfig:
"""Persistent configuration mapping"""
source: str
target: str
type: str = "directory"
description: str = ""
@dataclass
class ImageConfig:
"""Cubbi image configuration"""
name: str
description: str
version: str
maintainer: str
image: str
persistent_configs: List[PersistentConfig] = field(default_factory=list)
class ConfigParser:
"""Parses Cubbi image configuration and environment variables"""
def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"):
self.config_file = Path(config_file)
self.environment: Dict[str, str] = dict(os.environ)
def load_image_config(self) -> ImageConfig:
"""Load and parse the cubbi_image.yaml configuration"""
if not self.config_file.exists():
raise FileNotFoundError(f"Configuration file not found: {self.config_file}")
yaml = YAML(typ="safe")
with open(self.config_file, "r") as f:
config_data = yaml.load(f)
# Parse persistent configurations
persistent_configs = []
for pc_data in config_data.get("persistent_configs", []):
persistent_configs.append(PersistentConfig(**pc_data))
return ImageConfig(
name=config_data["name"],
description=config_data["description"],
version=config_data["version"],
maintainer=config_data["maintainer"],
image=config_data["image"],
persistent_configs=persistent_configs,
)
def get_cubbi_config(self) -> Dict[str, Any]:
"""Get standard Cubbi configuration from environment"""
return {
"user_id": int(self.environment.get("CUBBI_USER_ID", "1000")),
"group_id": int(self.environment.get("CUBBI_GROUP_ID", "1000")),
"run_command": self.environment.get("CUBBI_RUN_COMMAND"),
"no_shell": self.environment.get("CUBBI_NO_SHELL", "false").lower()
== "true",
"config_dir": self.environment.get("CUBBI_CONFIG_DIR", "/cubbi-config"),
"persistent_links": self.environment.get("CUBBI_PERSISTENT_LINKS", ""),
}
def get_mcp_config(self) -> Dict[str, Any]:
"""Get MCP server configuration from environment"""
mcp_count = int(self.environment.get("MCP_COUNT", "0"))
mcp_servers = []
for idx in range(mcp_count):
server = {
"name": self.environment.get(f"MCP_{idx}_NAME"),
"type": self.environment.get(f"MCP_{idx}_TYPE"),
"host": self.environment.get(f"MCP_{idx}_HOST"),
"url": self.environment.get(f"MCP_{idx}_URL"),
}
if server["name"]: # Only add if name is present
mcp_servers.append(server)
return {"count": mcp_count, "servers": mcp_servers}
# Core Management Classes
class UserManager:
"""Manages user and group creation/modification in containers"""
def __init__(self, status: StatusManager):
self.status = status
self.username = "cubbi"
def _run_command(self, cmd: list[str]) -> bool:
"""Run a system command and log the result"""
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
if result.stdout:
self.status.log(f"Command output: {result.stdout.strip()}")
return True
except subprocess.CalledProcessError as e:
self.status.log(f"Command failed: {' '.join(cmd)}", "ERROR")
self.status.log(f"Error: {e.stderr}", "ERROR")
return False
def setup_user_and_group(self, user_id: int, group_id: int) -> bool:
"""Set up user and group with specified IDs"""
self.status.log(
f"Setting up user '{self.username}' with UID: {user_id}, GID: {group_id}"
)
# Handle group creation/modification
try:
existing_group = grp.getgrnam(self.username)
if existing_group.gr_gid != group_id:
self.status.log(
f"Modifying group '{self.username}' GID from {existing_group.gr_gid} to {group_id}"
)
if not self._run_command(
["groupmod", "-g", str(group_id), self.username]
):
return False
except KeyError:
if not self._run_command(["groupadd", "-g", str(group_id), self.username]):
return False
# Handle user creation/modification
try:
existing_user = pwd.getpwnam(self.username)
if existing_user.pw_uid != user_id or existing_user.pw_gid != group_id:
self.status.log(
f"Modifying user '{self.username}' UID from {existing_user.pw_uid} to {user_id}, GID from {existing_user.pw_gid} to {group_id}"
)
if not self._run_command(
[
"usermod",
"--uid",
str(user_id),
"--gid",
str(group_id),
self.username,
]
):
return False
except KeyError:
if not self._run_command(
[
"useradd",
"--shell",
"/bin/bash",
"--uid",
str(user_id),
"--gid",
str(group_id),
"--no-create-home",
self.username,
]
):
return False
return True
class DirectoryManager:
"""Manages directory creation and permission setup"""
def __init__(self, status: StatusManager):
self.status = status
def create_directory(
self, path: str, user_id: int, group_id: int, mode: int = 0o755
) -> bool:
"""Create a directory with proper ownership and permissions"""
dir_path = Path(path)
try:
dir_path.mkdir(parents=True, exist_ok=True)
os.chown(path, user_id, group_id)
dir_path.chmod(mode)
self.status.log(f"Created directory: {path}")
return True
except Exception as e:
self.status.log(
f"Failed to create/configure directory {path}: {e}", "ERROR"
)
return False
def setup_standard_directories(self, user_id: int, group_id: int) -> bool:
"""Set up standard Cubbi directories"""
directories = [
("/app", 0o755),
("/cubbi-config", 0o755),
("/cubbi-config/home", 0o755),
]
self.status.log("Setting up standard directories")
success = True
for dir_path, mode in directories:
if not self.create_directory(dir_path, user_id, group_id, mode):
success = False
# Create /home/cubbi as a symlink to /cubbi-config/home
try:
home_cubbi = Path("/home/cubbi")
if home_cubbi.exists() or home_cubbi.is_symlink():
home_cubbi.unlink()
self.status.log("Creating /home/cubbi as symlink to /cubbi-config/home")
home_cubbi.symlink_to("/cubbi-config/home")
os.lchown("/home/cubbi", user_id, group_id)
except Exception as e:
self.status.log(f"Failed to create home directory symlink: {e}", "ERROR")
success = False
# Create .local directory in the persistent home
local_dir = Path("/cubbi-config/home/.local")
if not self.create_directory(str(local_dir), user_id, group_id, 0o755):
success = False
# Copy /root/.local/bin to user's home if it exists
root_local_bin = Path("/root/.local/bin")
if root_local_bin.exists():
user_local_bin = Path("/cubbi-config/home/.local/bin")
try:
user_local_bin.mkdir(parents=True, exist_ok=True)
for item in root_local_bin.iterdir():
if item.is_file():
shutil.copy2(item, user_local_bin / item.name)
elif item.is_dir():
shutil.copytree(
item, user_local_bin / item.name, dirs_exist_ok=True
)
self._chown_recursive(user_local_bin, user_id, group_id)
self.status.log("Copied /root/.local/bin to user directory")
except Exception as e:
self.status.log(f"Failed to copy /root/.local/bin: {e}", "ERROR")
success = False
return success
def _chown_recursive(self, path: Path, user_id: int, group_id: int) -> None:
"""Recursively change ownership of a directory"""
try:
os.chown(path, user_id, group_id)
for item in path.iterdir():
if item.is_dir():
self._chown_recursive(item, user_id, group_id)
else:
os.chown(item, user_id, group_id)
except Exception as e:
self.status.log(
f"Warning: Could not change ownership of {path}: {e}", "WARNING"
)
class ConfigManager:
"""Manages persistent configuration symlinks and mappings"""
def __init__(self, status: StatusManager):
self.status = status
def create_symlink(
self, source_path: str, target_path: str, user_id: int, group_id: int
) -> bool:
"""Create a symlink with proper ownership"""
try:
source = Path(source_path)
parent_dir = source.parent
if not parent_dir.exists():
self.status.log(f"Creating parent directory: {parent_dir}")
parent_dir.mkdir(parents=True, exist_ok=True)
os.chown(parent_dir, user_id, group_id)
self.status.log(f"Creating symlink: {source_path} -> {target_path}")
if source.is_symlink() or source.exists():
source.unlink()
source.symlink_to(target_path)
os.lchown(source_path, user_id, group_id)
return True
except Exception as e:
self.status.log(
f"Failed to create symlink {source_path} -> {target_path}: {e}", "ERROR"
)
return False
def _ensure_target_directory(
self, target_path: str, user_id: int, group_id: int
) -> bool:
"""Ensure the target directory exists with proper ownership"""
try:
target_dir = Path(target_path)
if not target_dir.exists():
self.status.log(f"Creating target directory: {target_path}")
target_dir.mkdir(parents=True, exist_ok=True)
# Set ownership of the target directory to cubbi user
os.chown(target_path, user_id, group_id)
self.status.log(f"Set ownership of {target_path} to {user_id}:{group_id}")
return True
except Exception as e:
self.status.log(
f"Failed to ensure target directory {target_path}: {e}", "ERROR"
)
return False
def setup_persistent_configs(
self, persistent_configs: List[PersistentConfig], user_id: int, group_id: int
) -> bool:
"""Set up persistent configuration symlinks from image config"""
if not persistent_configs:
self.status.log("No persistent configurations defined in image config")
return True
success = True
for config in persistent_configs:
# Ensure target directory exists with proper ownership
if not self._ensure_target_directory(config.target, user_id, group_id):
success = False
continue
if not self.create_symlink(config.source, config.target, user_id, group_id):
success = False
return success
class CommandManager:
"""Manages command execution and user switching"""
def __init__(self, status: StatusManager):
self.status = status
self.username = "cubbi"
def run_as_user(self, command: List[str], user: str = None) -> int:
"""Run a command as the specified user using gosu"""
if user is None:
user = self.username
full_command = ["gosu", user] + command
self.status.log(f"Executing as {user}: {' '.join(command)}")
try:
result = subprocess.run(full_command, check=False)
return result.returncode
except Exception as e:
self.status.log(f"Failed to execute command: {e}", "ERROR")
return 1
def run_user_command(self, command: str) -> int:
"""Run user-specified command as cubbi user"""
if not command:
return 0
self.status.log(f"Executing user command: {command}")
return self.run_as_user(["sh", "-c", command])
def exec_as_user(self, args: List[str]) -> None:
"""Execute the final command as cubbi user (replaces current process)"""
if not args:
args = ["tail", "-f", "/dev/null"]
self.status.log(
f"Switching to user '{self.username}' and executing: {' '.join(args)}"
)
try:
os.execvp("gosu", ["gosu", self.username] + args)
except Exception as e:
self.status.log(f"Failed to exec as user: {e}", "ERROR")
sys.exit(1)
# Tool Plugin System
class ToolPlugin(ABC):
"""Base class for tool-specific initialization plugins"""
def __init__(self, status: StatusManager, config: Dict[str, Any]):
self.status = status
self.config = config
@property
@abstractmethod
def tool_name(self) -> str:
"""Return the name of the tool this plugin supports"""
pass
@abstractmethod
def initialize(self) -> bool:
"""Main tool initialization logic"""
pass
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate with available MCP servers"""
return True
# Main Initializer
class CubbiInitializer:
"""Main Cubbi initialization orchestrator"""
def __init__(self):
self.status = StatusManager()
self.config_parser = ConfigParser()
self.user_manager = UserManager(self.status)
self.directory_manager = DirectoryManager(self.status)
self.config_manager = ConfigManager(self.status)
self.command_manager = CommandManager(self.status)
def run_initialization(self, final_args: List[str]) -> None:
"""Run the complete initialization process"""
try:
self.status.start_initialization()
# Load configuration
image_config = self.config_parser.load_image_config()
cubbi_config = self.config_parser.get_cubbi_config()
mcp_config = self.config_parser.get_mcp_config()
self.status.log(f"Initializing {image_config.name} v{image_config.version}")
# Core initialization
success = self._run_core_initialization(image_config, cubbi_config)
if not success:
self.status.log("Core initialization failed", "ERROR")
sys.exit(1)
# Tool-specific initialization
success = self._run_tool_initialization(
image_config, cubbi_config, mcp_config
)
if not success:
self.status.log("Tool initialization failed", "ERROR")
sys.exit(1)
# Mark complete
self.status.complete_initialization()
# Handle commands
self._handle_command_execution(cubbi_config, final_args)
except Exception as e:
self.status.log(f"Initialization failed with error: {e}", "ERROR")
sys.exit(1)
def _run_core_initialization(self, image_config, cubbi_config) -> bool:
"""Run core Cubbi initialization steps"""
user_id = cubbi_config["user_id"]
group_id = cubbi_config["group_id"]
if not self.user_manager.setup_user_and_group(user_id, group_id):
return False
if not self.directory_manager.setup_standard_directories(user_id, group_id):
return False
config_path = Path(cubbi_config["config_dir"])
if not config_path.exists():
self.status.log(f"Creating config directory: {cubbi_config['config_dir']}")
try:
config_path.mkdir(parents=True, exist_ok=True)
os.chown(cubbi_config["config_dir"], user_id, group_id)
except Exception as e:
self.status.log(f"Failed to create config directory: {e}", "ERROR")
return False
if not self.config_manager.setup_persistent_configs(
image_config.persistent_configs, user_id, group_id
):
return False
return True
def _run_tool_initialization(self, image_config, cubbi_config, mcp_config) -> bool:
"""Run tool-specific initialization"""
# Look for a tool-specific plugin file in the same directory
plugin_name = image_config.name.lower().replace("-", "_")
plugin_file = Path(__file__).parent / f"{plugin_name}_plugin.py"
if not plugin_file.exists():
self.status.log(
f"No tool-specific plugin found at {plugin_file}, skipping tool initialization"
)
return True
try:
# Dynamically load the plugin module
spec = importlib.util.spec_from_file_location(
f"{image_config.name.lower()}_plugin", plugin_file
)
plugin_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(plugin_module)
# Find the plugin class (should inherit from ToolPlugin)
plugin_class = None
for attr_name in dir(plugin_module):
attr = getattr(plugin_module, attr_name)
if (
isinstance(attr, type)
and hasattr(attr, "tool_name")
and hasattr(attr, "initialize")
and attr_name != "ToolPlugin"
): # Skip the base class
plugin_class = attr
break
if not plugin_class:
self.status.log(
f"No valid plugin class found in {plugin_file}", "ERROR"
)
return False
# Instantiate and run the plugin
plugin = plugin_class(
self.status,
{
"image_config": image_config,
"cubbi_config": cubbi_config,
"mcp_config": mcp_config,
},
)
self.status.log(f"Running {plugin.tool_name}-specific initialization")
if not plugin.initialize():
self.status.log(f"{plugin.tool_name} initialization failed", "ERROR")
return False
if not plugin.integrate_mcp_servers(mcp_config):
self.status.log(f"{plugin.tool_name} MCP integration failed", "ERROR")
return False
return True
except Exception as e:
self.status.log(
f"Failed to load or execute plugin {plugin_file}: {e}", "ERROR"
)
return False
def _handle_command_execution(self, cubbi_config, final_args):
"""Handle command execution"""
exit_code = 0
if cubbi_config["run_command"]:
self.status.log("--- Executing initial command ---")
exit_code = self.command_manager.run_user_command(
cubbi_config["run_command"]
)
self.status.log(
f"--- Initial command finished (exit code: {exit_code}) ---"
)
if cubbi_config["no_shell"]:
self.status.log(
"--- CUBBI_NO_SHELL=true, exiting container without starting shell ---"
)
sys.exit(exit_code)
self.command_manager.exec_as_user(final_args)
def main() -> int:
"""Main CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="Cubbi container initialization script",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
This script initializes a Cubbi container environment by:
1. Setting up user and group with proper IDs
2. Creating standard directories with correct permissions
3. Setting up persistent configuration symlinks
4. Running tool-specific initialization if available
5. Executing user commands or starting an interactive shell
Environment Variables:
CUBBI_USER_ID User ID for the cubbi user (default: 1000)
CUBBI_GROUP_ID Group ID for the cubbi user (default: 1000)
CUBBI_RUN_COMMAND Initial command to run before shell
CUBBI_NO_SHELL Exit after run command instead of starting shell
CUBBI_CONFIG_DIR Configuration directory path (default: /cubbi-config)
MCP_COUNT Number of MCP servers to configure
MCP_<N>_NAME Name of MCP server N
MCP_<N>_TYPE Type of MCP server N
MCP_<N>_HOST Host of MCP server N
MCP_<N>_URL URL of MCP server N
Examples:
cubbi_init.py # Initialize and start bash shell
cubbi_init.py --help # Show this help message
cubbi_init.py /bin/zsh # Initialize and start zsh shell
cubbi_init.py python script.py # Initialize and run python script
""",
)
parser.add_argument(
"command",
nargs="*",
help="Command to execute after initialization (default: interactive shell)",
)
# Parse known args to handle cases where the command might have its own arguments
args, unknown = parser.parse_known_args()
# Combine parsed command with unknown args
final_args = args.command + unknown
# Handle the common case where docker CMD passes ["tail", "-f", "/dev/null"]
# This should be treated as "no specific command" (empty args)
if final_args == ["tail", "-f", "/dev/null"]:
final_args = []
initializer = CubbiInitializer()
initializer.run_initialization(final_args)
return 0
if __name__ == "__main__":
sys.exit(main())
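
For reference, a minimal plugin that this loader would pick up could look like the sketch below. The file name `mytool_plugin.py` and the class are hypothetical; the loader only requires a class (other than the base `ToolPlugin`) exposing `tool_name` and `initialize()`, placed next to `cubbi_init.py`:

```python
# mytool_plugin.py -- hypothetical example, not part of this change.
# cubbi_init.py loads "<image name>_plugin.py" from its own directory and
# instantiates the first class that exposes tool_name and initialize().
from typing import Any, Dict

from cubbi_init import ToolPlugin


class MyToolPlugin(ToolPlugin):
    @property
    def tool_name(self) -> str:
        return "mytool"

    def initialize(self) -> bool:
        cubbi_config: Dict[str, Any] = self.config["cubbi_config"]
        self.status.log(
            f"Configuring mytool for UID {cubbi_config['user_id']}"
        )
        # Write tool-specific config files here; return False to abort startup.
        return True
```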

View File

@@ -0,0 +1,68 @@
FROM node:20-slim
LABEL maintainer="team@monadical.com"
LABEL description="Google Gemini CLI for Cubbi"
# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
passwd \
bash \
curl \
bzip2 \
iputils-ping \
iproute2 \
libxcb1 \
libdbus-1-3 \
nano \
tmux \
git-core \
ripgrep \
openssh-client \
vim \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Install uv (Python package manager) for cubbi_init.py compatibility
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
sh install.sh && \
mv /root/.local/bin/uv /usr/local/bin/uv && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install Gemini CLI globally
RUN npm install -g @google/gemini-cli
# Verify installation
RUN gemini --version
# Create app directory
WORKDIR /app
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY gemini_cli_plugin.py /cubbi/gemini_cli_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
# Add init status check to bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help
# Set WORKDIR to /app
WORKDIR /app
ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

View File

@@ -0,0 +1,339 @@
# Google Gemini CLI for Cubbi
This image provides Google Gemini CLI in a Cubbi container environment.
## Overview
Google Gemini CLI is an AI-powered development tool that allows you to query and edit large codebases, generate applications from PDFs/sketches, automate operational tasks, and integrate with media generation tools using Google's Gemini models.
## Features
- **Advanced AI Models**: Access to Gemini 1.5 Pro, Flash, and other Google AI models
- **Codebase Analysis**: Query and edit large codebases intelligently
- **Multi-modal Support**: Work with text, images, PDFs, and sketches
- **Google Search Grounding**: Ground queries using Google Search for up-to-date information
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Project Integration**: Seamless integration with existing projects
## Quick Start
### 1. Set up API Key
```bash
# For Google AI (recommended)
uv run -m cubbi.cli config set services.google.api_key "your-gemini-api-key"
# Alternative using GEMINI_API_KEY
uv run -m cubbi.cli config set services.gemini.api_key "your-gemini-api-key"
```
Get your API key from [Google AI Studio](https://aistudio.google.com/apikey).
### 2. Run Gemini CLI Environment
```bash
# Start Gemini CLI container with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/your/project
# Or without a project
uv run -m cubbi.cli session create --image gemini-cli
```
### 3. Use Gemini CLI
```bash
# Basic usage
gemini
# Interactive mode with specific query
gemini
> Write me a Discord bot that answers questions using a FAQ.md file
# Analyze existing project
gemini
> Give me a summary of all changes that went in yesterday
# Generate from sketch/PDF
gemini
> Create a web app based on this wireframe.png
```
## Configuration
### Supported API Keys
- `GEMINI_API_KEY`: Google AI API key for Gemini models
- `GOOGLE_API_KEY`: Alternative Google API key (compatibility)
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to Google Cloud service account JSON file
### Model Configuration
- `GEMINI_MODEL`: Default model (default: "gemini-1.5-pro")
- Available: "gemini-1.5-pro", "gemini-1.5-flash", "gemini-1.0-pro"
- `GEMINI_TEMPERATURE`: Model temperature 0.0-2.0 (default: 0.7)
- `GEMINI_MAX_TOKENS`: Maximum tokens in response
### Advanced Configuration
- `GEMINI_SEARCH_ENABLED`: Enable Google Search grounding (true/false, default: false)
- `GEMINI_DEBUG`: Enable debug mode (true/false, default: false)
- `GCLOUD_PROJECT`: Google Cloud project ID
### Network Configuration
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
## Usage Examples
### Basic AI Development
```bash
# Start Gemini CLI with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/project
# Inside the container:
gemini # Start interactive session
```
### Codebase Analysis
```bash
# Analyze changes
gemini
> What are the main functions in src/main.py?
# Code generation
gemini
> Add error handling to the authentication module
# Documentation
gemini
> Generate README documentation for this project
```
### Multi-modal Development
```bash
# Work with images
gemini
> Analyze this architecture diagram and suggest improvements
# PDF processing
gemini
> Convert this API specification PDF to OpenAPI YAML
# Sketch to code
gemini
> Create a React component based on this UI mockup
```
### Advanced Features
```bash
# With Google Search grounding
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_SEARCH_ENABLED="true" \
/path/to/project
# With specific model
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_MODEL="gemini-1.5-flash" \
--env GEMINI_TEMPERATURE="0.3" \
/path/to/project
# Debug mode
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_DEBUG="true" \
/path/to/project
```
### Enterprise/Proxy Setup
```bash
# With proxy
uv run -m cubbi.cli session create --image gemini-cli \
--env HTTPS_PROXY="https://proxy.company.com:8080" \
/path/to/project
# With Google Cloud authentication
uv run -m cubbi.cli session create --image gemini-cli \
--env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
--env GCLOUD_PROJECT="your-project-id" \
/path/to/project
```
## Persistent Configuration
The following directories are automatically persisted:
- `~/.config/gemini/`: Gemini CLI configuration files
- `~/.cache/gemini/`: Model cache and temporary files
Configuration files are maintained across container restarts, ensuring your preferences and session history are preserved.
## Model Recommendations
### Best Overall Performance
- **Gemini 1.5 Pro**: Excellent reasoning and code understanding
- **Gemini 1.5 Flash**: Faster responses, good for iterative development
### Cost-Effective Options
- **Gemini 1.5 Flash**: Lower cost, high speed
- **Gemini 1.0 Pro**: Basic model for simple tasks
### Specialized Use Cases
- **Code Analysis**: Gemini 1.5 Pro
- **Quick Iterations**: Gemini 1.5 Flash
- **Multi-modal Tasks**: Gemini 1.5 Pro (supports images, PDFs)
## File Structure
```
cubbi/images/gemini-cli/
├── Dockerfile # Container image definition
├── cubbi_image.yaml # Cubbi image configuration
├── gemini_plugin.py # Authentication and setup plugin
└── README.md # This documentation
```
## Authentication Flow
1. **API Key Setup**: API key configured via Cubbi configuration or environment variables
2. **Plugin Initialization**: `gemini_plugin.py` creates configuration files
3. **Environment File**: Creates `~/.config/gemini/.env` with API key
4. **Configuration**: Creates `~/.config/gemini/config.json` with settings
5. **Ready**: Gemini CLI is ready for use with configured authentication
## Troubleshooting
### Common Issues
**No API Key Found**
```
No API key found - Gemini CLI will require authentication
```
**Solution**: Set API key in Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.google.api_key "your-key"
```
**Authentication Failed**
```
Error: Invalid API key or authentication failed
```
**Solution**: Verify your API key at [Google AI Studio](https://aistudio.google.com/apikey):
```bash
# Test your API key
curl -H "Content-Type: application/json" \
-d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"
```
**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Use supported models:
```bash
# List available models (inside container)
curl -H "Content-Type: application/json" \
"https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_API_KEY"
```
**Rate Limit Exceeded**
```
Error: Quota exceeded
```
**Solution**: Google AI provides:
- 60 requests per minute
- 1,000 requests per day
- Consider upgrading to Google Cloud for higher limits
**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```
### Debug Mode
```bash
# Enable debug output
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_DEBUG="true"
# Check configuration
cat ~/.config/gemini/config.json
# Check environment
cat ~/.config/gemini/.env
# Test CLI directly
gemini --help
```
## Security Considerations
- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Configuration**: Secure file permissions for config files
- **Google Cloud**: Supports service account authentication for enterprise use
## Advanced Configuration
### Custom Model Configuration
```bash
# Use specific model with custom settings
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_MODEL="gemini-1.5-flash" \
--env GEMINI_TEMPERATURE="0.2" \
--env GEMINI_MAX_TOKENS="8192"
```
### Google Search Integration
```bash
# Enable Google Search grounding for up-to-date information
uv run -m cubbi.cli session create --image gemini-cli \
--env GEMINI_SEARCH_ENABLED="true"
```
### Google Cloud Integration
```bash
# Use with Google Cloud service account
uv run -m cubbi.cli session create --image gemini-cli \
--env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
--env GCLOUD_PROJECT="your-project-id"
```
## API Limits and Pricing
### Free Tier (Google AI)
- 60 requests per minute
- 1,000 requests per day
- Personal Google account required
### Paid Tier (Google Cloud)
- Higher rate limits
- Enterprise features
- Service account authentication
- Custom quotas available
## Support
For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Gemini CLI Functionality**: Visit [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli)
- **Google AI Platform**: Visit [Google AI documentation](https://ai.google.dev/)
- **API Keys**: Visit [Google AI Studio](https://aistudio.google.com/)
## License
This image configuration is provided under the same license as the Cubbi project. Google Gemini CLI is licensed separately by Google.
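
The curl check from the Troubleshooting section can also be run from the standard library; the sketch below assumes `GEMINI_API_KEY` is exported and reuses the endpoint and payload shown above (the response is printed raw rather than parsed):

```python
# Stdlib version of the Troubleshooting curl command (illustrative).
import json
import os
import urllib.request

api_key = os.environ["GEMINI_API_KEY"]
url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/gemini-pro:generateContent?key={api_key}"
)
payload = json.dumps({"contents": [{"parts": [{"text": "Hello"}]}]}).encode()
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # any JSON body back means the key is accepted
```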

View File

@@ -0,0 +1,80 @@
name: gemini-cli
description: Google Gemini CLI environment for AI-powered development
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-gemini-cli:latest
environment:
# Google AI Configuration
- name: GEMINI_API_KEY
description: Google AI API key for Gemini models
required: false
sensitive: true
- name: GOOGLE_API_KEY
description: Alternative Google API key (compatibility)
required: false
sensitive: true
# Google Cloud Configuration
- name: GOOGLE_APPLICATION_CREDENTIALS
description: Path to Google Cloud service account JSON file
required: false
sensitive: true
- name: GCLOUD_PROJECT
description: Google Cloud project ID
required: false
# Model Configuration
- name: GEMINI_MODEL
description: Default Gemini model (e.g., gemini-1.5-pro, gemini-1.5-flash)
required: false
default: "gemini-1.5-pro"
- name: GEMINI_TEMPERATURE
description: Model temperature (0.0-2.0)
required: false
default: "0.7"
- name: GEMINI_MAX_TOKENS
description: Maximum tokens in response
required: false
# Search Configuration
- name: GEMINI_SEARCH_ENABLED
description: Enable Google Search grounding (true/false)
required: false
default: "false"
# Proxy Configuration
- name: HTTP_PROXY
description: HTTP proxy server URL
required: false
- name: HTTPS_PROXY
description: HTTPS proxy server URL
required: false
# Debug Configuration
- name: GEMINI_DEBUG
description: Enable debug mode (true/false)
required: false
default: "false"
ports: []
volumes:
- mountPath: /app
description: Application directory
persistent_configs:
- source: "/home/cubbi/.config/gemini"
target: "/cubbi-config/gemini-settings"
type: "directory"
description: "Gemini CLI configuration and history"
- source: "/home/cubbi/.cache/gemini"
target: "/cubbi-config/gemini-cache"
type: "directory"
description: "Gemini CLI cache directory"

View File

@@ -0,0 +1,241 @@
#!/usr/bin/env python3
"""
Gemini CLI Plugin for Cubbi
Handles authentication setup and configuration for Google Gemini CLI
"""
import json
import os
import stat
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
class GeminiCliPlugin(ToolPlugin):
"""Plugin for setting up Gemini CLI authentication and configuration"""
@property
def tool_name(self) -> str:
return "gemini-cli"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_gemini_config_dir(self) -> Path:
"""Get the Gemini configuration directory"""
# Get the actual username from the config if available
username = self.config.get("username", "cubbi")
return Path(f"/home/{username}/.config/gemini")
def _get_gemini_cache_dir(self) -> Path:
"""Get the Gemini cache directory"""
# Get the actual username from the config if available
username = self.config.get("username", "cubbi")
return Path(f"/home/{username}/.cache/gemini")
def _ensure_gemini_dirs(self) -> tuple[Path, Path]:
"""Ensure Gemini directories exist with correct ownership"""
config_dir = self._get_gemini_config_dir()
cache_dir = self._get_gemini_cache_dir()
# Create directories
for directory in [config_dir, cache_dir]:
try:
directory.mkdir(mode=0o755, parents=True, exist_ok=True)
self._set_ownership(directory)
except OSError as e:
self.status.log(
f"Failed to create Gemini directory {directory}: {e}", "ERROR"
)
return config_dir, cache_dir
def initialize(self) -> bool:
"""Initialize Gemini CLI configuration"""
self.status.log("Setting up Gemini CLI configuration...")
# Ensure Gemini directories exist
config_dir, cache_dir = self._ensure_gemini_dirs()
# Set up authentication and configuration
auth_configured = self._setup_authentication(config_dir)
config_created = self._create_configuration_file(config_dir)
if auth_configured or config_created:
self.status.log("✅ Gemini CLI configured successfully")
else:
self.status.log(
" No API key found - Gemini CLI will require authentication",
"INFO",
)
self.status.log(
" You can configure API keys using environment variables", "INFO"
)
# Always return True to allow container to start
return True
def _setup_authentication(self, config_dir: Path) -> bool:
"""Set up Gemini authentication"""
api_key = self._get_api_key()
if not api_key:
return False
# Create environment file for API key
env_file = config_dir / ".env"
try:
with open(env_file, "w") as f:
f.write(f"GEMINI_API_KEY={api_key}\n")
# Set ownership and secure file permissions
self._set_ownership(env_file)
os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)
self.status.log(f"Created Gemini environment file at {env_file}")
self.status.log("Added Gemini API key")
return True
except Exception as e:
self.status.log(f"Failed to create environment file: {e}", "ERROR")
return False
def _get_api_key(self) -> str:
"""Get the Gemini API key from environment variables"""
# Check multiple possible environment variable names
for key_name in ["GEMINI_API_KEY", "GOOGLE_API_KEY"]:
api_key = os.environ.get(key_name)
if api_key:
return api_key
return ""
def _create_configuration_file(self, config_dir: Path) -> bool:
"""Create Gemini CLI configuration file"""
try:
config = self._build_configuration()
if not config:
return False
config_file = config_dir / "config.json"
with open(config_file, "w") as f:
json.dump(config, f, indent=2)
# Set ownership and permissions
self._set_ownership(config_file)
os.chmod(config_file, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)
self.status.log(f"Created Gemini configuration at {config_file}")
return True
except Exception as e:
self.status.log(f"Failed to create configuration file: {e}", "ERROR")
return False
def _build_configuration(self) -> Dict[str, Any]:
"""Build Gemini CLI configuration from environment variables"""
config = {}
# Model configuration
model = os.environ.get("GEMINI_MODEL", "gemini-1.5-pro")
if model:
config["defaultModel"] = model
self.status.log(f"Set default model to {model}")
# Temperature setting
temperature = os.environ.get("GEMINI_TEMPERATURE")
if temperature:
try:
temp_value = float(temperature)
if 0.0 <= temp_value <= 2.0:
config["temperature"] = temp_value
self.status.log(f"Set temperature to {temp_value}")
else:
self.status.log(
f"Invalid temperature value {temperature}, using default",
"WARNING",
)
except ValueError:
self.status.log(
f"Invalid temperature format {temperature}, using default",
"WARNING",
)
# Max tokens setting
max_tokens = os.environ.get("GEMINI_MAX_TOKENS")
if max_tokens:
try:
tokens_value = int(max_tokens)
if tokens_value > 0:
config["maxTokens"] = tokens_value
self.status.log(f"Set max tokens to {tokens_value}")
else:
self.status.log(
f"Invalid max tokens value {max_tokens}, using default",
"WARNING",
)
except ValueError:
self.status.log(
f"Invalid max tokens format {max_tokens}, using default",
"WARNING",
)
# Search configuration
search_enabled = os.environ.get("GEMINI_SEARCH_ENABLED", "false")
if search_enabled.lower() in ["true", "false"]:
config["searchEnabled"] = search_enabled.lower() == "true"
if config["searchEnabled"]:
self.status.log("Enabled Google Search grounding")
# Debug mode
debug_mode = os.environ.get("GEMINI_DEBUG", "false")
if debug_mode.lower() in ["true", "false"]:
config["debug"] = debug_mode.lower() == "true"
if config["debug"]:
self.status.log("Enabled debug mode")
# Proxy settings
for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
proxy_value = os.environ.get(proxy_var)
if proxy_value:
config[proxy_var.lower()] = proxy_value
self.status.log(f"Added proxy configuration: {proxy_var}")
# Google Cloud project
project = os.environ.get("GCLOUD_PROJECT")
if project:
config["project"] = project
self.status.log(f"Set Google Cloud project to {project}")
return config
def setup_tool_configuration(self) -> bool:
"""Set up Gemini CLI configuration - called by base class"""
# Additional tool configuration can be added here if needed
return True
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Gemini CLI with available MCP servers if applicable"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Gemini CLI doesn't have native MCP support,
# but we could potentially add custom integrations here
self.status.log(
f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
)
return True
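
As a concrete illustration of `_build_configuration`, running the container with `GEMINI_API_KEY=<key>`, `GEMINI_MODEL=gemini-1.5-flash` and `GEMINI_TEMPERATURE=0.3` would leave roughly the following behind (values are examples, not taken from a real run):

```python
# ~/.config/gemini/.env would contain:   GEMINI_API_KEY=<key>
# ~/.config/gemini/config.json would roughly equal:
expected_config = {
    "defaultModel": "gemini-1.5-flash",
    "temperature": 0.3,
    "searchEnabled": False,
    "debug": False,
}
```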

View File

@@ -0,0 +1,312 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Gemini CLI Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""
import subprocess
import sys
import os
def run_command(cmd, description="", check=True):
"""Run a shell command and return result"""
print(f"\n🔍 {description}")
print(f"Running: {cmd}")
try:
result = subprocess.run(
cmd, shell=True, capture_output=True, text=True, check=check
)
if result.stdout:
print("STDOUT:")
print(result.stdout)
if result.stderr:
print("STDERR:")
print(result.stderr)
return result
except subprocess.CalledProcessError as e:
print(f"❌ Command failed with exit code {e.returncode}")
if e.stdout:
print("STDOUT:")
print(e.stdout)
if e.stderr:
print("STDERR:")
print(e.stderr)
if check:
raise
return e
def test_docker_build():
"""Test Docker image build"""
print("\n" + "=" * 60)
print("🧪 Testing Docker Image Build")
print("=" * 60)
# Get the directory containing this test file
test_dir = os.path.dirname(os.path.abspath(__file__))
result = run_command(
f"cd {test_dir} && docker build -t monadical/cubbi-gemini-cli:latest .",
"Building Gemini CLI Docker image",
)
if result.returncode == 0:
print("✅ Gemini CLI Docker image built successfully")
return True
else:
print("❌ Gemini CLI Docker image build failed")
return False
def test_docker_image_exists():
"""Test if the Gemini CLI Docker image exists"""
print("\n" + "=" * 60)
print("🧪 Testing Docker Image Existence")
print("=" * 60)
result = run_command(
"docker images monadical/cubbi-gemini-cli:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
"Checking if Gemini CLI Docker image exists",
)
if "monadical/cubbi-gemini-cli" in result.stdout:
print("✅ Gemini CLI Docker image exists")
return True
else:
print("❌ Gemini CLI Docker image not found")
return False
def test_gemini_version():
"""Test basic Gemini CLI functionality in container"""
print("\n" + "=" * 60)
print("🧪 Testing Gemini CLI Version")
print("=" * 60)
result = run_command(
"docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'gemini --version'",
"Testing Gemini CLI version command",
)
if result.returncode == 0 and (
"gemini" in result.stdout.lower() or "version" in result.stdout.lower()
):
print("✅ Gemini CLI version command works")
return True
else:
print("❌ Gemini CLI version command failed")
return False
def test_api_key_configuration():
"""Test API key configuration and environment setup"""
print("\n" + "=" * 60)
print("🧪 Testing API Key Configuration")
print("=" * 60)
# Test with multiple API keys
test_keys = {
"GEMINI_API_KEY": "test-gemini-key",
"GOOGLE_API_KEY": "test-google-key",
}
env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])
result = run_command(
f"docker run --rm {env_flags} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/.env 2>/dev/null || echo \"No .env file found\"'",
"Testing API key configuration in .env file",
)
success = True
if "test-gemini-key" in result.stdout:
print("✅ GEMINI_API_KEY configured correctly")
else:
print("❌ GEMINI_API_KEY not found in configuration")
success = False
return success
def test_configuration_file():
"""Test Gemini CLI configuration file creation"""
print("\n" + "=" * 60)
print("🧪 Testing Configuration File")
print("=" * 60)
env_vars = "-e GEMINI_API_KEY='test-key' -e GEMINI_MODEL='gemini-1.5-pro' -e GEMINI_TEMPERATURE='0.5'"
result = run_command(
f"docker run --rm {env_vars} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/config.json 2>/dev/null || echo \"No config file found\"'",
"Testing configuration file creation",
)
success = True
if "gemini-1.5-pro" in result.stdout:
print("✅ Default model configured correctly")
else:
print("❌ Default model not found in configuration")
success = False
if "0.5" in result.stdout:
print("✅ Temperature configured correctly")
else:
print("❌ Temperature not found in configuration")
success = False
return success
def test_cubbi_cli_integration():
"""Test Cubbi CLI integration"""
print("\n" + "=" * 60)
print("🧪 Testing Cubbi CLI Integration")
print("=" * 60)
# Change to project root for cubbi commands
project_root = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
# Test image listing
result = run_command(
f"cd {project_root} && uv run -m cubbi.cli image list",
"Testing Cubbi CLI can see images",
check=False,
)
if "gemini-cli" in result.stdout:
print("✅ Cubbi CLI can list Gemini CLI image")
else:
print(
" Gemini CLI image not yet registered with Cubbi CLI - this is expected during development"
)
# Test basic cubbi CLI works
result = run_command(
f"cd {project_root} && uv run -m cubbi.cli --help",
"Testing basic Cubbi CLI functionality",
)
if result.returncode == 0 and "cubbi" in result.stdout.lower():
print("✅ Cubbi CLI basic functionality works")
return True
else:
print("❌ Cubbi CLI basic functionality failed")
return False
def test_persistent_configuration():
"""Test persistent configuration directories"""
print("\n" + "=" * 60)
print("🧪 Testing Persistent Configuration")
print("=" * 60)
# Test that persistent directories are created
result = run_command(
"docker run --rm -e GEMINI_API_KEY='test-key' monadical/cubbi-gemini-cli:latest bash -c 'ls -la ~/.config/ && ls -la ~/.cache/'",
"Testing persistent configuration directories",
)
success = True
if "gemini" in result.stdout:
print("✅ ~/.config/gemini directory exists")
else:
print("❌ ~/.config/gemini directory not found")
success = False
if "gemini" in result.stdout:
print("✅ ~/.cache/gemini directory exists")
else:
print("❌ ~/.cache/gemini directory not found")
success = False
return success
def test_plugin_functionality():
"""Test the Gemini CLI plugin functionality"""
print("\n" + "=" * 60)
print("🧪 Testing Plugin Functionality")
print("=" * 60)
# Test plugin without API keys (should still work)
result = run_command(
"docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test without API keys\"'",
"Testing plugin functionality without API keys",
)
if "No API key found - Gemini CLI will require authentication" in result.stdout:
print("✅ Plugin handles missing API keys gracefully")
else:
print(" Plugin API key handling test - check output above")
# Test plugin with API keys
result = run_command(
"docker run --rm -e GEMINI_API_KEY='test-plugin-key' monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test with API keys\"'",
"Testing plugin functionality with API keys",
)
if "Gemini CLI configured successfully" in result.stdout:
print("✅ Plugin configures environment successfully")
return True
else:
print("❌ Plugin environment configuration failed")
return False
def main():
"""Run all tests"""
print("🚀 Starting Gemini CLI Cubbi Image Tests")
print("=" * 60)
tests = [
("Docker Image Build", test_docker_build),
("Docker Image Exists", test_docker_image_exists),
("Cubbi CLI Integration", test_cubbi_cli_integration),
("Gemini CLI Version", test_gemini_version),
("API Key Configuration", test_api_key_configuration),
("Configuration File", test_configuration_file),
("Persistent Configuration", test_persistent_configuration),
("Plugin Functionality", test_plugin_functionality),
]
results = {}
for test_name, test_func in tests:
try:
results[test_name] = test_func()
except Exception as e:
print(f"❌ Test '{test_name}' failed with exception: {e}")
results[test_name] = False
# Print summary
print("\n" + "=" * 60)
print("📊 TEST SUMMARY")
print("=" * 60)
total_tests = len(tests)
passed_tests = sum(1 for result in results.values() if result)
failed_tests = total_tests - passed_tests
for test_name, result in results.items():
status = "✅ PASS" if result else "❌ FAIL"
print(f"{status} {test_name}")
print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")
if failed_tests == 0:
print("\n🎉 All tests passed! Gemini CLI image is ready for use.")
return 0
else:
print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,14 +1,12 @@
FROM python:3.12-slim
LABEL maintainer="team@monadical.com"
LABEL description="Goose with MCP servers for Cubbi"
LABEL description="Goose for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
passwd \
git \
openssh-server \
bash \
curl \
bzip2 \
@@ -17,13 +15,13 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libxcb1 \
libdbus-1-3 \
nano \
tmux \
git-core \
ripgrep \
openssh-client \
vim \
&& rm -rf /var/lib/apt/lists/*
# Set up SSH server directory (configuration will be handled by entrypoint if needed)
RUN mkdir -p /var/run/sshd && chmod 0755 /var/run/sshd
# Do NOT enable root login or set root password here
# Install deps
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
@@ -40,32 +38,24 @@ RUN curl -fsSL https://github.com/block/goose/releases/download/stable/download_
# Create app directory
WORKDIR /app
# Copy initialization scripts
COPY cubbi-init.sh /cubbi-init.sh
COPY entrypoint.sh /entrypoint.sh
COPY cubbi-image.yaml /cubbi-image.yaml
COPY init-status.sh /init-status.sh
COPY update-goose-config.py /usr/local/bin/update-goose-config.py
# Extend env via bashrc
# Make scripts executable
RUN chmod +x /cubbi-init.sh /entrypoint.sh /init-status.sh \
/usr/local/bin/update-goose-config.py
# Set up initialization status check on login
RUN echo '[ -x /init-status.sh ] && /init-status.sh' >> /etc/bash.bashrc
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY goose_plugin.py /cubbi/goose_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help
# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
WORKDIR /app
# Expose ports
EXPOSE 8000 22
# Set entrypoint - container starts as root, entrypoint handles user switching
ENTRYPOINT ["/entrypoint.sh"]
# Default command if none is provided (entrypoint will run this via gosu)
ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

View File

@@ -1,25 +1,50 @@
# Goose Image for MC
# Goose Image for Cubbi
This image provides a containerized environment for running [Goose](https://goose.ai).
## Features
- Pre-configured environment for Goose AI
- Self-hosted instance integration
- SSH access
- Git repository integration
- Langfuse logging support
## Environment Variables
### Goose Configuration
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_MODEL` | Model to use with Goose | No | - |
| `CUBBI_PROVIDER` | Provider to use with Goose | No | - |
### Langfuse Integration
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `LANGFUSE_INIT_PROJECT_PUBLIC_KEY` | Langfuse public key | No | - |
| `LANGFUSE_INIT_PROJECT_SECRET_KEY` | Langfuse secret key | No | - |
| `LANGFUSE_URL` | Langfuse API URL | No | `https://cloud.langfuse.com` |
### Cubbi Core Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |
### MCP Integration Variables
| Variable | Description | Required |
|----------|-------------|----------|
| `LANGFUSE_INIT_PROJECT_PUBLIC_KEY` | Langfuse public key | No |
| `LANGFUSE_INIT_PROJECT_SECRET_KEY` | Langfuse secret key | No |
| `LANGFUSE_URL` | Langfuse API URL | No |
| `CUBBI_PROJECT_URL` | Project repository URL | No |
| `CUBBI_GIT_SSH_KEY` | SSH key for Git authentication | No |
| `CUBBI_GIT_TOKEN` | Token for Git authentication | No |
| `MCP_COUNT` | Number of available MCP servers | No |
| `MCP_NAMES` | JSON array of MCP server names | No |
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
| `MCP_{idx}_TYPE` | Type of MCP server | No |
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |
## Build
@@ -34,8 +59,5 @@ docker build -t monadical/cubbi-goose:latest .
```bash
# Create a new session with this image
cubbi session create --driver goose
# Create with project repository
cubbi session create --driver goose --project github.com/username/repo
cubbix -i goose
```

View File

@@ -1,188 +0,0 @@
#!/bin/bash
# Standardized initialization script for Cubbi images
# Redirect all output to both stdout and the log file
exec > >(tee -a /init.log) 2>&1
# Mark initialization as started
echo "=== Cubbi Initialization started at $(date) ==="
# --- START INSERTED BLOCK ---
# Default UID/GID if not provided (should be passed by cubbi tool)
CUBBI_USER_ID=${CUBBI_USER_ID:-1000}
CUBBI_GROUP_ID=${CUBBI_GROUP_ID:-1000}
echo "Using UID: $CUBBI_USER_ID, GID: $CUBBI_GROUP_ID"
# Create group if it doesn't exist
if ! getent group cubbi > /dev/null; then
groupadd -g $CUBBI_GROUP_ID cubbi
else
# If group exists but has different GID, modify it
EXISTING_GID=$(getent group cubbi | cut -d: -f3)
if [ "$EXISTING_GID" != "$CUBBI_GROUP_ID" ]; then
groupmod -g $CUBBI_GROUP_ID cubbi
fi
fi
# Create user if it doesn't exist
if ! getent passwd cubbi > /dev/null; then
useradd --shell /bin/bash --uid $CUBBI_USER_ID --gid $CUBBI_GROUP_ID --no-create-home cubbi
else
# If user exists but has different UID/GID, modify it
EXISTING_UID=$(getent passwd cubbi | cut -d: -f3)
EXISTING_GID=$(getent passwd cubbi | cut -d: -f4)
if [ "$EXISTING_UID" != "$CUBBI_USER_ID" ] || [ "$EXISTING_GID" != "$CUBBI_GROUP_ID" ]; then
usermod --uid $CUBBI_USER_ID --gid $CUBBI_GROUP_ID cubbi
fi
fi
# Create home directory and set permissions
mkdir -p /home/cubbi
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /home/cubbi
mkdir -p /app
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /app
# Copy /root/.local/bin to the user's home directory
if [ -d /root/.local/bin ]; then
echo "Copying /root/.local/bin to /home/cubbi/.local/bin..."
mkdir -p /home/cubbi/.local/bin
cp -r /root/.local/bin/* /home/cubbi/.local/bin/
chown -R $CUBBI_USER_ID:$CUBBI_GROUP_ID /home/cubbi/.local
fi
# Start SSH server only if explicitly enabled
if [ "$CUBBI_SSH_ENABLED" = "true" ]; then
echo "Starting SSH server..."
/usr/sbin/sshd
else
echo "SSH server disabled (use --ssh flag to enable)"
fi
# --- END INSERTED BLOCK ---
echo "INIT_COMPLETE=false" > /init.status
# Project initialization
if [ -n "$CUBBI_PROJECT_URL" ]; then
echo "Initializing project: $CUBBI_PROJECT_URL"
# Set up SSH key if provided
if [ -n "$CUBBI_GIT_SSH_KEY" ]; then
mkdir -p ~/.ssh
echo "$CUBBI_GIT_SSH_KEY" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh-keyscan github.com >> ~/.ssh/known_hosts 2>/dev/null
ssh-keyscan gitlab.com >> ~/.ssh/known_hosts 2>/dev/null
ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts 2>/dev/null
fi
# Set up token if provided
if [ -n "$CUBBI_GIT_TOKEN" ]; then
git config --global credential.helper store
echo "https://$CUBBI_GIT_TOKEN:x-oauth-basic@github.com" > ~/.git-credentials
fi
# Clone repository
git clone $CUBBI_PROJECT_URL /app
cd /app
# Run project-specific initialization if present
if [ -f "/app/.cubbi/init.sh" ]; then
bash /app/.cubbi/init.sh
fi
# Persistent configs are now directly mounted as volumes
# No need to create symlinks anymore
if [ -n "$CUBBI_CONFIG_DIR" ] && [ -d "$CUBBI_CONFIG_DIR" ]; then
echo "Using persistent configuration volumes (direct mounts)"
fi
fi
# Goose uses self-hosted instance, no API key required
# Set up Langfuse logging if credentials are provided
if [ -n "$LANGFUSE_INIT_PROJECT_SECRET_KEY" ] && [ -n "$LANGFUSE_INIT_PROJECT_PUBLIC_KEY" ]; then
echo "Setting up Langfuse logging"
export LANGFUSE_INIT_PROJECT_SECRET_KEY="$LANGFUSE_INIT_PROJECT_SECRET_KEY"
export LANGFUSE_INIT_PROJECT_PUBLIC_KEY="$LANGFUSE_INIT_PROJECT_PUBLIC_KEY"
export LANGFUSE_URL="${LANGFUSE_URL:-https://cloud.langfuse.com}"
fi
# Ensure /cubbi-config directory exists (required for symlinks)
if [ ! -d "/cubbi-config" ]; then
echo "Creating /cubbi-config directory since it doesn't exist"
mkdir -p /cubbi-config
chown $CUBBI_USER_ID:$CUBBI_GROUP_ID /cubbi-config
fi
# Create symlinks for persistent configurations defined in the image
if [ -n "$CUBBI_PERSISTENT_LINKS" ]; then
echo "Creating persistent configuration symlinks..."
# Split by semicolon
IFS=';' read -ra LINKS <<< "$CUBBI_PERSISTENT_LINKS"
for link_pair in "${LINKS[@]}"; do
# Split by colon
IFS=':' read -r source_path target_path <<< "$link_pair"
if [ -z "$source_path" ] || [ -z "$target_path" ]; then
echo "Warning: Invalid link pair format '$link_pair', skipping."
continue
fi
echo "Processing link: $source_path -> $target_path"
parent_dir=$(dirname "$source_path")
# Ensure parent directory of the link source exists and is owned by cubbi
if [ ! -d "$parent_dir" ]; then
echo "Creating parent directory: $parent_dir"
mkdir -p "$parent_dir"
echo "Changing ownership of parent $parent_dir to $CUBBI_USER_ID:$CUBBI_GROUP_ID"
chown "$CUBBI_USER_ID:$CUBBI_GROUP_ID" "$parent_dir" || echo "Warning: Could not chown parent $parent_dir"
fi
# Create the symlink (force, no-dereference)
echo "Creating symlink: ln -sfn $target_path $source_path"
ln -sfn "$target_path" "$source_path"
# Optionally, change ownership of the symlink itself
echo "Changing ownership of symlink $source_path to $CUBBI_USER_ID:$CUBBI_GROUP_ID"
chown -h "$CUBBI_USER_ID:$CUBBI_GROUP_ID" "$source_path" || echo "Warning: Could not chown symlink $source_path"
done
echo "Persistent configuration symlinks created."
fi
# Update Goose configuration with available MCP servers (run as cubbi after symlinks are created)
if [ -f "/usr/local/bin/update-goose-config.py" ]; then
echo "Updating Goose configuration with MCP servers as cubbi..."
gosu cubbi /usr/local/bin/update-goose-config.py
elif [ -f "$(dirname "$0")/update-goose-config.py" ]; then
echo "Updating Goose configuration with MCP servers as cubbi..."
gosu cubbi "$(dirname "$0")/update-goose-config.py"
else
echo "Warning: update-goose-config.py script not found. Goose configuration will not be updated."
fi
# Run the user command first, if set, as cubbi
if [ -n "$CUBBI_RUN_COMMAND" ]; then
echo "--- Executing initial command: $CUBBI_RUN_COMMAND ---";
gosu cubbi sh -c "$CUBBI_RUN_COMMAND"; # Run user command as cubbi
COMMAND_EXIT_CODE=$?;
echo "--- Initial command finished (exit code: $COMMAND_EXIT_CODE) ---";
# If CUBBI_NO_SHELL is set, exit instead of starting a shell
if [ "$CUBBI_NO_SHELL" = "true" ]; then
echo "--- CUBBI_NO_SHELL=true, exiting container without starting shell ---";
# Mark initialization as complete before exiting
echo "=== Cubbi Initialization completed at $(date) ==="
echo "INIT_COMPLETE=true" > /init.status
exit $COMMAND_EXIT_CODE;
fi;
fi;
# Mark initialization as complete
echo "=== Cubbi Initialization completed at $(date) ==="
echo "INIT_COMPLETE=true" > /init.status
exec gosu cubbi "$@"

View File

@@ -24,29 +24,8 @@ environment:
required: false
default: https://cloud.langfuse.com
# Project environment variables
- name: CUBBI_PROJECT_URL
description: Project repository URL
required: false
- name: CUBBI_PROJECT_TYPE
description: Project repository type (git, svn, etc.)
required: false
default: git
- name: CUBBI_GIT_SSH_KEY
description: SSH key for Git authentication
required: false
sensitive: true
- name: CUBBI_GIT_TOKEN
description: Token for Git authentication
required: false
sensitive: true
ports:
- 8000 # Main application
- 22 # SSH server
- 8000
volumes:
- mountPath: /app
@@ -57,7 +36,3 @@ persistent_configs:
target: "/cubbi-config/goose-app"
type: "directory"
description: "Goose memory"
- source: "/home/cubbi/.config/goose"
target: "/cubbi-config/goose-config"
type: "directory"
description: "Goose configuration"

View File

@@ -1,7 +0,0 @@
#!/bin/bash
# Entrypoint script for Goose image
# Executes the standard initialization script, which handles user setup,
# service startup (like sshd), and switching to the non-root user
# before running the container's command (CMD).
exec /cubbi-init.sh "$@"

View File

@@ -0,0 +1,195 @@
#!/usr/bin/env python3
"""
Goose-specific plugin for Cubbi initialization
"""
import os
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
from ruamel.yaml import YAML
class GoosePlugin(ToolPlugin):
"""Plugin for Goose AI tool initialization"""
@property
def tool_name(self) -> str:
return "goose"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_user_config_path(self) -> Path:
"""Get the correct config path for the cubbi user"""
return Path("/home/cubbi/.config/goose")
def _ensure_user_config_dir(self) -> Path:
"""Ensure config directory exists with correct ownership"""
config_dir = self._get_user_config_path()
# Create the full directory path
try:
config_dir.mkdir(parents=True, exist_ok=True)
except FileExistsError:
# Directory already exists, which is fine
pass
except OSError as e:
self.status.log(
f"Failed to create config directory {config_dir}: {e}", "ERROR"
)
return config_dir
# Set ownership for the directories
config_parent = config_dir.parent
if config_parent.exists():
self._set_ownership(config_parent)
if config_dir.exists():
self._set_ownership(config_dir)
return config_dir
def initialize(self) -> bool:
"""Initialize Goose configuration"""
self._ensure_user_config_dir()
return self.setup_tool_configuration()
def setup_tool_configuration(self) -> bool:
"""Set up Goose configuration file"""
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.yaml"
yaml = YAML(typ="safe")
# Load or initialize configuration
if config_file.exists():
with config_file.open("r") as f:
config_data = yaml.load(f) or {}
else:
config_data = {}
if "extensions" not in config_data:
config_data["extensions"] = {}
# Add default developer extension
config_data["extensions"]["developer"] = {
"enabled": True,
"name": "developer",
"timeout": 300,
"type": "builtin",
}
# Update with environment variables
goose_model = os.environ.get("CUBBI_MODEL")
goose_provider = os.environ.get("CUBBI_PROVIDER")
if goose_model:
config_data["GOOSE_MODEL"] = goose_model
self.status.log(f"Set GOOSE_MODEL to {goose_model}")
if goose_provider:
config_data["GOOSE_PROVIDER"] = goose_provider
self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")
try:
with config_file.open("w") as f:
yaml.dump(config_data, f)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
self.status.log(f"Updated Goose configuration at {config_file}")
return True
except Exception as e:
self.status.log(f"Failed to write Goose configuration: {e}", "ERROR")
return False
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Goose with available MCP servers"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.yaml"
yaml = YAML(typ="safe")
if config_file.exists():
with config_file.open("r") as f:
config_data = yaml.load(f) or {}
else:
config_data = {"extensions": {}}
if "extensions" not in config_data:
config_data["extensions"] = {}
for server in mcp_config["servers"]:
server_name = server["name"]
server_host = server["host"]
server_url = server["url"]
if server_name and server_host:
mcp_url = f"http://{server_host}:8080/sse"
self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
config_data["extensions"][server_name] = {
"enabled": True,
"name": server_name,
"timeout": 60,
"type": "sse",
"uri": mcp_url,
"envs": {},
}
elif server_name and server_url:
self.status.log(
f"Adding remote MCP extension: {server_name} - {server_url}"
)
config_data["extensions"][server_name] = {
"enabled": True,
"name": server_name,
"timeout": 60,
"type": "sse",
"uri": server_url,
"envs": {},
}
try:
with config_file.open("w") as f:
yaml.dump(config_data, f)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
return True
except Exception as e:
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
return False

View File

@@ -1,106 +0,0 @@
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = ["ruamel.yaml"]
# ///
import json
import os
from pathlib import Path
from ruamel.yaml import YAML
# Path to goose config
GOOSE_CONFIG = Path.home() / ".config/goose/config.yaml"
CONFIG_DIR = GOOSE_CONFIG.parent
# Create config directory if it doesn't exist
CONFIG_DIR.mkdir(parents=True, exist_ok=True)
def update_config():
"""Update Goose configuration based on environment variables and config file"""
yaml = YAML()
# Load or initialize the YAML configuration
if not GOOSE_CONFIG.exists():
config_data = {"extensions": {}}
else:
with GOOSE_CONFIG.open("r") as f:
config_data = yaml.load(f)
if "extensions" not in config_data:
config_data["extensions"] = {}
# Add default developer extension
config_data["extensions"]["developer"] = {
"enabled": True,
"name": "developer",
"timeout": 300,
"type": "builtin",
}
# Update goose configuration with model and provider from environment variables
goose_model = os.environ.get("CUBBI_MODEL")
goose_provider = os.environ.get("CUBBI_PROVIDER")
if goose_model:
config_data["GOOSE_MODEL"] = goose_model
print(f"Set GOOSE_MODEL to {goose_model}")
if goose_provider:
config_data["GOOSE_PROVIDER"] = goose_provider
print(f"Set GOOSE_PROVIDER to {goose_provider}")
# Get MCP information from environment variables
mcp_count = int(os.environ.get("MCP_COUNT", "0"))
mcp_names_str = os.environ.get("MCP_NAMES", "[]")
try:
mcp_names = json.loads(mcp_names_str)
print(f"Found {mcp_count} MCP servers: {', '.join(mcp_names)}")
except json.JSONDecodeError:
mcp_names = []
print("Error parsing MCP_NAMES environment variable")
# Process each MCP - collect the MCP configs to add or update
for idx in range(mcp_count):
mcp_name = os.environ.get(f"MCP_{idx}_NAME")
mcp_type = os.environ.get(f"MCP_{idx}_TYPE")
mcp_host = os.environ.get(f"MCP_{idx}_HOST")
# Always use container's SSE port (8080) not the host-bound port
if mcp_name and mcp_host:
# Use standard MCP SSE port (8080)
mcp_url = f"http://{mcp_host}:8080/sse"
print(f"Processing MCP extension: {mcp_name} ({mcp_type}) - {mcp_url}")
config_data["extensions"][mcp_name] = {
"enabled": True,
"name": mcp_name,
"timeout": 60,
"type": "sse",
"uri": mcp_url,
"envs": {},
}
elif mcp_name and os.environ.get(f"MCP_{idx}_URL"):
# For remote MCPs, use the URL provided in environment
mcp_url = os.environ.get(f"MCP_{idx}_URL")
print(
f"Processing remote MCP extension: {mcp_name} ({mcp_type}) - {mcp_url}"
)
config_data["extensions"][mcp_name] = {
"enabled": True,
"name": mcp_name,
"timeout": 60,
"type": "sse",
"uri": mcp_url,
"envs": {},
}
# Write the updated configuration back to the file
with GOOSE_CONFIG.open("w") as f:
yaml.dump(config_data, f)
print(f"Updated Goose configuration at {GOOSE_CONFIG}")
if __name__ == "__main__":
update_config()

View File

@@ -6,17 +6,20 @@ if [ "$(id -u)" != "0" ]; then
exit 0
fi
# Ensure files exist before checking them
touch /cubbi/init.status /cubbi/init.log
# Quick check instead of full logic
if ! grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
if ! grep -q "INIT_COMPLETE=true" "/cubbi/init.status" 2>/dev/null; then
# Only follow logs if initialization is incomplete
if [ -f "/init.log" ]; then
if [ -f "/cubbi/init.log" ]; then
echo "----------------------------------------"
tail -f /init.log &
tail -f /cubbi/init.log &
tail_pid=$!
# Check every second if initialization has completed
while true; do
if grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
if grep -q "INIT_COMPLETE=true" "/cubbi/init.status" 2>/dev/null; then
kill $tail_pid 2>/dev/null
echo "----------------------------------------"
break
@@ -28,4 +31,4 @@ if ! grep -q "INIT_COMPLETE=true" "/init.status" 2>/dev/null; then
fi
fi
exec gosu cubbi /bin/bash -il
exec gosu cubbi /bin/bash -i

View File

@@ -0,0 +1,67 @@
FROM python:3.12-slim
LABEL maintainer="team@monadical.com"
LABEL description="Opencode for Cubbi"
# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
gosu \
passwd \
bash \
curl \
bzip2 \
iputils-ping \
iproute2 \
libxcb1 \
libdbus-1-3 \
nano \
tmux \
git-core \
ripgrep \
openssh-client \
vim \
&& rm -rf /var/lib/apt/lists/*
# Install deps
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
sh install.sh && \
mv /root/.local/bin/uv /usr/local/bin/uv && \
mv /root/.local/bin/uvx /usr/local/bin/uvx && \
rm install.sh
# Install opencode-ai
RUN mkdir -p /opt/node && \
curl -fsSL https://nodejs.org/dist/v22.16.0/node-v22.16.0-linux-x64.tar.gz -o node.tar.gz && \
tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
rm node.tar.gz
ENV PATH="/opt/node/bin:$PATH"
RUN npm i -g yarn
RUN npm i -g opencode-ai
# Create app directory
WORKDIR /app
# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY opencode_plugin.py /cubbi/opencode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy
# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help
# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
WORKDIR /app
ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]

View File

@@ -0,0 +1,55 @@
# Opencode Image for Cubbi
This image provides a containerized environment for running [Opencode](https://opencode.ai).
## Features
- Pre-configured environment for Opencode AI
- Langfuse logging support
## Environment Variables
### Opencode Configuration
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_MODEL` | Model to use with Opencode | No | - |
| `CUBBI_PROVIDER` | Provider to use with Opencode | No | - |
### Cubbi Core Variables
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |
### MCP Integration Variables
| Variable | Description | Required |
|----------|-------------|----------|
| `MCP_COUNT` | Number of available MCP servers | No |
| `MCP_NAMES` | JSON array of MCP server names | No |
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
| `MCP_{idx}_TYPE` | Type of MCP server | No |
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |
## Build
To build this image:
```bash
cd drivers/opencode
docker build -t monadical/cubbi-opencode:latest .
```
## Usage
```bash
# Create a new session with this image
cubbix -i opencode
```

View File

@@ -0,0 +1,18 @@
name: opencode
description: Opencode AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-opencode:latest
init:
pre_command: /cubbi-init.sh
command: /entrypoint.sh
environment: []
ports: []
volumes:
- mountPath: /app
description: Application directory
persistent_configs: []

View File

@@ -0,0 +1,255 @@
#!/usr/bin/env python3
"""
Opencode-specific plugin for Cubbi initialization
"""
import json
import os
from pathlib import Path
from typing import Any, Dict
from cubbi_init import ToolPlugin
# Map of environment variables to provider names in auth.json
API_KEY_MAPPINGS = {
"ANTHROPIC_API_KEY": "anthropic",
"GOOGLE_API_KEY": "google",
"OPENAI_API_KEY": "openai",
"OPENROUTER_API_KEY": "openrouter",
}
class OpencodePlugin(ToolPlugin):
"""Plugin for Opencode AI tool initialization"""
@property
def tool_name(self) -> str:
return "opencode"
def _get_user_ids(self) -> tuple[int, int]:
"""Get the cubbi user and group IDs from environment"""
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
return user_id, group_id
def _set_ownership(self, path: Path) -> None:
"""Set ownership of a path to the cubbi user"""
user_id, group_id = self._get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError as e:
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
def _get_user_config_path(self) -> Path:
"""Get the correct config path for the cubbi user"""
return Path("/home/cubbi/.config/opencode")
def _get_user_data_path(self) -> Path:
"""Get the correct data path for the cubbi user"""
return Path("/home/cubbi/.local/share/opencode")
def _ensure_user_config_dir(self) -> Path:
"""Ensure config directory exists with correct ownership"""
config_dir = self._get_user_config_path()
# Create the full directory path
try:
config_dir.mkdir(parents=True, exist_ok=True)
except FileExistsError:
# Directory already exists, which is fine
pass
except OSError as e:
self.status.log(
f"Failed to create config directory {config_dir}: {e}", "ERROR"
)
return config_dir
# Set ownership for the directories
config_parent = config_dir.parent
if config_parent.exists():
self._set_ownership(config_parent)
if config_dir.exists():
self._set_ownership(config_dir)
return config_dir
def _ensure_user_data_dir(self) -> Path:
"""Ensure data directory exists with correct ownership"""
data_dir = self._get_user_data_path()
# Create the full directory path
try:
data_dir.mkdir(parents=True, exist_ok=True)
except FileExistsError:
# Directory already exists, which is fine
pass
except OSError as e:
self.status.log(f"Failed to create data directory {data_dir}: {e}", "ERROR")
return data_dir
# Set ownership for the directories
data_parent = data_dir.parent
if data_parent.exists():
self._set_ownership(data_parent)
if data_dir.exists():
self._set_ownership(data_dir)
return data_dir
def _create_auth_file(self) -> bool:
"""Create auth.json file with configured API keys"""
# Ensure data directory exists
data_dir = self._ensure_user_data_dir()
if not data_dir.exists():
self.status.log(
f"Data directory {data_dir} does not exist and could not be created",
"ERROR",
)
return False
auth_file = data_dir / "auth.json"
auth_data = {}
# Check each API key and add to auth data if present
for env_var, provider in API_KEY_MAPPINGS.items():
api_key = os.environ.get(env_var)
if api_key:
auth_data[provider] = {"type": "api", "key": api_key}
self.status.log(f"Added {provider} API key to auth configuration")
# Only write file if we have at least one API key
if not auth_data:
self.status.log("No API keys found, skipping auth.json creation")
return True
try:
with auth_file.open("w") as f:
json.dump(auth_data, f, indent=2)
# Set ownership of the auth file to cubbi user
self._set_ownership(auth_file)
# Set secure permissions (readable only by owner)
auth_file.chmod(0o600)
self.status.log(f"Created OpenCode auth configuration at {auth_file}")
return True
except Exception as e:
self.status.log(f"Failed to create auth configuration: {e}", "ERROR")
return False
def initialize(self) -> bool:
"""Initialize Opencode configuration"""
self._ensure_user_config_dir()
# Create auth.json file with API keys
auth_success = self._create_auth_file()
# Set up tool configuration
config_success = self.setup_tool_configuration()
return auth_success and config_success
def setup_tool_configuration(self) -> bool:
"""Set up Opencode configuration file"""
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.json"
# Load or initialize configuration
if config_file.exists():
with config_file.open("r") as f:
config_data = json.load(f) or {}
else:
config_data = {}
# Update with environment variables
opencode_model = os.environ.get("CUBBI_MODEL")
opencode_provider = os.environ.get("CUBBI_PROVIDER")
if opencode_model and opencode_provider:
config_data["model"] = f"{opencode_provider}/{opencode_model}"
self.status.log(f"Set model to {config_data['model']}")
try:
with config_file.open("w") as f:
json.dump(config_data, f, indent=2)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
self.status.log(f"Updated Opencode configuration at {config_file}")
return True
except Exception as e:
self.status.log(f"Failed to write Opencode configuration: {e}", "ERROR")
return False
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate Opencode with available MCP servers"""
if mcp_config["count"] == 0:
self.status.log("No MCP servers to integrate")
return True
# Ensure directory exists before writing
config_dir = self._ensure_user_config_dir()
if not config_dir.exists():
self.status.log(
f"Config directory {config_dir} does not exist and could not be created",
"ERROR",
)
return False
config_file = config_dir / "config.json"
if config_file.exists():
with config_file.open("r") as f:
config_data = json.load(f) or {}
else:
config_data = {}
if "mcp" not in config_data:
config_data["mcp"] = {}
for server in mcp_config["servers"]:
server_name = server["name"]
server_host = server.get("host")
server_url = server.get("url")
if server_name and server_host:
mcp_url = f"http://{server_host}:8080/sse"
self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
config_data["mcp"][server_name] = {
"type": "remote",
"url": mcp_url,
}
elif server_name and server_url:
self.status.log(
f"Adding remote MCP extension: {server_name} - {server_url}"
)
config_data["mcp"][server_name] = {
"type": "remote",
"url": server_url,
}
try:
with config_file.open("w") as f:
json.dump(config_data, f, indent=2)
# Set ownership of the config file to cubbi user
self._set_ownership(config_file)
return True
except Exception as e:
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
return False

View File

@@ -387,7 +387,7 @@ Cubbi provides persistent storage for project-specific configurations that need
2. **Image Configuration**:
- Each image can specify configuration files/directories that should persist across sessions
- These are defined in the image's `cubbi-image.yaml` file in the `persistent_configs` section
- These are defined in the image's `cubbi_image.yaml` file in the `persistent_configs` section
- Example for Goose image:
```yaml
persistent_configs:
@@ -458,7 +458,7 @@ Each image is a Docker container with a standardized structure:
/
├── entrypoint.sh # Container initialization
├── cubbi-init.sh # Standardized initialization script
├── cubbi-image.yaml # Image metadata and configuration
├── cubbi_image.yaml # Image metadata and configuration
├── tool/ # AI tool installation
└── ssh/ # SSH server configuration
```
@@ -500,7 +500,7 @@ fi
# Image-specific initialization continues...
```
### Image Configuration (cubbi-image.yaml)
### Image Configuration (cubbi_image.yaml)
```yaml
name: goose

327
docs/specs/3_IMAGE.md Normal file
View File

@@ -0,0 +1,327 @@
# Cubbi Image Specifications
## Overview
This document defines the specifications and requirements for building Cubbi-compatible container images. These images serve as isolated development environments for AI tools within the Cubbi platform.
## Architecture
Cubbi images use a Python-based initialization system with a plugin architecture that separates core Cubbi functionality from tool-specific configuration.
### Core Components
1. **Image Metadata File** (`cubbi_image.yaml`) - *Tool-specific*
2. **Container Definition** (`Dockerfile`) - *Tool-specific*
3. **Python Initialization Script** (`cubbi_init.py`) - *Shared across all images*
4. **Tool-specific Plugins** (e.g., `goose_plugin.py`) - *Tool-specific*
5. **Status Tracking Scripts** (`init-status.sh`) - *Shared across all images*
## Image Metadata Specification
### Required Fields
```yaml
name: string # Unique identifier for the image
description: string # Human-readable description
version: string # Semantic version (e.g., "1.0.0")
maintainer: string # Contact information
image: string # Docker image name and tag
```
### Environment Variables
```yaml
environment:
- name: string # Variable name
description: string # Human-readable description
required: boolean # Whether variable is mandatory
sensitive: boolean # Whether variable contains secrets
default: string # Default value (optional)
```
#### Standard Environment Variables
All images MUST support these standard environment variables:
- `CUBBI_USER_ID`: UID for the container user (default: 1000)
- `CUBBI_GROUP_ID`: GID for the container user (default: 1000)
- `CUBBI_RUN_COMMAND`: Command to execute after initialization
- `CUBBI_NO_SHELL`: Exit after command execution ("true"/"false")
- `CUBBI_CONFIG_DIR`: Directory for persistent configurations (default: "/cubbi-config")
- `CUBBI_MODEL`: Model to use for the tool
- `CUBBI_PROVIDER`: Provider to use for the tool
#### MCP Integration Variables
For MCP (Model Context Protocol) integration:
- `MCP_COUNT`: Number of available MCP servers
- `MCP_{idx}_NAME`: Name of MCP server at index
- `MCP_{idx}_TYPE`: Type of MCP server
- `MCP_{idx}_HOST`: Hostname of MCP server
- `MCP_{idx}_URL`: Full URL for remote MCP servers
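For illustration, here is a minimal sketch of how an initialization script might gather these variables into the `mcp_config` structure (with `count` and `servers` keys) that the tool plugins consume; the helper name `collect_mcp_servers` is illustrative, not part of `cubbi_init.py`:
```python
import json
import os

def collect_mcp_servers() -> dict:
    """Assemble MCP server details from the MCP_* environment variables."""
    count = int(os.environ.get("MCP_COUNT", "0"))
    names = json.loads(os.environ.get("MCP_NAMES", "[]"))
    servers = []
    for idx in range(count):
        servers.append({
            "name": os.environ.get(f"MCP_{idx}_NAME"),
            "type": os.environ.get(f"MCP_{idx}_TYPE"),
            "host": os.environ.get(f"MCP_{idx}_HOST"),
            "url": os.environ.get(f"MCP_{idx}_URL"),
        })
    return {"count": count, "names": names, "servers": servers}
```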
### Network Configuration
```yaml
ports:
- number # Port to expose (e.g., 8000)
```
### Storage Configuration
```yaml
volumes:
- mountPath: string # Path inside container
description: string # Purpose of the volume
persistent_configs:
- source: string # Path inside container
target: string # Path in persistent storage
type: string # "file" or "directory"
description: string # Purpose of the configuration
```
## Container Requirements
### Base System Dependencies
All images MUST include:
- `python3` - For the initialization system
- `gosu` - For secure user switching
- `bash` - For script execution
### Python Dependencies
The Cubbi initialization system requires:
- `ruamel.yaml` - For YAML configuration parsing
### User Management
Images MUST:
1. Run as root initially for setup
2. Create a non-root user (`cubbi`) with configurable UID/GID
3. Switch to the non-root user for tool execution
4. Handle user ID mapping for volume permissions
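A minimal sketch of this flow, assuming the `groupadd`, `useradd`, and `gosu` utilities installed by the reference Dockerfiles (the actual implementation lives in the shared `cubbi_init.py` and handles more edge cases, such as remapping existing users):
```python
import os
import subprocess

def setup_user(command: list[str]) -> None:
    """Create the cubbi user with the requested IDs, then drop privileges."""
    uid = os.environ.get("CUBBI_USER_ID", "1000")
    gid = os.environ.get("CUBBI_GROUP_ID", "1000")
    # Create group and user if missing; ignore failures when they already exist
    subprocess.run(["groupadd", "-g", gid, "cubbi"], check=False)
    subprocess.run(["useradd", "--shell", "/bin/bash", "--uid", uid,
                    "--gid", gid, "--no-create-home", "cubbi"], check=False)
    # Ensure home and working directories exist and are owned by the user
    for path in ("/home/cubbi", "/app"):
        os.makedirs(path, exist_ok=True)
        os.chown(path, int(uid), int(gid))
    # Drop privileges and hand off to the requested command
    os.execvp("gosu", ["gosu", "cubbi"] + command)
```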
### Directory Structure
Standard directories:
- `/app` - Primary working directory (owned by cubbi user)
- `/home/cubbi` - User home directory
- `/cubbi-config` - Persistent configuration storage
- `/cubbi/init.log` - Initialization log file
- `/cubbi/init.status` - Initialization status tracking
- `/cubbi/cubbi_image.yaml` - Image configuration
## Initialization System
### Shared Scripts
The following scripts are **shared across all Cubbi images** and should be copied from the main Cubbi repository:
#### Main Script (`cubbi_init.py`) - *Shared*
The standalone initialization script that:
1. Sets up user and group with proper IDs
2. Creates standard directories with correct permissions
3. Sets up persistent configuration symlinks
4. Runs tool-specific initialization
5. Executes user commands or starts interactive shell
The script supports:
- `--help` for usage information
- Argument passing to final command
- Environment variable configuration
- Plugin-based tool initialization
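As a rough sketch of the command-handling step (behaviour taken from the previous `cubbi-init.sh`; the function name is illustrative):
```python
import os
import subprocess
import sys

def run_user_command() -> None:
    """Run CUBBI_RUN_COMMAND as the cubbi user, then exit or fall through to a shell."""
    command = os.environ.get("CUBBI_RUN_COMMAND")
    if not command:
        return
    result = subprocess.run(["gosu", "cubbi", "sh", "-c", command])
    if os.environ.get("CUBBI_NO_SHELL") == "true":
        # Mark initialization complete before exiting without a shell
        with open("/cubbi/init.status", "w") as f:
            f.write("INIT_COMPLETE=true\n")
        sys.exit(result.returncode)
```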
#### Status Tracking Script (`init-status.sh`) - *Shared*
A bash script that:
- Monitors initialization progress
- Displays logs during setup
- Ensures files exist before operations
- Switches to user shell when complete
### Tool-Specific Components
#### Tool Plugins (`{tool}_plugin.py`) - *Tool-specific*
Each tool MUST provide a plugin (`{tool}_plugin.py`) implementing:
```python
from cubbi_init import ToolPlugin
class MyToolPlugin(ToolPlugin):
@property
def tool_name(self) -> str:
return "mytool"
def initialize(self) -> bool:
"""Main tool initialization logic"""
# Tool-specific setup
return True
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
"""Integrate with available MCP servers"""
# MCP integration logic
return True
```
#### Image Configuration (`cubbi_image.yaml`) - *Tool-specific*
Each tool provides its own metadata file defining:
- Tool-specific environment variables
- Port configurations
- Volume mounts
- Persistent configuration mappings
## Plugin Architecture
### Plugin Discovery
Plugins are automatically discovered by:
1. Looking for `{image_name}_plugin.py` in the same directory as `cubbi_init.py`
2. Loading classes that inherit from `ToolPlugin`
3. Executing initialization and MCP integration
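A simplified sketch of such discovery using `importlib` (the real loader in `cubbi_init.py` may differ in detail):
```python
import importlib.util
from pathlib import Path
from cubbi_init import ToolPlugin

def discover_plugin(image_name: str):
    """Return the ToolPlugin subclass defined in {image_name}_plugin.py, if present."""
    plugin_path = Path(__file__).parent / f"{image_name}_plugin.py"
    if not plugin_path.exists():
        return None
    spec = importlib.util.spec_from_file_location(f"{image_name}_plugin", plugin_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    for obj in vars(module).values():
        if isinstance(obj, type) and issubclass(obj, ToolPlugin) and obj is not ToolPlugin:
            return obj  # the caller instantiates it and runs initialize()
    return None
```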
### Plugin Requirements
Tool plugins MUST:
- Inherit from `ToolPlugin` base class
- Implement `tool_name` property
- Implement `initialize()` method
- Optionally implement `integrate_mcp_servers()` method
- Use ruamel.yaml for configuration file operations
## Security Requirements
### User Isolation
- Container MUST NOT run processes as root after initialization
- All user processes MUST run as the `cubbi` user
- Proper file ownership and permissions MUST be maintained
### Secrets Management
- Sensitive environment variables MUST be marked as `sensitive: true`
- SSH keys and tokens MUST have restricted permissions (600)
- No secrets SHOULD be logged or exposed in configuration files
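For example, a small sketch of storing a Git SSH key with owner-only permissions, mirroring what the previous `cubbi-init.sh` did for `CUBBI_GIT_SSH_KEY` (function name illustrative):
```python
from pathlib import Path

def write_git_ssh_key(key_material: str) -> None:
    """Store a Git SSH key readable only by its owner."""
    ssh_dir = Path.home() / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    key_file = ssh_dir / "id_ed25519"
    key_file.write_text(key_material)
    key_file.chmod(0o600)
```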
### Network Security
- Only necessary ports SHOULD be exposed
- Network services should be properly configured and secured
## Integration Requirements
### MCP Server Integration
Images MUST support dynamic MCP server discovery and configuration through:
1. Environment variable parsing for server count and details
2. Automatic tool configuration updates
3. Standard MCP communication protocols
### Persistent Configuration
Images MUST support:
1. Configuration persistence through volume mounts
2. Symlink creation for tool configuration directories
3. Proper ownership and permission handling
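A sketch of the symlink step, based on the `CUBBI_PERSISTENT_LINKS` format documented in the image READMEs (semicolon-separated `source:target` pairs, where the source is the link location); the helper name is illustrative:
```python
import os
from pathlib import Path

def create_persistent_links() -> None:
    """Create source -> target symlinks from CUBBI_PERSISTENT_LINKS ("src:dst;src:dst")."""
    links = os.environ.get("CUBBI_PERSISTENT_LINKS", "")
    uid = int(os.environ.get("CUBBI_USER_ID", "1000"))
    gid = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
    for pair in filter(None, links.split(";")):
        source, _, target = pair.partition(":")
        if not source or not target:
            continue
        parent = Path(source).parent
        parent.mkdir(parents=True, exist_ok=True)
        os.chown(parent, uid, gid)
        # Equivalent of `ln -sfn`: replace an existing link without dereferencing it
        if os.path.lexists(source):
            os.remove(source)
        os.symlink(target, source)
        os.lchown(source, uid, gid)
```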
## Docker Integration
### Dockerfile Requirements
```dockerfile
# Copy shared scripts from main Cubbi repository
COPY cubbi_init.py /cubbi_init.py # Shared
COPY init-status.sh /init-status.sh # Shared
# Copy tool-specific files
COPY {tool}_plugin.py /{tool}_plugin.py # Tool-specific
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml # Tool-specific
# Install Python dependencies
RUN pip install ruamel.yaml
# Make scripts executable
RUN chmod +x /cubbi_init.py /init-status.sh
# Set entrypoint
ENTRYPOINT ["/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
```
### Init Container Support
For complex initialization, use:
```dockerfile
# Use init-status.sh as entrypoint for monitoring
ENTRYPOINT ["/init-status.sh"]
```
## Best Practices
### Performance
- Use multi-stage builds to minimize image size
- Clean up package caches and temporary files
- Use specific base image versions for reproducibility
### Maintainability
- Follow consistent naming conventions
- Include comprehensive documentation
- Use semantic versioning for image releases
- Provide clear error messages and logging
### Compatibility
- Support common development workflows
- Maintain backward compatibility when possible
- Test with various project types and configurations
## Validation Checklist
Before releasing a Cubbi image, verify:
- [ ] All required metadata fields are present in `cubbi_image.yaml`
- [ ] Standard environment variables are supported
- [ ] `cubbi_init.py` script is properly installed and executable
- [ ] Tool plugin is discovered and loads correctly
- [ ] User management works correctly
- [ ] Persistent configurations are properly handled
- [ ] MCP integration functions (if applicable)
- [ ] Tool-specific functionality works as expected
- [ ] Security requirements are met
- [ ] Python dependencies are satisfied
- [ ] Status tracking works correctly
- [ ] Documentation is complete and accurate
## Examples
### Complete Goose Example
See the `/cubbi/images/goose/` directory for a complete implementation including:
- `Dockerfile` - Container definition
- `cubbi_image.yaml` - Image metadata
- `goose_plugin.py` - Tool-specific initialization
- `README.md` - Tool-specific documentation
### Migration Notes
The current Python-based system uses:
- `cubbi_init.py` - Standalone initialization script with plugin support
- `{tool}_plugin.py` - Tool-specific configuration and MCP integration
- `init-status.sh` - Status monitoring and log display
- `cubbi_image.yaml` - Image metadata and configuration