16 Commits
v0.4.0 ... main

Author SHA1 Message Date
github-actions
21cb53b597 0.5.0
Automatically generated by python-semantic-release
2025-12-15 18:27:27 +00:00
10d9e9d3ab fix: prevent concurrent YAML corruption in sessions (#36)
fix: add file locking to prevent concurrent YAML corruption in sessions

When multiple cubbi instances run simultaneously, they can corrupt the
sessions.yaml file due to concurrent writes. This manifests as malformed
YAML entries (e.g., "status: running\ning2dc3ff11:").

This commit adds:
- fcntl-based file locking for all write operations
- Read-modify-write pattern that reloads from disk before each write
- Proper lock acquisition/release via context manager

All write operations (add_session, remove_session, save) now:
1. Acquire exclusive lock on sessions.yaml
2. Reload latest state from disk
3. Apply modifications
4. Write atomically to file
5. Update in-memory cache
6. Release lock

This ensures that concurrent cubbi instances can safely modify the
sessions file without corruption.
2025-12-15 12:25:55 -06:00
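A minimal sketch of the locking pattern described in this commit (hypothetical path and helper name; the real implementation lives in cubbi's session manager and additionally updates an in-memory cache, step 5):

```python
import fcntl
from contextlib import contextmanager
from pathlib import Path

import yaml

# Assumed location; cubbi keeps its state under ~/.config/cubbi/
SESSIONS_FILE = Path.home() / ".config" / "cubbi" / "sessions.yaml"

@contextmanager
def locked_sessions(path: Path = SESSIONS_FILE):
    """Exclusive read-modify-write on the sessions file (steps 1-6 above)."""
    path.touch(exist_ok=True)
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)        # 1. acquire exclusive lock
        try:
            data = yaml.safe_load(f) or {}   # 2. reload latest state from disk
            yield data                       # 3. caller applies modifications
            f.seek(0)
            f.truncate()
            yaml.safe_dump(data, f)          # 4. write back under the lock
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)    # 6. release lock

# Usage: two concurrent instances doing this cannot interleave writes
with locked_sessions() as sessions:
    sessions["2dc3ff11"] = {"status": "running"}
```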
b788f3f52e fix: ensure Docker containers are always removed when closing sessions (#35)
When closing sessions with already-stopped containers, the stop/kill
operation would raise an exception, preventing container.remove() from
being called. This left stopped containers in Docker even though they
were removed from cubbi's session tracking.

The fix wraps stop/kill operations in their own try-except block,
allowing the code to always reach container.remove() regardless of
whether the container was already stopped.
2025-10-06 16:40:50 -06:00
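The shape of that fix, sketched with the Docker SDK (simplified from the close_session changes visible in the container.py diff further down):

```python
import docker
from docker.errors import DockerException

def close_container(client: docker.DockerClient, container_id: str, kill: bool = False) -> None:
    container = client.containers.get(container_id)
    try:
        # stop()/kill() can raise for a container that is already stopped;
        # swallow that so the remove() below always runs
        if kill:
            container.kill()
        else:
            container.stop()
    except DockerException:
        pass
    container.remove()
```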
3795de1484 docs: update README with --no-cache and local MCP server documentation
- Added documentation for the new --no-cache flag in image build command
- Added documentation for local MCP server support (add-local command)
- Updated MCP server types to include local MCP servers
- Added examples for all three types of MCP servers (Docker, Remote, Local)
2025-09-25 16:41:46 -06:00
be171cf2c6 feat: add --no-cache option to image build command
Added a --no-cache flag to the 'cubbi image build' command to allow building
Docker images without using the build cache, which is useful for forcing fresh builds.
2025-09-25 16:40:20 -06:00
b9cffe3008 feat: add local MCP server support
- Add LocalMCP model for stdio-based MCP servers
- Implement add_local_mcp() method in MCPManager
- Add 'mcp add-local' CLI command with args and env support
- Update cubbi_init.py MCPConfig with command, args, env fields
- Add local MCP support in interactive configure tool
- Update image plugins (opencode, goose, crush) to handle local MCPs
  - OpenCode: Maps to "local" type with command array
  - Goose: Maps to "stdio" type with command/args
  - Crush: Maps to "stdio" transport type

Local MCPs run as stdio-based commands inside containers, allowing
users to integrate local MCP servers without containerization.
2025-09-25 16:12:24 -06:00
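Roughly how one local MCP definition fans out to the three plugin formats named above (the exact field names in each tool's config schema are paraphrased from this commit, not verified):

```python
# A local MCP as `cubbi mcp add-local` records it: name, command, args, env
local_mcp = {
    "name": "filesystem",
    "command": "npx",
    "args": ["@modelcontextprotocol/server-filesystem", "/data"],
    "env": {"API_KEY": "xxx"},
}

def to_opencode(mcp: dict) -> dict:
    # OpenCode: "local" type with a single command array
    return {"type": "local", "command": [mcp["command"], *mcp["args"]], "environment": mcp["env"]}

def to_goose(mcp: dict) -> dict:
    # Goose: "stdio" type with separate command/args
    return {"type": "stdio", "cmd": mcp["command"], "args": mcp["args"], "envs": mcp["env"]}

def to_crush(mcp: dict) -> dict:
    # Crush: "stdio" transport type
    return {"transport": "stdio", "command": mcp["command"], "args": mcp["args"], "env": mcp["env"]}
```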
a66843714d fix: remove container even if already removed 2025-09-25 15:28:35 -06:00
407c1a1c9b fix: make groupadd optional (group may already exist, e.g. gid 20 on macOS) 2025-08-17 23:12:23 -06:00
fc819a3861 feat: universal model management for all standard providers (#34)
* fix: add crush plugin support too

* feat: comprehensive model management for all standard providers

- Add universal provider support for model fetching (OpenAI, Anthropic, Google, OpenRouter)
- Add default API URLs for standard providers in config.py
- Enhance model fetcher with provider-specific authentication:
  * Anthropic: x-api-key header + anthropic-version header
  * Google: x-goog-api-key header + custom response format handling
  * OpenAI/OpenRouter: Bearer token (unchanged)
- Support Google's unique API response format (models vs data key, name vs id field)
- Update CLI commands to work with all supported provider types
- Enhance configure interface to include all providers (even those without API keys)
- Update both OpenCode and Crush plugins to populate models for all provider types
- Add comprehensive provider support detection methods
2025-08-08 21:12:04 +00:00
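A sketch of the per-provider authentication this commit describes (header names match the providers' public APIs; treat the exact cubbi model_fetcher internals as unverified):

```python
import requests

def fetch_models(provider_type: str, base_url: str, api_key: str) -> list[str]:
    if provider_type == "anthropic":
        headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    elif provider_type == "google":
        headers = {"x-goog-api-key": api_key}
    else:  # openai / openrouter: Bearer token (unchanged)
        headers = {"Authorization": f"Bearer {api_key}"}

    # Google lists models under /v1beta/models; OpenAI-style APIs use /v1/models
    path = "/v1beta/models" if provider_type == "google" else "/v1/models"
    resp = requests.get(f"{base_url}{path}", headers=headers, timeout=30)
    resp.raise_for_status()
    payload = resp.json()

    # Google nests results under "models" keyed by "name";
    # OpenAI-style responses use "data" keyed by "id"
    if provider_type == "google":
        return [m["name"] for m in payload.get("models", [])]
    return [m["id"] for m in payload.get("data", [])]
```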
7d6bc5dbfa feat: dynamic model management for OpenAI-compatible providers (#33)
feat: add models fetch for openai-compatible endpoint
2025-08-08 12:08:08 -06:00
310149dc34 fix: cubbi configure not working when configuring other provider (#32) 2025-08-08 16:24:45 +00:00
3a7b9213b0 refactor: deep clean plugins (#31)
* refactor: deep clean plugins

* refactor: modernize plugin system with Python 3.12+ typing and simplified discovery

- Update typing to Python 3.12+ style (Dict->dict, Optional->union types)
- Simplify plugin discovery using PLUGIN_CLASS exports instead of dir() reflection
- Add public get_user_ids() and set_ownership() functions in cubbi_init
- Add create_directory_with_ownership() helper method to ToolPlugin base class
- Replace initialize() + integrate_mcp_servers() pattern with unified configure()
- Add is_already_configured() checks to prevent overwriting existing configs
- Remove excessive comments and clean up code structure
- All 5 plugins updated: goose, opencode, claudecode, aider, crush

* fix: remove duplicate
2025-08-07 12:10:22 -06:00
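The discovery half of that refactor, sketched (module layout is assumed from the plugin names; the point is the explicit PLUGIN_CLASS export replacing dir() reflection):

```python
import importlib

PLUGIN_NAMES = ["goose", "opencode", "claudecode", "aider", "crush"]

def load_plugins() -> dict[str, object]:
    plugins = {}
    for name in PLUGIN_NAMES:
        # Assumed module path; each plugin module exports exactly one class
        module = importlib.import_module(f"cubbi.images.{name}.{name}_plugin")
        plugins[name] = module.PLUGIN_CLASS()
    return plugins
```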
a709071d10 fix: crush providers configuration (#30) 2025-08-07 11:04:51 -06:00
bae951cf7c feat: comprehensive configuration system and environment variable forwarding (#29)
* feat: migrate container configuration from env vars to YAML config files

- Replace environment variable-based configuration with structured YAML config files
- Add Pydantic models for type-safe configuration management in cubbi_init.py
- Update container.py to generate /cubbi/config.yaml and mount into containers
- Simplify goose plugin to extract provider from default model format
- Remove complex environment variable handling in favor of direct config access
- Maintain backward compatibility while enabling cleaner plugin architecture

* feat: optimize goose plugin to only pass required API key for selected model

- Update goose plugin to set only the API key for the provider of the selected model
- Add selective API key configuration for anthropic, openai, google, and openrouter
- Update README.md with comprehensive automated testing documentation
- Add litellm/gpt-oss:120b to test.sh model matrix (now 5 images × 4 models = 20 tests)
- Include single prompt command syntax for each tool in the documentation

* feat: add comprehensive integration tests with pytest parametrization

- Create tests/test_integration.py with parametrized tests for 5 images × 4 models (20 combinations)
- Add pytest configuration to exclude integration tests by default
- Add integration marker for selective test running
- Include help command tests and image availability tests
- Document test usage in tests/README_integration.md

Integration tests cover:
- goose, aider, claudecode, opencode, crush images
- anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b models
- Proper command syntax for each tool
- Success validation with exit codes and completion markers

Usage:
- pytest (regular tests only)
- pytest -m integration (integration tests only)
- pytest -m integration -k "goose" (specific image)

* feat: update OpenCode plugin with perfect multi-provider configuration

- Add global STANDARD_PROVIDERS constant for maintainability
- Support custom providers (with baseURL) vs standard providers
- Custom providers: include npm package, name, baseURL, apiKey, models
- Standard providers: include only apiKey and empty models
- Use direct API key values from cubbi config instead of env vars
- Only add default model to the provider that matches the default model
- Use @ai-sdk/openai-compatible for OpenAI-compatible providers
- Preserve model names without transformation
- All providers get required empty models{} section per OpenCode spec

This ensures OpenCode can properly recognize and use both native
providers (anthropic, openai, google, openrouter) and custom
providers (litellm, etc.) with correct configuration format.

* refactor: model is now a combination of provider/model

* feat: add separate integration test for Claude Code without model config

Claude Code is Anthropic-specific and doesn't require model selection like other tools.
Created dedicated test that verifies basic functionality without model preselection.

* feat: update Claude Code and Crush plugins to use new config system

- Claude Code plugin now uses cubbi_config.providers to get Anthropic API key
- Crush plugin updated to use cubbi_config.providers for provider configuration
- Both plugins maintain backwards compatibility with environment variables
- Consistent plugin structure across all cubbi images

* feat: add environments_to_forward support for images

- Add environments_to_forward field to ImageConfig and Image models
- Update container creation logic to forward specified environment variables from host
- Add environments_to_forward to claudecode cubbi_image.yaml to ensure Anthropic API key is always available
- Claude Code now gets required environment variables regardless of model selection
- This ensures Claude Code works properly even when other models are specified

Fixes the issue where Claude Code couldn't access Anthropic API key when using different model configurations.

* refactor: remove unused environment field from cubbi_image.yaml files

The 'environment' field was loaded but never processed at runtime.
Only 'environments_to_forward' is actually used to pass environment
variables from host to container.

Cleaned up configuration files by removing:
- 72 lines from aider/cubbi_image.yaml
- 42 lines from claudecode/cubbi_image.yaml
- 28 lines from crush/cubbi_image.yaml
- 16 lines from goose/cubbi_image.yaml
- Empty environment: [] from opencode/cubbi_image.yaml

This makes the configuration files cleaner, leaving only fields that are
actually used by the system.

* feat: implement environment variable forwarding for aider

Updates aider to automatically receive all relevant environment variables
from the host, similar to how opencode works.

Changes:
- Added environments_to_forward field to aider/cubbi_image.yaml with
  comprehensive list of API keys, configuration, and proxy variables
- Updated aider_plugin.py to use cubbi_config system for provider/model setup
- Environment variables now forwarded automatically during container creation
- Maintains backward compatibility with legacy environment variables

Environment variables forwarded:
- API Keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, etc.
- Configuration: AIDER_MODEL, GIT_* variables, HTTP_PROXY, etc.
- Timezone: TZ for proper log timestamps

Tested: All aider tests pass, environment variables confirmed forwarded.

* refactor: remove unused volumes and init fields from cubbi_image.yaml files

Both 'volumes' and 'init' fields were loaded but never processed at runtime.
These were incomplete implementations that didn't affect container behavior.

Removed from all 5 images:
- volumes: List with mountPath: /app (incomplete, missing host paths)
- init: pre_command and command fields (unused during container creation)

The cubbi_image.yaml files now only contain fields that are actually used:
- Basic metadata (name, description, version, maintainer, image)
- persistent_configs (working functionality)
- environments_to_forward (working functionality where present)

This makes the configuration files cleaner and eliminates confusion
about what functionality is actually implemented.

* refactor: remove unused ImageInit and VolumeMount models

These models were only referenced in the Image model definition but
never used at runtime since we removed all init: and volumes: fields
from cubbi_image.yaml files.

Removed:
- VolumeMount class (mountPath, description fields)
- ImageInit class (pre_command, command fields)
- init: Optional[ImageInit] field from Image model
- volumes: List[VolumeMount] field from Image model

The Image model now only contains fields that are actually used:
- Basic metadata (name, description, version, maintainer, image)
- environment (loaded but unused - kept for future cleanup)
- persistent_configs (working functionality)
- environments_to_forward (working functionality)

This makes the data model cleaner and eliminates dead code.

* feat: add interactive configuration command

Adds `cubbi configure` command for interactive setup of LLM providers
and models through a user-friendly questionnaire interface.

New features:
- Interactive provider configuration (OpenAI, Anthropic, OpenRouter, etc.)
- API key management with environment variable references
- Model selection with provider/model format validation
- Default settings configuration (image, ports, volumes, etc.)
- Added questionary dependency for interactive prompts

Changes:
- Added cubbi/configure.py with full interactive configuration logic
- Added configure command to cubbi/cli.py
- Updated uv.lock with questionary and prompt-toolkit dependencies

Usage: `cubbi configure`

* refactor: update integration tests for current functionality

Updates integration tests to reflect current cubbi functionality:

test_integration.py:
- Simplified image list (removed crush temporarily)
- Updated model list with current supported models
- Removed outdated help command tests that were timing out
- Simplified claudecode test to basic functionality test
- Updated command templates for current tool versions

test_integration_docker.py:
- Cleaned up container management tests
- Fixed formatting and improved readability
- Updated assertion formatting for better error messages

These changes align the tests with the current state of the codebase
and remove tests that were causing timeouts or failures.

* fix: fix temporary file chmod
2025-08-06 21:27:26 -06:00
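Taken together, PR #29 means each session container gets a single mounted config file instead of a pile of environment variables. Its rough shape (keys mirror the _generate_container_config code in the container.py diff further down; values here are illustrative):

```python
# Approximate structure of the generated /cubbi/config.yaml
config = {
    "version": "1.0",
    "user": {"uid": 1000, "gid": 1000},
    "providers": {
        # ${VAR} references in api_key are resolved from the host environment
        "anthropic": {"type": "anthropic", "api_key": "sk-..."},
        "litellm": {"type": "openai", "api_key": "...", "base_url": "http://litellm:4000"},
    },
    "mcps": [],
    "project": {"config_dir": "/cubbi-config", "image_config_dir": "/cubbi-config/goose"},
    "ssh": {"enabled": False},
    "defaults": {"model": "anthropic/claude-sonnet-4-20250514"},  # provider/model format
    "no_shell": False,
}
```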
e4c64a54ed fix: remove persistent_configs of images (#28) 2025-08-06 17:49:36 +00:00
b7b78ea075 feat: add opencode state/cache to persistent_config (#27) 2025-08-06 02:47:29 +00:00
27 changed files with 3987 additions and 1223 deletions

CHANGELOG.md

@@ -1,6 +1,294 @@
# CHANGELOG
## v0.5.0 (2025-12-15)
### Bug Fixes
- Crush providers configuration ([#30](https://github.com/Monadical-SAS/cubbi/pull/30),
[`a709071`](https://github.com/Monadical-SAS/cubbi/commit/a709071d1008d7b805da86d82fb056e144a328fd))
- Cubbi configure not working when configuring other provider
([#32](https://github.com/Monadical-SAS/cubbi/pull/32),
[`310149d`](https://github.com/Monadical-SAS/cubbi/commit/310149dc34bfd41237ee92ff42620bf3f4316634))
- Ensure Docker containers are always removed when closing sessions
([#35](https://github.com/Monadical-SAS/cubbi/pull/35),
[`b788f3f`](https://github.com/Monadical-SAS/cubbi/commit/b788f3f52e6f85fd99e1dd117565850dbe13332b))
When closing sessions with already-stopped containers, the stop/kill operation would raise an
exception, preventing container.remove() from being called. This left stopped containers in Docker
even though they were removed from cubbi's session tracking.
The fix wraps stop/kill operations in their own try-except block, allowing the code to always reach
container.remove() regardless of whether the container was already stopped.
- Make groupadd optional (group may already exist, e.g. gid 20 on macOS)
([`407c1a1`](https://github.com/Monadical-SAS/cubbi/commit/407c1a1c9bc85e06600c762c78905d1bfdf89922))
- Prevent concurrent YAML corruption in sessions
([#36](https://github.com/Monadical-SAS/cubbi/pull/36),
[`10d9e9d`](https://github.com/Monadical-SAS/cubbi/commit/10d9e9d3abc135718be667adc574a7b3f8470ff7))
fix: add file locking to prevent concurrent YAML corruption in sessions
When multiple cubbi instances run simultaneously, they can corrupt the sessions.yaml file due to
concurrent writes. This manifests as malformed YAML entries (e.g., "status:
running\ning2dc3ff11:").
This commit adds: - fcntl-based file locking for all write operations - Read-modify-write pattern
that reloads from disk before each write - Proper lock acquisition/release via context manager
All write operations (add_session, remove_session, save) now: 1. Acquire exclusive lock on
sessions.yaml 2. Reload latest state from disk 3. Apply modifications 4. Write atomically to file
5. Update in-memory cache 6. Release lock
This ensures that concurrent cubbi instances can safely modify the sessions file without corruption.
- Remove container even if already removed
([`a668437`](https://github.com/Monadical-SAS/cubbi/commit/a66843714d01d163e2ce17dd4399a0fa64d2be65))
- Remove persistent_configs of images ([#28](https://github.com/Monadical-SAS/cubbi/pull/28),
[`e4c64a5`](https://github.com/Monadical-SAS/cubbi/commit/e4c64a54ed39ba0a65ace75c7f03ff287073e71e))
### Documentation
- Update README with --no-cache and local MCP server documentation
([`3795de1`](https://github.com/Monadical-SAS/cubbi/commit/3795de1484e1df3905c8eb90908ab79927b03194))
- Added documentation for the new --no-cache flag in image build command - Added documentation for
local MCP server support (add-local command) - Updated MCP server types to include local MCP
servers - Added examples for all three types of MCP servers (Docker, Remote, Local)
### Features
- Add --no-cache option to image build command
([`be171cf`](https://github.com/Monadical-SAS/cubbi/commit/be171cf2c6252dfa926a759915a057a3a6791cc2))
Added a --no-cache flag to the 'cubbi image build' command to allow building Docker images without
using the build cache, which is useful for forcing fresh builds.
- Add local MCP server support
([`b9cffe3`](https://github.com/Monadical-SAS/cubbi/commit/b9cffe3008bccbcf4eaa7c5c03e62215520d8627))
- Add LocalMCP model for stdio-based MCP servers - Implement add_local_mcp() method in MCPManager -
Add 'mcp add-local' CLI command with args and env support - Update cubbi_init.py MCPConfig with
command, args, env fields - Add local MCP support in interactive configure tool - Update image
plugins (opencode, goose, crush) to handle local MCPs - OpenCode: Maps to "local" type with
command array - Goose: Maps to "stdio" type with command/args - Crush: Maps to "stdio" transport
type
Local MCPs run as stdio-based commands inside containers, allowing users to integrate local MCP
servers without containerization.
- Add opencode state/cache to persistent_config
([#27](https://github.com/Monadical-SAS/cubbi/pull/27),
[`b7b78ea`](https://github.com/Monadical-SAS/cubbi/commit/b7b78ea0754360efe56cf3f3255f90efda737a91))
- Comprehensive configuration system and environment variable forwarding
([#29](https://github.com/Monadical-SAS/cubbi/pull/29),
[`bae951c`](https://github.com/Monadical-SAS/cubbi/commit/bae951cf7c4e498b6cdd7cd00836935acbd98e42))
* feat: migrate container configuration from env vars to YAML config files
- Replace environment variable-based configuration with structured YAML config files - Add Pydantic
models for type-safe configuration management in cubbi_init.py - Update container.py to generate
/cubbi/config.yaml and mount into containers - Simplify goose plugin to extract provider from
default model format - Remove complex environment variable handling in favor of direct config
access - Maintain backward compatibility while enabling cleaner plugin architecture
* feat: optimize goose plugin to only pass required API key for selected model
- Update goose plugin to set only the API key for the provider of the selected model - Add selective
API key configuration for anthropic, openai, google, and openrouter - Update README.md with
comprehensive automated testing documentation - Add litellm/gpt-oss:120b to test.sh model matrix
(now 5 images × 4 models = 20 tests) - Include single prompt command syntax for each tool in the
documentation
* feat: add comprehensive integration tests with pytest parametrization
- Create tests/test_integration.py with parametrized tests for 5 images × 4 models (20 combinations)
- Add pytest configuration to exclude integration tests by default - Add integration marker for
selective test running - Include help command tests and image availability tests - Document test
usage in tests/README_integration.md
Integration tests cover: - goose, aider, claudecode, opencode, crush images -
anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b
models - Proper command syntax for each tool - Success validation with exit codes and completion
markers
Usage: - pytest (regular tests only) - pytest -m integration (integration tests only) - pytest -m
integration -k "goose" (specific image)
* feat: update OpenCode plugin with perfect multi-provider configuration
- Add global STANDARD_PROVIDERS constant for maintainability - Support custom providers (with
baseURL) vs standard providers - Custom providers: include npm package, name, baseURL, apiKey,
models - Standard providers: include only apiKey and empty models - Use direct API key values from
cubbi config instead of env vars - Only add default model to the provider that matches the default
model - Use @ai-sdk/openai-compatible for OpenAI-compatible providers - Preserve model names
without transformation - All providers get required empty models{} section per OpenCode spec
This ensures OpenCode can properly recognize and use both native providers (anthropic, openai,
google, openrouter) and custom providers (litellm, etc.) with correct configuration format.
* refactor: model is now a combination of provider/model
* feat: add separate integration test for Claude Code without model config
Claude Code is Anthropic-specific and doesn't require model selection like other tools. Created
dedicated test that verifies basic functionality without model preselection.
* feat: update Claude Code and Crush plugins to use new config system
- Claude Code plugin now uses cubbi_config.providers to get Anthropic API key - Crush plugin updated
to use cubbi_config.providers for provider configuration - Both plugins maintain backwards
compatibility with environment variables - Consistent plugin structure across all cubbi images
* feat: add environments_to_forward support for images
- Add environments_to_forward field to ImageConfig and Image models - Update container creation
logic to forward specified environment variables from host - Add environments_to_forward to
claudecode cubbi_image.yaml to ensure Anthropic API key is always available - Claude Code now gets
required environment variables regardless of model selection - This ensures Claude Code works
properly even when other models are specified
Fixes the issue where Claude Code couldn't access Anthropic API key when using different model
configurations.
* refactor: remove unused environment field from cubbi_image.yaml files
The 'environment' field was loaded but never processed at runtime. Only 'environments_to_forward' is
actually used to pass environment variables from host to container.
Cleaned up configuration files by removing: - 72 lines from aider/cubbi_image.yaml - 42 lines from
claudecode/cubbi_image.yaml - 28 lines from crush/cubbi_image.yaml - 16 lines from
goose/cubbi_image.yaml - Empty environment: [] from opencode/cubbi_image.yaml
This makes the configuration files cleaner, leaving only fields that are actually used by the
system.
* feat: implement environment variable forwarding for aider
Updates aider to automatically receive all relevant environment variables from the host, similar to
how opencode works.
Changes: - Added environments_to_forward field to aider/cubbi_image.yaml with comprehensive list of
API keys, configuration, and proxy variables - Updated aider_plugin.py to use cubbi_config system
for provider/model setup - Environment variables now forwarded automatically during container
creation - Maintains backward compatibility with legacy environment variables
Environment variables forwarded: - API Keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY,
etc. - Configuration: AIDER_MODEL, GIT_* variables, HTTP_PROXY, etc. - Timezone: TZ for proper log
timestamps
Tested: All aider tests pass, environment variables confirmed forwarded.
* refactor: remove unused volumes and init fields from cubbi_image.yaml files
Both 'volumes' and 'init' fields were loaded but never processed at runtime. These were incomplete
implementations that didn't affect container behavior.
Removed from all 5 images: - volumes: List with mountPath: /app (incomplete, missing host paths) -
init: pre_command and command fields (unused during container creation)
The cubbi_image.yaml files now only contain fields that are actually used: - Basic metadata (name,
description, version, maintainer, image) - persistent_configs (working functionality) -
environments_to_forward (working functionality where present)
This makes the configuration files cleaner and eliminates confusion about what functionality is
actually implemented.
* refactor: remove unused ImageInit and VolumeMount models
These models were only referenced in the Image model definition but never used at runtime since we
removed all init: and volumes: fields from cubbi_image.yaml files.
Removed: - VolumeMount class (mountPath, description fields) - ImageInit class (pre_command, command
fields) - init: Optional[ImageInit] field from Image model - volumes: List[VolumeMount] field from
Image model
The Image model now only contains fields that are actually used: - Basic metadata (name,
description, version, maintainer, image) - environment (loaded but unused - kept for future
cleanup) - persistent_configs (working functionality) - environments_to_forward (working
functionality)
This makes the data model cleaner and eliminates dead code.
* feat: add interactive configuration command
Adds `cubbi configure` command for interactive setup of LLM providers and models through a
user-friendly questionnaire interface.
New features: - Interactive provider configuration (OpenAI, Anthropic, OpenRouter, etc.) - API key
management with environment variable references - Model selection with provider/model format
validation - Default settings configuration (image, ports, volumes, etc.) - Added questionary
dependency for interactive prompts
Changes: - Added cubbi/configure.py with full interactive configuration logic - Added configure
command to cubbi/cli.py - Updated uv.lock with questionary and prompt-toolkit dependencies
Usage: `cubbi configure`
* refactor: update integration tests for current functionality
Updates integration tests to reflect current cubbi functionality:
test_integration.py: - Simplified image list (removed crush temporarily) - Updated model list with
current supported models - Removed outdated help command tests that were timing out - Simplified
claudecode test to basic functionality test - Updated command templates for current tool versions
test_integration_docker.py: - Cleaned up container management tests - Fixed formatting and improved
readability - Updated assertion formatting for better error messages
These changes align the tests with the current state of the codebase and remove tests that were
causing timeouts or failures.
* fix: fix temporary file chmod
- Dynamic model management for OpenAI-compatible providers
([#33](https://github.com/Monadical-SAS/cubbi/pull/33),
[`7d6bc5d`](https://github.com/Monadical-SAS/cubbi/commit/7d6bc5dbfa5f4d4ef69a7b806846aebdeec38aa0))
feat: add models fetch for openai-compatible endpoint
- Universal model management for all standard providers
([#34](https://github.com/Monadical-SAS/cubbi/pull/34),
[`fc819a3`](https://github.com/Monadical-SAS/cubbi/commit/fc819a386185330e60946ee4712f268cfed2b66a))
* fix: add crush plugin support too
* feat: comprehensive model management for all standard providers
- Add universal provider support for model fetching (OpenAI, Anthropic, Google, OpenRouter) - Add
default API URLs for standard providers in config.py - Enhance model fetcher with
provider-specific authentication: * Anthropic: x-api-key header + anthropic-version header *
Google: x-goog-api-key header + custom response format handling * OpenAI/OpenRouter: Bearer token
(unchanged) - Support Google's unique API response format (models vs data key, name vs id field) -
Update CLI commands to work with all supported provider types - Enhance configure interface to
include all providers (even those without API keys) - Update both OpenCode and Crush plugins to
populate models for all provider types - Add comprehensive provider support detection methods
### Refactoring
- Deep clean plugins ([#31](https://github.com/Monadical-SAS/cubbi/pull/31),
[`3a7b921`](https://github.com/Monadical-SAS/cubbi/commit/3a7b9213b0d4e5ce0cfb1250624651b242fdc325))
* refactor: deep clean plugins
* refactor: modernize plugin system with Python 3.12+ typing and simplified discovery
- Update typing to Python 3.12+ style (Dict->dict, Optional->union types) - Simplify plugin
discovery using PLUGIN_CLASS exports instead of dir() reflection - Add public get_user_ids() and
set_ownership() functions in cubbi_init - Add create_directory_with_ownership() helper method to
ToolPlugin base class - Replace initialize() + integrate_mcp_servers() pattern with unified
configure() - Add is_already_configured() checks to prevent overwriting existing configs - Remove
excessive comments and clean up code structure - All 5 plugins updated: goose, opencode,
claudecode, aider, crush
* fix: remove duplicate
## v0.4.0 (2025-08-06)
### Documentation

README.md

@@ -144,13 +144,37 @@ Cubbi includes an image management system that allows you to build, manage, and
**Supported Images**

-| Image Name | Langtrace Support |
-|------------|-------------------|
-| goose      | yes               |
-| opencode   | no                |
-| claudecode | no                |
-| aider      | no                |
-| crush      | no                |
+| Image Name | Langtrace Support | Single Prompt Command |
+|------------|-------------------|-----------------------|
+| goose      | yes               | `goose run -t 'prompt' --no-session --quiet` |
+| opencode   | no                | `opencode run -m MODEL 'prompt'` |
+| claudecode | no                | `claude -p 'prompt'` |
+| aider      | no                | `aider --message 'prompt' --yes-always --no-fancy-input` |
+| crush      | no                | `crush run 'prompt'` |
**Automated Testing:**
Each image can be tested with single prompt commands using different models:
```bash
# Test a single image with a specific model
cubbix -i goose -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "goose run -t 'What is 2+2?' --no-session --quiet"
# Test aider with non-interactive flags
cubbix -i aider -m openai/gpt-4o --no-connect --no-shell --run "aider --message 'What is 2+2?' --yes-always --no-fancy-input --no-check-update"
# Test claude-code (note: binary name is 'claude', not 'claude-code')
cubbix -i claudecode -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "claude -p 'What is 2+2?'"
# Test opencode with model specification
cubbix -i opencode -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "opencode run -m anthropic/claude-sonnet-4-20250514 'What is 2+2?'"
# Test crush
cubbix -i crush -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "crush run 'What is 2+2?'"
# Run comprehensive test suite (requires test.sh script)
./test.sh # Tests all images with multiple models: anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b
```
```bash
# List available images
@@ -165,6 +189,9 @@ cubbi image info crush
cubbi image build goose
cubbi image build opencode
cubbi image build crush
# Build an image without using cache (force fresh build)
cubbi image build --no-cache goose
```

Images are defined in the `cubbi/images/` directory, with each subdirectory containing:
@@ -341,7 +368,8 @@ Service credentials like API keys configured in `~/.config/cubbi/config.yaml` ar
MCP (Model Context Protocol) servers provide tool-calling capabilities to AI models, enhancing their ability to interact with external services, databases, and systems. Cubbi supports multiple types of MCP servers:

1. **Remote HTTP SSE servers** - External MCP servers accessed over HTTP
-2. **Docker-based MCP servers** - Local MCP servers running in Docker containers, with a SSE proxy for stdio-to-SSE conversion
+2. **Docker-based MCP servers** - MCP servers running in Docker containers, with an SSE proxy for stdio-to-SSE conversion
3. **Local MCP servers** - MCP servers launched as stdio-based processes inside the session container
### Managing MCP Servers
@@ -389,12 +417,24 @@ cubbi mcp remove github
Cubbi supports different types of MCP servers:

```bash
-# Example of docker-based MCP server
+# Docker-based MCP server (with proxy)
cubbi mcp add fetch mcp/fetch
-cubbi mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=xxxx github mcp/github
+cubbi mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=xxxx mcp/github mcp/github-proxy

-# Example of SSE-based MCP server
-cubbi mcp add myserver https://myssemcp.com
+# Remote HTTP SSE server
+cubbi mcp add-remote myserver https://myssemcp.com/sse
# Local MCP server (runs as a local process)
cubbi mcp add-local mylocalmcp /path/to/mcp-executable
cubbi mcp add-local mylocalmcp /usr/local/bin/mcp-tool --args "--config" --args "/etc/mcp.conf"
cubbi mcp add-local mylocalmcp npx --args "@modelcontextprotocol/server-filesystem" --args "/path/to/data"
# Add environment variables to local MCP servers
cubbi mcp add-local mylocalmcp /path/to/mcp-server -e API_KEY=xxx -e BASE_URL=https://api.example.com
# Prevent adding to default MCPs
cubbi mcp add myserver mcp/server --no-default
cubbi mcp add-local mylocalmcp /path/to/executable --no-default
```

### Using MCP Servers with Sessions

cubbi/cli.py

@@ -14,6 +14,7 @@ from rich.console import Console
from rich.table import Table

from .config import ConfigManager
from .configure import run_interactive_config
from .container import ContainerManager
from .mcp import MCPManager
from .models import SessionStatus
@@ -60,6 +61,12 @@ def main(
        logging.getLogger().setLevel(logging.INFO)
@app.command()
def configure() -> None:
"""Interactive configuration of LLM providers and models"""
run_interactive_config()
@app.command()
def version() -> None:
    """Show Cubbi version information"""
@@ -173,9 +180,11 @@ def create_session(
    gid: Optional[int] = typer.Option(
        None, "--gid", help="Group ID to run the container as (defaults to host user)"
    ),
-    model: Optional[str] = typer.Option(None, "--model", help="Model to use"),
-    provider: Optional[str] = typer.Option(
-        None, "--provider", "-p", help="Provider to use"
-    ),
+    model: Optional[str] = typer.Option(
+        None,
+        "--model",
+        "-m",
+        help="Model to use in 'provider/model' format (e.g., 'anthropic/claude-3-5-sonnet')",
+    ),
    ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
    config: List[str] = typer.Option(
@@ -387,15 +396,10 @@ def create_session(
"[yellow]Warning: --no-shell is ignored without --run[/yellow]" "[yellow]Warning: --no-shell is ignored without --run[/yellow]"
) )
# Use model and provider from config overrides if not explicitly provided # Use model from config overrides if not explicitly provided
final_model = ( final_model = (
model if model is not None else temp_user_config.get("defaults.model") model if model is not None else temp_user_config.get("defaults.model")
) )
final_provider = (
provider
if provider is not None
else temp_user_config.get("defaults.provider")
)
    session = container_manager.create_session(
        image_name=image_name,
@@ -414,7 +418,6 @@ def create_session(
        gid=target_gid,
        ssh=ssh,
        model=final_model,
-        provider=final_provider,
        domains=all_domains,
    )
@@ -619,6 +622,9 @@ def build_image(
    push: bool = typer.Option(
        False, "--push", "-p", help="Push image to registry after building"
    ),
no_cache: bool = typer.Option(
False, "--no-cache", help="Build without using cache"
),
) -> None:
    """Build an image Docker image"""
    # Get image path
@@ -683,9 +689,11 @@ def build_image(
        # Build the image from temporary directory
        with console.status(f"Building image {docker_image_name}..."):
-            result = os.system(
-                f"cd {temp_path} && docker build -t {docker_image_name} ."
-            )
+            build_cmd = f"cd {temp_path} && docker build"
+            if no_cache:
+                build_cmd += " --no-cache"
+            build_cmd += f" -t {docker_image_name} ."
+            result = os.system(build_cmd)
    except Exception as e:
        console.print(f"[red]Error preparing build context: {e}[/red]")
@@ -759,6 +767,10 @@ config_app.add_typer(port_app, name="port", no_args_is_help=True)
config_mcp_app = typer.Typer(help="Manage default MCP servers")
config_app.add_typer(config_mcp_app, name="mcp", no_args_is_help=True)
# Create a models subcommand for config
models_app = typer.Typer(help="Manage provider models")
config_app.add_typer(models_app, name="models", no_args_is_help=True)
# MCP configuration commands
@config_mcp_app.command("list")
@@ -1779,6 +1791,50 @@ def add_remote_mcp(
console.print(f"[red]Error adding remote MCP server: {e}[/red]") console.print(f"[red]Error adding remote MCP server: {e}[/red]")
@mcp_app.command("add-local")
def add_local_mcp(
name: str = typer.Argument(..., help="MCP server name"),
command: str = typer.Argument(..., help="Path to executable"),
args: List[str] = typer.Option([], "--args", "-a", help="Command arguments"),
env: List[str] = typer.Option(
[], "--env", "-e", help="Environment variables (format: KEY=VALUE)"
),
no_default: bool = typer.Option(
False, "--no-default", help="Don't add to default MCPs"
),
) -> None:
"""Add a local MCP server"""
# Parse environment variables
environment = {}
for e in env:
if "=" in e:
key, value = e.split("=", 1)
environment[key] = value
else:
console.print(f"[yellow]Warning: Ignoring invalid env format: {e}[/yellow]")
try:
with console.status(f"Adding local MCP server '{name}'..."):
mcp_manager.add_local_mcp(
name,
command,
args,
environment,
add_as_default=not no_default,
)
console.print(f"[green]Added local MCP server '{name}'[/green]")
console.print(f"Command: {command}")
if args:
console.print(f"Arguments: {' '.join(args)}")
if not no_default:
console.print(f"MCP server '{name}' added to defaults")
else:
console.print(f"MCP server '{name}' not added to defaults")
except Exception as e:
console.print(f"[red]Error adding local MCP server: {e}[/red]")
@mcp_app.command("inspector") @mcp_app.command("inspector")
def run_mcp_inspector( def run_mcp_inspector(
client_port: int = typer.Option( client_port: int = typer.Option(
@@ -2228,6 +2284,139 @@ exec npm start
console.print("[green]MCP Inspector stopped[/green]") console.print("[green]MCP Inspector stopped[/green]")
# Model management commands
@models_app.command("list")
def list_models(
provider: Optional[str] = typer.Argument(None, help="Provider name (optional)"),
) -> None:
if provider:
# List models for specific provider
models = user_config.list_provider_models(provider)
if not models:
if not user_config.get_provider(provider):
console.print(f"[red]Provider '{provider}' not found[/red]")
else:
console.print(f"No models configured for provider '{provider}'")
return
table = Table(show_header=True, header_style="bold")
table.add_column("Model ID")
for model in models:
table.add_row(model["id"])
console.print(f"\n[bold]Models for provider '{provider}'[/bold]")
console.print(table)
else:
# List models for all providers
providers = user_config.list_providers()
if not providers:
console.print("No providers configured")
return
table = Table(show_header=True, header_style="bold")
table.add_column("Provider")
table.add_column("Model ID")
found_models = False
for provider_name in providers.keys():
models = user_config.list_provider_models(provider_name)
for model in models:
table.add_row(provider_name, model["id"])
found_models = True
if found_models:
console.print(table)
else:
console.print("No models configured for any provider")
@models_app.command("refresh")
def refresh_models(
provider: Optional[str] = typer.Argument(None, help="Provider name (optional)"),
) -> None:
from .model_fetcher import fetch_provider_models
if provider:
# Refresh models for specific provider
provider_config = user_config.get_provider(provider)
if not provider_config:
console.print(f"[red]Provider '{provider}' not found[/red]")
return
if not user_config.supports_model_fetching(provider):
console.print(
f"[red]Provider '{provider}' does not support model fetching[/red]"
)
console.print(
"Only providers of supported types (openai, anthropic, google, openrouter) can refresh models"
)
return
console.print(f"Refreshing models for provider '{provider}'...")
try:
with console.status(f"Fetching models from {provider}..."):
models = fetch_provider_models(provider_config)
user_config.set_provider_models(provider, models)
console.print(
f"[green]Successfully refreshed {len(models)} models for '{provider}'[/green]"
)
# Show some examples
if models:
console.print("\nSample models:")
for model in models[:5]: # Show first 5
console.print(f" - {model['id']}")
if len(models) > 5:
console.print(f" ... and {len(models) - 5} more")
except Exception as e:
console.print(f"[red]Failed to refresh models for '{provider}': {e}[/red]")
else:
# Refresh models for all model-fetchable providers
fetchable_providers = user_config.list_model_fetchable_providers()
if not fetchable_providers:
console.print(
"[yellow]No providers with model fetching support found[/yellow]"
)
console.print(
"Add providers of supported types (openai, anthropic, google, openrouter) to refresh models"
)
return
console.print(f"Refreshing models for {len(fetchable_providers)} providers...")
success_count = 0
failed_providers = []
for provider_name in fetchable_providers:
try:
provider_config = user_config.get_provider(provider_name)
with console.status(f"Fetching models from {provider_name}..."):
models = fetch_provider_models(provider_config)
user_config.set_provider_models(provider_name, models)
console.print(f"[green]✓ {provider_name}: {len(models)} models[/green]")
success_count += 1
except Exception as e:
console.print(f"[red]✗ {provider_name}: {e}[/red]")
failed_providers.append(provider_name)
# Summary
console.print("\n[bold]Summary[/bold]")
console.print(f"Successfully refreshed: {success_count} providers")
if failed_providers:
console.print(
f"Failed: {len(failed_providers)} providers ({', '.join(failed_providers)})"
)
def session_create_entry_point():
    """Entry point that directly invokes 'cubbi session create'.

cubbi/config.py

@@ -14,6 +14,14 @@ BUILTIN_IMAGES_DIR = Path(__file__).parent / "images"
# Dynamically loaded from images directory at runtime
DEFAULT_IMAGES = {}
# Default API URLs for standard providers
PROVIDER_DEFAULT_URLS = {
"openai": "https://api.openai.com",
"anthropic": "https://api.anthropic.com",
"google": "https://generativelanguage.googleapis.com",
"openrouter": "https://openrouter.ai/api",
}
class ConfigManager:
    def __init__(self, config_path: Optional[Path] = None):

cubbi/configure.py (new file, 1125 lines)

File diff suppressed because it is too large.

cubbi/container.py

@@ -4,10 +4,13 @@ import logging
import os
import pathlib
import sys
import tempfile
import uuid
from pathlib import Path
from typing import Dict, List, Optional, Tuple

import docker
import yaml
from docker.errors import DockerException, ImageNotFound

from .config import ConfigManager
@@ -85,6 +88,89 @@ class ContainerManager:
            # This ensures we don't mount the /cubbi-config volume for project-less sessions
            return None
def _generate_container_config(
self,
image_name: str,
project_url: Optional[str] = None,
uid: Optional[int] = None,
gid: Optional[int] = None,
model: Optional[str] = None,
ssh: bool = False,
run_command: Optional[str] = None,
no_shell: bool = False,
mcp_list: Optional[List[str]] = None,
persistent_links: Optional[List[Dict[str, str]]] = None,
) -> Path:
"""Generate container configuration YAML file"""
providers = {}
for name, provider in self.user_config_manager.list_providers().items():
api_key = provider.get("api_key", "")
if api_key.startswith("${") and api_key.endswith("}"):
env_var = api_key[2:-1]
api_key = os.environ.get(env_var, "")
provider_config = {
"type": provider.get("type"),
"api_key": api_key,
}
if provider.get("base_url"):
provider_config["base_url"] = provider.get("base_url")
if provider.get("models"):
provider_config["models"] = provider.get("models")
providers[name] = provider_config
mcps = []
if mcp_list:
for mcp_name in mcp_list:
mcp_config = self.mcp_manager.get_mcp(mcp_name)
if mcp_config:
mcps.append(mcp_config)
config = {
"version": "1.0",
"user": {"uid": uid or 1000, "gid": gid or 1000},
"providers": providers,
"mcps": mcps,
"project": {
"config_dir": "/cubbi-config",
"image_config_dir": f"/cubbi-config/{image_name}",
},
"ssh": {"enabled": ssh},
}
if project_url:
config["project"]["url"] = project_url
if persistent_links:
config["persistent_links"] = persistent_links
if model:
config["defaults"] = {"model": model}
if run_command:
config["run_command"] = run_command
config["no_shell"] = no_shell
config_file = Path(tempfile.mkdtemp()) / "config.yaml"
with open(config_file, "w") as f:
yaml.dump(config, f)
# Set restrictive permissions (0o600 = read/write for owner only)
config_file.chmod(0o600)
# Set ownership to cubbi user if uid/gid are provided
if uid is not None and gid is not None:
try:
os.chown(config_file, uid, gid)
except (OSError, PermissionError):
# If we can't chown (e.g., running as non-root), just log and continue
logger.warning(f"Could not set ownership of config file to {uid}:{gid}")
return config_file
    def list_sessions(self) -> List[Session]:
        """List all active Cubbi sessions"""
        sessions = []
@@ -161,7 +247,6 @@ class ContainerManager:
        uid: Optional[int] = None,
        gid: Optional[int] = None,
        model: Optional[str] = None,
-        provider: Optional[str] = None,
        ssh: bool = False,
        domains: Optional[List[str]] = None,
    ) -> Optional[Session]:
@@ -181,8 +266,8 @@ class ContainerManager:
            mcp: Optional list of MCP server names to attach to the session
            uid: Optional user ID for the container process
            gid: Optional group ID for the container process
-            model: Optional model to use
-            provider: Optional provider to use
+            model: Optional model specification in 'provider/model' format
+                (e.g., 'anthropic/claude-3-5-sonnet'). Legacy separate model and
+                provider parameters are also supported for backward compatibility
            ssh: Whether to start the SSH server in the container (default: False)
            domains: Optional list of domains to restrict network access to (uses network-filter)
        """
@@ -213,32 +298,22 @@ class ContainerManager:
        # Ensure network exists
        self._ensure_network()

-        # Prepare environment variables
+        # Minimal environment variables
        env_vars = environment or {}
+        env_vars["CUBBI_CONFIG_FILE"] = "/cubbi/config.yaml"

-        # Add CUBBI_USER_ID and CUBBI_GROUP_ID for entrypoint script
-        env_vars["CUBBI_USER_ID"] = str(uid) if uid is not None else "1000"
-        env_vars["CUBBI_GROUP_ID"] = str(gid) if gid is not None else "1000"
-
-        # Set SSH environment variable
-        env_vars["CUBBI_SSH_ENABLED"] = "true" if ssh else "false"
-
-        # Pass some environment from host environment to container for local development
-        keys = [
-            "OPENAI_API_KEY",
-            "OPENAI_URL",
-            "ANTHROPIC_API_KEY",
-            "ANTHROPIC_AUTH_TOKEN",
-            "ANTHROPIC_CUSTOM_HEADERS",
-            "OPENROUTER_API_KEY",
-            "GOOGLE_API_KEY",
-            "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
-            "LANGFUSE_INIT_PROJECT_SECRET_KEY",
-            "LANGFUSE_URL",
-        ]
-        for key in keys:
-            if key in os.environ and key not in env_vars:
-                env_vars[key] = os.environ[key]
+        # Forward specified environment variables from the host to the container
+        if (
+            hasattr(image, "environments_to_forward")
+            and image.environments_to_forward
+        ):
+            for env_name in image.environments_to_forward:
+                env_value = os.environ.get(env_name)
+                if env_value is not None:
+                    env_vars[env_name] = env_value
+                    print(
+                        f"Forwarding environment variable {env_name} to container"
+                    )
        # Pull image if needed
        try:
@@ -294,6 +369,7 @@ class ContainerManager:
print(f"Mounting volume: {host_path} -> {container_path}") print(f"Mounting volume: {host_path} -> {container_path}")
# Set up persistent project configuration if project_name is provided # Set up persistent project configuration if project_name is provided
persistent_links = []
        project_config_path = self._get_project_config_path(project, project_name)
        if project_config_path:
            print(f"Using project configuration directory: {project_config_path}")
@@ -304,13 +380,8 @@ class ContainerManager:
"mode": "rw", "mode": "rw",
} }
# Add environment variables for config path # Create image-specific config directories and collect persistent links
env_vars["CUBBI_CONFIG_DIR"] = "/cubbi-config"
env_vars["CUBBI_IMAGE_CONFIG_DIR"] = f"/cubbi-config/{image_name}"
# Create image-specific config directories and set up direct volume mounts
if image.persistent_configs: if image.persistent_configs:
persistent_links_data = [] # To store "source:target" pairs for symlinks
print("Setting up persistent configuration directories:") print("Setting up persistent configuration directories:")
for config in image.persistent_configs: for config in image.persistent_configs:
# Get target directory path on host # Get target directory path on host
@@ -327,24 +398,19 @@ class ContainerManager:
                    # For files, make sure parent directory exists
                    elif config.type == "file":
                        target_dir.parent.mkdir(parents=True, exist_ok=True)
-                        # File will be created by the container if needed

-                    # Store the source and target paths for the init script
-                    # Note: config.target is the path *within* /cubbi-config
-                    persistent_links_data.append(f"{config.source}:{config.target}")
+                    # Store persistent link data for config file
+                    persistent_links.append(
+                        {
+                            "source": config.source,
+                            "target": config.target,
+                            "type": config.type,
+                        }
+                    )
                    print(
                        f" - Prepared host path {target_dir} for symlink target {config.target}"
                    )

-                # Set up persistent links
-                if persistent_links_data:
-                    env_vars["CUBBI_PERSISTENT_LINKS"] = ";".join(
-                        persistent_links_data
-                    )
-                    print(
-                        f"Setting CUBBI_PERSISTENT_LINKS={env_vars['CUBBI_PERSISTENT_LINKS']}"
-                    )
        else:
            print(
                "No project_name provided - skipping configuration directory setup."
@@ -394,43 +460,6 @@ class ContainerManager:
                        # Get MCP status to extract endpoint information
                        mcp_status = self.mcp_manager.get_mcp_status(mcp_name)
-                        # Add MCP environment variables with index
-                        idx = len(mcp_names) - 1  # 0-based index for the current MCP
-                        if mcp_config.get("type") == "remote":
-                            # For remote MCP, set the URL and headers
-                            env_vars[f"MCP_{idx}_URL"] = mcp_config.get("url")
-                            if mcp_config.get("headers"):
-                                # Serialize headers as JSON
-                                import json
-
-                                env_vars[f"MCP_{idx}_HEADERS"] = json.dumps(
-                                    mcp_config.get("headers")
-                                )
-                        else:
-                            # For Docker/proxy MCP, set the connection details
-                            # Use both the container name and the short name for internal Docker DNS resolution
-                            container_name = self.mcp_manager.get_mcp_container_name(
-                                mcp_name
-                            )
-                            # Use the short name (mcp_name) as the primary hostname
-                            env_vars[f"MCP_{idx}_HOST"] = mcp_name
-                            # Default port is 8080 unless specified in status
-                            port = next(
-                                iter(mcp_status.get("ports", {}).values()), 8080
-                            )
-                            env_vars[f"MCP_{idx}_PORT"] = str(port)
-                            # Use the short name in the URL to take advantage of the network alias
-                            env_vars[f"MCP_{idx}_URL"] = f"http://{mcp_name}:{port}/sse"
-                            # For backward compatibility, also set the full container name URL
-                            env_vars[f"MCP_{idx}_CONTAINER_URL"] = (
-                                f"http://{container_name}:{port}/sse"
-                            )
-                        # Set type-specific information
-                        env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("type")
-                        env_vars[f"MCP_{idx}_NAME"] = mcp_name
                except Exception as e:
                    print(f"Warning: Failed to start MCP server '{mcp_name}': {e}")
                    # Get the container name before trying to remove it from the list
@@ -445,30 +474,8 @@ class ContainerManager:
                        pass
                elif mcp_config.get("type") == "remote":
-                    # For remote MCP, just set environment variables
-                    idx = len(mcp_names) - 1  # 0-based index for the current MCP
-                    env_vars[f"MCP_{idx}_URL"] = mcp_config.get("url")
-                    if mcp_config.get("headers"):
-                        # Serialize headers as JSON
-                        import json
-
-                        env_vars[f"MCP_{idx}_HEADERS"] = json.dumps(
-                            mcp_config.get("headers")
-                        )
-                    # Set type-specific information
-                    env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("mcp_type", "sse")
-                    env_vars[f"MCP_{idx}_NAME"] = mcp_name
+                    # Remote MCP - nothing to do here, config will handle it
+                    pass

-        # Set environment variables for MCP count if we have any
-        if mcp_names:
-            env_vars["MCP_COUNT"] = str(len(mcp_names))
-            env_vars["MCP_ENABLED"] = "true"
-            # Serialize all MCP names as JSON
-            import json
-
-            env_vars["MCP_NAMES"] = json.dumps(mcp_names)

        # Add user-specified networks
        # Default Cubbi network
@@ -499,39 +506,18 @@ class ContainerManager:
            target_shell = "/bin/bash"

        if run_command:
-            # Set environment variable for cubbi-init.sh to pick up
-            env_vars["CUBBI_RUN_COMMAND"] = run_command
-
-            # If no_shell is true, set CUBBI_NO_SHELL environment variable
-            if no_shell:
-                env_vars["CUBBI_NO_SHELL"] = "true"
-                logger.info(
-                    "Setting CUBBI_NO_SHELL=true, container will exit after run command"
-                )
-
            # Set the container's command to be the final shell (or exit if no_shell is true)
            container_command = [target_shell]
-            logger.info(
-                f"Setting CUBBI_RUN_COMMAND and targeting shell {target_shell}"
-            )
+            logger.info(f"Using run command with shell {target_shell}")
+            if no_shell:
+                logger.info("Container will exit after run command")
        else:
            # Use default behavior (often defined by image's ENTRYPOINT/CMD)
-            # Set the container's command to be the final shell if none specified by Dockerfile CMD
-            # Note: Dockerfile CMD is ["tail", "-f", "/dev/null"], so this might need adjustment
-            # if we want interactive shell by default without --run. Let's default to bash for now.
            container_command = [target_shell]
            logger.info(
                "Using default container entrypoint/command for interactive shell."
            )

-        # Set default model/provider from user config if not explicitly provided
-        env_vars["CUBBI_MODEL"] = model or self.user_config_manager.get(
-            "defaults.model", ""
-        )
-        env_vars["CUBBI_PROVIDER"] = provider or self.user_config_manager.get(
-            "defaults.provider", ""
-        )

        # Handle network-filter if domains are specified
        network_filter_container = None
        network_mode = None
@@ -615,6 +601,29 @@ class ContainerManager:
"[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]" "[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]"
) )
# Generate configuration file
project_url = project if is_git_repo else None
config_file_path = self._generate_container_config(
image_name=image_name,
project_url=project_url,
uid=uid,
gid=gid,
model=model,
ssh=ssh,
run_command=run_command,
no_shell=no_shell,
mcp_list=mcp_names,
persistent_links=persistent_links
if "persistent_links" in locals()
else None,
)
# Mount config file
session_volumes[str(config_file_path)] = {
"bind": "/cubbi/config.yaml",
"mode": "ro",
}
# Create container # Create container
container_params = { container_params = {
"image": image.image, "image": image.image,
@@ -879,33 +888,50 @@ class ContainerManager:
     return False
 try:
+    # First, close the main session container
     container = self.client.containers.get(session.container_id)
+    try:
         if kill:
             container.kill()
         else:
             container.stop()
+    except DockerException:
+        pass
     container.remove()
+    # Check for and close any associated network-filter container
     network_filter_name = f"cubbi-network-filter-{session.id}"
     try:
         network_filter_container = self.client.containers.get(
             network_filter_name
         )
         logger.info(f"Stopping network-filter container {network_filter_name}")
+        try:
             if kill:
                 network_filter_container.kill()
             else:
                 network_filter_container.stop()
+        except DockerException:
+            pass
         network_filter_container.remove()
     except DockerException:
+        # Network-filter container might not exist, which is fine
         pass
     self.session_manager.remove_session(session.id)
     return True
 except DockerException as e:
+    error_message = str(e).lower()
+    if (
+        "is not running" in error_message
+        or "no such container" in error_message
+        or "not found" in error_message
+    ):
+        print(
+            f"Container already stopped/removed, removing session {session.id} from list"
+        )
+        self.session_manager.remove_session(session.id)
+        return True
+    else:
         print(f"Error closing session {session.id}: {e}")
         return False
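Distilled from the hunk above, a hedged sketch of the stop-then-always-remove pattern with the "already gone" cases treated as success. The function name and structure here are illustrative, not the project's exact code; the Docker SDK calls (containers.get, stop, kill, remove) and exception types are standard docker-py.

from docker.errors import DockerException, NotFound  # docker SDK for Python

def remove_container_idempotently(client, container_id: str, kill: bool = False) -> bool:
    """Stop (or kill) and remove a container; a missing or already-stopped container counts as done."""
    try:
        container = client.containers.get(container_id)
        try:
            container.kill() if kill else container.stop()
        except DockerException:
            pass  # already stopped - still proceed to remove()
        container.remove()
        return True
    except NotFound:
        return True  # already removed
    except DockerException:
        return False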
@@ -929,39 +955,41 @@ class ContainerManager:
         # No need for session status as we receive it via callback
+        # Define a wrapper to track progress
         def close_with_progress(session):
             if not session.container_id:
                 return False
             try:
                 container = self.client.containers.get(session.container_id)
+                # Stop and remove container
+                try:
                     if kill:
                         container.kill()
                     else:
                         container.stop()
+                except DockerException:
+                    pass
                 container.remove()
+                # Check for and close any associated network-filter container
                 network_filter_name = f"cubbi-network-filter-{session.id}"
                 try:
                     network_filter_container = self.client.containers.get(
                         network_filter_name
                     )
+                    try:
                         if kill:
                             network_filter_container.kill()
                         else:
                             network_filter_container.stop()
+                    except DockerException:
+                        pass
                     network_filter_container.remove()
                 except DockerException:
+                    # Network-filter container might not exist, which is fine
                     pass
+                # Remove from session storage
                 self.session_manager.remove_session(session.id)
+                # Notify about completion
                 if progress_callback:
                     progress_callback(
                         session.id,
@@ -971,6 +999,24 @@ class ContainerManager:
                 return True
             except DockerException as e:
+                error_message = str(e).lower()
+                if (
+                    "is not running" in error_message
+                    or "no such container" in error_message
+                    or "not found" in error_message
+                ):
+                    print(
+                        f"Container already stopped/removed, removing session {session.id} from list"
+                    )
+                    self.session_manager.remove_session(session.id)
+                    if progress_callback:
+                        progress_callback(
+                            session.id,
+                            "completed",
+                            f"{session.name} removed from list (container already stopped)",
+                        )
+                    return True
+                else:
                     error_msg = f"Error: {str(e)}"
                     if progress_callback:
                         progress_callback(session.id, "failed", error_msg)


@@ -1,74 +1,44 @@
 #!/usr/bin/env python3
-"""
-Aider Plugin for Cubbi
-Handles authentication setup and configuration for Aider AI pair programming
-"""
 import os
 import stat
 from pathlib import Path
-from typing import Any, Dict
-from cubbi_init import ToolPlugin
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership

 class AiderPlugin(ToolPlugin):
-    """Plugin for setting up Aider authentication and configuration"""
     @property
     def tool_name(self) -> str:
         return "aider"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
-
-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
-
     def _get_aider_config_dir(self) -> Path:
-        """Get the Aider configuration directory"""
         return Path("/home/cubbi/.aider")

     def _get_aider_cache_dir(self) -> Path:
-        """Get the Aider cache directory"""
         return Path("/home/cubbi/.cache/aider")

     def _ensure_aider_dirs(self) -> tuple[Path, Path]:
-        """Ensure Aider directories exist with correct ownership"""
         config_dir = self._get_aider_config_dir()
         cache_dir = self._get_aider_cache_dir()
-        # Create directories
-        for directory in [config_dir, cache_dir]:
-            try:
-                directory.mkdir(mode=0o755, parents=True, exist_ok=True)
-                self._set_ownership(directory)
-            except OSError as e:
-                self.status.log(
-                    f"Failed to create Aider directory {directory}: {e}", "ERROR"
-                )
+        self.create_directory_with_ownership(config_dir)
+        self.create_directory_with_ownership(cache_dir)
         return config_dir, cache_dir

-    def initialize(self) -> bool:
-        """Initialize Aider configuration"""
+    def is_already_configured(self) -> bool:
+        config_dir = self._get_aider_config_dir()
+        env_file = config_dir / ".env"
+        return env_file.exists()
+
+    def configure(self) -> bool:
         self.status.log("Setting up Aider configuration...")
-        # Ensure Aider directories exist
         config_dir, cache_dir = self._ensure_aider_dirs()
-        # Set up environment variables for the session
         env_vars = self._create_environment_config()
-        # Create .env file if we have API keys
         if env_vars:
             env_file = config_dir / ".env"
             success = self._write_env_file(env_file, env_vars)
@@ -85,41 +55,88 @@ class AiderPlugin(ToolPlugin):
                 "INFO",
             )
-        # Always return True to allow container to start
+        if not cubbi_config.mcps:
+            self.status.log("No MCP servers to integrate")
+            return True
+        self.status.log(
+            f"Found {len(cubbi_config.mcps)} MCP server(s) - no direct integration available for Aider"
+        )
         return True

-    def _create_environment_config(self) -> Dict[str, str]:
-        """Create environment variable configuration for Aider"""
+    def _create_environment_config(self) -> dict[str, str]:
         env_vars = {}
-        # Map environment variables to Aider configuration
+        provider_config = cubbi_config.get_provider_for_default_model()
+        if provider_config and cubbi_config.defaults.model:
+            _, model_name = cubbi_config.defaults.model.split("/", 1)
+            env_vars["AIDER_MODEL"] = model_name
+            self.status.log(f"Set Aider model to {model_name}")
+            if provider_config.type == "anthropic":
+                env_vars["AIDER_ANTHROPIC_API_KEY"] = provider_config.api_key
+                self.status.log("Configured Anthropic API key for Aider")
+            elif provider_config.type == "openai":
+                env_vars["AIDER_OPENAI_API_KEY"] = provider_config.api_key
+                if provider_config.base_url:
+                    env_vars["AIDER_OPENAI_API_BASE"] = provider_config.base_url
+                    self.status.log(
+                        f"Set Aider OpenAI API base to {provider_config.base_url}"
+                    )
+                self.status.log("Configured OpenAI API key for Aider")
+            elif provider_config.type == "google":
+                env_vars["GEMINI_API_KEY"] = provider_config.api_key
+                self.status.log("Configured Google/Gemini API key for Aider")
+            elif provider_config.type == "openrouter":
+                env_vars["OPENROUTER_API_KEY"] = provider_config.api_key
+                self.status.log("Configured OpenRouter API key for Aider")
+            else:
+                self.status.log(
+                    f"Provider type '{provider_config.type}' not directly supported by Aider plugin",
+                    "WARNING",
+                )
+        else:
+            self.status.log(
+                "No default model or provider configured - checking legacy environment variables",
+                "WARNING",
+            )
         api_key_mappings = {
-            "OPENAI_API_KEY": "OPENAI_API_KEY",
-            "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY",
+            "OPENAI_API_KEY": "AIDER_OPENAI_API_KEY",
+            "ANTHROPIC_API_KEY": "AIDER_ANTHROPIC_API_KEY",
             "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
             "GEMINI_API_KEY": "GEMINI_API_KEY",
             "OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
         }
-        # Check for OpenAI API base URL
-        openai_url = os.environ.get("OPENAI_URL")
-        if openai_url:
-            env_vars["OPENAI_API_BASE"] = openai_url
-            self.status.log(f"Set OpenAI API base URL to {openai_url}")
-        # Check for standard API keys
         for env_var, aider_var in api_key_mappings.items():
             value = os.environ.get(env_var)
             if value:
                 env_vars[aider_var] = value
                 provider = env_var.replace("_API_KEY", "").lower()
-                self.status.log(f"Added {provider} API key")
+                self.status.log(f"Added {provider} API key from environment")
+        openai_url = os.environ.get("OPENAI_URL")
+        if openai_url:
+            env_vars["AIDER_OPENAI_API_BASE"] = openai_url
+            self.status.log(
+                f"Set OpenAI API base URL to {openai_url} from environment"
+            )
+        model = os.environ.get("AIDER_MODEL")
+        if model:
+            env_vars["AIDER_MODEL"] = model
+            self.status.log(f"Set model to {model} from environment")
-        # Handle additional API keys from AIDER_API_KEYS
         additional_keys = os.environ.get("AIDER_API_KEYS")
         if additional_keys:
             try:
-                # Parse format: "provider1=key1,provider2=key2"
                 for pair in additional_keys.split(","):
                     if "=" in pair:
                         provider, key = pair.strip().split("=", 1)
@@ -129,23 +146,14 @@ class AiderPlugin(ToolPlugin):
             except Exception as e:
                 self.status.log(f"Failed to parse AIDER_API_KEYS: {e}", "WARNING")
-        # Add model configuration
-        model = os.environ.get("AIDER_MODEL")
-        if model:
-            env_vars["AIDER_MODEL"] = model
-            self.status.log(f"Set default model to {model}")
-        # Add git configuration
         auto_commits = os.environ.get("AIDER_AUTO_COMMITS", "true")
         if auto_commits.lower() in ["true", "false"]:
             env_vars["AIDER_AUTO_COMMITS"] = auto_commits
-        # Add dark mode setting
         dark_mode = os.environ.get("AIDER_DARK_MODE", "false")
         if dark_mode.lower() in ["true", "false"]:
             env_vars["AIDER_DARK_MODE"] = dark_mode
-        # Add proxy settings
         for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
             value = os.environ.get(proxy_var)
             if value:
@@ -154,8 +162,7 @@ class AiderPlugin(ToolPlugin):
         return env_vars

-    def _write_env_file(self, env_file: Path, env_vars: Dict[str, str]) -> bool:
-        """Write environment variables to .env file"""
+    def _write_env_file(self, env_file: Path, env_vars: dict[str, str]) -> bool:
         try:
             content = "\n".join(f"{key}={value}" for key, value in env_vars.items())
@@ -163,8 +170,7 @@ class AiderPlugin(ToolPlugin):
                 f.write(content)
                 f.write("\n")
-            # Set ownership and secure file permissions (read/write for owner only)
-            self._set_ownership(env_file)
+            set_ownership(env_file)
             os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)
             self.status.log(f"Created Aider environment file at {env_file}")
@@ -173,20 +179,5 @@ class AiderPlugin(ToolPlugin):
             self.status.log(f"Failed to write Aider environment file: {e}", "ERROR")
             return False

-    def setup_tool_configuration(self) -> bool:
-        """Set up Aider configuration - called by base class"""
-        # Additional tool configuration can be added here if needed
-        return True
-
-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate Aider with available MCP servers if applicable"""
-        if mcp_config["count"] == 0:
-            self.status.log("No MCP servers to integrate")
-            return True
-        # Aider doesn't have native MCP support like Claude Code,
-        # but we could potentially add custom integrations here
-        self.status.log(
-            f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
-        )
-        return True
+PLUGIN_CLASS = AiderPlugin
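To illustrate the new flow, an invented example (not repository output): with defaults.model set to anthropic/claude-sonnet and an anthropic provider configured, _create_environment_config() plus _write_env_file() would render roughly this /home/cubbi/.aider/.env. The key names come from the plugin above; all values are made up.

# Hypothetical walk-through of the env-file rendering in _write_env_file().
env_vars = {
    "AIDER_MODEL": "claude-sonnet",               # from defaults.model "anthropic/claude-sonnet"
    "AIDER_ANTHROPIC_API_KEY": "sk-ant-example",  # from the provider's api_key
    "AIDER_AUTO_COMMITS": "true",
    "AIDER_DARK_MODE": "false",
}
content = "\n".join(f"{key}={value}" for key, value in env_vars.items())
print(content)
# AIDER_MODEL=claude-sonnet
# AIDER_ANTHROPIC_API_KEY=sk-ant-example
# AIDER_AUTO_COMMITS=true
# AIDER_DARK_MODE=false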


@@ -3,84 +3,40 @@ description: Aider AI pair programming environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-aider:latest
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment:
-  # OpenAI Configuration
-  - name: OPENAI_API_KEY
-    description: OpenAI API key for GPT models
-    required: false
-    sensitive: true
-  # Anthropic Configuration
-  - name: ANTHROPIC_API_KEY
-    description: Anthropic API key for Claude models
-    required: false
-    sensitive: true
-  # DeepSeek Configuration
-  - name: DEEPSEEK_API_KEY
-    description: DeepSeek API key for DeepSeek models
-    required: false
-    sensitive: true
-  # Gemini Configuration
-  - name: GEMINI_API_KEY
-    description: Google Gemini API key
-    required: false
-    sensitive: true
-  # OpenRouter Configuration
-  - name: OPENROUTER_API_KEY
-    description: OpenRouter API key for various models
-    required: false
-    sensitive: true
-  # Generic provider API keys
-  - name: AIDER_API_KEYS
-    description: Additional API keys in format "provider1=key1,provider2=key2"
-    required: false
-    sensitive: true
-  # Model Configuration
-  - name: AIDER_MODEL
-    description: Default model to use (e.g., sonnet, o3-mini, deepseek)
-    required: false
-  # Git Configuration
-  - name: AIDER_AUTO_COMMITS
-    description: Enable automatic commits (true/false)
-    required: false
-    default: "true"
-  - name: AIDER_DARK_MODE
-    description: Enable dark mode (true/false)
-    required: false
-    default: "false"
-  # Proxy Configuration
-  - name: HTTP_PROXY
-    description: HTTP proxy server URL
-    required: false
-  - name: HTTPS_PROXY
-    description: HTTPS proxy server URL
-    required: false
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/home/cubbi/.aider"
-    target: "/cubbi-config/aider-settings"
-    type: "directory"
-    description: "Aider configuration and history"
-  - source: "/home/cubbi/.cache/aider"
-    target: "/cubbi-config/aider-cache"
-    type: "directory"
-    description: "Aider cache directory"
+persistent_configs: []
+
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS
+  # Model Configuration
+  - AIDER_MODEL
+  - CUBBI_MODEL
+  - CUBBI_PROVIDER
+  # Git Configuration
+  - AIDER_AUTO_COMMITS
+  - AIDER_DARK_MODE
+  - GIT_AUTHOR_NAME
+  - GIT_AUTHOR_EMAIL
+  - GIT_COMMITTER_NAME
+  - GIT_COMMITTER_EMAIL
+  # Proxy Configuration
+  - HTTP_PROXY
+  - HTTPS_PROXY
+  - NO_PROXY
+  # OpenAI Configuration
+  - OPENAI_URL
+  - OPENAI_API_BASE
+  - AIDER_OPENAI_API_BASE
+  # Timezone (useful for logs and timestamps)
+  - TZ


@@ -1,81 +1,31 @@
 #!/usr/bin/env python3
-"""
-Claude Code Plugin for Cubbi
-Handles authentication setup and configuration for Claude Code
-"""
 import json
 import os
 import stat
 from pathlib import Path
-from typing import Any, Dict, Optional
-from cubbi_init import ToolPlugin
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership

-# API key mappings from environment variables to Claude Code configuration
-API_KEY_MAPPINGS = {
-    "ANTHROPIC_API_KEY": "api_key",
-    "ANTHROPIC_AUTH_TOKEN": "auth_token",
-    "ANTHROPIC_CUSTOM_HEADERS": "custom_headers",
-}
-
-# Enterprise integration environment variables
-ENTERPRISE_MAPPINGS = {
-    "CLAUDE_CODE_USE_BEDROCK": "use_bedrock",
-    "CLAUDE_CODE_USE_VERTEX": "use_vertex",
-    "HTTP_PROXY": "http_proxy",
-    "HTTPS_PROXY": "https_proxy",
-    "DISABLE_TELEMETRY": "disable_telemetry",
-}
-
 class ClaudeCodePlugin(ToolPlugin):
-    """Plugin for setting up Claude Code authentication and configuration"""
     @property
     def tool_name(self) -> str:
         return "claudecode"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
-
-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
-
     def _get_claude_dir(self) -> Path:
-        """Get the Claude Code configuration directory"""
         return Path("/home/cubbi/.claude")

-    def _ensure_claude_dir(self) -> Path:
-        """Ensure Claude directory exists with correct ownership"""
-        claude_dir = self._get_claude_dir()
-        try:
-            claude_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
-            self._set_ownership(claude_dir)
-        except OSError as e:
-            self.status.log(
-                f"Failed to create Claude directory {claude_dir}: {e}", "ERROR"
-            )
-        return claude_dir
-
-    def initialize(self) -> bool:
-        """Initialize Claude Code configuration"""
+    def is_already_configured(self) -> bool:
+        settings_file = self._get_claude_dir() / "settings.json"
+        return settings_file.exists()
+
+    def configure(self) -> bool:
         self.status.log("Setting up Claude Code authentication...")
-        # Ensure Claude directory exists
-        claude_dir = self._ensure_claude_dir()
-        # Create settings configuration
+        claude_dir = self.create_directory_with_ownership(self._get_claude_dir())
+        claude_dir.chmod(0o700)
         settings = self._create_settings()
         if settings:
@@ -83,6 +33,7 @@ class ClaudeCodePlugin(ToolPlugin):
             success = self._write_settings(settings_file, settings)
             if success:
                 self.status.log("✅ Claude Code authentication configured successfully")
+                self._integrate_mcp_servers()
                 return True
             else:
                 return False
@@ -92,46 +43,52 @@ class ClaudeCodePlugin(ToolPlugin):
                 " Please set ANTHROPIC_API_KEY environment variable", "WARNING"
             )
             self.status.log(" Claude Code will run without authentication", "INFO")
-            # Return True to allow container to start without API key
-            # Users can still use Claude Code with their own authentication methods
+            self._integrate_mcp_servers()
             return True

-    def _create_settings(self) -> Optional[Dict]:
-        """Create Claude Code settings configuration"""
+    def _integrate_mcp_servers(self) -> None:
+        if not cubbi_config.mcps:
+            self.status.log("No MCP servers to integrate")
+            return
+        self.status.log("MCP server integration available for Claude Code")
+
+    def _create_settings(self) -> dict | None:
         settings = {}
-        # Core authentication
+        anthropic_provider = None
+        for provider_name, provider_config in cubbi_config.providers.items():
+            if provider_config.type == "anthropic":
+                anthropic_provider = provider_config
+                break
+        if not anthropic_provider or not anthropic_provider.api_key:
             api_key = os.environ.get("ANTHROPIC_API_KEY")
             if not api_key:
                 return None
-        # Basic authentication setup
             settings["apiKey"] = api_key
+        else:
+            settings["apiKey"] = anthropic_provider.api_key
-        # Custom authorization token (optional)
         auth_token = os.environ.get("ANTHROPIC_AUTH_TOKEN")
         if auth_token:
             settings["authToken"] = auth_token
-        # Custom headers (optional)
         custom_headers = os.environ.get("ANTHROPIC_CUSTOM_HEADERS")
         if custom_headers:
             try:
-                # Expect JSON string format
                 settings["customHeaders"] = json.loads(custom_headers)
             except json.JSONDecodeError:
                 self.status.log(
                     "⚠️ Invalid ANTHROPIC_CUSTOM_HEADERS format, skipping", "WARNING"
                 )
-        # Enterprise integration settings
         if os.environ.get("CLAUDE_CODE_USE_BEDROCK") == "true":
             settings["provider"] = "bedrock"
         if os.environ.get("CLAUDE_CODE_USE_VERTEX") == "true":
             settings["provider"] = "vertex"
-        # Network proxy settings
         http_proxy = os.environ.get("HTTP_PROXY")
         https_proxy = os.environ.get("HTTPS_PROXY")
         if http_proxy or https_proxy:
@@ -141,11 +98,9 @@ class ClaudeCodePlugin(ToolPlugin):
             if https_proxy:
                 settings["proxy"]["https"] = https_proxy
-        # Telemetry settings
         if os.environ.get("DISABLE_TELEMETRY") == "true":
             settings["telemetry"] = {"enabled": False}
-        # Tool permissions (allow all by default in Cubbi environment)
         settings["permissions"] = {
             "tools": {
                 "read": {"allowed": True},
@@ -159,15 +114,12 @@ class ClaudeCodePlugin(ToolPlugin):
         return settings

-    def _write_settings(self, settings_file: Path, settings: Dict) -> bool:
-        """Write settings to Claude Code configuration file"""
+    def _write_settings(self, settings_file: Path, settings: dict) -> bool:
         try:
-            # Write settings with secure permissions
             with open(settings_file, "w") as f:
                 json.dump(settings, f, indent=2)
-            # Set ownership and secure file permissions (read/write for owner only)
-            self._set_ownership(settings_file)
+            set_ownership(settings_file)
             os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)
             self.status.log(f"Created Claude Code settings at {settings_file}")
@@ -176,18 +128,5 @@ class ClaudeCodePlugin(ToolPlugin):
             self.status.log(f"Failed to write Claude Code settings: {e}", "ERROR")
             return False

-    def setup_tool_configuration(self) -> bool:
-        """Set up Claude Code configuration - called by base class"""
-        # Additional tool configuration can be added here if needed
-        return True
-
-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate Claude Code with available MCP servers"""
-        if mcp_config["count"] == 0:
-            self.status.log("No MCP servers to integrate")
-            return True
-        # Claude Code has built-in MCP support, so we can potentially
-        # configure MCP servers in the settings if needed
-        self.status.log("MCP server integration available for Claude Code")
-        return True
+PLUGIN_CLASS = ClaudeCodePlugin
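An invented example of the settings.json the plugin would write for a single anthropic provider with no proxy or enterprise options. Only keys that appear in _create_settings() above are used; the permissions block is truncated in the diff, so it is elided here too.

import json

# Hypothetical output of _create_settings() - the API key value is invented.
settings = {
    "apiKey": "sk-ant-example",
    "permissions": {
        "tools": {
            "read": {"allowed": True},
            # remaining tool entries are truncated in the diff above
        },
    },
}
print(json.dumps(settings, indent=2))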


@@ -3,64 +3,13 @@ description: Claude Code AI environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-claudecode:latest
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment:
-  # Core Anthropic Authentication
-  - name: ANTHROPIC_API_KEY
-    description: Anthropic API key for Claude
-    required: true
-    sensitive: true
-  # Optional Enterprise Integration
-  - name: ANTHROPIC_AUTH_TOKEN
-    description: Custom authorization token for Claude
-    required: false
-    sensitive: true
-  - name: ANTHROPIC_CUSTOM_HEADERS
-    description: Additional HTTP headers for Claude API requests
-    required: false
-    sensitive: true
-  # Enterprise Deployment Options
-  - name: CLAUDE_CODE_USE_BEDROCK
-    description: Use Amazon Bedrock instead of direct API
-    required: false
-  - name: CLAUDE_CODE_USE_VERTEX
-    description: Use Google Vertex AI instead of direct API
-    required: false
-  # Network Configuration
-  - name: HTTP_PROXY
-    description: HTTP proxy server URL
-    required: false
-  - name: HTTPS_PROXY
-    description: HTTPS proxy server URL
-    required: false
-  # Optional Telemetry Control
-  - name: DISABLE_TELEMETRY
-    description: Disable Claude Code telemetry
-    required: false
-    default: "false"
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/home/cubbi/.claude"
-    target: "/cubbi-config/claude-settings"
-    type: "directory"
-    description: "Claude Code settings and configuration"
-  - source: "/home/cubbi/.cache/claude"
-    target: "/cubbi-config/claude-cache"
-    type: "directory"
-    description: "Claude Code cache directory"
+persistent_configs: []
+
+environments_to_forward:
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - CLAUDE_CODE_USE_BEDROCK
+  - CLAUDE_CODE_USE_VERTEX
+  - HTTP_PROXY
+  - HTTPS_PROXY
+  - DISABLE_TELEMETRY


@@ -1,76 +1,83 @@
 #!/usr/bin/env python3
-"""
-Crush-specific plugin for Cubbi initialization
-"""
 import json
-import os
 from pathlib import Path
-from typing import Any, Dict
-from cubbi_init import ToolPlugin
+from typing import Any
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership

+STANDARD_PROVIDERS = ["anthropic", "openai", "google", "openrouter"]

 class CrushPlugin(ToolPlugin):
-    """Plugin for Crush AI coding assistant initialization"""
     @property
     def tool_name(self) -> str:
         return "crush"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
-
-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
-
     def _get_user_config_path(self) -> Path:
-        """Get the correct config path for the cubbi user"""
         return Path("/home/cubbi/.config/crush")

-    def _ensure_user_config_dir(self) -> Path:
-        """Ensure config directory exists with correct ownership"""
-        config_dir = self._get_user_config_path()
-        # Create the full directory path
-        try:
-            config_dir.mkdir(parents=True, exist_ok=True)
-        except FileExistsError:
-            # Directory already exists, which is fine
-            pass
-        except OSError as e:
-            self.status.log(
-                f"Failed to create config directory {config_dir}: {e}", "ERROR"
-            )
-            return config_dir
-        # Set ownership for the directories
-        config_parent = config_dir.parent
-        if config_parent.exists():
-            self._set_ownership(config_parent)
-        if config_dir.exists():
-            self._set_ownership(config_dir)
-        return config_dir
-
-    def initialize(self) -> bool:
-        """Initialize Crush configuration"""
-        self._ensure_user_config_dir()
-        return self.setup_tool_configuration()
-
-    def setup_tool_configuration(self) -> bool:
-        """Set up Crush configuration file"""
-        # Ensure directory exists before writing
-        config_dir = self._ensure_user_config_dir()
+    def is_already_configured(self) -> bool:
+        config_file = self._get_user_config_path() / "crush.json"
+        return config_file.exists()
+
+    def configure(self) -> bool:
+        return self._setup_tool_configuration() and self._integrate_mcp_servers()
+
+    def _map_provider_to_crush_format(
+        self, provider_name: str, provider_config, is_default_provider: bool = False
+    ) -> dict[str, Any] | None:
+        # Handle standard providers without base_url
+        if not provider_config.base_url:
+            if provider_config.type in STANDARD_PROVIDERS:
+                # Populate models for any standard provider that has models
+                models_list = []
+                if provider_config.models:
+                    for model in provider_config.models:
+                        model_id = model.get("id", "")
+                        if model_id:
+                            models_list.append({"id": model_id, "name": model_id})
+                provider_entry = {
+                    "api_key": provider_config.api_key,
+                    "models": models_list,
+                }
+                return provider_entry
+        # Handle custom providers with base_url
+        models_list = []
+        # Add all models for any provider type that has models
+        if provider_config.models:
+            for model in provider_config.models:
+                model_id = model.get("id", "")
+                if model_id:
+                    models_list.append({"id": model_id, "name": model_id})
+        provider_entry = {
+            "api_key": provider_config.api_key,
+            "base_url": provider_config.base_url,
+            "models": models_list,
+        }
+        if provider_config.type in STANDARD_PROVIDERS:
+            if provider_config.type == "anthropic":
+                provider_entry["type"] = "anthropic"
+            elif provider_config.type == "openai":
+                provider_entry["type"] = "openai"
+            elif provider_config.type == "google":
+                provider_entry["type"] = "gemini"
+            elif provider_config.type == "openrouter":
+                provider_entry["type"] = "openai"
+                provider_entry["name"] = f"{provider_name} ({provider_config.type})"
+        else:
+            provider_entry["type"] = "openai"
+            provider_entry["name"] = f"{provider_name} ({provider_config.type})"
+        return provider_entry
+
+    def _setup_tool_configuration(self) -> bool:
+        config_dir = self.create_directory_with_ownership(self._get_user_config_path())
         if not config_dir.exists():
             self.status.log(
                 f"Config directory {config_dir} does not exist and could not be created",
@@ -78,45 +85,72 @@ class CrushPlugin(ToolPlugin):
             )
             return False

-        config_file = config_dir / "config.json"
-        # Load or initialize configuration
-        if config_file.exists():
-            try:
-                with config_file.open("r") as f:
-                    config_data = json.load(f)
-            except (json.JSONDecodeError, OSError) as e:
-                self.status.log(f"Failed to load existing config: {e}", "WARNING")
-                config_data = {}
-        else:
-            config_data = {}
-        # Set default model and provider if specified
-        # cubbi_model = os.environ.get("CUBBI_MODEL")
-        # cubbi_provider = os.environ.get("CUBBI_PROVIDER")
-        # XXX i didn't understood yet the configuration file, tbd later.
+        config_file = config_dir / "crush.json"
+        config_data = {"$schema": "https://charm.land/crush.json", "providers": {}}
+        default_provider_name = None
+        if cubbi_config.defaults.model:
+            default_provider_name = cubbi_config.defaults.model.split("/", 1)[0]
+        self.status.log(
+            f"Found {len(cubbi_config.providers)} configured providers for Crush"
+        )
+        for provider_name, provider_config in cubbi_config.providers.items():
+            is_default_provider = provider_name == default_provider_name
+            crush_provider = self._map_provider_to_crush_format(
+                provider_name, provider_config, is_default_provider
+            )
+            if crush_provider:
+                crush_provider_name = (
+                    "gemini" if provider_config.type == "google" else provider_name
+                )
+                config_data["providers"][crush_provider_name] = crush_provider
+                self.status.log(
+                    f"Added {crush_provider_name} provider to Crush configuration{'(default)' if is_default_provider else ''}"
+                )
+        if cubbi_config.defaults.model:
+            provider_part, model_part = cubbi_config.defaults.model.split("/", 1)
+            config_data["models"] = {
+                "large": {"provider": provider_part, "model": model_part},
+                "small": {"provider": provider_part, "model": model_part},
+            }
+            self.status.log(f"Set default model to {cubbi_config.defaults.model}")
+            provider = cubbi_config.providers.get(provider_part)
+            if provider and provider.base_url:
+                config_data["providers"][provider_part]["models"].append(
+                    {"id": model_part, "name": model_part}
+                )
+        if not config_data["providers"]:
+            self.status.log(
+                "No providers configured, skipping Crush configuration file creation"
+            )
+            return True
         try:
             with config_file.open("w") as f:
                 json.dump(config_data, f, indent=2)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
-            self.status.log(f"Updated Crush configuration at {config_file}")
+            set_ownership(config_file)
+            self.status.log(
+                f"Created Crush configuration at {config_file} with {len(config_data['providers'])} providers"
+            )
             return True
         except Exception as e:
             self.status.log(f"Failed to write Crush configuration: {e}", "ERROR")
             return False

-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate Crush with available MCP servers"""
-        if mcp_config["count"] == 0:
+    def _integrate_mcp_servers(self) -> bool:
+        if not cubbi_config.mcps:
             self.status.log("No MCP servers to integrate")
             return True
-        # Ensure directory exists before writing
-        config_dir = self._ensure_user_config_dir()
+        config_dir = self.create_directory_with_ownership(self._get_user_config_path())
         if not config_dir.exists():
             self.status.log(
                 f"Config directory {config_dir} does not exist and could not be created",
@@ -124,7 +158,7 @@ class CrushPlugin(ToolPlugin):
             )
             return False

-        config_file = config_dir / "config.json"
+        config_file = config_dir / "crush.json"
         if config_file.exists():
             try:
@@ -132,35 +166,49 @@ class CrushPlugin(ToolPlugin):
                 config_data = json.load(f)
             except (json.JSONDecodeError, OSError) as e:
                 self.status.log(f"Failed to load existing config: {e}", "WARNING")
-                config_data = {}
+                config_data = {
+                    "$schema": "https://charm.land/crush.json",
+                    "providers": {},
+                }
         else:
-            config_data = {}
+            config_data = {"$schema": "https://charm.land/crush.json", "providers": {}}

-        if "mcp_servers" not in config_data:
-            config_data["mcp_servers"] = {}
+        if "mcps" not in config_data:
+            config_data["mcps"] = {}

-        for server in mcp_config["servers"]:
-            server_name = server["name"]
-            server_host = server["host"]
-            server_url = server["url"]
-            if server_name and server_host:
-                mcp_url = f"http://{server_host}:8080/sse"
-                self.status.log(f"Adding MCP server: {server_name} - {mcp_url}")
-                config_data["mcp_servers"][server_name] = {
-                    "uri": mcp_url,
-                    "type": server.get("type", "sse"),
-                    "enabled": True,
-                }
-            elif server_name and server_url:
-                self.status.log(
-                    f"Adding remote MCP server: {server_name} - {server_url}"
-                )
-                config_data["mcp_servers"][server_name] = {
-                    "uri": server_url,
-                    "type": server.get("type", "sse"),
-                    "enabled": True,
-                }
+        for mcp in cubbi_config.mcps:
+            if mcp.type == "remote":
+                if mcp.name and mcp.url:
+                    self.status.log(f"Adding remote MCP server: {mcp.name} - {mcp.url}")
+                    config_data["mcps"][mcp.name] = {
+                        "transport": {"type": "sse", "url": mcp.url},
+                        "enabled": True,
+                    }
+            elif mcp.type == "local":
+                if mcp.name and mcp.command:
+                    self.status.log(
+                        f"Adding local MCP server: {mcp.name} - {mcp.command}"
+                    )
+                    # Crush uses stdio type for local MCPs
+                    transport_config = {
+                        "type": "stdio",
+                        "command": mcp.command,
+                    }
+                    if mcp.args:
+                        transport_config["args"] = mcp.args
+                    if mcp.env:
+                        transport_config["env"] = mcp.env
+                    config_data["mcps"][mcp.name] = {
+                        "transport": transport_config,
+                        "enabled": True,
+                    }
+            elif mcp.type in ["docker", "proxy"]:
+                if mcp.name and mcp.host:
+                    mcp_port = mcp.port or 8080
+                    mcp_url = f"http://{mcp.host}:{mcp_port}/sse"
+                    self.status.log(f"Adding MCP server: {mcp.name} - {mcp_url}")
+                    config_data["mcps"][mcp.name] = {
+                        "transport": {"type": "sse", "url": mcp_url},
+                        "enabled": True,
+                    }
@@ -168,10 +216,15 @@ class CrushPlugin(ToolPlugin):
             with config_file.open("w") as f:
                 json.dump(config_data, f, indent=2)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)
+            self.status.log(
+                f"Integrated {len(cubbi_config.mcps)} MCP servers into Crush configuration"
+            )
             return True
         except Exception as e:
             self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
             return False

+PLUGIN_CLASS = CrushPlugin
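Putting the two passes together, an invented example of the crush.json that _setup_tool_configuration() and _integrate_mcp_servers() would leave behind for one anthropic provider (set as default, no base_url, so no "type" key per the mapping above) and one Docker-based MCP named fetch. All concrete values are made up.

import json

# Hypothetical end state of /home/cubbi/.config/crush/crush.json; the shape
# follows the plugin code above, the provider/model/MCP values are invented.
crush_config = {
    "$schema": "https://charm.land/crush.json",
    "providers": {
        "anthropic": {
            "api_key": "sk-ant-example",
            "models": [{"id": "claude-sonnet", "name": "claude-sonnet"}],
        },
    },
    "models": {
        "large": {"provider": "anthropic", "model": "claude-sonnet"},
        "small": {"provider": "anthropic", "model": "claude-sonnet"},
    },
    "mcps": {
        "fetch": {
            "transport": {"type": "sse", "url": "http://fetch:8080/sse"},
            "enabled": True,
        },
    },
}
print(json.dumps(crush_config, indent=2))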


@@ -3,49 +3,14 @@ description: Crush AI coding assistant environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-crush:latest
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment:
-  - name: OPENAI_API_KEY
-    description: OpenAI API key for crush
-    required: false
-    sensitive: true
-  - name: ANTHROPIC_API_KEY
-    description: Anthropic API key for crush
-    required: false
-    sensitive: true
-  - name: GROQ_API_KEY
-    description: Groq API key for crush
-    required: false
-    sensitive: true
-  - name: OPENAI_URL
-    description: Custom OpenAI-compatible API URL
-    required: false
-  - name: CUBBI_MODEL
-    description: AI model to use with crush
-    required: false
-  - name: CUBBI_PROVIDER
-    description: AI provider to use with crush
-    required: false
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/home/cubbi/.config/crush"
-    target: "/cubbi-config/crush-config"
-    type: "directory"
-    description: "Crush configuration directory"
-  - source: "/app/.crush"
-    target: "/cubbi-config/crush-app"
-    type: "directory"
-    description: "Crush application data and sessions"
+persistent_configs: []
+
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS


@@ -1,6 +1,6 @@
 #!/usr/bin/env -S uv run --script
 # /// script
-# dependencies = ["ruamel.yaml"]
+# dependencies = ["ruamel.yaml", "pydantic"]
 # ///
 """
 Standalone Cubbi initialization script
@@ -19,15 +19,104 @@ import sys
 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
 from pathlib import Path
-from typing import Any, Dict, List
+from typing import Any
+from pydantic import BaseModel
 from ruamel.yaml import YAML

-# Status Management
-class StatusManager:
-    """Manages initialization status and logging"""
+class UserConfig(BaseModel):
+    uid: int = 1000
+    gid: int = 1000
+
+class ProjectConfig(BaseModel):
+    url: str | None = None
+    config_dir: str | None = None
+    image_config_dir: str | None = None
+
+class PersistentLink(BaseModel):
+    source: str
+    target: str
+    type: str
+
+class ProviderConfig(BaseModel):
+    type: str
+    api_key: str
+    base_url: str | None = None
+    models: list[dict[str, str]] = []
+
+class MCPConfig(BaseModel):
+    name: str
+    type: str
+    host: str | None = None
+    port: int | None = None
+    url: str | None = None
+    headers: dict[str, str] | None = None
+    command: str | None = None
+    args: list[str] = []
+    env: dict[str, str] = {}
+
+class DefaultsConfig(BaseModel):
+    model: str | None = None
+
+class SSHConfig(BaseModel):
+    enabled: bool = False
+
+class CubbiConfig(BaseModel):
+    version: str = "1.0"
+    user: UserConfig = UserConfig()
+    providers: dict[str, ProviderConfig] = {}
+    mcps: list[MCPConfig] = []
+    project: ProjectConfig = ProjectConfig()
+    persistent_links: list[PersistentLink] = []
+    defaults: DefaultsConfig = DefaultsConfig()
+    ssh: SSHConfig = SSHConfig()
+    run_command: str | None = None
+    no_shell: bool = False
+
+    def get_provider_for_default_model(self) -> ProviderConfig | None:
+        if not self.defaults.model or "/" not in self.defaults.model:
+            return None
+        provider_name = self.defaults.model.split("/")[0]
+        return self.providers.get(provider_name)
+
+def load_cubbi_config() -> CubbiConfig:
+    config_path = Path("/cubbi/config.yaml")
+    if not config_path.exists():
+        return CubbiConfig()
+    yaml = YAML(typ="safe")
+    with open(config_path, "r") as f:
+        config_data = yaml.load(f) or {}
+    return CubbiConfig(**config_data)
+
+cubbi_config = load_cubbi_config()
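A small usage sketch of the models above (standalone, with an inline YAML document instead of /cubbi/config.yaml; all values invented), assuming the CubbiConfig models defined above are in scope. It shows how pydantic validates the parsed YAML and how get_provider_for_default_model() resolves the provider half of defaults.model.

import io
from ruamel.yaml import YAML

# Hypothetical standalone check of the CubbiConfig models defined above.
sample = """\
defaults:
  model: anthropic/claude-sonnet
providers:
  anthropic:
    type: anthropic
    api_key: sk-ant-example
"""
data = YAML(typ="safe").load(io.StringIO(sample)) or {}
config = CubbiConfig(**data)
provider = config.get_provider_for_default_model()
assert provider is not None and provider.type == "anthropic"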
def get_user_ids() -> tuple[int, int]:
return cubbi_config.user.uid, cubbi_config.user.gid
def set_ownership(path: Path) -> None:
user_id, group_id = get_user_ids()
try:
os.chown(path, user_id, group_id)
except OSError:
pass
class StatusManager:
def __init__( def __init__(
self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status" self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status"
): ):
@@ -36,12 +125,10 @@ class StatusManager:
self._setup_logging() self._setup_logging()
def _setup_logging(self) -> None: def _setup_logging(self) -> None:
"""Set up logging to both stdout and log file"""
self.log_file.touch(exist_ok=True) self.log_file.touch(exist_ok=True)
self.set_status(False) self.set_status(False)
def log(self, message: str, level: str = "INFO") -> None: def log(self, message: str, level: str = "INFO") -> None:
"""Log a message with timestamp"""
print(message) print(message)
sys.stdout.flush() sys.stdout.flush()
@@ -50,25 +137,19 @@ class StatusManager:
f.flush() f.flush()
def set_status(self, complete: bool) -> None: def set_status(self, complete: bool) -> None:
"""Set initialization completion status"""
status = "true" if complete else "false" status = "true" if complete else "false"
with open(self.status_file, "w") as f: with open(self.status_file, "w") as f:
f.write(f"INIT_COMPLETE={status}\n") f.write(f"INIT_COMPLETE={status}\n")
def start_initialization(self) -> None: def start_initialization(self) -> None:
"""Mark initialization as started"""
self.set_status(False) self.set_status(False)
def complete_initialization(self) -> None: def complete_initialization(self) -> None:
"""Mark initialization as completed"""
self.set_status(True) self.set_status(True)
# Configuration Management
@dataclass @dataclass
class PersistentConfig: class PersistentConfig:
"""Persistent configuration mapping"""
source: str source: str
target: str target: str
type: str = "directory" type: str = "directory"
@@ -77,25 +158,21 @@ class PersistentConfig:
@dataclass @dataclass
class ImageConfig: class ImageConfig:
"""Cubbi image configuration"""
name: str name: str
description: str description: str
version: str version: str
maintainer: str maintainer: str
image: str image: str
persistent_configs: List[PersistentConfig] = field(default_factory=list) persistent_configs: list[PersistentConfig] = field(default_factory=list)
environments_to_forward: list[str] = field(default_factory=list)
class ConfigParser: class ConfigParser:
"""Parses Cubbi image configuration and environment variables"""
def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"): def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"):
self.config_file = Path(config_file) self.config_file = Path(config_file)
self.environment: Dict[str, str] = dict(os.environ) self.environment: dict[str, str] = dict(os.environ)
def load_image_config(self) -> ImageConfig: def load_image_config(self) -> ImageConfig:
"""Load and parse the cubbi_image.yaml configuration"""
if not self.config_file.exists(): if not self.config_file.exists():
raise FileNotFoundError(f"Configuration file not found: {self.config_file}") raise FileNotFoundError(f"Configuration file not found: {self.config_file}")
@@ -103,7 +180,6 @@ class ConfigParser:
with open(self.config_file, "r") as f: with open(self.config_file, "r") as f:
config_data = yaml.load(f) config_data = yaml.load(f)
# Parse persistent configurations
persistent_configs = [] persistent_configs = []
for pc_data in config_data.get("persistent_configs", []): for pc_data in config_data.get("persistent_configs", []):
persistent_configs.append(PersistentConfig(**pc_data)) persistent_configs.append(PersistentConfig(**pc_data))
@@ -115,48 +191,16 @@ class ConfigParser:
maintainer=config_data["maintainer"], maintainer=config_data["maintainer"],
image=config_data["image"], image=config_data["image"],
persistent_configs=persistent_configs, persistent_configs=persistent_configs,
environments_to_forward=config_data.get("environments_to_forward", []),
) )
def get_cubbi_config(self) -> Dict[str, Any]:
"""Get standard Cubbi configuration from environment"""
return {
"user_id": int(self.environment.get("CUBBI_USER_ID", "1000")),
"group_id": int(self.environment.get("CUBBI_GROUP_ID", "1000")),
"run_command": self.environment.get("CUBBI_RUN_COMMAND"),
"no_shell": self.environment.get("CUBBI_NO_SHELL", "false").lower()
== "true",
"config_dir": self.environment.get("CUBBI_CONFIG_DIR", "/cubbi-config"),
"persistent_links": self.environment.get("CUBBI_PERSISTENT_LINKS", ""),
}
def get_mcp_config(self) -> Dict[str, Any]:
"""Get MCP server configuration from environment"""
mcp_count = int(self.environment.get("MCP_COUNT", "0"))
mcp_servers = []
for idx in range(mcp_count):
server = {
"name": self.environment.get(f"MCP_{idx}_NAME"),
"type": self.environment.get(f"MCP_{idx}_TYPE"),
"host": self.environment.get(f"MCP_{idx}_HOST"),
"url": self.environment.get(f"MCP_{idx}_URL"),
}
if server["name"]: # Only add if name is present
mcp_servers.append(server)
return {"count": mcp_count, "servers": mcp_servers}
# Core Management Classes
class UserManager: class UserManager:
"""Manages user and group creation/modification in containers"""
def __init__(self, status: StatusManager): def __init__(self, status: StatusManager):
self.status = status self.status = status
self.username = "cubbi" self.username = "cubbi"
def _run_command(self, cmd: list[str]) -> bool: def _run_command(self, cmd: list[str]) -> bool:
"""Run a system command and log the result"""
try: try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True) result = subprocess.run(cmd, capture_output=True, text=True, check=True)
if result.stdout: if result.stdout:
@@ -168,12 +212,10 @@ class UserManager:
return False return False
def setup_user_and_group(self, user_id: int, group_id: int) -> bool: def setup_user_and_group(self, user_id: int, group_id: int) -> bool:
"""Set up user and group with specified IDs"""
self.status.log( self.status.log(
f"Setting up user '{self.username}' with UID: {user_id}, GID: {group_id}" f"Setting up user '{self.username}' with UID: {user_id}, GID: {group_id}"
) )
# Handle group creation/modification
try: try:
existing_group = grp.getgrnam(self.username) existing_group = grp.getgrnam(self.username)
if existing_group.gr_gid != group_id: if existing_group.gr_gid != group_id:
@@ -185,10 +227,7 @@ class UserManager:
): ):
return False return False
except KeyError: except KeyError:
if not self._run_command(["groupadd", "-g", str(group_id), self.username]): self._run_command(["groupadd", "-g", str(group_id), self.username])
return False
# Handle user creation/modification
try: try:
existing_user = pwd.getpwnam(self.username) existing_user = pwd.getpwnam(self.username)
if existing_user.pw_uid != user_id or existing_user.pw_gid != group_id: if existing_user.pw_uid != user_id or existing_user.pw_gid != group_id:
@@ -222,7 +261,6 @@ class UserManager:
): ):
return False return False
# Create the sudoers file entry for the 'cubbi' user
sudoers_command = [ sudoers_command = [
"sh", "sh",
"-c", "-c",
@@ -236,15 +274,12 @@ class UserManager:
class DirectoryManager: class DirectoryManager:
"""Manages directory creation and permission setup"""
def __init__(self, status: StatusManager): def __init__(self, status: StatusManager):
self.status = status self.status = status
def create_directory( def create_directory(
self, path: str, user_id: int, group_id: int, mode: int = 0o755 self, path: str, user_id: int, group_id: int, mode: int = 0o755
) -> bool: ) -> bool:
"""Create a directory with proper ownership and permissions"""
dir_path = Path(path) dir_path = Path(path)
try: try:
@@ -260,7 +295,6 @@ class DirectoryManager:
return False return False
def setup_standard_directories(self, user_id: int, group_id: int) -> bool: def setup_standard_directories(self, user_id: int, group_id: int) -> bool:
"""Set up standard Cubbi directories"""
directories = [ directories = [
("/app", 0o755), ("/app", 0o755),
("/cubbi-config", 0o755), ("/cubbi-config", 0o755),
@@ -317,7 +351,6 @@ class DirectoryManager:
return success return success
def _chown_recursive(self, path: Path, user_id: int, group_id: int) -> None: def _chown_recursive(self, path: Path, user_id: int, group_id: int) -> None:
"""Recursively change ownership of a directory"""
try: try:
os.chown(path, user_id, group_id) os.chown(path, user_id, group_id)
for item in path.iterdir(): for item in path.iterdir():
@@ -332,15 +365,12 @@ class DirectoryManager:
class ConfigManager: class ConfigManager:
"""Manages persistent configuration symlinks and mappings"""
def __init__(self, status: StatusManager): def __init__(self, status: StatusManager):
self.status = status self.status = status
def create_symlink( def create_symlink(
self, source_path: str, target_path: str, user_id: int, group_id: int self, source_path: str, target_path: str, user_id: int, group_id: int
) -> bool: ) -> bool:
"""Create a symlink with proper ownership"""
try: try:
source = Path(source_path) source = Path(source_path)
@@ -367,7 +397,6 @@ class ConfigManager:
def _ensure_target_directory( def _ensure_target_directory(
self, target_path: str, user_id: int, group_id: int self, target_path: str, user_id: int, group_id: int
) -> bool: ) -> bool:
"""Ensure the target directory exists with proper ownership"""
try: try:
target_dir = Path(target_path) target_dir = Path(target_path)
if not target_dir.exists(): if not target_dir.exists():
@@ -385,9 +414,8 @@ class ConfigManager:
return False return False
def setup_persistent_configs( def setup_persistent_configs(
self, persistent_configs: List[PersistentConfig], user_id: int, group_id: int self, persistent_configs: list[PersistentConfig], user_id: int, group_id: int
) -> bool: ) -> bool:
"""Set up persistent configuration symlinks from image config"""
if not persistent_configs: if not persistent_configs:
self.status.log("No persistent configurations defined in image config") self.status.log("No persistent configurations defined in image config")
return True return True
@@ -404,16 +432,22 @@ class ConfigManager:
return success return success
def setup_persistent_link(
self, source: str, target: str, link_type: str, user_id: int, group_id: int
) -> bool:
"""Setup a single persistent link"""
if not self._ensure_target_directory(target, user_id, group_id):
return False
return self.create_symlink(source, target, user_id, group_id)
class CommandManager: class CommandManager:
"""Manages command execution and user switching"""
def __init__(self, status: StatusManager): def __init__(self, status: StatusManager):
self.status = status self.status = status
self.username = "cubbi" self.username = "cubbi"
def run_as_user(self, command: List[str], user: str = None) -> int: def run_as_user(self, command: list[str], user: str = None) -> int:
"""Run a command as the specified user using gosu"""
if user is None: if user is None:
user = self.username user = self.username
@@ -428,15 +462,13 @@ class CommandManager:
return 1 return 1
def run_user_command(self, command: str) -> int: def run_user_command(self, command: str) -> int:
"""Run user-specified command as cubbi user"""
if not command: if not command:
return 0 return 0
self.status.log(f"Executing user command: {command}") self.status.log(f"Executing user command: {command}")
return self.run_as_user(["sh", "-c", command]) return self.run_as_user(["sh", "-c", command])
def exec_as_user(self, args: List[str]) -> None: def exec_as_user(self, args: list[str]) -> None:
"""Execute the final command as cubbi user (replaces current process)"""
if not args: if not args:
args = ["tail", "-f", "/dev/null"] args = ["tail", "-f", "/dev/null"]
@@ -451,31 +483,119 @@ class CommandManager:
         sys.exit(1)


-# Tool Plugin System
 class ToolPlugin(ABC):
-    """Base class for tool-specific initialization plugins"""
-
-    def __init__(self, status: StatusManager, config: Dict[str, Any]):
+    def __init__(self, status: StatusManager, config: dict[str, Any]):
         self.status = status
         self.config = config

     @property
     @abstractmethod
     def tool_name(self) -> str:
-        """Return the name of the tool this plugin supports"""
         pass

+    def create_directory_with_ownership(self, path: Path) -> Path:
+        try:
+            path.mkdir(parents=True, exist_ok=True)
+            set_ownership(path)
+            # Also set ownership on parent directories if they were created
+            parent = path.parent
+            if parent.exists() and parent != Path("/"):
+                set_ownership(parent)
+        except OSError as e:
+            self.status.log(f"Failed to create directory {path}: {e}", "ERROR")
+        return path
+
+    @abstractmethod
+    def is_already_configured(self) -> bool:
+        pass
+
     @abstractmethod
-    def initialize(self) -> bool:
-        """Main tool initialization logic"""
+    def configure(self) -> bool:
         pass

-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate with available MCP servers"""
-        return True
+    def get_resolved_model(self) -> dict[str, Any] | None:
+        model_spec = os.environ.get("CUBBI_MODEL_SPEC", "")
+        if not model_spec:
+            return None
+
+        # Parse provider/model format
+        if "/" in model_spec:
+            provider_name, model_name = model_spec.split("/", 1)
+        else:
+            # Legacy format - treat as provider name
+            provider_name = model_spec
+            model_name = ""
+
+        # Get provider type from CUBBI_PROVIDER env var
+        provider_type = os.environ.get("CUBBI_PROVIDER", provider_name)
+
+        # Get base URL if available (for OpenAI-compatible providers)
+        base_url = None
+        if provider_type == "openai":
+            base_url = os.environ.get("OPENAI_URL")
+
+        return {
+            "provider_name": provider_name,
+            "provider_type": provider_type,
+            "model_name": model_name,
+            "base_url": base_url,
+            "model_spec": model_spec,
+        }
+
+    def get_provider_config(self, provider_name: str) -> dict[str, str]:
+        provider_config = {}
+
+        # Map provider names to their environment variables
+        if provider_name == "anthropic" or provider_name.startswith("anthropic"):
+            api_key = os.environ.get("ANTHROPIC_API_KEY")
+            if api_key:
+                provider_config["ANTHROPIC_API_KEY"] = api_key
+        elif provider_name == "openai" or provider_name.startswith("openai"):
+            api_key = os.environ.get("OPENAI_API_KEY")
+            base_url = os.environ.get("OPENAI_URL")
+            if api_key:
+                provider_config["OPENAI_API_KEY"] = api_key
+            if base_url:
+                provider_config["OPENAI_URL"] = base_url
+        elif provider_name == "google" or provider_name.startswith("google"):
+            api_key = os.environ.get("GOOGLE_API_KEY")
+            if api_key:
+                provider_config["GOOGLE_API_KEY"] = api_key
+        elif provider_name == "openrouter" or provider_name.startswith("openrouter"):
+            api_key = os.environ.get("OPENROUTER_API_KEY")
+            if api_key:
+                provider_config["OPENROUTER_API_KEY"] = api_key
+
+        return provider_config
+
+    def get_all_providers_config(self) -> dict[str, dict[str, str]]:
+        all_providers = {}
+
+        # Check for each standard provider
+        standard_providers = ["anthropic", "openai", "google", "openrouter"]
+        for provider_name in standard_providers:
+            provider_config = self.get_provider_config(provider_name)
+            if provider_config:  # Only include providers with API keys
+                all_providers[provider_name] = provider_config
+
+        # Also check for custom OpenAI-compatible providers
+        # These would have been set up with custom names but use OpenAI env vars
+        openai_config = self.get_provider_config("openai")
+        if openai_config and "OPENAI_URL" in openai_config:
+            # This might be a custom provider - we could check for custom naming
+            # but for now, we'll just include it as openai
+            pass
+
+        return all_providers
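As an aside, a minimal sketch of what the new get_resolved_model() returns; the environment values here are hypothetical, not taken from this diff:

import os

# Hypothetical session environment for a custom OpenAI-compatible provider
os.environ["CUBBI_MODEL_SPEC"] = "litellm/gpt-4o"
os.environ["CUBBI_PROVIDER"] = "openai"
os.environ["OPENAI_URL"] = "https://litellm.example.com"

spec = os.environ["CUBBI_MODEL_SPEC"]
provider_name, model_name = spec.split("/", 1)  # ("litellm", "gpt-4o")
# get_resolved_model() would then return:
# {"provider_name": "litellm", "provider_type": "openai", "model_name": "gpt-4o",
#  "base_url": "https://litellm.example.com", "model_spec": "litellm/gpt-4o"}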
-# Main Initializer
 class CubbiInitializer:
     """Main Cubbi initialization orchestrator"""
@@ -487,28 +607,24 @@ class CubbiInitializer:
         self.config_manager = ConfigManager(self.status)
         self.command_manager = CommandManager(self.status)

-    def run_initialization(self, final_args: List[str]) -> None:
+    def run_initialization(self, final_args: list[str]) -> None:
         """Run the complete initialization process"""
         try:
             self.status.start_initialization()

             # Load configuration
             image_config = self.config_parser.load_image_config()
-            cubbi_config = self.config_parser.get_cubbi_config()
-            mcp_config = self.config_parser.get_mcp_config()

             self.status.log(f"Initializing {image_config.name} v{image_config.version}")

             # Core initialization
-            success = self._run_core_initialization(image_config, cubbi_config)
+            success = self._run_core_initialization(image_config)
             if not success:
                 self.status.log("Core initialization failed", "ERROR")
                 sys.exit(1)

             # Tool-specific initialization
-            success = self._run_tool_initialization(
-                image_config, cubbi_config, mcp_config
-            )
+            success = self._run_tool_initialization(image_config)
             if not success:
                 self.status.log("Tool initialization failed", "ERROR")
                 sys.exit(1)
@@ -517,16 +633,15 @@
             self.status.complete_initialization()

             # Handle commands
-            self._handle_command_execution(cubbi_config, final_args)
+            self._handle_command_execution(final_args)

         except Exception as e:
             self.status.log(f"Initialization failed with error: {e}", "ERROR")
             sys.exit(1)

-    def _run_core_initialization(self, image_config, cubbi_config) -> bool:
-        """Run core Cubbi initialization steps"""
-        user_id = cubbi_config["user_id"]
-        group_id = cubbi_config["group_id"]
+    def _run_core_initialization(self, image_config) -> bool:
+        user_id = cubbi_config.user.uid
+        group_id = cubbi_config.user.gid

         if not self.user_manager.setup_user_and_group(user_id, group_id):
             return False
@@ -534,25 +649,29 @@
         if not self.directory_manager.setup_standard_directories(user_id, group_id):
             return False

-        config_path = Path(cubbi_config["config_dir"])
-        if not config_path.exists():
-            self.status.log(f"Creating config directory: {cubbi_config['config_dir']}")
-            try:
-                config_path.mkdir(parents=True, exist_ok=True)
-                os.chown(cubbi_config["config_dir"], user_id, group_id)
-            except Exception as e:
-                self.status.log(f"Failed to create config directory: {e}", "ERROR")
-                return False
+        if cubbi_config.project.config_dir:
+            config_path = Path(cubbi_config.project.config_dir)
+            if not config_path.exists():
+                self.status.log(
+                    f"Creating config directory: {cubbi_config.project.config_dir}"
+                )
+                try:
+                    config_path.mkdir(parents=True, exist_ok=True)
+                    os.chown(cubbi_config.project.config_dir, user_id, group_id)
+                except Exception as e:
+                    self.status.log(f"Failed to create config directory: {e}", "ERROR")
+                    return False

-        if not self.config_manager.setup_persistent_configs(
-            image_config.persistent_configs, user_id, group_id
-        ):
-            return False
+        # Setup persistent configs
+        for link in cubbi_config.persistent_links:
+            if not self.config_manager.setup_persistent_link(
+                link.source, link.target, link.type, user_id, group_id
+            ):
+                return False

         return True

-    def _run_tool_initialization(self, image_config, cubbi_config, mcp_config) -> bool:
-        """Run tool-specific initialization"""
+    def _run_tool_initialization(self, image_config) -> bool:
         # Look for a tool-specific plugin file in the same directory
         plugin_name = image_config.name.lower().replace("-", "_")
         plugin_file = Path(__file__).parent / f"{plugin_name}_plugin.py"
@@ -571,44 +690,26 @@ class CubbiInitializer:
             plugin_module = importlib.util.module_from_spec(spec)
             spec.loader.exec_module(plugin_module)

-            # Find the plugin class (should inherit from ToolPlugin)
-            plugin_class = None
-            for attr_name in dir(plugin_module):
-                attr = getattr(plugin_module, attr_name)
-                if (
-                    isinstance(attr, type)
-                    and hasattr(attr, "tool_name")
-                    and hasattr(attr, "initialize")
-                    and attr_name != "ToolPlugin"
-                ):  # Skip the base class
-                    plugin_class = attr
-                    break
-
-            if not plugin_class:
+            # Get the plugin class from the standard export variable
+            if not hasattr(plugin_module, "PLUGIN_CLASS"):
                 self.status.log(
-                    f"No valid plugin class found in {plugin_file}", "ERROR"
+                    f"No PLUGIN_CLASS variable found in {plugin_file}", "ERROR"
                 )
                 return False

+            plugin_class = plugin_module.PLUGIN_CLASS
+
             # Instantiate and run the plugin
-            plugin = plugin_class(
-                self.status,
-                {
-                    "image_config": image_config,
-                    "cubbi_config": cubbi_config,
-                    "mcp_config": mcp_config,
-                },
-            )
+            plugin = plugin_class(self.status, {"image_config": image_config})

             self.status.log(f"Running {plugin.tool_name}-specific initialization")
-            if not plugin.initialize():
-                self.status.log(f"{plugin.tool_name} initialization failed", "ERROR")
-                return False
-
-            if not plugin.integrate_mcp_servers(mcp_config):
-                self.status.log(f"{plugin.tool_name} MCP integration failed", "ERROR")
-                return False
+            if not plugin.is_already_configured():
+                if not plugin.configure():
+                    self.status.log(f"{plugin.tool_name} configuration failed", "ERROR")
+                    return False
+            else:
+                self.status.log(f"{plugin.tool_name} is already configured, skipping")

             return True
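For reference, a minimal sketch of the PLUGIN_CLASS convention this hunk introduces; the plugin path is hypothetical, and status/image_config stand in for the initializer's own objects:

import importlib.util

spec = importlib.util.spec_from_file_location("demo_plugin", "/cubbi/demo_plugin.py")  # hypothetical path
plugin_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(plugin_module)

plugin = plugin_module.PLUGIN_CLASS(status, {"image_config": image_config})
if not plugin.is_already_configured():  # skip work if config is already present
    plugin.configure()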
@@ -618,22 +719,19 @@
             )
             return False

-    def _handle_command_execution(self, cubbi_config, final_args):
-        """Handle command execution"""
+    def _handle_command_execution(self, final_args):
         exit_code = 0
-        if cubbi_config["run_command"]:
+        if cubbi_config.run_command:
             self.status.log("--- Executing initial command ---")
-            exit_code = self.command_manager.run_user_command(
-                cubbi_config["run_command"]
-            )
+            exit_code = self.command_manager.run_user_command(cubbi_config.run_command)
             self.status.log(
                 f"--- Initial command finished (exit code: {exit_code}) ---"
             )

-        if cubbi_config["no_shell"]:
+        if cubbi_config.no_shell:
             self.status.log(
-                "--- CUBBI_NO_SHELL=true, exiting container without starting shell ---"
+                "--- no_shell=true, exiting container without starting shell ---"
             )
             sys.exit(exit_code)
@@ -641,7 +739,6 @@

 def main() -> int:
-    """Main CLI entry point"""
     import argparse

     parser = argparse.ArgumentParser(


@@ -3,33 +3,14 @@ description: Goose AI environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-goose:latest
-
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment:
-  - name: LANGFUSE_INIT_PROJECT_PUBLIC_KEY
-    description: Langfuse public key
-    required: false
-    sensitive: true
-
-  - name: LANGFUSE_INIT_PROJECT_SECRET_KEY
-    description: Langfuse secret key
-    required: false
-    sensitive: true
-
-  - name: LANGFUSE_URL
-    description: Langfuse API URL
-    required: false
-    default: https://cloud.langfuse.com
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/app/.goose"
-    target: "/cubbi-config/goose-app"
-    type: "directory"
-    description: "Goose memory"
+persistent_configs: []
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS


@@ -1,75 +1,80 @@
 #!/usr/bin/env python3
-"""
-Goose-specific plugin for Cubbi initialization
-"""

 import os
 from pathlib import Path
-from typing import Any, Dict

-from cubbi_init import ToolPlugin
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership
 from ruamel.yaml import YAML


 class GoosePlugin(ToolPlugin):
-    """Plugin for Goose AI tool initialization"""
-
     @property
     def tool_name(self) -> str:
         return "goose"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
+    def is_already_configured(self) -> bool:
+        config_file = Path("/home/cubbi/.config/goose/config.yaml")
+        return config_file.exists()

-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
+    def configure(self) -> bool:
+        self._ensure_user_config_dir()
+        if not self.setup_tool_configuration():
+            return False
+        return self.integrate_mcp_servers()

     def _get_user_config_path(self) -> Path:
-        """Get the correct config path for the cubbi user"""
         return Path("/home/cubbi/.config/goose")

     def _ensure_user_config_dir(self) -> Path:
-        """Ensure config directory exists with correct ownership"""
         config_dir = self._get_user_config_path()
+        return self.create_directory_with_ownership(config_dir)

-        # Create the full directory path
-        try:
-            config_dir.mkdir(parents=True, exist_ok=True)
-        except FileExistsError:
-            # Directory already exists, which is fine
-            pass
-        except OSError as e:
-            self.status.log(
-                f"Failed to create config directory {config_dir}: {e}", "ERROR"
-            )
-            return config_dir
-
-        # Set ownership for the directories
-        config_parent = config_dir.parent
-        if config_parent.exists():
-            self._set_ownership(config_parent)
-        if config_dir.exists():
-            self._set_ownership(config_dir)
-
-        return config_dir
-
-    def initialize(self) -> bool:
-        """Initialize Goose configuration"""
-        self._ensure_user_config_dir()
-        return self.setup_tool_configuration()
+    def _write_env_vars_to_profile(self, env_vars: dict) -> None:
+        try:
+            profile_path = Path("/home/cubbi/.bashrc")
+
+            env_section_start = "# CUBBI GOOSE ENVIRONMENT VARIABLES"
+            env_section_end = "# END CUBBI GOOSE ENVIRONMENT VARIABLES"
+
+            if profile_path.exists():
+                with open(profile_path, "r") as f:
+                    lines = f.readlines()
+            else:
+                lines = []
+
+            new_lines = []
+            skip_section = False
+            for line in lines:
+                if env_section_start in line:
+                    skip_section = True
+                elif env_section_end in line:
+                    skip_section = False
+                    continue
+                elif not skip_section:
+                    new_lines.append(line)
+
+            if env_vars:
+                new_lines.append(f"\n{env_section_start}\n")
+                for key, value in env_vars.items():
+                    new_lines.append(f'export {key}="{value}"\n')
+                new_lines.append(f"{env_section_end}\n")
+
+            profile_path.parent.mkdir(parents=True, exist_ok=True)
+            with open(profile_path, "w") as f:
+                f.writelines(new_lines)
+            set_ownership(profile_path)
+
+            self.status.log(
+                f"Updated shell profile with {len(env_vars)} environment variables"
+            )
+        except Exception as e:
+            self.status.log(
+                f"Failed to write environment variables to profile: {e}", "ERROR"
+            )

     def setup_tool_configuration(self) -> bool:
-        """Set up Goose configuration file"""
-        # Ensure directory exists before writing
         config_dir = self._ensure_user_config_dir()
         if not config_dir.exists():
             self.status.log(
@@ -99,31 +104,58 @@ class GoosePlugin(ToolPlugin):
                 "type": "builtin",
             }

-        # Update with environment variables
-        goose_model = os.environ.get("CUBBI_MODEL")
-        goose_provider = os.environ.get("CUBBI_PROVIDER")
-
-        if goose_model:
-            config_data["GOOSE_MODEL"] = goose_model
-            self.status.log(f"Set GOOSE_MODEL to {goose_model}")
-
-        if goose_provider:
-            config_data["GOOSE_PROVIDER"] = goose_provider
-            self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")
-
-            # If provider is OpenAI and OPENAI_URL is set, configure OPENAI_HOST
-            if goose_provider.lower() == "openai":
-                openai_url = os.environ.get("OPENAI_URL")
-                if openai_url:
-                    config_data["OPENAI_HOST"] = openai_url
-                    self.status.log(f"Set OPENAI_HOST to {openai_url}")
+        # Configure Goose with the default model
+        provider_config = cubbi_config.get_provider_for_default_model()
+        if provider_config and cubbi_config.defaults.model:
+            _, model_name = cubbi_config.defaults.model.split("/", 1)
+
+            # Set Goose model and provider
+            config_data["GOOSE_MODEL"] = model_name
+            config_data["GOOSE_PROVIDER"] = provider_config.type
+
+            # Set ONLY the specific API key for the selected provider
+            # Set both in current process AND in shell environment file
+            env_vars_to_set = {}
+
+            if provider_config.type == "anthropic" and provider_config.api_key:
+                env_vars_to_set["ANTHROPIC_API_KEY"] = provider_config.api_key
+                self.status.log("Set Anthropic API key for goose")
+            elif provider_config.type == "openai" and provider_config.api_key:
+                # For OpenAI-compatible providers (including litellm), goose expects OPENAI_API_KEY
+                env_vars_to_set["OPENAI_API_KEY"] = provider_config.api_key
+                self.status.log("Set OpenAI API key for goose")
+
+                # Set base URL for OpenAI-compatible providers in both env and config
+                if provider_config.base_url:
+                    env_vars_to_set["OPENAI_BASE_URL"] = provider_config.base_url
+                    config_data["OPENAI_HOST"] = provider_config.base_url
+                    self.status.log(
+                        f"Set OPENAI_BASE_URL and OPENAI_HOST to {provider_config.base_url}"
+                    )
+            elif provider_config.type == "google" and provider_config.api_key:
+                env_vars_to_set["GOOGLE_API_KEY"] = provider_config.api_key
+                self.status.log("Set Google API key for goose")
+            elif provider_config.type == "openrouter" and provider_config.api_key:
+                env_vars_to_set["OPENROUTER_API_KEY"] = provider_config.api_key
+                self.status.log("Set OpenRouter API key for goose")
+
+            # Set environment variables for current process (for --run commands)
+            for key, value in env_vars_to_set.items():
+                os.environ[key] = value
+
+            # Write environment variables to shell profile for interactive sessions
+            self._write_env_vars_to_profile(env_vars_to_set)
+
+            self.status.log(
+                f"Configured Goose: model={model_name}, provider={provider_config.type}"
+            )
+        else:
+            self.status.log("No default model or provider configured", "WARNING")

         try:
             with config_file.open("w") as f:
                 yaml.dump(config_data, f)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)
             self.status.log(f"Updated Goose configuration at {config_file}")
             return True
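A hedged sketch of the config.yaml this method would now write for an OpenAI-compatible default model (values hypothetical; the real file also carries the builtin extension entry seen above):

from ruamel.yaml import YAML

config_data = {
    "GOOSE_MODEL": "gpt-4o",                       # from defaults.model "litellm/gpt-4o"
    "GOOSE_PROVIDER": "openai",                    # provider_config.type
    "OPENAI_HOST": "https://litellm.example.com",  # provider_config.base_url
}
with open("/tmp/goose-config.yaml", "w") as f:
    YAML().dump(config_data, f)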
@@ -131,13 +163,11 @@ class GoosePlugin(ToolPlugin):
             self.status.log(f"Failed to write Goose configuration: {e}", "ERROR")
             return False

-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate Goose with available MCP servers"""
-        if mcp_config["count"] == 0:
+    def integrate_mcp_servers(self) -> bool:
+        if not cubbi_config.mcps:
             self.status.log("No MCP servers to integrate")
             return True

-        # Ensure directory exists before writing
         config_dir = self._ensure_user_config_dir()
         if not config_dir.exists():
             self.status.log(
@@ -158,34 +188,46 @@ class GoosePlugin(ToolPlugin):
         if "extensions" not in config_data:
             config_data["extensions"] = {}

-        for server in mcp_config["servers"]:
-            server_name = server["name"]
-            server_host = server["host"]
-            server_url = server["url"]
-
-            if server_name and server_host:
-                mcp_url = f"http://{server_host}:8080/sse"
-                self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
-                config_data["extensions"][server_name] = {
-                    "enabled": True,
-                    "name": server_name,
-                    "timeout": 60,
-                    "type": server.get("type", "sse"),
-                    "uri": mcp_url,
-                    "envs": {},
-                }
-            elif server_name and server_url:
-                self.status.log(
-                    f"Adding remote MCP extension: {server_name} - {server_url}"
-                )
-                config_data["extensions"][server_name] = {
-                    "enabled": True,
-                    "name": server_name,
-                    "timeout": 60,
-                    "type": server.get("type", "sse"),
-                    "uri": server_url,
-                    "envs": {},
-                }
+        for mcp in cubbi_config.mcps:
+            if mcp.type == "remote":
+                if mcp.name and mcp.url:
+                    self.status.log(
+                        f"Adding remote MCP extension: {mcp.name} - {mcp.url}"
+                    )
+                    config_data["extensions"][mcp.name] = {
+                        "enabled": True,
+                        "name": mcp.name,
+                        "timeout": 60,
+                        "type": "sse",
+                        "uri": mcp.url,
+                        "envs": {},
+                    }
+            elif mcp.type == "local":
+                if mcp.name and mcp.command:
+                    self.status.log(
+                        f"Adding local MCP extension: {mcp.name} - {mcp.command}"
+                    )
+                    # Goose uses stdio type for local MCPs
+                    config_data["extensions"][mcp.name] = {
+                        "enabled": True,
+                        "name": mcp.name,
+                        "timeout": 60,
+                        "type": "stdio",
+                        "command": mcp.command,
+                        "args": mcp.args if mcp.args else [],
+                        "envs": mcp.env if mcp.env else {},
+                    }
+            elif mcp.type in ["docker", "proxy"]:
+                if mcp.name and mcp.host:
+                    mcp_port = mcp.port or 8080
+                    mcp_url = f"http://{mcp.host}:{mcp_port}/sse"
+                    self.status.log(f"Adding MCP extension: {mcp.name} - {mcp_url}")
+                    config_data["extensions"][mcp.name] = {
+                        "enabled": True,
+                        "name": mcp.name,
+                        "timeout": 60,
+                        "type": "sse",
+                        "uri": mcp_url,
+                        "envs": {},
+                    }
@@ -193,10 +235,12 @@ class GoosePlugin(ToolPlugin):
             with config_file.open("w") as f:
                 yaml.dump(config_data, f)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)
             return True
         except Exception as e:
             self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
             return False
+
+
+PLUGIN_CLASS = GoosePlugin
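Putting the three branches together, the resulting extensions section would look roughly like this (names, hosts, and commands hypothetical):

extensions = {
    "search": {  # remote MCP -> SSE
        "enabled": True, "name": "search", "timeout": 60,
        "type": "sse", "uri": "https://mcp.example.com/sse", "envs": {},
    },
    "files": {  # local MCP -> stdio
        "enabled": True, "name": "files", "timeout": 60,
        "type": "stdio", "command": "/usr/local/bin/files-mcp",
        "args": ["--root", "/app"], "envs": {"FILES_MCP_LOG": "info"},
    },
    "fetch": {  # docker/proxy MCP -> SSE on the container network
        "enabled": True, "name": "fetch", "timeout": 60,
        "type": "sse", "uri": "http://fetch:8080/sse", "envs": {},
    },
}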


@@ -3,18 +3,14 @@ description: Opencode AI environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-opencode:latest
-
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment: []
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/home/cubbi/.config/opencode"
-    target: "/cubbi-config/config-opencode"
-    type: "directory"
-    description: "Opencode configuration"
+persistent_configs: []
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS


@@ -1,268 +1,253 @@
 #!/usr/bin/env python3
-"""
-Opencode-specific plugin for Cubbi initialization
-"""

 import json
 import os
 from pathlib import Path
-from typing import Any, Dict

-from cubbi_init import ToolPlugin
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership

-# Map of environment variables to provider names in auth.json
-API_KEY_MAPPINGS = {
-    "ANTHROPIC_API_KEY": "anthropic",
-    "GOOGLE_API_KEY": "google",
-    "OPENAI_API_KEY": "openai",
-    "OPENROUTER_API_KEY": "openrouter",
-}
+# Standard providers that OpenCode supports natively
+STANDARD_PROVIDERS: list[str] = ["anthropic", "openai", "google", "openrouter"]

 class OpencodePlugin(ToolPlugin):
-    """Plugin for Opencode AI tool initialization"""
-
     @property
     def tool_name(self) -> str:
         return "opencode"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
-
-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
-
     def _get_user_config_path(self) -> Path:
-        """Get the correct config path for the cubbi user"""
         return Path("/home/cubbi/.config/opencode")

-    def _get_user_data_path(self) -> Path:
-        """Get the correct data path for the cubbi user"""
-        return Path("/home/cubbi/.local/share/opencode")
+    def is_already_configured(self) -> bool:
+        config_file = self._get_user_config_path() / "config.json"
+        return config_file.exists()

-    def _ensure_user_config_dir(self) -> Path:
-        """Ensure config directory exists with correct ownership"""
-        config_dir = self._get_user_config_path()
-
-        # Create the full directory path
-        try:
-            config_dir.mkdir(parents=True, exist_ok=True)
-        except FileExistsError:
-            # Directory already exists, which is fine
-            pass
-        except OSError as e:
-            self.status.log(
-                f"Failed to create config directory {config_dir}: {e}", "ERROR"
-            )
-            return config_dir
-
-        # Set ownership for the directories
-        config_parent = config_dir.parent
-        if config_parent.exists():
-            self._set_ownership(config_parent)
-        if config_dir.exists():
-            self._set_ownership(config_dir)
-
-        return config_dir
-
-    def _ensure_user_data_dir(self) -> Path:
-        """Ensure data directory exists with correct ownership"""
-        data_dir = self._get_user_data_path()
-
-        # Create the full directory path
-        try:
-            data_dir.mkdir(parents=True, exist_ok=True)
-        except FileExistsError:
-            # Directory already exists, which is fine
-            pass
-        except OSError as e:
-            self.status.log(f"Failed to create data directory {data_dir}: {e}", "ERROR")
-            return data_dir
-
-        # Set ownership for the directories
-        data_parent = data_dir.parent
-        if data_parent.exists():
-            self._set_ownership(data_parent)
-        if data_dir.exists():
-            self._set_ownership(data_dir)
-
-        return data_dir
-
-    def _create_auth_file(self) -> bool:
-        """Create auth.json file with configured API keys"""
-        # Ensure data directory exists
-        data_dir = self._ensure_user_data_dir()
-        if not data_dir.exists():
-            self.status.log(
-                f"Data directory {data_dir} does not exist and could not be created",
-                "ERROR",
-            )
-            return False
-
-        auth_file = data_dir / "auth.json"
-        auth_data = {}
-
-        # Check each API key and add to auth data if present
-        for env_var, provider in API_KEY_MAPPINGS.items():
-            api_key = os.environ.get(env_var)
-            if api_key:
-                auth_data[provider] = {"type": "api", "key": api_key}
-
-                # Add custom endpoint URL for OpenAI if available
-                if provider == "openai":
-                    openai_url = os.environ.get("OPENAI_URL")
-                    if openai_url:
-                        auth_data[provider]["baseURL"] = openai_url
-                        self.status.log(
-                            f"Added OpenAI custom endpoint URL: {openai_url}"
-                        )
-
-                self.status.log(f"Added {provider} API key to auth configuration")
-
-        # Only write file if we have at least one API key
-        if not auth_data:
-            self.status.log("No API keys found, skipping auth.json creation")
-            return True
-
-        try:
-            with auth_file.open("w") as f:
-                json.dump(auth_data, f, indent=2)
-
-            # Set ownership of the auth file to cubbi user
-            self._set_ownership(auth_file)
-
-            # Set secure permissions (readable only by owner)
-            auth_file.chmod(0o600)
-
-            self.status.log(f"Created OpenCode auth configuration at {auth_file}")
-            return True
-        except Exception as e:
-            self.status.log(f"Failed to create auth configuration: {e}", "ERROR")
-            return False
-
-    def initialize(self) -> bool:
-        """Initialize Opencode configuration"""
-        self._ensure_user_config_dir()
-
-        # Create auth.json file with API keys
-        auth_success = self._create_auth_file()
-
-        # Set up tool configuration
+    def configure(self) -> bool:
+        self.create_directory_with_ownership(self._get_user_config_path())
+
         config_success = self.setup_tool_configuration()
+        if not config_success:
+            return False

-        return auth_success and config_success
+        return self.integrate_mcp_servers()

     def setup_tool_configuration(self) -> bool:
-        """Set up Opencode configuration file"""
-        # Ensure directory exists before writing
-        config_dir = self._ensure_user_config_dir()
-        if not config_dir.exists():
-            self.status.log(
-                f"Config directory {config_dir} does not exist and could not be created",
-                "ERROR",
-            )
-            return False
-
+        config_dir = self._get_user_config_path()
         config_file = config_dir / "config.json"

-        # Load or initialize configuration
-        if config_file.exists():
-            with config_file.open("r") as f:
-                config_data = json.load(f) or {}
-        else:
-            config_data = {}
+        # Initialize configuration with schema
+        config_data: dict[str, str | dict[str, dict[str, str | dict[str, str]]]] = {
+            "$schema": "https://opencode.ai/config.json"
+        }

         # Set default theme to system
-        config_data.setdefault("theme", "system")
+        config_data["theme"] = "system"

-        # Update with environment variables
-        opencode_model = os.environ.get("CUBBI_MODEL")
-        opencode_provider = os.environ.get("CUBBI_PROVIDER")
-
-        if opencode_model and opencode_provider:
-            config_data["model"] = f"{opencode_provider}/{opencode_model}"
-            self.status.log(f"Set model to {config_data['model']}")
+        # Add providers configuration
+        config_data["provider"] = {}
+
+        # Configure all available providers
+        for provider_name, provider_config in cubbi_config.providers.items():
+            # Check if this is a custom provider (has baseURL)
+            if provider_config.base_url:
+                # Custom provider - include baseURL and name
+                models_dict = {}
+
+                # Add all models for any provider type that has models
+                if provider_config.models:
+                    for model in provider_config.models:
+                        model_id = model.get("id", "")
+                        if model_id:
+                            models_dict[model_id] = {"name": model_id}
+
+                provider_entry: dict[str, str | dict[str, str]] = {
+                    "options": {
+                        "apiKey": provider_config.api_key,
+                        "baseURL": provider_config.base_url,
+                    },
+                    "models": models_dict,
+                }
+
+                # Add npm package and name for custom providers
+                if provider_config.type in STANDARD_PROVIDERS:
+                    # Standard provider with custom URL - determine npm package
+                    if provider_config.type == "anthropic":
+                        provider_entry["npm"] = "@ai-sdk/anthropic"
+                        provider_entry["name"] = f"Anthropic ({provider_name})"
+                    elif provider_config.type == "openai":
+                        provider_entry["npm"] = "@ai-sdk/openai-compatible"
+                        provider_entry["name"] = f"OpenAI Compatible ({provider_name})"
+                    elif provider_config.type == "google":
+                        provider_entry["npm"] = "@ai-sdk/google"
+                        provider_entry["name"] = f"Google ({provider_name})"
+                    elif provider_config.type == "openrouter":
+                        provider_entry["npm"] = "@ai-sdk/openai-compatible"
+                        provider_entry["name"] = f"OpenRouter ({provider_name})"
+                else:
+                    # Non-standard provider with custom URL
+                    provider_entry["npm"] = "@ai-sdk/openai-compatible"
+                    provider_entry["name"] = provider_name.title()
+
+                config_data["provider"][provider_name] = provider_entry
+                if models_dict:
+                    self.status.log(
+                        f"Added {provider_name} custom provider with {len(models_dict)} models to OpenCode configuration"
+                    )
+                else:
+                    self.status.log(
+                        f"Added {provider_name} custom provider to OpenCode configuration"
+                    )
+            else:
+                # Standard provider without custom URL
+                if provider_config.type in STANDARD_PROVIDERS:
+                    # Populate models for any provider that has models
+                    models_dict = {}
+                    if provider_config.models:
+                        for model in provider_config.models:
+                            model_id = model.get("id", "")
+                            if model_id:
+                                models_dict[model_id] = {"name": model_id}
+
+                    config_data["provider"][provider_name] = {
+                        "options": {"apiKey": provider_config.api_key},
+                        "models": models_dict,
+                    }
+                    if models_dict:
+                        self.status.log(
+                            f"Added {provider_name} standard provider with {len(models_dict)} models to OpenCode configuration"
+                        )
+                    else:
+                        self.status.log(
+                            f"Added {provider_name} standard provider to OpenCode configuration"
+                        )
+
+        # Set default model
+        if cubbi_config.defaults.model:
+            config_data["model"] = cubbi_config.defaults.model
+            self.status.log(f"Set default model to {config_data['model']}")
+
+            # Add the default model to provider if it doesn't already have models
+            provider_name: str
+            model_name: str
+            provider_name, model_name = cubbi_config.defaults.model.split("/", 1)
+            if provider_name in config_data["provider"]:
+                provider_config = cubbi_config.providers.get(provider_name)
+                # Only add default model if provider doesn't already have models populated
+                if not (provider_config and provider_config.models):
+                    config_data["provider"][provider_name]["models"] = {
+                        model_name: {"name": model_name}
+                    }
+                    self.status.log(
+                        f"Added default model {model_name} to {provider_name} provider"
+                    )
+        else:
+            # Fallback to legacy environment variables
+            opencode_model: str | None = os.environ.get("CUBBI_MODEL")
+            opencode_provider: str | None = os.environ.get("CUBBI_PROVIDER")
+            if opencode_model and opencode_provider:
+                config_data["model"] = f"{opencode_provider}/{opencode_model}"
+                self.status.log(f"Set model to {config_data['model']} (legacy)")
+                # Add the legacy model to the provider if it exists
+                if opencode_provider in config_data["provider"]:
+                    config_data["provider"][opencode_provider]["models"] = {
+                        opencode_model: {"name": opencode_model}
+                    }
+
+        # Only write config if we have providers configured
+        if not config_data["provider"]:
+            self.status.log(
+                "No providers configured, using minimal OpenCode configuration"
+            )
+            config_data = {
+                "$schema": "https://opencode.ai/config.json",
+                "theme": "system",
+            }
         try:
             with config_file.open("w") as f:
                 json.dump(config_data, f, indent=2)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)

-            self.status.log(f"Updated Opencode configuration at {config_file}")
+            self.status.log(
+                f"Updated OpenCode configuration at {config_file} with {len(config_data.get('provider', {}))} providers"
+            )
             return True
         except Exception as e:
-            self.status.log(f"Failed to write Opencode configuration: {e}", "ERROR")
+            self.status.log(f"Failed to write OpenCode configuration: {e}", "ERROR")
             return False
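For orientation, a hedged sketch of one provider entry the loop above would emit for a custom OpenAI-compatible provider named "litellm" (key and URL hypothetical):

provider_entry = {
    "options": {
        "apiKey": "sk-...",                        # provider_config.api_key
        "baseURL": "https://litellm.example.com",  # provider_config.base_url
    },
    "models": {"gpt-4o": {"name": "gpt-4o"}},
    "npm": "@ai-sdk/openai-compatible",
    "name": "OpenAI Compatible (litellm)",
}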

-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate Opencode with available MCP servers"""
-        if mcp_config["count"] == 0:
+    def integrate_mcp_servers(self) -> bool:
+        if not cubbi_config.mcps:
             self.status.log("No MCP servers to integrate")
             return True

-        # Ensure directory exists before writing
-        config_dir = self._ensure_user_config_dir()
-        if not config_dir.exists():
-            self.status.log(
-                f"Config directory {config_dir} does not exist and could not be created",
-                "ERROR",
-            )
-            return False
-
+        config_dir = self._get_user_config_path()
         config_file = config_dir / "config.json"

         if config_file.exists():
             with config_file.open("r") as f:
-                config_data = json.load(f) or {}
+                config_data: dict[str, str | dict[str, dict[str, str]]] = (
+                    json.load(f) or {}
+                )
         else:
-            config_data = {}
+            config_data: dict[str, str | dict[str, dict[str, str]]] = {}

         if "mcp" not in config_data:
             config_data["mcp"] = {}

-        for server in mcp_config["servers"]:
-            server_name = server["name"]
-            server_host = server.get("host")
-            server_url = server.get("url")
-
-            if server_name and server_host:
-                mcp_url = f"http://{server_host}:8080/sse"
-                self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
-
-                config_data["mcp"][server_name] = {
-                    "type": "remote",
-                    "url": mcp_url,
-                }
-            elif server_name and server_url:
-                self.status.log(
-                    f"Adding remote MCP extension: {server_name} - {server_url}"
-                )
-                config_data["mcp"][server_name] = {
-                    "type": "remote",
-                    "url": server_url,
-                }
+        for mcp in cubbi_config.mcps:
+            if mcp.type == "remote":
+                if mcp.name and mcp.url:
+                    self.status.log(
+                        f"Adding remote MCP extension: {mcp.name} - {mcp.url}"
+                    )
+                    config_data["mcp"][mcp.name] = {
+                        "type": "remote",
+                        "url": mcp.url,
+                    }
+            elif mcp.type == "local":
+                if mcp.name and mcp.command:
+                    self.status.log(
+                        f"Adding local MCP extension: {mcp.name} - {mcp.command}"
+                    )
+                    # OpenCode expects command as an array with command and args combined
+                    command_array = [mcp.command]
+                    if mcp.args:
+                        command_array.extend(mcp.args)
+
+                    mcp_entry: dict[str, str | list[str] | bool | dict[str, str]] = {
+                        "type": "local",
+                        "command": command_array,
+                        "enabled": True,
+                    }
+                    if mcp.env:
+                        # OpenCode expects environment (not env)
+                        mcp_entry["environment"] = mcp.env
+                    config_data["mcp"][mcp.name] = mcp_entry
+            elif mcp.type in ["docker", "proxy"]:
+                if mcp.name and mcp.host:
+                    mcp_port: int = mcp.port or 8080
+                    mcp_url: str = f"http://{mcp.host}:{mcp_port}/sse"
+                    self.status.log(f"Adding MCP extension: {mcp.name} - {mcp_url}")
+                    config_data["mcp"][mcp.name] = {
+                        "type": "remote",
+                        "url": mcp_url,
+                    }

         try:
             with config_file.open("w") as f:
                 json.dump(config_data, f, indent=2)
-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)
             return True
         except Exception as e:
             self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
             return False
+
+
+PLUGIN_CLASS = OpencodePlugin
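And a hedged sketch of the resulting "mcp" section covering all three branches (names hypothetical):

mcp_section = {
    "search": {"type": "remote", "url": "https://mcp.example.com/sse"},
    "files": {
        "type": "local",
        "command": ["/usr/local/bin/files-mcp", "--root", "/app"],
        "enabled": True,
        "environment": {"FILES_MCP_LOG": "info"},  # only present when env vars are set
    },
    "fetch": {"type": "remote", "url": "http://fetch:8080/sse"},
}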


@@ -10,7 +10,7 @@ from typing import Any, Dict, List, Optional

 import docker
 from docker.errors import DockerException, ImageNotFound, NotFound

-from .models import DockerMCP, MCPContainer, MCPStatus, ProxyMCP, RemoteMCP
+from .models import DockerMCP, LocalMCP, MCPContainer, MCPStatus, ProxyMCP, RemoteMCP
 from .user_config import UserConfigManager

 # Configure logging
@@ -250,6 +250,56 @@ class MCPManager:
         return mcp_config

+    def add_local_mcp(
+        self,
+        name: str,
+        command: str,
+        args: List[str] = None,
+        env: Dict[str, str] = None,
+        add_as_default: bool = True,
+    ) -> Dict[str, Any]:
+        """Add a local MCP server.
+
+        Args:
+            name: Name of the MCP server
+            command: Path to executable
+            args: Command arguments
+            env: Environment variables to set for the command
+            add_as_default: Whether to add this MCP to the default MCPs list
+
+        Returns:
+            The MCP configuration dictionary
+        """
+        # Create the Local MCP configuration
+        local_mcp = LocalMCP(
+            name=name,
+            command=command,
+            args=args or [],
+            env=env or {},
+        )
+
+        # Add to the configuration
+        mcps = self.list_mcps()
+
+        # Remove existing MCP with the same name if it exists
+        mcps = [mcp for mcp in mcps if mcp.get("name") != name]
+
+        # Add the new MCP
+        mcp_config = local_mcp.model_dump()
+        mcps.append(mcp_config)
+
+        # Save the configuration
+        self.config_manager.set("mcps", mcps)
+
+        # Add to default MCPs if requested
+        if add_as_default:
+            default_mcps = self.config_manager.get("defaults.mcps", [])
+            if name not in default_mcps:
+                default_mcps.append(name)
+                self.config_manager.set("defaults.mcps", default_mcps)
+
+        return mcp_config
+
     def remove_mcp(self, name: str) -> bool:
         """Remove an MCP server configuration.
@@ -359,6 +409,14 @@ class MCPManager:
                 "type": "remote",
             }
+        elif mcp_type == "local":
+            # Local MCP servers don't need containers
+            return {
+                "status": "not_applicable",
+                "name": name,
+                "type": "local",
+            }
         elif mcp_type == "docker":
             # Pull the image if needed
             try:
@@ -637,8 +695,8 @@ ENTRYPOINT ["/entrypoint.sh"]
             )
             return True

-        # Remote MCPs don't have containers to stop
-        if mcp_config.get("type") == "remote":
+        # Remote and Local MCPs don't have containers to stop
+        if mcp_config.get("type") in ["remote", "local"]:
             return True

         # Get the container name
@@ -677,12 +735,12 @@ ENTRYPOINT ["/entrypoint.sh"]
         if not mcp_config:
             raise ValueError(f"MCP server '{name}' not found")

-        # Remote MCPs don't have containers to restart
-        if mcp_config.get("type") == "remote":
+        # Remote and Local MCPs don't have containers to restart
+        if mcp_config.get("type") in ["remote", "local"]:
             return {
                 "status": "not_applicable",
                 "name": name,
-                "type": "remote",
+                "type": mcp_config.get("type"),
             }

         # Get the container name
@@ -723,6 +781,16 @@ ENTRYPOINT ["/entrypoint.sh"]
                 "url": mcp_config.get("url"),
             }

+        # Local MCPs don't have containers
+        if mcp_config.get("type") == "local":
+            return {
+                "status": "not_applicable",
+                "name": name,
+                "type": "local",
+                "command": mcp_config.get("command"),
+                "args": mcp_config.get("args", []),
+            }
+
         # Get the container name
         container_name = self.get_mcp_container_name(name)
@@ -794,9 +862,11 @@ ENTRYPOINT ["/entrypoint.sh"]
         if not mcp_config:
             raise ValueError(f"MCP server '{name}' not found")

-        # Remote MCPs don't have logs
+        # Remote and Local MCPs don't have logs
         if mcp_config.get("type") == "remote":
             return "Remote MCPs don't have local logs"
+        if mcp_config.get("type") == "local":
+            return "Local MCPs don't have container logs"

         # Get the container name
         container_name = self.get_mcp_container_name(name)
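A minimal usage sketch for the new add_local_mcp(), assuming an existing MCPManager instance named manager (command and args hypothetical):

manager.add_local_mcp(
    name="files",
    command="/usr/local/bin/files-mcp",
    args=["--root", "/app"],
    env={"FILES_MCP_LOG": "info"},
    add_as_default=True,  # also appended to defaults.mcps
)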

cubbi/model_fetcher.py (new file, +248 lines)

@@ -0,0 +1,248 @@
"""
Model fetching utilities for OpenAI-compatible providers.
"""

import json
import logging
from typing import Dict, List, Optional

import requests

logger = logging.getLogger(__name__)


class ModelFetcher:
    """Fetches model lists from OpenAI-compatible API endpoints."""

    def __init__(self, timeout: int = 30):
        """Initialize the model fetcher.

        Args:
            timeout: Request timeout in seconds
        """
        self.timeout = timeout

    def fetch_models(
        self,
        base_url: str,
        api_key: Optional[str] = None,
        headers: Optional[Dict[str, str]] = None,
        provider_type: Optional[str] = None,
    ) -> List[Dict[str, str]]:
        """Fetch models from an OpenAI-compatible /v1/models endpoint.

        Args:
            base_url: Base URL of the provider (e.g., "https://api.openai.com" or "https://api.litellm.com")
            api_key: Optional API key for authentication
            headers: Optional additional headers
            provider_type: Optional provider type for authentication handling

        Returns:
            List of model dictionaries with 'id' and 'name' keys

        Raises:
            requests.RequestException: If the request fails
            ValueError: If the response format is invalid
        """
        # Construct the models endpoint URL
        models_url = self._build_models_url(base_url)

        # Prepare headers
        request_headers = self._build_headers(api_key, headers, provider_type)

        logger.info(f"Fetching models from {models_url}")

        try:
            response = requests.get(
                models_url, headers=request_headers, timeout=self.timeout
            )
            response.raise_for_status()

            # Parse JSON response
            data = response.json()

            # Handle provider-specific response formats
            if provider_type == "google":
                # Google uses {"models": [...]} format
                if not isinstance(data, dict) or "models" not in data:
                    raise ValueError(
                        f"Invalid Google response format: expected dict with 'models' key, got {type(data)}"
                    )
                models_data = data["models"]
            else:
                # OpenAI-compatible format uses {"data": [...]}
                if not isinstance(data, dict) or "data" not in data:
                    raise ValueError(
                        f"Invalid response format: expected dict with 'data' key, got {type(data)}"
                    )
                models_data = data["data"]

            if not isinstance(models_data, list):
                raise ValueError(
                    f"Invalid models data: expected list, got {type(models_data)}"
                )

            # Process models
            models = []
            for model_item in models_data:
                if not isinstance(model_item, dict):
                    continue

                # Handle provider-specific model ID fields
                if provider_type == "google":
                    # Google uses "name" field (e.g., "models/gemini-1.5-pro")
                    model_id = model_item.get("name", "")
                else:
                    # OpenAI-compatible uses "id" field
                    model_id = model_item.get("id", "")

                if not model_id:
                    continue

                # Skip models with * in their ID as requested
                if "*" in model_id:
                    logger.debug(f"Skipping model with wildcard: {model_id}")
                    continue

                # Create model entry
                model = {
                    "id": model_id,
                }
                models.append(model)

            logger.info(f"Successfully fetched {len(models)} models from {base_url}")
            return models

        except requests.exceptions.Timeout:
            logger.error(f"Request timed out after {self.timeout} seconds")
            raise requests.RequestException(f"Request to {models_url} timed out")
        except requests.exceptions.ConnectionError as e:
            logger.error(f"Connection error: {e}")
            raise requests.RequestException(f"Failed to connect to {models_url}")
        except requests.exceptions.HTTPError as e:
            logger.error(f"HTTP error {e.response.status_code}: {e}")
            if e.response.status_code == 401:
                raise requests.RequestException(
                    "Authentication failed: invalid API key"
                )
            elif e.response.status_code == 403:
                raise requests.RequestException(
                    "Access forbidden: check API key permissions"
                )
            else:
                raise requests.RequestException(
                    f"HTTP {e.response.status_code} error from {models_url}"
                )
        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse JSON response: {e}")
            raise ValueError(f"Invalid JSON response from {models_url}")

    def _build_models_url(self, base_url: str) -> str:
        """Build the models endpoint URL from a base URL.

        Args:
            base_url: Base URL of the provider

        Returns:
            Complete URL for the /v1/models endpoint
        """
        # Remove trailing slash if present
        base_url = base_url.rstrip("/")

        # Add /v1/models if not already present
        if not base_url.endswith("/v1/models"):
            if base_url.endswith("/v1"):
                base_url += "/models"
            else:
                base_url += "/v1/models"

        return base_url

    def _build_headers(
        self,
        api_key: Optional[str] = None,
        additional_headers: Optional[Dict[str, str]] = None,
        provider_type: Optional[str] = None,
    ) -> Dict[str, str]:
        """Build request headers.

        Args:
            api_key: Optional API key for authentication
            additional_headers: Optional additional headers
            provider_type: Provider type for specific auth handling

        Returns:
            Dictionary of headers
        """
        headers = {
            "Content-Type": "application/json",
            "Accept": "application/json",
        }

        # Add authentication header if API key is provided
        if api_key:
            if provider_type == "anthropic":
                # Anthropic uses x-api-key header
                headers["x-api-key"] = api_key
            elif provider_type == "google":
                # Google uses x-goog-api-key header
                headers["x-goog-api-key"] = api_key
            else:
                # Standard Bearer token for OpenAI, OpenRouter, and custom providers
                headers["Authorization"] = f"Bearer {api_key}"

        # Add any additional headers
        if additional_headers:
            headers.update(additional_headers)

        return headers


def fetch_provider_models(
    provider_config: Dict, timeout: int = 30
) -> List[Dict[str, str]]:
    """Convenience function to fetch models for a provider configuration.

    Args:
        provider_config: Provider configuration dictionary
        timeout: Request timeout in seconds

    Returns:
        List of model dictionaries

    Raises:
        ValueError: If provider is not supported or missing required fields
        requests.RequestException: If the request fails
    """
    import os

    from .config import PROVIDER_DEFAULT_URLS

    provider_type = provider_config.get("type", "")
    base_url = provider_config.get("base_url")
    api_key = provider_config.get("api_key", "")

    # Resolve environment variables in API key
    if api_key.startswith("${") and api_key.endswith("}"):
        env_var_name = api_key[2:-1]
        api_key = os.environ.get(env_var_name, "")

    # Determine base URL - use custom base_url or default for standard providers
    if base_url:
        # Custom provider with explicit base_url
        effective_base_url = base_url
    elif provider_type in PROVIDER_DEFAULT_URLS:
        # Standard provider - use default URL
        effective_base_url = PROVIDER_DEFAULT_URLS[provider_type]
    else:
        raise ValueError(
            f"Unsupported provider type '{provider_type}'. Must be one of: {list(PROVIDER_DEFAULT_URLS.keys())} or have a custom base_url"
        )

    # Prepare additional headers for specific providers
    headers = {}
    if provider_type == "anthropic":
        # Anthropic uses a different API version header
        headers["anthropic-version"] = "2023-06-01"

    fetcher = ModelFetcher(timeout=timeout)
    return fetcher.fetch_models(effective_base_url, api_key, headers, provider_type)
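A hedged usage sketch, assuming "openrouter" has an entry in PROVIDER_DEFAULT_URLS:

provider = {"type": "openrouter", "api_key": "${OPENROUTER_API_KEY}"}
models = fetch_provider_models(provider, timeout=10)
# -> e.g. [{"id": "openai/gpt-4o"}, ...]; entries carry only "id", as built above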


@@ -33,26 +33,15 @@ class PersistentConfig(BaseModel):
     description: str = ""


-class VolumeMount(BaseModel):
-    mountPath: str
-    description: str = ""
-
-
-class ImageInit(BaseModel):
-    pre_command: Optional[str] = None
-    command: str
-
-
 class Image(BaseModel):
     name: str
     description: str
     version: str
     maintainer: str
     image: str
-    init: Optional[ImageInit] = None
     environment: List[ImageEnvironmentVariable] = []
-    volumes: List[VolumeMount] = []
     persistent_configs: List[PersistentConfig] = []
+    environments_to_forward: List[str] = []


 class RemoteMCP(BaseModel):
@@ -82,7 +71,15 @@ class ProxyMCP(BaseModel):
     host_port: Optional[int] = None  # External port to bind the SSE port to on the host


-MCP = Union[RemoteMCP, DockerMCP, ProxyMCP]
+class LocalMCP(BaseModel):
+    name: str
+    type: str = "local"
+    command: str  # Path to executable
+    args: List[str] = Field(default_factory=list)  # Command arguments
+    env: Dict[str, str] = Field(default_factory=dict)  # Environment variables
+
+
+MCP = Union[RemoteMCP, DockerMCP, ProxyMCP, LocalMCP]


 class MCPContainer(BaseModel):
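A minimal sketch of the new LocalMCP model in use (values hypothetical):

mcp = LocalMCP(name="files", command="/usr/local/bin/files-mcp", args=["--root", "/app"])
print(mcp.model_dump())
# {'name': 'files', 'type': 'local', 'command': '/usr/local/bin/files-mcp',
#  'args': ['--root', '/app'], 'env': {}}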


@@ -2,7 +2,9 @@
 Session storage management for Cubbi Container Tool.
 """

+import fcntl
 import os
+from contextlib import contextmanager
 from pathlib import Path
 from typing import Dict, Optional
@@ -11,6 +13,31 @@ import yaml

 DEFAULT_SESSIONS_FILE = Path.home() / ".config" / "cubbi" / "sessions.yaml"

+
+@contextmanager
+def _file_lock(file_path: Path):
+    """Context manager for file locking.
+
+    Args:
+        file_path: Path to the file to lock
+
+    Yields:
+        File descriptor with exclusive lock
+    """
+    # Ensure the file exists
+    file_path.parent.mkdir(parents=True, exist_ok=True)
+    if not file_path.exists():
+        file_path.touch(mode=0o600)
+
+    # Open file and acquire exclusive lock
+    fd = open(file_path, "r+")
+    try:
+        fcntl.flock(fd.fileno(), fcntl.LOCK_EX)
+        yield fd
+    finally:
+        fcntl.flock(fd.fileno(), fcntl.LOCK_UN)
+        fd.close()
+

 class SessionManager:
     """Manager for container sessions."""
@@ -42,9 +69,26 @@ class SessionManager:
         return sessions

     def save(self) -> None:
-        """Save the sessions to file."""
-        with open(self.sessions_path, "w") as f:
-            yaml.safe_dump(self.sessions, f)
+        """Save the sessions to file.
+
+        Note: This method acquires a file lock and merges with existing data
+        to prevent concurrent write issues.
+        """
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Merge current in-memory sessions with disk state
+            sessions.update(self.sessions)
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions

     def add_session(self, session_id: str, session_data: dict) -> None:
         """Add a session to storage.
@@ -53,8 +97,21 @@ class SessionManager:
             session_id: The unique session ID
            session_data: The session data (Session model dump as dict)
         """
-        self.sessions[session_id] = session_data
-        self.save()
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Apply the modification
+            sessions[session_id] = session_data
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions

     def get_session(self, session_id: str) -> Optional[dict]:
         """Get a session by ID.
@@ -81,6 +138,19 @@ class SessionManager:
         Args:
             session_id: The session ID to remove
         """
-        if session_id in self.sessions:
-            del self.sessions[session_id]
-            self.save()
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Apply the modification
+            if session_id in sessions:
+                del sessions[session_id]
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions
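The three methods now share one read-modify-write shape; a standalone sketch of that pattern (demo path, not part of the diff):

import fcntl
from pathlib import Path

import yaml

path = Path("/tmp/sessions.yaml")
path.touch(exist_ok=True)
with open(path, "r+") as fd:
    fcntl.flock(fd.fileno(), fcntl.LOCK_EX)  # block until we hold the exclusive lock
    data = yaml.safe_load(fd) or {}          # reload the latest state from disk
    data["session-123"] = {"status": "running"}
    fd.seek(0)
    fd.truncate()                            # rewrite the file in place under the lock
    yaml.safe_dump(data, fd)
    fcntl.flock(fd.fileno(), fcntl.LOCK_UN)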


@@ -8,8 +8,28 @@ from typing import Any, Dict, List, Optional, Tuple

 import yaml

-# Define the environment variable mappings
-ENV_MAPPINGS = {
+# Define the environment variable mappings for auto-discovery
+STANDARD_PROVIDERS = {
+    "anthropic": {
+        "type": "anthropic",
+        "env_key": "ANTHROPIC_API_KEY",
+    },
+    "openai": {
+        "type": "openai",
+        "env_key": "OPENAI_API_KEY",
+    },
+    "google": {
+        "type": "google",
+        "env_key": "GOOGLE_API_KEY",
+    },
+    "openrouter": {
+        "type": "openrouter",
+        "env_key": "OPENROUTER_API_KEY",
+    },
+}
+
+# Legacy environment variable mappings (kept for backward compatibility)
+LEGACY_ENV_MAPPINGS = {
     "services.langfuse.url": "LANGFUSE_URL",
     "services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
     "services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
@@ -44,6 +64,10 @@ class UserConfigManager:
             self.config_path.parent.mkdir(parents=True, exist_ok=True)

             # Create default config
             default_config = self._get_default_config()

+            # Auto-discover and add providers from environment for new configs
+            self._auto_discover_providers(default_config)
+
             # Save to file
             with open(self.config_path, "w") as f:
                 yaml.safe_dump(default_config, f)
@@ -85,7 +109,12 @@ class UserConfigManager:
                 config = {}

         # Merge with defaults for any missing fields
-        return self._merge_with_defaults(config)
+        config = self._merge_with_defaults(config)
+
+        # Auto-discover and add providers from environment
+        self._auto_discover_providers(config)
+
+        return config

     def _get_default_config(self) -> Dict[str, Any]:
         """Get the default configuration."""
@@ -98,15 +127,11 @@ class UserConfigManager:
                 "volumes": [],  # Default volumes to mount, format: "source:dest"
                 "ports": [],  # Default ports to forward, format: list of integers
                 "mcps": [],  # Default MCP servers to connect to
-                "model": "claude-3-5-sonnet-latest",  # Default LLM model to use
-                "provider": "anthropic",  # Default LLM provider to use
+                "model": "anthropic/claude-3-5-sonnet-latest",  # Default LLM model (provider/model format)
             },
+            "providers": {},  # LLM providers configuration
             "services": {
-                "langfuse": {},
-                "openai": {},
-                "anthropic": {},
-                "openrouter": {},
-                "google": {},
+                "langfuse": {},  # Keep langfuse in services as it's not an LLM provider
             },
             "docker": {
                 "network": "cubbi-network",
@@ -148,7 +173,7 @@ class UserConfigManager:
             and not key_path.startswith("services.")
             and not any(
                 key_path.startswith(section + ".")
-                for section in ["defaults", "docker", "remote", "ui"]
+                for section in ["defaults", "docker", "remote", "ui", "providers"]
             )
         ):
             service, setting = key_path.split(".", 1)
@@ -177,7 +202,7 @@ class UserConfigManager:
             and not key_path.startswith("services.")
             and not any(
                 key_path.startswith(section + ".")
-                for section in ["defaults", "docker", "remote", "ui"]
+                for section in ["defaults", "docker", "remote", "ui", "providers"]
             )
         ):
             service, setting = key_path.split(".", 1)
@@ -247,13 +272,22 @@ class UserConfigManager:
def get_environment_variables(self) -> Dict[str, str]: def get_environment_variables(self) -> Dict[str, str]:
"""Get environment variables from the configuration. """Get environment variables from the configuration.
NOTE: API keys are now handled by cubbi_init plugins, not passed from host.
Returns: Returns:
A dictionary of environment variables to set in the container. A dictionary of environment variables to set in the container.
""" """
env_vars = {} env_vars = {}
# Process the service configurations and map to environment variables # Process the legacy service configurations and map to environment variables
for config_path, env_var in ENV_MAPPINGS.items(): # BUT EXCLUDE API KEYS - they're now handled by cubbi_init
for config_path, env_var in LEGACY_ENV_MAPPINGS.items():
# Skip API key environment variables - let cubbi_init handle them
if any(
key_word in env_var.upper() for key_word in ["API_KEY", "SECRET_KEY"]
):
continue
value = self.get(config_path) value = self.get(config_path)
if value: if value:
# Handle environment variable references # Handle environment variable references
@@ -267,6 +301,68 @@ class UserConfigManager:
env_vars[env_var] = str(value) env_vars[env_var] = str(value)
# NOTE: Provider API keys are no longer passed as environment variables
# They are now handled by cubbi_init plugins based on selected model
# This prevents unused API keys from being exposed in containers
return env_vars
def get_provider_environment_variables(self, provider_name: str) -> Dict[str, str]:
"""Get environment variables for a specific provider.
Args:
provider_name: Name of the provider to get environment variables for
Returns:
Dictionary of environment variables for the provider
"""
env_vars = {}
provider_config = self.get_provider(provider_name)
if not provider_config:
return env_vars
provider_type = provider_config.get("type", provider_name)
api_key = provider_config.get("api_key", "")
base_url = provider_config.get("base_url")
# Resolve environment variable references
if api_key.startswith("${") and api_key.endswith("}"):
env_var_name = api_key[2:-1]
resolved_api_key = os.environ.get(env_var_name, "")
else:
resolved_api_key = api_key
if not resolved_api_key:
return env_vars
# Add environment variables based on provider type
if provider_type == "anthropic":
env_vars["ANTHROPIC_API_KEY"] = resolved_api_key
elif provider_type == "openai":
env_vars["OPENAI_API_KEY"] = resolved_api_key
if base_url:
env_vars["OPENAI_URL"] = base_url
elif provider_type == "google":
env_vars["GOOGLE_API_KEY"] = resolved_api_key
elif provider_type == "openrouter":
env_vars["OPENROUTER_API_KEY"] = resolved_api_key
return env_vars
def get_all_providers_environment_variables(self) -> Dict[str, str]:
"""Get environment variables for all configured providers.
Returns:
Dictionary of all provider environment variables
"""
env_vars = {}
providers = self.get("providers", {})
for provider_name in providers.keys():
provider_env = self.get_provider_environment_variables(provider_name)
env_vars.update(provider_env)
return env_vars
def list_config(self) -> List[Tuple[str, Any]]:
@@ -295,3 +391,384 @@ class UserConfigManager:
_flatten_dict(self.config)
return sorted(result)
def _auto_discover_providers(self, config: Dict[str, Any]) -> None:
"""Auto-discover providers from environment variables."""
if "providers" not in config:
config["providers"] = {}
for provider_name, provider_info in STANDARD_PROVIDERS.items():
# Skip if provider already configured
if provider_name in config["providers"]:
continue
# Check if environment variable exists
api_key = os.environ.get(provider_info["env_key"])
if api_key:
config["providers"][provider_name] = {
"type": provider_info["type"],
"api_key": f"${{{provider_info['env_key']}}}", # Reference to env var
}
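# Worked example (hypothetical host state): with OPENROUTER_API_KEY exported and
# no "openrouter" entry under providers, auto-discovery adds the equivalent of:
#   config["providers"]["openrouter"] = {
#       "type": "openrouter",
#       "api_key": "${OPENROUTER_API_KEY}",  # a reference, not the raw key
#   }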
def get_provider(self, provider_name: str) -> Optional[Dict[str, Any]]:
"""Get a provider configuration by name."""
return self.get(f"providers.{provider_name}")
def list_providers(self) -> Dict[str, Dict[str, Any]]:
"""Get all configured providers."""
return self.get("providers", {})
def add_provider(
self,
name: str,
provider_type: str,
api_key: str,
base_url: Optional[str] = None,
env_key: Optional[str] = None,
) -> None:
"""Add a new provider configuration.
Args:
name: Provider name/identifier
provider_type: Type of provider (anthropic, openai, etc.)
api_key: API key value or environment variable reference
base_url: Custom base URL for API calls (optional)
env_key: If provided, use env reference instead of direct api_key
"""
provider_config = {
"type": provider_type,
"api_key": f"${{{env_key}}}" if env_key else api_key,
}
if base_url:
provider_config["base_url"] = base_url
self.set(f"providers.{name}", provider_config)
def remove_provider(self, name: str) -> bool:
"""Remove a provider configuration.
Returns:
True if provider was removed, False if it didn't exist
"""
providers = self.get("providers", {})
if name in providers:
del providers[name]
self.set("providers", providers)
return True
return False
def resolve_model(self, model_spec: str) -> Optional[Dict[str, Any]]:
"""Resolve a model specification (provider/model) to provider config.
Args:
model_spec: Model specification in format "provider/model"
Returns:
Dictionary with resolved provider config and model name
"""
if "/" not in model_spec:
# Legacy format - try to use as provider name with empty model
provider_name = model_spec
model_name = ""
else:
provider_name, model_name = model_spec.split("/", 1)
provider_config = self.get_provider(provider_name)
if not provider_config:
return None
# Resolve environment variable references in API key
api_key = provider_config.get("api_key", "")
if api_key.startswith("${") and api_key.endswith("}"):
env_var_name = api_key[2:-1]
resolved_api_key = os.environ.get(env_var_name, "")
else:
resolved_api_key = api_key
return {
"provider_name": provider_name,
"provider_type": provider_config.get("type", provider_name),
"model_name": model_name,
"api_key": resolved_api_key,
"base_url": provider_config.get("base_url"),
}
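# Worked example (assumed config): with providers.anthropic = {"type": "anthropic",
# "api_key": "${ANTHROPIC_API_KEY}"}, resolve_model("anthropic/claude-3-5-sonnet-latest")
# returns:
#   {
#       "provider_name": "anthropic",
#       "provider_type": "anthropic",
#       "model_name": "claude-3-5-sonnet-latest",
#       "api_key": os.environ.get("ANTHROPIC_API_KEY", ""),
#       "base_url": None,
#   }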
# Resource management methods
def list_mcps(self) -> List[str]:
"""Get all configured default MCP servers."""
return self.get("defaults.mcps", [])
def add_mcp(self, name: str) -> None:
"""Add a new default MCP server."""
mcps = self.list_mcps()
if name not in mcps:
mcps.append(name)
self.set("defaults.mcps", mcps)
def remove_mcp(self, name: str) -> bool:
"""Remove a default MCP server.
Returns:
True if MCP was removed, False if it didn't exist
"""
mcps = self.list_mcps()
if name in mcps:
mcps.remove(name)
self.set("defaults.mcps", mcps)
return True
return False
def list_mcp_configurations(self) -> List[Dict[str, Any]]:
"""Get all configured MCP server configurations."""
return self.get("mcps", [])
def get_mcp_configuration(self, name: str) -> Optional[Dict[str, Any]]:
"""Get an MCP configuration by name."""
mcps = self.list_mcp_configurations()
for mcp in mcps:
if mcp.get("name") == name:
return mcp
return None
def add_mcp_configuration(self, mcp_config: Dict[str, Any]) -> None:
"""Add a new MCP server configuration."""
mcps = self.list_mcp_configurations()
# Remove existing MCP with the same name if it exists
mcps = [mcp for mcp in mcps if mcp.get("name") != mcp_config.get("name")]
# Add the new MCP
mcps.append(mcp_config)
# Save the configuration
self.set("mcps", mcps)
def remove_mcp_configuration(self, name: str) -> bool:
"""Remove an MCP server configuration.
Returns:
True if MCP was removed, False if it didn't exist
"""
mcps = self.list_mcp_configurations()
original_length = len(mcps)
# Filter out the MCP with the specified name
mcps = [mcp for mcp in mcps if mcp.get("name") != name]
if len(mcps) < original_length:
self.set("mcps", mcps)
# Also remove from defaults if it's there
self.remove_mcp(name)
return True
return False
def list_networks(self) -> List[str]:
"""Get all configured default networks."""
return self.get("defaults.networks", [])
def add_network(self, name: str) -> None:
"""Add a new default network."""
networks = self.list_networks()
if name not in networks:
networks.append(name)
self.set("defaults.networks", networks)
def remove_network(self, name: str) -> bool:
"""Remove a default network.
Returns:
True if network was removed, False if it didn't exist
"""
networks = self.list_networks()
if name in networks:
networks.remove(name)
self.set("defaults.networks", networks)
return True
return False
def list_volumes(self) -> List[str]:
"""Get all configured default volumes."""
return self.get("defaults.volumes", [])
def add_volume(self, volume: str) -> None:
"""Add a new default volume mapping."""
volumes = self.list_volumes()
if volume not in volumes:
volumes.append(volume)
self.set("defaults.volumes", volumes)
def remove_volume(self, volume: str) -> bool:
"""Remove a default volume mapping.
Returns:
True if volume was removed, False if it didn't exist
"""
volumes = self.list_volumes()
if volume in volumes:
volumes.remove(volume)
self.set("defaults.volumes", volumes)
return True
return False
def list_ports(self) -> List[int]:
"""Get all configured default ports."""
return self.get("defaults.ports", [])
def add_port(self, port: int) -> None:
"""Add a new default port."""
ports = self.list_ports()
if port not in ports:
ports.append(port)
self.set("defaults.ports", ports)
def remove_port(self, port: int) -> bool:
"""Remove a default port.
Returns:
True if port was removed, False if it didn't exist
"""
ports = self.list_ports()
if port in ports:
ports.remove(port)
self.set("defaults.ports", ports)
return True
return False
# Model management methods
def list_provider_models(self, provider_name: str) -> List[Dict[str, str]]:
"""Get all models for a specific provider.
Args:
provider_name: Name of the provider
Returns:
List of model dictionaries with 'id' and 'name' keys
"""
provider_config = self.get_provider(provider_name)
if not provider_config:
return []
models = provider_config.get("models", [])
normalized_models = []
for model in models:
if isinstance(model, str):
normalized_models.append({"id": model})
elif isinstance(model, dict):
model_id = model.get("id", "")
if model_id:
normalized_models.append({"id": model_id})
return normalized_models
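# Worked example (illustrative): a "models" list stored as
#   ["gpt-4o", {"id": "o3"}, {"name": "missing-id"}]
# normalizes to [{"id": "gpt-4o"}, {"id": "o3"}] - string entries are wrapped,
# dict entries keep their "id", and entries without an "id" are dropped.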
def set_provider_models(
self, provider_name: str, models: List[Dict[str, str]]
) -> None:
"""Set the models for a specific provider.
Args:
provider_name: Name of the provider
models: List of model dictionaries with 'id' and optional 'name' keys
"""
provider_config = self.get_provider(provider_name)
if not provider_config:
return
# Normalize models - ensure each has id, name defaults to id
normalized_models = []
for model in models:
if isinstance(model, dict) and "id" in model:
normalized_model = {
"id": model["id"],
}
normalized_models.append(normalized_model)
provider_config["models"] = normalized_models
self.set(f"providers.{provider_name}", provider_config)
def add_provider_model(
self, provider_name: str, model_id: str, model_name: Optional[str] = None
) -> None:
"""Add a model to a provider.
Args:
provider_name: Name of the provider
model_id: ID of the model
model_name: Optional display name for the model (defaults to model_id)
"""
models = self.list_provider_models(provider_name)
for existing_model in models:
if existing_model["id"] == model_id:
return
new_model = {"id": model_id}
models.append(new_model)
self.set_provider_models(provider_name, models)
def remove_provider_model(self, provider_name: str, model_id: str) -> bool:
"""Remove a model from a provider.
Args:
provider_name: Name of the provider
model_id: ID of the model to remove
Returns:
True if model was removed, False if it didn't exist
"""
models = self.list_provider_models(provider_name)
original_length = len(models)
# Filter out the model with the specified ID
models = [model for model in models if model["id"] != model_id]
if len(models) < original_length:
self.set_provider_models(provider_name, models)
return True
return False
def is_provider_openai_compatible(self, provider_name: str) -> bool:
provider_config = self.get_provider(provider_name)
if not provider_config:
return False
provider_type = provider_config.get("type", "")
return provider_type == "openai" and provider_config.get("base_url") is not None
def supports_model_fetching(self, provider_name: str) -> bool:
"""Check if a provider supports model fetching via API."""
from .config import PROVIDER_DEFAULT_URLS
provider = self.get_provider(provider_name)
if not provider:
return False
provider_type = provider.get("type")
base_url = provider.get("base_url")
# Provider supports model fetching if:
# 1. It has a custom base_url (OpenAI-compatible), OR
# 2. It's a standard provider type that we support
return base_url is not None or provider_type in PROVIDER_DEFAULT_URLS
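# Illustrative consequence: a provider {"type": "anthropic"} with no base_url is
# fetchable because "anthropic" has a default API URL in PROVIDER_DEFAULT_URLS,
# and any provider with an explicit base_url is fetchable via that endpoint.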
def list_openai_compatible_providers(self) -> List[str]:
providers = self.list_providers()
compatible_providers = []
for provider_name in providers.keys():
if self.is_provider_openai_compatible(provider_name):
compatible_providers.append(provider_name)
return compatible_providers
def list_model_fetchable_providers(self) -> List[str]:
"""List all providers that support model fetching."""
providers = self.list_providers()
fetchable_providers = []
for provider_name in providers.keys():
if self.supports_model_fetching(provider_name):
fetchable_providers.append(provider_name)
return fetchable_providers
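Taken together, the methods above form a small provider API. The sketch below shows one end-to-end flow; the `cubbi.user_config` import path, the no-argument constructor, and the local proxy URL are assumptions for illustration, not part of the diff:

```python
import os

from cubbi.user_config import UserConfigManager  # assumed import path

manager = UserConfigManager()  # assumed no-argument constructor

# Register an OpenAI-compatible provider whose key is an env-var reference;
# add_provider stores "${LITELLM_API_KEY}" rather than a raw secret.
manager.add_provider(
    name="litellm",
    provider_type="openai",
    api_key="",                        # ignored when env_key is supplied
    base_url="http://localhost:4000",  # hypothetical proxy endpoint
    env_key="LITELLM_API_KEY",
)

os.environ["LITELLM_API_KEY"] = "sk-example"  # placeholder key for the demo

# resolve_model() splits "provider/model" on the first "/" and resolves the
# ${...} reference against the host environment.
resolved = manager.resolve_model("litellm/gpt-oss:120b")
print(resolved)
# {'provider_name': 'litellm', 'provider_type': 'openai',
#  'model_name': 'gpt-oss:120b', 'api_key': 'sk-example',
#  'base_url': 'http://localhost:4000'}

# An "openai"-typed provider yields OPENAI_API_KEY, plus OPENAI_URL when a
# base_url is configured.
print(manager.get_provider_environment_variables("litellm"))
# {'OPENAI_API_KEY': 'sk-example', 'OPENAI_URL': 'http://localhost:4000'}
```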

pyproject.toml

@@ -1,6 +1,6 @@
[project]
name = "cubbi"
-version = "0.4.0"
+version = "0.5.0"
description = "Cubbi Container Tool"
readme = "README.md"
requires-python = ">=3.12"
@@ -14,6 +14,8 @@ dependencies = [
"pyyaml>=6.0.1",
"rich>=13.6.0",
"pydantic>=2.5.0",
+"questionary>=2.0.0",
+"requests>=2.32.3",
]
classifiers = [
"Development Status :: 3 - Alpha",
@@ -45,6 +47,13 @@ cubbix = "cubbi.cli:session_create_entry_point"
line-length = 88
target-version = "py312"
+[tool.pytest.ini_options]
+# Exclude integration tests by default
+addopts = "-v --tb=short -m 'not integration'"
+markers = [
+"integration: marks tests as integration tests (deselected by default)",
+]
[tool.mypy]
python_version = "3.12"
warn_return_any = true


@@ -0,0 +1,83 @@
# Integration Tests
This directory contains integration tests for cubbi images with different model combinations.
## Test Matrix
The integration tests cover:
- **5 Images**: goose, aider, claudecode, opencode, crush
- **4 Models**: anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b
- **Total**: 20 image/model combinations + additional tests
## Running Tests
### Default (Skip Integration)
```bash
# Regular tests only (integration tests excluded by default)
uv run -m pytest
# Specific test file (excluding integration)
uv run -m pytest tests/test_cli.py
```
### Integration Tests Only
```bash
# Run all integration tests (20 combinations + helpers)
uv run -m pytest -m integration
# Run specific image with all models
uv run -m pytest -m integration -k "goose"
# Run specific model with all images
uv run -m pytest -m integration -k "anthropic"
# Run single combination
uv run -m pytest -m integration -k "goose and anthropic"
# Verbose output with timing
uv run -m pytest -m integration -v -s
```
### Combined Tests
```bash
# Run both regular and integration tests
uv run -m pytest -m "not slow" # or remove the default marker exclusion
```
## Test Structure
### `test_image_model_combination`
- Parametrized test with all image/model combinations
- Tests single prompt/response functionality
- Uses appropriate command syntax for each tool
- Verifies successful completion and basic output
### `test_image_help_command`
- Tests help command for each image
- Ensures basic functionality works
### `test_all_images_available`
- Verifies all required images are built and available
## Command Templates
Each image uses its specific command syntax:
- **goose**: `goose run -t 'prompt' --no-session --quiet`
- **aider**: `aider --message 'prompt' --yes-always --no-fancy-input --no-check-update --no-auto-commits`
- **claudecode**: `claude -p 'prompt'`
- **opencode**: `opencode run -m MODEL 'prompt'`
- **crush**: `crush run -q 'prompt'`
## Expected Results
All tests should pass when:
1. Images are built (`uv run -m cubbi.cli image build [IMAGE]`)
2. API keys are configured (`uv run -m cubbi.cli configure`)
3. Models are accessible and working
## Debugging Failed Tests
If tests fail, check:
1. Image availability: `uv run -m cubbi.cli image list`
2. Configuration: `uv run -m cubbi.cli config list`
3. Manual test: `uv run -m cubbi.cli session create -i IMAGE -m MODEL --run "COMMAND"`

tests/test_integration.py Normal file

@@ -0,0 +1,135 @@
"""Integration tests for cubbi images with different model combinations."""
import subprocess
import pytest
from typing import Dict
IMAGES = ["goose", "aider", "opencode", "crush"]
MODELS = [
"anthropic/claude-sonnet-4-20250514",
"openai/gpt-4o",
"openrouter/openai/gpt-4o",
"litellm/gpt-oss:120b",
]
# Command templates for each tool (based on research)
COMMANDS: Dict[str, str] = {
"goose": "goose run -t '{prompt}' --no-session --quiet",
"aider": "aider --message '{prompt}' --yes-always --no-fancy-input --no-check-update --no-auto-commits",
"opencode": "opencode run '{prompt}'",
"crush": "crush run -q '{prompt}'",
}
def run_cubbi_command(
image: str, model: str, command: str, timeout: int = 20
) -> subprocess.CompletedProcess:
"""Run a cubbi command with specified image, model, and command."""
full_command = [
"uv",
"run",
"-m",
"cubbi.cli",
"session",
"create",
"-i",
image,
"-m",
model,
"--no-connect",
"--no-shell",
"--run",
command,
]
return subprocess.run(
full_command,
capture_output=True,
text=True,
timeout=timeout,
cwd="/home/tito/code/monadical/cubbi",
)
def is_successful_response(result: subprocess.CompletedProcess) -> bool:
"""Check if the cubbi command completed successfully."""
# Check for successful completion markers
return (
result.returncode == 0
and "Initial command finished (exit code: 0)" in result.stdout
and "Command execution complete" in result.stdout
)
@pytest.mark.integration
@pytest.mark.parametrize("image", IMAGES)
@pytest.mark.parametrize("model", MODELS)
def test_image_model_combination(image: str, model: str):
"""Test each image with each model using appropriate command syntax."""
prompt = "What is 2+2?"
# Get the command template for this image
command_template = COMMANDS[image]
# For opencode, we need to substitute the model in the command
if image == "opencode":
command = command_template.format(prompt=prompt, model=model)
else:
command = command_template.format(prompt=prompt)
# Run the test with timeout handling
try:
result = run_cubbi_command(image, model, command)
except subprocess.TimeoutExpired:
pytest.fail(f"Test timed out after 20s for {image} with {model}")
# Check if the command was successful
assert is_successful_response(result), (
f"Failed to run {image} with {model}. "
f"Return code: {result.returncode}\n"
f"Stdout: {result.stdout}\n"
f"Stderr: {result.stderr}"
)
@pytest.mark.integration
def test_all_images_available():
"""Test that all required images are available for testing."""
# Run image list command
result = subprocess.run(
["uv", "run", "-m", "cubbi.cli", "image", "list"],
capture_output=True,
text=True,
timeout=30,
cwd="/home/tito/code/monadical/cubbi",
)
assert result.returncode == 0, f"Failed to list images: {result.stderr}"
for image in IMAGES:
assert image in result.stdout, f"Image {image} not found in available images"
@pytest.mark.integration
def test_claudecode():
"""Test Claude Code without model preselection since it only supports Anthropic."""
command = "claude -p hello"
try:
result = run_cubbi_command("claudecode", MODELS[0], command, timeout=20)
except subprocess.TimeoutExpired:
pytest.fail("Claude Code help command timed out after 20s")
assert is_successful_response(result), (
f"Failed to run Claude Code help command. "
f"Return code: {result.returncode}\n"
f"Stdout: {result.stdout}\n"
f"Stderr: {result.stderr}"
)
if __name__ == "__main__":
# Allow running the test file directly for development
pytest.main([__file__, "-v", "-m", "integration"])


@@ -24,20 +24,6 @@ def execute_command_in_container(container_id, command):
def wait_for_container_init(container_id, timeout=5.0, poll_interval=0.1):
-"""
-Wait for a Cubbi container to complete initialization by polling /cubbi/init.status.
-Args:
-container_id: Docker container ID
-timeout: Maximum time to wait in seconds (default: 5.0)
-poll_interval: Time between polls in seconds (default: 0.1)
-Returns:
-bool: True if initialization completed, False if timed out
-Raises:
-subprocess.CalledProcessError: If docker exec command fails
-"""
start_time = time.time()
while time.time() - start_time < timeout:
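Since the removed docstring was the only prose description of the polling contract, here is a minimal standalone sketch of that pattern; the "done" marker and the plain `docker exec` call are assumptions, as the real status format is defined by cubbi_init:

```python
import subprocess
import time


def wait_for_init(container_id: str, timeout: float = 5.0, poll_interval: float = 0.1) -> bool:
    """Poll /cubbi/init.status inside a container until init completes or we time out."""
    start_time = time.time()
    while time.time() - start_time < timeout:
        result = subprocess.run(
            ["docker", "exec", container_id, "cat", "/cubbi/init.status"],
            capture_output=True,
            text=True,
        )
        # "done" is a placeholder marker; the real file contents are an
        # implementation detail of cubbi_init.
        if result.returncode == 0 and "done" in result.stdout.lower():
            return True
        time.sleep(poll_interval)
    return False
```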

uv.lock generated

@@ -78,12 +78,14 @@ wheels = [
[[package]]
name = "cubbi"
-version = "0.4.0"
+version = "0.5.0"
source = { editable = "." }
dependencies = [
{ name = "docker" },
{ name = "pydantic" },
{ name = "pyyaml" },
+{ name = "questionary" },
+{ name = "requests" },
{ name = "rich" },
{ name = "typer" },
]
@@ -107,6 +109,8 @@ requires-dist = [
{ name = "pydantic", specifier = ">=2.5.0" }, { name = "pydantic", specifier = ">=2.5.0" },
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=7.4.0" }, { name = "pytest", marker = "extra == 'dev'", specifier = ">=7.4.0" },
{ name = "pyyaml", specifier = ">=6.0.1" }, { name = "pyyaml", specifier = ">=6.0.1" },
{ name = "questionary", specifier = ">=2.0.0" },
{ name = "requests", specifier = ">=2.32.3" },
{ name = "rich", specifier = ">=13.6.0" }, { name = "rich", specifier = ">=13.6.0" },
{ name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.9" }, { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.9" },
{ name = "typer", specifier = ">=0.9.0" }, { name = "typer", specifier = ">=0.9.0" },
@@ -221,6 +225,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669", size = 20556, upload-time = "2024-04-20T21:34:40.434Z" }, { url = "https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669", size = 20556, upload-time = "2024-04-20T21:34:40.434Z" },
] ]
[[package]]
name = "prompt-toolkit"
version = "3.0.51"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "wcwidth" },
]
sdist = { url = "https://files.pythonhosted.org/packages/bb/6e/9d084c929dfe9e3bfe0c6a47e31f78a25c54627d64a66e884a8bf5474f1c/prompt_toolkit-3.0.51.tar.gz", hash = "sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed", size = 428940, upload-time = "2025-04-15T09:18:47.731Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ce/4f/5249960887b1fbe561d9ff265496d170b55a735b76724f10ef19f9e40716/prompt_toolkit-3.0.51-py3-none-any.whl", hash = "sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07", size = 387810, upload-time = "2025-04-15T09:18:44.753Z" },
]
[[package]]
name = "pydantic"
version = "2.10.6"
@@ -337,6 +353,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446, upload-time = "2024-08-06T20:33:04.33Z" }, { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446, upload-time = "2024-08-06T20:33:04.33Z" },
] ]
[[package]]
name = "questionary"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "prompt-toolkit" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a8/b8/d16eb579277f3de9e56e5ad25280fab52fc5774117fb70362e8c2e016559/questionary-2.1.0.tar.gz", hash = "sha256:6302cdd645b19667d8f6e6634774e9538bfcd1aad9be287e743d96cacaf95587", size = 26775, upload-time = "2024-12-29T11:49:17.802Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ad/3f/11dd4cd4f39e05128bfd20138faea57bec56f9ffba6185d276e3107ba5b2/questionary-2.1.0-py3-none-any.whl", hash = "sha256:44174d237b68bc828e4878c763a9ad6790ee61990e0ae72927694ead57bab8ec", size = 36747, upload-time = "2024-12-29T11:49:16.734Z" },
]
[[package]]
name = "requests"
version = "2.32.3"
@@ -431,3 +459,12 @@ sdist = { url = "https://files.pythonhosted.org/packages/aa/63/e53da845320b757bf
wheels = [
{ url = "https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df", size = 128369, upload-time = "2024-12-22T07:47:28.074Z" },
]
[[package]]
name = "wcwidth"
version = "0.2.13"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/6c/63/53559446a878410fc5a5974feb13d31d78d752eb18aeba59c7fef1af7598/wcwidth-0.2.13.tar.gz", hash = "sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5", size = 101301, upload-time = "2024-01-06T02:10:57.829Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fd/84/fd2ba7aafacbad3c4201d395674fc6348826569da3c0937e75505ead3528/wcwidth-0.2.13-py2.py3-none-any.whl", hash = "sha256:3da69048e4540d84af32131829ff948f1e022c1c6bdb8d6102117aac784f6859", size = 34166, upload-time = "2024-01-06T02:10:55.763Z" },
]