Mirror of https://github.com/Monadical-SAS/cubbi.git (synced 2025-12-21 12:49:07 +00:00)

Compare commits: feature/ge ... v0.5.0 (34 commits)
Commits in this range:

21cb53b597, 10d9e9d3ab, b788f3f52e, 3795de1484, be171cf2c6, b9cffe3008, a66843714d, 407c1a1c9b, fc819a3861, 7d6bc5dbfa, 310149dc34, 3a7b9213b0, a709071d10, bae951cf7c, e4c64a54ed, b7b78ea075, de1b3c0976, 75c9849315, 9dc11582a2, 472f030924, b8ecad6227, fd23e12ff8, 2eb15a31f8, afae8a13e1, d41faf6b30, 672b8a8e31, da5937e708, 4958b07401, 4c4e207b67, dba7a7c1ef, 9c8ddbb3f3, d750e64608, fc0d6b51af, b28c2bd63e
.github/workflows/conventional_commit_pr.yml (vendored, 17 lines changed)
```diff
@@ -1,17 +0,0 @@
-name: Conventional commit PR
-
-on: [pull_request]
-
-jobs:
-  cog_check_job:
-    runs-on: ubuntu-latest
-    name: check conventional commit compliance
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-          # pick the pr HEAD instead of the merge commit
-          ref: ${{ github.event.pull_request.head.sha }}
-
-      - name: Conventional commit check
-        uses: cocogitto/cocogitto-action@v3
```
.github/workflows/pytests.yml (vendored, 2 lines changed)
```diff
@@ -34,7 +34,7 @@ jobs:
       run: |
         uv tool install --with-editable . .
         cubbi image build goose
-        cubbi image build gemini-cli
+        cubbi image build aider

     - name: Tests
       run: |
```
CHANGELOG.md (469 lines changed)
@@ -1,6 +1,475 @@

# CHANGELOG


## v0.5.0 (2025-12-15)

### Bug Fixes

- Crush providers configuration ([#30](https://github.com/Monadical-SAS/cubbi/pull/30),
  [`a709071`](https://github.com/Monadical-SAS/cubbi/commit/a709071d1008d7b805da86d82fb056e144a328fd))

- `cubbi configure` not working when configuring another provider
  ([#32](https://github.com/Monadical-SAS/cubbi/pull/32),
  [`310149d`](https://github.com/Monadical-SAS/cubbi/commit/310149dc34bfd41237ee92ff42620bf3f4316634))

- Ensure Docker containers are always removed when closing sessions
  ([#35](https://github.com/Monadical-SAS/cubbi/pull/35),
  [`b788f3f`](https://github.com/Monadical-SAS/cubbi/commit/b788f3f52e6f85fd99e1dd117565850dbe13332b))

  When closing sessions with already-stopped containers, the stop/kill operation would raise an
  exception, preventing `container.remove()` from being called. This left stopped containers in
  Docker even though they had been removed from cubbi's session tracking.

  The fix wraps the stop/kill operations in their own try-except block, so the code always reaches
  `container.remove()` regardless of whether the container was already stopped.
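The pattern behind that fix can be sketched as follows. This is a minimal illustration, not cubbi's actual code: `close_container` is a hypothetical helper written against the docker-py `stop()`/`kill()`/`remove()` interface.

```python
import logging

logger = logging.getLogger(__name__)


def close_container(container, kill: bool = False) -> None:
    """Stop (or kill) a container, then always attempt removal.

    Stopping an already-stopped container raises (docker.errors.APIError in
    docker-py); if that exception escapes, remove() is never called and the
    container lingers in Docker while cubbi forgets about it.
    """
    try:
        if kill:
            container.kill()
        else:
            container.stop()
    except Exception as exc:
        logger.debug("stop/kill failed, container may already be stopped: %s", exc)
    # Reached even when stop/kill raised:
    container.remove()
```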
- Make groupadd optional (the group may already exist, like gid 20 on macOS)
  ([`407c1a1`](https://github.com/Monadical-SAS/cubbi/commit/407c1a1c9bc85e06600c762c78905d1bfdf89922))

- Prevent concurrent YAML corruption in sessions
  ([#36](https://github.com/Monadical-SAS/cubbi/pull/36),
  [`10d9e9d`](https://github.com/Monadical-SAS/cubbi/commit/10d9e9d3abc135718be667adc574a7b3f8470ff7))

  fix: add file locking to prevent concurrent YAML corruption in sessions

  When multiple cubbi instances run simultaneously, they can corrupt the sessions.yaml file due to
  concurrent writes. This manifests as malformed YAML entries (e.g., "status: running\ning2dc3ff11:").

  This commit adds:
  - fcntl-based file locking for all write operations
  - a read-modify-write pattern that reloads from disk before each write
  - proper lock acquisition/release via a context manager

  All write operations (add_session, remove_session, save) now:
  1. acquire an exclusive lock on sessions.yaml
  2. reload the latest state from disk
  3. apply the modifications
  4. write atomically to the file
  5. update the in-memory cache
  6. release the lock

  This ensures that concurrent cubbi instances can safely modify the sessions file without corruption.
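A minimal sketch of that locking scheme, assuming PyYAML for serialization; `update_sessions` and the sidecar lock-file name are illustrative, not cubbi's actual API:

```python
import fcntl
import os
import tempfile

import yaml  # PyYAML, as used for sessions.yaml


def update_sessions(path: str, mutate) -> dict:
    """Read-modify-write sessions.yaml under an exclusive lock.

    The flock serializes writers; reloading from disk before mutating keeps a
    concurrent instance's changes; the temp file + os.replace makes the final
    write atomic. Illustrative sketch, not cubbi's actual session manager.
    """
    lock_path = path + ".lock"
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)      # 1. acquire exclusive lock
        try:
            sessions: dict = {}
            if os.path.exists(path):
                with open(path) as f:              # 2. reload latest state from disk
                    sessions = yaml.safe_load(f) or {}
            mutate(sessions)                       # 3. apply modifications
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            with os.fdopen(fd, "w") as f:
                yaml.safe_dump(sessions, f)
            os.replace(tmp, path)                  # 4. write atomically
            return sessions                        # 5. caller updates its in-memory cache
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)  # 6. release lock
```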
- Remove container even if already removed
  ([`a668437`](https://github.com/Monadical-SAS/cubbi/commit/a66843714d01d163e2ce17dd4399a0fa64d2be65))

- Remove persistent_configs of images ([#28](https://github.com/Monadical-SAS/cubbi/pull/28),
  [`e4c64a5`](https://github.com/Monadical-SAS/cubbi/commit/e4c64a54ed39ba0a65ace75c7f03ff287073e71e))

### Documentation

- Update README with --no-cache and local MCP server documentation
  ([`3795de1`](https://github.com/Monadical-SAS/cubbi/commit/3795de1484e1df3905c8eb90908ab79927b03194))

  - Added documentation for the new --no-cache flag of the image build command
  - Added documentation for local MCP server support (the add-local command)
  - Updated the MCP server types to include local MCP servers
  - Added examples for all three types of MCP servers (Docker, Remote, Local)

### Features

- Add --no-cache option to image build command
  ([`be171cf`](https://github.com/Monadical-SAS/cubbi/commit/be171cf2c6252dfa926a759915a057a3a6791cc2))

  Added a --no-cache flag to the 'cubbi image build' command to allow building Docker images
  without using the build cache, useful for forcing fresh builds.

- Add local MCP server support
  ([`b9cffe3`](https://github.com/Monadical-SAS/cubbi/commit/b9cffe3008bccbcf4eaa7c5c03e62215520d8627))

  - Add a LocalMCP model for stdio-based MCP servers
  - Implement an add_local_mcp() method in MCPManager
  - Add an 'mcp add-local' CLI command with args and env support
  - Update the cubbi_init.py MCPConfig with command, args, env fields
  - Add local MCP support in the interactive configure tool
  - Update the image plugins (opencode, goose, crush) to handle local MCPs:
    - OpenCode: maps to the "local" type with a command array
    - Goose: maps to the "stdio" type with command/args
    - Crush: maps to the "stdio" transport type

  Local MCPs run as stdio-based commands inside containers, allowing users to integrate local
  MCP servers without containerization.
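The per-tool mapping above can be illustrated like this. The dataclass fields and output key names are assumptions based only on the type names mentioned ("local" for OpenCode, "stdio" for Goose and Crush), not the plugins' exact schemas:

```python
from dataclasses import dataclass, field


@dataclass
class LocalMCP:
    # Assumed shape of the LocalMCP model: a stdio command run in-container.
    name: str
    command: str
    args: list[str] = field(default_factory=list)
    env: dict[str, str] = field(default_factory=dict)


def to_opencode(mcp: LocalMCP) -> dict:
    # OpenCode: "local" type with the full command as one array.
    return {"type": "local", "command": [mcp.command, *mcp.args], "environment": mcp.env}


def to_goose(mcp: LocalMCP) -> dict:
    # Goose: "stdio" type with command and args kept separate (key names assumed).
    return {"type": "stdio", "cmd": mcp.command, "args": mcp.args, "envs": mcp.env}


def to_crush(mcp: LocalMCP) -> dict:
    # Crush: "stdio" transport type (key names assumed).
    return {"type": "stdio", "command": mcp.command, "args": mcp.args, "env": mcp.env}
```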
- Add opencode state/cache to persistent_config
  ([#27](https://github.com/Monadical-SAS/cubbi/pull/27),
  [`b7b78ea`](https://github.com/Monadical-SAS/cubbi/commit/b7b78ea0754360efe56cf3f3255f90efda737a91))

- Comprehensive configuration system and environment variable forwarding
  ([#29](https://github.com/Monadical-SAS/cubbi/pull/29),
  [`bae951c`](https://github.com/Monadical-SAS/cubbi/commit/bae951cf7c4e498b6cdd7cd00836935acbd98e42))

  * feat: migrate container configuration from env vars to YAML config files

    - Replace environment variable-based configuration with structured YAML config files
    - Add Pydantic models for type-safe configuration management in cubbi_init.py
    - Update container.py to generate /cubbi/config.yaml and mount it into containers
    - Simplify the goose plugin to extract the provider from the default model format
    - Remove complex environment variable handling in favor of direct config access
    - Maintain backward compatibility while enabling a cleaner plugin architecture

  * feat: optimize goose plugin to only pass the required API key for the selected model

    - Update the goose plugin to set only the API key for the provider of the selected model
    - Add selective API key configuration for anthropic, openai, google, and openrouter
    - Update README.md with comprehensive automated testing documentation
    - Add litellm/gpt-oss:120b to the test.sh model matrix (now 5 images × 4 models = 20 tests)
    - Include single-prompt command syntax for each tool in the documentation

  * feat: add comprehensive integration tests with pytest parametrization

    - Create tests/test_integration.py with parametrized tests for 5 images × 4 models (20 combinations)
    - Add pytest configuration to exclude integration tests by default
    - Add an integration marker for selective test running
    - Include help command tests and image availability tests
    - Document test usage in tests/README_integration.md

    Integration tests cover:
    - goose, aider, claudecode, opencode, crush images
    - anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b models
    - proper command syntax for each tool
    - success validation with exit codes and completion markers

    Usage:
    - pytest (regular tests only)
    - pytest -m integration (integration tests only)
    - pytest -m integration -k "goose" (specific image)

  * feat: update OpenCode plugin with perfect multi-provider configuration

    - Add a global STANDARD_PROVIDERS constant for maintainability
    - Support custom providers (with baseURL) vs standard providers
    - Custom providers: include npm package, name, baseURL, apiKey, models
    - Standard providers: include only apiKey and empty models
    - Use direct API key values from the cubbi config instead of env vars
    - Only add the default model to the provider that matches it
    - Use @ai-sdk/openai-compatible for OpenAI-compatible providers
    - Preserve model names without transformation
    - All providers get the required empty models{} section per the OpenCode spec

    This ensures OpenCode can properly recognize and use both native providers (anthropic, openai,
    google, openrouter) and custom providers (litellm, etc.) with the correct configuration format.

  * refactor: model is now a combination of provider/model
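That `provider/model` convention can be sketched as a small parser. `split_model` is a hypothetical helper, but the single-split rule follows from the model names the changelog itself lists (e.g. openrouter/openai/gpt-4o, where the model part contains a slash):

```python
def split_model(spec: str) -> tuple[str, str]:
    """Split a 'provider/model' spec on the first slash only: model names can
    themselves contain slashes (e.g. 'openrouter/openai/gpt-4o')."""
    provider, sep, model = spec.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {spec!r}")
    return provider, model
```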
  * feat: add separate integration test for Claude Code without model config

    Claude Code is Anthropic-specific and doesn't require model selection like other tools.
    Created a dedicated test that verifies basic functionality without model preselection.

  * feat: update Claude Code and Crush plugins to use the new config system

    - The Claude Code plugin now uses cubbi_config.providers to get the Anthropic API key
    - The Crush plugin is updated to use cubbi_config.providers for provider configuration
    - Both plugins maintain backwards compatibility with environment variables
    - Consistent plugin structure across all cubbi images

  * feat: add environments_to_forward support for images

    - Add an environments_to_forward field to the ImageConfig and Image models
    - Update container creation logic to forward the specified environment variables from the host
    - Add environments_to_forward to the claudecode cubbi_image.yaml so the Anthropic API key is always available
    - Claude Code now gets its required environment variables regardless of model selection
    - This ensures Claude Code works properly even when other models are specified

    Fixes the issue where Claude Code couldn't access the Anthropic API key when using different
    model configurations.
  * refactor: remove unused environment field from cubbi_image.yaml files

    The 'environment' field was loaded but never processed at runtime. Only
    'environments_to_forward' is actually used to pass environment variables from host to container.

    Cleaned up the configuration files by removing:
    - 72 lines from aider/cubbi_image.yaml
    - 42 lines from claudecode/cubbi_image.yaml
    - 28 lines from crush/cubbi_image.yaml
    - 16 lines from goose/cubbi_image.yaml
    - the empty environment: [] from opencode/cubbi_image.yaml

    This makes the configuration files cleaner: they now contain only fields that are actually
    used by the system.

  * feat: implement environment variable forwarding for aider

    Updates aider to automatically receive all relevant environment variables from the host,
    similar to how opencode works.

    Changes:
    - Added an environments_to_forward field to aider/cubbi_image.yaml with a comprehensive list of API keys, configuration, and proxy variables
    - Updated aider_plugin.py to use the cubbi_config system for provider/model setup
    - Environment variables are now forwarded automatically during container creation
    - Maintains backward compatibility with legacy environment variables

    Environment variables forwarded:
    - API keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, etc.
    - Configuration: AIDER_MODEL, GIT_* variables, HTTP_PROXY, etc.
    - Timezone: TZ for proper log timestamps

    Tested: all aider tests pass, environment variables confirmed forwarded.

  * refactor: remove unused volumes and init fields from cubbi_image.yaml files

    Both the 'volumes' and 'init' fields were loaded but never processed at runtime. These were
    incomplete implementations that didn't affect container behavior.

    Removed from all 5 images:
    - volumes: a list with mountPath: /app (incomplete, missing host paths)
    - init: pre_command and command fields (unused during container creation)

    The cubbi_image.yaml files now only contain fields that are actually used:
    - basic metadata (name, description, version, maintainer, image)
    - persistent_configs (working functionality)
    - environments_to_forward (working functionality where present)

    This makes the configuration files cleaner and eliminates confusion about what functionality
    is actually implemented.

  * refactor: remove unused ImageInit and VolumeMount models

    These models were only referenced in the Image model definition but never used at runtime,
    since we removed all init: and volumes: fields from the cubbi_image.yaml files.

    Removed:
    - the VolumeMount class (mountPath, description fields)
    - the ImageInit class (pre_command, command fields)
    - the init: Optional[ImageInit] field from the Image model
    - the volumes: List[VolumeMount] field from the Image model

    The Image model now only contains fields that are actually used:
    - basic metadata (name, description, version, maintainer, image)
    - environment (loaded but unused; kept for future cleanup)
    - persistent_configs (working functionality)
    - environments_to_forward (working functionality)

    This makes the data model cleaner and eliminates dead code.

  * feat: add interactive configuration command

    Adds the `cubbi configure` command for interactive setup of LLM providers and models through
    a user-friendly questionnaire interface.

    New features:
    - Interactive provider configuration (OpenAI, Anthropic, OpenRouter, etc.)
    - API key management with environment variable references
    - Model selection with provider/model format validation
    - Default settings configuration (image, ports, volumes, etc.)
    - Added the questionary dependency for interactive prompts

    Changes:
    - Added cubbi/configure.py with the full interactive configuration logic
    - Added the configure command to cubbi/cli.py
    - Updated uv.lock with the questionary and prompt-toolkit dependencies

    Usage: `cubbi configure`
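A condensed sketch of such a questionnaire. The questionary dependency is the one named above; everything else (function names, the `$ENV_VAR` reference syntax, the provider list) is an illustrative assumption, not cubbi's actual implementation:

```python
PROVIDERS = ["openai", "anthropic", "google", "openrouter", "litellm"]


def resolve_api_key(value: str, env: dict[str, str]) -> str:
    """Support environment-variable references like '$OPENAI_API_KEY' so the
    config file never has to store the key itself ('$VAR' syntax assumed)."""
    if value.startswith("$"):
        return env.get(value[1:], "")
    return value


def ask_provider() -> dict:
    # questionary is the interactive-prompt library the changelog mentions;
    # imported lazily so the pure helper above stays importable without it.
    import questionary

    provider = questionary.select("Which provider?", choices=PROVIDERS).ask()
    key = questionary.password(f"API key for {provider} (or $ENV_VAR reference)").ask()
    return {"provider": provider, "api_key": key}
```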
  * refactor: update integration tests for current functionality

    Updates the integration tests to reflect current cubbi functionality:

    test_integration.py:
    - Simplified the image list (removed crush temporarily)
    - Updated the model list with the currently supported models
    - Removed outdated help command tests that were timing out
    - Simplified the claudecode test to a basic functionality test
    - Updated command templates for the current tool versions

    test_integration_docker.py:
    - Cleaned up the container management tests
    - Fixed formatting and improved readability
    - Updated assertion formatting for better error messages

    These changes align the tests with the current state of the codebase and remove tests that
    were causing timeouts or failures.

  * fix: fix temporary file chmod

- Dynamic model management for OpenAI-compatible providers
  ([#33](https://github.com/Monadical-SAS/cubbi/pull/33),
  [`7d6bc5d`](https://github.com/Monadical-SAS/cubbi/commit/7d6bc5dbfa5f4d4ef69a7b806846aebdeec38aa0))

  feat: add model fetching for OpenAI-compatible endpoints

- Universal model management for all standard providers
  ([#34](https://github.com/Monadical-SAS/cubbi/pull/34),
  [`fc819a3`](https://github.com/Monadical-SAS/cubbi/commit/fc819a386185330e60946ee4712f268cfed2b66a))

  * fix: add crush plugin support too

  * feat: comprehensive model management for all standard providers

    - Add universal provider support for model fetching (OpenAI, Anthropic, Google, OpenRouter)
    - Add default API URLs for the standard providers in config.py
    - Enhance the model fetcher with provider-specific authentication:
      - Anthropic: x-api-key header + anthropic-version header
      - Google: x-goog-api-key header + custom response format handling
      - OpenAI/OpenRouter: Bearer token (unchanged)
    - Support Google's unique API response format (models vs data key, name vs id field)
    - Update CLI commands to work with all supported provider types
    - Enhance the configure interface to include all providers (even those without API keys)
    - Update both the OpenCode and Crush plugins to populate models for all provider types
    - Add comprehensive provider support detection methods
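The provider-specific authentication and response handling described above can be sketched as two helpers. The header names follow the list in the entry; the `anthropic-version` value and the function names are assumptions, not cubbi's actual code:

```python
def auth_headers(provider: str, api_key: str) -> dict[str, str]:
    """Authentication headers for a provider's model-listing endpoint."""
    if provider == "anthropic":
        # Anthropic uses x-api-key plus a required API version header
        # (version string assumed here).
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    if provider == "google":
        # Google uses its own key header.
        return {"x-goog-api-key": api_key}
    # OpenAI, OpenRouter, and OpenAI-compatible endpoints use a Bearer token.
    return {"Authorization": f"Bearer {api_key}"}


def extract_models(provider: str, payload: dict) -> list[str]:
    """Google nests results under 'models' with a 'name' field; the others
    use the OpenAI-style {'data': [{'id': ...}]} shape."""
    if provider == "google":
        return [m["name"] for m in payload.get("models", [])]
    return [m["id"] for m in payload.get("data", [])]
```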
### Refactoring

- Deep clean plugins ([#31](https://github.com/Monadical-SAS/cubbi/pull/31),
  [`3a7b921`](https://github.com/Monadical-SAS/cubbi/commit/3a7b9213b0d4e5ce0cfb1250624651b242fdc325))

  * refactor: deep clean plugins

  * refactor: modernize the plugin system with Python 3.12+ typing and simplified discovery

    - Update typing to Python 3.12+ style (Dict -> dict, Optional -> union types)
    - Simplify plugin discovery using PLUGIN_CLASS exports instead of dir() reflection
    - Add public get_user_ids() and set_ownership() functions in cubbi_init
    - Add a create_directory_with_ownership() helper method to the ToolPlugin base class
    - Replace the initialize() + integrate_mcp_servers() pattern with a unified configure()
    - Add is_already_configured() checks to prevent overwriting existing configs
    - Remove excessive comments and clean up the code structure
    - All 5 plugins updated: goose, opencode, claudecode, aider, crush

  * fix: remove duplicate


## v0.4.0 (2025-08-06)

### Documentation

- Update readme ([#25](https://github.com/Monadical-SAS/cubbi/pull/25),
  [`9dc1158`](https://github.com/Monadical-SAS/cubbi/commit/9dc11582a21371a069d407390308340a87358a9f))

  doc: update readme

### Features

- Add user port support ([#26](https://github.com/Monadical-SAS/cubbi/pull/26),
  [`75c9849`](https://github.com/Monadical-SAS/cubbi/commit/75c9849315aebb41ffbd5ac942c7eb3c4a151663))

  * feat: add user port support

  * fix: fix unit test and improve isolation

  * refactor: remove some fixtures

- Make opencode beautiful by default ([#24](https://github.com/Monadical-SAS/cubbi/pull/24),
  [`b8ecad6`](https://github.com/Monadical-SAS/cubbi/commit/b8ecad6227f6a328517edfc442cd9bcf4d3361dc))

  opencode: try having a compatible default theme

- Support for crush ([#23](https://github.com/Monadical-SAS/cubbi/pull/23),
  [`472f030`](https://github.com/Monadical-SAS/cubbi/commit/472f030924e58973dea0a41188950540550c125d))


## v0.3.0 (2025-07-31)

### Bug Fixes

- Claudecode and opencode arm64 images ([#21](https://github.com/Monadical-SAS/cubbi/pull/21),
  [`dba7a7c`](https://github.com/Monadical-SAS/cubbi/commit/dba7a7c1efcc04570a92ecbc4eee39eb6353aaea))

- Update readme
  ([`4958b07`](https://github.com/Monadical-SAS/cubbi/commit/4958b07401550fb5a6751b99a257eda6c4558ea4))

### Continuous Integration

- Remove the conventional commit check, as only PR checks are required
  ([`afae8a1`](https://github.com/Monadical-SAS/cubbi/commit/afae8a13e1ea02801b2e5c9d5c84aa65a32d637c))

### Features

- Add --mcp-type option for remote MCP servers
  ([`d41faf6`](https://github.com/Monadical-SAS/cubbi/commit/d41faf6b3072d4f8bdb2adc896125c7fd0d6117d))

  Auto-detects the connection type from the URL (/sse -> sse, /mcp -> streamable_http) or allows
  manual specification. Updates the goose plugin to use the actual MCP type instead of a
  hardcoded sse.

  🤖 Generated with [Claude Code](https://claude.ai/code)

  Co-Authored-By: Claude <noreply@anthropic.com>
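The URL-based auto-detection described in the --mcp-type entry can be sketched as a small helper; `detect_mcp_type` is illustrative, not cubbi's actual function, but the suffix rule is the one stated above:

```python
def detect_mcp_type(url: str) -> str:
    """Map a remote MCP URL to a connection type: a path ending in /sse maps
    to 'sse', /mcp to 'streamable_http'; anything else must be given
    explicitly with --mcp-type."""
    path = url.split("?", 1)[0].rstrip("/")
    if path.endswith("/sse"):
        return "sse"
    if path.endswith("/mcp"):
        return "streamable_http"
    raise ValueError(f"cannot auto-detect MCP type from {url!r}; pass --mcp-type")
```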
- Add Claude Code image support ([#16](https://github.com/Monadical-SAS/cubbi/pull/16),
  [`b28c2bd`](https://github.com/Monadical-SAS/cubbi/commit/b28c2bd63e324f875b2d862be9e0afa4a7a17ffc))

  * feat: add Claude Code image support

    Add a new Cubbi image for Claude Code (Anthropic's official CLI) with:
    - full Claude Code CLI functionality via the NPM package
    - secure API key management with multiple authentication options
    - enterprise support (Bedrock, Vertex AI, proxy configuration)
    - persistent configuration and cache directories
    - a comprehensive test suite and documentation

    The image allows users to run Claude Code in containers with proper isolation, persistent
    settings, and seamless Cubbi integration. It gracefully handles missing API keys to allow
    flexible authentication.

    Also adds optional Claude Code API keys to container.py for enterprise deployments.

    🤖 Generated with [Claude Code](https://claude.ai/code)

    Co-Authored-By: Claude <noreply@anthropic.com>

  * Pre-commit fixes

  ---------

  Co-authored-by: Claude <noreply@anthropic.com>

  Co-authored-by: Your Name <you@example.com>

- Add configuration override in session create with --config/-c
  ([`672b8a8`](https://github.com/Monadical-SAS/cubbi/commit/672b8a8e315598d98f40d269dfcfbde6203cbb57))

- Add MCP tracking to sessions ([#19](https://github.com/Monadical-SAS/cubbi/pull/19),
  [`d750e64`](https://github.com/Monadical-SAS/cubbi/commit/d750e64608998f6f3a03928bba18428f576b412f))

  Add an mcps field to the Session model to track active MCP servers, and populate it from
  container labels in ContainerManager. Enhance the MCP remove command to warn when removing
  servers that are used by active sessions.

  🤖 Generated with [Claude Code](https://claude.ai/code)

  Co-authored-by: Claude <noreply@anthropic.com>

- Add network filtering with domain restrictions
  ([#22](https://github.com/Monadical-SAS/cubbi/pull/22),
  [`2eb15a3`](https://github.com/Monadical-SAS/cubbi/commit/2eb15a31f8bb97f93461bea5e567cc2ccde3f86c))

  * fix: remove config override logging to prevent API key exposure

  * feat: add network filtering with domain restrictions

    - Add a --domains flag to restrict container network access to specific domains/ports
    - Integrate the monadicalsas/network-filter container for network isolation
    - Support domain patterns like 'example.com:443', '*.api.com'
    - Add a defaults.domains configuration option
    - Automatically handle the network-filter container lifecycle
    - Prevent conflicts between the --domains and --network options

  * docs: add the --domains option to the README usage examples

  * docs: remove the wildcard domain example from the --domains help

    Wildcard domains are not currently supported by network-filter.
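Parsing the `domain[:port]` patterns that --domains accepts can be sketched as follows. `parse_domain` is a hypothetical helper, and the bare-host default of 443 is an assumption, not documented behavior:

```python
def parse_domain(pattern: str) -> tuple[str, int]:
    """Parse a --domains entry of the form 'domain[:port]' into (host, port).

    The default port of 443 for a bare host is assumed for illustration;
    wildcard hosts like '*.api.com' pass through as the host string.
    """
    host, sep, port = pattern.partition(":")
    if not host:
        raise ValueError(f"invalid domain pattern: {pattern!r}")
    return host, int(port) if sep else 443
```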
- Add ripgrep and openssh-client in images ([#15](https://github.com/Monadical-SAS/cubbi/pull/15),
  [`e70ec35`](https://github.com/Monadical-SAS/cubbi/commit/e70ec3538ba4e02a60afedca583da1c35b7b6d7a))

- Add sudo and sudoers ([#20](https://github.com/Monadical-SAS/cubbi/pull/20),
  [`9c8ddbb`](https://github.com/Monadical-SAS/cubbi/commit/9c8ddbb3f3f2fc97db9283898b6a85aee7235fae))

  * feat: add sudo and sudoers

  * Update cubbi/images/cubbi_init.py

  Co-authored-by: pr-agent-monadical[bot] <198624643+pr-agent-monadical[bot]@users.noreply.github.com>

  ---------

- Implement Aider AI pair programming support
  ([#17](https://github.com/Monadical-SAS/cubbi/pull/17),
  [`fc0d6b5`](https://github.com/Monadical-SAS/cubbi/commit/fc0d6b51af12ddb0bd8655309209dd88e7e4d6f1))

  * feat: implement Aider AI pair programming support

    - Add a comprehensive Aider Docker image with Python 3.12 and a system pip installation
    - Implement aider_plugin.py for secure API key management and environment configuration
    - Support multiple LLM providers: OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter
    - Add persistent configuration for the ~/.aider/ and ~/.cache/aider/ directories
    - Create comprehensive documentation with usage examples and troubleshooting
    - Include an automated test suite with 6 test categories covering all functionality
    - Update container.py to support DEEPSEEK_API_KEY and GEMINI_API_KEY
    - Integrate with the Cubbi CLI for seamless session management

    🤖 Generated with [Claude Code](https://claude.ai/code)

    Co-Authored-By: Claude <noreply@anthropic.com>

  * Fix pytest for aider

  * Fix pre-commit

  ---------

  Co-authored-by: Your Name <you@example.com>

- Include new image opencode ([#14](https://github.com/Monadical-SAS/cubbi/pull/14),
  [`5fca51e`](https://github.com/Monadical-SAS/cubbi/commit/5fca51e5152dcf7503781eb707fa04414cf33c05))

  * feat: include new image opencode

  * docs: update readme

- Support config `openai.url` for goose/opencode/aider
  ([`da5937e`](https://github.com/Monadical-SAS/cubbi/commit/da5937e70829b88a66f96c3ce7be7dacfc98facb))

### Refactoring

- New image layout and organization ([#13](https://github.com/Monadical-SAS/cubbi/pull/13),
  [`e5121dd`](https://github.com/Monadical-SAS/cubbi/commit/e5121ddea4230e78a05a85c4ce668e0c169b5ace))

  * refactor: rework how images are defined, in order to allow wrappers for other tools

  * refactor: fix issues with ownership

  * refactor: images now share information with other image types

  * fix: update readme


## v0.2.0 (2025-05-21)

### Continuous Integration
CLAUDE.md (12 lines changed)
@@ -48,3 +48,15 @@ Use uv instead:
- **Configuration**: Use environment variables with YAML for configuration

Refer to SPECIFICATIONS.md for detailed architecture and implementation guidance.

## Cubbi images

A cubbi image is a flavored Docker image that wraps a tool (say, goose) and dynamically configures that tool when the image starts. All cubbi images are defined in the `cubbi/images` directory.

Each image must have (taking the goose image as an example):

- `goose/cubbi_image.yaml`: image metadata, the list of persistent paths, etc.
- `goose/Dockerfile`: used to build the cubbi image with the cubbi tools
- `goose/goose_plugin.py`: a plugin file named after the cubbi image, specific to this image, which dynamically configures the Docker image at startup with the user's preferences (via environment variables). All plugins import `cubbi_init.py`, but that file is shared across all images, so it is normal for the plugin's import to fail outside a build: the build system copies the file into place during the build.
- `goose/README.md`: a short readme about the image

If you are creating a new image, look at the existing images (goose, opencode).
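A hypothetical skeleton of such a plugin, following the conventions above and the `PLUGIN_CLASS` export mentioned in the v0.5.0 plugin-system refactor. In a real image, `ToolPlugin` comes from the shared `cubbi_init.py` copied in at build time; a stand-in base is defined here so the sketch is self-contained:

```python
# mytool/mytool_plugin.py, a hypothetical skeleton for a new cubbi image plugin.
class ToolPlugin:  # stand-in for cubbi_init.ToolPlugin, which only exists in-image
    name: str = ""


class MyToolPlugin(ToolPlugin):
    name = "mytool"

    def is_already_configured(self) -> bool:
        # Skip configuration when the user already has a config mounted.
        return False

    def configure(self) -> None:
        # Write the tool's config (providers, MCP servers) at container start.
        pass


# Exported for plugin discovery instead of dir() reflection (v0.5.0 refactor).
PLUGIN_CLASS = MyToolPlugin
```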
README.md (109 lines changed)
````diff
@@ -2,7 +2,7 @@
 # Cubbi - Container Tool
 
-Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments. It works with both local Docker and a dedicated remote web service that manages containers in a Docker-in-Docker (DinD) environment. Cubbi also supports connecting to MCP (Model Control Protocol) servers to extend AI tools with additional capabilities.
+Cubbi is a command-line tool for managing ephemeral containers that run AI tools and development environments, with support for MCP servers. It supports [Aider](https://github.com/Aider-AI/aider), [Crush](https://github.com/charmbracelet/crush), [Claude Code](https://github.com/anthropics/claude-code), [Goose](https://github.com/block/goose), [Opencode](https://github.com/sst/opencode).
 
 
 
@@ -17,7 +17,6 @@
 - `cubbix` - Shortcut for `cubbi session create`
 - `cubbix .` - Mount the current directory
 - `cubbix /path/to/dir` - Mount a specific directory
-- `cubbix https://github.com/user/repo` - Clone a repository
 
 ## 📋 Requirements
 
@@ -27,9 +26,6 @@
 ## 📥 Installation
 
 ```bash
-# Via pip
-pip install cubbi
-
 # Via uv
 uv tool install cubbi
 
@@ -43,6 +39,7 @@ Then compile your first image:
 ```bash
 cubbi image build goose
 cubbi image build opencode
+cubbi image build crush
 ```
 
 ### For Developers
@@ -80,9 +77,19 @@ cubbi session connect SESSION_ID
 # Close a session when done
 cubbi session close SESSION_ID
 
+# Close a session quickly (kill instead of a graceful stop)
+cubbi session close SESSION_ID --kill
+
+# Close all sessions at once
+cubbi session close --all
+
+# Close all sessions quickly
+cubbi session close --all --kill
+
 # Create a session with a specific image
 cubbix --image goose
 cubbix --image opencode
````
|
||||||
|
cubbix --image crush
|
||||||
|
|
||||||
# Create a session with environment variables
|
# Create a session with environment variables
|
||||||
cubbix -e VAR1=value1 -e VAR2=value2
|
cubbix -e VAR1=value1 -e VAR2=value2
|
||||||
@@ -95,9 +102,17 @@ cubbix -v ~/data:/data -v ./configs:/etc/app/config
|
|||||||
cubbix .
|
cubbix .
|
||||||
cubbix /path/to/project
|
cubbix /path/to/project
|
||||||
|
|
||||||
|
# Forward ports from container to host
|
||||||
|
cubbix --port 8000 # Forward port 8000
|
||||||
|
cubbix --port 8000,3000,5173 # Forward multiple ports (comma-separated)
|
||||||
|
cubbix --port 8000 --port 3000 # Forward multiple ports (repeated flag)
|
||||||
|
|
||||||
# Connect to external Docker networks
|
# Connect to external Docker networks
|
||||||
cubbix --network teamnet --network dbnet
|
cubbix --network teamnet --network dbnet
|
||||||
|
|
||||||
|
# Restrict network access to specific domains
|
||||||
|
cubbix --domains github.com --domains "api.example.com:443"
|
||||||
|
|
||||||
# Connect to MCP servers for extended capabilities
|
# Connect to MCP servers for extended capabilities
|
||||||
cubbix --mcp github --mcp jira
|
cubbix --mcp github --mcp jira
|
||||||
|
|
||||||
@@ -125,7 +140,41 @@ cubbix --ssh
|
|||||||
|
|
||||||
## 🖼️ Image Management
|
## 🖼️ Image Management
|
||||||
|
|
||||||
Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools:
|
Cubbi includes an image management system that allows you to build, manage, and use Docker images for different AI tools
|
||||||
|
|
||||||
|
**Supported Images**
|
||||||
|
|
||||||
|
| Image Name | Langtrace Support | Single Prompt Command |
|
||||||
|
|------------|-------------------|----------------------|
|
||||||
|
| goose | yes | `goose run -t 'prompt' --no-session --quiet` |
|
||||||
|
| opencode | no | `opencode run -m MODEL 'prompt'` |
|
||||||
|
| claudecode | no | `claude -p 'prompt'` |
|
||||||
|
| aider | no | `aider --message 'prompt' --yes-always --no-fancy-input` |
|
||||||
|
| crush | no | `crush run 'prompt'` |
|
||||||
|
|
||||||
|
**Automated Testing:**
|
||||||
|
|
||||||
|
Each image can be tested with single prompt commands using different models:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Test a single image with a specific model
|
||||||
|
cubbix -i goose -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "goose run -t 'What is 2+2?' --no-session --quiet"
|
||||||
|
|
||||||
|
# Test aider with non-interactive flags
|
||||||
|
cubbix -i aider -m openai/gpt-4o --no-connect --no-shell --run "aider --message 'What is 2+2?' --yes-always --no-fancy-input --no-check-update"
|
||||||
|
|
||||||
|
# Test claude-code (note: binary name is 'claude', not 'claude-code')
|
||||||
|
cubbix -i claudecode -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "claude -p 'What is 2+2?'"
|
||||||
|
|
||||||
|
# Test opencode with model specification
|
||||||
|
cubbix -i opencode -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "opencode run -m anthropic/claude-sonnet-4-20250514 'What is 2+2?'"
|
||||||
|
|
||||||
|
# Test crush
|
||||||
|
cubbix -i crush -m anthropic/claude-sonnet-4-20250514 --no-connect --no-shell --run "crush run 'What is 2+2?'"
|
||||||
|
|
||||||
|
# Run comprehensive test suite (requires test.sh script)
|
||||||
|
./test.sh # Tests all images with multiple models: anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b
|
||||||
|
```
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# List available images
|
# List available images
|
||||||
@@ -134,10 +183,15 @@ cubbi image list
|
|||||||
# Get detailed information about an image
|
# Get detailed information about an image
|
||||||
cubbi image info goose
|
cubbi image info goose
|
||||||
cubbi image info opencode
|
cubbi image info opencode
|
||||||
|
cubbi image info crush
|
||||||
|
|
||||||
# Build an image
|
# Build an image
|
||||||
cubbi image build goose
|
cubbi image build goose
|
||||||
cubbi image build opencode
|
cubbi image build opencode
|
||||||
|
cubbi image build crush
|
||||||
|
|
||||||
|
# Build an image without using cache (force fresh build)
|
||||||
|
cubbi image build --no-cache goose
|
||||||
```
|
```
|
||||||
|
|
||||||
Images are defined in the `cubbi/images/` directory, with each subdirectory containing:
|
Images are defined in the `cubbi/images/` directory, with each subdirectory containing:
|
||||||
@@ -222,6 +276,26 @@ cubbi config volume remove /local/path
|
|||||||
|
|
||||||
Default volumes will be combined with any volumes specified using the `-v` flag when creating a session.
|
Default volumes will be combined with any volumes specified using the `-v` flag when creating a session.
|
||||||
|
|
||||||
|
### Default Ports Configuration
|
||||||
|
|
||||||
|
You can configure default ports that will be automatically forwarded in every new session:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# List default ports
|
||||||
|
cubbi config port list
|
||||||
|
|
||||||
|
# Add a single port to defaults
|
||||||
|
cubbi config port add 8000
|
||||||
|
|
||||||
|
# Add multiple ports to defaults (comma-separated)
|
||||||
|
cubbi config port add 8000,3000,5173
|
||||||
|
|
||||||
|
# Remove a port from defaults
|
||||||
|
cubbi config port remove 8000
|
||||||
|
```
|
||||||
|
|
||||||
|
Default ports will be combined with any ports specified using the `--port` flag when creating a session.
|
||||||
|
|
||||||
### Default MCP Servers Configuration
|
### Default MCP Servers Configuration
|
||||||
|
|
||||||
You can configure default MCP servers that sessions will automatically connect to:
|
You can configure default MCP servers that sessions will automatically connect to:
|
||||||
@@ -294,7 +368,8 @@ Service credentials like API keys configured in `~/.config/cubbi/config.yaml` ar
|
|||||||
MCP (Model Control Protocol) servers provide tool-calling capabilities to AI models, enhancing their ability to interact with external services, databases, and systems. Cubbi supports multiple types of MCP servers:
|
MCP (Model Control Protocol) servers provide tool-calling capabilities to AI models, enhancing their ability to interact with external services, databases, and systems. Cubbi supports multiple types of MCP servers:
|
||||||
|
|
||||||
1. **Remote HTTP SSE servers** - External MCP servers accessed over HTTP
|
1. **Remote HTTP SSE servers** - External MCP servers accessed over HTTP
|
||||||
2. **Docker-based MCP servers** - Local MCP servers running in Docker containers, with a SSE proxy for stdio-to-SSE conversion
|
2. **Docker-based MCP servers** - MCP servers running in Docker containers, with a SSE proxy for stdio-to-SSE conversion
|
||||||
|
3. **Local MCP servers** - MCP servers running as local processes on your host machine
|
||||||
|
|
||||||
### Managing MCP Servers
|
### Managing MCP Servers
|
||||||
|
|
||||||
@@ -342,12 +417,24 @@ cubbi mcp remove github
|
|||||||
Cubbi supports different types of MCP servers:
|
Cubbi supports different types of MCP servers:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Example of docker-based MCP server
|
# Docker-based MCP server (with proxy)
|
||||||
cubbi mcp add fetch mcp/fetch
|
cubbi mcp add fetch mcp/fetch
|
||||||
cubbi mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=xxxx github mcp/github
|
cubbi mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=xxxx mcp/github mcp/github-proxy
|
||||||
|
|
||||||
# Example of SSE-based MCP server
|
# Remote HTTP SSE server
|
||||||
cubbi mcp add myserver https://myssemcp.com
|
cubbi mcp add-remote myserver https://myssemcp.com/sse
|
||||||
|
|
||||||
|
# Local MCP server (runs as a local process)
|
||||||
|
cubbi mcp add-local mylocalmcp /path/to/mcp-executable
|
||||||
|
cubbi mcp add-local mylocalmcp /usr/local/bin/mcp-tool --args "--config" --args "/etc/mcp.conf"
|
||||||
|
cubbi mcp add-local mylocalmcp npx --args "@modelcontextprotocol/server-filesystem" --args "/path/to/data"
|
||||||
|
|
||||||
|
# Add environment variables to local MCP servers
|
||||||
|
cubbi mcp add-local mylocalmcp /path/to/mcp-server -e API_KEY=xxx -e BASE_URL=https://api.example.com
|
||||||
|
|
||||||
|
# Prevent adding to default MCPs
|
||||||
|
cubbi mcp add myserver mcp/server --no-default
|
||||||
|
cubbi mcp add-local mylocalmcp /path/to/executable --no-default
|
||||||
```
|
```
|
||||||
|
|
||||||
### Using MCP Servers with Sessions
|
### Using MCP Servers with Sessions
|
||||||
|
|||||||
464
cubbi/cli.py
464
cubbi/cli.py
@@ -14,6 +14,7 @@ from rich.console import Console
|
|||||||
from rich.table import Table
|
from rich.table import Table
|
||||||
|
|
||||||
from .config import ConfigManager
|
from .config import ConfigManager
|
||||||
|
from .configure import run_interactive_config
|
||||||
from .container import ContainerManager
|
from .container import ContainerManager
|
||||||
from .mcp import MCPManager
|
from .mcp import MCPManager
|
||||||
from .models import SessionStatus
|
from .models import SessionStatus
|
||||||
@@ -60,6 +61,12 @@ def main(
|
|||||||
logging.getLogger().setLevel(logging.INFO)
|
logging.getLogger().setLevel(logging.INFO)
|
||||||
|
|
||||||
|
|
||||||
|
@app.command()
|
||||||
|
def configure() -> None:
|
||||||
|
"""Interactive configuration of LLM providers and models"""
|
||||||
|
run_interactive_config()
|
||||||
|
|
||||||
|
|
||||||
@app.command()
|
@app.command()
|
||||||
def version() -> None:
|
def version() -> None:
|
||||||
"""Show Cubbi version information"""
|
"""Show Cubbi version information"""
|
||||||
@@ -142,6 +149,11 @@ def create_session(
|
|||||||
network: List[str] = typer.Option(
|
network: List[str] = typer.Option(
|
||||||
[], "--network", "-N", help="Connect to additional Docker networks"
|
[], "--network", "-N", help="Connect to additional Docker networks"
|
||||||
),
|
),
|
||||||
|
port: List[str] = typer.Option(
|
||||||
|
[],
|
||||||
|
"--port",
|
||||||
|
help="Forward ports (e.g., '8000' or '8000,3000' or multiple --port flags)",
|
||||||
|
),
|
||||||
name: Optional[str] = typer.Option(None, "--name", "-n", help="Session name"),
|
name: Optional[str] = typer.Option(None, "--name", "-n", help="Session name"),
|
||||||
run_command: Optional[str] = typer.Option(
|
run_command: Optional[str] = typer.Option(
|
||||||
None,
|
None,
|
||||||
@@ -168,11 +180,24 @@ def create_session(
|
|||||||
gid: Optional[int] = typer.Option(
|
gid: Optional[int] = typer.Option(
|
||||||
None, "--gid", help="Group ID to run the container as (defaults to host user)"
|
None, "--gid", help="Group ID to run the container as (defaults to host user)"
|
||||||
),
|
),
|
||||||
model: Optional[str] = typer.Option(None, "--model", help="Model to use"),
|
model: Optional[str] = typer.Option(
|
||||||
provider: Optional[str] = typer.Option(
|
None,
|
||||||
None, "--provider", "-p", help="Provider to use"
|
"--model",
|
||||||
|
"-m",
|
||||||
|
help="Model to use in 'provider/model' format (e.g., 'anthropic/claude-3-5-sonnet')",
|
||||||
),
|
),
|
||||||
ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
|
ssh: bool = typer.Option(False, "--ssh", help="Start SSH server in the container"),
|
||||||
|
config: List[str] = typer.Option(
|
||||||
|
[],
|
||||||
|
"--config",
|
||||||
|
"-c",
|
||||||
|
help="Override configuration values (KEY=VALUE) for this session only",
|
||||||
|
),
|
||||||
|
domains: List[str] = typer.Option(
|
||||||
|
[],
|
||||||
|
"--domains",
|
||||||
|
help="Restrict network access to specified domains/ports (e.g., 'example.com:443', 'api.github.com')",
|
||||||
|
),
|
||||||
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
|
verbose: bool = typer.Option(False, "--verbose", help="Enable verbose logging"),
|
||||||
) -> None:
|
) -> None:
|
||||||
"""Create a new Cubbi session
|
"""Create a new Cubbi session
|
||||||
@@ -189,16 +214,66 @@ def create_session(
|
|||||||
target_gid = gid if gid is not None else os.getgid()
|
target_gid = gid if gid is not None else os.getgid()
|
||||||
console.print(f"Using UID: {target_uid}, GID: {target_gid}")
|
console.print(f"Using UID: {target_uid}, GID: {target_gid}")
|
||||||
|
|
||||||
# Use default image from user configuration
|
# Create a temporary user config manager with overrides
|
||||||
|
temp_user_config = UserConfigManager()
|
||||||
|
|
||||||
|
# Parse and apply config overrides
|
||||||
|
config_overrides = {}
|
||||||
|
for config_item in config:
|
||||||
|
if "=" in config_item:
|
||||||
|
key, value = config_item.split("=", 1)
|
||||||
|
# Convert string value to appropriate type
|
||||||
|
if value.lower() == "true":
|
||||||
|
typed_value = True
|
||||||
|
elif value.lower() == "false":
|
||||||
|
typed_value = False
|
||||||
|
elif value.isdigit():
|
||||||
|
typed_value = int(value)
|
||||||
|
else:
|
||||||
|
typed_value = value
|
||||||
|
config_overrides[key] = typed_value
|
||||||
|
else:
|
||||||
|
console.print(
|
||||||
|
f"[yellow]Warning: Ignoring invalid config format: {config_item}. Use KEY=VALUE.[/yellow]"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Apply overrides to temp config (without saving)
|
||||||
|
for key, value in config_overrides.items():
|
||||||
|
# Handle shorthand service paths (e.g., "langfuse.url")
|
||||||
|
if (
|
||||||
|
"." in key
|
||||||
|
and not key.startswith("services.")
|
||||||
|
and not any(
|
||||||
|
key.startswith(section + ".")
|
||||||
|
for section in ["defaults", "docker", "remote", "ui"]
|
||||||
|
)
|
||||||
|
):
|
||||||
|
service, setting = key.split(".", 1)
|
||||||
|
key = f"services.{service}.{setting}"
|
||||||
|
|
||||||
|
# Split the key path and navigate to set the value
|
||||||
|
parts = key.split(".")
|
||||||
|
config_dict = temp_user_config.config
|
||||||
|
|
||||||
|
# Navigate to the containing dictionary
|
||||||
|
for part in parts[:-1]:
|
||||||
|
if part not in config_dict:
|
||||||
|
config_dict[part] = {}
|
||||||
|
config_dict = config_dict[part]
|
||||||
|
|
||||||
|
# Set the value without saving
|
||||||
|
config_dict[parts[-1]] = value
|
||||||
|
|
||||||
|
# Use default image from user configuration (with overrides applied)
|
||||||
if not image:
|
if not image:
|
||||||
image_name = user_config.get(
|
image_name = temp_user_config.get(
|
||||||
"defaults.image", config_manager.config.defaults.get("image", "goose")
|
"defaults.image", config_manager.config.defaults.get("image", "goose")
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
image_name = image
|
image_name = image
|
||||||
|
|
||||||
# Start with environment variables from user configuration
|
# Start with environment variables from user configuration (with overrides applied)
|
||||||
environment = user_config.get_environment_variables()
|
environment = temp_user_config.get_environment_variables()
|
||||||
|
|
||||||
# Override with environment variables from command line
|
# Override with environment variables from command line
|
||||||
for var in env:
|
for var in env:
|
||||||
@@ -214,7 +289,7 @@ def create_session(
|
|||||||
volume_mounts = {}
|
volume_mounts = {}
|
||||||
|
|
||||||
# Get default volumes from user config
|
# Get default volumes from user config
|
||||||
default_volumes = user_config.get("defaults.volumes", [])
|
default_volumes = temp_user_config.get("defaults.volumes", [])
|
||||||
|
|
||||||
# Combine default volumes with user-specified volumes
|
# Combine default volumes with user-specified volumes
|
||||||
all_volumes = default_volumes + list(volume)
|
all_volumes = default_volumes + list(volume)
|
||||||
@@ -241,15 +316,56 @@ def create_session(
|
|||||||
)
|
)
|
||||||
|
|
||||||
# Get default networks from user config
|
# Get default networks from user config
|
||||||
default_networks = user_config.get("defaults.networks", [])
|
default_networks = temp_user_config.get("defaults.networks", [])
|
||||||
|
|
||||||
# Combine default networks with user-specified networks, removing duplicates
|
# Combine default networks with user-specified networks, removing duplicates
|
||||||
all_networks = list(set(default_networks + network))
|
all_networks = list(set(default_networks + network))
|
||||||
|
|
||||||
|
# Get default domains from user config
|
||||||
|
default_domains = temp_user_config.get("defaults.domains", [])
|
||||||
|
|
||||||
|
# Combine default domains with user-specified domains
|
||||||
|
all_domains = default_domains + list(domains)
|
||||||
|
|
||||||
|
# Check for conflict between network and domains
|
||||||
|
if all_domains and all_networks:
|
||||||
|
console.print(
|
||||||
|
"[yellow]Warning: --domains cannot be used with --network. Network restrictions will take precedence.[/yellow]"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Get default ports from user config
|
||||||
|
default_ports = temp_user_config.get("defaults.ports", [])
|
||||||
|
|
||||||
|
# Parse and combine ports from command line
|
||||||
|
session_ports = []
|
||||||
|
for port_arg in port:
|
||||||
|
try:
|
||||||
|
parsed_ports = [int(p.strip()) for p in port_arg.split(",")]
|
||||||
|
|
||||||
|
# Validate port ranges
|
||||||
|
invalid_ports = [p for p in parsed_ports if not (1 <= p <= 65535)]
|
||||||
|
if invalid_ports:
|
||||||
|
console.print(
|
||||||
|
f"[red]Error: Invalid ports {invalid_ports}. Ports must be between 1 and 65535[/red]"
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
session_ports.extend(parsed_ports)
|
||||||
|
except ValueError:
|
||||||
|
console.print(
|
||||||
|
f"[yellow]Warning: Ignoring invalid port format: {port_arg}. Use integers only.[/yellow]"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Combine default ports with session ports, removing duplicates
|
||||||
|
all_ports = list(set(default_ports + session_ports))
|
||||||
|
|
||||||
|
if all_ports:
|
||||||
|
console.print(f"Forwarding ports: {', '.join(map(str, all_ports))}")
|
||||||
|
|
||||||
# Get default MCPs from user config if none specified
|
# Get default MCPs from user config if none specified
|
||||||
all_mcps = mcp if isinstance(mcp, list) else []
|
all_mcps = mcp if isinstance(mcp, list) else []
|
||||||
if not all_mcps:
|
if not all_mcps:
|
||||||
default_mcps = user_config.get("defaults.mcps", [])
|
default_mcps = temp_user_config.get("defaults.mcps", [])
|
||||||
all_mcps = default_mcps
|
all_mcps = default_mcps
|
||||||
|
|
||||||
if default_mcps:
|
if default_mcps:
|
||||||
@@ -258,6 +374,9 @@ def create_session(
|
|||||||
if all_networks:
|
if all_networks:
|
||||||
console.print(f"Networks: {', '.join(all_networks)}")
|
console.print(f"Networks: {', '.join(all_networks)}")
|
||||||
|
|
||||||
|
if all_domains:
|
||||||
|
console.print(f"Domain restrictions: {', '.join(all_domains)}")
|
||||||
|
|
||||||
# Show volumes that will be mounted
|
# Show volumes that will be mounted
|
||||||
if volume_mounts:
|
if volume_mounts:
|
||||||
console.print("Volumes:")
|
console.print("Volumes:")
|
||||||
@@ -277,6 +396,11 @@ def create_session(
|
|||||||
"[yellow]Warning: --no-shell is ignored without --run[/yellow]"
|
"[yellow]Warning: --no-shell is ignored without --run[/yellow]"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# Use model from config overrides if not explicitly provided
|
||||||
|
final_model = (
|
||||||
|
model if model is not None else temp_user_config.get("defaults.model")
|
||||||
|
)
|
||||||
|
|
||||||
session = container_manager.create_session(
|
session = container_manager.create_session(
|
||||||
image_name=image_name,
|
image_name=image_name,
|
||||||
project=path_or_url,
|
project=path_or_url,
|
||||||
@@ -286,14 +410,15 @@ def create_session(
|
|||||||
mount_local=mount_local,
|
mount_local=mount_local,
|
||||||
volumes=volume_mounts,
|
volumes=volume_mounts,
|
||||||
networks=all_networks,
|
networks=all_networks,
|
||||||
|
ports=all_ports,
|
||||||
mcp=all_mcps,
|
mcp=all_mcps,
|
||||||
run_command=run_command,
|
run_command=run_command,
|
||||||
no_shell=no_shell,
|
no_shell=no_shell,
|
||||||
uid=target_uid,
|
uid=target_uid,
|
||||||
gid=target_gid,
|
gid=target_gid,
|
||||||
ssh=ssh,
|
ssh=ssh,
|
||||||
model=model,
|
model=final_model,
|
||||||
provider=provider,
|
domains=all_domains,
|
||||||
)
|
)
|
||||||
|
|
||||||
if session:
|
if session:
|
||||||
@@ -307,7 +432,7 @@ def create_session(
|
|||||||
console.print(f" {container_port} -> {host_port}")
|
console.print(f" {container_port} -> {host_port}")
|
||||||
|
|
||||||
# Auto-connect based on user config, unless overridden by --no-connect flag or --no-shell
|
# Auto-connect based on user config, unless overridden by --no-connect flag or --no-shell
|
||||||
auto_connect = user_config.get("defaults.connect", True)
|
auto_connect = temp_user_config.get("defaults.connect", True)
|
||||||
|
|
||||||
# When --no-shell is used with --run, show logs instead of connecting
|
# When --no-shell is used with --run, show logs instead of connecting
|
||||||
if no_shell and run_command:
|
if no_shell and run_command:
|
||||||
@@ -370,6 +495,9 @@ def create_session(
|
|||||||
def close_session(
|
def close_session(
|
||||||
session_id: Optional[str] = typer.Argument(None, help="Session ID to close"),
|
session_id: Optional[str] = typer.Argument(None, help="Session ID to close"),
|
||||||
all_sessions: bool = typer.Option(False, "--all", help="Close all active sessions"),
|
all_sessions: bool = typer.Option(False, "--all", help="Close all active sessions"),
|
||||||
|
kill: bool = typer.Option(
|
||||||
|
False, "--kill", help="Forcefully kill containers instead of graceful stop"
|
||||||
|
),
|
||||||
) -> None:
|
) -> None:
|
||||||
"""Close a Cubbi session or all sessions"""
|
"""Close a Cubbi session or all sessions"""
|
||||||
if all_sessions:
|
if all_sessions:
|
||||||
@@ -393,7 +521,9 @@ def close_session(
|
|||||||
)
|
)
|
||||||
|
|
||||||
# Start closing sessions with progress updates
|
# Start closing sessions with progress updates
|
||||||
count, success = container_manager.close_all_sessions(update_progress)
|
count, success = container_manager.close_all_sessions(
|
||||||
|
update_progress, kill=kill
|
||||||
|
)
|
||||||
|
|
||||||
# Final result
|
# Final result
|
||||||
if success:
|
if success:
|
||||||
@@ -402,7 +532,7 @@ def close_session(
|
|||||||
console.print("[red]Failed to close all sessions[/red]")
|
console.print("[red]Failed to close all sessions[/red]")
|
||||||
elif session_id:
|
elif session_id:
|
||||||
with console.status(f"Closing session {session_id}..."):
|
with console.status(f"Closing session {session_id}..."):
|
||||||
success = container_manager.close_session(session_id)
|
success = container_manager.close_session(session_id, kill=kill)
|
||||||
|
|
||||||
if success:
|
if success:
|
||||||
console.print(f"[green]Session {session_id} closed successfully[/green]")
|
console.print(f"[green]Session {session_id} closed successfully[/green]")
|
||||||
@@ -492,6 +622,9 @@ def build_image(
|
|||||||
push: bool = typer.Option(
|
push: bool = typer.Option(
|
||||||
False, "--push", "-p", help="Push image to registry after building"
|
False, "--push", "-p", help="Push image to registry after building"
|
||||||
),
|
),
|
||||||
|
no_cache: bool = typer.Option(
|
||||||
|
False, "--no-cache", help="Build without using cache"
|
||||||
|
),
|
||||||
) -> None:
|
) -> None:
|
||||||
"""Build an image Docker image"""
|
"""Build an image Docker image"""
|
||||||
# Get image path
|
# Get image path
|
||||||
@@ -556,9 +689,11 @@ def build_image(
|
|||||||
|
|
||||||
# Build the image from temporary directory
|
# Build the image from temporary directory
|
||||||
with console.status(f"Building image {docker_image_name}..."):
|
with console.status(f"Building image {docker_image_name}..."):
|
||||||
result = os.system(
|
build_cmd = f"cd {temp_path} && docker build"
|
||||||
f"cd {temp_path} && docker build -t {docker_image_name} ."
|
if no_cache:
|
||||||
)
|
build_cmd += " --no-cache"
|
||||||
|
build_cmd += f" -t {docker_image_name} ."
|
||||||
|
result = os.system(build_cmd)
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
console.print(f"[red]Error preparing build context: {e}[/red]")
|
console.print(f"[red]Error preparing build context: {e}[/red]")
|
||||||
@@ -624,10 +759,18 @@ config_app.add_typer(network_app, name="network", no_args_is_help=True)
|
|||||||
volume_app = typer.Typer(help="Manage default volumes")
|
volume_app = typer.Typer(help="Manage default volumes")
|
||||||
config_app.add_typer(volume_app, name="volume", no_args_is_help=True)
|
config_app.add_typer(volume_app, name="volume", no_args_is_help=True)
|
||||||
|
|
||||||
|
# Create a port subcommand for config
|
||||||
|
port_app = typer.Typer(help="Manage default ports")
|
||||||
|
config_app.add_typer(port_app, name="port", no_args_is_help=True)
|
||||||
|
|
||||||
# Create an MCP subcommand for config
|
# Create an MCP subcommand for config
|
||||||
config_mcp_app = typer.Typer(help="Manage default MCP servers")
|
config_mcp_app = typer.Typer(help="Manage default MCP servers")
|
||||||
config_app.add_typer(config_mcp_app, name="mcp", no_args_is_help=True)
|
config_app.add_typer(config_mcp_app, name="mcp", no_args_is_help=True)
|
||||||
|
|
||||||
|
# Create a models subcommand for config
|
||||||
|
models_app = typer.Typer(help="Manage provider models")
|
||||||
|
config_app.add_typer(models_app, name="models", no_args_is_help=True)
|
||||||
|
|
||||||
|
|
||||||
# MCP configuration commands
|
# MCP configuration commands
|
||||||
@config_mcp_app.command("list")
|
@config_mcp_app.command("list")
|
||||||
@@ -934,6 +1077,91 @@ def remove_volume(
|
|||||||
console.print(f"[green]Removed volume '{volume_to_remove}' from defaults[/green]")
|
console.print(f"[green]Removed volume '{volume_to_remove}' from defaults[/green]")
|
||||||
|
|
||||||
|
|
||||||
|
# Port configuration commands
|
||||||
|
@port_app.command("list")
|
||||||
|
def list_ports() -> None:
|
||||||
|
"""List all default ports"""
|
||||||
|
ports = user_config.get("defaults.ports", [])
|
||||||
|
|
||||||
|
if not ports:
|
||||||
|
console.print("No default ports configured")
|
||||||
|
return
|
||||||
|
|
||||||
|
table = Table(show_header=True, header_style="bold")
|
||||||
|
table.add_column("Port")
|
||||||
|
|
||||||
|
for port in ports:
|
||||||
|
table.add_row(str(port))
|
||||||
|
|
||||||
|
console.print(table)
|
||||||
|
|
||||||
|
|
||||||
|
@port_app.command("add")
|
||||||
|
def add_port(
|
||||||
|
ports_arg: str = typer.Argument(
|
||||||
|
..., help="Port(s) to add to defaults (e.g., '8000' or '8000,3000,5173')"
|
||||||
|
),
|
||||||
|
) -> None:
|
||||||
|
"""Add port(s) to default ports"""
|
||||||
|
current_ports = user_config.get("defaults.ports", [])
|
||||||
|
|
||||||
|
# Parse ports (support comma-separated)
|
||||||
|
try:
|
||||||
|
if "," in ports_arg:
|
||||||
|
new_ports = [int(p.strip()) for p in ports_arg.split(",")]
|
||||||
|
else:
|
||||||
|
new_ports = [int(ports_arg)]
|
||||||
|
except ValueError:
|
||||||
|
console.print(
|
||||||
|
"[red]Error: Invalid port format. Use integers only (e.g., '8000' or '8000,3000')[/red]"
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
# Validate port ranges
|
||||||
|
invalid_ports = [p for p in new_ports if not (1 <= p <= 65535)]
|
||||||
|
if invalid_ports:
|
||||||
|
console.print(
|
||||||
|
f"[red]Error: Invalid ports {invalid_ports}. Ports must be between 1 and 65535[/red]"
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
# Add new ports, avoiding duplicates
|
||||||
|
added_ports = []
|
||||||
|
for port in new_ports:
|
||||||
|
if port not in current_ports:
|
||||||
|
current_ports.append(port)
|
||||||
|
added_ports.append(port)
|
||||||
|
|
||||||
|
if not added_ports:
|
||||||
|
if len(new_ports) == 1:
|
||||||
|
console.print(f"Port {new_ports[0]} is already in defaults")
|
||||||
|
else:
|
||||||
|
console.print(f"All ports {new_ports} are already in defaults")
|
||||||
|
return
|
||||||
|
|
||||||
|
user_config.set("defaults.ports", current_ports)
|
||||||
|
if len(added_ports) == 1:
|
||||||
|
console.print(f"[green]Added port {added_ports[0]} to defaults[/green]")
|
||||||
|
else:
|
||||||
|
console.print(f"[green]Added ports {added_ports} to defaults[/green]")
|
||||||
|
|
||||||
|
|
||||||
|
@port_app.command("remove")
|
||||||
|
def remove_port(
|
||||||
|
port: int = typer.Argument(..., help="Port to remove from defaults"),
|
||||||
|
) -> None:
|
||||||
|
"""Remove a port from default ports"""
|
||||||
|
ports = user_config.get("defaults.ports", [])
|
||||||
|
|
||||||
|
if port not in ports:
|
||||||
|
console.print(f"Port {port} is not in defaults")
|
||||||
|
return
|
||||||
|
|
||||||
|
ports.remove(port)
|
||||||
|
user_config.set("defaults.ports", ports)
|
||||||
|
console.print(f"[green]Removed port {port} from defaults[/green]")
|
||||||
|
|
||||||
|
|
||||||
# MCP Management Commands
|
# MCP Management Commands
|
||||||
|
|
||||||
|
|
||||||
@@ -1506,6 +1734,11 @@ def add_mcp(
|
|||||||
def add_remote_mcp(
|
def add_remote_mcp(
|
||||||
name: str = typer.Argument(..., help="MCP server name"),
|
name: str = typer.Argument(..., help="MCP server name"),
|
||||||
url: str = typer.Argument(..., help="URL of the remote MCP server"),
|
url: str = typer.Argument(..., help="URL of the remote MCP server"),
|
||||||
|
mcp_type: str = typer.Option(
|
||||||
|
"auto",
|
||||||
|
"--mcp-type",
|
||||||
|
help="MCP connection type: sse, streamable_http, stdio, or auto (default: auto)",
|
||||||
|
),
|
||||||
header: List[str] = typer.Option(
|
header: List[str] = typer.Option(
|
||||||
[], "--header", "-H", help="HTTP headers (format: KEY=VALUE)"
|
[], "--header", "-H", help="HTTP headers (format: KEY=VALUE)"
|
||||||
),
|
),
|
||||||
@@ -1514,6 +1747,22 @@ def add_remote_mcp(
     ),
 ) -> None:
     """Add a remote MCP server"""
+    if mcp_type == "auto":
+        if url.endswith("/sse"):
+            mcp_type = "sse"
+        elif url.endswith("/mcp"):
+            mcp_type = "streamable_http"
+        else:
+            console.print(
+                f"[red]Cannot auto-detect MCP type from URL '{url}'. Please specify --mcp-type (sse, streamable_http, or stdio)[/red]"
+            )
+            return
+    elif mcp_type not in ["sse", "streamable_http", "stdio"]:
+        console.print(
+            f"[red]Invalid MCP type '{mcp_type}'. Must be: sse, streamable_http, stdio, or auto[/red]"
+        )
+        return
+
     # Parse headers
     headers = {}
     for h in header:
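A standalone sketch of the suffix-based auto-detection added in this hunk; the helper name `detect_mcp_type` is hypothetical (the real code inlines this logic in `add_remote_mcp`):

```python
def detect_mcp_type(url: str) -> str:
    # Mirrors the CLI's "auto" mode: a URL ending in /sse is treated as an
    # SSE endpoint, /mcp as streamable HTTP; anything else cannot be detected.
    if url.endswith("/sse"):
        return "sse"
    if url.endswith("/mcp"):
        return "streamable_http"
    raise ValueError(f"Cannot auto-detect MCP type from URL '{url}'")


print(detect_mcp_type("https://example.com/sse"))  # sse
print(detect_mcp_type("https://example.com/mcp"))  # streamable_http
```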
@@ -1528,7 +1777,7 @@ def add_remote_mcp(
     try:
         with console.status(f"Adding remote MCP server '{name}'..."):
             mcp_manager.add_remote_mcp(
-                name, url, headers, add_as_default=not no_default
+                name, url, headers, mcp_type=mcp_type, add_as_default=not no_default
             )
 
         console.print(f"[green]Added remote MCP server '{name}'[/green]")
@@ -1542,6 +1791,50 @@ def add_remote_mcp(
         console.print(f"[red]Error adding remote MCP server: {e}[/red]")
 
 
+@mcp_app.command("add-local")
+def add_local_mcp(
+    name: str = typer.Argument(..., help="MCP server name"),
+    command: str = typer.Argument(..., help="Path to executable"),
+    args: List[str] = typer.Option([], "--args", "-a", help="Command arguments"),
+    env: List[str] = typer.Option(
+        [], "--env", "-e", help="Environment variables (format: KEY=VALUE)"
+    ),
+    no_default: bool = typer.Option(
+        False, "--no-default", help="Don't add to default MCPs"
+    ),
+) -> None:
+    """Add a local MCP server"""
+    # Parse environment variables
+    environment = {}
+    for e in env:
+        if "=" in e:
+            key, value = e.split("=", 1)
+            environment[key] = value
+        else:
+            console.print(f"[yellow]Warning: Ignoring invalid env format: {e}[/yellow]")
+
+    try:
+        with console.status(f"Adding local MCP server '{name}'..."):
+            mcp_manager.add_local_mcp(
+                name,
+                command,
+                args,
+                environment,
+                add_as_default=not no_default,
+            )
+        console.print(f"[green]Added local MCP server '{name}'[/green]")
+        console.print(f"Command: {command}")
+        if args:
+            console.print(f"Arguments: {' '.join(args)}")
+        if not no_default:
+            console.print(f"MCP server '{name}' added to defaults")
+        else:
+            console.print(f"MCP server '{name}' not added to defaults")
+
+    except Exception as e:
+        console.print(f"[red]Error adding local MCP server: {e}[/red]")
+
+
 @mcp_app.command("inspector")
 def run_mcp_inspector(
     client_port: int = typer.Option(
@@ -1991,6 +2284,139 @@ exec npm start
     console.print("[green]MCP Inspector stopped[/green]")
 
 
+# Model management commands
+@models_app.command("list")
+def list_models(
+    provider: Optional[str] = typer.Argument(None, help="Provider name (optional)"),
+) -> None:
+    if provider:
+        # List models for specific provider
+        models = user_config.list_provider_models(provider)
+
+        if not models:
+            if not user_config.get_provider(provider):
+                console.print(f"[red]Provider '{provider}' not found[/red]")
+            else:
+                console.print(f"No models configured for provider '{provider}'")
+            return
+
+        table = Table(show_header=True, header_style="bold")
+        table.add_column("Model ID")
+
+        for model in models:
+            table.add_row(model["id"])
+
+        console.print(f"\n[bold]Models for provider '{provider}'[/bold]")
+        console.print(table)
+    else:
+        # List models for all providers
+        providers = user_config.list_providers()
+
+        if not providers:
+            console.print("No providers configured")
+            return
+
+        table = Table(show_header=True, header_style="bold")
+        table.add_column("Provider")
+        table.add_column("Model ID")
+
+        found_models = False
+        for provider_name in providers.keys():
+            models = user_config.list_provider_models(provider_name)
+            for model in models:
+                table.add_row(provider_name, model["id"])
+                found_models = True
+
+        if found_models:
+            console.print(table)
+        else:
+            console.print("No models configured for any provider")
+
+
+@models_app.command("refresh")
+def refresh_models(
+    provider: Optional[str] = typer.Argument(None, help="Provider name (optional)"),
+) -> None:
+    from .model_fetcher import fetch_provider_models
+
+    if provider:
+        # Refresh models for specific provider
+        provider_config = user_config.get_provider(provider)
+        if not provider_config:
+            console.print(f"[red]Provider '{provider}' not found[/red]")
+            return
+
+        if not user_config.supports_model_fetching(provider):
+            console.print(
+                f"[red]Provider '{provider}' does not support model fetching[/red]"
+            )
+            console.print(
+                "Only providers of supported types (openai, anthropic, google, openrouter) can refresh models"
+            )
+            return
+
+        console.print(f"Refreshing models for provider '{provider}'...")
+
+        try:
+            with console.status(f"Fetching models from {provider}..."):
+                models = fetch_provider_models(provider_config)
+
+            user_config.set_provider_models(provider, models)
+            console.print(
+                f"[green]Successfully refreshed {len(models)} models for '{provider}'[/green]"
+            )
+
+            # Show some examples
+            if models:
+                console.print("\nSample models:")
+                for model in models[:5]:  # Show first 5
+                    console.print(f"  - {model['id']}")
+                if len(models) > 5:
+                    console.print(f"  ... and {len(models) - 5} more")
+
+        except Exception as e:
+            console.print(f"[red]Failed to refresh models for '{provider}': {e}[/red]")
+    else:
+        # Refresh models for all model-fetchable providers
+        fetchable_providers = user_config.list_model_fetchable_providers()
+
+        if not fetchable_providers:
+            console.print(
+                "[yellow]No providers with model fetching support found[/yellow]"
+            )
+            console.print(
+                "Add providers of supported types (openai, anthropic, google, openrouter) to refresh models"
+            )
+            return
+
+        console.print(f"Refreshing models for {len(fetchable_providers)} providers...")
+
+        success_count = 0
+        failed_providers = []
+
+        for provider_name in fetchable_providers:
+            try:
+                provider_config = user_config.get_provider(provider_name)
+                with console.status(f"Fetching models from {provider_name}..."):
+                    models = fetch_provider_models(provider_config)
+
+                user_config.set_provider_models(provider_name, models)
+                console.print(f"[green]✓ {provider_name}: {len(models)} models[/green]")
+                success_count += 1
+
+            except Exception as e:
+                console.print(f"[red]✗ {provider_name}: {e}[/red]")
+                failed_providers.append(provider_name)
+
+        # Summary
+        console.print("\n[bold]Summary[/bold]")
+        console.print(f"Successfully refreshed: {success_count} providers")
+        if failed_providers:
+            console.print(
+                f"Failed: {len(failed_providers)} providers ({', '.join(failed_providers)})"
+            )
+
+
 def session_create_entry_point():
     """Entry point that directly invokes 'cubbi session create'.
@@ -14,6 +14,14 @@ BUILTIN_IMAGES_DIR = Path(__file__).parent / "images"
 # Dynamically loaded from images directory at runtime
 DEFAULT_IMAGES = {}
 
+# Default API URLs for standard providers
+PROVIDER_DEFAULT_URLS = {
+    "openai": "https://api.openai.com",
+    "anthropic": "https://api.anthropic.com",
+    "google": "https://generativelanguage.googleapis.com",
+    "openrouter": "https://openrouter.ai/api",
+}
+
 
 class ConfigManager:
     def __init__(self, config_path: Optional[Path] = None):
@@ -64,6 +72,7 @@ class ConfigManager:
             },
             defaults={
                 "image": "goose",
+                "domains": [],
             },
         )
 
cubbi/configure.py (new file, 1125 lines)
File diff suppressed because it is too large
@@ -4,15 +4,18 @@ import logging
 import os
 import pathlib
 import sys
+import tempfile
 import uuid
+from pathlib import Path
 from typing import Dict, List, Optional, Tuple
 
 import docker
+import yaml
 from docker.errors import DockerException, ImageNotFound
 
 from .config import ConfigManager
 from .mcp import MCPManager
-from .models import Session, SessionStatus
+from .models import Image, Session, SessionStatus
 from .session import SessionManager
 from .user_config import UserConfigManager
 
@@ -85,6 +88,89 @@ class ContainerManager:
         # This ensures we don't mount the /cubbi-config volume for project-less sessions
         return None
 
+    def _generate_container_config(
+        self,
+        image_name: str,
+        project_url: Optional[str] = None,
+        uid: Optional[int] = None,
+        gid: Optional[int] = None,
+        model: Optional[str] = None,
+        ssh: bool = False,
+        run_command: Optional[str] = None,
+        no_shell: bool = False,
+        mcp_list: Optional[List[str]] = None,
+        persistent_links: Optional[List[Dict[str, str]]] = None,
+    ) -> Path:
+        """Generate container configuration YAML file"""
+
+        providers = {}
+        for name, provider in self.user_config_manager.list_providers().items():
+            api_key = provider.get("api_key", "")
+            if api_key.startswith("${") and api_key.endswith("}"):
+                env_var = api_key[2:-1]
+                api_key = os.environ.get(env_var, "")
+
+            provider_config = {
+                "type": provider.get("type"),
+                "api_key": api_key,
+            }
+            if provider.get("base_url"):
+                provider_config["base_url"] = provider.get("base_url")
+            if provider.get("models"):
+                provider_config["models"] = provider.get("models")
+
+            providers[name] = provider_config
+
+        mcps = []
+        if mcp_list:
+            for mcp_name in mcp_list:
+                mcp_config = self.mcp_manager.get_mcp(mcp_name)
+                if mcp_config:
+                    mcps.append(mcp_config)
+
+        config = {
+            "version": "1.0",
+            "user": {"uid": uid or 1000, "gid": gid or 1000},
+            "providers": providers,
+            "mcps": mcps,
+            "project": {
+                "config_dir": "/cubbi-config",
+                "image_config_dir": f"/cubbi-config/{image_name}",
+            },
+            "ssh": {"enabled": ssh},
+        }
+
+        if project_url:
+            config["project"]["url"] = project_url
+
+        if persistent_links:
+            config["persistent_links"] = persistent_links
+
+        if model:
+            config["defaults"] = {"model": model}
+
+        if run_command:
+            config["run_command"] = run_command
+
+        config["no_shell"] = no_shell
+
+        config_file = Path(tempfile.mkdtemp()) / "config.yaml"
+        with open(config_file, "w") as f:
+            yaml.dump(config, f)
+
+        # Set restrictive permissions (0o600 = read/write for owner only)
+        config_file.chmod(0o600)
+
+        # Set ownership to cubbi user if uid/gid are provided
+        if uid is not None and gid is not None:
+            try:
+                os.chown(config_file, uid, gid)
+            except (OSError, PermissionError):
+                # If we can't chown (e.g., running as non-root), just log and continue
+                logger.warning(f"Could not set ownership of config file to {uid}:{gid}")
+
+        return config_file
+
     def list_sessions(self) -> List[Session]:
         """List all active Cubbi sessions"""
         sessions = []
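The new `_generate_container_config` replaces the old pile of `CUBBI_*` environment variables with a single mounted YAML file. A minimal sketch of the resulting document structure, as a plain dict with no PyYAML dependency (field names taken from the diff; the helper name `build_container_config` is illustrative):

```python
def build_container_config(image_name, uid=None, gid=None, ssh=False, model=None):
    # Same top-level schema the method serializes to /cubbi/config.yaml.
    config = {
        "version": "1.0",
        "user": {"uid": uid or 1000, "gid": gid or 1000},
        "providers": {},
        "mcps": [],
        "project": {
            "config_dir": "/cubbi-config",
            "image_config_dir": f"/cubbi-config/{image_name}",
        },
        "ssh": {"enabled": ssh},
    }
    if model:
        # "defaults" is only emitted when a model was selected
        config["defaults"] = {"model": model}
    return config


cfg = build_container_config("goose", model="anthropic/claude-3-5-sonnet")
print(cfg["project"]["image_config_dir"])  # /cubbi-config/goose
```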
@@ -107,12 +193,21 @@ class ContainerManager:
                 elif container.status == "created":
                     status = SessionStatus.CREATING
 
+                # Get MCP list from container labels
+                mcps_str = labels.get("cubbi.mcps", "")
+                mcps = (
+                    [mcp.strip() for mcp in mcps_str.split(",") if mcp.strip()]
+                    if mcps_str
+                    else []
+                )
+
                 session = Session(
                     id=session_id,
                     name=labels.get("cubbi.session.name", f"cubbi-{session_id}"),
                     image=labels.get("cubbi.image", "unknown"),
                     status=status,
                     container_id=container_id,
+                    mcps=mcps,
                 )
 
                 # Get port mappings
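Sessions now record their attached MCP servers in a `cubbi.mcps` container label; the parsing shown in the hunk above boils down to this standalone sketch:

```python
def parse_mcp_label(mcps_str: str) -> list:
    # Comma-separated label value -> list of names, dropping empty entries.
    return [m.strip() for m in mcps_str.split(",") if m.strip()] if mcps_str else []


print(parse_mcp_label("context7, github,,  "))  # ['context7', 'github']
print(parse_mcp_label(""))  # []
```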
@@ -145,14 +240,15 @@ class ContainerManager:
         mount_local: bool = False,
         volumes: Optional[Dict[str, Dict[str, str]]] = None,
         networks: Optional[List[str]] = None,
+        ports: Optional[List[int]] = None,
         mcp: Optional[List[str]] = None,
         run_command: Optional[str] = None,
         no_shell: bool = False,
         uid: Optional[int] = None,
         gid: Optional[int] = None,
         model: Optional[str] = None,
-        provider: Optional[str] = None,
         ssh: bool = False,
+        domains: Optional[List[str]] = None,
     ) -> Optional[Session]:
         """Create a new Cubbi session
 
@@ -170,16 +266,29 @@ class ContainerManager:
             mcp: Optional list of MCP server names to attach to the session
             uid: Optional user ID for the container process
             gid: Optional group ID for the container process
-            model: Optional model to use
-            provider: Optional provider to use
+            model: Optional model specification in 'provider/model' format (e.g., 'anthropic/claude-3-5-sonnet')
+                Legacy separate model and provider parameters are also supported for backward compatibility
             ssh: Whether to start the SSH server in the container (default: False)
+            domains: Optional list of domains to restrict network access to (uses network-filter)
         """
         try:
-            # Validate image exists
+            # Try to get image from config first
            image = self.config_manager.get_image(image_name)
            if not image:
-                print(f"Image '{image_name}' not found")
-                return None
+                # If not found in config, treat it as a Docker image name
+                print(
+                    f"Image '{image_name}' not found in Cubbi config, using as Docker image..."
+                )
+                image = Image(
+                    name=image_name,
+                    description=f"Docker image: {image_name}",
+                    version="latest",
+                    maintainer="unknown",
+                    image=image_name,
+                    ports=[],
+                    volumes=[],
+                    persistent_configs=[],
+                )
 
             # Generate session ID and name
             session_id = self._generate_session_id()
@@ -189,29 +298,22 @@ class ContainerManager:
             # Ensure network exists
             self._ensure_network()
 
-            # Prepare environment variables
+            # Minimal environment variables
             env_vars = environment or {}
+            env_vars["CUBBI_CONFIG_FILE"] = "/cubbi/config.yaml"
 
-            # Add CUBBI_USER_ID and CUBBI_GROUP_ID for entrypoint script
-            env_vars["CUBBI_USER_ID"] = str(uid) if uid is not None else "1000"
-            env_vars["CUBBI_GROUP_ID"] = str(gid) if gid is not None else "1000"
-
-            # Set SSH environment variable
-            env_vars["CUBBI_SSH_ENABLED"] = "true" if ssh else "false"
-
-            # Pass API keys from host environment to container for local development
-            api_keys = [
-                "OPENAI_API_KEY",
-                "ANTHROPIC_API_KEY",
-                "OPENROUTER_API_KEY",
-                "GOOGLE_API_KEY",
-                "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
-                "LANGFUSE_INIT_PROJECT_SECRET_KEY",
-                "LANGFUSE_URL",
-            ]
-            for key in api_keys:
-                if key in os.environ and key not in env_vars:
-                    env_vars[key] = os.environ[key]
+            # Forward specified environment variables from the host to the container
+            if (
+                hasattr(image, "environments_to_forward")
+                and image.environments_to_forward
+            ):
+                for env_name in image.environments_to_forward:
+                    env_value = os.environ.get(env_name)
+                    if env_value is not None:
+                        env_vars[env_name] = env_value
+                        print(
+                            f"Forwarding environment variable {env_name} to container"
+                        )
 
             # Pull image if needed
             try:
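The blanket API-key list is gone; images now declare `environments_to_forward`, and only those variables are copied from the host. A self-contained sketch of that rule (the helper name `forward_envs` is illustrative):

```python
import os


def forward_envs(names, base=None):
    # Copy only the declared variables that are actually set on the host;
    # unset names are silently skipped, mirroring the diff above.
    env_vars = dict(base or {})
    for name in names:
        value = os.environ.get(name)
        if value is not None:
            env_vars[name] = value
    return env_vars


os.environ["CUBBI_DEMO_TOKEN"] = "abc"
print(forward_envs(["CUBBI_DEMO_TOKEN", "CUBBI_UNSET_VAR"]))
```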
@@ -267,6 +369,7 @@ class ContainerManager:
                     print(f"Mounting volume: {host_path} -> {container_path}")
 
             # Set up persistent project configuration if project_name is provided
+            persistent_links = []
             project_config_path = self._get_project_config_path(project, project_name)
             if project_config_path:
                 print(f"Using project configuration directory: {project_config_path}")
@@ -277,13 +380,8 @@ class ContainerManager:
                     "mode": "rw",
                 }
 
-                # Add environment variables for config path
-                env_vars["CUBBI_CONFIG_DIR"] = "/cubbi-config"
-                env_vars["CUBBI_IMAGE_CONFIG_DIR"] = f"/cubbi-config/{image_name}"
-
-                # Create image-specific config directories and set up direct volume mounts
+                # Create image-specific config directories and collect persistent links
                 if image.persistent_configs:
-                    persistent_links_data = []  # To store "source:target" pairs for symlinks
                     print("Setting up persistent configuration directories:")
                     for config in image.persistent_configs:
                         # Get target directory path on host
@@ -300,24 +398,19 @@ class ContainerManager:
                         # For files, make sure parent directory exists
                         elif config.type == "file":
                             target_dir.parent.mkdir(parents=True, exist_ok=True)
-                            # File will be created by the container if needed
 
-                        # Store the source and target paths for the init script
-                        # Note: config.target is the path *within* /cubbi-config
-                        persistent_links_data.append(f"{config.source}:{config.target}")
+                        # Store persistent link data for config file
+                        persistent_links.append(
+                            {
+                                "source": config.source,
+                                "target": config.target,
+                                "type": config.type,
+                            }
+                        )
 
                         print(
                             f" - Prepared host path {target_dir} for symlink target {config.target}"
                         )
-
-                # Set up persistent links
-                if persistent_links_data:
-                    env_vars["CUBBI_PERSISTENT_LINKS"] = ";".join(
-                        persistent_links_data
-                    )
-                    print(
-                        f"Setting CUBBI_PERSISTENT_LINKS={env_vars['CUBBI_PERSISTENT_LINKS']}"
-                    )
             else:
                 print(
                     "No project_name provided - skipping configuration directory setup."
@@ -367,43 +460,6 @@ class ContainerManager:
                         # Get MCP status to extract endpoint information
                         mcp_status = self.mcp_manager.get_mcp_status(mcp_name)
 
-                        # Add MCP environment variables with index
-                        idx = len(mcp_names) - 1  # 0-based index for the current MCP
-
-                        if mcp_config.get("type") == "remote":
-                            # For remote MCP, set the URL and headers
-                            env_vars[f"MCP_{idx}_URL"] = mcp_config.get("url")
-                            if mcp_config.get("headers"):
-                                # Serialize headers as JSON
-                                import json
-
-                                env_vars[f"MCP_{idx}_HEADERS"] = json.dumps(
-                                    mcp_config.get("headers")
-                                )
-                        else:
-                            # For Docker/proxy MCP, set the connection details
-                            # Use both the container name and the short name for internal Docker DNS resolution
-                            container_name = self.mcp_manager.get_mcp_container_name(
-                                mcp_name
-                            )
-                            # Use the short name (mcp_name) as the primary hostname
-                            env_vars[f"MCP_{idx}_HOST"] = mcp_name
-                            # Default port is 8080 unless specified in status
-                            port = next(
-                                iter(mcp_status.get("ports", {}).values()), 8080
-                            )
-                            env_vars[f"MCP_{idx}_PORT"] = str(port)
-                            # Use the short name in the URL to take advantage of the network alias
-                            env_vars[f"MCP_{idx}_URL"] = f"http://{mcp_name}:{port}/sse"
-                            # For backward compatibility, also set the full container name URL
-                            env_vars[f"MCP_{idx}_CONTAINER_URL"] = (
-                                f"http://{container_name}:{port}/sse"
-                            )
-
-                        # Set type-specific information
-                        env_vars[f"MCP_{idx}_TYPE"] = mcp_config.get("type")
-                        env_vars[f"MCP_{idx}_NAME"] = mcp_name
 
                     except Exception as e:
                         print(f"Warning: Failed to start MCP server '{mcp_name}': {e}")
                         # Get the container name before trying to remove it from the list
@@ -418,30 +474,8 @@ class ContainerManager:
                             pass
 
                 elif mcp_config.get("type") == "remote":
-                    # For remote MCP, just set environment variables
-                    idx = len(mcp_names) - 1  # 0-based index for the current MCP
-
-                    env_vars[f"MCP_{idx}_URL"] = mcp_config.get("url")
-                    if mcp_config.get("headers"):
-                        # Serialize headers as JSON
-                        import json
-
-                        env_vars[f"MCP_{idx}_HEADERS"] = json.dumps(
-                            mcp_config.get("headers")
-                        )
-
-                    # Set type-specific information
-                    env_vars[f"MCP_{idx}_TYPE"] = "remote"
-                    env_vars[f"MCP_{idx}_NAME"] = mcp_name
-
-            # Set environment variables for MCP count if we have any
-            if mcp_names:
-                env_vars["MCP_COUNT"] = str(len(mcp_names))
-                env_vars["MCP_ENABLED"] = "true"
-                # Serialize all MCP names as JSON
-                import json
-
-                env_vars["MCP_NAMES"] = json.dumps(mcp_names)
+                    # Remote MCP - nothing to do here, config will handle it
+                    pass
 
             # Add user-specified networks
             # Default Cubbi network
@@ -472,50 +506,134 @@ class ContainerManager:
|
|||||||
target_shell = "/bin/bash"
|
target_shell = "/bin/bash"
|
||||||
|
|
||||||
if run_command:
|
if run_command:
|
||||||
# Set environment variable for cubbi-init.sh to pick up
|
|
||||||
env_vars["CUBBI_RUN_COMMAND"] = run_command
|
|
||||||
|
|
||||||
# If no_shell is true, set CUBBI_NO_SHELL environment variable
|
|
||||||
if no_shell:
|
|
||||||
env_vars["CUBBI_NO_SHELL"] = "true"
|
|
||||||
logger.info(
|
|
||||||
"Setting CUBBI_NO_SHELL=true, container will exit after run command"
|
|
||||||
)
|
|
||||||
|
|
||||||
# Set the container's command to be the final shell (or exit if no_shell is true)
|
# Set the container's command to be the final shell (or exit if no_shell is true)
|
||||||
container_command = [target_shell]
|
container_command = [target_shell]
|
||||||
logger.info(
|
logger.info(f"Using run command with shell {target_shell}")
|
||||||
f"Setting CUBBI_RUN_COMMAND and targeting shell {target_shell}"
|
if no_shell:
|
||||||
)
|
logger.info("Container will exit after run command")
|
||||||
else:
|
else:
|
||||||
# Use default behavior (often defined by image's ENTRYPOINT/CMD)
|
# Use default behavior (often defined by image's ENTRYPOINT/CMD)
|
||||||
# Set the container's command to be the final shell if none specified by Dockerfile CMD
|
|
||||||
# Note: Dockerfile CMD is ["tail", "-f", "/dev/null"], so this might need adjustment
|
|
||||||
# if we want interactive shell by default without --run. Let's default to bash for now.
|
|
||||||
container_command = [target_shell]
|
container_command = [target_shell]
|
||||||
logger.info(
|
logger.info(
|
||||||
"Using default container entrypoint/command for interactive shell."
|
"Using default container entrypoint/command for interactive shell."
|
||||||
)
|
)
|
||||||
|
|
||||||
# Set default model/provider from user config if not explicitly provided
|
# Handle network-filter if domains are specified
|
||||||
env_vars["CUBBI_MODEL"] = model or self.user_config_manager.get(
|
network_filter_container = None
|
||||||
"defaults.model", ""
|
network_mode = None
|
||||||
)
|
|
||||||
env_vars["CUBBI_PROVIDER"] = provider or self.user_config_manager.get(
|
if domains:
|
||||||
"defaults.provider", ""
|
# Check for conflicts
|
||||||
|
if networks:
|
||||||
|
print(
|
||||||
|
"[yellow]Warning: Cannot use --domains with --network. Using domain restrictions only.[/yellow]"
|
||||||
|
)
|
||||||
|
networks = []
|
||||||
|
network_list = [default_network]
|
||||||
|
|
||||||
|
# Create network-filter container
|
||||||
|
network_filter_name = f"cubbi-network-filter-{session_id}"
|
||||||
|
|
||||||
|
# Pull network-filter image if needed
|
||||||
|
network_filter_image = "monadicalsas/network-filter:latest"
|
||||||
|
try:
|
||||||
|
self.client.images.get(network_filter_image)
|
||||||
|
except ImageNotFound:
|
||||||
|
+            print(f"Pulling network-filter image {network_filter_image}...")
+            self.client.images.pull(network_filter_image)
+
+            # Create and start network-filter container
+            print("Creating network-filter container for domain restrictions...")
+            try:
+                # First check if a network-filter container already exists with this name
+                try:
+                    existing = self.client.containers.get(network_filter_name)
+                    print(
+                        f"Removing existing network-filter container {network_filter_name}"
+                    )
+                    existing.stop()
+                    existing.remove()
+                except DockerException:
+                    pass  # Container doesn't exist, which is fine
+
+                network_filter_container = self.client.containers.run(
+                    image=network_filter_image,
+                    name=network_filter_name,
+                    hostname=network_filter_name,
+                    detach=True,
+                    environment={"ALLOWED_DOMAINS": ",".join(domains)},
+                    labels={
+                        "cubbi.network-filter": "true",
+                        "cubbi.session.id": session_id,
+                        "cubbi.session.name": session_name,
+                    },
+                    cap_add=["NET_ADMIN"],  # Required for iptables
+                    remove=False,  # Don't auto-remove on stop
+                )
+
+                # Wait for container to be running
+                import time
+
+                for i in range(10):  # Wait up to 10 seconds
+                    network_filter_container.reload()
+                    if network_filter_container.status == "running":
+                        break
+                    time.sleep(1)
+                else:
+                    raise Exception(
+                        f"Network-filter container failed to start. Status: {network_filter_container.status}"
+                    )
+
+                # Use container ID instead of name for network_mode
+                network_mode = f"container:{network_filter_container.id}"
+                print(
+                    f"Network restrictions enabled for domains: {', '.join(domains)}"
+                )
+                print(f"Using network mode: {network_mode}")
+
+            except Exception as e:
+                print(f"[red]Error creating network-filter container: {e}[/red]")
+                raise
+
+            # Warn about MCP limitations when using network-filter
+            if mcp_names:
+                print(
+                    "[yellow]Warning: MCP servers may not be accessible when using domain restrictions.[/yellow]"
+                )
+
+        # Generate configuration file
+        project_url = project if is_git_repo else None
+        config_file_path = self._generate_container_config(
+            image_name=image_name,
+            project_url=project_url,
+            uid=uid,
+            gid=gid,
+            model=model,
+            ssh=ssh,
+            run_command=run_command,
+            no_shell=no_shell,
+            mcp_list=mcp_names,
+            persistent_links=persistent_links
+            if "persistent_links" in locals()
+            else None,
+        )
+
+        # Mount config file
+        session_volumes[str(config_file_path)] = {
+            "bind": "/cubbi/config.yaml",
+            "mode": "ro",
+        }
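The start-up wait above relies on Python's `for ... else`: the `else` clause runs only when the loop exhausts all attempts without hitting `break`. A minimal standalone sketch of the same pattern (the `poll` callable is a stand-in for `container.reload()` plus a status check, not part of the diff):

```python
def wait_until_running(poll, attempts=10):
    """Poll until the status is "running"; raise if it never happens."""
    for _ in range(attempts):
        if poll() == "running":
            break
        # A real implementation would time.sleep(1) between polls.
    else:
        # Only reached when the loop completed without `break`.
        raise RuntimeError("container failed to start")


# Simulated container that reaches "running" on the third poll.
states = iter(["created", "starting", "running"])
wait_until_running(lambda: next(states))
print("started")  # started
```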
 
         # Create container
-        container = self.client.containers.create(
-            image=image.image,
-            name=session_name,
-            hostname=session_name,
-            detach=True,
-            tty=True,
-            stdin_open=True,
-            environment=env_vars,
-            volumes=session_volumes,
-            labels={
+        container_params = {
+            "image": image.image,
+            "name": session_name,
+            "detach": True,
+            "tty": True,
+            "stdin_open": True,
+            "environment": env_vars,
+            "volumes": session_volumes,
+            "labels": {
                 "cubbi.session": "true",
                 "cubbi.session.id": session_id,
                 "cubbi.session.name": session_name,
@@ -524,17 +642,32 @@ class ContainerManager:
                 "cubbi.project_name": project_name or "",
                 "cubbi.mcps": ",".join(mcp_names) if mcp_names else "",
             },
-            network=network_list[0],  # Connect to the first network initially
-            command=container_command,  # Set the command
-            entrypoint=entrypoint,  # Set the entrypoint (might be None)
-            ports={f"{port}/tcp": None for port in image.ports},
-        )
+            "command": container_command,  # Set the command
+            "entrypoint": entrypoint,  # Set the entrypoint (might be None)
+        }
+
+        # Add port forwarding if ports are specified
+        if ports:
+            container_params["ports"] = {f"{port}/tcp": None for port in ports}
+
+        # Use network_mode if domains are specified, otherwise use regular network
+        if network_mode:
+            container_params["network_mode"] = network_mode
+            # Cannot set hostname when using network_mode
+        else:
+            container_params["hostname"] = session_name
+            container_params["network"] = network_list[
+                0
+            ]  # Connect to the first network initially
+
+        container = self.client.containers.create(**container_params)
 
         # Start container
         container.start()
 
         # Connect to additional networks (after the first one in network_list)
-        if len(network_list) > 1:
+        # Note: Cannot connect to networks when using network_mode
+        if len(network_list) > 1 and not network_mode:
             for network_name in network_list[1:]:
                 try:
                     # Get or create the network
@@ -555,32 +688,35 @@ class ContainerManager:
                     container.reload()
 
         # Connect directly to each MCP's dedicated network
-        for mcp_name in mcp_names:
-            try:
-                # Get the dedicated network for this MCP
-                dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
-
-                network = self.client.networks.get(dedicated_network_name)
-
-                # Connect the session container to the MCP's dedicated network
-                network.connect(container, aliases=[session_name])
-                print(
-                    f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
-                )
-            except DockerException:
-                # print(
-                #     f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
-                # )
-                # commented out, may be accessible through another attached network, it's
-                # not mandatory here.
-                pass
-
-            except Exception as e:
-                print(f"Error connecting session to MCP '{mcp_name}': {e}")
+        # Note: Cannot connect to networks when using network_mode
+        if not network_mode:
+            for mcp_name in mcp_names:
+                # Get the dedicated network for this MCP
+                dedicated_network_name = f"cubbi-mcp-{mcp_name}-network"
+
+                try:
+                    network = self.client.networks.get(dedicated_network_name)
+
+                    # Connect the session container to the MCP's dedicated network
+                    network.connect(container, aliases=[session_name])
+                    print(
+                        f"Connected session to MCP '{mcp_name}' via dedicated network: {dedicated_network_name}"
+                    )
+                except DockerException:
+                    # print(
+                    #     f"Error connecting to MCP dedicated network '{dedicated_network_name}': {e}"
+                    # )
+                    # commented out, may be accessible through another attached network, it's
+                    # not mandatory here.
+                    pass
+
+                except Exception as e:
+                    print(f"Error connecting session to MCP '{mcp_name}': {e}")
 
         # Connect to additional user-specified networks
-        if networks:
+        # Note: Cannot connect to networks when using network_mode
+        if networks and not network_mode:
             for network_name in networks:
                 # Check if already connected to this network
                 # NetworkSettings.Networks contains a dict where keys are network names
@@ -639,15 +775,29 @@ class ContainerManager:
 
         except DockerException as e:
             print(f"Error creating session: {e}")
+
+            # Clean up network-filter container if it was created
+            if network_filter_container:
+                try:
+                    network_filter_container.stop()
+                    network_filter_container.remove()
+                except Exception:
+                    pass
+
             return None
 
-    def close_session(self, session_id: str) -> bool:
-        """Close a Cubbi session"""
+    def close_session(self, session_id: str, kill: bool = False) -> bool:
+        """Close a Cubbi session
+
+        Args:
+            session_id: The ID of the session to close
+            kill: If True, forcefully kill the container instead of graceful stop
+        """
         try:
             sessions = self.list_sessions()
             for session in sessions:
                 if session.id == session_id:
-                    return self._close_single_session(session)
+                    return self._close_single_session(session, kill=kill)
 
             print(f"Session '{session_id}' not found")
             return False
@@ -724,11 +874,12 @@ class ContainerManager:
             print(f"Error connecting to session: {e}")
             return False
 
-    def _close_single_session(self, session: Session) -> bool:
+    def _close_single_session(self, session: Session, kill: bool = False) -> bool:
         """Close a single session (helper for parallel processing)
 
         Args:
             session: The session to close
+            kill: If True, forcefully kill the container instead of graceful stop
 
         Returns:
             bool: Whether the session was successfully closed
@@ -738,20 +889,61 @@ class ContainerManager:
 
         try:
            container = self.client.containers.get(session.container_id)
-            container.stop()
+
+            try:
+                if kill:
+                    container.kill()
+                else:
+                    container.stop()
+            except DockerException:
+                pass
+
             container.remove()
+
+            network_filter_name = f"cubbi-network-filter-{session.id}"
+            try:
+                network_filter_container = self.client.containers.get(
+                    network_filter_name
+                )
+                logger.info(f"Stopping network-filter container {network_filter_name}")
+                try:
+                    if kill:
+                        network_filter_container.kill()
+                    else:
+                        network_filter_container.stop()
+                except DockerException:
+                    pass
+                network_filter_container.remove()
+            except DockerException:
+                pass
+
             self.session_manager.remove_session(session.id)
             return True
         except DockerException as e:
-            print(f"Error closing session {session.id}: {e}")
-            return False
+            error_message = str(e).lower()
+            if (
+                "is not running" in error_message
+                or "no such container" in error_message
+                or "not found" in error_message
+            ):
+                print(
+                    f"Container already stopped/removed, removing session {session.id} from list"
+                )
+                self.session_manager.remove_session(session.id)
+                return True
+            else:
+                print(f"Error closing session {session.id}: {e}")
+                return False
 
-    def close_all_sessions(self, progress_callback=None) -> Tuple[int, bool]:
+    def close_all_sessions(
+        self, progress_callback=None, kill: bool = False
+    ) -> Tuple[int, bool]:
         """Close all Cubbi sessions with parallel processing and progress reporting
 
         Args:
             progress_callback: Optional callback function to report progress
                 The callback should accept (session_id, status, message)
+            kill: If True, forcefully kill containers instead of graceful stop
 
         Returns:
             tuple: (number of sessions closed, success)
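The new `except DockerException` branch above decides between a real failure and a container that is simply already gone by substring-matching the lowercased error message. The same check, factored out as a hedged sketch (the function name is illustrative, not from the diff):

```python
def container_already_gone(error: Exception) -> bool:
    """True when the Docker error just means the container is stopped or missing."""
    message = str(error).lower()
    return (
        "is not running" in message
        or "no such container" in message
        or "not found" in message
    )


print(container_already_gone(Exception("404 Client Error: No such container: abc123")))  # True
print(container_already_gone(Exception("permission denied")))  # False
```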
@@ -763,20 +955,41 @@ class ContainerManager:
 
         # No need for session status as we receive it via callback
 
-        # Define a wrapper to track progress
         def close_with_progress(session):
             if not session.container_id:
                 return False
 
             try:
                 container = self.client.containers.get(session.container_id)
-                # Stop and remove container
-                container.stop()
+
+                try:
+                    if kill:
+                        container.kill()
+                    else:
+                        container.stop()
+                except DockerException:
+                    pass
+
                 container.remove()
-                # Remove from session storage
+
+                network_filter_name = f"cubbi-network-filter-{session.id}"
+                try:
+                    network_filter_container = self.client.containers.get(
+                        network_filter_name
+                    )
+                    try:
+                        if kill:
+                            network_filter_container.kill()
+                        else:
+                            network_filter_container.stop()
+                    except DockerException:
+                        pass
+                    network_filter_container.remove()
+                except DockerException:
+                    pass
+
                 self.session_manager.remove_session(session.id)
 
                 if progress_callback:
                     progress_callback(
                         session.id,
@@ -786,11 +999,29 @@ class ContainerManager:
 
                 return True
             except DockerException as e:
-                error_msg = f"Error: {str(e)}"
-                if progress_callback:
-                    progress_callback(session.id, "failed", error_msg)
-                print(f"Error closing session {session.id}: {e}")
-                return False
+                error_message = str(e).lower()
+                if (
+                    "is not running" in error_message
+                    or "no such container" in error_message
+                    or "not found" in error_message
+                ):
+                    print(
+                        f"Container already stopped/removed, removing session {session.id} from list"
+                    )
+                    self.session_manager.remove_session(session.id)
+                    if progress_callback:
+                        progress_callback(
+                            session.id,
+                            "completed",
+                            f"{session.name} removed from list (container already stopped)",
+                        )
+                    return True
+                else:
+                    error_msg = f"Error: {str(e)}"
+                    if progress_callback:
+                        progress_callback(session.id, "failed", error_msg)
+                    print(f"Error closing session {session.id}: {e}")
+                    return False
 
         # Use ThreadPoolExecutor to close sessions in parallel
         with concurrent.futures.ThreadPoolExecutor(
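`close_all_sessions` fans the per-session close out over a `concurrent.futures.ThreadPoolExecutor` and reports progress through a callback. A self-contained sketch of that shape, with a dummy closer standing in for the Docker calls (all names here are illustrative, not from the diff):

```python
import concurrent.futures


def close_all(session_ids, close_one, progress_callback=None):
    """Close sessions in parallel; return (closed_count, all_succeeded)."""

    def close_with_progress(sid):
        ok = close_one(sid)
        if progress_callback:
            progress_callback(sid, "completed" if ok else "failed", "")
        return ok

    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        # map() preserves input order and propagates worker exceptions.
        results = list(pool.map(close_with_progress, session_ids))
    return sum(results), all(results)


events = []
closed, ok = close_all(["a", "b", "c"], lambda sid: True, lambda *e: events.append(e))
print(closed, ok)  # 3 True
```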
@@ -1,11 +1,12 @@
-FROM node:20-slim
+FROM python:3.12-slim
 
 LABEL maintainer="team@monadical.com"
-LABEL description="Google Gemini CLI for Cubbi"
+LABEL description="Aider AI pair programming for Cubbi"
 
 # Install system dependencies including gosu for user switching
 RUN apt-get update && apt-get install -y --no-install-recommends \
     gosu \
+    sudo \
     passwd \
     bash \
     curl \
@@ -20,11 +21,9 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
     ripgrep \
     openssh-client \
     vim \
-    python3 \
-    python3-pip \
     && rm -rf /var/lib/apt/lists/*
 
-# Install uv (Python package manager) for cubbi_init.py compatibility
+# Install uv (Python package manager)
 WORKDIR /tmp
 RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
     sh install.sh && \
@@ -32,25 +31,26 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
     mv /root/.local/bin/uvx /usr/local/bin/uvx && \
     rm install.sh
 
-# Install Gemini CLI globally
-RUN npm install -g @google/gemini-cli
+# Install Aider using pip in system Python (more compatible with user switching)
+RUN python -m pip install aider-chat
 
-# Verify installation
-RUN gemini --version
+# Make sure aider is in PATH
+ENV PATH="/root/.local/bin:$PATH"
 
 # Create app directory
 WORKDIR /app
 
 # Copy initialization system
 COPY cubbi_init.py /cubbi/cubbi_init.py
-COPY gemini_cli_plugin.py /cubbi/gemini_cli_plugin.py
+COPY aider_plugin.py /cubbi/aider_plugin.py
 COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
 COPY init-status.sh /cubbi/init-status.sh
 
 # Make scripts executable
 RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
 
-# Add init status check to bashrc
+# Add aider to PATH in bashrc and init status check
+RUN echo 'PATH="/root/.local/bin:$PATH"' >> /etc/bash.bashrc
 RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc
 
 # Set up environment
cubbi/images/aider/README.md (new file, 277 lines)
@@ -0,0 +1,277 @@
+# Aider for Cubbi
+
+This image provides Aider (AI pair programming) in a Cubbi container environment.
+
+## Overview
+
+Aider is an AI pair programming tool that works in your terminal. This Cubbi image integrates Aider with secure API key management, persistent configuration, and support for multiple LLM providers.
+
+## Features
+
+- **Multiple LLM Support**: Works with OpenAI, Anthropic, DeepSeek, Gemini, OpenRouter, and more
+- **Secure Authentication**: API key management through Cubbi's secure environment system
+- **Persistent Configuration**: Settings and history preserved across container restarts
+- **Git Integration**: Automatic commits and git awareness
+- **Multi-Language Support**: Works with 100+ programming languages
+
+## Quick Start
+
+### 1. Set up API Key
+
+```bash
+# For OpenAI (GPT models)
+uv run -m cubbi.cli config set services.openai.api_key "your-openai-key"
+
+# For Anthropic (Claude models)
+uv run -m cubbi.cli config set services.anthropic.api_key "your-anthropic-key"
+
+# For DeepSeek (recommended for cost-effectiveness)
+uv run -m cubbi.cli config set services.deepseek.api_key "your-deepseek-key"
+```
+
+### 2. Run Aider Environment
+
+```bash
+# Start Aider container with your project
+uv run -m cubbi.cli session create --image aider /path/to/your/project
+
+# Or without a project
+uv run -m cubbi.cli session create --image aider
+```
+
+### 3. Use Aider
+
+```bash
+# Basic usage
+aider
+
+# With specific model
+aider --model sonnet
+
+# With specific files
+aider main.py utils.py
+
+# One-shot request
+aider --message "Add error handling to the login function"
+```
+
+## Configuration
+
+### Supported API Keys
+
+- `OPENAI_API_KEY`: OpenAI GPT models (GPT-4, GPT-4o, etc.)
+- `ANTHROPIC_API_KEY`: Anthropic Claude models (Sonnet, Haiku, etc.)
+- `DEEPSEEK_API_KEY`: DeepSeek models (cost-effective option)
+- `GEMINI_API_KEY`: Google Gemini models
+- `OPENROUTER_API_KEY`: OpenRouter (access to many models)
+
+### Additional Configuration
+
+- `AIDER_MODEL`: Default model to use (e.g., "sonnet", "o3-mini", "deepseek")
+- `AIDER_AUTO_COMMITS`: Enable automatic git commits (default: true)
+- `AIDER_DARK_MODE`: Enable dark mode interface (default: false)
+- `AIDER_API_KEYS`: Additional API keys in format "provider1=key1,provider2=key2"
+
+### Network Configuration
+
+- `HTTP_PROXY`: HTTP proxy server URL
+- `HTTPS_PROXY`: HTTPS proxy server URL
+
+## Usage Examples
+
+### Basic AI Pair Programming
+
+```bash
+# Start Aider with your project
+uv run -m cubbi.cli session create --image aider /path/to/project
+
+# Inside the container:
+aider                         # Start interactive session
+aider main.py                 # Work on specific file
+aider --message "Add tests"   # One-shot request
+```
+
+### Model Selection
+
+```bash
+# Use Claude Sonnet
+aider --model sonnet
+
+# Use GPT-4o
+aider --model gpt-4o
+
+# Use DeepSeek (cost-effective)
+aider --model deepseek
+
+# Use OpenRouter
+aider --model openrouter/anthropic/claude-3.5-sonnet
+```
+
+### Advanced Features
+
+```bash
+# Work with multiple files
+aider src/main.py tests/test_main.py
+
+# Auto-commit changes
+aider --auto-commits
+
+# Read-only mode (won't edit files)
+aider --read
+
+# Apply a specific change
+aider --message "Refactor the database connection code to use connection pooling"
+```
+
+### Enterprise/Proxy Setup
+
+```bash
+# With proxy
+uv run -m cubbi.cli session create --image aider \
+  --env HTTPS_PROXY="https://proxy.company.com:8080" \
+  /path/to/project
+
+# With custom model
+uv run -m cubbi.cli session create --image aider \
+  --env AIDER_MODEL="sonnet" \
+  /path/to/project
+```
+
+## Persistent Configuration
+
+The following directories are automatically persisted:
+
+- `~/.aider/`: Aider configuration and chat history
+- `~/.cache/aider/`: Model cache and temporary files
+
+Configuration files are maintained across container restarts, ensuring your preferences and chat history are preserved.
+
+## Model Recommendations
+
+### Best Overall Performance
+- **Claude 3.5 Sonnet**: Excellent code understanding and generation
+- **OpenAI GPT-4o**: Strong performance across languages
+- **Gemini 2.5 Pro**: Good balance of quality and speed
+
+### Cost-Effective Options
+- **DeepSeek V3**: Very cost-effective, good quality
+- **OpenRouter**: Access to multiple models with competitive pricing
+
+### Free Options
+- **Gemini 2.5 Pro Exp**: Free tier available
+- **OpenRouter**: Some free models available
+
+## File Structure
+
+```
+cubbi/images/aider/
+├── Dockerfile         # Container image definition
+├── cubbi_image.yaml   # Cubbi image configuration
+├── aider_plugin.py    # Authentication and setup plugin
+└── README.md          # This documentation
+```
+
+## Authentication Flow
+
+1. **Environment Variables**: API keys passed from Cubbi configuration
+2. **Plugin Setup**: `aider_plugin.py` creates environment configuration
+3. **Environment File**: Creates `~/.aider/.env` with API keys
+4. **Ready**: Aider is ready for use with configured authentication
+
+## Troubleshooting
+
+### Common Issues
+
+**No API Key Found**
+```
+ℹ️ No API keys found - Aider will run without pre-configuration
+```
+**Solution**: Set API key in Cubbi configuration:
+```bash
+uv run -m cubbi.cli config set services.openai.api_key "your-key"
+```
+
+**Model Not Available**
+```
+Error: Model 'xyz' not found
+```
+**Solution**: Check available models for your provider:
+```bash
+aider --models   # List available models
+```
+
+**Git Issues**
+```
+Git repository not found
+```
+**Solution**: Initialize git in your project or mount a git repository:
+```bash
+git init
+# or
+uv run -m cubbi.cli session create --image aider /path/to/git/project
+```
+
+**Network/Proxy Issues**
+```
+Connection timeout or proxy errors
+```
+**Solution**: Configure proxy settings:
+```bash
+uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
+```
+
+### Debug Mode
+
+```bash
+# Check Aider version
+aider --version
+
+# List available models
+aider --models
+
+# Check configuration
+cat ~/.aider/.env
+
+# Verbose output
+aider --verbose
+```
+
+## Security Considerations
+
+- **API Keys**: Stored securely with 0o600 permissions
+- **Environment**: Isolated container environment
+- **Git Integration**: Respects .gitignore and git configurations
+- **Code Safety**: Always review changes before accepting
+
+## Advanced Configuration
+
+### Custom Model Configuration
+
+```bash
+# Use with custom API endpoint
+uv run -m cubbi.cli session create --image aider \
+  --env OPENAI_API_BASE="https://api.custom-provider.com/v1" \
+  --env OPENAI_API_KEY="your-key"
+```
+
+### Multiple API Keys
+
+```bash
+# Configure multiple providers
+uv run -m cubbi.cli session create --image aider \
+  --env OPENAI_API_KEY="openai-key" \
+  --env ANTHROPIC_API_KEY="anthropic-key" \
+  --env AIDER_API_KEYS="provider1=key1,provider2=key2"
+```
+
+## Support
+
+For issues related to:
+- **Cubbi Integration**: Check Cubbi documentation or open an issue
+- **Aider Functionality**: Visit [Aider documentation](https://aider.chat/)
+- **Model Configuration**: Check [LLM documentation](https://aider.chat/docs/llms.html)
+- **API Keys**: Visit provider documentation (OpenAI, Anthropic, etc.)
+
+## License
+
+This image configuration is provided under the same license as the Cubbi project. Aider is licensed separately under Apache 2.0.
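The README above documents `AIDER_API_KEYS` as `"provider1=key1,provider2=key2"`, and the plugin below turns each pair into a `<PROVIDER>_API_KEY` environment variable. That parsing step in isolation (the function name is illustrative, not from the diff):

```python
def parse_aider_api_keys(raw: str) -> dict:
    """Parse "provider1=key1,provider2=key2" into {"PROVIDER1_API_KEY": "key1", ...}."""
    env_vars = {}
    for pair in raw.split(","):
        if "=" in pair:
            # split("=", 1) keeps any "=" inside the key value itself intact.
            provider, key = pair.strip().split("=", 1)
            env_vars[f"{provider.upper()}_API_KEY"] = key
    return env_vars


print(parse_aider_api_keys("deepseek=abc, openrouter=xyz"))
# {'DEEPSEEK_API_KEY': 'abc', 'OPENROUTER_API_KEY': 'xyz'}
```

Malformed pairs without an `=` are skipped rather than raising, matching the plugin's lenient handling.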
cubbi/images/aider/aider_plugin.py (new executable file, 183 lines)
@@ -0,0 +1,183 @@
+#!/usr/bin/env python3
+
+import os
+import stat
+from pathlib import Path
+
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership
+
+
+class AiderPlugin(ToolPlugin):
+    @property
+    def tool_name(self) -> str:
+        return "aider"
+
+    def _get_aider_config_dir(self) -> Path:
+        return Path("/home/cubbi/.aider")
+
+    def _get_aider_cache_dir(self) -> Path:
+        return Path("/home/cubbi/.cache/aider")
+
+    def _ensure_aider_dirs(self) -> tuple[Path, Path]:
+        config_dir = self._get_aider_config_dir()
+        cache_dir = self._get_aider_cache_dir()
+
+        self.create_directory_with_ownership(config_dir)
+        self.create_directory_with_ownership(cache_dir)
+
+        return config_dir, cache_dir
+
+    def is_already_configured(self) -> bool:
+        config_dir = self._get_aider_config_dir()
+        env_file = config_dir / ".env"
+        return env_file.exists()
+
+    def configure(self) -> bool:
+        self.status.log("Setting up Aider configuration...")
+
+        config_dir, cache_dir = self._ensure_aider_dirs()
+
+        env_vars = self._create_environment_config()
+
+        if env_vars:
+            env_file = config_dir / ".env"
+            success = self._write_env_file(env_file, env_vars)
+            if success:
+                self.status.log("✅ Aider environment configured successfully")
+            else:
+                self.status.log("⚠️ Failed to write Aider environment file", "WARNING")
+        else:
+            self.status.log(
+                "ℹ️ No API keys found - Aider will run without pre-configuration", "INFO"
+            )
+            self.status.log(
+                " You can configure API keys later using environment variables",
+                "INFO",
+            )
+
+        if not cubbi_config.mcps:
+            self.status.log("No MCP servers to integrate")
+            return True
+
+        self.status.log(
+            f"Found {len(cubbi_config.mcps)} MCP server(s) - no direct integration available for Aider"
+        )
+
+        return True
+
+    def _create_environment_config(self) -> dict[str, str]:
+        env_vars = {}
+
+        provider_config = cubbi_config.get_provider_for_default_model()
+        if provider_config and cubbi_config.defaults.model:
+            _, model_name = cubbi_config.defaults.model.split("/", 1)
+
+            env_vars["AIDER_MODEL"] = model_name
+            self.status.log(f"Set Aider model to {model_name}")
+
+            if provider_config.type == "anthropic":
+                env_vars["AIDER_ANTHROPIC_API_KEY"] = provider_config.api_key
+                self.status.log("Configured Anthropic API key for Aider")
+
+            elif provider_config.type == "openai":
+                env_vars["AIDER_OPENAI_API_KEY"] = provider_config.api_key
+                if provider_config.base_url:
+                    env_vars["AIDER_OPENAI_API_BASE"] = provider_config.base_url
+                    self.status.log(
+                        f"Set Aider OpenAI API base to {provider_config.base_url}"
+                    )
+                self.status.log("Configured OpenAI API key for Aider")
+
+            elif provider_config.type == "google":
+                env_vars["GEMINI_API_KEY"] = provider_config.api_key
+                self.status.log("Configured Google/Gemini API key for Aider")
+
+            elif provider_config.type == "openrouter":
+                env_vars["OPENROUTER_API_KEY"] = provider_config.api_key
+                self.status.log("Configured OpenRouter API key for Aider")
+
+            else:
+                self.status.log(
+                    f"Provider type '{provider_config.type}' not directly supported by Aider plugin",
+                    "WARNING",
+                )
+        else:
+            self.status.log(
+                "No default model or provider configured - checking legacy environment variables",
+                "WARNING",
+            )
+
+            api_key_mappings = {
+                "OPENAI_API_KEY": "AIDER_OPENAI_API_KEY",
+                "ANTHROPIC_API_KEY": "AIDER_ANTHROPIC_API_KEY",
+                "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY",
+                "GEMINI_API_KEY": "GEMINI_API_KEY",
+                "OPENROUTER_API_KEY": "OPENROUTER_API_KEY",
+            }
+
+            for env_var, aider_var in api_key_mappings.items():
+                value = os.environ.get(env_var)
+                if value:
+                    env_vars[aider_var] = value
+                    provider = env_var.replace("_API_KEY", "").lower()
+                    self.status.log(f"Added {provider} API key from environment")
+
+            openai_url = os.environ.get("OPENAI_URL")
+            if openai_url:
+                env_vars["AIDER_OPENAI_API_BASE"] = openai_url
+                self.status.log(
+                    f"Set OpenAI API base URL to {openai_url} from environment"
+                )
+
+            model = os.environ.get("AIDER_MODEL")
+            if model:
+                env_vars["AIDER_MODEL"] = model
+                self.status.log(f"Set model to {model} from environment")
+
+        additional_keys = os.environ.get("AIDER_API_KEYS")
+        if additional_keys:
+            try:
+                for pair in additional_keys.split(","):
+                    if "=" in pair:
+                        provider, key = pair.strip().split("=", 1)
+                        env_var_name = f"{provider.upper()}_API_KEY"
+                        env_vars[env_var_name] = key
+                        self.status.log(f"Added {provider} API key from AIDER_API_KEYS")
+            except Exception as e:
+                self.status.log(f"Failed to parse AIDER_API_KEYS: {e}", "WARNING")
+
+        auto_commits = os.environ.get("AIDER_AUTO_COMMITS", "true")
+        if auto_commits.lower() in ["true", "false"]:
+            env_vars["AIDER_AUTO_COMMITS"] = auto_commits
|
||||||
|
|
||||||
|
dark_mode = os.environ.get("AIDER_DARK_MODE", "false")
|
||||||
|
if dark_mode.lower() in ["true", "false"]:
|
||||||
|
env_vars["AIDER_DARK_MODE"] = dark_mode
|
||||||
|
|
||||||
|
for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
|
||||||
|
value = os.environ.get(proxy_var)
|
||||||
|
if value:
|
||||||
|
env_vars[proxy_var] = value
|
||||||
|
self.status.log(f"Added proxy configuration: {proxy_var}")
|
||||||
|
|
||||||
|
return env_vars
|
||||||
|
|
||||||
|
def _write_env_file(self, env_file: Path, env_vars: dict[str, str]) -> bool:
|
||||||
|
try:
|
||||||
|
content = "\n".join(f"{key}={value}" for key, value in env_vars.items())
|
||||||
|
|
||||||
|
with open(env_file, "w") as f:
|
||||||
|
f.write(content)
|
||||||
|
f.write("\n")
|
||||||
|
|
||||||
|
set_ownership(env_file)
|
||||||
|
os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)
|
||||||
|
|
||||||
|
self.status.log(f"Created Aider environment file at {env_file}")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
self.status.log(f"Failed to write Aider environment file: {e}", "ERROR")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
PLUGIN_CLASS = AiderPlugin
|
||||||
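The `AIDER_API_KEYS` handling above accepts a comma-separated list of `provider=key` pairs and turns each into a `<PROVIDER>_API_KEY` environment variable. A minimal standalone sketch of that parsing (the helper name `parse_aider_api_keys` is illustrative, not part of the plugin):

```python
def parse_aider_api_keys(raw: str) -> dict[str, str]:
    """Parse a comma-separated "provider=key" list into env-var mappings."""
    env_vars: dict[str, str] = {}
    for pair in raw.split(","):
        # Skip fragments without "=", matching the plugin's behavior
        if "=" in pair:
            provider, key = pair.strip().split("=", 1)
            env_vars[f"{provider.upper()}_API_KEY"] = key
    return env_vars


print(parse_aider_api_keys("openai=sk-1, deepseek=dk-2"))
```

Note that the key value may itself contain `=` characters, since only the first `=` splits the pair.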
42 cubbi/images/aider/cubbi_image.yaml Normal file
@@ -0,0 +1,42 @@
name: aider
description: Aider AI pair programming environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-aider:latest
persistent_configs: []
environments_to_forward:
  # API Keys
  - OPENAI_API_KEY
  - ANTHROPIC_API_KEY
  - ANTHROPIC_AUTH_TOKEN
  - ANTHROPIC_CUSTOM_HEADERS
  - DEEPSEEK_API_KEY
  - GEMINI_API_KEY
  - OPENROUTER_API_KEY
  - AIDER_API_KEYS

  # Model Configuration
  - AIDER_MODEL
  - CUBBI_MODEL
  - CUBBI_PROVIDER

  # Git Configuration
  - AIDER_AUTO_COMMITS
  - AIDER_DARK_MODE
  - GIT_AUTHOR_NAME
  - GIT_AUTHOR_EMAIL
  - GIT_COMMITTER_NAME
  - GIT_COMMITTER_EMAIL

  # Proxy Configuration
  - HTTP_PROXY
  - HTTPS_PROXY
  - NO_PROXY

  # OpenAI Configuration
  - OPENAI_URL
  - OPENAI_API_BASE
  - AIDER_OPENAI_API_BASE

  # Timezone (useful for logs and timestamps)
  - TZ
274 cubbi/images/aider/test_aider.py Executable file
@@ -0,0 +1,274 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Aider Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""

import subprocess
import sys
import tempfile
import re


def run_command(cmd, description="", check=True):
    """Run a shell command and return result"""
    print(f"\n🔍 {description}")
    print(f"Running: {cmd}")

    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, check=check
        )

        if result.stdout:
            print("STDOUT:")
            print(result.stdout)

        if result.stderr:
            print("STDERR:")
            print(result.stderr)

        return result
    except subprocess.CalledProcessError as e:
        print(f"❌ Command failed with exit code {e.returncode}")
        if e.stdout:
            print("STDOUT:")
            print(e.stdout)
        if e.stderr:
            print("STDERR:")
            print(e.stderr)
        if check:
            raise
        return e


def test_docker_image_exists():
    """Test if the Aider Docker image exists"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Existence")
    print("=" * 60)

    result = run_command(
        "docker images monadical/cubbi-aider:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
        "Checking if Aider Docker image exists",
    )

    if "monadical/cubbi-aider" in result.stdout:
        print("✅ Aider Docker image exists")
    else:
        print("❌ Aider Docker image not found")
        assert False, "Aider Docker image not found"


def test_aider_version():
    """Test basic Aider functionality in container"""
    print("\n" + "=" * 60)
    print("🧪 Testing Aider Version")
    print("=" * 60)

    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'aider --version'",
        "Testing Aider version command",
    )

    assert (
        "aider" in result.stdout and result.returncode == 0
    ), "Aider version command failed"
    print("✅ Aider version command works")


def test_api_key_configuration():
    """Test API key configuration and environment setup"""
    print("\n" + "=" * 60)
    print("🧪 Testing API Key Configuration")
    print("=" * 60)

    # Test with multiple API keys
    test_keys = {
        "OPENAI_API_KEY": "test-openai-key",
        "ANTHROPIC_API_KEY": "test-anthropic-key",
        "DEEPSEEK_API_KEY": "test-deepseek-key",
        "GEMINI_API_KEY": "test-gemini-key",
        "OPENROUTER_API_KEY": "test-openrouter-key",
    }

    env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])

    result = run_command(
        f"docker run --rm {env_flags} monadical/cubbi-aider:latest bash -c 'cat ~/.aider/.env'",
        "Testing API key configuration in .env file",
    )

    success = True
    for key, value in test_keys.items():
        if f"{key}={value}" not in result.stdout:
            print(f"❌ {key} not found in .env file")
            success = False
        else:
            print(f"✅ {key} configured correctly")

    # Test default configuration values
    if "AIDER_AUTO_COMMITS=true" in result.stdout:
        print("✅ Default AIDER_AUTO_COMMITS configured")
    else:
        print("❌ Default AIDER_AUTO_COMMITS not found")
        success = False

    if "AIDER_DARK_MODE=false" in result.stdout:
        print("✅ Default AIDER_DARK_MODE configured")
    else:
        print("❌ Default AIDER_DARK_MODE not found")
        success = False

    assert success, "API key configuration test failed"


def test_cubbi_cli_integration():
    """Test Cubbi CLI integration"""
    print("\n" + "=" * 60)
    print("🧪 Testing Cubbi CLI Integration")
    print("=" * 60)

    # Test image listing
    result = run_command(
        "uv run -m cubbi.cli image list | grep aider",
        "Testing Cubbi CLI can see Aider image",
    )

    if "aider" in result.stdout and "Aider AI pair" in result.stdout:
        print("✅ Cubbi CLI can list Aider image")
    else:
        print("❌ Cubbi CLI cannot see Aider image")
        return False

    # Test session creation with test command
    with tempfile.TemporaryDirectory() as temp_dir:
        test_env = {
            "OPENAI_API_KEY": "test-session-key",
            "ANTHROPIC_API_KEY": "test-anthropic-session-key",
        }

        env_vars = " ".join([f"{k}={v}" for k, v in test_env.items()])

        result = run_command(
            f"{env_vars} uv run -m cubbi.cli session create --image aider {temp_dir} --no-shell --run \"aider --version && echo 'Cubbi CLI test successful'\"",
            "Testing Cubbi CLI session creation with Aider",
        )

        assert (
            result.returncode == 0
            and re.search(r"aider \d+\.\d+\.\d+", result.stdout)
            and "Cubbi CLI test successful" in result.stdout
        ), "Cubbi CLI session creation failed"
        print("✅ Cubbi CLI session creation works")


def test_persistent_configuration():
    """Test persistent configuration directories"""
    print("\n" + "=" * 60)
    print("🧪 Testing Persistent Configuration")
    print("=" * 60)

    # Test that persistent directories are created
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-key' monadical/cubbi-aider:latest bash -c 'ls -la /home/cubbi/.aider/ && ls -la /home/cubbi/.cache/'",
        "Testing persistent configuration directories",
    )

    success = True

    if ".env" in result.stdout:
        print("✅ .env file created in ~/.aider/")
    else:
        print("❌ .env file not found in ~/.aider/")
        success = False

    if "aider" in result.stdout:
        print("✅ ~/.cache/aider directory exists")
    else:
        print("❌ ~/.cache/aider directory not found")
        success = False

    assert success, "Persistent configuration test failed"


def test_plugin_functionality():
    """Test the Aider plugin functionality"""
    print("\n" + "=" * 60)
    print("🧪 Testing Plugin Functionality")
    print("=" * 60)

    # Test plugin without API keys (should still work)
    result = run_command(
        "docker run --rm monadical/cubbi-aider:latest bash -c 'echo \"Plugin test without API keys\"'",
        "Testing plugin functionality without API keys",
    )

    if "No API keys found - Aider will run without pre-configuration" in result.stdout:
        print("✅ Plugin handles missing API keys gracefully")
    else:
        # This might be in stderr or initialization might have changed
        print("ℹ️ Plugin API key handling test - check output above")

    # Test plugin with API keys
    result = run_command(
        "docker run --rm -e OPENAI_API_KEY='test-plugin-key' monadical/cubbi-aider:latest bash -c 'echo \"Plugin test with API keys\"'",
        "Testing plugin functionality with API keys",
    )

    if "Aider environment configured successfully" in result.stdout:
        print("✅ Plugin configures environment successfully")
    else:
        print("❌ Plugin environment configuration failed")
        assert False, "Plugin environment configuration failed"


def main():
    """Run all tests"""
    print("🚀 Starting Aider Cubbi Image Tests")
    print("=" * 60)

    tests = [
        ("Docker Image Exists", test_docker_image_exists),
        ("Aider Version", test_aider_version),
        ("API Key Configuration", test_api_key_configuration),
        ("Persistent Configuration", test_persistent_configuration),
        ("Plugin Functionality", test_plugin_functionality),
        ("Cubbi CLI Integration", test_cubbi_cli_integration),
    ]

    results = {}

    for test_name, test_func in tests:
        try:
            test_func()
            results[test_name] = True
        except Exception as e:
            print(f"❌ Test '{test_name}' failed with exception: {e}")
            results[test_name] = False

    # Print summary
    print("\n" + "=" * 60)
    print("📊 TEST SUMMARY")
    print("=" * 60)

    total_tests = len(tests)
    passed_tests = sum(1 for result in results.values() if result)
    failed_tests = total_tests - passed_tests

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")

    if failed_tests == 0:
        print("\n🎉 All tests passed! Aider image is ready for use.")
        return 0
    else:
        print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
        return 1


if __name__ == "__main__":
    sys.exit(main())
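The `env_flags` string in `test_api_key_configuration` is built by interpolating each key/value pair into a `docker run -e` flag. A small sketch of that construction (the helper name `build_env_flags` is hypothetical, used here only to isolate the expression):

```python
def build_env_flags(env: dict[str, str]) -> str:
    # Render {"A": "1"} as '-e A="1"', space-joined, for a docker run command line
    return " ".join(f'-e {key}="{value}"' for key, value in env.items())


print(build_env_flags({"OPENAI_API_KEY": "test-openai-key"}))
```

Quoting the value guards against spaces, but a value containing a double quote would still break the shell command, which is acceptable for test fixtures.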
82 cubbi/images/claudecode/Dockerfile Normal file
@@ -0,0 +1,82 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Claude Code for Cubbi"

# Install system dependencies including gosu for user switching
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Install uv (Python package manager)
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install Node.js (for Claude Code NPM package)
ARG NODE_VERSION=v22.16.0
RUN mkdir -p /opt/node && \
    ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then \
        NODE_ARCH=linux-x64; \
    elif [ "$ARCH" = "aarch64" ]; then \
        NODE_ARCH=linux-arm64; \
    else \
        echo "Unsupported architecture"; exit 1; \
    fi && \
    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
    tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
    rm node.tar.gz

ENV PATH="/opt/node/bin:$PATH"

# Install Claude Code globally
RUN npm install -g @anthropic-ai/claude-code

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY claudecode_plugin.py /cubbi/claudecode_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh

# Make scripts executable
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh

# Add Node.js to PATH in bashrc and init status check
RUN echo 'PATH="/opt/node/bin:$PATH"' >> /etc/bash.bashrc
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
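The Node.js install step above maps `uname -m` output to a Node tarball architecture suffix and fails the build on anything else. The same mapping, sketched in Python for clarity (the `node_arch` helper is illustrative, not part of the image):

```python
def node_arch(machine: str) -> str:
    # Mirrors the Dockerfile's uname -m -> Node tarball suffix mapping
    mapping = {"x86_64": "linux-x64", "aarch64": "linux-arm64"}
    try:
        return mapping[machine]
    except KeyError:
        # The Dockerfile does: echo "Unsupported architecture"; exit 1
        raise SystemExit("Unsupported architecture")


print(node_arch("x86_64"))
```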
222 cubbi/images/claudecode/README.md Normal file
@@ -0,0 +1,222 @@
# Claude Code for Cubbi

This image provides Claude Code (Anthropic's official CLI for Claude) in a Cubbi container environment.

## Overview

Claude Code is an interactive CLI tool that helps with software engineering tasks. This Cubbi image integrates Claude Code with secure API key management, persistent configuration, and enterprise features.

## Features

- **Claude Code CLI**: Full access to Claude's coding capabilities
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and cache preserved across container restarts
- **Enterprise Support**: Bedrock and Vertex AI integration
- **Network Support**: Proxy configuration for corporate environments
- **Tool Permissions**: Pre-configured permissions for all Claude Code tools

## Quick Start

### 1. Set up API Key

```bash
# Set your Anthropic API key in Cubbi configuration
cubbi config set services.anthropic.api_key "your-api-key-here"
```

### 2. Run Claude Code Environment

```bash
# Start Claude Code container
cubbi run claudecode

# Execute Claude Code commands
cubbi exec claudecode "claude 'help me write a Python function'"

# Start interactive session
cubbi exec claudecode "claude"
```

## Configuration

### Required Environment Variables

- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)

### Optional Environment Variables

- `ANTHROPIC_AUTH_TOKEN`: Custom authorization token for enterprise deployments
- `ANTHROPIC_CUSTOM_HEADERS`: Additional HTTP headers (JSON format)
- `CLAUDE_CODE_USE_BEDROCK`: Set to "true" to use Amazon Bedrock
- `CLAUDE_CODE_USE_VERTEX`: Set to "true" to use Google Vertex AI
- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL
- `DISABLE_TELEMETRY`: Set to "true" to disable telemetry

### Advanced Configuration

```bash
# Enterprise deployment with Bedrock
cubbi config set environment.claude_code_use_bedrock true
cubbi run claudecode

# With custom proxy
cubbi config set network.https_proxy "https://proxy.company.com:8080"
cubbi run claudecode

# Disable telemetry
cubbi config set environment.disable_telemetry true
cubbi run claudecode
```

## Usage Examples

### Basic Usage

```bash
# Get help
cubbi exec claudecode "claude --help"

# One-time task
cubbi exec claudecode "claude 'write a unit test for this function'"

# Interactive mode
cubbi exec claudecode "claude"
```

### Working with Projects

```bash
# Start Claude Code in your project directory
cubbi run claudecode --mount /path/to/your/project:/app
cubbi exec claudecode "cd /app && claude"

# Create a commit
cubbi exec claudecode "cd /app && claude commit"
```

### Advanced Features

```bash
# Run with specific model configuration
cubbi exec claudecode "claude -m claude-3-5-sonnet-20241022 'analyze this code'"

# Use with plan mode
cubbi exec claudecode "claude -p 'refactor this function'"
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.claude/`: Claude Code settings and configuration
- `~/.cache/claude/`: Claude Code cache and temporary files

Configuration files are maintained across container restarts, ensuring your settings and preferences are preserved.

## File Structure

```
cubbi/images/claudecode/
├── Dockerfile              # Container image definition
├── cubbi_image.yaml        # Cubbi image configuration
├── claudecode_plugin.py    # Authentication and setup plugin
├── cubbi_init.py           # Initialization script (shared)
├── init-status.sh          # Status check script (shared)
└── README.md               # This documentation
```

## Authentication Flow

1. **Environment Variables**: API key passed from Cubbi configuration
2. **Plugin Setup**: `claudecode_plugin.py` creates `~/.claude/settings.json`
3. **Verification**: Plugin verifies Claude Code installation and configuration
4. **Ready**: Claude Code is ready for use with configured authentication

## Troubleshooting

### Common Issues

**API Key Not Set**
```
⚠️ No authentication configuration found
Please set ANTHROPIC_API_KEY environment variable
```
**Solution**: Set API key in Cubbi configuration:
```bash
cubbi config set services.anthropic.api_key "your-api-key-here"
```

**Claude Code Not Found**
```
❌ Claude Code not properly installed
```
**Solution**: Rebuild the container image:
```bash
docker build -t cubbi-claudecode:latest cubbi/images/claudecode/
```

**Network Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
cubbi config set network.https_proxy "your-proxy-url"
```

### Debug Mode

Enable verbose output for debugging:

```bash
# Check configuration
cubbi exec claudecode "cat ~/.claude/settings.json"

# Verify installation
cubbi exec claudecode "claude --version"
cubbi exec claudecode "which claude"
cubbi exec claudecode "node --version"
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Configuration**: Settings files have restricted access
- **Environment**: Isolated container environment
- **Telemetry**: Can be disabled for privacy

## Development

### Building the Image

```bash
# Build locally
docker build -t cubbi-claudecode:test cubbi/images/claudecode/

# Test basic functionality
docker run --rm -it \
  -e ANTHROPIC_API_KEY="your-api-key" \
  cubbi-claudecode:test \
  bash -c "claude --version"
```

### Testing

```bash
# Run through Cubbi
cubbi run claudecode --name test-claude
cubbi exec test-claude "claude --version"
cubbi stop test-claude
```

## Support

For issues related to:
- **Cubbi Integration**: Check Cubbi documentation or open an issue
- **Claude Code**: Visit [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code)
- **API Keys**: Visit [Anthropic Console](https://console.anthropic.com/)

## License

This image configuration is provided under the same license as the Cubbi project. Claude Code is licensed separately by Anthropic.
132 cubbi/images/claudecode/claudecode_plugin.py Executable file
@@ -0,0 +1,132 @@
#!/usr/bin/env python3

import json
import os
import stat
from pathlib import Path

from cubbi_init import ToolPlugin, cubbi_config, set_ownership


class ClaudeCodePlugin(ToolPlugin):
    @property
    def tool_name(self) -> str:
        return "claudecode"

    def _get_claude_dir(self) -> Path:
        return Path("/home/cubbi/.claude")

    def is_already_configured(self) -> bool:
        settings_file = self._get_claude_dir() / "settings.json"
        return settings_file.exists()

    def configure(self) -> bool:
        self.status.log("Setting up Claude Code authentication...")

        claude_dir = self.create_directory_with_ownership(self._get_claude_dir())
        claude_dir.chmod(0o700)

        settings = self._create_settings()

        if settings:
            settings_file = claude_dir / "settings.json"
            success = self._write_settings(settings_file, settings)
            if success:
                self.status.log("✅ Claude Code authentication configured successfully")
                self._integrate_mcp_servers()
                return True
            else:
                return False
        else:
            self.status.log("⚠️ No authentication configuration found", "WARNING")
            self.status.log(
                " Please set ANTHROPIC_API_KEY environment variable", "WARNING"
            )
            self.status.log(" Claude Code will run without authentication", "INFO")
            self._integrate_mcp_servers()
            return True

    def _integrate_mcp_servers(self) -> None:
        if not cubbi_config.mcps:
            self.status.log("No MCP servers to integrate")
            return

        self.status.log("MCP server integration available for Claude Code")

    def _create_settings(self) -> dict | None:
        settings = {}

        anthropic_provider = None
        for provider_name, provider_config in cubbi_config.providers.items():
            if provider_config.type == "anthropic":
                anthropic_provider = provider_config
                break

        if not anthropic_provider or not anthropic_provider.api_key:
            api_key = os.environ.get("ANTHROPIC_API_KEY")
            if not api_key:
                return None
            settings["apiKey"] = api_key
        else:
            settings["apiKey"] = anthropic_provider.api_key

        auth_token = os.environ.get("ANTHROPIC_AUTH_TOKEN")
        if auth_token:
            settings["authToken"] = auth_token

        custom_headers = os.environ.get("ANTHROPIC_CUSTOM_HEADERS")
        if custom_headers:
            try:
                settings["customHeaders"] = json.loads(custom_headers)
            except json.JSONDecodeError:
                self.status.log(
                    "⚠️ Invalid ANTHROPIC_CUSTOM_HEADERS format, skipping", "WARNING"
                )

        if os.environ.get("CLAUDE_CODE_USE_BEDROCK") == "true":
            settings["provider"] = "bedrock"

        if os.environ.get("CLAUDE_CODE_USE_VERTEX") == "true":
            settings["provider"] = "vertex"

        http_proxy = os.environ.get("HTTP_PROXY")
        https_proxy = os.environ.get("HTTPS_PROXY")
        if http_proxy or https_proxy:
            settings["proxy"] = {}
            if http_proxy:
                settings["proxy"]["http"] = http_proxy
            if https_proxy:
                settings["proxy"]["https"] = https_proxy

        if os.environ.get("DISABLE_TELEMETRY") == "true":
            settings["telemetry"] = {"enabled": False}

        settings["permissions"] = {
            "tools": {
                "read": {"allowed": True},
                "write": {"allowed": True},
                "edit": {"allowed": True},
                "bash": {"allowed": True},
                "webfetch": {"allowed": True},
                "websearch": {"allowed": True},
            }
        }

        return settings

    def _write_settings(self, settings_file: Path, settings: dict) -> bool:
        try:
            with open(settings_file, "w") as f:
                json.dump(settings, f, indent=2)

            set_ownership(settings_file)
            os.chmod(settings_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Claude Code settings at {settings_file}")
            return True
        except Exception as e:
            self.status.log(f"Failed to write Claude Code settings: {e}", "ERROR")
            return False


PLUGIN_CLASS = ClaudeCodePlugin
15 cubbi/images/claudecode/cubbi_image.yaml Normal file
@@ -0,0 +1,15 @@
name: claudecode
description: Claude Code AI environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-claudecode:latest
persistent_configs: []
environments_to_forward:
  - ANTHROPIC_API_KEY
  - ANTHROPIC_AUTH_TOKEN
  - ANTHROPIC_CUSTOM_HEADERS
  - CLAUDE_CODE_USE_BEDROCK
  - CLAUDE_CODE_USE_VERTEX
  - HTTP_PROXY
  - HTTPS_PROXY
  - DISABLE_TELEMETRY
cubbi/images/claudecode/test_claudecode.py (new executable file, 251 lines)
@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""
Automated test suite for Claude Code Cubbi integration
"""

import subprocess


def run_test(description: str, command: list, timeout: int = 30) -> bool:
    """Run a test command and return success status"""
    print(f"🧪 Testing: {description}")
    try:
        result = subprocess.run(
            command, capture_output=True, text=True, timeout=timeout
        )
        if result.returncode == 0:
            print(" ✅ PASS")
            return True
        else:
            print(f" ❌ FAIL: {result.stderr}")
            if result.stdout:
                print(f" 📋 stdout: {result.stdout}")
            return False
    except subprocess.TimeoutExpired:
        print(f" ⏰ TIMEOUT: Command exceeded {timeout}s")
        return False
    except Exception as e:
        print(f" ❌ ERROR: {e}")
        return False


def test_suite():
    """Run complete test suite"""
    tests_passed = 0
    total_tests = 0

    print("🚀 Starting Claude Code Cubbi Integration Test Suite")
    print("=" * 60)

    # Test 1: Build image
    total_tests += 1
    if run_test(
        "Build Claude Code image",
        ["docker", "build", "-t", "cubbi-claudecode:test", "cubbi/images/claudecode/"],
        timeout=180,
    ):
        tests_passed += 1

    # Test 2: Tag image for Cubbi
    total_tests += 1
    if run_test(
        "Tag image for Cubbi",
        ["docker", "tag", "cubbi-claudecode:test", "monadical/cubbi-claudecode:latest"],
    ):
        tests_passed += 1

    # Test 3: Basic container startup
    total_tests += 1
    if run_test(
        "Container startup with test API key",
        [
            "docker",
            "run",
            "--rm",
            "-e",
            "ANTHROPIC_API_KEY=test-key",
            "cubbi-claudecode:test",
            "bash",
            "-c",
            "claude --version",
        ],
    ):
        tests_passed += 1

    # Test 4: Cubbi image list
    total_tests += 1
    if run_test(
        "Cubbi image list includes claudecode",
        ["uv", "run", "-m", "cubbi.cli", "image", "list"],
    ):
        tests_passed += 1

    # Test 5: Cubbi session creation
    total_tests += 1
    session_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if session_result.returncode == 0:
        print("🧪 Testing: Cubbi session creation")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID for cleanup
        session_id = None
        for line in session_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            # Test 6: Session cleanup
            total_tests += 1
            if run_test(
                "Clean up test session",
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
            ):
                tests_passed += 1
        else:
            print("🧪 Testing: Clean up test session")
            print(" ⚠️ SKIP: Could not extract session ID")
            total_tests += 1
    else:
        print("🧪 Testing: Cubbi session creation")
        print(f" ❌ FAIL: {session_result.stderr}")
        total_tests += 2  # This test and cleanup test both fail

    # Test 7: Session without API key
    total_tests += 1
    no_key_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-no-key",
            "--no-connect",
            "--run",
            "claude --version",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if no_key_result.returncode == 0:
        print("🧪 Testing: Session without API key")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in no_key_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Session without API key")
        print(f" ❌ FAIL: {no_key_result.stderr}")

    # Test 8: Persistent configuration test
    total_tests += 1
    persist_result = subprocess.run(
        [
            "uv",
            "run",
            "-m",
            "cubbi.cli",
            "session",
            "create",
            "--image",
            "claudecode",
            "--name",
            "test-persist-auto",
            "--project",
            "test-automation",
            "--no-connect",
            "--env",
            "ANTHROPIC_API_KEY=test-key",
            "--run",
            "echo 'automation test' > ~/.claude/automation.txt && cat ~/.claude/automation.txt",
        ],
        capture_output=True,
        text=True,
        timeout=60,
    )

    if persist_result.returncode == 0:
        print("🧪 Testing: Persistent configuration")
        print(" ✅ PASS")
        tests_passed += 1

        # Extract session ID and close
        session_id = None
        for line in persist_result.stdout.split("\n"):
            if "Session ID:" in line:
                session_id = line.split("Session ID: ")[1].strip()
                break

        if session_id:
            subprocess.run(
                ["uv", "run", "-m", "cubbi.cli", "session", "close", session_id],
                capture_output=True,
                timeout=30,
            )
    else:
        print("🧪 Testing: Persistent configuration")
        print(f" ❌ FAIL: {persist_result.stderr}")

    print("=" * 60)
    print(f"📊 Test Results: {tests_passed}/{total_tests} tests passed")

    if tests_passed == total_tests:
        print("🎉 All tests passed! Claude Code integration is working correctly.")
        return True
    else:
        print(
            f"❌ {total_tests - tests_passed} test(s) failed. Please check the output above."
        )
        return False


def main():
    """Main test entry point"""
    success = test_suite()
    exit(0 if success else 1)


if __name__ == "__main__":
    main()
cubbi/images/crush/Dockerfile (new file, 62 lines)
@@ -0,0 +1,62 @@
FROM python:3.12-slim

LABEL maintainer="team@monadical.com"
LABEL description="Crush AI coding assistant for Cubbi"

# Install system dependencies including gosu for user switching and shadow for useradd/groupadd
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu \
    sudo \
    passwd \
    bash \
    curl \
    bzip2 \
    iputils-ping \
    iproute2 \
    libxcb1 \
    libdbus-1-3 \
    nano \
    tmux \
    git-core \
    ripgrep \
    openssh-client \
    vim \
    nodejs \
    npm \
    && rm -rf /var/lib/apt/lists/*

# Install deps
WORKDIR /tmp
RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
    sh install.sh && \
    mv /root/.local/bin/uv /usr/local/bin/uv && \
    mv /root/.local/bin/uvx /usr/local/bin/uvx && \
    rm install.sh

# Install crush via npm
RUN npm install -g @charmland/crush

# Create app directory
WORKDIR /app

# Copy initialization system
COPY cubbi_init.py /cubbi/cubbi_init.py
COPY crush_plugin.py /cubbi/crush_plugin.py
COPY cubbi_image.yaml /cubbi/cubbi_image.yaml
COPY init-status.sh /cubbi/init-status.sh
RUN chmod +x /cubbi/cubbi_init.py /cubbi/init-status.sh
RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.bashrc

# Set up environment
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV UV_LINK_MODE=copy

# Pre-install the cubbi_init
RUN /cubbi/cubbi_init.py --help

# Set WORKDIR to /app, common practice and expected by cubbi-init.sh
WORKDIR /app

ENTRYPOINT ["/cubbi/cubbi_init.py"]
CMD ["tail", "-f", "/dev/null"]
cubbi/images/crush/README.md (new file, 77 lines)
@@ -0,0 +1,77 @@
# Crush Image for Cubbi

This image provides a containerized environment for running [Crush](https://github.com/charmbracelet/crush), a terminal-based AI coding assistant.

## Features

- Pre-configured environment for the Crush AI coding assistant
- Multi-model support (OpenAI, Anthropic, Groq)
- JSON-based configuration
- MCP server integration support
- Session preservation across runs

## Environment Variables

### AI Provider Configuration

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `OPENAI_API_KEY` | OpenAI API key for crush | No | - |
| `ANTHROPIC_API_KEY` | Anthropic API key for crush | No | - |
| `GROQ_API_KEY` | Groq API key for crush | No | - |
| `OPENAI_URL` | Custom OpenAI-compatible API URL | No | - |
| `CUBBI_MODEL` | AI model to use with crush | No | - |
| `CUBBI_PROVIDER` | AI provider to use with crush | No | - |

### Cubbi Core Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `CUBBI_USER_ID` | UID for the container user | No | `1000` |
| `CUBBI_GROUP_ID` | GID for the container user | No | `1000` |
| `CUBBI_RUN_COMMAND` | Command to execute after initialization | No | - |
| `CUBBI_NO_SHELL` | Exit after command execution | No | `false` |
| `CUBBI_CONFIG_DIR` | Directory for persistent configurations | No | `/cubbi-config` |
| `CUBBI_PERSISTENT_LINKS` | Semicolon-separated list of source:target symlinks | No | - |

### MCP Integration Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `MCP_COUNT` | Number of available MCP servers | No |
| `MCP_NAMES` | JSON array of MCP server names | No |
| `MCP_{idx}_NAME` | Name of MCP server at index | No |
| `MCP_{idx}_TYPE` | Type of MCP server | No |
| `MCP_{idx}_HOST` | Hostname of MCP server | No |
| `MCP_{idx}_URL` | Full URL for remote MCP servers | No |

## Build

To build this image:

```bash
cd cubbi/images/crush
docker build -t monadical/cubbi-crush:latest .
```

## Usage

```bash
# Create a new session with this image
cubbix -i crush

# Run crush with a specific provider and model
cubbix -i crush -e CUBBI_PROVIDER=openai -e CUBBI_MODEL=gpt-4

# Test the crush installation
cubbix -i crush --no-shell --run "crush --help"
```

## Configuration

Crush uses a JSON configuration stored in `/home/cubbi/.config/crush/crush.json`. The plugin automatically configures:

- AI providers based on available API keys
- Default models and providers from environment variables
- Session preservation settings
- MCP server integrations
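For illustration, a minimal `crush.json` of the shape the plugin generates — the provider name, API key, and model IDs below are placeholders, not shipped defaults:

```json
{
  "$schema": "https://charm.land/crush.json",
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "models": [{"id": "gpt-4o", "name": "gpt-4o"}]
    }
  },
  "models": {
    "large": {"provider": "openai", "model": "gpt-4o"},
    "small": {"provider": "openai", "model": "gpt-4o"}
  }
}
```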
cubbi/images/crush/crush_plugin.py (new file, 230 lines)
@@ -0,0 +1,230 @@
#!/usr/bin/env python3

import json
from pathlib import Path
from typing import Any

from cubbi_init import ToolPlugin, cubbi_config, set_ownership

STANDARD_PROVIDERS = ["anthropic", "openai", "google", "openrouter"]


class CrushPlugin(ToolPlugin):
    @property
    def tool_name(self) -> str:
        return "crush"

    def _get_user_config_path(self) -> Path:
        return Path("/home/cubbi/.config/crush")

    def is_already_configured(self) -> bool:
        config_file = self._get_user_config_path() / "crush.json"
        return config_file.exists()

    def configure(self) -> bool:
        return self._setup_tool_configuration() and self._integrate_mcp_servers()

    def _map_provider_to_crush_format(
        self, provider_name: str, provider_config, is_default_provider: bool = False
    ) -> dict[str, Any] | None:
        # Handle standard providers without base_url
        if not provider_config.base_url:
            if provider_config.type in STANDARD_PROVIDERS:
                # Populate models for any standard provider that has models
                models_list = []
                if provider_config.models:
                    for model in provider_config.models:
                        model_id = model.get("id", "")
                        if model_id:
                            models_list.append({"id": model_id, "name": model_id})

                provider_entry = {
                    "api_key": provider_config.api_key,
                    "models": models_list,
                }
                return provider_entry

        # Handle custom providers with base_url
        models_list = []

        # Add all models for any provider type that has models
        if provider_config.models:
            for model in provider_config.models:
                model_id = model.get("id", "")
                if model_id:
                    models_list.append({"id": model_id, "name": model_id})

        provider_entry = {
            "api_key": provider_config.api_key,
            "base_url": provider_config.base_url,
            "models": models_list,
        }

        if provider_config.type in STANDARD_PROVIDERS:
            if provider_config.type == "anthropic":
                provider_entry["type"] = "anthropic"
            elif provider_config.type == "openai":
                provider_entry["type"] = "openai"
            elif provider_config.type == "google":
                provider_entry["type"] = "gemini"
            elif provider_config.type == "openrouter":
                provider_entry["type"] = "openai"
                provider_entry["name"] = f"{provider_name} ({provider_config.type})"
        else:
            provider_entry["type"] = "openai"
            provider_entry["name"] = f"{provider_name} ({provider_config.type})"

        return provider_entry

    def _setup_tool_configuration(self) -> bool:
        config_dir = self.create_directory_with_ownership(self._get_user_config_path())
        if not config_dir.exists():
            self.status.log(
                f"Config directory {config_dir} does not exist and could not be created",
                "ERROR",
            )
            return False

        config_file = config_dir / "crush.json"

        config_data = {"$schema": "https://charm.land/crush.json", "providers": {}}

        default_provider_name = None
        if cubbi_config.defaults.model:
            default_provider_name = cubbi_config.defaults.model.split("/", 1)[0]

        self.status.log(
            f"Found {len(cubbi_config.providers)} configured providers for Crush"
        )

        for provider_name, provider_config in cubbi_config.providers.items():
            is_default_provider = provider_name == default_provider_name
            crush_provider = self._map_provider_to_crush_format(
                provider_name, provider_config, is_default_provider
            )
            if crush_provider:
                crush_provider_name = (
                    "gemini" if provider_config.type == "google" else provider_name
                )
                config_data["providers"][crush_provider_name] = crush_provider
                self.status.log(
                    f"Added {crush_provider_name} provider to Crush configuration{'(default)' if is_default_provider else ''}"
                )

        if cubbi_config.defaults.model:
            provider_part, model_part = cubbi_config.defaults.model.split("/", 1)
            config_data["models"] = {
                "large": {"provider": provider_part, "model": model_part},
                "small": {"provider": provider_part, "model": model_part},
            }
            self.status.log(f"Set default model to {cubbi_config.defaults.model}")

            provider = cubbi_config.providers.get(provider_part)
            if provider and provider.base_url:
                config_data["providers"][provider_part]["models"].append(
                    {"id": model_part, "name": model_part}
                )

        if not config_data["providers"]:
            self.status.log(
                "No providers configured, skipping Crush configuration file creation"
            )
            return True

        try:
            with config_file.open("w") as f:
                json.dump(config_data, f, indent=2)

            set_ownership(config_file)

            self.status.log(
                f"Created Crush configuration at {config_file} with {len(config_data['providers'])} providers"
            )
            return True
        except Exception as e:
            self.status.log(f"Failed to write Crush configuration: {e}", "ERROR")
            return False

    def _integrate_mcp_servers(self) -> bool:
        if not cubbi_config.mcps:
            self.status.log("No MCP servers to integrate")
            return True

        config_dir = self.create_directory_with_ownership(self._get_user_config_path())
        if not config_dir.exists():
            self.status.log(
                f"Config directory {config_dir} does not exist and could not be created",
                "ERROR",
            )
            return False

        config_file = config_dir / "crush.json"

        if config_file.exists():
            try:
                with config_file.open("r") as f:
                    config_data = json.load(f)
            except (json.JSONDecodeError, OSError) as e:
                self.status.log(f"Failed to load existing config: {e}", "WARNING")
                config_data = {
                    "$schema": "https://charm.land/crush.json",
                    "providers": {},
                }
        else:
            config_data = {"$schema": "https://charm.land/crush.json", "providers": {}}

        if "mcps" not in config_data:
            config_data["mcps"] = {}

        for mcp in cubbi_config.mcps:
            if mcp.type == "remote":
                if mcp.name and mcp.url:
                    self.status.log(f"Adding remote MCP server: {mcp.name} - {mcp.url}")
                    config_data["mcps"][mcp.name] = {
                        "transport": {"type": "sse", "url": mcp.url},
                        "enabled": True,
                    }
            elif mcp.type == "local":
                if mcp.name and mcp.command:
                    self.status.log(
                        f"Adding local MCP server: {mcp.name} - {mcp.command}"
                    )
                    # Crush uses stdio type for local MCPs
                    transport_config = {
                        "type": "stdio",
                        "command": mcp.command,
                    }
                    if mcp.args:
                        transport_config["args"] = mcp.args
                    if mcp.env:
                        transport_config["env"] = mcp.env
                    config_data["mcps"][mcp.name] = {
                        "transport": transport_config,
                        "enabled": True,
                    }
            elif mcp.type in ["docker", "proxy"]:
                if mcp.name and mcp.host:
                    mcp_port = mcp.port or 8080
                    mcp_url = f"http://{mcp.host}:{mcp_port}/sse"
                    self.status.log(f"Adding MCP server: {mcp.name} - {mcp_url}")
                    config_data["mcps"][mcp.name] = {
                        "transport": {"type": "sse", "url": mcp_url},
                        "enabled": True,
                    }

        try:
            with config_file.open("w") as f:
                json.dump(config_data, f, indent=2)

            set_ownership(config_file)

            self.status.log(
                f"Integrated {len(cubbi_config.mcps)} MCP servers into Crush configuration"
            )
            return True
        except Exception as e:
            self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
            return False


PLUGIN_CLASS = CrushPlugin
cubbi/images/crush/cubbi_image.yaml (new file, 16 lines)
@@ -0,0 +1,16 @@
name: crush
description: Crush AI coding assistant environment
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-crush:latest
persistent_configs: []
environments_to_forward:
  # API Keys
  - OPENAI_API_KEY
  - ANTHROPIC_API_KEY
  - ANTHROPIC_AUTH_TOKEN
  - ANTHROPIC_CUSTOM_HEADERS
  - DEEPSEEK_API_KEY
  - GEMINI_API_KEY
  - OPENROUTER_API_KEY
  - AIDER_API_KEYS
@@ -1,6 +1,6 @@
|
|||||||
#!/usr/bin/env -S uv run --script
|
#!/usr/bin/env -S uv run --script
|
||||||
# /// script
|
# /// script
|
||||||
# dependencies = ["ruamel.yaml"]
|
# dependencies = ["ruamel.yaml", "pydantic"]
|
||||||
# ///
|
# ///
|
||||||
"""
|
"""
|
||||||
Standalone Cubbi initialization script
|
Standalone Cubbi initialization script
|
||||||
@@ -19,15 +19,104 @@ import sys
|
|||||||
from abc import ABC, abstractmethod
|
from abc import ABC, abstractmethod
|
||||||
from dataclasses import dataclass, field
|
from dataclasses import dataclass, field
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, List
|
from typing import Any
|
||||||
|
|
||||||
|
from pydantic import BaseModel
|
||||||
from ruamel.yaml import YAML
|
from ruamel.yaml import YAML
|
||||||
|
|
||||||
|
|
||||||
# Status Management
|
class UserConfig(BaseModel):
|
||||||
class StatusManager:
|
uid: int = 1000
|
||||||
"""Manages initialization status and logging"""
|
gid: int = 1000
|
||||||
|
|
||||||
|
|
||||||
|
class ProjectConfig(BaseModel):
|
||||||
|
url: str | None = None
|
||||||
|
config_dir: str | None = None
|
||||||
|
image_config_dir: str | None = None
|
||||||
|
|
||||||
|
|
||||||
|
class PersistentLink(BaseModel):
|
||||||
|
source: str
|
||||||
|
target: str
|
||||||
|
type: str
|
||||||
|
|
||||||
|
|
||||||
|
class ProviderConfig(BaseModel):
|
||||||
|
type: str
|
||||||
|
api_key: str
|
||||||
|
base_url: str | None = None
|
||||||
|
models: list[dict[str, str]] = []
|
||||||
|
|
||||||
|
|
||||||
|
class MCPConfig(BaseModel):
|
||||||
|
name: str
|
||||||
|
type: str
|
||||||
|
host: str | None = None
|
||||||
|
port: int | None = None
|
||||||
|
url: str | None = None
|
||||||
|
headers: dict[str, str] | None = None
|
||||||
|
command: str | None = None
|
||||||
|
args: list[str] = []
|
||||||
|
env: dict[str, str] = {}
|
||||||
|
|
||||||
|
|
||||||
|
class DefaultsConfig(BaseModel):
|
||||||
|
model: str | None = None
|
||||||
|
|
||||||
|
|
||||||
|
class SSHConfig(BaseModel):
|
||||||
|
enabled: bool = False
|
||||||
|
|
||||||
|
|
||||||
|
class CubbiConfig(BaseModel):
|
||||||
|
version: str = "1.0"
|
||||||
|
user: UserConfig = UserConfig()
|
||||||
|
providers: dict[str, ProviderConfig] = {}
|
||||||
|
mcps: list[MCPConfig] = []
|
||||||
|
project: ProjectConfig = ProjectConfig()
|
||||||
|
persistent_links: list[PersistentLink] = []
|
||||||
|
defaults: DefaultsConfig = DefaultsConfig()
|
||||||
|
ssh: SSHConfig = SSHConfig()
|
||||||
|
run_command: str | None = None
|
||||||
|
no_shell: bool = False
|
||||||
|
|
||||||
|
def get_provider_for_default_model(self) -> ProviderConfig | None:
|
||||||
|
if not self.defaults.model or "/" not in self.defaults.model:
|
||||||
|
return None
|
||||||
|
|
||||||
|
provider_name = self.defaults.model.split("/")[0]
|
||||||
|
return self.providers.get(provider_name)
|
||||||
|
|
||||||
|
|
||||||
|
def load_cubbi_config() -> CubbiConfig:
|
||||||
|
config_path = Path("/cubbi/config.yaml")
|
||||||
|
if not config_path.exists():
|
||||||
|
return CubbiConfig()
|
||||||
|
|
||||||
|
yaml = YAML(typ="safe")
|
||||||
|
with open(config_path, "r") as f:
|
||||||
|
config_data = yaml.load(f) or {}
|
||||||
|
|
||||||
|
return CubbiConfig(**config_data)
|
||||||
|
|
||||||
|
|
||||||
|
cubbi_config = load_cubbi_config()
|
||||||
|
|
||||||
|
|
||||||
|
def get_user_ids() -> tuple[int, int]:
|
||||||
|
return cubbi_config.user.uid, cubbi_config.user.gid
|
||||||
|
|
||||||
|
|
||||||
|
def set_ownership(path: Path) -> None:
|
||||||
|
user_id, group_id = get_user_ids()
|
||||||
|
try:
|
||||||
|
os.chown(path, user_id, group_id)
|
||||||
|
except OSError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class StatusManager:
|
||||||
def __init__(
|
def __init__(
|
||||||
self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status"
|
self, log_file: str = "/cubbi/init.log", status_file: str = "/cubbi/init.status"
|
||||||
):
|
):
|
||||||
@@ -36,12 +125,10 @@ class StatusManager:
|
|||||||
self._setup_logging()
|
self._setup_logging()
|
||||||
|
|
||||||
def _setup_logging(self) -> None:
|
def _setup_logging(self) -> None:
|
||||||
"""Set up logging to both stdout and log file"""
|
|
||||||
self.log_file.touch(exist_ok=True)
|
self.log_file.touch(exist_ok=True)
|
||||||
self.set_status(False)
|
self.set_status(False)
|
||||||
|
|
||||||
def log(self, message: str, level: str = "INFO") -> None:
|
def log(self, message: str, level: str = "INFO") -> None:
|
||||||
"""Log a message with timestamp"""
|
|
||||||
print(message)
|
print(message)
|
||||||
sys.stdout.flush()
|
sys.stdout.flush()
|
||||||
|
|
||||||
@@ -50,25 +137,19 @@ class StatusManager:
|
|||||||
f.flush()
|
f.flush()
|
||||||
|
|
||||||
def set_status(self, complete: bool) -> None:
|
def set_status(self, complete: bool) -> None:
|
||||||
"""Set initialization completion status"""
|
|
||||||
status = "true" if complete else "false"
|
status = "true" if complete else "false"
|
||||||
with open(self.status_file, "w") as f:
|
with open(self.status_file, "w") as f:
|
||||||
f.write(f"INIT_COMPLETE={status}\n")
|
f.write(f"INIT_COMPLETE={status}\n")
|
||||||
|
|
||||||
def start_initialization(self) -> None:
|
def start_initialization(self) -> None:
|
||||||
"""Mark initialization as started"""
|
|
||||||
self.set_status(False)
|
self.set_status(False)
|
||||||
|
|
||||||
def complete_initialization(self) -> None:
|
def complete_initialization(self) -> None:
|
||||||
"""Mark initialization as completed"""
|
|
||||||
self.set_status(True)
|
self.set_status(True)
|
||||||
|
|
||||||
|
|
||||||
# Configuration Management
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class PersistentConfig:
|
class PersistentConfig:
|
||||||
"""Persistent configuration mapping"""
|
|
||||||
|
|
||||||
source: str
|
source: str
|
||||||
target: str
|
target: str
|
||||||
type: str = "directory"
|
type: str = "directory"
|
||||||
@@ -77,25 +158,21 @@ class PersistentConfig:
|
|||||||
|
|
||||||
@dataclass
|
@dataclass
|
||||||
class ImageConfig:
|
class ImageConfig:
|
||||||
"""Cubbi image configuration"""
|
|
||||||
|
|
||||||
name: str
|
name: str
|
||||||
description: str
|
description: str
|
||||||
version: str
|
version: str
|
||||||
maintainer: str
|
maintainer: str
|
||||||
image: str
|
image: str
|
||||||
persistent_configs: List[PersistentConfig] = field(default_factory=list)
|
persistent_configs: list[PersistentConfig] = field(default_factory=list)
|
||||||
|
environments_to_forward: list[str] = field(default_factory=list)
|
||||||
|
|
||||||
|
|
||||||
class ConfigParser:
|
class ConfigParser:
|
||||||
"""Parses Cubbi image configuration and environment variables"""
|
|
||||||
|
|
||||||
def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"):
|
def __init__(self, config_file: str = "/cubbi/cubbi_image.yaml"):
|
||||||
self.config_file = Path(config_file)
|
self.config_file = Path(config_file)
|
||||||
self.environment: Dict[str, str] = dict(os.environ)
|
self.environment: dict[str, str] = dict(os.environ)
|
||||||
|
|
||||||
def load_image_config(self) -> ImageConfig:
|
def load_image_config(self) -> ImageConfig:
|
||||||
"""Load and parse the cubbi_image.yaml configuration"""
|
|
||||||
if not self.config_file.exists():
|
if not self.config_file.exists():
|
||||||
raise FileNotFoundError(f"Configuration file not found: {self.config_file}")
|
raise FileNotFoundError(f"Configuration file not found: {self.config_file}")
|
||||||
|
|
||||||
@@ -103,7 +180,6 @@ class ConfigParser:
|
|||||||
with open(self.config_file, "r") as f:
|
with open(self.config_file, "r") as f:
|
||||||
config_data = yaml.load(f)
|
config_data = yaml.load(f)
|
||||||
|
|
||||||
# Parse persistent configurations
|
|
||||||
persistent_configs = []
|
persistent_configs = []
|
||||||
for pc_data in config_data.get("persistent_configs", []):
|
for pc_data in config_data.get("persistent_configs", []):
|
||||||
persistent_configs.append(PersistentConfig(**pc_data))
|
persistent_configs.append(PersistentConfig(**pc_data))
|
||||||
@@ -115,48 +191,16 @@ class ConfigParser:
             maintainer=config_data["maintainer"],
             image=config_data["image"],
             persistent_configs=persistent_configs,
+            environments_to_forward=config_data.get("environments_to_forward", []),
         )

-    def get_cubbi_config(self) -> Dict[str, Any]:
-        """Get standard Cubbi configuration from environment"""
-        return {
-            "user_id": int(self.environment.get("CUBBI_USER_ID", "1000")),
-            "group_id": int(self.environment.get("CUBBI_GROUP_ID", "1000")),
-            "run_command": self.environment.get("CUBBI_RUN_COMMAND"),
-            "no_shell": self.environment.get("CUBBI_NO_SHELL", "false").lower()
-            == "true",
-            "config_dir": self.environment.get("CUBBI_CONFIG_DIR", "/cubbi-config"),
-            "persistent_links": self.environment.get("CUBBI_PERSISTENT_LINKS", ""),
-        }
-
-    def get_mcp_config(self) -> Dict[str, Any]:
-        """Get MCP server configuration from environment"""
-        mcp_count = int(self.environment.get("MCP_COUNT", "0"))
-        mcp_servers = []
-
-        for idx in range(mcp_count):
-            server = {
-                "name": self.environment.get(f"MCP_{idx}_NAME"),
-                "type": self.environment.get(f"MCP_{idx}_TYPE"),
-                "host": self.environment.get(f"MCP_{idx}_HOST"),
-                "url": self.environment.get(f"MCP_{idx}_URL"),
-            }
-            if server["name"]:  # Only add if name is present
-                mcp_servers.append(server)
-
-        return {"count": mcp_count, "servers": mcp_servers}


-# Core Management Classes
 class UserManager:
-    """Manages user and group creation/modification in containers"""

     def __init__(self, status: StatusManager):
         self.status = status
         self.username = "cubbi"

     def _run_command(self, cmd: list[str]) -> bool:
-        """Run a system command and log the result"""
         try:
             result = subprocess.run(cmd, capture_output=True, text=True, check=True)
             if result.stdout:

@@ -168,12 +212,10 @@ class UserManager:
                 return False

     def setup_user_and_group(self, user_id: int, group_id: int) -> bool:
-        """Set up user and group with specified IDs"""
         self.status.log(
             f"Setting up user '{self.username}' with UID: {user_id}, GID: {group_id}"
         )

-        # Handle group creation/modification
         try:
             existing_group = grp.getgrnam(self.username)
             if existing_group.gr_gid != group_id:
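The removed `get_mcp_config` read an indexed environment scheme (`MCP_COUNT` plus `MCP_<i>_NAME`/`_TYPE`/`_HOST`/`_URL` per server). For reference, a hedged standalone sketch of that pattern, with a plain dict standing in for `os.environ` so it runs anywhere:

```python
def parse_mcp_servers(env: dict[str, str]) -> dict:
    """Indexed-env parsing as in the removed get_mcp_config; a dict is
    passed in place of os.environ so the sketch is self-contained."""
    count = int(env.get("MCP_COUNT", "0"))
    servers = []
    for idx in range(count):
        server = {
            "name": env.get(f"MCP_{idx}_NAME"),
            "type": env.get(f"MCP_{idx}_TYPE"),
            "host": env.get(f"MCP_{idx}_HOST"),
            "url": env.get(f"MCP_{idx}_URL"),
        }
        if server["name"]:  # entries without a name are skipped
            servers.append(server)
    return {"count": count, "servers": servers}

env = {"MCP_COUNT": "2", "MCP_0_NAME": "search", "MCP_0_TYPE": "sse",
       "MCP_1_TYPE": "stdio"}  # second entry has no name -> dropped
print(parse_mcp_servers(env)["count"])         # 2
print(len(parse_mcp_servers(env)["servers"]))  # 1
```

Note the asymmetry the old code had: `count` reports the declared total, while `servers` keeps only the named entries.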
@@ -185,10 +227,7 @@ class UserManager:
             ):
                 return False
         except KeyError:
-            if not self._run_command(["groupadd", "-g", str(group_id), self.username]):
-                return False
+            self._run_command(["groupadd", "-g", str(group_id), self.username])

-        # Handle user creation/modification
         try:
             existing_user = pwd.getpwnam(self.username)
             if existing_user.pw_uid != user_id or existing_user.pw_gid != group_id:

@@ -222,19 +261,25 @@ class UserManager:
             ):
                 return False

+        sudoers_command = [
+            "sh",
+            "-c",
+            "echo 'cubbi ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cubbi && chmod 0440 /etc/sudoers.d/cubbi",
+        ]
+        if not self._run_command(sudoers_command):
+            self.status.log("Failed to create sudoers entry for cubbi", "ERROR")
+            return False
+
         return True


 class DirectoryManager:
-    """Manages directory creation and permission setup"""

     def __init__(self, status: StatusManager):
         self.status = status

     def create_directory(
         self, path: str, user_id: int, group_id: int, mode: int = 0o755
     ) -> bool:
-        """Create a directory with proper ownership and permissions"""
         dir_path = Path(path)

         try:

@@ -250,7 +295,6 @@ class DirectoryManager:
             return False

     def setup_standard_directories(self, user_id: int, group_id: int) -> bool:
-        """Set up standard Cubbi directories"""
         directories = [
            ("/app", 0o755),
            ("/cubbi-config", 0o755),
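The sudoers hunk above writes a single drop-in rule through `sh -c` and then restricts its mode, since sudo rejects group- or world-writable drop-ins. A hedged sketch of the same result in pure Python (rule text and the `0440` mode come from the diff; the temp directory stands in for `/etc/sudoers.d` so the sketch runs unprivileged):

```python
import stat
import tempfile
from pathlib import Path

# Rule text copied from the diff; /etc/sudoers.d/cubbi is the real target.
RULE = "cubbi ALL=(ALL) NOPASSWD:ALL\n"

def write_sudoers_entry(sudoers_dir: Path) -> Path:
    entry = sudoers_dir / "cubbi"
    entry.write_text(RULE)
    entry.chmod(0o440)  # sudo refuses writable drop-in files
    return entry

with tempfile.TemporaryDirectory() as d:
    entry = write_sudoers_entry(Path(d))
    print(entry.read_text().strip())                 # cubbi ALL=(ALL) NOPASSWD:ALL
    print(oct(stat.S_IMODE(entry.stat().st_mode)))   # 0o440
```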
@@ -307,7 +351,6 @@ class DirectoryManager:
         return success

     def _chown_recursive(self, path: Path, user_id: int, group_id: int) -> None:
-        """Recursively change ownership of a directory"""
         try:
             os.chown(path, user_id, group_id)
             for item in path.iterdir():

@@ -322,15 +365,12 @@ class DirectoryManager:


 class ConfigManager:
-    """Manages persistent configuration symlinks and mappings"""

     def __init__(self, status: StatusManager):
         self.status = status

     def create_symlink(
         self, source_path: str, target_path: str, user_id: int, group_id: int
     ) -> bool:
-        """Create a symlink with proper ownership"""
         try:
             source = Path(source_path)

@@ -357,7 +397,6 @@ class ConfigManager:
     def _ensure_target_directory(
         self, target_path: str, user_id: int, group_id: int
     ) -> bool:
-        """Ensure the target directory exists with proper ownership"""
         try:
             target_dir = Path(target_path)
             if not target_dir.exists():

@@ -375,9 +414,8 @@ class ConfigManager:
             return False

     def setup_persistent_configs(
-        self, persistent_configs: List[PersistentConfig], user_id: int, group_id: int
+        self, persistent_configs: list[PersistentConfig], user_id: int, group_id: int
     ) -> bool:
-        """Set up persistent configuration symlinks from image config"""
         if not persistent_configs:
             self.status.log("No persistent configurations defined in image config")
             return True
@@ -394,16 +432,22 @@ class ConfigManager:

         return success

+    def setup_persistent_link(
+        self, source: str, target: str, link_type: str, user_id: int, group_id: int
+    ) -> bool:
+        """Setup a single persistent link"""
+        if not self._ensure_target_directory(target, user_id, group_id):
+            return False
+
+        return self.create_symlink(source, target, user_id, group_id)


 class CommandManager:
-    """Manages command execution and user switching"""

     def __init__(self, status: StatusManager):
         self.status = status
         self.username = "cubbi"

-    def run_as_user(self, command: List[str], user: str = None) -> int:
-        """Run a command as the specified user using gosu"""
+    def run_as_user(self, command: list[str], user: str = None) -> int:
         if user is None:
             user = self.username

@@ -418,15 +462,13 @@ class CommandManager:
             return 1

     def run_user_command(self, command: str) -> int:
-        """Run user-specified command as cubbi user"""
         if not command:
             return 0

         self.status.log(f"Executing user command: {command}")
         return self.run_as_user(["sh", "-c", command])

-    def exec_as_user(self, args: List[str]) -> None:
-        """Execute the final command as cubbi user (replaces current process)"""
+    def exec_as_user(self, args: list[str]) -> None:
         if not args:
             args = ["tail", "-f", "/dev/null"]
@@ -441,31 +483,119 @@ class CommandManager:
             sys.exit(1)


-# Tool Plugin System
 class ToolPlugin(ABC):
-    """Base class for tool-specific initialization plugins"""
-
-    def __init__(self, status: StatusManager, config: Dict[str, Any]):
+    def __init__(self, status: StatusManager, config: dict[str, Any]):
         self.status = status
         self.config = config

     @property
     @abstractmethod
     def tool_name(self) -> str:
-        """Return the name of the tool this plugin supports"""
+        pass
+
+    def create_directory_with_ownership(self, path: Path) -> Path:
+        try:
+            path.mkdir(parents=True, exist_ok=True)
+            set_ownership(path)
+
+            # Also set ownership on parent directories if they were created
+            parent = path.parent
+            if parent.exists() and parent != Path("/"):
+                set_ownership(parent)
+
+        except OSError as e:
+            self.status.log(f"Failed to create directory {path}: {e}", "ERROR")
+
+        return path
+
+    @abstractmethod
+    def is_already_configured(self) -> bool:
         pass

     @abstractmethod
-    def initialize(self) -> bool:
-        """Main tool initialization logic"""
+    def configure(self) -> bool:
         pass

-    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
-        """Integrate with available MCP servers"""
-        return True
+    def get_resolved_model(self) -> dict[str, Any] | None:
+        model_spec = os.environ.get("CUBBI_MODEL_SPEC", "")
+        if not model_spec:
+            return None
+
+        # Parse provider/model format
+        if "/" in model_spec:
+            provider_name, model_name = model_spec.split("/", 1)
+        else:
+            # Legacy format - treat as provider name
+            provider_name = model_spec
+            model_name = ""
+
+        # Get provider type from CUBBI_PROVIDER env var
+        provider_type = os.environ.get("CUBBI_PROVIDER", provider_name)
+
+        # Get base URL if available (for OpenAI-compatible providers)
+        base_url = None
+        if provider_type == "openai":
+            base_url = os.environ.get("OPENAI_URL")
+
+        return {
+            "provider_name": provider_name,
+            "provider_type": provider_type,
+            "model_name": model_name,
+            "base_url": base_url,
+            "model_spec": model_spec,
+        }
+
+    def get_provider_config(self, provider_name: str) -> dict[str, str]:
+        provider_config = {}
+
+        # Map provider names to their environment variables
+        if provider_name == "anthropic" or provider_name.startswith("anthropic"):
+            api_key = os.environ.get("ANTHROPIC_API_KEY")
+            if api_key:
+                provider_config["ANTHROPIC_API_KEY"] = api_key
+
+        elif provider_name == "openai" or provider_name.startswith("openai"):
+            api_key = os.environ.get("OPENAI_API_KEY")
+            base_url = os.environ.get("OPENAI_URL")
+            if api_key:
+                provider_config["OPENAI_API_KEY"] = api_key
+            if base_url:
+                provider_config["OPENAI_URL"] = base_url
+
+        elif provider_name == "google" or provider_name.startswith("google"):
+            api_key = os.environ.get("GOOGLE_API_KEY")
+            if api_key:
+                provider_config["GOOGLE_API_KEY"] = api_key
+
+        elif provider_name == "openrouter" or provider_name.startswith("openrouter"):
+            api_key = os.environ.get("OPENROUTER_API_KEY")
+            if api_key:
+                provider_config["OPENROUTER_API_KEY"] = api_key
+
+        return provider_config
+
+    def get_all_providers_config(self) -> dict[str, dict[str, str]]:
+        all_providers = {}
+
+        # Check for each standard provider
+        standard_providers = ["anthropic", "openai", "google", "openrouter"]
+
+        for provider_name in standard_providers:
+            provider_config = self.get_provider_config(provider_name)
+            if provider_config:  # Only include providers with API keys
+                all_providers[provider_name] = provider_config
+
+        # Also check for custom OpenAI-compatible providers
+        # These would have been set up with custom names but use OpenAI env vars
+        openai_config = self.get_provider_config("openai")
+        if openai_config and "OPENAI_URL" in openai_config:
+            # This might be a custom provider - we could check for custom naming
+            # but for now, we'll just include it as openai
+            pass
+
+        return all_providers


-# Main Initializer
 class CubbiInitializer:
     """Main Cubbi initialization orchestrator"""
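The new `get_resolved_model` resolves a `provider/model` spec from `CUBBI_MODEL_SPEC`, with a bare provider name accepted as a legacy form. A standalone sketch of just the splitting rule (the helper name `parse_model_spec` is invented for illustration; the behaviour mirrors the `split("/", 1)` in the diff):

```python
def parse_model_spec(model_spec: str) -> tuple[str, str]:
    """Split 'provider/model' into its parts; a bare provider name
    (legacy format) yields an empty model name."""
    if "/" in model_spec:
        provider_name, model_name = model_spec.split("/", 1)
    else:
        provider_name, model_name = model_spec, ""
    return provider_name, model_name

print(parse_model_spec("anthropic/claude-3-5-sonnet"))  # ('anthropic', 'claude-3-5-sonnet')
print(parse_model_spec("openrouter/meta/llama-3-70b"))  # ('openrouter', 'meta/llama-3-70b')
print(parse_model_spec("openai"))                       # ('openai', '')
```

Because only the first `/` splits, model names that themselves contain slashes (common with OpenRouter) survive intact.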
@@ -477,28 +607,24 @@ class CubbiInitializer:
         self.config_manager = ConfigManager(self.status)
         self.command_manager = CommandManager(self.status)

-    def run_initialization(self, final_args: List[str]) -> None:
+    def run_initialization(self, final_args: list[str]) -> None:
         """Run the complete initialization process"""
         try:
             self.status.start_initialization()

             # Load configuration
             image_config = self.config_parser.load_image_config()
-            cubbi_config = self.config_parser.get_cubbi_config()
-            mcp_config = self.config_parser.get_mcp_config()

             self.status.log(f"Initializing {image_config.name} v{image_config.version}")

             # Core initialization
-            success = self._run_core_initialization(image_config, cubbi_config)
+            success = self._run_core_initialization(image_config)
             if not success:
                 self.status.log("Core initialization failed", "ERROR")
                 sys.exit(1)

             # Tool-specific initialization
-            success = self._run_tool_initialization(
-                image_config, cubbi_config, mcp_config
-            )
+            success = self._run_tool_initialization(image_config)
             if not success:
                 self.status.log("Tool initialization failed", "ERROR")
                 sys.exit(1)

@@ -507,16 +633,15 @@ class CubbiInitializer:
             self.status.complete_initialization()

             # Handle commands
-            self._handle_command_execution(cubbi_config, final_args)
+            self._handle_command_execution(final_args)

         except Exception as e:
             self.status.log(f"Initialization failed with error: {e}", "ERROR")
             sys.exit(1)

-    def _run_core_initialization(self, image_config, cubbi_config) -> bool:
-        """Run core Cubbi initialization steps"""
-        user_id = cubbi_config["user_id"]
-        group_id = cubbi_config["group_id"]
+    def _run_core_initialization(self, image_config) -> bool:
+        user_id = cubbi_config.user.uid
+        group_id = cubbi_config.user.gid

         if not self.user_manager.setup_user_and_group(user_id, group_id):
             return False

@@ -524,25 +649,29 @@ class CubbiInitializer:
         if not self.directory_manager.setup_standard_directories(user_id, group_id):
             return False

-        config_path = Path(cubbi_config["config_dir"])
-        if not config_path.exists():
-            self.status.log(f"Creating config directory: {cubbi_config['config_dir']}")
-            try:
-                config_path.mkdir(parents=True, exist_ok=True)
-                os.chown(cubbi_config["config_dir"], user_id, group_id)
-            except Exception as e:
-                self.status.log(f"Failed to create config directory: {e}", "ERROR")
-                return False
+        if cubbi_config.project.config_dir:
+            config_path = Path(cubbi_config.project.config_dir)
+            if not config_path.exists():
+                self.status.log(
+                    f"Creating config directory: {cubbi_config.project.config_dir}"
+                )
+                try:
+                    config_path.mkdir(parents=True, exist_ok=True)
+                    os.chown(cubbi_config.project.config_dir, user_id, group_id)
+                except Exception as e:
+                    self.status.log(f"Failed to create config directory: {e}", "ERROR")
+                    return False

-        if not self.config_manager.setup_persistent_configs(
-            image_config.persistent_configs, user_id, group_id
-        ):
-            return False
+        # Setup persistent configs
+        for link in cubbi_config.persistent_links:
+            if not self.config_manager.setup_persistent_link(
+                link.source, link.target, link.type, user_id, group_id
+            ):
+                return False

         return True

-    def _run_tool_initialization(self, image_config, cubbi_config, mcp_config) -> bool:
-        """Run tool-specific initialization"""
+    def _run_tool_initialization(self, image_config) -> bool:
         # Look for a tool-specific plugin file in the same directory
         plugin_name = image_config.name.lower().replace("-", "_")
         plugin_file = Path(__file__).parent / f"{plugin_name}_plugin.py"
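The per-link loop in the hunk above reduces each persistent config to: ensure the target's directory exists, then symlink the target onto the persistent source. A runnable sketch with temp paths (function name borrowed from the diff; the real code also fixes ownership via `chown`, skipped here because that needs root):

```python
import tempfile
from pathlib import Path

def setup_persistent_link(source: Path, target: Path) -> bool:
    source.mkdir(parents=True, exist_ok=True)       # persistent store side
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        target.symlink_to(source)
    return target.is_symlink()

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    ok = setup_persistent_link(root / "persist" / "goose",
                               root / "home" / ".config" / "goose")
    print(ok)  # True
```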
@@ -561,44 +690,26 @@ class CubbiInitializer:
             plugin_module = importlib.util.module_from_spec(spec)
             spec.loader.exec_module(plugin_module)

-            # Find the plugin class (should inherit from ToolPlugin)
-            plugin_class = None
-            for attr_name in dir(plugin_module):
-                attr = getattr(plugin_module, attr_name)
-                if (
-                    isinstance(attr, type)
-                    and hasattr(attr, "tool_name")
-                    and hasattr(attr, "initialize")
-                    and attr_name != "ToolPlugin"
-                ):  # Skip the base class
-                    plugin_class = attr
-                    break
-
-            if not plugin_class:
+            # Get the plugin class from the standard export variable
+            if not hasattr(plugin_module, "PLUGIN_CLASS"):
                 self.status.log(
-                    f"No valid plugin class found in {plugin_file}", "ERROR"
+                    f"No PLUGIN_CLASS variable found in {plugin_file}", "ERROR"
                 )
                 return False

+            plugin_class = plugin_module.PLUGIN_CLASS
+
             # Instantiate and run the plugin
-            plugin = plugin_class(
-                self.status,
-                {
-                    "image_config": image_config,
-                    "cubbi_config": cubbi_config,
-                    "mcp_config": mcp_config,
-                },
-            )
+            plugin = plugin_class(self.status, {"image_config": image_config})

             self.status.log(f"Running {plugin.tool_name}-specific initialization")

-            if not plugin.initialize():
-                self.status.log(f"{plugin.tool_name} initialization failed", "ERROR")
-                return False
-
-            if not plugin.integrate_mcp_servers(mcp_config):
-                self.status.log(f"{plugin.tool_name} MCP integration failed", "ERROR")
-                return False
+            if not plugin.is_already_configured():
+                if not plugin.configure():
+                    self.status.log(f"{plugin.tool_name} configuration failed", "ERROR")
+                    return False
+            else:
+                self.status.log(f"{plugin.tool_name} is already configured, skipping")

             return True
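Under the new discovery rule, a plugin module is valid as soon as it exposes a `PLUGIN_CLASS` variable; the old attribute scan over `dir(module)` is gone. A minimal hypothetical plugin module (class name, tool name, and return values are invented for illustration, only the `PLUGIN_CLASS` export and the `tool_name`/`is_already_configured`/`configure` surface come from the diff):

```python
# goose_plugin.py - hypothetical module following the PLUGIN_CLASS convention
class GoosePlugin:
    def __init__(self, status, config):
        self.status = status
        self.config = config

    @property
    def tool_name(self) -> str:
        return "goose"

    def is_already_configured(self) -> bool:
        return False  # always reconfigure in this sketch

    def configure(self) -> bool:
        return True   # nothing to do in this sketch

# The initializer looks for exactly this name in the loaded module:
PLUGIN_CLASS = GoosePlugin

plugin = PLUGIN_CLASS(status=None, config={"image_config": None})
print(plugin.tool_name)  # goose
```

Keying discovery on a single well-known export removes the false positives the `hasattr(attr, "initialize")` scan could hit on imported helper classes.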
@@ -608,22 +719,19 @@ class CubbiInitializer:
             )
             return False

-    def _handle_command_execution(self, cubbi_config, final_args):
-        """Handle command execution"""
+    def _handle_command_execution(self, final_args):
         exit_code = 0

-        if cubbi_config["run_command"]:
+        if cubbi_config.run_command:
             self.status.log("--- Executing initial command ---")
-            exit_code = self.command_manager.run_user_command(
-                cubbi_config["run_command"]
-            )
+            exit_code = self.command_manager.run_user_command(cubbi_config.run_command)
             self.status.log(
                 f"--- Initial command finished (exit code: {exit_code}) ---"
             )

-        if cubbi_config["no_shell"]:
+        if cubbi_config.no_shell:
             self.status.log(
-                "--- CUBBI_NO_SHELL=true, exiting container without starting shell ---"
+                "--- no_shell=true, exiting container without starting shell ---"
             )
             sys.exit(exit_code)

@@ -631,7 +739,6 @@


 def main() -> int:
-    """Main CLI entry point"""
     import argparse

     parser = argparse.ArgumentParser(
cubbi/images/gemini-cli/README.md deleted (entire file):
@@ -1,339 +0,0 @@
# Google Gemini CLI for Cubbi

This image provides Google Gemini CLI in a Cubbi container environment.

## Overview

Google Gemini CLI is an AI-powered development tool that allows you to query and edit large codebases, generate applications from PDFs/sketches, automate operational tasks, and integrate with media generation tools using Google's Gemini models.

## Features

- **Advanced AI Models**: Access to Gemini 1.5 Pro, Flash, and other Google AI models
- **Codebase Analysis**: Query and edit large codebases intelligently
- **Multi-modal Support**: Work with text, images, PDFs, and sketches
- **Google Search Grounding**: Ground queries using Google Search for up-to-date information
- **Secure Authentication**: API key management through Cubbi's secure environment system
- **Persistent Configuration**: Settings and history preserved across container restarts
- **Project Integration**: Seamless integration with existing projects

## Quick Start

### 1. Set up API Key

```bash
# For Google AI (recommended)
uv run -m cubbi.cli config set services.google.api_key "your-gemini-api-key"

# Alternative using GEMINI_API_KEY
uv run -m cubbi.cli config set services.gemini.api_key "your-gemini-api-key"
```

Get your API key from [Google AI Studio](https://aistudio.google.com/apikey).

### 2. Run Gemini CLI Environment

```bash
# Start Gemini CLI container with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/your/project

# Or without a project
uv run -m cubbi.cli session create --image gemini-cli
```

### 3. Use Gemini CLI

```bash
# Basic usage
gemini

# Interactive mode with specific query
gemini
> Write me a Discord bot that answers questions using a FAQ.md file

# Analyze existing project
gemini
> Give me a summary of all changes that went in yesterday

# Generate from sketch/PDF
gemini
> Create a web app based on this wireframe.png
```

## Configuration

### Supported API Keys

- `GEMINI_API_KEY`: Google AI API key for Gemini models
- `GOOGLE_API_KEY`: Alternative Google API key (compatibility)
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to Google Cloud service account JSON file

### Model Configuration

- `GEMINI_MODEL`: Default model (default: "gemini-1.5-pro")
  - Available: "gemini-1.5-pro", "gemini-1.5-flash", "gemini-1.0-pro"
- `GEMINI_TEMPERATURE`: Model temperature 0.0-2.0 (default: 0.7)
- `GEMINI_MAX_TOKENS`: Maximum tokens in response

### Advanced Configuration

- `GEMINI_SEARCH_ENABLED`: Enable Google Search grounding (true/false, default: false)
- `GEMINI_DEBUG`: Enable debug mode (true/false, default: false)
- `GCLOUD_PROJECT`: Google Cloud project ID
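The model knobs listed above are plain environment variables with documented defaults. A hedged sketch of reading them (defaults and the 0.0-2.0 range come from the list; the range check itself is an illustrative addition, not necessarily what the image does):

```python
import os

def read_model_settings(env=os.environ) -> dict:
    temperature = float(env.get("GEMINI_TEMPERATURE", "0.7"))
    if not 0.0 <= temperature <= 2.0:  # documented range
        raise ValueError(f"GEMINI_TEMPERATURE out of range: {temperature}")
    return {
        "model": env.get("GEMINI_MODEL", "gemini-1.5-pro"),
        "temperature": temperature,
    }

print(read_model_settings({}))  # {'model': 'gemini-1.5-pro', 'temperature': 0.7}
```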
### Network Configuration

- `HTTP_PROXY`: HTTP proxy server URL
- `HTTPS_PROXY`: HTTPS proxy server URL

## Usage Examples

### Basic AI Development

```bash
# Start Gemini CLI with your project
uv run -m cubbi.cli session create --image gemini-cli /path/to/project

# Inside the container:
gemini  # Start interactive session
```

### Codebase Analysis

```bash
# Analyze changes
gemini
> What are the main functions in src/main.py?

# Code generation
gemini
> Add error handling to the authentication module

# Documentation
gemini
> Generate README documentation for this project
```

### Multi-modal Development

```bash
# Work with images
gemini
> Analyze this architecture diagram and suggest improvements

# PDF processing
gemini
> Convert this API specification PDF to OpenAPI YAML

# Sketch to code
gemini
> Create a React component based on this UI mockup
```

### Advanced Features

```bash
# With Google Search grounding
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_SEARCH_ENABLED="true" \
  /path/to/project

# With specific model
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_MODEL="gemini-1.5-flash" \
  --env GEMINI_TEMPERATURE="0.3" \
  /path/to/project

# Debug mode
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_DEBUG="true" \
  /path/to/project
```

### Enterprise/Proxy Setup

```bash
# With proxy
uv run -m cubbi.cli session create --image gemini-cli \
  --env HTTPS_PROXY="https://proxy.company.com:8080" \
  /path/to/project

# With Google Cloud authentication
uv run -m cubbi.cli session create --image gemini-cli \
  --env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
  --env GCLOUD_PROJECT="your-project-id" \
  /path/to/project
```

## Persistent Configuration

The following directories are automatically persisted:

- `~/.config/gemini/`: Gemini CLI configuration files
- `~/.cache/gemini/`: Model cache and temporary files

Configuration files are maintained across container restarts, ensuring your preferences and session history are preserved.

## Model Recommendations

### Best Overall Performance
- **Gemini 1.5 Pro**: Excellent reasoning and code understanding
- **Gemini 1.5 Flash**: Faster responses, good for iterative development

### Cost-Effective Options
- **Gemini 1.5 Flash**: Lower cost, high speed
- **Gemini 1.0 Pro**: Basic model for simple tasks

### Specialized Use Cases
- **Code Analysis**: Gemini 1.5 Pro
- **Quick Iterations**: Gemini 1.5 Flash
- **Multi-modal Tasks**: Gemini 1.5 Pro (supports images, PDFs)

## File Structure

```
cubbi/images/gemini-cli/
├── Dockerfile           # Container image definition
├── cubbi_image.yaml     # Cubbi image configuration
├── gemini_plugin.py     # Authentication and setup plugin
└── README.md            # This documentation
```
## Authentication Flow

1. **API Key Setup**: API key configured via Cubbi configuration or environment variables
2. **Plugin Initialization**: `gemini_plugin.py` creates configuration files
3. **Environment File**: Creates `~/.config/gemini/.env` with the API key
4. **Configuration**: Creates `~/.config/gemini/config.json` with settings
5. **Ready**: Gemini CLI is ready for use with the configured authentication

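As an illustration (the exact contents depend on the environment variables passed to the session), a session started with an API key and default model settings ends up with files along these lines:

```
# ~/.config/gemini/.env (mode 0600)
GEMINI_API_KEY=your-key

# ~/.config/gemini/config.json
{
  "defaultModel": "gemini-1.5-pro",
  "temperature": 0.7
}
```
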
## Troubleshooting

### Common Issues

**No API Key Found**
```
ℹ️ No API key found - Gemini CLI will require authentication
```
**Solution**: Set the API key in the Cubbi configuration:
```bash
uv run -m cubbi.cli config set services.google.api_key "your-key"
```

**Authentication Failed**
```
Error: Invalid API key or authentication failed
```
**Solution**: Verify your API key at [Google AI Studio](https://aistudio.google.com/apikey):
```bash
# Test your API key
curl -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"
```

**Model Not Available**
```
Error: Model 'xyz' not found
```
**Solution**: Use supported models:
```bash
# List available models (inside container)
curl -H "Content-Type: application/json" \
  "https://generativelanguage.googleapis.com/v1beta/models?key=YOUR_API_KEY"
```

**Rate Limit Exceeded**
```
Error: Quota exceeded
```
**Solution**: The Google AI free tier allows:
- 60 requests per minute
- 1,000 requests per day
- Consider upgrading to Google Cloud for higher limits

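The free-tier quota can also be respected client-side. Below is a minimal sketch (not part of the image, and the `RateLimiter` name is purely illustrative) of a sliding-window throttle that keeps a script under the per-minute quota before each API call:

```python
import time
from collections import deque


class RateLimiter:
    """Sliding-window throttle: allow at most max_requests per window_seconds."""

    def __init__(self, max_requests: int, window_seconds: float) -> None:
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps: deque = deque()

    def acquire(self) -> None:
        """Block until a request slot is free, then record the request."""
        while True:
            now = time.monotonic()
            # Drop requests that have left the window
            while self.timestamps and now - self.timestamps[0] >= self.window:
                self.timestamps.popleft()
            if len(self.timestamps) < self.max_requests:
                self.timestamps.append(now)
                return
            # Sleep until the oldest recorded request expires
            time.sleep(max(0.0, self.window - (now - self.timestamps[0])))


# Example: at most 60 calls per minute, matching the free-tier quota
limiter = RateLimiter(60, 60.0)
```

Calling `limiter.acquire()` before each request makes bursts pause instead of failing with a quota error.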
**Network/Proxy Issues**
```
Connection timeout or proxy errors
```
**Solution**: Configure proxy settings:
```bash
uv run -m cubbi.cli config set network.https_proxy "your-proxy-url"
```

### Debug Mode

```bash
# Enable debug output
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_DEBUG="true"

# Check configuration
cat ~/.config/gemini/config.json

# Check environment
cat ~/.config/gemini/.env

# Test CLI directly
gemini --help
```

## Security Considerations

- **API Keys**: Stored securely with 0o600 permissions
- **Environment**: Isolated container environment
- **Configuration**: Secure file permissions for config files
- **Google Cloud**: Supports service account authentication for enterprise use

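The permission claim is easy to check. A small sketch (run outside the image; the temp file stands in for the real `~/.config/gemini/.env`) that sets owner-only permissions the same way the plugin does and verifies the result:

```python
import os
import stat
import tempfile

# Recreate the permission scheme the plugin applies to the .env file:
# owner read/write only (0600), no group or world access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # → 0o600
os.remove(path)
```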
## Advanced Configuration

### Custom Model Configuration

```bash
# Use specific model with custom settings
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_MODEL="gemini-1.5-flash" \
  --env GEMINI_TEMPERATURE="0.2" \
  --env GEMINI_MAX_TOKENS="8192"
```

### Google Search Integration

```bash
# Enable Google Search grounding for up-to-date information
uv run -m cubbi.cli session create --image gemini-cli \
  --env GEMINI_SEARCH_ENABLED="true"
```

### Google Cloud Integration

```bash
# Use with Google Cloud service account
uv run -m cubbi.cli session create --image gemini-cli \
  --env GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" \
  --env GCLOUD_PROJECT="your-project-id"
```

## API Limits and Pricing

### Free Tier (Google AI)
- 60 requests per minute
- 1,000 requests per day
- Personal Google account required

### Paid Tier (Google Cloud)
- Higher rate limits
- Enterprise features
- Service account authentication
- Custom quotas available

## Support

For issues related to:
- **Cubbi Integration**: Check the Cubbi documentation or open an issue
- **Gemini CLI Functionality**: Visit the [Gemini CLI documentation](https://github.com/google-gemini/gemini-cli)
- **Google AI Platform**: Visit the [Google AI documentation](https://ai.google.dev/)
- **API Keys**: Visit [Google AI Studio](https://aistudio.google.com/)

## License

This image configuration is provided under the same license as the Cubbi project. Google Gemini CLI is licensed separately by Google.
@@ -1,80 +0,0 @@
name: gemini-cli
description: Google Gemini CLI environment for AI-powered development
version: 1.0.0
maintainer: team@monadical.com
image: monadical/cubbi-gemini-cli:latest

environment:
  # Google AI Configuration
  - name: GEMINI_API_KEY
    description: Google AI API key for Gemini models
    required: false
    sensitive: true

  - name: GOOGLE_API_KEY
    description: Alternative Google API key (compatibility)
    required: false
    sensitive: true

  # Google Cloud Configuration
  - name: GOOGLE_APPLICATION_CREDENTIALS
    description: Path to Google Cloud service account JSON file
    required: false
    sensitive: true

  - name: GCLOUD_PROJECT
    description: Google Cloud project ID
    required: false

  # Model Configuration
  - name: GEMINI_MODEL
    description: Default Gemini model (e.g., gemini-1.5-pro, gemini-1.5-flash)
    required: false
    default: "gemini-1.5-pro"

  - name: GEMINI_TEMPERATURE
    description: Model temperature (0.0-2.0)
    required: false
    default: "0.7"

  - name: GEMINI_MAX_TOKENS
    description: Maximum tokens in response
    required: false

  # Search Configuration
  - name: GEMINI_SEARCH_ENABLED
    description: Enable Google Search grounding (true/false)
    required: false
    default: "false"

  # Proxy Configuration
  - name: HTTP_PROXY
    description: HTTP proxy server URL
    required: false

  - name: HTTPS_PROXY
    description: HTTPS proxy server URL
    required: false

  # Debug Configuration
  - name: GEMINI_DEBUG
    description: Enable debug mode (true/false)
    required: false
    default: "false"

ports: []

volumes:
  - mountPath: /app
    description: Application directory

persistent_configs:
  - source: "/home/cubbi/.config/gemini"
    target: "/cubbi-config/gemini-settings"
    type: "directory"
    description: "Gemini CLI configuration and history"

  - source: "/home/cubbi/.cache/gemini"
    target: "/cubbi-config/gemini-cache"
    type: "directory"
    description: "Gemini CLI cache directory"
@@ -1,241 +0,0 @@
#!/usr/bin/env python3
"""
Gemini CLI Plugin for Cubbi
Handles authentication setup and configuration for Google Gemini CLI
"""

import json
import os
import stat
from pathlib import Path
from typing import Any, Dict

from cubbi_init import ToolPlugin


class GeminiCliPlugin(ToolPlugin):
    """Plugin for setting up Gemini CLI authentication and configuration"""

    @property
    def tool_name(self) -> str:
        return "gemini-cli"

    def _get_user_ids(self) -> tuple[int, int]:
        """Get the cubbi user and group IDs from environment"""
        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
        return user_id, group_id

    def _set_ownership(self, path: Path) -> None:
        """Set ownership of a path to the cubbi user"""
        user_id, group_id = self._get_user_ids()
        try:
            os.chown(path, user_id, group_id)
        except OSError as e:
            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")

    def _get_gemini_config_dir(self) -> Path:
        """Get the Gemini configuration directory"""
        # Get the actual username from the config if available
        username = self.config.get("username", "cubbi")
        return Path(f"/home/{username}/.config/gemini")

    def _get_gemini_cache_dir(self) -> Path:
        """Get the Gemini cache directory"""
        # Get the actual username from the config if available
        username = self.config.get("username", "cubbi")
        return Path(f"/home/{username}/.cache/gemini")

    def _ensure_gemini_dirs(self) -> tuple[Path, Path]:
        """Ensure Gemini directories exist with correct ownership"""
        config_dir = self._get_gemini_config_dir()
        cache_dir = self._get_gemini_cache_dir()

        # Create directories
        for directory in [config_dir, cache_dir]:
            try:
                directory.mkdir(mode=0o755, parents=True, exist_ok=True)
                self._set_ownership(directory)
            except OSError as e:
                self.status.log(
                    f"Failed to create Gemini directory {directory}: {e}", "ERROR"
                )

        return config_dir, cache_dir

    def initialize(self) -> bool:
        """Initialize Gemini CLI configuration"""
        self.status.log("Setting up Gemini CLI configuration...")

        # Ensure Gemini directories exist
        config_dir, cache_dir = self._ensure_gemini_dirs()

        # Set up authentication and configuration
        auth_configured = self._setup_authentication(config_dir)
        config_created = self._create_configuration_file(config_dir)

        if auth_configured or config_created:
            self.status.log("✅ Gemini CLI configured successfully")
        else:
            self.status.log(
                "ℹ️ No API key found - Gemini CLI will require authentication",
                "INFO",
            )
            self.status.log(
                " You can configure API keys using environment variables", "INFO"
            )

        # Always return True to allow container to start
        return True

    def _setup_authentication(self, config_dir: Path) -> bool:
        """Set up Gemini authentication"""
        api_key = self._get_api_key()

        if not api_key:
            return False

        # Create environment file for API key
        env_file = config_dir / ".env"
        try:
            with open(env_file, "w") as f:
                f.write(f"GEMINI_API_KEY={api_key}\n")

            # Set ownership and secure file permissions
            self._set_ownership(env_file)
            os.chmod(env_file, stat.S_IRUSR | stat.S_IWUSR)

            self.status.log(f"Created Gemini environment file at {env_file}")
            self.status.log("Added Gemini API key")
            return True

        except Exception as e:
            self.status.log(f"Failed to create environment file: {e}", "ERROR")
            return False

    def _get_api_key(self) -> str:
        """Get the Gemini API key from environment variables"""
        # Check multiple possible environment variable names
        for key_name in ["GEMINI_API_KEY", "GOOGLE_API_KEY"]:
            api_key = os.environ.get(key_name)
            if api_key:
                return api_key
        return ""

    def _create_configuration_file(self, config_dir: Path) -> bool:
        """Create Gemini CLI configuration file"""
        try:
            config = self._build_configuration()

            if not config:
                return False

            config_file = config_dir / "config.json"
            with open(config_file, "w") as f:
                json.dump(config, f, indent=2)

            # Set ownership and permissions
            self._set_ownership(config_file)
            os.chmod(config_file, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

            self.status.log(f"Created Gemini configuration at {config_file}")
            return True

        except Exception as e:
            self.status.log(f"Failed to create configuration file: {e}", "ERROR")
            return False

    def _build_configuration(self) -> Dict[str, Any]:
        """Build Gemini CLI configuration from environment variables"""
        config = {}

        # Model configuration
        model = os.environ.get("GEMINI_MODEL", "gemini-1.5-pro")
        if model:
            config["defaultModel"] = model
            self.status.log(f"Set default model to {model}")

        # Temperature setting
        temperature = os.environ.get("GEMINI_TEMPERATURE")
        if temperature:
            try:
                temp_value = float(temperature)
                if 0.0 <= temp_value <= 2.0:
                    config["temperature"] = temp_value
                    self.status.log(f"Set temperature to {temp_value}")
                else:
                    self.status.log(
                        f"Invalid temperature value {temperature}, using default",
                        "WARNING",
                    )
            except ValueError:
                self.status.log(
                    f"Invalid temperature format {temperature}, using default",
                    "WARNING",
                )

        # Max tokens setting
        max_tokens = os.environ.get("GEMINI_MAX_TOKENS")
        if max_tokens:
            try:
                tokens_value = int(max_tokens)
                if tokens_value > 0:
                    config["maxTokens"] = tokens_value
                    self.status.log(f"Set max tokens to {tokens_value}")
                else:
                    self.status.log(
                        f"Invalid max tokens value {max_tokens}, using default",
                        "WARNING",
                    )
            except ValueError:
                self.status.log(
                    f"Invalid max tokens format {max_tokens}, using default",
                    "WARNING",
                )

        # Search configuration
        search_enabled = os.environ.get("GEMINI_SEARCH_ENABLED", "false")
        if search_enabled.lower() in ["true", "false"]:
            config["searchEnabled"] = search_enabled.lower() == "true"
            if config["searchEnabled"]:
                self.status.log("Enabled Google Search grounding")

        # Debug mode
        debug_mode = os.environ.get("GEMINI_DEBUG", "false")
        if debug_mode.lower() in ["true", "false"]:
            config["debug"] = debug_mode.lower() == "true"
            if config["debug"]:
                self.status.log("Enabled debug mode")

        # Proxy settings
        for proxy_var in ["HTTP_PROXY", "HTTPS_PROXY"]:
            proxy_value = os.environ.get(proxy_var)
            if proxy_value:
                config[proxy_var.lower()] = proxy_value
                self.status.log(f"Added proxy configuration: {proxy_var}")

        # Google Cloud project
        project = os.environ.get("GCLOUD_PROJECT")
        if project:
            config["project"] = project
            self.status.log(f"Set Google Cloud project to {project}")

        return config

    def setup_tool_configuration(self) -> bool:
        """Set up Gemini CLI configuration - called by base class"""
        # Additional tool configuration can be added here if needed
        return True

    def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
        """Integrate Gemini CLI with available MCP servers if applicable"""
        if mcp_config["count"] == 0:
            self.status.log("No MCP servers to integrate")
            return True

        # Gemini CLI doesn't have native MCP support,
        # but we could potentially add custom integrations here
        self.status.log(
            f"Found {mcp_config['count']} MCP server(s) - no direct integration available"
        )
        return True
@@ -1,312 +0,0 @@
#!/usr/bin/env python3
"""
Comprehensive test script for Gemini CLI Cubbi image
Tests Docker image build, API key configuration, and Cubbi CLI integration
"""

import subprocess
import sys
import os


def run_command(cmd, description="", check=True):
    """Run a shell command and return result"""
    print(f"\n🔍 {description}")
    print(f"Running: {cmd}")

    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, check=check
        )

        if result.stdout:
            print("STDOUT:")
            print(result.stdout)

        if result.stderr:
            print("STDERR:")
            print(result.stderr)

        return result
    except subprocess.CalledProcessError as e:
        print(f"❌ Command failed with exit code {e.returncode}")
        if e.stdout:
            print("STDOUT:")
            print(e.stdout)
        if e.stderr:
            print("STDERR:")
            print(e.stderr)
        if check:
            raise
        return e


def test_docker_build():
    """Test Docker image build"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Build")
    print("=" * 60)

    # Get the directory containing this test file
    test_dir = os.path.dirname(os.path.abspath(__file__))

    result = run_command(
        f"cd {test_dir} && docker build -t monadical/cubbi-gemini-cli:latest .",
        "Building Gemini CLI Docker image",
    )

    if result.returncode == 0:
        print("✅ Gemini CLI Docker image built successfully")
        return True
    else:
        print("❌ Gemini CLI Docker image build failed")
        return False


def test_docker_image_exists():
    """Test if the Gemini CLI Docker image exists"""
    print("\n" + "=" * 60)
    print("🧪 Testing Docker Image Existence")
    print("=" * 60)

    result = run_command(
        "docker images monadical/cubbi-gemini-cli:latest --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}'",
        "Checking if Gemini CLI Docker image exists",
    )

    if "monadical/cubbi-gemini-cli" in result.stdout:
        print("✅ Gemini CLI Docker image exists")
        return True
    else:
        print("❌ Gemini CLI Docker image not found")
        return False


def test_gemini_version():
    """Test basic Gemini CLI functionality in container"""
    print("\n" + "=" * 60)
    print("🧪 Testing Gemini CLI Version")
    print("=" * 60)

    result = run_command(
        "docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'gemini --version'",
        "Testing Gemini CLI version command",
    )

    if result.returncode == 0 and (
        "gemini" in result.stdout.lower() or "version" in result.stdout.lower()
    ):
        print("✅ Gemini CLI version command works")
        return True
    else:
        print("❌ Gemini CLI version command failed")
        return False


def test_api_key_configuration():
    """Test API key configuration and environment setup"""
    print("\n" + "=" * 60)
    print("🧪 Testing API Key Configuration")
    print("=" * 60)

    # Test with multiple API keys
    test_keys = {
        "GEMINI_API_KEY": "test-gemini-key",
        "GOOGLE_API_KEY": "test-google-key",
    }

    env_flags = " ".join([f'-e {key}="{value}"' for key, value in test_keys.items()])

    result = run_command(
        f"docker run --rm {env_flags} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/.env 2>/dev/null || echo \"No .env file found\"'",
        "Testing API key configuration in .env file",
    )

    success = True
    if "test-gemini-key" in result.stdout:
        print("✅ GEMINI_API_KEY configured correctly")
    else:
        print("❌ GEMINI_API_KEY not found in configuration")
        success = False

    return success


def test_configuration_file():
    """Test Gemini CLI configuration file creation"""
    print("\n" + "=" * 60)
    print("🧪 Testing Configuration File")
    print("=" * 60)

    env_vars = "-e GEMINI_API_KEY='test-key' -e GEMINI_MODEL='gemini-1.5-pro' -e GEMINI_TEMPERATURE='0.5'"

    result = run_command(
        f"docker run --rm {env_vars} monadical/cubbi-gemini-cli:latest bash -c 'cat ~/.config/gemini/config.json 2>/dev/null || echo \"No config file found\"'",
        "Testing configuration file creation",
    )

    success = True
    if "gemini-1.5-pro" in result.stdout:
        print("✅ Default model configured correctly")
    else:
        print("❌ Default model not found in configuration")
        success = False

    if "0.5" in result.stdout:
        print("✅ Temperature configured correctly")
    else:
        print("❌ Temperature not found in configuration")
        success = False

    return success


def test_cubbi_cli_integration():
    """Test Cubbi CLI integration"""
    print("\n" + "=" * 60)
    print("🧪 Testing Cubbi CLI Integration")
    print("=" * 60)

    # Change to project root for cubbi commands
    project_root = os.path.dirname(
        os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    )

    # Test image listing
    result = run_command(
        f"cd {project_root} && uv run -m cubbi.cli image list",
        "Testing Cubbi CLI can see images",
        check=False,
    )

    if "gemini-cli" in result.stdout:
        print("✅ Cubbi CLI can list Gemini CLI image")
    else:
        print(
            "ℹ️ Gemini CLI image not yet registered with Cubbi CLI - this is expected during development"
        )

    # Test basic cubbi CLI works
    result = run_command(
        f"cd {project_root} && uv run -m cubbi.cli --help",
        "Testing basic Cubbi CLI functionality",
    )

    if result.returncode == 0 and "cubbi" in result.stdout.lower():
        print("✅ Cubbi CLI basic functionality works")
        return True
    else:
        print("❌ Cubbi CLI basic functionality failed")
        return False


def test_persistent_configuration():
    """Test persistent configuration directories"""
    print("\n" + "=" * 60)
    print("🧪 Testing Persistent Configuration")
    print("=" * 60)

    # Test that persistent directories are created
    result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-key' monadical/cubbi-gemini-cli:latest bash -c 'ls -la ~/.config/ && ls -la ~/.cache/'",
        "Testing persistent configuration directories",
    )

    success = True

    if "gemini" in result.stdout:
        print("✅ ~/.config/gemini directory exists")
    else:
        print("❌ ~/.config/gemini directory not found")
        success = False

    if "gemini" in result.stdout:
        print("✅ ~/.cache/gemini directory exists")
    else:
        print("❌ ~/.cache/gemini directory not found")
        success = False

    return success


def test_plugin_functionality():
    """Test the Gemini CLI plugin functionality"""
    print("\n" + "=" * 60)
    print("🧪 Testing Plugin Functionality")
    print("=" * 60)

    # Test plugin without API keys (should still work)
    result = run_command(
        "docker run --rm monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test without API keys\"'",
        "Testing plugin functionality without API keys",
    )

    if "No API key found - Gemini CLI will require authentication" in result.stdout:
        print("✅ Plugin handles missing API keys gracefully")
    else:
        print("ℹ️ Plugin API key handling test - check output above")

    # Test plugin with API keys
    result = run_command(
        "docker run --rm -e GEMINI_API_KEY='test-plugin-key' monadical/cubbi-gemini-cli:latest bash -c 'echo \"Plugin test with API keys\"'",
        "Testing plugin functionality with API keys",
    )

    if "Gemini CLI configured successfully" in result.stdout:
        print("✅ Plugin configures environment successfully")
        return True
    else:
        print("❌ Plugin environment configuration failed")
        return False


def main():
    """Run all tests"""
    print("🚀 Starting Gemini CLI Cubbi Image Tests")
    print("=" * 60)

    tests = [
        ("Docker Image Build", test_docker_build),
        ("Docker Image Exists", test_docker_image_exists),
        ("Cubbi CLI Integration", test_cubbi_cli_integration),
        ("Gemini CLI Version", test_gemini_version),
        ("API Key Configuration", test_api_key_configuration),
        ("Configuration File", test_configuration_file),
        ("Persistent Configuration", test_persistent_configuration),
        ("Plugin Functionality", test_plugin_functionality),
    ]

    results = {}

    for test_name, test_func in tests:
        try:
            results[test_name] = test_func()
        except Exception as e:
            print(f"❌ Test '{test_name}' failed with exception: {e}")
            results[test_name] = False

    # Print summary
    print("\n" + "=" * 60)
    print("📊 TEST SUMMARY")
    print("=" * 60)

    total_tests = len(tests)
    passed_tests = sum(1 for result in results.values() if result)
    failed_tests = total_tests - passed_tests

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} {test_name}")

    print(f"\nTotal: {total_tests} | Passed: {passed_tests} | Failed: {failed_tests}")

    if failed_tests == 0:
        print("\n🎉 All tests passed! Gemini CLI image is ready for use.")
        return 0
    else:
        print(f"\n⚠️ {failed_tests} test(s) failed. Please check the output above.")
        return 1


if __name__ == "__main__":
    sys.exit(main())
@@ -6,6 +6,7 @@ LABEL description="Goose for Cubbi"
 # Install system dependencies including gosu for user switching and shadow for useradd/groupadd
 RUN apt-get update && apt-get install -y --no-install-recommends \
     gosu \
+    sudo \
     passwd \
     bash \
     curl \
@@ -3,36 +3,14 @@ description: Goose AI environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-goose:latest
-
-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment:
-  - name: LANGFUSE_INIT_PROJECT_PUBLIC_KEY
-    description: Langfuse public key
-    required: false
-    sensitive: true
-
-  - name: LANGFUSE_INIT_PROJECT_SECRET_KEY
-    description: Langfuse secret key
-    required: false
-    sensitive: true
-
-  - name: LANGFUSE_URL
-    description: Langfuse API URL
-    required: false
-    default: https://cloud.langfuse.com
-
-ports:
-  - 8000
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
-persistent_configs:
-  - source: "/app/.goose"
-    target: "/cubbi-config/goose-app"
-    type: "directory"
-    description: "Goose memory"
+persistent_configs: []
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS
@@ -1,75 +1,80 @@
 #!/usr/bin/env python3
-"""
-Goose-specific plugin for Cubbi initialization
-"""

 import os
 from pathlib import Path
-from typing import Any, Dict

-from cubbi_init import ToolPlugin
+from cubbi_init import ToolPlugin, cubbi_config, set_ownership
 from ruamel.yaml import YAML


 class GoosePlugin(ToolPlugin):
-    """Plugin for Goose AI tool initialization"""

     @property
     def tool_name(self) -> str:
         return "goose"

-    def _get_user_ids(self) -> tuple[int, int]:
-        """Get the cubbi user and group IDs from environment"""
-        user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
-        group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
-        return user_id, group_id
+    def is_already_configured(self) -> bool:
+        config_file = Path("/home/cubbi/.config/goose/config.yaml")
+        return config_file.exists()

-    def _set_ownership(self, path: Path) -> None:
-        """Set ownership of a path to the cubbi user"""
-        user_id, group_id = self._get_user_ids()
-        try:
-            os.chown(path, user_id, group_id)
-        except OSError as e:
-            self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
+    def configure(self) -> bool:
+        self._ensure_user_config_dir()
+        if not self.setup_tool_configuration():
+            return False
+        return self.integrate_mcp_servers()

     def _get_user_config_path(self) -> Path:
-        """Get the correct config path for the cubbi user"""
         return Path("/home/cubbi/.config/goose")

     def _ensure_user_config_dir(self) -> Path:
-        """Ensure config directory exists with correct ownership"""
         config_dir = self._get_user_config_path()
+        return self.create_directory_with_ownership(config_dir)

-        # Create the full directory path
-        try:
-            config_dir.mkdir(parents=True, exist_ok=True)
-        except FileExistsError:
-            # Directory already exists, which is fine
-            pass
-        except OSError as e:
+    def _write_env_vars_to_profile(self, env_vars: dict) -> None:
+        try:
+            profile_path = Path("/home/cubbi/.bashrc")
+
+            env_section_start = "# CUBBI GOOSE ENVIRONMENT VARIABLES"
+            env_section_end = "# END CUBBI GOOSE ENVIRONMENT VARIABLES"
+
+            if profile_path.exists():
+                with open(profile_path, "r") as f:
+                    lines = f.readlines()
+            else:
+                lines = []
+
+            new_lines = []
+            skip_section = False
+            for line in lines:
+                if env_section_start in line:
+                    skip_section = True
+                elif env_section_end in line:
+                    skip_section = False
+                    continue
+                elif not skip_section:
+                    new_lines.append(line)
+
+            if env_vars:
+                new_lines.append(f"\n{env_section_start}\n")
+                for key, value in env_vars.items():
+                    new_lines.append(f'export {key}="{value}"\n')
|
||||||
|
new_lines.append(f"{env_section_end}\n")
|
||||||
|
|
||||||
|
profile_path.parent.mkdir(parents=True, exist_ok=True)
|
||||||
|
with open(profile_path, "w") as f:
|
||||||
|
f.writelines(new_lines)
|
||||||
|
|
||||||
|
set_ownership(profile_path)
|
||||||
|
|
||||||
self.status.log(
|
self.status.log(
|
||||||
f"Failed to create config directory {config_dir}: {e}", "ERROR"
|
f"Updated shell profile with {len(env_vars)} environment variables"
|
||||||
)
|
)
|
||||||
return config_dir
|
|
||||||
|
|
||||||
# Set ownership for the directories
|
except Exception as e:
|
||||||
config_parent = config_dir.parent
|
self.status.log(
|
||||||
if config_parent.exists():
|
f"Failed to write environment variables to profile: {e}", "ERROR"
|
||||||
self._set_ownership(config_parent)
|
)
|
||||||
|
|
||||||
if config_dir.exists():
|
|
||||||
self._set_ownership(config_dir)
|
|
||||||
|
|
||||||
return config_dir
|
|
||||||
|
|
||||||
def initialize(self) -> bool:
|
|
||||||
"""Initialize Goose configuration"""
|
|
||||||
self._ensure_user_config_dir()
|
|
||||||
return self.setup_tool_configuration()
|
|
||||||
|
|
||||||
def setup_tool_configuration(self) -> bool:
|
def setup_tool_configuration(self) -> bool:
|
||||||
"""Set up Goose configuration file"""
|
|
||||||
# Ensure directory exists before writing
|
|
||||||
config_dir = self._ensure_user_config_dir()
|
config_dir = self._ensure_user_config_dir()
|
||||||
if not config_dir.exists():
|
if not config_dir.exists():
|
||||||
self.status.log(
|
self.status.log(
|
||||||
@@ -99,24 +104,58 @@ class GoosePlugin(ToolPlugin):
|
|||||||
"type": "builtin",
|
"type": "builtin",
|
||||||
}
|
}
|
||||||
|
|
||||||
# Update with environment variables
|
# Configure Goose with the default model
|
||||||
goose_model = os.environ.get("CUBBI_MODEL")
|
provider_config = cubbi_config.get_provider_for_default_model()
|
||||||
goose_provider = os.environ.get("CUBBI_PROVIDER")
|
if provider_config and cubbi_config.defaults.model:
|
||||||
|
_, model_name = cubbi_config.defaults.model.split("/", 1)
|
||||||
|
|
||||||
if goose_model:
|
# Set Goose model and provider
|
||||||
config_data["GOOSE_MODEL"] = goose_model
|
config_data["GOOSE_MODEL"] = model_name
|
||||||
self.status.log(f"Set GOOSE_MODEL to {goose_model}")
|
config_data["GOOSE_PROVIDER"] = provider_config.type
|
||||||
|
|
||||||
if goose_provider:
|
# Set ONLY the specific API key for the selected provider
|
||||||
config_data["GOOSE_PROVIDER"] = goose_provider
|
# Set both in current process AND in shell environment file
|
||||||
self.status.log(f"Set GOOSE_PROVIDER to {goose_provider}")
|
env_vars_to_set = {}
|
||||||
|
|
||||||
|
if provider_config.type == "anthropic" and provider_config.api_key:
|
||||||
|
env_vars_to_set["ANTHROPIC_API_KEY"] = provider_config.api_key
|
||||||
|
self.status.log("Set Anthropic API key for goose")
|
||||||
|
elif provider_config.type == "openai" and provider_config.api_key:
|
||||||
|
# For OpenAI-compatible providers (including litellm), goose expects OPENAI_API_KEY
|
||||||
|
env_vars_to_set["OPENAI_API_KEY"] = provider_config.api_key
|
||||||
|
self.status.log("Set OpenAI API key for goose")
|
||||||
|
# Set base URL for OpenAI-compatible providers in both env and config
|
||||||
|
if provider_config.base_url:
|
||||||
|
env_vars_to_set["OPENAI_BASE_URL"] = provider_config.base_url
|
||||||
|
config_data["OPENAI_HOST"] = provider_config.base_url
|
||||||
|
self.status.log(
|
||||||
|
f"Set OPENAI_BASE_URL and OPENAI_HOST to {provider_config.base_url}"
|
||||||
|
)
|
||||||
|
elif provider_config.type == "google" and provider_config.api_key:
|
||||||
|
env_vars_to_set["GOOGLE_API_KEY"] = provider_config.api_key
|
||||||
|
self.status.log("Set Google API key for goose")
|
||||||
|
elif provider_config.type == "openrouter" and provider_config.api_key:
|
||||||
|
env_vars_to_set["OPENROUTER_API_KEY"] = provider_config.api_key
|
||||||
|
self.status.log("Set OpenRouter API key for goose")
|
||||||
|
|
||||||
|
# Set environment variables for current process (for --run commands)
|
||||||
|
for key, value in env_vars_to_set.items():
|
||||||
|
os.environ[key] = value
|
||||||
|
|
||||||
|
# Write environment variables to shell profile for interactive sessions
|
||||||
|
self._write_env_vars_to_profile(env_vars_to_set)
|
||||||
|
|
||||||
|
self.status.log(
|
||||||
|
f"Configured Goose: model={model_name}, provider={provider_config.type}"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
self.status.log("No default model or provider configured", "WARNING")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
with config_file.open("w") as f:
|
with config_file.open("w") as f:
|
||||||
yaml.dump(config_data, f)
|
yaml.dump(config_data, f)
|
||||||
|
|
||||||
# Set ownership of the config file to cubbi user
|
set_ownership(config_file)
|
||||||
self._set_ownership(config_file)
|
|
||||||
|
|
||||||
self.status.log(f"Updated Goose configuration at {config_file}")
|
self.status.log(f"Updated Goose configuration at {config_file}")
|
||||||
return True
|
return True
|
||||||
@@ -124,13 +163,11 @@ class GoosePlugin(ToolPlugin):
|
|||||||
self.status.log(f"Failed to write Goose configuration: {e}", "ERROR")
|
self.status.log(f"Failed to write Goose configuration: {e}", "ERROR")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
|
def integrate_mcp_servers(self) -> bool:
|
||||||
"""Integrate Goose with available MCP servers"""
|
if not cubbi_config.mcps:
|
||||||
if mcp_config["count"] == 0:
|
|
||||||
self.status.log("No MCP servers to integrate")
|
self.status.log("No MCP servers to integrate")
|
||||||
return True
|
return True
|
||||||
|
|
||||||
# Ensure directory exists before writing
|
|
||||||
config_dir = self._ensure_user_config_dir()
|
config_dir = self._ensure_user_config_dir()
|
||||||
if not config_dir.exists():
|
if not config_dir.exists():
|
||||||
self.status.log(
|
self.status.log(
|
||||||
@@ -151,45 +188,59 @@ class GoosePlugin(ToolPlugin):
|
|||||||
if "extensions" not in config_data:
|
if "extensions" not in config_data:
|
||||||
config_data["extensions"] = {}
|
config_data["extensions"] = {}
|
||||||
|
|
||||||
for server in mcp_config["servers"]:
|
for mcp in cubbi_config.mcps:
|
||||||
server_name = server["name"]
|
if mcp.type == "remote":
|
||||||
server_host = server["host"]
|
if mcp.name and mcp.url:
|
||||||
server_url = server["url"]
|
self.status.log(
|
||||||
|
f"Adding remote MCP extension: {mcp.name} - {mcp.url}"
|
||||||
if server_name and server_host:
|
)
|
||||||
mcp_url = f"http://{server_host}:8080/sse"
|
config_data["extensions"][mcp.name] = {
|
||||||
self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
|
"enabled": True,
|
||||||
|
"name": mcp.name,
|
||||||
config_data["extensions"][server_name] = {
|
"timeout": 60,
|
||||||
"enabled": True,
|
"type": "sse",
|
||||||
"name": server_name,
|
"uri": mcp.url,
|
||||||
"timeout": 60,
|
"envs": {},
|
||||||
"type": "sse",
|
}
|
||||||
"uri": mcp_url,
|
elif mcp.type == "local":
|
||||||
"envs": {},
|
if mcp.name and mcp.command:
|
||||||
}
|
self.status.log(
|
||||||
elif server_name and server_url:
|
f"Adding local MCP extension: {mcp.name} - {mcp.command}"
|
||||||
self.status.log(
|
)
|
||||||
f"Adding remote MCP extension: {server_name} - {server_url}"
|
# Goose uses stdio type for local MCPs
|
||||||
)
|
config_data["extensions"][mcp.name] = {
|
||||||
|
"enabled": True,
|
||||||
config_data["extensions"][server_name] = {
|
"name": mcp.name,
|
||||||
"enabled": True,
|
"timeout": 60,
|
||||||
"name": server_name,
|
"type": "stdio",
|
||||||
"timeout": 60,
|
"command": mcp.command,
|
||||||
"type": "sse",
|
"args": mcp.args if mcp.args else [],
|
||||||
"uri": server_url,
|
"envs": mcp.env if mcp.env else {},
|
||||||
"envs": {},
|
}
|
||||||
}
|
elif mcp.type in ["docker", "proxy"]:
|
||||||
|
if mcp.name and mcp.host:
|
||||||
|
mcp_port = mcp.port or 8080
|
||||||
|
mcp_url = f"http://{mcp.host}:{mcp_port}/sse"
|
||||||
|
self.status.log(f"Adding MCP extension: {mcp.name} - {mcp_url}")
|
||||||
|
config_data["extensions"][mcp.name] = {
|
||||||
|
"enabled": True,
|
||||||
|
"name": mcp.name,
|
||||||
|
"timeout": 60,
|
||||||
|
"type": "sse",
|
||||||
|
"uri": mcp_url,
|
||||||
|
"envs": {},
|
||||||
|
}
|
||||||
|
|
||||||
try:
|
try:
|
||||||
with config_file.open("w") as f:
|
with config_file.open("w") as f:
|
||||||
yaml.dump(config_data, f)
|
yaml.dump(config_data, f)
|
||||||
|
|
||||||
# Set ownership of the config file to cubbi user
|
set_ownership(config_file)
|
||||||
self._set_ownership(config_file)
|
|
||||||
|
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
|
self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
PLUGIN_CLASS = GoosePlugin
|
||||||
|
@@ -6,6 +6,7 @@ LABEL description="Opencode for Cubbi"
 # Install system dependencies including gosu for user switching and shadow for useradd/groupadd
 RUN apt-get update && apt-get install -y --no-install-recommends \
     gosu \
+    sudo \
    passwd \
     bash \
     curl \
@@ -30,12 +31,22 @@ RUN curl -fsSL https://astral.sh/uv/install.sh -o install.sh && \
     mv /root/.local/bin/uvx /usr/local/bin/uvx && \
     rm install.sh

-# Install opencode-ai
+# Install Node.js
+ARG NODE_VERSION=v22.16.0
 RUN mkdir -p /opt/node && \
-    curl -fsSL https://nodejs.org/dist/v22.16.0/node-v22.16.0-linux-x64.tar.gz -o node.tar.gz && \
+    ARCH=$(uname -m) && \
+    if [ "$ARCH" = "x86_64" ]; then \
+        NODE_ARCH=linux-x64; \
+    elif [ "$ARCH" = "aarch64" ]; then \
+        NODE_ARCH=linux-arm64; \
+    else \
+        echo "Unsupported architecture"; exit 1; \
+    fi && \
+    curl -fsSL https://nodejs.org/dist/$NODE_VERSION/node-$NODE_VERSION-$NODE_ARCH.tar.gz -o node.tar.gz && \
     tar -xf node.tar.gz -C /opt/node --strip-components=1 && \
     rm node.tar.gz


 ENV PATH="/opt/node/bin:$PATH"
 RUN npm i -g yarn
 RUN npm i -g opencode-ai
@@ -56,6 +67,7 @@ RUN echo '[ -x /cubbi/init-status.sh ] && /cubbi/init-status.sh' >> /etc/bash.ba
 ENV PYTHONUNBUFFERED=1
 ENV PYTHONDONTWRITEBYTECODE=1
 ENV UV_LINK_MODE=copy
+ENV COLORTERM=truecolor

 # Pre-install the cubbi_init
 RUN /cubbi/cubbi_init.py --help
@@ -3,16 +3,14 @@ description: Opencode AI environment
 version: 1.0.0
 maintainer: team@monadical.com
 image: monadical/cubbi-opencode:latest

-init:
-  pre_command: /cubbi-init.sh
-  command: /entrypoint.sh
-
-environment: []
-ports: []
-
-volumes:
-  - mountPath: /app
-    description: Application directory
-
 persistent_configs: []
+environments_to_forward:
+  # API Keys
+  - OPENAI_API_KEY
+  - ANTHROPIC_API_KEY
+  - ANTHROPIC_AUTH_TOKEN
+  - ANTHROPIC_CUSTOM_HEADERS
+  - DEEPSEEK_API_KEY
+  - GEMINI_API_KEY
+  - OPENROUTER_API_KEY
+  - AIDER_API_KEYS
|||||||
@@ -1,255 +1,253 @@
|
|||||||
#!/usr/bin/env python3
|
#!/usr/bin/env python3
|
||||||
"""
|
|
||||||
Opencode-specific plugin for Cubbi initialization
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
import json
|
||||||
import os
|
import os
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict
|
|
||||||
|
|
||||||
from cubbi_init import ToolPlugin
|
from cubbi_init import ToolPlugin, cubbi_config, set_ownership
|
||||||
|
|
||||||
# Map of environment variables to provider names in auth.json
|
# Standard providers that OpenCode supports natively
|
||||||
API_KEY_MAPPINGS = {
|
STANDARD_PROVIDERS: list[str] = ["anthropic", "openai", "google", "openrouter"]
|
||||||
"ANTHROPIC_API_KEY": "anthropic",
|
|
||||||
"GOOGLE_API_KEY": "google",
|
|
||||||
"OPENAI_API_KEY": "openai",
|
|
||||||
"OPENROUTER_API_KEY": "openrouter",
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
class OpencodePlugin(ToolPlugin):
|
class OpencodePlugin(ToolPlugin):
|
||||||
"""Plugin for Opencode AI tool initialization"""
|
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def tool_name(self) -> str:
|
def tool_name(self) -> str:
|
||||||
return "opencode"
|
return "opencode"
|
||||||
|
|
||||||
def _get_user_ids(self) -> tuple[int, int]:
|
|
||||||
"""Get the cubbi user and group IDs from environment"""
|
|
||||||
user_id = int(os.environ.get("CUBBI_USER_ID", "1000"))
|
|
||||||
group_id = int(os.environ.get("CUBBI_GROUP_ID", "1000"))
|
|
||||||
return user_id, group_id
|
|
||||||
|
|
||||||
def _set_ownership(self, path: Path) -> None:
|
|
||||||
"""Set ownership of a path to the cubbi user"""
|
|
||||||
user_id, group_id = self._get_user_ids()
|
|
||||||
try:
|
|
||||||
os.chown(path, user_id, group_id)
|
|
||||||
except OSError as e:
|
|
||||||
self.status.log(f"Failed to set ownership for {path}: {e}", "WARNING")
|
|
||||||
|
|
||||||
def _get_user_config_path(self) -> Path:
|
def _get_user_config_path(self) -> Path:
|
||||||
"""Get the correct config path for the cubbi user"""
|
|
||||||
return Path("/home/cubbi/.config/opencode")
|
return Path("/home/cubbi/.config/opencode")
|
||||||
|
|
||||||
def _get_user_data_path(self) -> Path:
|
def is_already_configured(self) -> bool:
|
||||||
"""Get the correct data path for the cubbi user"""
|
config_file = self._get_user_config_path() / "config.json"
|
||||||
return Path("/home/cubbi/.local/share/opencode")
|
return config_file.exists()
|
||||||
|
|
||||||
def _ensure_user_config_dir(self) -> Path:
|
def configure(self) -> bool:
|
||||||
"""Ensure config directory exists with correct ownership"""
|
self.create_directory_with_ownership(self._get_user_config_path())
|
||||||
config_dir = self._get_user_config_path()
|
|
||||||
|
|
||||||
# Create the full directory path
|
|
||||||
try:
|
|
||||||
config_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
except FileExistsError:
|
|
||||||
# Directory already exists, which is fine
|
|
||||||
pass
|
|
||||||
except OSError as e:
|
|
||||||
self.status.log(
|
|
||||||
f"Failed to create config directory {config_dir}: {e}", "ERROR"
|
|
||||||
)
|
|
||||||
return config_dir
|
|
||||||
|
|
||||||
# Set ownership for the directories
|
|
||||||
config_parent = config_dir.parent
|
|
||||||
if config_parent.exists():
|
|
||||||
self._set_ownership(config_parent)
|
|
||||||
|
|
||||||
if config_dir.exists():
|
|
||||||
self._set_ownership(config_dir)
|
|
||||||
|
|
||||||
return config_dir
|
|
||||||
|
|
||||||
def _ensure_user_data_dir(self) -> Path:
|
|
||||||
"""Ensure data directory exists with correct ownership"""
|
|
||||||
data_dir = self._get_user_data_path()
|
|
||||||
|
|
||||||
# Create the full directory path
|
|
||||||
try:
|
|
||||||
data_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
except FileExistsError:
|
|
||||||
# Directory already exists, which is fine
|
|
||||||
pass
|
|
||||||
except OSError as e:
|
|
||||||
self.status.log(f"Failed to create data directory {data_dir}: {e}", "ERROR")
|
|
||||||
return data_dir
|
|
||||||
|
|
||||||
# Set ownership for the directories
|
|
||||||
data_parent = data_dir.parent
|
|
||||||
if data_parent.exists():
|
|
||||||
self._set_ownership(data_parent)
|
|
||||||
|
|
||||||
if data_dir.exists():
|
|
||||||
self._set_ownership(data_dir)
|
|
||||||
|
|
||||||
return data_dir
|
|
||||||
|
|
||||||
def _create_auth_file(self) -> bool:
|
|
||||||
"""Create auth.json file with configured API keys"""
|
|
||||||
# Ensure data directory exists
|
|
||||||
data_dir = self._ensure_user_data_dir()
|
|
||||||
if not data_dir.exists():
|
|
||||||
self.status.log(
|
|
||||||
f"Data directory {data_dir} does not exist and could not be created",
|
|
||||||
"ERROR",
|
|
||||||
)
|
|
||||||
return False
|
|
||||||
|
|
||||||
auth_file = data_dir / "auth.json"
|
|
||||||
auth_data = {}
|
|
||||||
|
|
||||||
# Check each API key and add to auth data if present
|
|
||||||
for env_var, provider in API_KEY_MAPPINGS.items():
|
|
||||||
api_key = os.environ.get(env_var)
|
|
||||||
if api_key:
|
|
||||||
auth_data[provider] = {"type": "api", "key": api_key}
|
|
||||||
self.status.log(f"Added {provider} API key to auth configuration")
|
|
||||||
|
|
||||||
# Only write file if we have at least one API key
|
|
||||||
if not auth_data:
|
|
||||||
self.status.log("No API keys found, skipping auth.json creation")
|
|
||||||
return True
|
|
||||||
|
|
||||||
try:
|
|
||||||
with auth_file.open("w") as f:
|
|
||||||
json.dump(auth_data, f, indent=2)
|
|
||||||
|
|
||||||
# Set ownership of the auth file to cubbi user
|
|
||||||
self._set_ownership(auth_file)
|
|
||||||
|
|
||||||
# Set secure permissions (readable only by owner)
|
|
||||||
auth_file.chmod(0o600)
|
|
||||||
|
|
||||||
self.status.log(f"Created OpenCode auth configuration at {auth_file}")
|
|
||||||
return True
|
|
||||||
except Exception as e:
|
|
||||||
self.status.log(f"Failed to create auth configuration: {e}", "ERROR")
|
|
||||||
return False
|
|
||||||
|
|
||||||
def initialize(self) -> bool:
|
|
||||||
"""Initialize Opencode configuration"""
|
|
||||||
self._ensure_user_config_dir()
|
|
||||||
|
|
||||||
# Create auth.json file with API keys
|
|
||||||
auth_success = self._create_auth_file()
|
|
||||||
|
|
||||||
# Set up tool configuration
|
|
||||||
config_success = self.setup_tool_configuration()
|
config_success = self.setup_tool_configuration()
|
||||||
|
if not config_success:
|
||||||
|
return False
|
||||||
|
|
||||||
return auth_success and config_success
|
return self.integrate_mcp_servers()
|
||||||
|
|
||||||
def setup_tool_configuration(self) -> bool:
|
def setup_tool_configuration(self) -> bool:
|
||||||
"""Set up Opencode configuration file"""
|
config_dir = self._get_user_config_path()
|
||||||
# Ensure directory exists before writing
|
|
||||||
config_dir = self._ensure_user_config_dir()
|
|
||||||
if not config_dir.exists():
|
|
||||||
self.status.log(
|
|
||||||
f"Config directory {config_dir} does not exist and could not be created",
|
|
||||||
"ERROR",
|
|
||||||
)
|
|
||||||
return False
|
|
||||||
|
|
||||||
config_file = config_dir / "config.json"
|
config_file = config_dir / "config.json"
|
||||||
|
|
||||||
# Load or initialize configuration
|
# Initialize configuration with schema
|
||||||
if config_file.exists():
|
config_data: dict[str, str | dict[str, dict[str, str | dict[str, str]]]] = {
|
||||||
with config_file.open("r") as f:
|
"$schema": "https://opencode.ai/config.json"
|
||||||
config_data = json.load(f) or {}
|
}
|
||||||
|
|
||||||
|
# Set default theme to system
|
||||||
|
config_data["theme"] = "system"
|
||||||
|
|
||||||
|
# Add providers configuration
|
||||||
|
config_data["provider"] = {}
|
||||||
|
|
||||||
|
# Configure all available providers
|
||||||
|
for provider_name, provider_config in cubbi_config.providers.items():
|
||||||
|
# Check if this is a custom provider (has baseURL)
|
||||||
|
if provider_config.base_url:
|
||||||
|
# Custom provider - include baseURL and name
|
||||||
|
models_dict = {}
|
||||||
|
|
||||||
|
# Add all models for any provider type that has models
|
||||||
|
if provider_config.models:
|
||||||
|
for model in provider_config.models:
|
||||||
|
model_id = model.get("id", "")
|
||||||
|
if model_id:
|
||||||
|
models_dict[model_id] = {"name": model_id}
|
||||||
|
|
||||||
|
provider_entry: dict[str, str | dict[str, str]] = {
|
||||||
|
"options": {
|
||||||
|
"apiKey": provider_config.api_key,
|
||||||
|
"baseURL": provider_config.base_url,
|
||||||
|
},
|
||||||
|
"models": models_dict,
|
||||||
|
}
|
||||||
|
|
||||||
|
# Add npm package and name for custom providers
|
||||||
|
if provider_config.type in STANDARD_PROVIDERS:
|
||||||
|
# Standard provider with custom URL - determine npm package
|
||||||
|
if provider_config.type == "anthropic":
|
||||||
|
provider_entry["npm"] = "@ai-sdk/anthropic"
|
||||||
|
provider_entry["name"] = f"Anthropic ({provider_name})"
|
||||||
|
elif provider_config.type == "openai":
|
||||||
|
provider_entry["npm"] = "@ai-sdk/openai-compatible"
|
||||||
|
provider_entry["name"] = f"OpenAI Compatible ({provider_name})"
|
||||||
|
elif provider_config.type == "google":
|
||||||
|
provider_entry["npm"] = "@ai-sdk/google"
|
||||||
|
provider_entry["name"] = f"Google ({provider_name})"
|
||||||
|
elif provider_config.type == "openrouter":
|
||||||
|
provider_entry["npm"] = "@ai-sdk/openai-compatible"
|
||||||
|
provider_entry["name"] = f"OpenRouter ({provider_name})"
|
||||||
|
else:
|
||||||
|
# Non-standard provider with custom URL
|
||||||
|
provider_entry["npm"] = "@ai-sdk/openai-compatible"
|
||||||
|
provider_entry["name"] = provider_name.title()
|
||||||
|
|
||||||
|
config_data["provider"][provider_name] = provider_entry
|
||||||
|
if models_dict:
|
||||||
|
self.status.log(
|
||||||
|
f"Added {provider_name} custom provider with {len(models_dict)} models to OpenCode configuration"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
self.status.log(
|
||||||
|
f"Added {provider_name} custom provider to OpenCode configuration"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
# Standard provider without custom URL
|
||||||
|
if provider_config.type in STANDARD_PROVIDERS:
|
||||||
|
# Populate models for any provider that has models
|
||||||
|
models_dict = {}
|
||||||
|
if provider_config.models:
|
||||||
|
for model in provider_config.models:
|
||||||
|
model_id = model.get("id", "")
|
||||||
|
if model_id:
|
||||||
|
models_dict[model_id] = {"name": model_id}
|
||||||
|
|
||||||
|
config_data["provider"][provider_name] = {
|
||||||
|
"options": {"apiKey": provider_config.api_key},
|
||||||
|
"models": models_dict,
|
||||||
|
}
|
||||||
|
|
||||||
|
if models_dict:
|
||||||
|
self.status.log(
|
||||||
|
f"Added {provider_name} standard provider with {len(models_dict)} models to OpenCode configuration"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
self.status.log(
|
||||||
|
f"Added {provider_name} standard provider to OpenCode configuration"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Set default model
|
||||||
|
if cubbi_config.defaults.model:
|
||||||
|
config_data["model"] = cubbi_config.defaults.model
|
||||||
|
self.status.log(f"Set default model to {config_data['model']}")
|
||||||
|
|
||||||
|
# Add the default model to provider if it doesn't already have models
|
||||||
|
provider_name: str
|
||||||
|
model_name: str
|
||||||
|
provider_name, model_name = cubbi_config.defaults.model.split("/", 1)
|
||||||
|
if provider_name in config_data["provider"]:
|
||||||
|
provider_config = cubbi_config.providers.get(provider_name)
|
||||||
|
# Only add default model if provider doesn't already have models populated
|
||||||
|
if not (provider_config and provider_config.models):
|
||||||
|
config_data["provider"][provider_name]["models"] = {
|
||||||
|
model_name: {"name": model_name}
|
||||||
|
}
|
||||||
|
self.status.log(
|
||||||
|
f"Added default model {model_name} to {provider_name} provider"
|
||||||
|
)
|
||||||
else:
|
else:
|
||||||
config_data = {}
|
# Fallback to legacy environment variables
|
||||||
|
opencode_model: str | None = os.environ.get("CUBBI_MODEL")
|
||||||
|
opencode_provider: str | None = os.environ.get("CUBBI_PROVIDER")
|
||||||
|
|
||||||
# Update with environment variables
|
if opencode_model and opencode_provider:
|
||||||
opencode_model = os.environ.get("CUBBI_MODEL")
|
config_data["model"] = f"{opencode_provider}/{opencode_model}"
|
||||||
opencode_provider = os.environ.get("CUBBI_PROVIDER")
|
self.status.log(f"Set model to {config_data['model']} (legacy)")
|
||||||
|
|
||||||
if opencode_model and opencode_provider:
|
# Add the legacy model to the provider if it exists
|
||||||
config_data["model"] = f"{opencode_provider}/{opencode_model}"
|
if opencode_provider in config_data["provider"]:
|
||||||
self.status.log(f"Set model to {config_data['model']}")
|
config_data["provider"][opencode_provider]["models"] = {
|
||||||
|
opencode_model: {"name": opencode_model}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Only write config if we have providers configured
|
||||||
|
if not config_data["provider"]:
|
||||||
|
self.status.log(
|
||||||
|
"No providers configured, using minimal OpenCode configuration"
|
||||||
|
)
|
||||||
|
config_data = {
|
||||||
|
"$schema": "https://opencode.ai/config.json",
|
||||||
|
"theme": "system",
|
||||||
|
}
|
||||||
|
|
||||||
try:
|
try:
|
||||||
with config_file.open("w") as f:
|
with config_file.open("w") as f:
|
||||||
json.dump(config_data, f, indent=2)
|
json.dump(config_data, f, indent=2)
|
||||||
|
|
||||||
# Set ownership of the config file to cubbi user
|
set_ownership(config_file)
|
||||||
self._set_ownership(config_file)
|
|
||||||
|
|
||||||
self.status.log(f"Updated Opencode configuration at {config_file}")
|
self.status.log(
|
||||||
|
f"Updated OpenCode configuration at {config_file} with {len(config_data.get('provider', {}))} providers"
|
||||||
|
)
|
||||||
return True
|
return True
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.status.log(f"Failed to write Opencode configuration: {e}", "ERROR")
|
self.status.log(f"Failed to write OpenCode configuration: {e}", "ERROR")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def integrate_mcp_servers(self, mcp_config: Dict[str, Any]) -> bool:
|
def integrate_mcp_servers(self) -> bool:
|
||||||
"""Integrate Opencode with available MCP servers"""
|
if not cubbi_config.mcps:
|
||||||
if mcp_config["count"] == 0:
|
|
||||||
self.status.log("No MCP servers to integrate")
|
self.status.log("No MCP servers to integrate")
|
||||||
return True
|
return True
|
||||||
|
|
||||||
# Ensure directory exists before writing
|
config_dir = self._get_user_config_path()
|
||||||
config_dir = self._ensure_user_config_dir()
|
|
||||||
if not config_dir.exists():
|
|
||||||
self.status.log(
|
|
||||||
f"Config directory {config_dir} does not exist and could not be created",
|
|
||||||
"ERROR",
|
|
||||||
)
|
|
||||||
return False
|
|
||||||
|
|
||||||
config_file = config_dir / "config.json"
|
config_file = config_dir / "config.json"
|
||||||
|
|
||||||
if config_file.exists():
|
if config_file.exists():
|
||||||
with config_file.open("r") as f:
|
with config_file.open("r") as f:
|
||||||
config_data = json.load(f) or {}
|
config_data: dict[str, str | dict[str, dict[str, str]]] = (
|
||||||
|
json.load(f) or {}
|
||||||
|
)
|
||||||
else:
|
else:
|
||||||
config_data = {}
|
config_data: dict[str, str | dict[str, dict[str, str]]] = {}
|
||||||
|
|
||||||
if "mcp" not in config_data:
|
if "mcp" not in config_data:
|
||||||
config_data["mcp"] = {}
|
config_data["mcp"] = {}
|
||||||
|
|
||||||
for server in mcp_config["servers"]:
|
for mcp in cubbi_config.mcps:
|
||||||
server_name = server["name"]
|
if mcp.type == "remote":
-            server_host = server.get("host")
-            server_url = server.get("url")
-
-            if server_name and server_host:
-                mcp_url = f"http://{server_host}:8080/sse"
-                self.status.log(f"Adding MCP extension: {server_name} - {mcp_url}")
-                config_data["mcp"][server_name] = {
-                    "type": "remote",
-                    "url": mcp_url,
-                }
-            elif server_name and server_url:
-                self.status.log(
-                    f"Adding remote MCP extension: {server_name} - {server_url}"
-                )
-                config_data["mcp"][server_name] = {
-                    "type": "remote",
-                    "url": server_url,
-                }
+                if mcp.name and mcp.url:
+                    self.status.log(
+                        f"Adding remote MCP extension: {mcp.name} - {mcp.url}"
+                    )
+                    config_data["mcp"][mcp.name] = {
+                        "type": "remote",
+                        "url": mcp.url,
+                    }
+            elif mcp.type == "local":
+                if mcp.name and mcp.command:
+                    self.status.log(
+                        f"Adding local MCP extension: {mcp.name} - {mcp.command}"
+                    )
+                    # OpenCode expects command as an array with command and args combined
+                    command_array = [mcp.command]
+                    if mcp.args:
+                        command_array.extend(mcp.args)
+
+                    mcp_entry: dict[str, str | list[str] | bool | dict[str, str]] = {
+                        "type": "local",
+                        "command": command_array,
+                        "enabled": True,
+                    }
+                    if mcp.env:
+                        # OpenCode expects environment (not env)
+                        mcp_entry["environment"] = mcp.env
+                    config_data["mcp"][mcp.name] = mcp_entry
+            elif mcp.type in ["docker", "proxy"]:
+                if mcp.name and mcp.host:
+                    mcp_port: int = mcp.port or 8080
+                    mcp_url: str = f"http://{mcp.host}:{mcp_port}/sse"
+                    self.status.log(f"Adding MCP extension: {mcp.name} - {mcp_url}")
+                    config_data["mcp"][mcp.name] = {
+                        "type": "remote",
+                        "url": mcp_url,
+                    }

         try:
             with config_file.open("w") as f:
                 json.dump(config_data, f, indent=2)

-            # Set ownership of the config file to cubbi user
-            self._set_ownership(config_file)
+            set_ownership(config_file)

             return True
         except Exception as e:
             self.status.log(f"Failed to integrate MCP servers: {e}", "ERROR")
             return False


 PLUGIN_CLASS = OpencodePlugin
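The local-MCP branch above folds `command` and `args` into a single array and renames `env` to `environment`, since those are the keys OpenCode reads. A minimal standalone sketch of that entry-building logic (the helper name is hypothetical; only the dict shape mirrors the diff):

```python
def build_opencode_local_entry(command, args=None, env=None):
    """Build an OpenCode `mcp` config entry the way the plugin diff does.

    Hypothetical helper: mirrors the plugin's inline logic, not its API.
    """
    # OpenCode expects command as an array with command and args combined
    command_array = [command]
    if args:
        command_array.extend(args)

    entry = {"type": "local", "command": command_array, "enabled": True}
    if env:
        # OpenCode expects the key "environment", not "env"
        entry["environment"] = env
    return entry


print(build_opencode_local_entry("uvx", ["mcp-server-fetch"], {"FETCH_TIMEOUT": "30"}))
```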
86	cubbi/mcp.py
@@ -10,7 +10,7 @@ from typing import Any, Dict, List, Optional
 import docker
 from docker.errors import DockerException, ImageNotFound, NotFound

-from .models import DockerMCP, MCPContainer, MCPStatus, ProxyMCP, RemoteMCP
+from .models import DockerMCP, LocalMCP, MCPContainer, MCPStatus, ProxyMCP, RemoteMCP
 from .user_config import UserConfigManager

 # Configure logging
@@ -79,6 +79,7 @@ class MCPManager:
         name: str,
         url: str,
         headers: Dict[str, str] = None,
+        mcp_type: Optional[str] = None,
         add_as_default: bool = True,
     ) -> Dict[str, Any]:
         """Add a remote MCP server.
@@ -97,6 +98,7 @@ class MCPManager:
             name=name,
             url=url,
             headers=headers or {},
+            mcp_type=mcp_type,
         )

         # Add to the configuration
@@ -248,6 +250,56 @@ class MCPManager:

         return mcp_config

+    def add_local_mcp(
+        self,
+        name: str,
+        command: str,
+        args: List[str] = None,
+        env: Dict[str, str] = None,
+        add_as_default: bool = True,
+    ) -> Dict[str, Any]:
+        """Add a local MCP server.
+
+        Args:
+            name: Name of the MCP server
+            command: Path to executable
+            args: Command arguments
+            env: Environment variables to set for the command
+            add_as_default: Whether to add this MCP to the default MCPs list
+
+        Returns:
+            The MCP configuration dictionary
+        """
+        # Create the Local MCP configuration
+        local_mcp = LocalMCP(
+            name=name,
+            command=command,
+            args=args or [],
+            env=env or {},
+        )
+
+        # Add to the configuration
+        mcps = self.list_mcps()
+
+        # Remove existing MCP with the same name if it exists
+        mcps = [mcp for mcp in mcps if mcp.get("name") != name]
+
+        # Add the new MCP
+        mcp_config = local_mcp.model_dump()
+        mcps.append(mcp_config)
+
+        # Save the configuration
+        self.config_manager.set("mcps", mcps)
+
+        # Add to default MCPs if requested
+        if add_as_default:
+            default_mcps = self.config_manager.get("defaults.mcps", [])
+            if name not in default_mcps:
+                default_mcps.append(name)
+                self.config_manager.set("defaults.mcps", default_mcps)
+
+        return mcp_config
+
     def remove_mcp(self, name: str) -> bool:
         """Remove an MCP server configuration.

@@ -357,6 +409,14 @@ class MCPManager:
                 "type": "remote",
             }

+        elif mcp_type == "local":
+            # Local MCP servers don't need containers
+            return {
+                "status": "not_applicable",
+                "name": name,
+                "type": "local",
+            }
+
         elif mcp_type == "docker":
             # Pull the image if needed
             try:
@@ -635,8 +695,8 @@ ENTRYPOINT ["/entrypoint.sh"]
             )
             return True

-        # Remote MCPs don't have containers to stop
-        if mcp_config.get("type") == "remote":
+        # Remote and Local MCPs don't have containers to stop
+        if mcp_config.get("type") in ["remote", "local"]:
             return True

         # Get the container name
@@ -675,12 +735,12 @@ ENTRYPOINT ["/entrypoint.sh"]
         if not mcp_config:
             raise ValueError(f"MCP server '{name}' not found")

-        # Remote MCPs don't have containers to restart
-        if mcp_config.get("type") == "remote":
+        # Remote and Local MCPs don't have containers to restart
+        if mcp_config.get("type") in ["remote", "local"]:
             return {
                 "status": "not_applicable",
                 "name": name,
-                "type": "remote",
+                "type": mcp_config.get("type"),
             }

         # Get the container name
@@ -721,6 +781,16 @@ ENTRYPOINT ["/entrypoint.sh"]
                 "url": mcp_config.get("url"),
             }

+        # Local MCPs don't have containers
+        if mcp_config.get("type") == "local":
+            return {
+                "status": "not_applicable",
+                "name": name,
+                "type": "local",
+                "command": mcp_config.get("command"),
+                "args": mcp_config.get("args", []),
+            }
+
         # Get the container name
         container_name = self.get_mcp_container_name(name)

@@ -792,9 +862,11 @@ ENTRYPOINT ["/entrypoint.sh"]
         if not mcp_config:
             raise ValueError(f"MCP server '{name}' not found")

-        # Remote MCPs don't have logs
+        # Remote and Local MCPs don't have logs
         if mcp_config.get("type") == "remote":
             return "Remote MCPs don't have local logs"
+        if mcp_config.get("type") == "local":
+            return "Local MCPs don't have container logs"

         # Get the container name
         container_name = self.get_mcp_container_name(name)
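`add_local_mcp` above upserts by name: it filters out any existing MCP with the same name before appending the new one, so re-adding a server replaces its configuration rather than duplicating it. That dedupe-then-append step on the stored list of dicts can be sketched as:

```python
def upsert_mcp(mcps, new_mcp):
    """Replace any MCP with the same name, then append (as add_local_mcp does).

    `mcps` is a plain list of config dicts, matching how the diff stores them.
    """
    mcps = [mcp for mcp in mcps if mcp.get("name") != new_mcp["name"]]
    mcps.append(new_mcp)
    return mcps


mcps = [{"name": "fetch", "type": "docker"}, {"name": "search", "type": "remote"}]
mcps = upsert_mcp(mcps, {"name": "fetch", "type": "local", "command": "uvx"})
print([m["name"] for m in mcps])
```

Note the replaced entry moves to the end of the list; ordering is not preserved across an upsert.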
248	cubbi/model_fetcher.py (new file)
@@ -0,0 +1,248 @@
+"""
+Model fetching utilities for OpenAI-compatible providers.
+"""
+
+import json
+import logging
+from typing import Dict, List, Optional
+
+import requests
+
+logger = logging.getLogger(__name__)
+
+
+class ModelFetcher:
+    """Fetches model lists from OpenAI-compatible API endpoints."""
+
+    def __init__(self, timeout: int = 30):
+        """Initialize the model fetcher.
+
+        Args:
+            timeout: Request timeout in seconds
+        """
+        self.timeout = timeout
+
+    def fetch_models(
+        self,
+        base_url: str,
+        api_key: Optional[str] = None,
+        headers: Optional[Dict[str, str]] = None,
+        provider_type: Optional[str] = None,
+    ) -> List[Dict[str, str]]:
+        """Fetch models from an OpenAI-compatible /v1/models endpoint.
+
+        Args:
+            base_url: Base URL of the provider (e.g., "https://api.openai.com" or "https://api.litellm.com")
+            api_key: Optional API key for authentication
+            headers: Optional additional headers
+            provider_type: Optional provider type for authentication handling
+
+        Returns:
+            List of model dictionaries with 'id' and 'name' keys
+
+        Raises:
+            requests.RequestException: If the request fails
+            ValueError: If the response format is invalid
+        """
+        # Construct the models endpoint URL
+        models_url = self._build_models_url(base_url)
+
+        # Prepare headers
+        request_headers = self._build_headers(api_key, headers, provider_type)
+
+        logger.info(f"Fetching models from {models_url}")
+
+        try:
+            response = requests.get(
+                models_url, headers=request_headers, timeout=self.timeout
+            )
+            response.raise_for_status()
+
+            # Parse JSON response
+            data = response.json()
+
+            # Handle provider-specific response formats
+            if provider_type == "google":
+                # Google uses {"models": [...]} format
+                if not isinstance(data, dict) or "models" not in data:
+                    raise ValueError(
+                        f"Invalid Google response format: expected dict with 'models' key, got {type(data)}"
+                    )
+                models_data = data["models"]
+            else:
+                # OpenAI-compatible format uses {"data": [...]}
+                if not isinstance(data, dict) or "data" not in data:
+                    raise ValueError(
+                        f"Invalid response format: expected dict with 'data' key, got {type(data)}"
+                    )
+                models_data = data["data"]
+
+            if not isinstance(models_data, list):
+                raise ValueError(
+                    f"Invalid models data: expected list, got {type(models_data)}"
+                )
+
+            # Process models
+            models = []
+            for model_item in models_data:
+                if not isinstance(model_item, dict):
+                    continue
+
+                # Handle provider-specific model ID fields
+                if provider_type == "google":
+                    # Google uses "name" field (e.g., "models/gemini-1.5-pro")
+                    model_id = model_item.get("name", "")
+                else:
+                    # OpenAI-compatible uses "id" field
+                    model_id = model_item.get("id", "")
+
+                if not model_id:
+                    continue
+
+                # Skip models with * in their ID as requested
+                if "*" in model_id:
+                    logger.debug(f"Skipping model with wildcard: {model_id}")
+                    continue
+
+                # Create model entry
+                model = {
+                    "id": model_id,
+                }
+                models.append(model)
+
+            logger.info(f"Successfully fetched {len(models)} models from {base_url}")
+            return models
+
+        except requests.exceptions.Timeout:
+            logger.error(f"Request timed out after {self.timeout} seconds")
+            raise requests.RequestException(f"Request to {models_url} timed out")
+        except requests.exceptions.ConnectionError as e:
+            logger.error(f"Connection error: {e}")
+            raise requests.RequestException(f"Failed to connect to {models_url}")
+        except requests.exceptions.HTTPError as e:
+            logger.error(f"HTTP error {e.response.status_code}: {e}")
+            if e.response.status_code == 401:
+                raise requests.RequestException(
+                    "Authentication failed: invalid API key"
+                )
+            elif e.response.status_code == 403:
+                raise requests.RequestException(
+                    "Access forbidden: check API key permissions"
+                )
+            else:
+                raise requests.RequestException(
+                    f"HTTP {e.response.status_code} error from {models_url}"
+                )
+        except json.JSONDecodeError as e:
+            logger.error(f"Failed to parse JSON response: {e}")
+            raise ValueError(f"Invalid JSON response from {models_url}")
+
+    def _build_models_url(self, base_url: str) -> str:
+        """Build the models endpoint URL from a base URL.
+
+        Args:
+            base_url: Base URL of the provider
+
+        Returns:
+            Complete URL for the /v1/models endpoint
+        """
+        # Remove trailing slash if present
+        base_url = base_url.rstrip("/")
+
+        # Add /v1/models if not already present
+        if not base_url.endswith("/v1/models"):
+            if base_url.endswith("/v1"):
+                base_url += "/models"
+            else:
+                base_url += "/v1/models"
+
+        return base_url
+
+    def _build_headers(
+        self,
+        api_key: Optional[str] = None,
+        additional_headers: Optional[Dict[str, str]] = None,
+        provider_type: Optional[str] = None,
+    ) -> Dict[str, str]:
+        """Build request headers.
+
+        Args:
+            api_key: Optional API key for authentication
+            additional_headers: Optional additional headers
+            provider_type: Provider type for specific auth handling
+
+        Returns:
+            Dictionary of headers
+        """
+        headers = {
+            "Content-Type": "application/json",
+            "Accept": "application/json",
+        }
+
+        # Add authentication header if API key is provided
+        if api_key:
+            if provider_type == "anthropic":
+                # Anthropic uses x-api-key header
+                headers["x-api-key"] = api_key
+            elif provider_type == "google":
+                # Google uses x-goog-api-key header
+                headers["x-goog-api-key"] = api_key
+            else:
+                # Standard Bearer token for OpenAI, OpenRouter, and custom providers
+                headers["Authorization"] = f"Bearer {api_key}"
+
+        # Add any additional headers
+        if additional_headers:
+            headers.update(additional_headers)
+
+        return headers
+
+
+def fetch_provider_models(
+    provider_config: Dict, timeout: int = 30
+) -> List[Dict[str, str]]:
+    """Convenience function to fetch models for a provider configuration.
+
+    Args:
+        provider_config: Provider configuration dictionary
+        timeout: Request timeout in seconds
+
+    Returns:
+        List of model dictionaries
+
+    Raises:
+        ValueError: If provider is not supported or missing required fields
+        requests.RequestException: If the request fails
+    """
+    import os
+
+    from .config import PROVIDER_DEFAULT_URLS
+
+    provider_type = provider_config.get("type", "")
+    base_url = provider_config.get("base_url")
+    api_key = provider_config.get("api_key", "")
+
+    # Resolve environment variables in API key
+    if api_key.startswith("${") and api_key.endswith("}"):
+        env_var_name = api_key[2:-1]
+        api_key = os.environ.get(env_var_name, "")
+
+    # Determine base URL - use custom base_url or default for standard providers
+    if base_url:
+        # Custom provider with explicit base_url
+        effective_base_url = base_url
+    elif provider_type in PROVIDER_DEFAULT_URLS:
+        # Standard provider - use default URL
+        effective_base_url = PROVIDER_DEFAULT_URLS[provider_type]
+    else:
+        raise ValueError(
+            f"Unsupported provider type '{provider_type}'. Must be one of: {list(PROVIDER_DEFAULT_URLS.keys())} or have a custom base_url"
+        )
+
+    # Prepare additional headers for specific providers
+    headers = {}
+    if provider_type == "anthropic":
+        # Anthropic uses a different API version header
+        headers["anthropic-version"] = "2023-06-01"
+
+    fetcher = ModelFetcher(timeout=timeout)
+    return fetcher.fetch_models(effective_base_url, api_key, headers, provider_type)
@@ -33,27 +33,15 @@ class PersistentConfig(BaseModel):
     description: str = ""


-class VolumeMount(BaseModel):
-    mountPath: str
-    description: str = ""
-
-
-class ImageInit(BaseModel):
-    pre_command: Optional[str] = None
-    command: str
-
-
 class Image(BaseModel):
     name: str
     description: str
     version: str
     maintainer: str
     image: str
-    init: Optional[ImageInit] = None
     environment: List[ImageEnvironmentVariable] = []
-    ports: List[int] = []
-    volumes: List[VolumeMount] = []
     persistent_configs: List[PersistentConfig] = []
+    environments_to_forward: List[str] = []


 class RemoteMCP(BaseModel):
@@ -61,6 +49,7 @@ class RemoteMCP(BaseModel):
     type: str = "remote"
     url: str
     headers: Dict[str, str] = Field(default_factory=dict)
+    mcp_type: Optional[str] = None


 class DockerMCP(BaseModel):
@@ -82,7 +71,15 @@ class ProxyMCP(BaseModel):
     host_port: Optional[int] = None  # External port to bind the SSE port to on the host


-MCP = Union[RemoteMCP, DockerMCP, ProxyMCP]
+class LocalMCP(BaseModel):
+    name: str
+    type: str = "local"
+    command: str  # Path to executable
+    args: List[str] = Field(default_factory=list)  # Command arguments
+    env: Dict[str, str] = Field(default_factory=dict)  # Environment variables
+
+
+MCP = Union[RemoteMCP, DockerMCP, ProxyMCP, LocalMCP]


 class MCPContainer(BaseModel):
@@ -102,6 +99,7 @@ class Session(BaseModel):
     status: SessionStatus
     container_id: Optional[str] = None
     ports: Dict[int, int] = Field(default_factory=dict)
+    mcps: List[str] = Field(default_factory=list)


 class Config(BaseModel):
@@ -109,5 +107,5 @@ class Config(BaseModel):
     images: Dict[str, Image] = Field(default_factory=dict)
     defaults: Dict[str, object] = Field(
         default_factory=dict
-    )  # Can store strings, booleans, or other values
+    )  # Can store strings, booleans, lists, or other values
     mcps: List[Dict[str, Any]] = Field(default_factory=list)
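The new `LocalMCP` model is a pydantic `BaseModel`, and `add_local_mcp` persists it via `model_dump()`. A stdlib-only sketch of the same field layout, using a dataclass and `asdict` as a stand-in for `model_dump()` (field order differs because dataclass defaults must follow non-defaults; the stored keys and defaults match the model above):

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, List


@dataclass
class LocalMCPSketch:
    """Dataclass stand-in for the pydantic LocalMCP model (illustration only)."""
    name: str
    command: str  # Path to executable
    type: str = "local"
    args: List[str] = field(default_factory=list)   # Command arguments
    env: Dict[str, str] = field(default_factory=dict)  # Environment variables


mcp = LocalMCPSketch(name="fetch", command="uvx", args=["mcp-server-fetch"])
print(asdict(mcp))  # the dict shape that ends up in the mcps config list
```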
@@ -2,7 +2,9 @@
 Session storage management for Cubbi Container Tool.
 """

+import fcntl
 import os
+from contextlib import contextmanager
 from pathlib import Path
 from typing import Dict, Optional

@@ -11,6 +13,31 @@ import yaml
 DEFAULT_SESSIONS_FILE = Path.home() / ".config" / "cubbi" / "sessions.yaml"


+@contextmanager
+def _file_lock(file_path: Path):
+    """Context manager for file locking.
+
+    Args:
+        file_path: Path to the file to lock
+
+    Yields:
+        File descriptor with exclusive lock
+    """
+    # Ensure the file exists
+    file_path.parent.mkdir(parents=True, exist_ok=True)
+    if not file_path.exists():
+        file_path.touch(mode=0o600)
+
+    # Open file and acquire exclusive lock
+    fd = open(file_path, "r+")
+    try:
+        fcntl.flock(fd.fileno(), fcntl.LOCK_EX)
+        yield fd
+    finally:
+        fcntl.flock(fd.fileno(), fcntl.LOCK_UN)
+        fd.close()
+
+
 class SessionManager:
     """Manager for container sessions."""

@@ -42,9 +69,26 @@ class SessionManager:
         return sessions

     def save(self) -> None:
-        """Save the sessions to file."""
-        with open(self.sessions_path, "w") as f:
-            yaml.safe_dump(self.sessions, f)
+        """Save the sessions to file.
+
+        Note: This method acquires a file lock and merges with existing data
+        to prevent concurrent write issues.
+        """
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Merge current in-memory sessions with disk state
+            sessions.update(self.sessions)
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions

     def add_session(self, session_id: str, session_data: dict) -> None:
         """Add a session to storage.
@@ -53,8 +97,21 @@ class SessionManager:
             session_id: The unique session ID
             session_data: The session data (Session model dump as dict)
         """
-        self.sessions[session_id] = session_data
-        self.save()
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Apply the modification
+            sessions[session_id] = session_data
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions

     def get_session(self, session_id: str) -> Optional[dict]:
         """Get a session by ID.
@@ -81,6 +138,19 @@ class SessionManager:
         Args:
             session_id: The session ID to remove
         """
-        if session_id in self.sessions:
-            del self.sessions[session_id]
-            self.save()
+        with _file_lock(self.sessions_path) as fd:
+            # Reload sessions from disk to get latest state
+            fd.seek(0)
+            sessions = yaml.safe_load(fd) or {}
+
+            # Apply the modification
+            if session_id in sessions:
+                del sessions[session_id]
+
+            # Write back to file
+            fd.seek(0)
+            fd.truncate()
+            yaml.safe_dump(sessions, fd)
+
+            # Update in-memory cache
+            self.sessions = sessions
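Every mutation in the session-storage change follows the same locked read-modify-write shape: take an exclusive `flock`, reload the file, apply the change, truncate, and write back, so two concurrent cubbi processes cannot clobber each other's sessions. A minimal POSIX-only sketch of that pattern, using JSON instead of YAML so it stays stdlib-only (the real code uses `yaml.safe_load`/`safe_dump`):

```python
import fcntl
import json
import tempfile
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def file_lock(path: Path):
    """Exclusive flock over a file, as in _file_lock above."""
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.touch(mode=0o600)
    fd = open(path, "r+")
    try:
        fcntl.flock(fd.fileno(), fcntl.LOCK_EX)
        yield fd
    finally:
        fcntl.flock(fd.fileno(), fcntl.LOCK_UN)
        fd.close()


def add_session(path: Path, session_id: str, data: dict) -> dict:
    """Locked read-modify-write, mirroring SessionManager.add_session."""
    with file_lock(path) as fd:
        raw = fd.read()
        sessions = json.loads(raw) if raw.strip() else {}  # reload latest state
        sessions[session_id] = data                        # apply the change
        fd.seek(0)
        fd.truncate()
        json.dump(sessions, fd)                            # write back atomically w.r.t. the lock
    return sessions


path = Path(tempfile.mkdtemp()) / "sessions.json"
add_session(path, "a1", {"status": "running"})
result = add_session(path, "b2", {"status": "stopped"})
print(sorted(result))
```

Because the second call re-reads the file under the lock, it sees the first call's write even if it ran in a different process.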
@@ -8,12 +8,33 @@ from typing import Any, Dict, List, Optional, Tuple
|
|||||||
|
|
||||||
import yaml
|
import yaml
|
||||||
|
|
||||||
# Define the environment variable mappings
|
# Define the environment variable mappings for auto-discovery
|
||||||
ENV_MAPPINGS = {
|
STANDARD_PROVIDERS = {
|
||||||
|
"anthropic": {
|
||||||
|
"type": "anthropic",
|
||||||
|
"env_key": "ANTHROPIC_API_KEY",
|
||||||
|
},
|
||||||
|
"openai": {
|
||||||
|
"type": "openai",
|
||||||
|
"env_key": "OPENAI_API_KEY",
|
||||||
|
},
|
||||||
|
"google": {
|
||||||
|
"type": "google",
|
||||||
|
"env_key": "GOOGLE_API_KEY",
|
||||||
|
},
|
||||||
|
"openrouter": {
|
||||||
|
"type": "openrouter",
|
||||||
|
"env_key": "OPENROUTER_API_KEY",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
# Legacy environment variable mappings (kept for backward compatibility)
|
||||||
|
LEGACY_ENV_MAPPINGS = {
|
||||||
"services.langfuse.url": "LANGFUSE_URL",
|
"services.langfuse.url": "LANGFUSE_URL",
|
||||||
"services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
|
"services.langfuse.public_key": "LANGFUSE_INIT_PROJECT_PUBLIC_KEY",
|
||||||
"services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
|
"services.langfuse.secret_key": "LANGFUSE_INIT_PROJECT_SECRET_KEY",
|
||||||
"services.openai.api_key": "OPENAI_API_KEY",
|
"services.openai.api_key": "OPENAI_API_KEY",
|
||||||
|
"services.openai.url": "OPENAI_URL",
|
||||||
"services.anthropic.api_key": "ANTHROPIC_API_KEY",
|
"services.anthropic.api_key": "ANTHROPIC_API_KEY",
|
||||||
"services.openrouter.api_key": "OPENROUTER_API_KEY",
|
"services.openrouter.api_key": "OPENROUTER_API_KEY",
|
||||||
"services.google.api_key": "GOOGLE_API_KEY",
|
"services.google.api_key": "GOOGLE_API_KEY",
|
||||||
@@ -43,6 +64,10 @@ class UserConfigManager:
|
|||||||
self.config_path.parent.mkdir(parents=True, exist_ok=True)
|
self.config_path.parent.mkdir(parents=True, exist_ok=True)
|
||||||
# Create default config
|
# Create default config
|
||||||
default_config = self._get_default_config()
|
default_config = self._get_default_config()
|
||||||
|
|
||||||
|
# Auto-discover and add providers from environment for new configs
|
||||||
|
self._auto_discover_providers(default_config)
|
||||||
|
|
||||||
# Save to file
|
# Save to file
|
||||||
with open(self.config_path, "w") as f:
|
with open(self.config_path, "w") as f:
|
||||||
yaml.safe_dump(default_config, f)
|
yaml.safe_dump(default_config, f)
|
||||||
@@ -84,7 +109,12 @@ class UserConfigManager:
|
|||||||
config = {}
|
config = {}
|
||||||
|
|
||||||
# Merge with defaults for any missing fields
|
# Merge with defaults for any missing fields
|
||||||
return self._merge_with_defaults(config)
|
config = self._merge_with_defaults(config)
|
||||||
|
|
||||||
|
# Auto-discover and add providers from environment
|
||||||
|
self._auto_discover_providers(config)
|
||||||
|
|
||||||
|
return config
|
||||||
|
|
||||||
def _get_default_config(self) -> Dict[str, Any]:
|
def _get_default_config(self) -> Dict[str, Any]:
|
||||||
"""Get the default configuration."""
|
"""Get the default configuration."""
|
||||||
@@ -95,16 +125,13 @@ class UserConfigManager:
|
|||||||
"mount_local": True,
|
"mount_local": True,
|
||||||
"networks": [], # Default networks to connect to (besides cubbi-network)
|
"networks": [], # Default networks to connect to (besides cubbi-network)
|
||||||
"volumes": [], # Default volumes to mount, format: "source:dest"
|
"volumes": [], # Default volumes to mount, format: "source:dest"
|
||||||
|
"ports": [], # Default ports to forward, format: list of integers
|
||||||
"mcps": [], # Default MCP servers to connect to
|
"mcps": [], # Default MCP servers to connect to
|
||||||
"model": "claude-3-5-sonnet-latest", # Default LLM model to use
|
"model": "anthropic/claude-3-5-sonnet-latest", # Default LLM model (provider/model format)
|
||||||
"provider": "anthropic", # Default LLM provider to use
|
|
||||||
},
|
},
|
||||||
|
"providers": {}, # LLM providers configuration
|
||||||
"services": {
|
"services": {
|
||||||
"langfuse": {},
|
"langfuse": {}, # Keep langfuse in services as it's not an LLM provider
|
||||||
"openai": {},
|
|
||||||
"anthropic": {},
|
|
||||||
"openrouter": {},
|
|
||||||
"google": {},
|
|
||||||
},
|
},
|
||||||
"docker": {
|
"docker": {
|
||||||
"network": "cubbi-network",
|
"network": "cubbi-network",
|
||||||
@@ -146,7 +173,7 @@ class UserConfigManager:
|
|||||||
and not key_path.startswith("services.")
|
and not key_path.startswith("services.")
|
||||||
and not any(
|
and not any(
|
||||||
key_path.startswith(section + ".")
|
key_path.startswith(section + ".")
|
||||||
for section in ["defaults", "docker", "remote", "ui"]
|
for section in ["defaults", "docker", "remote", "ui", "providers"]
|
||||||
)
|
)
|
||||||
):
|
):
|
||||||
service, setting = key_path.split(".", 1)
|
service, setting = key_path.split(".", 1)
|
||||||
@@ -175,7 +202,7 @@ class UserConfigManager:
|
|||||||
and not key_path.startswith("services.")
|
and not key_path.startswith("services.")
|
||||||
and not any(
|
and not any(
|
||||||
key_path.startswith(section + ".")
|
key_path.startswith(section + ".")
|
||||||
for section in ["defaults", "docker", "remote", "ui"]
|
for section in ["defaults", "docker", "remote", "ui", "providers"]
|
||||||
)
|
)
|
||||||
):
|
):
|
||||||
service, setting = key_path.split(".", 1)
|
service, setting = key_path.split(".", 1)
|
||||||
@@ -245,13 +272,22 @@ class UserConfigManager:
|
|||||||
def get_environment_variables(self) -> Dict[str, str]:
|
def get_environment_variables(self) -> Dict[str, str]:
|
||||||
"""Get environment variables from the configuration.
|
"""Get environment variables from the configuration.
|
||||||
|
|
||||||
|
NOTE: API keys are now handled by cubbi_init plugins, not passed from host.
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
A dictionary of environment variables to set in the container.
|
A dictionary of environment variables to set in the container.
|
||||||
"""
|
"""
|
||||||
env_vars = {}
|
env_vars = {}
|
||||||
|
|
||||||
# Process the service configurations and map to environment variables
|
# Process the legacy service configurations and map to environment variables
|
||||||
for config_path, env_var in ENV_MAPPINGS.items():
|
# BUT EXCLUDE API KEYS - they're now handled by cubbi_init
|
||||||
|
for config_path, env_var in LEGACY_ENV_MAPPINGS.items():
|
||||||
|
# Skip API key environment variables - let cubbi_init handle them
|
||||||
|
if any(
|
||||||
|
key_word in env_var.upper() for key_word in ["API_KEY", "SECRET_KEY"]
|
||||||
|
):
|
||||||
|
continue
|
||||||
|
|
||||||
value = self.get(config_path)
|
value = self.get(config_path)
|
||||||
if value:
|
if value:
|
||||||
# Handle environment variable references
|
# Handle environment variable references
|
||||||
@@ -265,6 +301,68 @@ class UserConfigManager:
 
                 env_vars[env_var] = str(value)
 
+        # NOTE: Provider API keys are no longer passed as environment variables
+        # They are now handled by cubbi_init plugins based on selected model
+        # This prevents unused API keys from being exposed in containers
+
+        return env_vars
+
+    def get_provider_environment_variables(self, provider_name: str) -> Dict[str, str]:
+        """Get environment variables for a specific provider.
+
+        Args:
+            provider_name: Name of the provider to get environment variables for
+
+        Returns:
+            Dictionary of environment variables for the provider
+        """
+        env_vars = {}
+        provider_config = self.get_provider(provider_name)
+
+        if not provider_config:
+            return env_vars
+
+        provider_type = provider_config.get("type", provider_name)
+        api_key = provider_config.get("api_key", "")
+        base_url = provider_config.get("base_url")
+
+        # Resolve environment variable references
+        if api_key.startswith("${") and api_key.endswith("}"):
+            env_var_name = api_key[2:-1]
+            resolved_api_key = os.environ.get(env_var_name, "")
+        else:
+            resolved_api_key = api_key
+
+        if not resolved_api_key:
+            return env_vars
+
+        # Add environment variables based on provider type
+        if provider_type == "anthropic":
+            env_vars["ANTHROPIC_API_KEY"] = resolved_api_key
+        elif provider_type == "openai":
+            env_vars["OPENAI_API_KEY"] = resolved_api_key
+            if base_url:
+                env_vars["OPENAI_URL"] = base_url
+        elif provider_type == "google":
+            env_vars["GOOGLE_API_KEY"] = resolved_api_key
+        elif provider_type == "openrouter":
+            env_vars["OPENROUTER_API_KEY"] = resolved_api_key
+
+        return env_vars
+
+    def get_all_providers_environment_variables(self) -> Dict[str, str]:
+        """Get environment variables for all configured providers.
+
+        Returns:
+            Dictionary of all provider environment variables
+        """
+        env_vars = {}
+        providers = self.get("providers", {})
+
+        for provider_name in providers.keys():
+            provider_env = self.get_provider_environment_variables(provider_name)
+            env_vars.update(provider_env)
+
         return env_vars
 
     def list_config(self) -> List[Tuple[str, Any]]:
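Editorial aside: the `if`/`elif` chain in the hunk above maps each provider type to one environment variable name. A minimal sketch of that mapping in table form (the dict is our illustration; the diff itself uses the branch chain, and additionally sets `OPENAI_URL` when a `base_url` is configured):

```python
# Sketch only: names taken from the diff above; the table-driven form
# is illustrative, not the code's actual structure.
ENV_VAR_BY_PROVIDER_TYPE = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",  # plus OPENAI_URL when base_url is set
    "google": "GOOGLE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

print(ENV_VAR_BY_PROVIDER_TYPE["openrouter"])  # -> OPENROUTER_API_KEY
```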
@@ -293,3 +391,384 @@ class UserConfigManager:
 
         _flatten_dict(self.config)
         return sorted(result)
+
+    def _auto_discover_providers(self, config: Dict[str, Any]) -> None:
+        """Auto-discover providers from environment variables."""
+        if "providers" not in config:
+            config["providers"] = {}
+
+        for provider_name, provider_info in STANDARD_PROVIDERS.items():
+            # Skip if provider already configured
+            if provider_name in config["providers"]:
+                continue
+
+            # Check if environment variable exists
+            api_key = os.environ.get(provider_info["env_key"])
+            if api_key:
+                config["providers"][provider_name] = {
+                    "type": provider_info["type"],
+                    "api_key": f"${{{provider_info['env_key']}}}",  # Reference to env var
+                }
+
+    def get_provider(self, provider_name: str) -> Optional[Dict[str, Any]]:
+        """Get a provider configuration by name."""
+        return self.get(f"providers.{provider_name}")
+
+    def list_providers(self) -> Dict[str, Dict[str, Any]]:
+        """Get all configured providers."""
+        return self.get("providers", {})
+
+    def add_provider(
+        self,
+        name: str,
+        provider_type: str,
+        api_key: str,
+        base_url: Optional[str] = None,
+        env_key: Optional[str] = None,
+    ) -> None:
+        """Add a new provider configuration.
+
+        Args:
+            name: Provider name/identifier
+            provider_type: Type of provider (anthropic, openai, etc.)
+            api_key: API key value or environment variable reference
+            base_url: Custom base URL for API calls (optional)
+            env_key: If provided, use env reference instead of direct api_key
+        """
+        provider_config = {
+            "type": provider_type,
+            "api_key": f"${{{env_key}}}" if env_key else api_key,
+        }
+
+        if base_url:
+            provider_config["base_url"] = base_url
+
+        self.set(f"providers.{name}", provider_config)
+
+    def remove_provider(self, name: str) -> bool:
+        """Remove a provider configuration.
+
+        Returns:
+            True if provider was removed, False if it didn't exist
+        """
+        providers = self.get("providers", {})
+        if name in providers:
+            del providers[name]
+            self.set("providers", providers)
+            return True
+        return False
+
+    def resolve_model(self, model_spec: str) -> Optional[Dict[str, Any]]:
+        """Resolve a model specification (provider/model) to provider config.
+
+        Args:
+            model_spec: Model specification in format "provider/model"
+
+        Returns:
+            Dictionary with resolved provider config and model name
+        """
+        if "/" not in model_spec:
+            # Legacy format - try to use as provider name with empty model
+            provider_name = model_spec
+            model_name = ""
+        else:
+            provider_name, model_name = model_spec.split("/", 1)
+
+        provider_config = self.get_provider(provider_name)
+        if not provider_config:
+            return None
+
+        # Resolve environment variable references in API key
+        api_key = provider_config.get("api_key", "")
+        if api_key.startswith("${") and api_key.endswith("}"):
+            env_var_name = api_key[2:-1]
+            resolved_api_key = os.environ.get(env_var_name, "")
+        else:
+            resolved_api_key = api_key
+
+        return {
+            "provider_name": provider_name,
+            "provider_type": provider_config.get("type", provider_name),
+            "model_name": model_name,
+            "api_key": resolved_api_key,
+            "base_url": provider_config.get("base_url"),
+        }
+
+    # Resource management methods
+    def list_mcps(self) -> List[str]:
+        """Get all configured default MCP servers."""
+        return self.get("defaults.mcps", [])
+
+    def add_mcp(self, name: str) -> None:
+        """Add a new default MCP server."""
+        mcps = self.list_mcps()
+        if name not in mcps:
+            mcps.append(name)
+            self.set("defaults.mcps", mcps)
+
+    def remove_mcp(self, name: str) -> bool:
+        """Remove a default MCP server.
+
+        Returns:
+            True if MCP was removed, False if it didn't exist
+        """
+        mcps = self.list_mcps()
+        if name in mcps:
+            mcps.remove(name)
+            self.set("defaults.mcps", mcps)
+            return True
+        return False
+
+    def list_mcp_configurations(self) -> List[Dict[str, Any]]:
+        """Get all configured MCP server configurations."""
+        return self.get("mcps", [])
+
+    def get_mcp_configuration(self, name: str) -> Optional[Dict[str, Any]]:
+        """Get an MCP configuration by name."""
+        mcps = self.list_mcp_configurations()
+        for mcp in mcps:
+            if mcp.get("name") == name:
+                return mcp
+        return None
+
+    def add_mcp_configuration(self, mcp_config: Dict[str, Any]) -> None:
+        """Add a new MCP server configuration."""
+        mcps = self.list_mcp_configurations()
+
+        # Remove existing MCP with the same name if it exists
+        mcps = [mcp for mcp in mcps if mcp.get("name") != mcp_config.get("name")]
+
+        # Add the new MCP
+        mcps.append(mcp_config)
+
+        # Save the configuration
+        self.set("mcps", mcps)
+
+    def remove_mcp_configuration(self, name: str) -> bool:
+        """Remove an MCP server configuration.
+
+        Returns:
+            True if MCP was removed, False if it didn't exist
+        """
+        mcps = self.list_mcp_configurations()
+        original_length = len(mcps)
+
+        # Filter out the MCP with the specified name
+        mcps = [mcp for mcp in mcps if mcp.get("name") != name]
+
+        if len(mcps) < original_length:
+            self.set("mcps", mcps)
+
+            # Also remove from defaults if it's there
+            self.remove_mcp(name)
+            return True
+        return False
+
+    def list_networks(self) -> List[str]:
+        """Get all configured default networks."""
+        return self.get("defaults.networks", [])
+
+    def add_network(self, name: str) -> None:
+        """Add a new default network."""
+        networks = self.list_networks()
+        if name not in networks:
+            networks.append(name)
+            self.set("defaults.networks", networks)
+
+    def remove_network(self, name: str) -> bool:
+        """Remove a default network.
+
+        Returns:
+            True if network was removed, False if it didn't exist
+        """
+        networks = self.list_networks()
+        if name in networks:
+            networks.remove(name)
+            self.set("defaults.networks", networks)
+            return True
+        return False
+
+    def list_volumes(self) -> List[str]:
+        """Get all configured default volumes."""
+        return self.get("defaults.volumes", [])
+
+    def add_volume(self, volume: str) -> None:
+        """Add a new default volume mapping."""
+        volumes = self.list_volumes()
+        if volume not in volumes:
+            volumes.append(volume)
+            self.set("defaults.volumes", volumes)
+
+    def remove_volume(self, volume: str) -> bool:
+        """Remove a default volume mapping.
+
+        Returns:
+            True if volume was removed, False if it didn't exist
+        """
+        volumes = self.list_volumes()
+        if volume in volumes:
+            volumes.remove(volume)
+            self.set("defaults.volumes", volumes)
+            return True
+        return False
+
+    def list_ports(self) -> List[int]:
+        """Get all configured default ports."""
+        return self.get("defaults.ports", [])
+
+    def add_port(self, port: int) -> None:
+        """Add a new default port."""
+        ports = self.list_ports()
+        if port not in ports:
+            ports.append(port)
+            self.set("defaults.ports", ports)
+
+    def remove_port(self, port: int) -> bool:
+        """Remove a default port.
+
+        Returns:
+            True if port was removed, False if it didn't exist
+        """
+        ports = self.list_ports()
+        if port in ports:
+            ports.remove(port)
+            self.set("defaults.ports", ports)
+            return True
+        return False
+
+    # Model management methods
+    def list_provider_models(self, provider_name: str) -> List[Dict[str, str]]:
+        """Get all models for a specific provider.
+
+        Args:
+            provider_name: Name of the provider
+
+        Returns:
+            List of model dictionaries with 'id' and 'name' keys
+        """
+        provider_config = self.get_provider(provider_name)
+        if not provider_config:
+            return []
+
+        models = provider_config.get("models", [])
+        normalized_models = []
+        for model in models:
+            if isinstance(model, str):
+                normalized_models.append({"id": model})
+            elif isinstance(model, dict):
+                model_id = model.get("id", "")
+                if model_id:
+                    normalized_models.append({"id": model_id})
+
+        return normalized_models
+
+    def set_provider_models(
+        self, provider_name: str, models: List[Dict[str, str]]
+    ) -> None:
+        """Set the models for a specific provider.
+
+        Args:
+            provider_name: Name of the provider
+            models: List of model dictionaries with 'id' and optional 'name' keys
+        """
+        provider_config = self.get_provider(provider_name)
+        if not provider_config:
+            return
+
+        # Normalize models - ensure each has id, name defaults to id
+        normalized_models = []
+        for model in models:
+            if isinstance(model, dict) and "id" in model:
+                normalized_model = {
+                    "id": model["id"],
+                }
+                normalized_models.append(normalized_model)
+
+        provider_config["models"] = normalized_models
+        self.set(f"providers.{provider_name}", provider_config)
+
+    def add_provider_model(
+        self, provider_name: str, model_id: str, model_name: Optional[str] = None
+    ) -> None:
+        """Add a model to a provider.
+
+        Args:
+            provider_name: Name of the provider
+            model_id: ID of the model
+            model_name: Optional display name for the model (defaults to model_id)
+        """
+        models = self.list_provider_models(provider_name)
+
+        for existing_model in models:
+            if existing_model["id"] == model_id:
+                return
+
+        new_model = {"id": model_id}
+        models.append(new_model)
+        self.set_provider_models(provider_name, models)
+
+    def remove_provider_model(self, provider_name: str, model_id: str) -> bool:
+        """Remove a model from a provider.
+
+        Args:
+            provider_name: Name of the provider
+            model_id: ID of the model to remove
+
+        Returns:
+            True if model was removed, False if it didn't exist
+        """
+        models = self.list_provider_models(provider_name)
+        original_length = len(models)
+
+        # Filter out the model with the specified ID
+        models = [model for model in models if model["id"] != model_id]
+
+        if len(models) < original_length:
+            self.set_provider_models(provider_name, models)
+            return True
+        return False
+
+    def is_provider_openai_compatible(self, provider_name: str) -> bool:
+        provider_config = self.get_provider(provider_name)
+        if not provider_config:
+            return False
+
+        provider_type = provider_config.get("type", "")
+        return provider_type == "openai" and provider_config.get("base_url") is not None
+
+    def supports_model_fetching(self, provider_name: str) -> bool:
+        """Check if a provider supports model fetching via API."""
+        from .config import PROVIDER_DEFAULT_URLS
+
+        provider = self.get_provider(provider_name)
+        if not provider:
+            return False
+
+        provider_type = provider.get("type")
+        base_url = provider.get("base_url")
+
+        # Provider supports model fetching if:
+        # 1. It has a custom base_url (OpenAI-compatible), OR
+        # 2. It's a standard provider type that we support
+        return base_url is not None or provider_type in PROVIDER_DEFAULT_URLS
+
+    def list_openai_compatible_providers(self) -> List[str]:
+        providers = self.list_providers()
+        compatible_providers = []
+
+        for provider_name in providers.keys():
+            if self.is_provider_openai_compatible(provider_name):
+                compatible_providers.append(provider_name)
+
+        return compatible_providers
+
+    def list_model_fetchable_providers(self) -> List[str]:
+        """List all providers that support model fetching."""
+        providers = self.list_providers()
+        fetchable_providers = []
+
+        for provider_name in providers.keys():
+            if self.supports_model_fetching(provider_name):
+                fetchable_providers.append(provider_name)
+
+        return fetchable_providers
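Editorial aside: several of the methods added above (`resolve_model`, `get_provider_environment_variables`, `_auto_discover_providers`) store API keys as `${VAR}` references and resolve them against the environment at use time. A standalone sketch of that resolution step, extracted from the diff (`EXAMPLE_KEY` is a hypothetical variable name for illustration):

```python
import os


def resolve_reference(api_key: str) -> str:
    # Mirrors the "${VAR}" handling in the diff above: a value like
    # "${OPENAI_API_KEY}" is looked up in the environment (empty string
    # if unset); any other value is returned as-is.
    if api_key.startswith("${") and api_key.endswith("}"):
        return os.environ.get(api_key[2:-1], "")
    return api_key


os.environ["EXAMPLE_KEY"] = "sk-test"       # hypothetical variable
print(resolve_reference("${EXAMPLE_KEY}"))  # -> sk-test
print(resolve_reference("literal-value"))   # -> literal-value
```

Storing a reference instead of the raw key means the configuration file never contains the secret itself; only the container environment at resolution time does.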
@@ -1,6 +1,6 @@
 [project]
 name = "cubbi"
-version = "0.2.0"
+version = "0.5.0"
 description = "Cubbi Container Tool"
 readme = "README.md"
 requires-python = ">=3.12"
@@ -14,6 +14,8 @@ dependencies = [
     "pyyaml>=6.0.1",
     "rich>=13.6.0",
     "pydantic>=2.5.0",
+    "questionary>=2.0.0",
+    "requests>=2.32.3",
 ]
 classifiers = [
     "Development Status :: 3 - Alpha",
@@ -45,6 +47,13 @@ cubbix = "cubbi.cli:session_create_entry_point"
 line-length = 88
 target-version = "py312"
 
+[tool.pytest.ini_options]
+# Exclude integration tests by default
+addopts = "-v --tb=short -m 'not integration'"
+markers = [
+    "integration: marks tests as integration tests (deselected by default)",
+]
+
 [tool.mypy]
 python_version = "3.12"
 warn_return_any = true
tests/README_integration.md (new file, 83 lines)
@@ -0,0 +1,83 @@
+# Integration Tests
+
+This directory contains integration tests for cubbi images with different model combinations.
+
+## Test Matrix
+
+The integration tests cover:
+- **5 Images**: goose, aider, claudecode, opencode, crush
+- **4 Models**: anthropic/claude-sonnet-4-20250514, openai/gpt-4o, openrouter/openai/gpt-4o, litellm/gpt-oss:120b
+- **Total**: 20 image/model combinations + additional tests
+
+## Running Tests
+
+### Default (Skip Integration)
+```bash
+# Regular tests only (integration tests excluded by default)
+uv run -m pytest
+
+# Specific test file (excluding integration)
+uv run -m pytest tests/test_cli.py
+```
+
+### Integration Tests Only
+```bash
+# Run all integration tests (20 combinations + helpers)
+uv run -m pytest -m integration
+
+# Run specific image with all models
+uv run -m pytest -m integration -k "goose"
+
+# Run specific model with all images
+uv run -m pytest -m integration -k "anthropic"
+
+# Run single combination
+uv run -m pytest -m integration -k "goose and anthropic"
+
+# Verbose output with timing
+uv run -m pytest -m integration -v -s
+```
+
+### Combined Tests
+```bash
+# Run both regular and integration tests
+uv run -m pytest -m "not slow"  # or remove the default marker exclusion
+```
+
+## Test Structure
+
+### `test_image_model_combination`
+- Parametrized test with all image/model combinations
+- Tests single prompt/response functionality
+- Uses appropriate command syntax for each tool
+- Verifies successful completion and basic output
+
+### `test_image_help_command`
+- Tests help command for each image
+- Ensures basic functionality works
+
+### `test_all_images_available`
+- Verifies all required images are built and available
+
+## Command Templates
+
+Each image uses its specific command syntax:
+- **goose**: `goose run -t 'prompt' --no-session --quiet`
+- **aider**: `aider --message 'prompt' --yes-always --no-fancy-input --no-check-update --no-auto-commits`
+- **claudecode**: `claude -p 'prompt'`
+- **opencode**: `opencode run -m MODEL 'prompt'`
+- **crush**: `crush run 'prompt'`
+
+## Expected Results
+
+All tests should pass when:
+1. Images are built (`uv run -m cubbi.cli image build [IMAGE]`)
+2. API keys are configured (`uv run -m cubbi.cli configure`)
+3. Models are accessible and working
+
+## Debugging Failed Tests
+
+If tests fail, check:
+1. Image availability: `uv run -m cubbi.cli image list`
+2. Configuration: `uv run -m cubbi.cli config list`
+3. Manual test: `uv run -m cubbi.cli session create -i IMAGE -m MODEL --run "COMMAND"`
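Editorial aside: the test matrix described above is a full cross product of images and models. A minimal sketch of how the 20 combinations enumerate (lists taken verbatim from the matrix; the `itertools.product` form is our illustration of the parametrization, not necessarily the test suite's exact code):

```python
from itertools import product

# The 5 images and 4 models from the test matrix above.
IMAGES = ["goose", "aider", "claudecode", "opencode", "crush"]
MODELS = [
    "anthropic/claude-sonnet-4-20250514",
    "openai/gpt-4o",
    "openrouter/openai/gpt-4o",
    "litellm/gpt-oss:120b",
]

# Cross product: 5 images x 4 models = 20 combinations.
COMBINATIONS = list(product(IMAGES, MODELS))
print(len(COMBINATIONS))  # -> 20
```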
@@ -2,17 +2,18 @@
 Common test fixtures for Cubbi Container tests.
 """
 
-import uuid
 import tempfile
-import pytest
-import docker
+import uuid
 from pathlib import Path
 from unittest.mock import patch
 
-from cubbi.container import ContainerManager
-from cubbi.session import SessionManager
+import docker
+import pytest
+
 from cubbi.config import ConfigManager
+from cubbi.container import ContainerManager
 from cubbi.models import Session, SessionStatus
+from cubbi.session import SessionManager
 from cubbi.user_config import UserConfigManager
 
 
@@ -41,13 +42,6 @@ requires_docker = pytest.mark.skipif(
 )
 
 
-@pytest.fixture
-def temp_dir():
-    """Create a temporary directory for test files."""
-    with tempfile.TemporaryDirectory() as tmp_dir:
-        yield Path(tmp_dir)
-
-
 @pytest.fixture
 def temp_config_dir():
     """Create a temporary directory for configuration files."""
@@ -56,76 +50,26 @@ def temp_config_dir():
 
 
 @pytest.fixture
-def isolated_config(temp_config_dir):
-    """Provide an isolated UserConfigManager instance."""
-    config_path = temp_config_dir / "config.yaml"
-    config_path.parent.mkdir(parents=True, exist_ok=True)
-    return UserConfigManager(str(config_path))
-
-
-@pytest.fixture
-def isolated_session_manager(temp_config_dir):
-    """Create an isolated session manager for testing."""
-    sessions_path = temp_config_dir / "sessions.yaml"
-    return SessionManager(sessions_path)
-
-
-@pytest.fixture
-def isolated_config_manager():
-    """Create an isolated config manager for testing."""
-    config_manager = ConfigManager()
-    # Ensure we're using the built-in images, not trying to load from user config
-    return config_manager
-
-
-@pytest.fixture
-def mock_session_manager():
-    """Mock the SessionManager class."""
-    with patch("cubbi.cli.session_manager") as mock_manager:
-        yield mock_manager
-
-
-@pytest.fixture
-def mock_container_manager():
-    """Mock the ContainerManager class with proper initialization."""
+def mock_container_manager(isolate_cubbi_config):
+    """Mock the ContainerManager class with proper behaviors for testing."""
     mock_session = Session(
         id="test-session-id",
         name="test-session",
         image="goose",
         status=SessionStatus.RUNNING,
-        ports={"8080": "8080"},
+        ports={8080: 32768},
     )
 
-    with patch("cubbi.cli.container_manager") as mock_manager:
-        # Set behaviors to avoid TypeErrors
-        mock_manager.list_sessions.return_value = []
-        mock_manager.create_session.return_value = mock_session
-        mock_manager.close_session.return_value = True
-        mock_manager.close_all_sessions.return_value = (3, True)
-        # MCP-related mocks
-        mock_manager.get_mcp_status.return_value = {
-            "status": "running",
-            "container_id": "test-id",
-        }
-        mock_manager.start_mcp.return_value = {
-            "status": "running",
-            "container_id": "test-id",
-        }
-        mock_manager.stop_mcp.return_value = True
-        mock_manager.restart_mcp.return_value = {
-            "status": "running",
-            "container_id": "test-id",
-        }
-        mock_manager.get_mcp_logs.return_value = "Test log output"
-        yield mock_manager
-
-
-@pytest.fixture
-def container_manager(isolated_session_manager, isolated_config_manager):
-    """Create a container manager with isolated components."""
-    return ContainerManager(
-        config_manager=isolated_config_manager, session_manager=isolated_session_manager
-    )
+    container_manager = isolate_cubbi_config["container_manager"]
+
+    # Patch the container manager methods for mocking
+    with (
+        patch.object(container_manager, "list_sessions", return_value=[]),
+        patch.object(container_manager, "create_session", return_value=mock_session),
+        patch.object(container_manager, "close_session", return_value=True),
+        patch.object(container_manager, "close_all_sessions", return_value=(3, True)),
+    ):
+        yield container_manager
 
 
 @pytest.fixture
@@ -137,28 +81,23 @@ def cli_runner():
 
 
 @pytest.fixture
-def test_file_content(temp_dir):
-    """Create a test file with content in the temporary directory."""
+def test_file_content(temp_config_dir):
+    """Create a test file with content in a temporary directory."""
     test_content = "This is a test file for volume mounting"
-    test_file = temp_dir / "test_volume_file.txt"
+    test_file = temp_config_dir / "test_volume_file.txt"
     with open(test_file, "w") as f:
         f.write(test_content)
     return test_file, test_content
 
 
 @pytest.fixture
-def test_network_name():
-    """Generate a unique network name for testing."""
-    return f"cubbi-test-network-{uuid.uuid4().hex[:8]}"
-
-
-@pytest.fixture
-def docker_test_network(test_network_name):
+def docker_test_network():
     """Create a Docker network for testing and clean it up after."""
     if not is_docker_available():
         pytest.skip("Docker is not available")
         return None
 
+    test_network_name = f"cubbi-test-network-{uuid.uuid4().hex[:8]}"
     client = docker.from_env()
     network = client.networks.create(test_network_name, driver="bridge")
 
@@ -172,8 +111,59 @@ def docker_test_network(test_network_name):
         pass
 
 
+@pytest.fixture(autouse=True, scope="function")
+def isolate_cubbi_config(temp_config_dir):
+    """
+    Automatically isolate all Cubbi configuration for every test.
+
+    This fixture ensures that tests never touch the user's real configuration
+    by patching both ConfigManager and UserConfigManager in cli.py to use
+    temporary directories.
+    """
+    # Create isolated config instances with temporary paths
+    config_path = temp_config_dir / "config.yaml"
+    user_config_path = temp_config_dir / "user_config.yaml"
+
+    # Create the ConfigManager with a custom config path
+    isolated_config_manager = ConfigManager(config_path)
+
+    # Create the UserConfigManager with a custom config path
+    isolated_user_config = UserConfigManager(str(user_config_path))
+
+    # Create isolated session manager
+    sessions_path = temp_config_dir / "sessions.yaml"
+    isolated_session_manager = SessionManager(sessions_path)
+
+    # Create isolated container manager
+    isolated_container_manager = ContainerManager(
+        isolated_config_manager, isolated_session_manager, isolated_user_config
+    )
+
+    # Patch all the global instances in cli.py and the UserConfigManager class
+    with (
+        patch("cubbi.cli.config_manager", isolated_config_manager),
+        patch("cubbi.cli.user_config", isolated_user_config),
+        patch("cubbi.cli.session_manager", isolated_session_manager),
+        patch("cubbi.cli.container_manager", isolated_container_manager),
+        patch("cubbi.cli.UserConfigManager", return_value=isolated_user_config),
+    ):
+        # Create isolated MCP manager with isolated user config
+        from cubbi.mcp import MCPManager
+
+        isolated_mcp_manager = MCPManager(config_manager=isolated_user_config)
+
+        # Patch the global mcp_manager instance
+        with patch("cubbi.cli.mcp_manager", isolated_mcp_manager):
+            yield {
+                "config_manager": isolated_config_manager,
+                "user_config": isolated_user_config,
+                "session_manager": isolated_session_manager,
+                "container_manager": isolated_container_manager,
+                "mcp_manager": isolated_mcp_manager,
+            }
+
+
 @pytest.fixture
-def patched_config_manager(isolated_config):
-    """Patch the UserConfigManager in cli.py to use our isolated instance."""
-    with patch("cubbi.cli.user_config", isolated_config):
-        yield isolated_config
+def patched_config_manager(isolate_cubbi_config):
+    """Compatibility fixture - returns the isolated user config."""
+    return isolate_cubbi_config["user_config"]
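Editorial aside: the conftest changes above rely on `unittest.mock.patch` swapping module-level globals for the duration of a test and restoring them on exit. A standalone sketch of that pattern (the `cli` namespace here is a stand-in for a module like `cubbi.cli`, and the paths are made up for illustration):

```python
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in for a module that holds global manager objects.
cli = SimpleNamespace(user_config={"path": "~/.config/cubbi"})

isolated = {"path": "/tmp/test-config"}
with patch.object(cli, "user_config", isolated):
    # Inside the block, anything reading cli.user_config sees the
    # isolated copy instead of the "real" configuration.
    print(cli.user_config["path"])  # -> /tmp/test-config

# On exit, patch restores the original global automatically.
print(cli.user_config["path"])  # -> ~/.config/cubbi
```

Making such a fixture `autouse=True`, as the diff does, applies this swap to every test without each test having to request it.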
@@ -189,4 +189,103 @@ def test_config_reset(cli_runner, patched_config_manager, monkeypatch):
     assert patched_config_manager.get("defaults.image") == "goose"
+
+
+def test_port_list_empty(cli_runner, patched_config_manager):
+    """Test listing ports when none are configured."""
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+
+    assert result.exit_code == 0
+    assert "No default ports configured" in result.stdout
+
+
+def test_port_add_single(cli_runner, patched_config_manager):
+    """Test adding a single port."""
+    result = cli_runner.invoke(app, ["config", "port", "add", "8000"])
+
+    assert result.exit_code == 0
+    assert "Added port 8000 to defaults" in result.stdout
+
+    # Verify it was added
+    ports = patched_config_manager.get("defaults.ports")
+    assert 8000 in ports
+
+
+def test_port_add_multiple(cli_runner, patched_config_manager):
+    """Test adding multiple ports with comma separation."""
+    result = cli_runner.invoke(app, ["config", "port", "add", "8000,3000,5173"])
+
+    assert result.exit_code == 0
+    assert "Added ports [8000, 3000, 5173] to defaults" in result.stdout
+
+    # Verify they were added
+    ports = patched_config_manager.get("defaults.ports")
+    assert 8000 in ports
+    assert 3000 in ports
+    assert 5173 in ports
+
+
+def test_port_add_duplicate(cli_runner, patched_config_manager):
+    """Test adding a port that already exists."""
+    # Add a port first
+    patched_config_manager.set("defaults.ports", [8000])
+
+    # Try to add the same port again
+    result = cli_runner.invoke(app, ["config", "port", "add", "8000"])
+
+    assert result.exit_code == 0
+    assert "Port 8000 is already in defaults" in result.stdout
+
+
+def test_port_add_invalid_format(cli_runner, patched_config_manager):
+    """Test adding an invalid port format."""
+    result = cli_runner.invoke(app, ["config", "port", "add", "invalid"])
+
+    assert result.exit_code == 0
+    assert "Error: Invalid port format" in result.stdout
+
+
+def test_port_add_invalid_range(cli_runner, patched_config_manager):
+    """Test adding a port outside valid range."""
+    result = cli_runner.invoke(app, ["config", "port", "add", "70000"])
+
+    assert result.exit_code == 0
+    assert "Error: Invalid ports [70000]" in result.stdout
+
+
+def test_port_list_with_ports(cli_runner, patched_config_manager):
+    """Test listing ports when some are configured."""
+    # Add some ports
+    patched_config_manager.set("defaults.ports", [8000, 3000])
+
+    # List ports
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+
+    assert result.exit_code == 0
+    assert "8000" in result.stdout
+    assert "3000" in result.stdout
+
+
+def test_port_remove(cli_runner, patched_config_manager):
+    """Test removing a port."""
+    # Add a port first
+    patched_config_manager.set("defaults.ports", [8000])
+
+    # Remove the port
+    result = cli_runner.invoke(app, ["config", "port", "remove", "8000"])
+
+    assert result.exit_code == 0
+    assert "Removed port 8000 from defaults" in result.stdout
+
+    # Verify it's gone
+    ports = patched_config_manager.get("defaults.ports")
+    assert 8000 not in ports
+
+
+def test_port_remove_not_found(cli_runner, patched_config_manager):
+    """Test removing a port that doesn't exist."""
+    result = cli_runner.invoke(app, ["config", "port", "remove", "8000"])
+
+    assert result.exit_code == 0
+    assert "Port 8000 is not in defaults" in result.stdout
+
+
 # patched_config_manager fixture is now in conftest.py
tests/test_config_isolation.py (new file, 90 lines)
@@ -0,0 +1,90 @@
+"""
+Test that configuration isolation works correctly and doesn't touch user's real config.
+"""
+
+from pathlib import Path
+from cubbi.cli import app
+
+
+def test_config_isolation_preserves_user_config(cli_runner, isolate_cubbi_config):
+    """Test that test isolation doesn't affect user's real configuration."""
+
+    # Get the user's real config path
+    real_config_path = Path.home() / ".config" / "cubbi" / "config.yaml"
+
+    # If the user has a real config, store its content before test
+    original_content = None
+    if real_config_path.exists():
+        with open(real_config_path, "r") as f:
+            original_content = f.read()
+
+    # Run some config modification commands in the test
+    result = cli_runner.invoke(app, ["config", "port", "add", "9999"])
+    assert result.exit_code == 0
+
+    result = cli_runner.invoke(app, ["config", "set", "defaults.image", "test-image"])
+    assert result.exit_code == 0
+
+    # Verify the user's real config is unchanged
+    if original_content is not None:
+        with open(real_config_path, "r") as f:
+            current_content = f.read()
+        assert current_content == original_content
+    else:
+        # If no real config existed, it should still not exist
+        assert not real_config_path.exists()
+
+
+def test_isolated_config_works_independently(cli_runner, isolate_cubbi_config):
+    """Test that the isolated config works correctly for tests."""
+
+    # Add a port to isolated config
+    result = cli_runner.invoke(app, ["config", "port", "add", "8888"])
+    assert result.exit_code == 0
+    assert "Added port 8888 to defaults" in result.stdout
+
+    # Verify it appears in the list
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert result.exit_code == 0
+    assert "8888" in result.stdout
+
+    # Remove the port
+    result = cli_runner.invoke(app, ["config", "port", "remove", "8888"])
+    assert result.exit_code == 0
+    assert "Removed port 8888 from defaults" in result.stdout
+
+    # Verify it's gone
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert result.exit_code == 0
+    assert "No default ports configured" in result.stdout
+
+
+def test_each_test_gets_fresh_config(cli_runner, isolate_cubbi_config):
+    """Test that each test gets a fresh, isolated configuration."""
+
+    # This test should start with empty ports (fresh config)
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert result.exit_code == 0
+    assert "No default ports configured" in result.stdout
+
+    # Add a port
+    result = cli_runner.invoke(app, ["config", "port", "add", "7777"])
+    assert result.exit_code == 0
+
+    # Verify it's there
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert result.exit_code == 0
+    assert "7777" in result.stdout
+
+
+def test_another_fresh_config_test(cli_runner, isolate_cubbi_config):
+    """Another test to verify each test gets a completely fresh config."""
+
+    # This test should also start with empty ports (independent of previous test)
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert result.exit_code == 0
+    assert "No default ports configured" in result.stdout
+
+    # The port from the previous test should not be here
+    result = cli_runner.invoke(app, ["config", "port", "list"])
+    assert "7777" not in result.stdout
tests/test_integration.py (new file, 135 lines)
@@ -0,0 +1,135 @@
+"""Integration tests for cubbi images with different model combinations."""
+
+import subprocess
+import pytest
+from typing import Dict
+
+
+IMAGES = ["goose", "aider", "opencode", "crush"]
+
+MODELS = [
+    "anthropic/claude-sonnet-4-20250514",
+    "openai/gpt-4o",
+    "openrouter/openai/gpt-4o",
+    "litellm/gpt-oss:120b",
+]
+
+# Command templates for each tool (based on research)
+COMMANDS: Dict[str, str] = {
+    "goose": "goose run -t '{prompt}' --no-session --quiet",
+    "aider": "aider --message '{prompt}' --yes-always --no-fancy-input --no-check-update --no-auto-commits",
+    "opencode": "opencode run '{prompt}'",
+    "crush": "crush run -q '{prompt}'",
+}
+
+
+def run_cubbi_command(
+    image: str, model: str, command: str, timeout: int = 20
+) -> subprocess.CompletedProcess:
+    """Run a cubbi command with specified image, model, and command."""
+    full_command = [
+        "uv",
+        "run",
+        "-m",
+        "cubbi.cli",
+        "session",
+        "create",
+        "-i",
+        image,
+        "-m",
+        model,
+        "--no-connect",
+        "--no-shell",
+        "--run",
+        command,
+    ]
+
+    return subprocess.run(
+        full_command,
+        capture_output=True,
+        text=True,
+        timeout=timeout,
+        cwd="/home/tito/code/monadical/cubbi",
+    )
+
+
+def is_successful_response(result: subprocess.CompletedProcess) -> bool:
+    """Check if the cubbi command completed successfully."""
+    # Check for successful completion markers
+    return (
+        result.returncode == 0
+        and "Initial command finished (exit code: 0)" in result.stdout
+        and "Command execution complete" in result.stdout
+    )
+
+
+@pytest.mark.integration
+@pytest.mark.parametrize("image", IMAGES)
+@pytest.mark.parametrize("model", MODELS)
+def test_image_model_combination(image: str, model: str):
+    """Test each image with each model using appropriate command syntax."""
+    prompt = "What is 2+2?"
+
+    # Get the command template for this image
+    command_template = COMMANDS[image]
+
+    # For opencode, we need to substitute the model in the command
+    if image == "opencode":
+        command = command_template.format(prompt=prompt, model=model)
+    else:
+        command = command_template.format(prompt=prompt)
+
+    # Run the test with timeout handling
+    try:
+        result = run_cubbi_command(image, model, command)
+    except subprocess.TimeoutExpired:
+        pytest.fail(f"Test timed out after 20s for {image} with {model}")
+
+    # Check if the command was successful
+    assert is_successful_response(result), (
+        f"Failed to run {image} with {model}. "
+        f"Return code: {result.returncode}\n"
+        f"Stdout: {result.stdout}\n"
+        f"Stderr: {result.stderr}"
+    )
+
+
+@pytest.mark.integration
+def test_all_images_available():
+    """Test that all required images are available for testing."""
+    # Run image list command
+    result = subprocess.run(
+        ["uv", "run", "-m", "cubbi.cli", "image", "list"],
+        capture_output=True,
+        text=True,
+        timeout=30,
+        cwd="/home/tito/code/monadical/cubbi",
+    )
+
+    assert result.returncode == 0, f"Failed to list images: {result.stderr}"
+
+    for image in IMAGES:
+        assert image in result.stdout, f"Image {image} not found in available images"
+
+
+@pytest.mark.integration
+def test_claudecode():
+    """Test Claude Code without model preselection since it only supports Anthropic."""
+    command = "claude -p hello"
+
+    try:
+        result = run_cubbi_command("claudecode", MODELS[0], command, timeout=20)
+    except subprocess.TimeoutExpired:
+        pytest.fail("Claude Code help command timed out after 20s")
+
+    assert is_successful_response(result), (
+        f"Failed to run Claude Code help command. "
+        f"Return code: {result.returncode}\n"
+        f"Stdout: {result.stdout}\n"
+        f"Stderr: {result.stderr}"
+    )
+
+
+if __name__ == "__main__":
+    # Allow running the test file directly for development
+    pytest.main([__file__, "-v", "-m", "integration"])
@@ -6,6 +6,8 @@ These tests require Docker to be running.
 import subprocess
 import time
 import uuid
 
+import docker
+
 # Import the requires_docker decorator from conftest
 from conftest import requires_docker
@@ -21,13 +23,42 @@ def execute_command_in_container(container_id, command):
     return result.stdout.strip()
 
 
+def wait_for_container_init(container_id, timeout=5.0, poll_interval=0.1):
+    start_time = time.time()
+
+    while time.time() - start_time < timeout:
+        try:
+            # Check if /cubbi/init.status contains INIT_COMPLETE=true
+            result = execute_command_in_container(
+                container_id,
+                "grep -q 'INIT_COMPLETE=true' /cubbi/init.status 2>/dev/null && echo 'COMPLETE' || echo 'PENDING'",
+            )
+
+            if result == "COMPLETE":
+                return True
+
+        except subprocess.CalledProcessError:
+            # File might not exist yet or container not ready, continue polling
+            pass
+
+        time.sleep(poll_interval)
+
+    # Timeout reached
+    return False
+
+
 @requires_docker
-def test_integration_session_create_with_volumes(container_manager, test_file_content):
+def test_integration_session_create_with_volumes(
+    isolate_cubbi_config, test_file_content
+):
     """Test creating a session with a volume mount."""
     test_file, test_content = test_file_content
     session = None
 
     try:
+        # Get the isolated container manager
+        container_manager = isolate_cubbi_config["container_manager"]
+
         # Create a session with a volume mount
         session = container_manager.create_session(
             image_name="goose",
@@ -39,8 +70,9 @@ def test_integration_session_create_with_volumes(container_manager, test_file_co
         assert session is not None
         assert session.status == "running"
 
-        # Give container time to fully start
-        time.sleep(2)
+        # Wait for container initialization to complete
+        init_success = wait_for_container_init(session.container_id)
+        assert init_success, "Container initialization timed out"
 
         # Verify the file exists in the container and has correct content
         container_content = execute_command_in_container(
@@ -50,19 +82,22 @@ def test_integration_session_create_with_volumes(container_manager, test_file_co
         assert container_content == test_content
 
     finally:
-        # Clean up the container
+        # Clean up the container (use kill for faster test cleanup)
         if session and session.container_id:
-            container_manager.close_session(session.id)
+            container_manager.close_session(session.id, kill=True)
 
 
 @requires_docker
 def test_integration_session_create_with_networks(
-    container_manager, docker_test_network
+    isolate_cubbi_config, docker_test_network
 ):
     """Test creating a session connected to a custom network."""
     session = None
 
     try:
+        # Get the isolated container manager
+        container_manager = isolate_cubbi_config["container_manager"]
+
         # Create a session with the test network
         session = container_manager.create_session(
             image_name="goose",
@@ -74,8 +109,9 @@ def test_integration_session_create_with_networks(
         assert session is not None
         assert session.status == "running"
 
-        # Give container time to fully start
-        time.sleep(2)
+        # Wait for container initialization to complete
+        init_success = wait_for_container_init(session.container_id)
+        assert init_success, "Container initialization timed out"
 
         # Verify the container is connected to the test network
         # Use inspect to check network connections
@@ -97,6 +133,240 @@ def test_integration_session_create_with_networks(
         assert int(network_interfaces) >= 2
 
     finally:
-        # Clean up the container
+        # Clean up the container (use kill for faster test cleanup)
         if session and session.container_id:
-            container_manager.close_session(session.id)
+            container_manager.close_session(session.id, kill=True)
+
+
+@requires_docker
+def test_integration_session_create_with_ports(isolate_cubbi_config):
+    """Test creating a session with port forwarding."""
+    session = None
+
+    try:
+        # Get the isolated container manager
+        container_manager = isolate_cubbi_config["container_manager"]
+
+        # Create a session with port forwarding
+        session = container_manager.create_session(
+            image_name="goose",
+            session_name=f"cubbi-test-ports-{uuid.uuid4().hex[:8]}",
+            mount_local=False,  # Don't mount current directory
+            ports=[8080, 9000],  # Forward these ports
+        )
+
+        assert session is not None
+        assert session.status == "running"
+
+        # Verify ports are mapped
+        assert isinstance(session.ports, dict)
+        assert 8080 in session.ports
+        assert 9000 in session.ports
+
+        # Verify port mappings are valid (host ports should be assigned)
+        assert isinstance(session.ports[8080], int)
+        assert isinstance(session.ports[9000], int)
+        assert session.ports[8080] > 0
+        assert session.ports[9000] > 0
+
+        # Wait for container initialization to complete
+        init_success = wait_for_container_init(session.container_id)
+        assert init_success, "Container initialization timed out"
+
+        # Verify Docker port mappings using Docker client
+        import docker
+
+        client = docker.from_env()
+        container = client.containers.get(session.container_id)
+        container_ports = container.attrs["NetworkSettings"]["Ports"]
+
+        # Verify both ports are exposed
+        assert "8080/tcp" in container_ports
+        assert "9000/tcp" in container_ports
+
+        # Verify host port bindings exist
+        assert container_ports["8080/tcp"] is not None
+        assert container_ports["9000/tcp"] is not None
+        assert len(container_ports["8080/tcp"]) > 0
+        assert len(container_ports["9000/tcp"]) > 0
+
+        # Verify host ports match session.ports
+        host_port_8080 = int(container_ports["8080/tcp"][0]["HostPort"])
+        host_port_9000 = int(container_ports["9000/tcp"][0]["HostPort"])
+        assert session.ports[8080] == host_port_8080
+        assert session.ports[9000] == host_port_9000
+
+    finally:
+        # Clean up the container (use kill for faster test cleanup)
+        if session and session.container_id:
+            container_manager.close_session(session.id, kill=True)
+
+
+@requires_docker
+def test_integration_session_create_no_ports(isolate_cubbi_config):
+    """Test creating a session without port forwarding."""
+    session = None
+
+    try:
+        # Get the isolated container manager
+        container_manager = isolate_cubbi_config["container_manager"]
+
+        # Create a session without ports
+        session = container_manager.create_session(
+            image_name="goose",
+            session_name=f"cubbi-test-no-ports-{uuid.uuid4().hex[:8]}",
+            mount_local=False,  # Don't mount current directory
+            ports=[],  # No ports
+        )
+
+        assert session is not None
+        assert session.status == "running"
+
+        # Verify no ports are mapped
+        assert isinstance(session.ports, dict)
+        assert len(session.ports) == 0
+
+        # Wait for container initialization to complete
+        init_success = wait_for_container_init(session.container_id)
+        assert init_success, "Container initialization timed out"
+
+        # Verify Docker has no port mappings
+        import docker
+
+        client = docker.from_env()
+        container = client.containers.get(session.container_id)
+        container_ports = container.attrs["NetworkSettings"]["Ports"]
+
+        # Should have no port mappings (empty dict or None values)
+        for port_spec, bindings in container_ports.items():
+            assert bindings is None or len(bindings) == 0
+
+    finally:
+        # Clean up the container (use kill for faster test cleanup)
+        if session and session.container_id:
+            container_manager.close_session(session.id, kill=True)
+
+
+@requires_docker
+def test_integration_session_create_with_single_port(isolate_cubbi_config):
+    """Test creating a session with a single port forward."""
+    session = None
+
+    try:
+        # Get the isolated container manager
+        container_manager = isolate_cubbi_config["container_manager"]
+
+        # Create a session with single port
+        session = container_manager.create_session(
+            image_name="goose",
+            session_name=f"cubbi-test-single-port-{uuid.uuid4().hex[:8]}",
+            mount_local=False,  # Don't mount current directory
+            ports=[3000],  # Single port
+        )
+
+        assert session is not None
+        assert session.status == "running"
+
+        # Verify single port is mapped
+        assert isinstance(session.ports, dict)
+        assert len(session.ports) == 1
+        assert 3000 in session.ports
+        assert isinstance(session.ports[3000], int)
+        assert session.ports[3000] > 0
+
+        # Wait for container initialization to complete
+        init_success = wait_for_container_init(session.container_id)
+        assert init_success, "Container initialization timed out"
+
+        client = docker.from_env()
+        container = client.containers.get(session.container_id)
+        container_ports = container.attrs["NetworkSettings"]["Ports"]
+
+        # Should have exactly one port mapping
+        port_mappings = {
+            k: v for k, v in container_ports.items() if v is not None and len(v) > 0
+        }
+        assert len(port_mappings) == 1
+        assert "3000/tcp" in port_mappings
+
+    finally:
+        # Clean up the container (use kill for faster test cleanup)
+        if session and session.container_id:
+            container_manager.close_session(session.id, kill=True)
+
+
+@requires_docker
+def test_integration_kill_vs_stop_speed(isolate_cubbi_config):
+    """Test that kill is faster than stop for container termination."""
+    import time
+
+    # Get the isolated container manager
+    container_manager = isolate_cubbi_config["container_manager"]
+
+    # Create two identical sessions for comparison
+    session_stop = None
+    session_kill = None
+
+    try:
+        # Create first session (will be stopped gracefully)
+        session_stop = container_manager.create_session(
+            image_name="goose",
+            session_name=f"cubbi-test-stop-{uuid.uuid4().hex[:8]}",
+            mount_local=False,
+            ports=[],
+        )
+
+        # Create second session (will be killed)
+        session_kill = container_manager.create_session(
+            image_name="goose",
+            session_name=f"cubbi-test-kill-{uuid.uuid4().hex[:8]}",
+            mount_local=False,
+            ports=[],
+        )
+
+        assert session_stop is not None
+        assert session_kill is not None
+
+        # Wait for both containers to initialize
+        init_success_stop = wait_for_container_init(session_stop.container_id)
+        init_success_kill = wait_for_container_init(session_kill.container_id)
+        assert init_success_stop, "Stop test container initialization timed out"
+        assert init_success_kill, "Kill test container initialization timed out"
+
+        # Time graceful stop
+        start_time = time.time()
+        container_manager.close_session(session_stop.id, kill=False)
+        stop_time = time.time() - start_time
+        session_stop = None  # Mark as cleaned up
+
+        # Time kill
+        start_time = time.time()
+        container_manager.close_session(session_kill.id, kill=True)
+        kill_time = time.time() - start_time
+        session_kill = None  # Mark as cleaned up
+
+        # Kill should be faster than stop (usually by several seconds)
+        # We use a generous threshold since system performance can vary
+        assert (
+            kill_time < stop_time
+        ), f"Kill ({kill_time:.2f}s) should be faster than stop ({stop_time:.2f}s)"
+
+        # Verify both methods successfully closed the containers
+        # (containers should no longer be in the session list)
+        remaining_sessions = container_manager.list_sessions()
+        session_ids = [s.id for s in remaining_sessions]
+        assert session_stop.id if session_stop else "stop-session" not in session_ids
+        assert session_kill.id if session_kill else "kill-session" not in session_ids
+
+    finally:
+        # Clean up any remaining containers
+        if session_stop and session_stop.container_id:
+            try:
+                container_manager.close_session(session_stop.id, kill=True)
+            except Exception:
+                pass
+        if session_kill and session_kill.container_id:
+            try:
+                container_manager.close_session(session_kill.id, kill=True)
+            except Exception:
+                pass
@@ -21,7 +21,7 @@ def test_mcp_list_empty(cli_runner, patched_config_manager):
|
|||||||
assert "No MCP servers configured" in result.stdout
|
assert "No MCP servers configured" in result.stdout
|
||||||
|
|
||||||
|
|
||||||
def test_mcp_add_remote(cli_runner, patched_config_manager):
|
def test_mcp_add_remote(cli_runner, isolate_cubbi_config):
|
||||||
"""Test adding a remote MCP server and listing it."""
|
"""Test adding a remote MCP server and listing it."""
|
||||||
# Add a remote MCP server
|
# Add a remote MCP server
|
||||||
result = cli_runner.invoke(
|
result = cli_runner.invoke(
|
||||||
@@ -49,7 +49,7 @@ def test_mcp_add_remote(cli_runner, patched_config_manager):
|
|||||||
assert "http://mcp-se" in result.stdout # Truncated in table view
|
assert "http://mcp-se" in result.stdout # Truncated in table view
|
||||||
|
|
||||||
|
|
||||||
def test_mcp_add(cli_runner, patched_config_manager):
|
def test_mcp_add(cli_runner, isolate_cubbi_config):
|
||||||
"""Test adding a proxy-based MCP server and listing it."""
|
"""Test adding a proxy-based MCP server and listing it."""
|
||||||
# Add a Docker MCP server
|
# Add a Docker MCP server
|
||||||
result = cli_runner.invoke(
|
result = cli_runner.invoke(
|
||||||
@@ -93,21 +93,212 @@ def test_mcp_remove(cli_runner, patched_config_manager):
|
|||||||
],
|
],
|
||||||
)
|
)
|
||||||
|
|
||||||
# Mock the get_mcp and remove_mcp methods
|
# Mock the container_manager.list_sessions to return sessions without MCPs
|
||||||
with patch("cubbi.cli.mcp_manager.get_mcp") as mock_get_mcp:
|
with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
|
||||||
# First make get_mcp return our MCP
|
mock_list_sessions.return_value = []
|
||||||
mock_get_mcp.return_value = {
|
|
||||||
"name": "test-mcp",
|
|
||||||
"type": "remote",
|
|
||||||
"url": "http://test-server.com/sse",
|
|
||||||
"headers": {"Authorization": "Bearer test-token"},
|
|
||||||
}
|
|
||||||
|
|
||||||
# Remove the MCP server
|
# Mock the remove_mcp method
|
||||||
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
|
with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
|
||||||
|
# Make remove_mcp return True (successful removal)
|
||||||
|
mock_remove_mcp.return_value = True
|
||||||
|
|
||||||
# Just check it ran successfully with exit code 0
|
# Remove the MCP server
|
||||||
assert result.exit_code == 0
|
result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
|
||||||
|
|
||||||
|
# Just check it ran successfully with exit code 0
|
||||||
|
assert result.exit_code == 0
|
||||||
|
assert "Removed MCP server 'test-mcp'" in result.stdout
|
||||||
+
+
+def test_mcp_remove_with_active_sessions(cli_runner, patched_config_manager):
+    """Test removing an MCP server that is used by active sessions."""
+    from cubbi.models import Session, SessionStatus
+
+    # Add a remote MCP server
+    patched_config_manager.set(
+        "mcps",
+        [
+            {
+                "name": "test-mcp",
+                "type": "remote",
+                "url": "http://test-server.com/sse",
+                "headers": {"Authorization": "Bearer test-token"},
+            }
+        ],
+    )
+
+    # Create mock sessions that use the MCP
+    mock_sessions = [
+        Session(
+            id="session-1",
+            name="test-session-1",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-1",
+            mcps=["test-mcp", "other-mcp"],
+        ),
+        Session(
+            id="session-2",
+            name="test-session-2",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-2",
+            mcps=["other-mcp"],  # This one doesn't use test-mcp
+        ),
+        Session(
+            id="session-3",
+            name="test-session-3",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            container_id="container-3",
+            mcps=["test-mcp"],  # This one uses test-mcp
+        ),
+    ]
+
+    # Mock the container_manager.list_sessions to return our sessions
+    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
+        mock_list_sessions.return_value = mock_sessions
+
+        # Mock the remove_mcp method
+        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
+            # Make remove_mcp return True (successful removal)
+            mock_remove_mcp.return_value = True
+
+            # Remove the MCP server
+            result = cli_runner.invoke(app, ["mcp", "remove", "test-mcp"])
+
+            # Check it ran successfully with exit code 0
+            assert result.exit_code == 0
+            assert "Removed MCP server 'test-mcp'" in result.stdout
+            # Check warning about affected sessions
+            assert (
+                "Warning: Found 2 active sessions using MCP 'test-mcp'" in result.stdout
+            )
+            assert "session-1" in result.stdout
+            assert "session-3" in result.stdout
+            # session-2 should not be mentioned since it doesn't use test-mcp
+            assert "session-2" not in result.stdout
+
+
+def test_mcp_remove_nonexistent(cli_runner, patched_config_manager):
+    """Test removing a non-existent MCP server."""
+    # No MCPs configured
+    patched_config_manager.set("mcps", [])
+
+    # Mock the container_manager.list_sessions to return empty list
+    with patch("cubbi.cli.container_manager.list_sessions") as mock_list_sessions:
+        mock_list_sessions.return_value = []
+
+        # Mock the remove_mcp method to return False (MCP not found)
+        with patch("cubbi.cli.mcp_manager.remove_mcp") as mock_remove_mcp:
+            mock_remove_mcp.return_value = False
+
+            # Try to remove a non-existent MCP server
+            result = cli_runner.invoke(app, ["mcp", "remove", "nonexistent-mcp"])
+
+            # Check it ran successfully but reported not found
+            assert result.exit_code == 0
+            assert "MCP server 'nonexistent-mcp' not found" in result.stdout
+
+
+def test_session_mcps_attribute():
+    """Test that Session model has mcps attribute and can be populated correctly."""
+    from cubbi.models import Session, SessionStatus
+
+    # Test that Session can be created with mcps attribute
+    session = Session(
+        id="test-session",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        container_id="test-container",
+        mcps=["mcp1", "mcp2"],
+    )
+
+    assert session.mcps == ["mcp1", "mcp2"]
+
+    # Test that Session can be created with empty mcps list
+    session_empty = Session(
+        id="test-session-2",
+        name="test-session-2",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        container_id="test-container-2",
+    )
+
+    assert session_empty.mcps == []  # Should default to empty list
+
+
+def test_session_mcps_from_container_labels():
+    """Test that Session mcps are correctly populated from container labels."""
+    from unittest.mock import Mock
+
+    from cubbi.container import ContainerManager
+
+    # Mock a container with MCP labels
+    mock_container = Mock()
+    mock_container.id = "test-container-id"
+    mock_container.status = "running"
+    mock_container.labels = {
+        "cubbi.session": "true",
+        "cubbi.session.id": "test-session",
+        "cubbi.session.name": "test-session-name",
+        "cubbi.image": "goose",
+        "cubbi.mcps": "mcp1,mcp2,mcp3",  # Test with multiple MCPs
+    }
+    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
+
+    # Mock Docker client
+    mock_client = Mock()
+    mock_client.containers.list.return_value = [mock_container]
+
+    # Create container manager with mocked client
+    with patch("cubbi.container.docker.from_env") as mock_docker:
+        mock_docker.return_value = mock_client
+        mock_client.ping.return_value = True
+
+        container_manager = ContainerManager()
+        sessions = container_manager.list_sessions()
+
+        assert len(sessions) == 1
+        session = sessions[0]
+        assert session.id == "test-session"
+        assert session.mcps == ["mcp1", "mcp2", "mcp3"]
+
+
+def test_session_mcps_from_empty_container_labels():
+    """Test that Session mcps are correctly handled when container has no MCP labels."""
+    from unittest.mock import Mock
+
+    from cubbi.container import ContainerManager
+
+    # Mock a container without MCP labels
+    mock_container = Mock()
+    mock_container.id = "test-container-id"
+    mock_container.status = "running"
+    mock_container.labels = {
+        "cubbi.session": "true",
+        "cubbi.session.id": "test-session",
+        "cubbi.session.name": "test-session-name",
+        "cubbi.image": "goose",
+        # No cubbi.mcps label
+    }
+    mock_container.attrs = {"NetworkSettings": {"Ports": {}}}
+
+    # Mock Docker client
+    mock_client = Mock()
+    mock_client.containers.list.return_value = [mock_container]
+
+    # Create container manager with mocked client
+    with patch("cubbi.container.docker.from_env") as mock_docker:
+        mock_docker.return_value = mock_client
+        mock_client.ping.return_value = True
+
+        container_manager = ContainerManager()
+        sessions = container_manager.list_sessions()
+
+        assert len(sessions) == 1
+        session = sessions[0]
+        assert session.id == "test-session"
+        assert session.mcps == []  # Should be empty list when no MCPs


 @pytest.mark.requires_docker
@@ -159,10 +350,12 @@ def test_mcp_status(cli_runner, patched_config_manager, mock_container_manager):


 @pytest.mark.requires_docker
-def test_mcp_start(cli_runner, patched_config_manager, mock_container_manager):
+def test_mcp_start(cli_runner, isolate_cubbi_config):
     """Test starting an MCP server."""
+    mcp_manager = isolate_cubbi_config["mcp_manager"]
+
     # Add a Docker MCP
-    patched_config_manager.set(
+    isolate_cubbi_config["user_config"].set(
         "mcps",
         [
             {
@@ -174,25 +367,30 @@ def test_mcp_start(cli_runner, patched_config_manager, mock_container_manager):
         ],
     )

-    # Mock the start operation
-    mock_container_manager.start_mcp.return_value = {
-        "container_id": "test-container-id",
-        "status": "running",
-    }
+    # Mock the start_mcp method to avoid actual Docker operations
+    with patch.object(
+        mcp_manager,
+        "start_mcp",
+        return_value={
+            "container_id": "test-container-id",
+            "status": "running",
+        },
+    ):
+        # Start the MCP
+        result = cli_runner.invoke(app, ["mcp", "start", "test-docker-mcp"])

-    # Start the MCP
-    result = cli_runner.invoke(app, ["mcp", "start", "test-docker-mcp"])
-
-    assert result.exit_code == 0
-    assert "Started MCP server" in result.stdout
-    assert "test-docker-mcp" in result.stdout
+        assert result.exit_code == 0
+        assert "Started MCP server" in result.stdout
+        assert "test-docker-mcp" in result.stdout


 @pytest.mark.requires_docker
-def test_mcp_stop(cli_runner, patched_config_manager, mock_container_manager):
+def test_mcp_stop(cli_runner, isolate_cubbi_config):
     """Test stopping an MCP server."""
+    mcp_manager = isolate_cubbi_config["mcp_manager"]
+
     # Add a Docker MCP
-    patched_config_manager.set(
+    isolate_cubbi_config["user_config"].set(
         "mcps",
         [
             {
@@ -204,22 +402,23 @@ def test_mcp_stop(cli_runner, patched_config_manager, mock_container_manager):
         ],
     )

-    # Mock the stop operation
-    mock_container_manager.stop_mcp.return_value = True
+    # Mock the stop_mcp method to avoid actual Docker operations
+    with patch.object(mcp_manager, "stop_mcp", return_value=True):
+        # Stop the MCP
+        result = cli_runner.invoke(app, ["mcp", "stop", "test-docker-mcp"])

-    # Stop the MCP
-    result = cli_runner.invoke(app, ["mcp", "stop", "test-docker-mcp"])
-
-    assert result.exit_code == 0
-    assert "Stopped and removed MCP server" in result.stdout
-    assert "test-docker-mcp" in result.stdout
+        assert result.exit_code == 0
+        assert "Stopped and removed MCP server" in result.stdout
+        assert "test-docker-mcp" in result.stdout


 @pytest.mark.requires_docker
-def test_mcp_restart(cli_runner, patched_config_manager, mock_container_manager):
+def test_mcp_restart(cli_runner, isolate_cubbi_config):
     """Test restarting an MCP server."""
+    mcp_manager = isolate_cubbi_config["mcp_manager"]
+
     # Add a Docker MCP
-    patched_config_manager.set(
+    isolate_cubbi_config["user_config"].set(
         "mcps",
         [
             {
@@ -231,18 +430,21 @@ def test_mcp_restart(cli_runner, patched_config_manager, mock_container_manager)
         ],
     )

-    # Mock the restart operation
-    mock_container_manager.restart_mcp.return_value = {
-        "container_id": "test-container-id",
-        "status": "running",
-    }
+    # Mock the restart_mcp method to avoid actual Docker operations
+    with patch.object(
+        mcp_manager,
+        "restart_mcp",
+        return_value={
+            "container_id": "test-container-id",
+            "status": "running",
+        },
+    ):
+        # Restart the MCP
+        result = cli_runner.invoke(app, ["mcp", "restart", "test-docker-mcp"])

-    # Restart the MCP
-    result = cli_runner.invoke(app, ["mcp", "restart", "test-docker-mcp"])
-
-    assert result.exit_code == 0
-    assert "Restarted MCP server" in result.stdout
-    assert "test-docker-mcp" in result.stdout
+        assert result.exit_code == 0
+        assert "Restarted MCP server" in result.stdout
+        assert "test-docker-mcp" in result.stdout


 @pytest.mark.requires_docker
@@ -83,7 +83,9 @@ def test_session_close(cli_runner, mock_container_manager):

     assert result.exit_code == 0
     assert "closed successfully" in result.stdout
-    mock_container_manager.close_session.assert_called_once_with("test-session-id")
+    mock_container_manager.close_session.assert_called_once_with(
+        "test-session-id", kill=False
+    )


 def test_session_close_all(cli_runner, mock_container_manager):
@@ -113,6 +115,197 @@ def test_session_close_all(cli_runner, mock_container_manager):
     mock_container_manager.close_all_sessions.assert_called_once()


+def test_session_create_with_ports(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session creation with port forwarding."""
+    from cubbi.models import Session, SessionStatus
+
+    # Mock the create_session to return a session with ports
+    mock_session = Session(
+        id="test-session-id",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        ports={8000: 32768, 3000: 32769},
+    )
+    mock_container_manager.create_session.return_value = mock_session
+
+    result = cli_runner.invoke(app, ["session", "create", "--port", "8000,3000"])
+
+    assert result.exit_code == 0
+    assert "Session created successfully" in result.stdout
+    assert "Forwarding ports: 8000, 3000" in result.stdout
+
+    # Verify create_session was called with correct ports
+    mock_container_manager.create_session.assert_called_once()
+    call_args = mock_container_manager.create_session.call_args
+    assert call_args.kwargs["ports"] == [8000, 3000]
+
+
+def test_session_create_with_default_ports(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session creation using default ports."""
+    from cubbi.models import Session, SessionStatus
+
+    # Set up default ports
+    patched_config_manager.set("defaults.ports", [8080, 9000])
+
+    # Mock the create_session to return a session with ports
+    mock_session = Session(
+        id="test-session-id",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        ports={8080: 32768, 9000: 32769},
+    )
+    mock_container_manager.create_session.return_value = mock_session
+
+    result = cli_runner.invoke(app, ["session", "create"])
+
+    assert result.exit_code == 0
+    assert "Session created successfully" in result.stdout
+    assert "Forwarding ports: 8080, 9000" in result.stdout
+
+    # Verify create_session was called with default ports
+    mock_container_manager.create_session.assert_called_once()
+    call_args = mock_container_manager.create_session.call_args
+    assert call_args.kwargs["ports"] == [8080, 9000]
+
+
+def test_session_create_combine_default_and_custom_ports(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session creation combining default and custom ports."""
+    from cubbi.models import Session, SessionStatus
+
+    # Set up default ports
+    patched_config_manager.set("defaults.ports", [8080])
+
+    # Mock the create_session to return a session with combined ports
+    mock_session = Session(
+        id="test-session-id",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        ports={8080: 32768, 3000: 32769},
+    )
+    mock_container_manager.create_session.return_value = mock_session
+
+    result = cli_runner.invoke(app, ["session", "create", "--port", "3000"])
+
+    assert result.exit_code == 0
+    assert "Session created successfully" in result.stdout
+    # Ports should be combined and deduplicated
+    assert "Forwarding ports:" in result.stdout
+
+    # Verify create_session was called with combined ports
+    mock_container_manager.create_session.assert_called_once()
+    call_args = mock_container_manager.create_session.call_args
+    # Should contain both default (8080) and custom (3000) ports
+    assert set(call_args.kwargs["ports"]) == {8080, 3000}
+
+
+def test_session_create_invalid_port_format(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session creation with invalid port format."""
+    result = cli_runner.invoke(app, ["session", "create", "--port", "invalid"])
+
+    assert result.exit_code == 0
+    assert "Warning: Ignoring invalid port format" in result.stdout
+
+    # Session creation should continue with empty ports list (invalid port ignored)
+    mock_container_manager.create_session.assert_called_once()
+    call_args = mock_container_manager.create_session.call_args
+    assert call_args.kwargs["ports"] == []  # Invalid port should be ignored
+
+
+def test_session_create_invalid_port_range(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session creation with port outside valid range."""
+    result = cli_runner.invoke(app, ["session", "create", "--port", "70000"])
+
+    assert result.exit_code == 0
+    assert "Error: Invalid ports [70000]" in result.stdout
+
+    # Session creation should not happen due to early return
+    mock_container_manager.create_session.assert_not_called()
+
+
+def test_session_list_shows_ports(cli_runner, mock_container_manager):
+    """Test that session list shows port mappings."""
+    from cubbi.models import Session, SessionStatus
+
+    mock_session = Session(
+        id="test-session-id",
+        name="test-session",
+        image="goose",
+        status=SessionStatus.RUNNING,
+        ports={8000: 32768, 3000: 32769},
+    )
+    mock_container_manager.list_sessions.return_value = [mock_session]
+
+    result = cli_runner.invoke(app, ["session", "list"])
+
+    assert result.exit_code == 0
+    assert "8000:32768" in result.stdout
+    assert "3000:32769" in result.stdout
+
+
+def test_session_close_with_kill_flag(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session close with --kill flag."""
+    result = cli_runner.invoke(app, ["session", "close", "test-session-id", "--kill"])
+
+    assert result.exit_code == 0
+
+    # Verify close_session was called with kill=True
+    mock_container_manager.close_session.assert_called_once_with(
+        "test-session-id", kill=True
+    )
+
+
+def test_session_close_all_with_kill_flag(
+    cli_runner, mock_container_manager, patched_config_manager
+):
+    """Test session close --all with --kill flag."""
+    from cubbi.models import Session, SessionStatus
+
+    # Mock some sessions to close
+    mock_sessions = [
+        Session(
+            id="session-1",
+            name="Session 1",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            ports={},
+        ),
+        Session(
+            id="session-2",
+            name="Session 2",
+            image="goose",
+            status=SessionStatus.RUNNING,
+            ports={},
+        ),
+    ]
+    mock_container_manager.list_sessions.return_value = mock_sessions
+    mock_container_manager.close_all_sessions.return_value = (2, True)
+
+    result = cli_runner.invoke(app, ["session", "close", "--all", "--kill"])
+
+    assert result.exit_code == 0
+    assert "2 sessions closed successfully" in result.stdout
+
+    # Verify close_all_sessions was called with kill=True
+    mock_container_manager.close_all_sessions.assert_called_once()
+    call_args = mock_container_manager.close_all_sessions.call_args
+    assert call_args.kwargs["kill"] is True
+
+
 # For more complex tests that need actual Docker,
 # we've implemented them in test_integration_docker.py
 # They will run automatically if Docker is available
uv.lock (generated, 41 changed lines)
@@ -1,5 +1,5 @@
 version = 1
-revision = 2
+revision = 3
 requires-python = ">=3.12"

 [[package]]
@@ -78,12 +78,14 @@ wheels = [

 [[package]]
 name = "cubbi"
-version = "0.2.0"
+version = "0.5.0"
 source = { editable = "." }
 dependencies = [
     { name = "docker" },
     { name = "pydantic" },
     { name = "pyyaml" },
+    { name = "questionary" },
+    { name = "requests" },
     { name = "rich" },
     { name = "typer" },
 ]
@@ -107,6 +109,8 @@ requires-dist = [
     { name = "pydantic", specifier = ">=2.5.0" },
     { name = "pytest", marker = "extra == 'dev'", specifier = ">=7.4.0" },
     { name = "pyyaml", specifier = ">=6.0.1" },
+    { name = "questionary", specifier = ">=2.0.0" },
+    { name = "requests", specifier = ">=2.32.3" },
     { name = "rich", specifier = ">=13.6.0" },
     { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.9" },
     { name = "typer", specifier = ">=0.9.0" },
@@ -221,6 +225,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669", size = 20556, upload-time = "2024-04-20T21:34:40.434Z" },
 ]

+[[package]]
+name = "prompt-toolkit"
+version = "3.0.51"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "wcwidth" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/bb/6e/9d084c929dfe9e3bfe0c6a47e31f78a25c54627d64a66e884a8bf5474f1c/prompt_toolkit-3.0.51.tar.gz", hash = "sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed", size = 428940, upload-time = "2025-04-15T09:18:47.731Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ce/4f/5249960887b1fbe561d9ff265496d170b55a735b76724f10ef19f9e40716/prompt_toolkit-3.0.51-py3-none-any.whl", hash = "sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07", size = 387810, upload-time = "2025-04-15T09:18:44.753Z" },
+]
+
 [[package]]
 name = "pydantic"
 version = "2.10.6"
@@ -337,6 +353,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446, upload-time = "2024-08-06T20:33:04.33Z" },
 ]

+[[package]]
+name = "questionary"
+version = "2.1.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "prompt-toolkit" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a8/b8/d16eb579277f3de9e56e5ad25280fab52fc5774117fb70362e8c2e016559/questionary-2.1.0.tar.gz", hash = "sha256:6302cdd645b19667d8f6e6634774e9538bfcd1aad9be287e743d96cacaf95587", size = 26775, upload-time = "2024-12-29T11:49:17.802Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ad/3f/11dd4cd4f39e05128bfd20138faea57bec56f9ffba6185d276e3107ba5b2/questionary-2.1.0-py3-none-any.whl", hash = "sha256:44174d237b68bc828e4878c763a9ad6790ee61990e0ae72927694ead57bab8ec", size = 36747, upload-time = "2024-12-29T11:49:16.734Z" },
+]
+
 [[package]]
 name = "requests"
 version = "2.32.3"
@@ -431,3 +459,12 @@ sdist = { url = "https://files.pythonhosted.org/packages/aa/63/e53da845320b757bf
 wheels = [
     { url = "https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df", size = 128369, upload-time = "2024-12-22T07:47:28.074Z" },
 ]
+
+[[package]]
+name = "wcwidth"
+version = "0.2.13"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/6c/63/53559446a878410fc5a5974feb13d31d78d752eb18aeba59c7fef1af7598/wcwidth-0.2.13.tar.gz", hash = "sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5", size = 101301, upload-time = "2024-01-06T02:10:57.829Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/fd/84/fd2ba7aafacbad3c4201d395674fc6348826569da3c0937e75505ead3528/wcwidth-0.2.13-py2.py3-none-any.whl", hash = "sha256:3da69048e4540d84af32131829ff948f1e022c1c6bdb8d6102117aac784f6859", size = 34166, upload-time = "2024-01-06T02:10:55.763Z" },
+]