14 Commits

Author SHA1 Message Date
562f9bb65e fix: preserve terminal env vars through sudo in macOS daemon mode
sudo resets the environment, stripping TERM, COLORTERM, COLUMNS, LINES,
and other terminal-related variables that TUI apps need to render. This
caused TUI apps like opencode to show a blank screen in daemon mode.

Fix by injecting terminal and proxy env vars via `env` after `sudo` in
the daemon mode command pipeline. Also move PTY device ioctl/read/write
rules into the base sandbox profile so inherited terminals work without
requiring AllowPty.
2026-02-26 17:39:33 -06:00
9d5d852860 feat: switch macOS learning mode from fs_usage to eslogger
Replace fs_usage (reports Mach thread IDs, requiring process name matching
with false positives) with eslogger (Endpoint Security framework, reports
real Unix PIDs via audit_token.pid plus fork events for process tree tracking).

Key changes:
- Daemon starts eslogger instead of fs_usage, with early-exit detection
  and clear Full Disk Access error messaging
- New two-pass eslogger JSON parser: pass 1 builds PID tree from fork
  events, pass 2 filters filesystem events by PID set
- Remove runtime PID polling (StartPIDTracking, pollDescendantPIDs) —
  process tree is now built post-hoc from the eslogger log
- Platform-specific generateLearnedTemplatePlatform() for darwin/linux/stub
- Refactor TraceResult and GenerateLearnedTemplate to be platform-agnostic
2026-02-26 17:23:43 -06:00
e05b54ec1b chore: ignore tun2socks source directory in gitignore 2026-02-26 09:56:28 -06:00
cb474b2d99 feat: add macOS daemon support with group-based pf routing
- Add daemon CLI subcommand (install/uninstall/status/run)
- Download tun2socks for darwin platforms in Makefile
- Export ExtractTun2Socks and add darwin embed support
- Use group-based pf filtering instead of user-based for transparent proxying
- Install sudoers rule for passwordless sandbox-exec with _greywall group
- Add nolint directives for gosec false positives on sudoers 0440 perms
- Fix lint issues: lowercase errors, fmt.Fprintf, nolint comments
2026-02-26 09:56:22 -06:00
cfe29d2c0b feat: switch macOS daemon from user-based to group-based pf routing
Sandboxed commands previously ran as `sudo -u _greywall`, breaking user
identity (home dir, SSH keys, git config). Now uses `sudo -u #<uid> -g
_greywall` so the process keeps the real user's identity while pf matches
on EGID for traffic routing.

Key changes:
- pf rules use `group <GID>` instead of `user _greywall`
- GID resolved dynamically at daemon startup (not hardcoded, since macOS
  system groups like com.apple.access_ssh may claim preferred IDs)
- Sudoers rule installed at /etc/sudoers.d/greywall (validated with visudo)
- Invoking user added to _greywall group via dscl (not dseditgroup, which
  clobbers group attributes)
- tun2socks device discovery scans both stdout and stderr (fixes 10s
  timeout caused by STACK message going to stdout)
- Always-on daemon logging for session create/destroy events
2026-02-26 09:56:15 -06:00
4ea4592d75 docs: add macOS learning mode analysis with fs_usage approach
Document fs_usage as a viable alternative to strace for macOS
--learning mode. SIP blocks all dtrace-based tools (dtrace, dtruss,
opensnoop) even with sudo, but fs_usage uses the kdebug kernel
facility which is unaffected. Requires admin access only for the
passive monitor process — the sandboxed command stays unprivileged.
2026-02-22 19:07:30 -06:00
62bf37d481 fix: bind-mount greywall binary for Landlock wrapper re-execution
The Landlock wrapper re-executes the greywall binary inside the sandbox
with --landlock-apply. When greywall is run from a path outside the CWD
(e.g., ~/bin/greywall from /home/user/project), the binary doesn't exist
inside the sandbox because only system paths and CWD are mounted. This
adds a --ro-bind for the greywall executable so the wrapper always works
regardless of where the binary is located.
2026-02-22 16:56:45 -06:00
ed6517cc24 fix: make xdg_runtime_dir writable for desktop application
2026-02-22 12:04:01 -06:00
2061dfe63b docs: rewrite README to reflect current architecture
Remove stale references from the pre-GreyHaven era: `-t code` template
flag, `import --claude` subcommand, `allowedDomains` config, `bpftrace`
dependency, and HTTP 403 error messaging.

Update to reflect current features: tun2socks transparent proxying,
learning mode with strace-based template generation, port forwarding,
deny-by-default filesystem reads, environment hardening, shell
completions, and GreyHaven proxy/DNS defaults.
2026-02-17 07:15:02 -06:00
5aeb9c86c0 fix: resolve all golangci-lint v2 warnings (29 issues)
Migrate to golangci-lint v2 config format and fix all lint issues:
- errcheck: add explicit error handling for Close/Remove calls
- gocritic: convert if-else chains to switch statements
- gosec: tighten file permissions, add nolint for intentional cases
- staticcheck: lowercase error strings, simplify boolean returns

Also update Makefile to install golangci-lint v2 and update CLAUDE.md.
2026-02-13 19:20:40 -06:00
626eaa1895 fix: upgrade golangci
2026-02-13 19:13:37 -06:00
18c18ec3a8 fix: avoid creating directory at file path in allowRead bwrap mounts
intermediaryDirs() was called with the full path including the leaf
component, causing --dir to be emitted for files like ~/.npmrc. This
created a directory at that path, making the subsequent --ro-bind fail
with "Can't create file at ...: Is a directory".

Now checks isDirectory() and uses filepath.Dir() for file paths so
intermediary dirs are only created up to the parent.
2026-02-13 13:53:19 -06:00
f4c9422f77 feat: migrate CI and releases from GitHub Actions to Gitea Actions
Retarget GoReleaser to publish to Gitea (gitea_urls, release.gitea,
changelog.use: gitea). Add Gitea Actions workflows for build/test,
release, and benchmarks — adapted from GitHub equivalents with macOS
jobs and SLSA provenance dropped. Old .github/workflows/ kept in place.
2026-02-13 12:20:32 -06:00
c19370f8b3 feat: deny-by-default filesystem isolation
- Deny-by-default filesystem isolation for Linux (Landlock) and macOS (Seatbelt)
- Prevent learning mode from collapsing read paths to $HOME
- Add Linux deny-by-default lessons to experience docs
2026-02-13 11:39:18 -06:00
53 changed files with 7643 additions and 327 deletions


@@ -0,0 +1,101 @@
name: Benchmarks
on:
  workflow_dispatch:
    inputs:
      min_runs:
        description: "Minimum benchmark runs"
        required: false
        default: "30"
      quick:
        description: "Quick mode (fewer runs)"
        required: false
        default: "false"
        type: boolean
jobs:
  benchmark-linux:
    name: Benchmark (Linux)
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Download dependencies
        run: go mod download
      - name: Install dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            bubblewrap \
            socat \
            uidmap \
            curl \
            netcat-openbsd \
            ripgrep \
            hyperfine \
            jq \
            bc
          # Configure subuid/subgid
          echo "$(whoami):100000:65536" | sudo tee -a /etc/subuid
          echo "$(whoami):100000:65536" | sudo tee -a /etc/subgid
          sudo chmod u+s $(which bwrap)
      - name: Install benchstat
        run: go install golang.org/x/perf/cmd/benchstat@latest
      - name: Build greywall
        run: make build-ci
      - name: Run Go microbenchmarks
        run: |
          mkdir -p benchmarks
          go test -run=^$ -bench=. -benchmem -count=10 ./internal/sandbox/... | tee benchmarks/go-bench-linux.txt
      - name: Run CLI benchmarks
        run: |
          MIN_RUNS="${{ github.event.inputs.min_runs || '30' }}"
          QUICK="${{ github.event.inputs.quick || 'false' }}"
          if [[ "$QUICK" == "true" ]]; then
            ./scripts/benchmark.sh -q -o benchmarks
          else
            ./scripts/benchmark.sh -n "$MIN_RUNS" -o benchmarks
          fi
      - name: Upload benchmark results
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-results-linux
          path: benchmarks/
          retention-days: 30
      - name: Display results
        run: |
          echo "=== Linux Benchmark Results ==="
          echo ""
          for f in benchmarks/*.md; do
            [[ -f "$f" ]] && cat "$f"
          done
          echo ""
          echo "=== Go Microbenchmarks ==="
          grep -E '^Benchmark|^ok|^PASS' benchmarks/go-bench-linux.txt | head -50 || true

.gitea/workflows/main.yml (new file, 115 lines)

@@ -0,0 +1,115 @@
name: Build and test
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
      - name: Download dependencies
        run: go mod download
      - name: Build
        run: make build-ci
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
      - name: Download dependencies
        run: go mod download
      - name: Lint
        uses: golangci/golangci-lint-action@v6
        with:
          install-mode: goinstall
          version: v1.64.8
  test-linux:
    name: Test (Linux)
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Download dependencies
        run: go mod download
      - name: Install Linux sandbox dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            bubblewrap \
            socat \
            uidmap \
            curl \
            netcat-openbsd \
            ripgrep
          # Configure subuid/subgid for the runner user (required for unprivileged user namespaces)
          echo "$(whoami):100000:65536" | sudo tee -a /etc/subuid
          echo "$(whoami):100000:65536" | sudo tee -a /etc/subgid
          # Make bwrap setuid so it can create namespaces as non-root user
          sudo chmod u+s $(which bwrap)
      - name: Verify sandbox dependencies
        run: |
          echo "=== Checking sandbox dependencies ==="
          bwrap --version
          socat -V | head -1
          echo "User namespaces enabled: $(cat /proc/sys/kernel/unprivileged_userns_clone 2>/dev/null || echo 'check not available')"
          echo "Kernel version: $(uname -r)"
          echo "uidmap installed: $(which newuidmap 2>/dev/null && echo yes || echo no)"
          echo "subuid configured: $(grep $(whoami) /etc/subuid 2>/dev/null || echo 'not configured')"
          echo "bwrap setuid: $(ls -la $(which bwrap) | grep -q '^-rws' && echo yes || echo no)"
          echo "=== Testing bwrap basic functionality ==="
          bwrap --ro-bind / / -- /bin/echo "bwrap works!"
          echo "=== Testing bwrap with user namespace ==="
          bwrap --ro-bind / / --unshare-user --uid 0 --gid 0 -- /bin/echo "bwrap user namespace works!"
      - name: Run unit and integration tests
        run: make test-ci
      - name: Build binary for smoke tests
        run: make build-ci
      - name: Run smoke tests
        run: GREYWALL_TEST_NETWORK=1 ./scripts/smoke_test.sh ./greywall


@@ -0,0 +1,62 @@
name: Release
on:
  push:
    tags:
      - "v*"
run-name: "Release ${{ github.ref_name }}"
jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: true
      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v6
        with:
          distribution: goreleaser
          version: "~> v2"
          args: release --clean
        env:
          GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
          GORELEASER_FORCE_TOKEN: gitea
  publish-version:
    needs: [goreleaser]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout gh-pages
        uses: actions/checkout@v4
        with:
          ref: gh-pages
      - name: Update latest version
        run: |
          echo "${{ github.ref_name }}" > latest.txt
          cat > latest.json << EOF
          {
            "version": "${{ github.ref_name }}",
            "published_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
            "url": "https://gitea.app.monadical.io/monadical/greywall/releases/tag/${{ github.ref_name }}"
          }
          EOF
      - name: Commit and push to gh-pages
        run: |
          git config user.name "gitea-actions[bot]"
          git config user.email "gitea-actions[bot]@noreply.gitea.app.monadical.io"
          git add latest.txt latest.json
          git commit -m "Update latest version to ${{ github.ref_name }}" || echo "No changes to commit"
          git push origin gh-pages

.gitignore (vendored, +3 lines)

@@ -32,3 +32,6 @@ mem.out
 # Embedded binaries (downloaded at build time)
 internal/sandbox/bin/tun2socks-*
+# tun2socks source/build directory
+/tun2socks/


@@ -1,40 +1,49 @@
+version: "2"
 run:
-  timeout: 5m
   modules-download-mode: readonly
-linters-settings:
-  gci:
-    sections:
-      - standard
-      - default
-      - prefix(gitea.app.monadical.io/monadical/greywall)
-  gofmt:
-    simplify: true
-  goimports:
-    local-prefixes: gitea.app.monadical.io/monadical/greywall
-  gocritic:
-    disabled-checks:
-      - singleCaseSwitch
-  revive:
-    rules:
-      - name: exported
-        disabled: true
 linters:
-  enable-all: false
-  disable-all: true
+  default: none
   enable:
-    - staticcheck
     - errcheck
-    - gosimple
-    - govet
-    - unused
-    - ineffassign
-    - gosec
     - gocritic
-    - revive
-    - gofumpt
+    - gosec
+    - govet
+    - ineffassign
     - misspell
-issues:
-  exclude-use-default: false
+    - revive
+    - staticcheck
+    - unused
+  settings:
+    gocritic:
+      disabled-checks:
+        - singleCaseSwitch
+    revive:
+      rules:
+        - name: exported
+          disabled: true
+  exclusions:
+    generated: lax
+    paths:
+      - third_party$
+      - builtin$
+      - examples$
+formatters:
+  enable:
+    - gofumpt
+  settings:
+    gci:
+      sections:
+        - standard
+        - default
+        - prefix(gitea.app.monadical.io/monadical/greywall)
+    gofmt:
+      simplify: true
+    goimports:
+      local-prefixes:
+        - gitea.app.monadical.io/monadical/greywall
+  exclusions:
+    generated: lax
+    paths:
+      - third_party$
+      - builtin$
+      - examples$


@@ -1,5 +1,10 @@
 version: 2
+gitea_urls:
+  api: https://gitea.app.monadical.io/api/v1
+  download: https://gitea.app.monadical.io
+  skip_tls_verify: false
 before:
   hooks:
     - go mod tidy
@@ -42,7 +47,7 @@ checksum:
 changelog:
   sort: asc
-  use: github
+  use: gitea
   format: "{{ .SHA }}: {{ .Message }}{{ with .AuthorUsername }} (@{{ . }}){{ end }}"
   filters:
     exclude:
@@ -76,7 +81,7 @@ changelog:
       order: 9999
 release:
-  github:
+  gitea:
     owner: monadical
     name: greywall
   draft: false


@@ -54,7 +54,7 @@ scripts/ Smoke tests, benchmarks, release
 - **Language:** Go 1.25+
 - **Formatter:** `gofumpt` (enforced in CI)
-- **Linter:** `golangci-lint` v1.64.8 (config in `.golangci.yml`)
+- **Linter:** `golangci-lint` v2 (config in `.golangci.yml`)
 - **Import order:** stdlib, third-party, local (`gitea.app.monadical.io/monadical/greywall`)
 - **Platform code:** build tags (`//go:build linux`, `//go:build darwin`) with `*_stub.go` for unsupported platforms
 - **Error handling:** custom error types (e.g., `CommandBlockedError`)


@@ -8,24 +8,24 @@ BINARY_UNIX=$(BINARY_NAME)_unix
 TUN2SOCKS_VERSION=v2.5.2
 TUN2SOCKS_BIN_DIR=internal/sandbox/bin
-.PHONY: all build build-ci build-linux test test-ci clean deps install-lint-tools setup setup-ci run fmt lint release release-minor download-tun2socks help
+.PHONY: all build build-ci build-linux build-darwin test test-ci clean deps install-lint-tools setup setup-ci run fmt lint release release-minor download-tun2socks help
 all: build
+TUN2SOCKS_PLATFORMS=linux-amd64 linux-arm64 darwin-amd64 darwin-arm64
 download-tun2socks:
-	@echo "Downloading tun2socks $(TUN2SOCKS_VERSION)..."
 	@mkdir -p $(TUN2SOCKS_BIN_DIR)
-	@curl -sL "https://github.com/xjasonlyu/tun2socks/releases/download/$(TUN2SOCKS_VERSION)/tun2socks-linux-amd64.zip" -o /tmp/tun2socks-linux-amd64.zip
-	@unzip -o -q /tmp/tun2socks-linux-amd64.zip -d /tmp/tun2socks-amd64
-	@mv /tmp/tun2socks-amd64/tun2socks-linux-amd64 $(TUN2SOCKS_BIN_DIR)/tun2socks-linux-amd64
-	@chmod +x $(TUN2SOCKS_BIN_DIR)/tun2socks-linux-amd64
-	@rm -rf /tmp/tun2socks-linux-amd64.zip /tmp/tun2socks-amd64
-	@curl -sL "https://github.com/xjasonlyu/tun2socks/releases/download/$(TUN2SOCKS_VERSION)/tun2socks-linux-arm64.zip" -o /tmp/tun2socks-linux-arm64.zip
-	@unzip -o -q /tmp/tun2socks-linux-arm64.zip -d /tmp/tun2socks-arm64
-	@mv /tmp/tun2socks-arm64/tun2socks-linux-arm64 $(TUN2SOCKS_BIN_DIR)/tun2socks-linux-arm64
-	@chmod +x $(TUN2SOCKS_BIN_DIR)/tun2socks-linux-arm64
-	@rm -rf /tmp/tun2socks-linux-arm64.zip /tmp/tun2socks-arm64
-	@echo "tun2socks binaries downloaded to $(TUN2SOCKS_BIN_DIR)/"
+	@for platform in $(TUN2SOCKS_PLATFORMS); do \
+		if [ ! -f $(TUN2SOCKS_BIN_DIR)/tun2socks-$$platform ]; then \
+			echo "Downloading tun2socks-$$platform $(TUN2SOCKS_VERSION)..."; \
+			curl -sL "https://github.com/xjasonlyu/tun2socks/releases/download/$(TUN2SOCKS_VERSION)/tun2socks-$$platform.zip" -o /tmp/tun2socks-$$platform.zip; \
+			unzip -o -q /tmp/tun2socks-$$platform.zip -d /tmp/tun2socks-$$platform; \
+			mv /tmp/tun2socks-$$platform/tun2socks-$$platform $(TUN2SOCKS_BIN_DIR)/tun2socks-$$platform; \
+			chmod +x $(TUN2SOCKS_BIN_DIR)/tun2socks-$$platform; \
+			rm -rf /tmp/tun2socks-$$platform.zip /tmp/tun2socks-$$platform; \
+		fi; \
+	done
 build: download-tun2socks
 	@echo "Building $(BINARY_NAME)..."
@@ -53,6 +53,7 @@ clean:
 	rm -f $(BINARY_UNIX)
 	rm -f coverage.out
 	rm -f $(TUN2SOCKS_BIN_DIR)/tun2socks-linux-*
+	rm -f $(TUN2SOCKS_BIN_DIR)/tun2socks-darwin-*
 deps:
 	@echo "Downloading dependencies..."
@@ -63,14 +64,14 @@ build-linux: download-tun2socks
 	@echo "Building for Linux..."
 	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GOBUILD) -o $(BINARY_UNIX) -v ./cmd/greywall
-build-darwin:
+build-darwin: download-tun2socks
 	@echo "Building for macOS..."
 	CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(GOBUILD) -o $(BINARY_NAME)_darwin -v ./cmd/greywall
 install-lint-tools:
 	@echo "Installing linting tools..."
 	go install mvdan.cc/gofumpt@latest
-	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
+	go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest
 	@echo "Linting tools installed"
 setup: deps install-lint-tools

README.md (124 lines changed)

@@ -2,20 +2,20 @@
**The sandboxing layer of the GreyHaven platform.**
Greywall wraps commands in a sandbox that blocks network access by default and restricts filesystem operations. It is the core sandboxing component of the GreyHaven platform, providing defense-in-depth for running untrusted code.
Greywall wraps commands in a sandbox that blocks network access by default and restricts filesystem operations. On Linux, it uses tun2socks for truly transparent proxying: all TCP/UDP traffic is captured at the kernel level via a TUN device and forwarded through an external SOCKS5 proxy. No application awareness needed.
```bash
# Block all network access (default)
greywall curl https://example.com # → 403 Forbidden
# Block all network access (default — no proxy running = no connectivity)
greywall -- curl https://example.com
# Allow specific domains
greywall -t code npm install # → uses 'code' template with npm/pypi/etc allowed
# Route traffic through an external SOCKS5 proxy
greywall --proxy socks5://localhost:1080 -- curl https://example.com
# Block dangerous commands
greywall -c "rm -rf /" # → blocked by command deny rules
```
Greywall also works as a permission manager for CLI agents. **Greywall works with popular coding agents like Claude Code, Codex, Gemini CLI, Cursor Agent, OpenCode, Factory (Droid) CLI, etc.** See [agents.md](./docs/agents.md) for more details.
Greywall also works as a permission manager for CLI agents. See [agents.md](./docs/agents.md) for integration with Claude Code, Codex, Gemini CLI, OpenCode, and others.
## Install
@@ -39,83 +39,123 @@ go install gitea.app.monadical.io/monadical/greywall/cmd/greywall@latest
```bash
git clone https://gitea.app.monadical.io/monadical/greywall
cd greywall
go build -o greywall ./cmd/greywall
make setup && make build
```
</details>
**Additional requirements for Linux:**
**Linux dependencies:**
- `bubblewrap` (for sandboxing)
- `socat` (for network bridging)
- `bpftrace` (optional, for filesystem violation visibility when monitoring with `-m`)
- `bubblewrap` — container-free sandboxing (required)
- `socat` — network bridging (required)
Check dependency status with `greywall --version`.
## Usage
### Basic
### Basic commands
```bash
# Run command with all network blocked (no domains allowed by default)
greywall curl https://example.com
# Run with all network blocked (default)
greywall -- curl https://example.com
# Run with shell expansion
greywall -c "echo hello && ls"
# Route through a SOCKS5 proxy
greywall --proxy socks5://localhost:1080 -- npm install
# Expose a port for inbound connections (e.g., dev servers)
greywall -p 3000 -c "npm run dev"
# Enable debug logging
greywall -d curl https://example.com
greywall -d -- curl https://example.com
# Use a template
greywall -t code -- claude # Runs Claude Code using `code` template config
# Monitor sandbox violations
greywall -m -- npm install
# Monitor mode (shows violations)
greywall -m npm install
# Show available Linux security features
greywall --linux-features
# Show all commands and options
greywall --help
# Show version and dependency status
greywall --version
```
### Learning mode
Greywall can trace a command's filesystem access and generate a config template automatically:
```bash
# Run in learning mode — traces file access via strace
greywall --learning -- opencode
# List generated templates
greywall templates list
# Show a template's content
greywall templates show opencode
# Next run auto-loads the learned template
greywall -- opencode
```
### Configuration
Greywall reads from `~/.config/greywall/greywall.json` by default (or `~/Library/Application Support/greywall/greywall.json` on macOS).
```json
```jsonc
{
"extends": "code",
"network": { "allowedDomains": ["private.company.com"] },
"filesystem": { "allowWrite": ["."] },
"command": { "deny": ["git push", "npm publish"] }
// Route traffic through an external SOCKS5 proxy
"network": {
"proxyUrl": "socks5://localhost:1080",
"dnsAddr": "localhost:5353"
},
// Control filesystem access
"filesystem": {
"defaultDenyRead": true,
"allowRead": ["~/.config/myapp"],
"allowWrite": ["."],
"denyWrite": ["~/.ssh/**"],
"denyRead": ["~/.ssh/id_*", ".env"]
},
// Block dangerous commands
"command": {
"deny": ["git push", "npm publish"]
}
}
```
Use `greywall --settings ./custom.json` to specify a different config.
Use `greywall --settings ./custom.json` to specify a different config file.
### Import from Claude Code
```bash
greywall import --claude --save
```
By default (when connected to GreyHaven), traffic routes through the GreyHaven SOCKS5 proxy at `localhost:42052` with DNS via `localhost:42053`.
## Features
- **Network isolation** - All outbound blocked by default; allowlist domains via config
- **Filesystem restrictions** - Control read/write access paths
- **Command blocking** - Deny dangerous commands like `rm -rf /`, `git push`
- **SSH Command Filtering** - Control which hosts and commands are allowed over SSH
- **Built-in templates** - Pre-configured rulesets for common workflows
- **Violation monitoring** - Real-time logging of blocked requests (`-m`)
- **Cross-platform** - macOS (sandbox-exec) + Linux (bubblewrap)
- **Transparent proxy** — All TCP/UDP traffic captured at the kernel level via tun2socks and routed through an external SOCKS5 proxy (Linux)
- **Network isolation** — All outbound blocked by default; traffic only flows when a proxy is available
- **Filesystem restrictions** — Deny-by-default read mode, controlled write paths, sensitive file protection
- **Learning mode** — Trace filesystem access with strace and auto-generate config templates
- **Command blocking** — Deny dangerous commands (`rm -rf /`, `git push`, `shutdown`, etc.)
- **SSH filtering** — Control which hosts and commands are allowed over SSH
- **Environment hardening** — Strips dangerous env vars (`LD_PRELOAD`, `DYLD_*`, etc.)
- **Violation monitoring** — Real-time logging of sandbox violations (`-m`)
- **Shell completions** — `greywall completion bash|zsh|fish|powershell`
- **Cross-platform** — Linux (bubblewrap + seccomp + Landlock + eBPF) and macOS (sandbox-exec)
Greywall can be used as a Go package or CLI tool.
Greywall can also be used as a [Go package](docs/library.md).
## Documentation
- [Index](/docs/README.md)
- [Documentation Index](docs/README.md)
- [Quickstart Guide](docs/quickstart.md)
- [Why Greywall](docs/why-greywall.md)
- [Configuration Reference](docs/configuration.md)
- [Security Model](docs/security-model.md)
- [Architecture](ARCHITECTURE.md)
- [Linux Security Features](docs/linux-security-features.md)
- [AI Agent Integration](docs/agents.md)
- [Library Usage (Go)](docs/library.md)
- [Examples](examples/)
- [Troubleshooting](docs/troubleshooting.md)
## Attribution

analysis.md (new file, 1151 lines; diff suppressed because it is too large)

cmd/greywall/daemon.go (new file, 229 lines)

@@ -0,0 +1,229 @@
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/signal"
	"path/filepath"
	"runtime"
	"strings"
	"syscall"

	"github.com/spf13/cobra"

	"gitea.app.monadical.io/monadical/greywall/internal/daemon"
	"gitea.app.monadical.io/monadical/greywall/internal/sandbox"
)

// newDaemonCmd creates the daemon subcommand tree:
//
//	greywall daemon
//	  install   - Install the LaunchDaemon (requires root)
//	  uninstall - Uninstall the LaunchDaemon (requires root)
//	  run       - Run the daemon (called by LaunchDaemon plist)
//	  status    - Show daemon status
func newDaemonCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "daemon",
		Short: "Manage the greywall background daemon",
		Long: `Manage the greywall LaunchDaemon for transparent network sandboxing on macOS.

The daemon runs as a system service and manages the tun2socks tunnel, DNS relay,
and pf rules that enable transparent proxy routing for sandboxed processes.

Commands:
  sudo greywall daemon install     Install and start the daemon
  sudo greywall daemon uninstall   Stop and remove the daemon
  greywall daemon status           Check daemon status
  greywall daemon run              Run the daemon (used by LaunchDaemon)`,
	}
	cmd.AddCommand(
		newDaemonInstallCmd(),
		newDaemonUninstallCmd(),
		newDaemonRunCmd(),
		newDaemonStatusCmd(),
	)
	return cmd
}

// newDaemonInstallCmd creates the "daemon install" subcommand.
func newDaemonInstallCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "install",
		Short: "Install the greywall LaunchDaemon (requires root)",
		Long: `Install greywall as a macOS LaunchDaemon. This command:

  1. Creates a system user (_greywall) for sandboxed process isolation
  2. Copies the greywall binary to /usr/local/bin/greywall
  3. Extracts and installs the tun2socks binary
  4. Installs a LaunchDaemon plist for automatic startup
  5. Loads and starts the daemon

Requires root privileges: sudo greywall daemon install`,
		RunE: func(cmd *cobra.Command, args []string) error {
			exePath, err := os.Executable()
			if err != nil {
				return fmt.Errorf("failed to determine executable path: %w", err)
			}
			exePath, err = filepath.EvalSymlinks(exePath)
			if err != nil {
				return fmt.Errorf("failed to resolve executable path: %w", err)
			}

			// Extract embedded tun2socks binary to a temp file.
			tun2socksPath, err := sandbox.ExtractTun2Socks()
			if err != nil {
				return fmt.Errorf("failed to extract tun2socks: %w", err)
			}
			defer os.Remove(tun2socksPath) //nolint:errcheck // temp file cleanup

			if err := daemon.Install(exePath, tun2socksPath, debug); err != nil {
				return err
			}
			fmt.Println()
			fmt.Println("To check status: greywall daemon status")
			fmt.Println("To uninstall: sudo greywall daemon uninstall")
			return nil
		},
	}
}

// newDaemonUninstallCmd creates the "daemon uninstall" subcommand.
func newDaemonUninstallCmd() *cobra.Command {
	var force bool
	cmd := &cobra.Command{
		Use:   "uninstall",
		Short: "Uninstall the greywall LaunchDaemon (requires root)",
		Long: `Uninstall the greywall LaunchDaemon. This command:

  1. Stops and unloads the daemon
  2. Removes the LaunchDaemon plist
  3. Removes installed files
  4. Removes the _greywall system user and group

Requires root privileges: sudo greywall daemon uninstall`,
		RunE: func(cmd *cobra.Command, args []string) error {
			if !force {
				fmt.Println("The following will be removed:")
				fmt.Printf("  - LaunchDaemon plist: %s\n", daemon.LaunchDaemonPlistPath)
				fmt.Printf("  - Binary: %s\n", daemon.InstallBinaryPath)
				fmt.Printf("  - Lib directory: %s\n", daemon.InstallLibDir)
				fmt.Printf("  - Socket: %s\n", daemon.DefaultSocketPath)
				fmt.Printf("  - Sudoers file: %s\n", daemon.SudoersFilePath)
				fmt.Printf("  - System user/group: %s\n", daemon.SandboxUserName)
				fmt.Println()
				fmt.Print("Proceed with uninstall? [y/N] ")
				reader := bufio.NewReader(os.Stdin)
				answer, _ := reader.ReadString('\n')
				answer = strings.TrimSpace(strings.ToLower(answer))
				if answer != "y" && answer != "yes" {
					fmt.Println("Uninstall cancelled.")
					return nil
				}
			}
			if err := daemon.Uninstall(debug); err != nil {
				return err
			}
			fmt.Println()
			fmt.Println("The greywall daemon has been uninstalled.")
			return nil
		},
	}
	cmd.Flags().BoolVarP(&force, "force", "f", false, "Skip confirmation prompt")
	return cmd
}

// newDaemonRunCmd creates the "daemon run" subcommand. This is invoked by
// the LaunchDaemon plist and should not normally be called manually.
func newDaemonRunCmd() *cobra.Command {
	return &cobra.Command{
		Use:    "run",
		Short:  "Run the daemon process (called by LaunchDaemon)",
		Hidden: true, // Not intended for direct user invocation.
		RunE:   runDaemon,
	}
}

// newDaemonStatusCmd creates the "daemon status" subcommand.
func newDaemonStatusCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "status",
		Short: "Show the daemon status",
		Long:  `Check whether the greywall daemon is installed and running. Does not require root.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			installed := daemon.IsInstalled()
			running := daemon.IsRunning()

			fmt.Printf("Greywall daemon status:\n")
			fmt.Printf("  Installed: %s\n", boolStatus(installed))
			fmt.Printf("  Running: %s\n", boolStatus(running))
			fmt.Printf("  Plist: %s\n", daemon.LaunchDaemonPlistPath)
			fmt.Printf("  Binary: %s\n", daemon.InstallBinaryPath)
			fmt.Printf("  User: %s\n", daemon.SandboxUserName)
			fmt.Printf("  Group: %s (pf routing)\n", daemon.SandboxGroupName)
			fmt.Printf("  Sudoers: %s\n", daemon.SudoersFilePath)
			fmt.Printf("  Socket: %s\n", daemon.DefaultSocketPath)

			if !installed {
				fmt.Println()
				fmt.Println("The daemon is not installed. Run: sudo greywall daemon install")
			} else if !running {
				fmt.Println()
				fmt.Println("The daemon is installed but not running.")
				fmt.Printf("Check logs: cat /var/log/greywall.log\n")
				fmt.Printf("Start it: sudo launchctl load %s\n", daemon.LaunchDaemonPlistPath)
			}
			return nil
		},
	}
}

// runDaemon is the main entry point for the daemon process. It starts the
// Unix socket server and blocks until a termination signal is received.
// CLI clients connect to the server to request sessions (which create
// utun tunnels, DNS relays, and pf rules on demand).
func runDaemon(cmd *cobra.Command, args []string) error {
	tun2socksPath := filepath.Join(daemon.InstallLibDir, "tun2socks-darwin-"+runtime.GOARCH)
	if _, err := os.Stat(tun2socksPath); err != nil {
		return fmt.Errorf("tun2socks binary not found at %s (run 'sudo greywall daemon install' first)", tun2socksPath)
	}

	daemon.Logf("Starting daemon (tun2socks=%s, socket=%s)", tun2socksPath, daemon.DefaultSocketPath)
	srv := daemon.NewServer(daemon.DefaultSocketPath, tun2socksPath, debug)
	if err := srv.Start(); err != nil {
		return fmt.Errorf("failed to start daemon server: %w", err)
	}
	daemon.Logf("Daemon started, listening on %s", daemon.DefaultSocketPath)

	// Wait for termination signal.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	sig := <-sigCh
	daemon.Logf("Received signal %s, shutting down", sig)

	if err := srv.Stop(); err != nil {
		daemon.Logf("Shutdown error: %v", err)
		return err
	}
	daemon.Logf("Daemon stopped")
	return nil
}

// boolStatus returns a human-readable string for a boolean status value.
func boolStatus(b bool) string {
	if b {
		return "yes"
	}
	return "no"
}


@@ -111,6 +111,7 @@ Configuration file format:
rootCmd.AddCommand(newCompletionCmd(rootCmd))
rootCmd.AddCommand(newTemplatesCmd())
+	rootCmd.AddCommand(newDaemonCmd())
if err := rootCmd.Execute(); err != nil {
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
@@ -203,19 +204,20 @@ func runCommand(cmd *cobra.Command, args []string) error {
if templatePath != "" {
learnedCfg, loadErr := config.Load(templatePath)
-		if loadErr != nil {
+		switch {
+		case loadErr != nil:
if debug {
fmt.Fprintf(os.Stderr, "[greywall] Warning: failed to load learned template: %v\n", loadErr)
}
-		} else if learnedCfg != nil {
+		case learnedCfg != nil:
cfg = config.Merge(cfg, learnedCfg)
if debug {
fmt.Fprintf(os.Stderr, "[greywall] Auto-loaded learned template for %q\n", templateLabel)
}
} else if templateName != "" {
case templateName != "":
// Explicit --template but file doesn't exist
return fmt.Errorf("learned template %q not found at %s\nRun: greywall templates list", templateName, templatePath)
} else if cmdName != "" {
case cmdName != "":
// No template found for this command - suggest creating one
fmt.Fprintf(os.Stderr, "[greywall] No learned template for %q. Run with --learning to create one.\n", cmdName)
}
@@ -265,7 +267,7 @@ func runCommand(cmd *cobra.Command, args []string) error {
// Learning mode setup
if learning {
-		if err := sandbox.CheckStraceAvailable(); err != nil {
+		if err := sandbox.CheckLearningAvailable(); err != nil {
return err
}
fmt.Fprintf(os.Stderr, "[greywall] Learning mode: tracing filesystem access for %q\n", cmdName)
@@ -303,6 +305,7 @@ func runCommand(cmd *cobra.Command, args []string) error {
if debug {
fmt.Fprintf(os.Stderr, "[greywall] Sandboxed command: %s\n", sandboxedCommand)
fmt.Fprintf(os.Stderr, "[greywall] Executing: sh -c %q\n", sandboxedCommand)
}
hardenedEnv := sandbox.GetHardenedEnv()
@@ -326,6 +329,11 @@ func runCommand(cmd *cobra.Command, args []string) error {
return fmt.Errorf("failed to start command: %w", err)
}
+	// Record root PID for macOS learning mode (eslogger uses this for process tree tracking)
+	if learning && platform.Detect() == platform.MacOS && execCmd.Process != nil {
+		manager.SetLearningRootPID(execCmd.Process.Pid)
+	}
// Start Linux monitors (eBPF tracing for filesystem violations)
var linuxMonitors *sandbox.LinuxMonitors
if monitor && execCmd.Process != nil {
@@ -503,7 +511,7 @@ Examples:
RunE: func(cmd *cobra.Command, args []string) error {
name := args[0]
templatePath := sandbox.LearnedTemplatePath(name)
-		data, err := os.ReadFile(templatePath)
+		data, err := os.ReadFile(templatePath) //nolint:gosec // user-specified template path - intentional
if err != nil {
if os.IsNotExist(err) {
return fmt.Errorf("template %q not found\nRun: greywall templates list", name)
@@ -593,12 +601,12 @@ parseCommand:
// Find the executable
execPath, err := exec.LookPath(command[0])
if err != nil {
fmt.Fprintf(os.Stderr, "[greywall:landlock-wrapper] Error: command not found: %s\n", command[0])
fmt.Fprintf(os.Stderr, "[greywall:landlock-wrapper] Error: command not found: %s\n", command[0]) //nolint:gosec // logging to stderr, not web output
os.Exit(127)
}
if debugMode {
fmt.Fprintf(os.Stderr, "[greywall:landlock-wrapper] Exec: %s %v\n", execPath, command[1:])
fmt.Fprintf(os.Stderr, "[greywall:landlock-wrapper] Exec: %s %v\n", execPath, command[1:]) //nolint:gosec // logging to stderr, not web output
}
// Sanitize environment (strips LD_PRELOAD, etc.)
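The sanitization step is only referenced in this hunk, not shown. A minimal sketch of what such a step typically does, assuming a blocklist of loader-injection variables (the names below are illustrative, not greywall's actual list):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeEnv drops dynamic-loader injection variables before exec.
// The blocked set here is an assumption for illustration.
func sanitizeEnv(env []string) []string {
	blocked := map[string]bool{
		"LD_PRELOAD":      true,
		"LD_AUDIT":        true,
		"LD_LIBRARY_PATH": true,
	}
	out := make([]string, 0, len(env))
	for _, kv := range env {
		name, _, _ := strings.Cut(kv, "=")
		if !blocked[name] {
			out = append(out, kv)
		}
	}
	return out
}

func main() {
	fmt.Println(sanitizeEnv([]string{"PATH=/usr/bin", "LD_PRELOAD=/tmp/x.so", "HOME=/root"}))
}
```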


@@ -93,3 +93,29 @@ echo 'kernel.apparmor_restrict_unprivileged_userns=0' | sudo tee /etc/sysctl.d/9
```
**Alternative:** Accept the limitation — greywall still works for filesystem sandboxing, seccomp, and Landlock. Network access is blocked outright rather than redirected through a proxy.
---
## Linux: symlinked system dirs invisible after `--tmpfs /`
**Problem:** On merged-usr distros (Arch, Fedora, modern Ubuntu), `/bin`, `/sbin`, `/lib`, `/lib64` are symlinks (e.g., `/bin -> usr/bin`). When switching from `--ro-bind / /` to `--tmpfs /` for deny-by-default isolation, these symlinks don't exist in the empty root. The `canMountOver()` helper explicitly rejects symlinks, so `--ro-bind /bin /bin` was silently skipped. Result: `execvp /usr/bin/bash: No such file or directory` — bash exists at `/usr/bin/bash` but the dynamic linker at `/lib64/ld-linux-x86-64.so.2` can't be found because `/lib64` is missing.
**Diagnosis:** The error message is misleading. `execvp` reports "No such file or directory" both when the binary is missing and when the ELF interpreter (dynamic linker) is missing. The actual binary `/usr/bin/bash` existed via the `/usr` bind-mount, but the symlink `/lib64 -> usr/lib` was gone.
**Fix:** Check each system path with `isSymlink()` before mounting. Symlinks get `--symlink <target> <path>` (bwrap recreates the symlink inside the sandbox); real directories get `--ro-bind`. On Arch: `--symlink usr/bin /bin`, `--symlink usr/bin /sbin`, `--symlink usr/lib /lib`, `--symlink usr/lib /lib64`.
---
## Linux: Landlock denies reads on bind-mounted /dev/null
**Problem:** To mask `.env` files inside CWD, the initial approach used `--ro-bind /dev/null <cwd>/.env`. Inside the sandbox, `.env` appeared as a character device (bind mounts preserve file type). Landlock's `LANDLOCK_ACCESS_FS_READ_FILE` right only covers regular files, not character devices. Result: `cat .env` returned "Permission denied" instead of empty content.
**Fix:** Use an empty regular file (`/tmp/greywall/empty`, 0 bytes, mode 0444) as the mask source instead of `/dev/null`. Landlock sees a regular file and allows the read. The file is created once in a fixed location under the greywall temp dir.
---
## Linux: mandatory deny paths override sensitive file masks
**Problem:** In deny-by-default mode, `buildDenyByDefaultMounts()` correctly masked `.env` with `--ro-bind /tmp/greywall/empty <cwd>/.env`. But later in `WrapCommandLinuxWithOptions()`, the mandatory deny paths section called `getMandatoryDenyPaths()` which included `.env` files (added for write protection). It then applied `--ro-bind <cwd>/.env <cwd>/.env`, binding the real file over the empty mask. bwrap applies mounts in order, so the later ro-bind undid the masking.
**Fix:** Track paths already masked by `buildDenyByDefaultMounts()` in a set. Skip those paths in the mandatory deny section to preserve the empty-file overlay.
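The ordering fix amounts to recording masked paths in a set and skipping them in the later pass; a sketch with illustrative names, not the actual `buildDenyByDefaultMounts()` internals:

```go
package main

import "fmt"

// buildMounts emits bwrap args in order: first the empty-file masks,
// then the mandatory deny ro-binds, skipping any path already masked so
// a later ro-bind of the real file can't undo the overlay.
func buildMounts(maskedPaths, mandatoryDeny []string, emptyFile string) []string {
	masked := make(map[string]bool, len(maskedPaths))
	var args []string
	for _, p := range maskedPaths {
		masked[p] = true
		args = append(args, "--ro-bind", emptyFile, p)
	}
	for _, p := range mandatoryDeny {
		if masked[p] {
			continue // keep the empty-file overlay
		}
		args = append(args, "--ro-bind", p, p)
	}
	return args
}

func main() {
	fmt.Println(buildMounts(
		[]string{"/work/.env"},
		[]string{"/work/.env", "/work/secrets"},
		"/tmp/greywall/empty",
	))
}
```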


@@ -36,7 +36,7 @@ type NetworkConfig struct {
// FilesystemConfig defines filesystem restrictions.
type FilesystemConfig struct {
-	DefaultDenyRead bool  `json:"defaultDenyRead,omitempty"` // If true, deny reads by default except system paths and AllowRead
+	DefaultDenyRead *bool `json:"defaultDenyRead,omitempty"` // If nil or true, deny reads by default except system paths, CWD, and AllowRead
AllowRead []string `json:"allowRead"` // Paths to allow reading (used when DefaultDenyRead is true)
DenyRead []string `json:"denyRead"`
AllowWrite []string `json:"allowWrite"`
@@ -44,6 +44,12 @@ type FilesystemConfig struct {
AllowGitConfig bool `json:"allowGitConfig,omitempty"`
}
// IsDefaultDenyRead returns whether deny-by-default read mode is enabled.
// Defaults to true when not explicitly set (nil).
func (f *FilesystemConfig) IsDefaultDenyRead() bool {
return f.DefaultDenyRead == nil || *f.DefaultDenyRead
}
// CommandConfig defines command restrictions.
type CommandConfig struct {
Deny []string `json:"deny"`
@@ -417,8 +423,8 @@ func Merge(base, override *Config) *Config {
},
Filesystem: FilesystemConfig{
-			// Boolean fields: true if either enables it
-			DefaultDenyRead: base.Filesystem.DefaultDenyRead || override.Filesystem.DefaultDenyRead,
+			// Pointer field: override wins if set, otherwise base (nil = deny-by-default)
+			DefaultDenyRead: mergeOptionalBool(base.Filesystem.DefaultDenyRead, override.Filesystem.DefaultDenyRead),
// Append slices
AllowRead: mergeStrings(base.Filesystem.AllowRead, override.Filesystem.AllowRead),
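The `mergeOptionalBool` helper isn't shown in this hunk; given the "override wins if set" comment, a plausible reconstruction (not the package's verbatim code):

```go
package main

import "fmt"

// mergeOptionalBool resolves a tri-state *bool: an explicitly set override
// wins; otherwise the base carries through, with nil meaning "use the
// default" (deny-by-default in the DefaultDenyRead case).
func mergeOptionalBool(base, override *bool) *bool {
	if override != nil {
		return override
	}
	return base
}

func main() {
	f, tr := false, true
	fmt.Println(*mergeOptionalBool(&tr, &f)) // explicit override wins
	fmt.Println(mergeOptionalBool(nil, nil) == nil)
}
```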


@@ -410,7 +410,7 @@ func TestMerge(t *testing.T) {
t.Run("merge defaultDenyRead and allowRead", func(t *testing.T) {
base := &Config{
Filesystem: FilesystemConfig{
-				DefaultDenyRead: true,
+				DefaultDenyRead: boolPtr(true),
AllowRead: []string{"/home/user/project"},
},
}
@@ -421,13 +421,40 @@ func TestMerge(t *testing.T) {
}
result := Merge(base, override)
-		if !result.Filesystem.DefaultDenyRead {
-			t.Error("expected DefaultDenyRead to be true (from base)")
+		if !result.Filesystem.IsDefaultDenyRead() {
+			t.Error("expected IsDefaultDenyRead() to be true (from base)")
}
if len(result.Filesystem.AllowRead) != 2 {
t.Errorf("expected 2 allowRead paths, got %d: %v", len(result.Filesystem.AllowRead), result.Filesystem.AllowRead)
}
})
t.Run("defaultDenyRead nil defaults to true", func(t *testing.T) {
base := &Config{
Filesystem: FilesystemConfig{},
}
result := Merge(base, nil)
if !result.Filesystem.IsDefaultDenyRead() {
t.Error("expected IsDefaultDenyRead() to be true when nil (deny-by-default)")
}
})
t.Run("defaultDenyRead explicit false overrides", func(t *testing.T) {
base := &Config{
Filesystem: FilesystemConfig{
DefaultDenyRead: boolPtr(true),
},
}
override := &Config{
Filesystem: FilesystemConfig{
DefaultDenyRead: boolPtr(false),
},
}
result := Merge(base, override)
if result.Filesystem.IsDefaultDenyRead() {
t.Error("expected IsDefaultDenyRead() to be false (override explicit false)")
}
})
}
func boolPtr(b bool) *bool {

internal/daemon/client.go (new file, 181 lines)

@@ -0,0 +1,181 @@
package daemon
import (
"encoding/json"
"fmt"
"net"
"os"
"time"
)
const (
// clientDialTimeout is the maximum time to wait when connecting to the daemon.
clientDialTimeout = 5 * time.Second
// clientReadTimeout is the maximum time to wait for a response from the daemon.
clientReadTimeout = 30 * time.Second
)
// Client communicates with the greywall daemon over a Unix socket using
// newline-delimited JSON.
type Client struct {
socketPath string
debug bool
}
// NewClient creates a new daemon client that connects to the given Unix socket path.
func NewClient(socketPath string, debug bool) *Client {
return &Client{
socketPath: socketPath,
debug: debug,
}
}
// CreateSession asks the daemon to create a new sandbox session with the given
// proxy URL and optional DNS address. Returns the session info on success.
func (c *Client) CreateSession(proxyURL, dnsAddr string) (*Response, error) {
req := Request{
Action: "create_session",
ProxyURL: proxyURL,
DNSAddr: dnsAddr,
}
resp, err := c.sendRequest(req)
if err != nil {
return nil, fmt.Errorf("create session request failed: %w", err)
}
if !resp.OK {
return resp, fmt.Errorf("create session failed: %s", resp.Error)
}
return resp, nil
}
// DestroySession asks the daemon to tear down the session with the given ID.
func (c *Client) DestroySession(sessionID string) error {
req := Request{
Action: "destroy_session",
SessionID: sessionID,
}
resp, err := c.sendRequest(req)
if err != nil {
return fmt.Errorf("destroy session request failed: %w", err)
}
if !resp.OK {
return fmt.Errorf("destroy session failed: %s", resp.Error)
}
return nil
}
// StartLearning asks the daemon to start a filesystem trace (eslogger) for learning mode.
func (c *Client) StartLearning() (*Response, error) {
req := Request{
Action: "start_learning",
}
resp, err := c.sendRequest(req)
if err != nil {
return nil, fmt.Errorf("start learning request failed: %w", err)
}
if !resp.OK {
return resp, fmt.Errorf("start learning failed: %s", resp.Error)
}
return resp, nil
}
// StopLearning asks the daemon to stop the filesystem trace for the given learning session.
func (c *Client) StopLearning(learningID string) error {
req := Request{
Action: "stop_learning",
LearningID: learningID,
}
resp, err := c.sendRequest(req)
if err != nil {
return fmt.Errorf("stop learning request failed: %w", err)
}
if !resp.OK {
return fmt.Errorf("stop learning failed: %s", resp.Error)
}
return nil
}
// Status queries the daemon for its current status.
func (c *Client) Status() (*Response, error) {
req := Request{
Action: "status",
}
resp, err := c.sendRequest(req)
if err != nil {
return nil, fmt.Errorf("status request failed: %w", err)
}
if !resp.OK {
return resp, fmt.Errorf("status request failed: %s", resp.Error)
}
return resp, nil
}
// IsRunning checks whether the daemon is reachable by attempting to connect
// to the Unix socket. Returns true if the connection succeeds.
func (c *Client) IsRunning() bool {
conn, err := net.DialTimeout("unix", c.socketPath, clientDialTimeout)
if err != nil {
return false
}
_ = conn.Close()
return true
}
// sendRequest connects to the daemon Unix socket, sends a JSON-encoded request,
// and reads back a JSON-encoded response.
func (c *Client) sendRequest(req Request) (*Response, error) {
c.logDebug("Connecting to daemon at %s", c.socketPath)
conn, err := net.DialTimeout("unix", c.socketPath, clientDialTimeout)
if err != nil {
return nil, fmt.Errorf("failed to connect to daemon at %s: %w", c.socketPath, err)
}
defer conn.Close() //nolint:errcheck // best-effort close on request completion
// Set a read deadline for the response.
if err := conn.SetReadDeadline(time.Now().Add(clientReadTimeout)); err != nil {
return nil, fmt.Errorf("failed to set read deadline: %w", err)
}
// Send the request as newline-delimited JSON.
encoder := json.NewEncoder(conn)
if err := encoder.Encode(req); err != nil {
return nil, fmt.Errorf("failed to send request: %w", err)
}
c.logDebug("Sent request: action=%s", req.Action)
// Read the response.
decoder := json.NewDecoder(conn)
var resp Response
if err := decoder.Decode(&resp); err != nil {
return nil, fmt.Errorf("failed to read response: %w", err)
}
c.logDebug("Received response: ok=%v", resp.OK)
return &resp, nil
}
// logDebug writes a debug message to stderr with the [greywall:daemon] prefix.
func (c *Client) logDebug(format string, args ...interface{}) {
if c.debug {
fmt.Fprintf(os.Stderr, "[greywall:daemon] "+format+"\n", args...)
}
}

internal/daemon/dns.go (new file, 186 lines)

@@ -0,0 +1,186 @@
//go:build darwin || linux
package daemon
import (
"fmt"
"net"
"os"
"sync"
"time"
)
const (
// maxDNSPacketSize is the maximum UDP packet size we accept.
// DNS can theoretically be up to 65535 bytes, but practically much smaller.
maxDNSPacketSize = 4096
// upstreamTimeout is the time we wait for a response from the upstream DNS server.
upstreamTimeout = 5 * time.Second
)
// DNSRelay is a UDP DNS relay that forwards DNS queries from sandboxed processes
// to a configured upstream DNS server. It operates as a simple packet relay without
// parsing DNS protocol contents.
type DNSRelay struct {
udpConn *net.UDPConn
targetAddr string // upstream DNS server address (host:port)
listenAddr string // address we're listening on
wg sync.WaitGroup
done chan struct{}
debug bool
}
// NewDNSRelay creates a new DNS relay that listens on listenAddr and forwards
// queries to dnsAddr. The listenAddr will typically be "127.0.0.2:53" (loopback alias).
// The dnsAddr must be in "host:port" format (e.g. "1.1.1.1:53").
func NewDNSRelay(listenAddr, dnsAddr string, debug bool) (*DNSRelay, error) {
// Validate the upstream DNS address is parseable as host:port.
targetHost, targetPort, err := net.SplitHostPort(dnsAddr)
if err != nil {
return nil, fmt.Errorf("invalid DNS address %q: %w", dnsAddr, err)
}
if targetHost == "" {
return nil, fmt.Errorf("invalid DNS address %q: empty host", dnsAddr)
}
if targetPort == "" {
return nil, fmt.Errorf("invalid DNS address %q: empty port", dnsAddr)
}
// Resolve and bind the listen address.
udpAddr, err := net.ResolveUDPAddr("udp", listenAddr)
if err != nil {
return nil, fmt.Errorf("failed to resolve listen address %q: %w", listenAddr, err)
}
conn, err := net.ListenUDP("udp", udpAddr)
if err != nil {
return nil, fmt.Errorf("failed to bind UDP socket on %q: %w", listenAddr, err)
}
return &DNSRelay{
udpConn: conn,
targetAddr: dnsAddr,
listenAddr: conn.LocalAddr().String(),
done: make(chan struct{}),
debug: debug,
}, nil
}
// ListenAddr returns the actual address the relay is listening on.
// This is useful when port 0 was used to get an ephemeral port.
func (d *DNSRelay) ListenAddr() string {
return d.listenAddr
}
// Start begins the DNS relay loop. It reads incoming UDP packets from the
// listening socket and spawns a goroutine per query to forward it to the
// upstream DNS server and relay the response back.
func (d *DNSRelay) Start() error {
if d.debug {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Listening on %s, forwarding to %s\n", d.listenAddr, d.targetAddr)
}
d.wg.Add(1)
go d.readLoop()
return nil
}
// Stop shuts down the DNS relay. It signals the read loop to stop, closes the
// listening socket, and waits for all in-flight queries to complete.
func (d *DNSRelay) Stop() {
close(d.done)
_ = d.udpConn.Close()
d.wg.Wait()
if d.debug {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Stopped\n")
}
}
// readLoop is the main loop that reads incoming DNS queries from the listening socket.
func (d *DNSRelay) readLoop() {
defer d.wg.Done()
buf := make([]byte, maxDNSPacketSize)
for {
n, clientAddr, err := d.udpConn.ReadFromUDP(buf)
if err != nil {
select {
case <-d.done:
// Shutting down, expected error from closed socket.
return
default:
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Read error: %v\n", err)
continue
}
}
if n == 0 {
continue
}
// Copy the packet data so the buffer can be reused immediately.
query := make([]byte, n)
copy(query, buf[:n])
d.wg.Add(1)
go d.handleQuery(query, clientAddr)
}
}
// handleQuery forwards a single DNS query to the upstream server and relays
// the response back to the original client. It creates a dedicated UDP connection
// to the upstream server to avoid multiplexing complexity.
func (d *DNSRelay) handleQuery(query []byte, clientAddr *net.UDPAddr) {
defer d.wg.Done()
if d.debug {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Query from %s (%d bytes)\n", clientAddr, len(query))
}
// Create a dedicated UDP connection to the upstream DNS server.
upstreamConn, err := net.Dial("udp", d.targetAddr)
if err != nil {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Failed to connect to upstream %s: %v\n", d.targetAddr, err)
return
}
defer upstreamConn.Close() //nolint:errcheck // best-effort cleanup of per-query UDP connection
// Send the query to the upstream server.
if _, err := upstreamConn.Write(query); err != nil {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Failed to send query to upstream: %v\n", err)
return
}
// Wait for the response with a timeout.
if err := upstreamConn.SetReadDeadline(time.Now().Add(upstreamTimeout)); err != nil {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Failed to set read deadline: %v\n", err)
return
}
resp := make([]byte, maxDNSPacketSize)
n, err := upstreamConn.Read(resp)
if err != nil {
if d.debug {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Upstream response error: %v\n", err)
}
return
}
// Send the response back to the original client.
if _, err := d.udpConn.WriteToUDP(resp[:n], clientAddr); err != nil {
// The listening socket may have been closed during shutdown.
select {
case <-d.done:
return
default:
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Failed to send response to %s: %v\n", clientAddr, err)
}
}
if d.debug {
fmt.Fprintf(os.Stderr, "[greywall:dns-relay] Response to %s (%d bytes)\n", clientAddr, n)
}
}

internal/daemon/dns_test.go (new file, 296 lines)

@@ -0,0 +1,296 @@
//go:build darwin || linux
package daemon
import (
"bytes"
"net"
"sync"
"testing"
"time"
)
// startMockDNSServer starts a UDP server that echoes back whatever it receives,
// prefixed with "RESP:" to distinguish responses from queries.
// Returns the server's address and a cleanup function.
func startMockDNSServer(t *testing.T) (string, func()) {
t.Helper()
addr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
if err != nil {
t.Fatalf("Failed to resolve address: %v", err)
}
conn, err := net.ListenUDP("udp", addr)
if err != nil {
t.Fatalf("Failed to start mock DNS server: %v", err)
}
done := make(chan struct{})
go func() {
buf := make([]byte, maxDNSPacketSize)
for {
n, remoteAddr, err := conn.ReadFromUDP(buf)
if err != nil {
select {
case <-done:
return
default:
continue
}
}
// Echo back with "RESP:" prefix.
resp := append([]byte("RESP:"), buf[:n]...)
_, _ = conn.WriteToUDP(resp, remoteAddr) // best-effort in test
}
}()
cleanup := func() {
close(done)
_ = conn.Close()
}
return conn.LocalAddr().String(), cleanup
}
// startSilentDNSServer starts a UDP server that accepts connections but never
// responds, simulating an upstream timeout.
func startSilentDNSServer(t *testing.T) (string, func()) {
t.Helper()
addr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
if err != nil {
t.Fatalf("Failed to resolve address: %v", err)
}
conn, err := net.ListenUDP("udp", addr)
if err != nil {
t.Fatalf("Failed to start silent DNS server: %v", err)
}
cleanup := func() {
_ = conn.Close()
}
return conn.LocalAddr().String(), cleanup
}
func TestDNSRelay_ForwardPacket(t *testing.T) {
// Start a mock upstream DNS server.
upstreamAddr, cleanupUpstream := startMockDNSServer(t)
defer cleanupUpstream()
// Create and start the relay.
relay, err := NewDNSRelay("127.0.0.1:0", upstreamAddr, true)
if err != nil {
t.Fatalf("Failed to create DNS relay: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Failed to start DNS relay: %v", err)
}
defer relay.Stop()
// Send a query through the relay.
clientConn, err := net.Dial("udp", relay.ListenAddr())
if err != nil {
t.Fatalf("Failed to connect to relay: %v", err)
}
defer clientConn.Close() //nolint:errcheck // test cleanup
query := []byte("test-dns-query")
if _, err := clientConn.Write(query); err != nil {
t.Fatalf("Failed to send query: %v", err)
}
// Read the response.
if err := clientConn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
t.Fatalf("Failed to set read deadline: %v", err)
}
buf := make([]byte, maxDNSPacketSize)
n, err := clientConn.Read(buf)
if err != nil {
t.Fatalf("Failed to read response: %v", err)
}
expected := append([]byte("RESP:"), query...)
if !bytes.Equal(buf[:n], expected) {
t.Errorf("Unexpected response: got %q, want %q", buf[:n], expected)
}
}
func TestDNSRelay_UpstreamTimeout(t *testing.T) {
// Start a silent upstream server that never responds.
upstreamAddr, cleanupUpstream := startSilentDNSServer(t)
defer cleanupUpstream()
// Create and start the relay.
relay, err := NewDNSRelay("127.0.0.1:0", upstreamAddr, false)
if err != nil {
t.Fatalf("Failed to create DNS relay: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Failed to start DNS relay: %v", err)
}
defer relay.Stop()
// Send a query through the relay.
clientConn, err := net.Dial("udp", relay.ListenAddr())
if err != nil {
t.Fatalf("Failed to connect to relay: %v", err)
}
defer clientConn.Close() //nolint:errcheck // test cleanup
query := []byte("test-dns-timeout")
if _, err := clientConn.Write(query); err != nil {
t.Fatalf("Failed to send query: %v", err)
}
// The relay should not send back a response because upstream timed out.
// Set a short deadline on the client side; we expect no data.
if err := clientConn.SetReadDeadline(time.Now().Add(6 * time.Second)); err != nil {
t.Fatalf("Failed to set read deadline: %v", err)
}
buf := make([]byte, maxDNSPacketSize)
_, err = clientConn.Read(buf)
if err == nil {
t.Fatal("Expected timeout error reading from relay, but got a response")
}
// Verify it was a timeout error.
netErr, ok := err.(net.Error)
if !ok || !netErr.Timeout() {
t.Fatalf("Expected timeout error, got: %v", err)
}
}
func TestDNSRelay_ConcurrentQueries(t *testing.T) {
// Start a mock upstream DNS server.
upstreamAddr, cleanupUpstream := startMockDNSServer(t)
defer cleanupUpstream()
// Create and start the relay.
relay, err := NewDNSRelay("127.0.0.1:0", upstreamAddr, false)
if err != nil {
t.Fatalf("Failed to create DNS relay: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Failed to start DNS relay: %v", err)
}
defer relay.Stop()
const numQueries = 20
var wg sync.WaitGroup
errors := make(chan error, numQueries)
for i := range numQueries {
wg.Add(1)
go func(id int) {
defer wg.Done()
clientConn, err := net.Dial("udp", relay.ListenAddr())
if err != nil {
errors <- err
return
}
defer clientConn.Close() //nolint:errcheck // test cleanup
query := []byte("concurrent-query-" + string(rune('A'+id))) //nolint:gosec // test uses small range 0-19, no overflow
if _, err := clientConn.Write(query); err != nil {
errors <- err
return
}
if err := clientConn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
errors <- err
return
}
buf := make([]byte, maxDNSPacketSize)
n, err := clientConn.Read(buf)
if err != nil {
errors <- err
return
}
expected := append([]byte("RESP:"), query...)
if !bytes.Equal(buf[:n], expected) {
errors <- &unexpectedResponseError{got: buf[:n], want: expected}
}
}(i)
}
wg.Wait()
close(errors)
for err := range errors {
t.Errorf("Concurrent query error: %v", err)
}
}
func TestDNSRelay_ListenAddr(t *testing.T) {
// Use port 0 to get an ephemeral port.
relay, err := NewDNSRelay("127.0.0.1:0", "1.1.1.1:53", false)
if err != nil {
t.Fatalf("Failed to create DNS relay: %v", err)
}
defer relay.Stop()
addr := relay.ListenAddr()
if addr == "" {
t.Fatal("ListenAddr returned empty string")
}
host, port, err := net.SplitHostPort(addr)
if err != nil {
t.Fatalf("ListenAddr returned invalid address %q: %v", addr, err)
}
if host != "127.0.0.1" {
t.Errorf("Expected host 127.0.0.1, got %q", host)
}
if port == "0" {
t.Error("Expected assigned port, got 0")
}
}
func TestNewDNSRelay_InvalidDNSAddr(t *testing.T) {
tests := []struct {
name string
dnsAddr string
}{
{"missing port", "1.1.1.1"},
{"empty string", ""},
{"empty host", ":53"},
{"empty port", "1.1.1.1:"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewDNSRelay("127.0.0.1:0", tt.dnsAddr, false)
if err == nil {
t.Errorf("Expected error for DNS address %q, got nil", tt.dnsAddr)
}
})
}
}
func TestNewDNSRelay_InvalidListenAddr(t *testing.T) {
_, err := NewDNSRelay("invalid-addr", "1.1.1.1:53", false)
if err == nil {
t.Error("Expected error for invalid listen address, got nil")
}
}
// unexpectedResponseError is used to report mismatched responses in concurrent tests.
type unexpectedResponseError struct {
got []byte
want []byte
}
func (e *unexpectedResponseError) Error() string {
return "unexpected response: got " + string(e.got) + ", want " + string(e.want)
}

internal/daemon/launchd.go (new file, 554 lines)

@@ -0,0 +1,554 @@
//go:build darwin
package daemon
import (
"fmt"
"io"
"net"
"os"
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
)
const (
LaunchDaemonLabel = "co.greyhaven.greywall"
LaunchDaemonPlistPath = "/Library/LaunchDaemons/co.greyhaven.greywall.plist"
InstallBinaryPath = "/usr/local/bin/greywall"
InstallLibDir = "/usr/local/lib/greywall"
SandboxUserName = "_greywall"
SandboxUserUID = "399" // System user range on macOS
SandboxGroupName = "_greywall" // Group used for pf routing (same name as user)
SudoersFilePath = "/etc/sudoers.d/greywall"
DefaultSocketPath = "/var/run/greywall.sock"
)
// Install performs the full LaunchDaemon installation flow:
//  1. Verify running as root
//  2. Create the _greywall system user and group, install the sudoers rule,
//     and add the invoking user to the group
//  3. Create /usr/local/lib/greywall/ and copy tun2socks into it
//  4. Copy the current binary to /usr/local/bin/greywall
//  5. Generate and write the LaunchDaemon plist, owned root:wheel
//  6. Load the daemon and verify it starts
func Install(currentBinaryPath, tun2socksPath string, debug bool) error {
if os.Getuid() != 0 {
return fmt.Errorf("daemon install must be run as root (use sudo)")
}
// Step 1: Create system user and group.
if err := createSandboxUser(debug); err != nil {
return fmt.Errorf("failed to create sandbox user: %w", err)
}
// Step 1b: Install sudoers rule for group-based sandbox-exec.
if err := installSudoersRule(debug); err != nil {
return fmt.Errorf("failed to install sudoers rule: %w", err)
}
// Step 1c: Add invoking user to _greywall group.
addInvokingUserToGroup(debug)
// Step 2: Create lib directory and copy tun2socks.
logDebug(debug, "Creating directory %s", InstallLibDir)
if err := os.MkdirAll(InstallLibDir, 0o755); err != nil { //nolint:gosec // system lib directory needs 0755 for daemon access
return fmt.Errorf("failed to create %s: %w", InstallLibDir, err)
}
tun2socksDst := filepath.Join(InstallLibDir, "tun2socks-darwin-"+runtime.GOARCH)
logDebug(debug, "Copying tun2socks to %s", tun2socksDst)
if err := copyFile(tun2socksPath, tun2socksDst, 0o755); err != nil {
return fmt.Errorf("failed to install tun2socks: %w", err)
}
// Step 3: Copy binary to install path.
if err := os.MkdirAll(filepath.Dir(InstallBinaryPath), 0o755); err != nil { //nolint:gosec // /usr/local/bin needs 0755
return fmt.Errorf("failed to create %s: %w", filepath.Dir(InstallBinaryPath), err)
}
logDebug(debug, "Copying binary from %s to %s", currentBinaryPath, InstallBinaryPath)
if err := copyFile(currentBinaryPath, InstallBinaryPath, 0o755); err != nil {
return fmt.Errorf("failed to install binary: %w", err)
}
// Step 4: Generate and write plist.
plist := generatePlist()
logDebug(debug, "Writing plist to %s", LaunchDaemonPlistPath)
if err := os.WriteFile(LaunchDaemonPlistPath, []byte(plist), 0o644); err != nil { //nolint:gosec // LaunchDaemon plist requires 0644 per macOS convention
return fmt.Errorf("failed to write plist: %w", err)
}
// Step 5: Set ownership to root:wheel.
logDebug(debug, "Setting ownership on %s to root:wheel", LaunchDaemonPlistPath)
if err := runCmd(debug, "chown", "root:wheel", LaunchDaemonPlistPath); err != nil {
return fmt.Errorf("failed to set plist ownership: %w", err)
}
// Step 6: Load the daemon.
logDebug(debug, "Loading LaunchDaemon")
if err := runCmd(debug, "launchctl", "load", LaunchDaemonPlistPath); err != nil {
return fmt.Errorf("failed to load daemon: %w", err)
}
// Step 7: Verify the daemon actually started.
running := false
for range 10 {
time.Sleep(500 * time.Millisecond)
if IsRunning() {
running = true
break
}
}
Logf("Daemon installed successfully.")
Logf(" Plist: %s", LaunchDaemonPlistPath)
Logf(" Binary: %s", InstallBinaryPath)
Logf(" Tun2socks: %s", tun2socksDst)
actualUID := readDsclAttr(SandboxUserName, "UniqueID", true)
actualGID := readDsclAttr(SandboxGroupName, "PrimaryGroupID", false)
Logf(" User: %s (UID %s)", SandboxUserName, actualUID)
Logf(" Group: %s (GID %s, pf routing)", SandboxGroupName, actualGID)
Logf(" Sudoers: %s", SudoersFilePath)
Logf(" Log: /var/log/greywall.log")
if !running {
Logf(" Status: NOT RUNNING (check /var/log/greywall.log)")
return fmt.Errorf("daemon was loaded but failed to start; check /var/log/greywall.log")
}
Logf(" Status: running")
return nil
}
// Uninstall performs the full LaunchDaemon uninstallation flow. It attempts
// every cleanup step even if individual steps fail, collecting errors along
// the way.
func Uninstall(debug bool) error {
if os.Getuid() != 0 {
return fmt.Errorf("daemon uninstall must be run as root (use sudo)")
}
var errs []string
// Step 1: Unload daemon (best effort).
logDebug(debug, "Unloading LaunchDaemon")
if err := runCmd(debug, "launchctl", "unload", LaunchDaemonPlistPath); err != nil {
errs = append(errs, fmt.Sprintf("unload daemon: %v", err))
}
// Step 2: Remove plist file.
logDebug(debug, "Removing plist %s", LaunchDaemonPlistPath)
if err := os.Remove(LaunchDaemonPlistPath); err != nil && !os.IsNotExist(err) {
errs = append(errs, fmt.Sprintf("remove plist: %v", err))
}
// Step 3: Remove lib directory.
logDebug(debug, "Removing directory %s", InstallLibDir)
if err := os.RemoveAll(InstallLibDir); err != nil {
errs = append(errs, fmt.Sprintf("remove lib dir: %v", err))
}
// Step 4: Remove installed binary, but only if it differs from the
// currently running executable.
currentExe, exeErr := os.Executable()
resolvedCurrent := ""
if exeErr == nil {
resolvedCurrent, _ = filepath.EvalSymlinks(currentExe)
}
resolvedInstall, _ := filepath.EvalSymlinks(InstallBinaryPath)
// If we cannot resolve the current executable, fall through to removal;
// os.Remove tolerates a missing file. Without this guard, two empty
// strings would compare equal and wrongly skip the removal.
if resolvedCurrent == "" || resolvedCurrent != resolvedInstall {
logDebug(debug, "Removing binary %s", InstallBinaryPath)
if err := os.Remove(InstallBinaryPath); err != nil && !os.IsNotExist(err) {
errs = append(errs, fmt.Sprintf("remove binary: %v", err))
}
} else {
logDebug(debug, "Skipping binary removal (currently running from %s)", InstallBinaryPath)
}
// Step 5: Remove system user and group.
if err := removeSandboxUser(debug); err != nil {
errs = append(errs, fmt.Sprintf("remove sandbox user: %v", err))
}
// Step 6: Remove socket file if it exists.
logDebug(debug, "Removing socket %s", DefaultSocketPath)
if err := os.Remove(DefaultSocketPath); err != nil && !os.IsNotExist(err) {
errs = append(errs, fmt.Sprintf("remove socket: %v", err))
}
// Step 6b: Remove sudoers file.
logDebug(debug, "Removing sudoers file %s", SudoersFilePath)
if err := os.Remove(SudoersFilePath); err != nil && !os.IsNotExist(err) {
errs = append(errs, fmt.Sprintf("remove sudoers file: %v", err))
}
// Step 7: Remove pf anchor lines from /etc/pf.conf.
if err := removeAnchorFromPFConf(debug); err != nil {
errs = append(errs, fmt.Sprintf("remove pf anchor: %v", err))
}
if len(errs) > 0 {
Logf("Uninstall completed with warnings:")
for _, e := range errs {
Logf(" - %s", e)
}
return nil // partial cleanup is not a fatal error
}
Logf("Daemon uninstalled successfully.")
return nil
}
// generatePlist returns the LaunchDaemon plist XML content.
func generatePlist() string {
return `<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>` + LaunchDaemonLabel + `</string>
<key>ProgramArguments</key>
<array>
<string>` + InstallBinaryPath + `</string>
<string>daemon</string>
<string>run</string>
</array>
<key>RunAtLoad</key><true/>
<key>KeepAlive</key><true/>
<key>StandardOutPath</key>
<string>/var/log/greywall.log</string>
<key>StandardErrorPath</key>
<string>/var/log/greywall.log</string>
</dict>
</plist>
`
}
// IsInstalled returns true if the LaunchDaemon plist file exists.
func IsInstalled() bool {
_, err := os.Stat(LaunchDaemonPlistPath)
return err == nil
}
// IsRunning returns true if the daemon is currently running. It first tries
// connecting to the Unix socket (works without root), then falls back to
// launchctl print which can inspect the system domain without root.
func IsRunning() bool {
// Primary check: try to connect to the daemon socket. This proves the
// daemon is actually running and accepting connections.
conn, err := net.DialTimeout("unix", DefaultSocketPath, 2*time.Second)
if err == nil {
_ = conn.Close()
return true
}
// Fallback: launchctl print system/<label> works without root on modern
// macOS (unlike launchctl list which only shows the caller's domain).
//nolint:gosec // LaunchDaemonLabel is a constant
out, err := exec.Command("launchctl", "print", "system/"+LaunchDaemonLabel).CombinedOutput()
if err != nil {
return false
}
return strings.Contains(string(out), "state = running")
}
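The fallback in IsRunning keys on the `state = running` line of `launchctl print` output. The match can be exercised in isolation; a minimal sketch, where the sample output is abbreviated and hypothetical, not a verbatim launchctl transcript:

```go
package main

import (
	"fmt"
	"strings"
)

// launchctlStateRunning reports whether launchctl print output shows a
// running job, mirroring the substring check in IsRunning.
func launchctlStateRunning(out string) bool {
	return strings.Contains(out, "state = running")
}

func main() {
	sample := "system/co.greyhaven.greywall = {\n\tstate = running\n}\n"
	fmt.Println(launchctlStateRunning(sample)) // true
	fmt.Println(launchctlStateRunning("state = waiting"))
}
```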
// createSandboxUser creates the _greywall system user and group on macOS
// using dscl (Directory Service command line utility).
//
// If the user/group already exist with valid IDs, they are reused. Otherwise
// a free UID/GID is found dynamically (the hardcoded SandboxUserUID is only
// a preferred default — macOS system groups like com.apple.access_ssh may
// already claim it).
func createSandboxUser(debug bool) error {
userPath := "/Users/" + SandboxUserName
groupPath := "/Groups/" + SandboxGroupName
// Check if user already exists with a valid UniqueID.
existingUID := readDsclAttr(SandboxUserName, "UniqueID", true)
existingGID := readDsclAttr(SandboxGroupName, "PrimaryGroupID", false)
if existingUID != "" && existingGID != "" {
logDebug(debug, "System user %s (UID %s) and group (GID %s) already exist",
SandboxUserName, existingUID, existingGID)
return nil
}
// Find a free ID. Try the preferred default first, then scan.
id := SandboxUserUID
if !isIDFree(id, debug) {
var err error
id, err = findFreeSystemID(debug)
if err != nil {
return fmt.Errorf("failed to find free UID/GID: %w", err)
}
logDebug(debug, "Preferred ID %s is taken, using %s instead", SandboxUserUID, id)
}
// Create the group record FIRST (so the GID exists before the user references it).
logDebug(debug, "Ensuring system group %s (GID %s)", SandboxGroupName, id)
if existingGID == "" {
groupCmds := [][]string{
{"dscl", ".", "-create", groupPath},
{"dscl", ".", "-create", groupPath, "PrimaryGroupID", id},
{"dscl", ".", "-create", groupPath, "RealName", "Greywall Sandbox"},
}
for _, args := range groupCmds {
if err := runDsclCreate(debug, args); err != nil {
return err
}
}
// Verify the GID was actually set (runDsclCreate may have skipped it).
actualGID := readDsclAttr(SandboxGroupName, "PrimaryGroupID", false)
if actualGID == "" {
return fmt.Errorf("failed to set PrimaryGroupID on group %s (GID %s may be taken)", SandboxGroupName, id)
}
}
// Create the user record.
logDebug(debug, "Ensuring system user %s (UID %s)", SandboxUserName, id)
if existingUID == "" {
userCmds := [][]string{
{"dscl", ".", "-create", userPath},
{"dscl", ".", "-create", userPath, "UniqueID", id},
{"dscl", ".", "-create", userPath, "PrimaryGroupID", id},
{"dscl", ".", "-create", userPath, "UserShell", "/usr/bin/false"},
{"dscl", ".", "-create", userPath, "RealName", "Greywall Sandbox"},
{"dscl", ".", "-create", userPath, "NFSHomeDirectory", "/var/empty"},
}
for _, args := range userCmds {
if err := runDsclCreate(debug, args); err != nil {
return err
}
}
}
logDebug(debug, "System user and group %s ready (ID %s)", SandboxUserName, id)
return nil
}
// readDsclAttr reads a single attribute from a user or group record.
// Returns empty string if the record or attribute does not exist.
func readDsclAttr(name, attr string, isUser bool) string {
recordType := "/Groups/"
if isUser {
recordType = "/Users/"
}
//nolint:gosec // name and attr are controlled constants
out, err := exec.Command("dscl", ".", "-read", recordType+name, attr).Output()
if err != nil {
return ""
}
// Output format: "AttrName: value"
parts := strings.SplitN(strings.TrimSpace(string(out)), ": ", 2)
if len(parts) != 2 {
return ""
}
return strings.TrimSpace(parts[1])
}
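The `AttrName: value` split used by readDsclAttr is easy to test on its own; a minimal sketch with hypothetical sample strings (the real function shells out to dscl first):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDsclAttr extracts the value from dscl -read output of the form
// "AttrName: value". Returns "" when the format does not match.
func parseDsclAttr(out string) string {
	parts := strings.SplitN(strings.TrimSpace(out), ": ", 2)
	if len(parts) != 2 {
		return ""
	}
	return strings.TrimSpace(parts[1])
}

func main() {
	fmt.Println(parseDsclAttr("PrimaryGroupID: 399\n")) // 399
	fmt.Println(parseDsclAttr("no-separator") == "")    // true
}
```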
// isIDFree checks whether a given numeric ID is available as both a UID and GID.
func isIDFree(id string, debug bool) bool {
// Check if any user has this UniqueID.
//nolint:gosec // id is a controlled numeric string
out, err := exec.Command("dscl", ".", "-search", "/Users", "UniqueID", id).Output()
if err == nil && strings.TrimSpace(string(out)) != "" {
logDebug(debug, "ID %s is taken by a user", id)
return false
}
// Check if any group has this PrimaryGroupID.
//nolint:gosec // id is a controlled numeric string
out, err = exec.Command("dscl", ".", "-search", "/Groups", "PrimaryGroupID", id).Output()
if err == nil && strings.TrimSpace(string(out)) != "" {
logDebug(debug, "ID %s is taken by a group", id)
return false
}
return true
}
// findFreeSystemID scans the macOS system ID range (350-499) for a UID/GID
// pair that is not in use by any existing user or group.
func findFreeSystemID(debug bool) (string, error) {
for i := 350; i < 500; i++ {
id := strconv.Itoa(i)
if isIDFree(id, debug) {
return id, nil
}
}
return "", fmt.Errorf("no free system UID/GID found in range 350-499")
}
// runDsclCreate runs a dscl -create command, silently ignoring
// eDSRecordAlreadyExists errors (idempotent for repeated installs).
func runDsclCreate(debug bool, args []string) error {
err := runCmd(debug, args[0], args[1:]...)
if err != nil && strings.Contains(err.Error(), "eDSRecordAlreadyExists") {
logDebug(debug, "Already exists, skipping: %s", strings.Join(args, " "))
return nil
}
if err != nil {
return fmt.Errorf("dscl command failed (%s): %w", strings.Join(args, " "), err)
}
return nil
}
// removeSandboxUser removes the _greywall system user and group.
func removeSandboxUser(debug bool) error {
var errs []string
userPath := "/Users/" + SandboxUserName
groupPath := "/Groups/" + SandboxGroupName
if userExists(SandboxUserName) {
logDebug(debug, "Removing system user %s", SandboxUserName)
if err := runCmd(debug, "dscl", ".", "-delete", userPath); err != nil {
errs = append(errs, fmt.Sprintf("delete user: %v", err))
}
}
// Delete the group best-effort; a "not found" error is expected when it
// was never created.
logDebug(debug, "Removing system group %s", SandboxGroupName)
if err := runCmd(debug, "dscl", ".", "-delete", groupPath); err != nil {
// Group may not exist; only record error if it's not a "not found" case.
errStr := err.Error()
if !strings.Contains(errStr, "not found") && !strings.Contains(errStr, "does not exist") {
errs = append(errs, fmt.Sprintf("delete group: %v", err))
}
}
if len(errs) > 0 {
return fmt.Errorf("sandbox user removal issues: %s", strings.Join(errs, "; "))
}
return nil
}
// userExists checks if a user exists on macOS by querying the directory service.
func userExists(username string) bool {
//nolint:gosec // username is a controlled constant
err := exec.Command("dscl", ".", "-read", "/Users/"+username).Run()
return err == nil
}
// installSudoersRule writes a sudoers rule that allows members of the
// _greywall group to run sandbox-exec as any user with group _greywall,
// without a password. The rule is validated with visudo -cf before install.
func installSudoersRule(debug bool) error {
rule := fmt.Sprintf("%%%s ALL = (ALL:%s) NOPASSWD: /usr/bin/sandbox-exec *\n",
SandboxGroupName, SandboxGroupName)
logDebug(debug, "Writing sudoers rule to %s", SudoersFilePath)
// Ensure /etc/sudoers.d exists.
if err := os.MkdirAll(filepath.Dir(SudoersFilePath), 0o755); err != nil { //nolint:gosec // /etc/sudoers.d must be 0755
return fmt.Errorf("failed to create sudoers directory: %w", err)
}
// Write to a temp file first, then validate with visudo.
tmpFile := SudoersFilePath + ".tmp"
if err := os.WriteFile(tmpFile, []byte(rule), 0o440); err != nil { //nolint:gosec // sudoers files require 0440 per sudo(8)
return fmt.Errorf("failed to write sudoers temp file: %w", err)
}
// Validate syntax before installing.
//nolint:gosec // tmpFile is a controlled path
if err := runCmd(debug, "visudo", "-cf", tmpFile); err != nil {
_ = os.Remove(tmpFile)
return fmt.Errorf("sudoers validation failed: %w", err)
}
// Move validated file into place.
if err := os.Rename(tmpFile, SudoersFilePath); err != nil {
_ = os.Remove(tmpFile)
return fmt.Errorf("failed to install sudoers file: %w", err)
}
// Ensure correct ownership (root:wheel) and permissions (0440).
if err := runCmd(debug, "chown", "root:wheel", SudoersFilePath); err != nil {
return fmt.Errorf("failed to set sudoers ownership: %w", err)
}
if err := os.Chmod(SudoersFilePath, 0o440); err != nil { //nolint:gosec // sudoers files require 0440 per sudo(8)
return fmt.Errorf("failed to set sudoers permissions: %w", err)
}
logDebug(debug, "Sudoers rule installed: %s", SudoersFilePath)
return nil
}
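The `%%%s` verb sequence in the rule format string is easy to misread: `%%` escapes to a literal `%` (the sudoers group prefix), then `%s` consumes the group name. A minimal sketch of the rendered rule, using the same group name as SandboxGroupName:

```go
package main

import "fmt"

// sudoersRule renders the NOPASSWD rule written by installSudoersRule.
// "%%" produces a literal '%', which sudoers uses to mark a group.
func sudoersRule(group string) string {
	return fmt.Sprintf("%%%s ALL = (ALL:%s) NOPASSWD: /usr/bin/sandbox-exec *\n", group, group)
}

func main() {
	fmt.Print(sudoersRule("_greywall"))
	// %_greywall ALL = (ALL:_greywall) NOPASSWD: /usr/bin/sandbox-exec *
}
```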
// addInvokingUserToGroup adds the real invoking user (detected via SUDO_USER)
// to the _greywall group so they can use sudo -g _greywall. This is non-fatal;
// if it fails, a manual instruction is printed.
//
// We use dscl -append (not dseditgroup) because dseditgroup can reset group
// attributes like PrimaryGroupID on freshly created groups.
func addInvokingUserToGroup(debug bool) {
realUser := os.Getenv("SUDO_USER")
if realUser == "" || realUser == "root" {
Logf("Note: Could not detect invoking user (SUDO_USER not set).")
Logf(" You may need to manually add your user to the %s group:", SandboxGroupName)
Logf(" sudo dscl . -append /Groups/%s GroupMembership YOUR_USERNAME", SandboxGroupName)
return
}
groupPath := "/Groups/" + SandboxGroupName
logDebug(debug, "Adding user %s to group %s", realUser, SandboxGroupName)
//nolint:gosec // realUser comes from SUDO_USER env var set by sudo
err := runCmd(debug, "dscl", ".", "-append", groupPath, "GroupMembership", realUser)
if err != nil {
Logf("Warning: failed to add %s to group %s: %v", realUser, SandboxGroupName, err)
Logf(" You may need to run manually:")
Logf(" sudo dscl . -append %s GroupMembership %s", groupPath, realUser)
} else {
Logf(" User %s added to group %s", realUser, SandboxGroupName)
}
}
// copyFile copies a file from src to dst with the given permissions.
func copyFile(src, dst string, perm os.FileMode) error {
srcFile, err := os.Open(src) //nolint:gosec // src is from os.Executable or user flag
if err != nil {
return fmt.Errorf("open source %s: %w", src, err)
}
defer srcFile.Close() //nolint:errcheck // read-only file; close error is not actionable
dstFile, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm) //nolint:gosec // dst is a controlled install path constant
if err != nil {
return fmt.Errorf("create destination %s: %w", dst, err)
}
defer dstFile.Close() //nolint:errcheck // best-effort close; errors from Chmod/Copy are checked
if _, err := io.Copy(dstFile, srcFile); err != nil {
return fmt.Errorf("copy data: %w", err)
}
if err := dstFile.Chmod(perm); err != nil {
return fmt.Errorf("set permissions on %s: %w", dst, err)
}
return nil
}
// runCmd executes a command and returns an error if it fails. When debug is
// true, the command is logged before execution.
func runCmd(debug bool, name string, args ...string) error {
logDebug(debug, "exec: %s %s", name, strings.Join(args, " "))
//nolint:gosec // arguments are constructed from internal constants
cmd := exec.Command(name, args...)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("%s failed: %w (output: %s)", name, err, strings.TrimSpace(string(output)))
}
return nil
}
// logDebug writes a timestamped debug message to stderr.
func logDebug(debug bool, format string, args ...interface{}) {
if debug {
Logf(format, args...)
}
}
// Logf writes a timestamped message to stderr with the [greywall:daemon] prefix.
func Logf(format string, args ...interface{}) {
ts := time.Now().Format("2006-01-02 15:04:05")
fmt.Fprintf(os.Stderr, ts+" [greywall:daemon] "+format+"\n", args...)
}

@@ -0,0 +1,37 @@
//go:build !darwin
package daemon
import "fmt"
const (
LaunchDaemonLabel = "co.greyhaven.greywall"
LaunchDaemonPlistPath = "/Library/LaunchDaemons/co.greyhaven.greywall.plist"
InstallBinaryPath = "/usr/local/bin/greywall"
InstallLibDir = "/usr/local/lib/greywall"
SandboxUserName = "_greywall"
SandboxUserUID = "399"
SandboxGroupName = "_greywall"
SudoersFilePath = "/etc/sudoers.d/greywall"
DefaultSocketPath = "/var/run/greywall.sock"
)
// Install is only supported on macOS.
func Install(currentBinaryPath, tun2socksPath string, debug bool) error {
return fmt.Errorf("daemon install is only supported on macOS")
}
// Uninstall is only supported on macOS.
func Uninstall(debug bool) error {
return fmt.Errorf("daemon uninstall is only supported on macOS")
}
// IsInstalled always returns false on non-macOS platforms.
func IsInstalled() bool {
return false
}
// IsRunning always returns false on non-macOS platforms.
func IsRunning() bool {
return false
}

@@ -0,0 +1,129 @@
//go:build darwin
package daemon
import (
"strings"
"testing"
)
func TestGeneratePlist(t *testing.T) {
plist := generatePlist()
// Verify it is valid-looking XML with the expected plist header.
if !strings.HasPrefix(plist, `<?xml version="1.0" encoding="UTF-8"?>`) {
t.Error("plist should start with XML declaration")
}
if !strings.Contains(plist, `<!DOCTYPE plist PUBLIC`) {
t.Error("plist should contain DOCTYPE declaration")
}
if !strings.Contains(plist, `<plist version="1.0">`) {
t.Error("plist should contain plist version tag")
}
// Verify the label matches the constant.
expectedLabel := "<string>" + LaunchDaemonLabel + "</string>"
if !strings.Contains(plist, expectedLabel) {
t.Errorf("plist should contain label %q", LaunchDaemonLabel)
}
// Verify program arguments.
if !strings.Contains(plist, "<string>"+InstallBinaryPath+"</string>") {
t.Errorf("plist should reference binary path %q", InstallBinaryPath)
}
if !strings.Contains(plist, "<string>daemon</string>") {
t.Error("plist should contain 'daemon' argument")
}
if !strings.Contains(plist, "<string>run</string>") {
t.Error("plist should contain 'run' argument")
}
// Verify RunAtLoad and KeepAlive.
if !strings.Contains(plist, "<key>RunAtLoad</key><true/>") {
t.Error("plist should have RunAtLoad set to true")
}
if !strings.Contains(plist, "<key>KeepAlive</key><true/>") {
t.Error("plist should have KeepAlive set to true")
}
// Verify log paths.
if !strings.Contains(plist, "/var/log/greywall.log") {
t.Error("plist should reference /var/log/greywall.log for stdout/stderr")
}
}
func TestGeneratePlistProgramArguments(t *testing.T) {
plist := generatePlist()
// Verify the ProgramArguments array contains exactly 3 entries in order.
// The array should be: /usr/local/bin/greywall, daemon, run
argStart := strings.Index(plist, "<key>ProgramArguments</key>")
if argStart == -1 {
t.Fatal("plist should contain ProgramArguments key")
}
// Extract the array section.
arrayStart := strings.Index(plist[argStart:], "<array>")
arrayEnd := strings.Index(plist[argStart:], "</array>")
if arrayStart == -1 || arrayEnd == -1 {
t.Fatal("ProgramArguments should contain an array")
}
arrayContent := plist[argStart+arrayStart : argStart+arrayEnd]
expectedArgs := []string{InstallBinaryPath, "daemon", "run"}
for _, arg := range expectedArgs {
if !strings.Contains(arrayContent, "<string>"+arg+"</string>") {
t.Errorf("ProgramArguments array should contain %q", arg)
}
}
}
func TestIsInstalledDoesNotPanic(t *testing.T) {
// We cannot guarantee the daemon is absent on the test machine, so this
// only verifies IsInstalled runs without panicking and returns a bool.
result := IsInstalled()
_ = result
}
func TestIsRunningDoesNotPanic(t *testing.T) {
// Similar to TestIsInstalledDoesNotPanic: verify IsRunning runs cleanly.
result := IsRunning()
_ = result
}
func TestConstants(t *testing.T) {
// Verify constants have expected values.
tests := []struct {
name string
got string
expected string
}{
{"LaunchDaemonLabel", LaunchDaemonLabel, "co.greyhaven.greywall"},
{"LaunchDaemonPlistPath", LaunchDaemonPlistPath, "/Library/LaunchDaemons/co.greyhaven.greywall.plist"},
{"InstallBinaryPath", InstallBinaryPath, "/usr/local/bin/greywall"},
{"InstallLibDir", InstallLibDir, "/usr/local/lib/greywall"},
{"SandboxUserName", SandboxUserName, "_greywall"},
{"SandboxUserUID", SandboxUserUID, "399"},
{"SandboxGroupName", SandboxGroupName, "_greywall"},
{"SudoersFilePath", SudoersFilePath, "/etc/sudoers.d/greywall"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if tt.got != tt.expected {
t.Errorf("%s = %q, want %q", tt.name, tt.got, tt.expected)
}
})
}
}

internal/daemon/relay.go (new file)
@@ -0,0 +1,246 @@
//go:build darwin || linux
package daemon
import (
"fmt"
"io"
"net"
"net/url"
"os"
"sync"
"sync/atomic"
"time"
)
const (
defaultMaxConns = 256
connIdleTimeout = 5 * time.Minute
upstreamDialTimout = 10 * time.Second
)
// Relay is a pure Go TCP relay that forwards connections from local listeners
// to an upstream SOCKS5 proxy address. It does NOT implement the SOCKS5 protocol;
// it blindly forwards bytes between the local connection and the upstream proxy.
type Relay struct {
listeners []net.Listener // both IPv4 and IPv6 listeners
targetAddr string // external SOCKS5 proxy host:port
port int // assigned port
wg sync.WaitGroup
done chan struct{}
debug bool
maxConns int // max concurrent connections (default 256)
activeConns atomic.Int32 // current active connections
}
// NewRelay parses a proxy URL to extract host:port and binds listeners on both
// 127.0.0.1 and [::1] using the same port. The port is dynamically assigned
// from the first (IPv4) bind. If the IPv6 bind fails, the relay continues
// with IPv4 only. Binding both addresses prevents IPv6 port squatting attacks.
func NewRelay(proxyURL string, debug bool) (*Relay, error) {
u, err := url.Parse(proxyURL)
if err != nil {
return nil, fmt.Errorf("invalid proxy URL %q: %w", proxyURL, err)
}
host := u.Hostname()
port := u.Port()
if host == "" || port == "" {
return nil, fmt.Errorf("proxy URL must include host and port: %q", proxyURL)
}
targetAddr := net.JoinHostPort(host, port)
// Bind IPv4 first to get a dynamically assigned port.
ipv4Listener, err := net.Listen("tcp4", "127.0.0.1:0")
if err != nil {
return nil, fmt.Errorf("failed to bind IPv4 listener: %w", err)
}
assignedPort := ipv4Listener.Addr().(*net.TCPAddr).Port
listeners := []net.Listener{ipv4Listener}
// Bind IPv6 on the same port. If it fails, log and continue with IPv4 only.
ipv6Addr := fmt.Sprintf("[::1]:%d", assignedPort)
ipv6Listener, err := net.Listen("tcp6", ipv6Addr)
if err != nil {
if debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] IPv6 bind on %s failed, continuing IPv4 only: %v\n", ipv6Addr, err)
}
} else {
listeners = append(listeners, ipv6Listener)
}
if debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Bound %d listener(s) on port %d -> %s\n", len(listeners), assignedPort, targetAddr)
}
return &Relay{
listeners: listeners,
targetAddr: targetAddr,
port: assignedPort,
done: make(chan struct{}),
debug: debug,
maxConns: defaultMaxConns,
}, nil
}
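The dual-stack bind described in the NewRelay doc comment can be sketched standalone: bind IPv4 on an OS-assigned port, then try to claim the same port on `[::1]` so nothing else can squat on the IPv6 side. The helper name is illustrative, not part of the package; the IPv6 bind stays best-effort as above:

```go
package main

import (
	"fmt"
	"net"
)

// bindDualStack binds 127.0.0.1 on an OS-assigned port, then attempts the
// same port on [::1]. Returns the listeners and the shared port.
func bindDualStack() ([]net.Listener, int, error) {
	l4, err := net.Listen("tcp4", "127.0.0.1:0")
	if err != nil {
		return nil, 0, err
	}
	port := l4.Addr().(*net.TCPAddr).Port
	listeners := []net.Listener{l4}
	// Best-effort: on hosts without IPv6 loopback this simply stays IPv4-only.
	if l6, err := net.Listen("tcp6", fmt.Sprintf("[::1]:%d", port)); err == nil {
		listeners = append(listeners, l6)
	}
	return listeners, port, nil
}

func main() {
	ls, port, err := bindDualStack()
	if err != nil {
		panic(err)
	}
	defer func() {
		for _, l := range ls {
			_ = l.Close()
		}
	}()
	fmt.Printf("bound %d listener(s) on port %d\n", len(ls), port)
}
```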
// Port returns the local port the relay is listening on.
func (r *Relay) Port() int {
return r.port
}
// Start begins accepting connections on all listeners. Each accepted connection
// is handled in its own goroutine with bidirectional forwarding to the upstream
// proxy address. Start returns immediately; use Stop to shut down.
func (r *Relay) Start() error {
for _, ln := range r.listeners {
r.wg.Add(1)
go r.acceptLoop(ln)
}
return nil
}
// Stop gracefully shuts down the relay by closing all listeners and waiting
// for in-flight connections to finish.
func (r *Relay) Stop() {
close(r.done)
for _, ln := range r.listeners {
_ = ln.Close()
}
r.wg.Wait()
}
// acceptLoop runs the accept loop for a single listener.
func (r *Relay) acceptLoop(ln net.Listener) {
defer r.wg.Done()
for {
conn, err := ln.Accept()
if err != nil {
select {
case <-r.done:
return
default:
}
// Transient accept error; continue.
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Accept error: %v\n", err)
}
continue
}
r.wg.Add(1)
go r.handleConn(conn)
}
}
// handleConn handles a single accepted connection by dialing the upstream
// proxy and performing bidirectional byte forwarding.
func (r *Relay) handleConn(local net.Conn) {
defer r.wg.Done()
remoteAddr := local.RemoteAddr().String()
// Enforce max concurrent connections.
current := r.activeConns.Add(1)
if int(current) > r.maxConns {
r.activeConns.Add(-1)
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Connection from %s rejected: max connections (%d) reached\n", remoteAddr, r.maxConns)
}
_ = local.Close()
return
}
defer r.activeConns.Add(-1)
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Connection accepted from %s\n", remoteAddr)
}
// Dial the upstream proxy.
upstream, err := net.DialTimeout("tcp", r.targetAddr, upstreamDialTimout)
if err != nil {
fmt.Fprintf(os.Stderr, "[greywall:relay] WARNING: upstream connect to %s failed: %v\n", r.targetAddr, err)
_ = local.Close()
return
}
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Upstream connected: %s -> %s\n", remoteAddr, r.targetAddr)
}
// Bidirectional copy with proper TCP half-close.
var (
localToUpstream int64
upstreamToLocal int64
copyWg sync.WaitGroup
)
copyWg.Add(2)
// local -> upstream
go func() {
defer copyWg.Done()
localToUpstream = r.copyWithHalfClose(upstream, local)
}()
// upstream -> local
go func() {
defer copyWg.Done()
upstreamToLocal = r.copyWithHalfClose(local, upstream)
}()
copyWg.Wait()
_ = local.Close()
_ = upstream.Close()
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Connection closed %s (sent=%d recv=%d)\n", remoteAddr, localToUpstream, upstreamToLocal)
}
}
// copyWithHalfClose copies data from src to dst, setting an idle timeout on
// each read. When the source reaches EOF, it signals a TCP half-close on dst
// via CloseWrite (if available) rather than a full Close.
func (r *Relay) copyWithHalfClose(dst, src net.Conn) int64 {
buf := make([]byte, 32*1024)
var written int64
for {
// Reset idle timeout before each read.
if err := src.SetReadDeadline(time.Now().Add(connIdleTimeout)); err != nil {
break
}
nr, readErr := src.Read(buf)
if nr > 0 {
// Reset write deadline for each write.
if err := dst.SetWriteDeadline(time.Now().Add(connIdleTimeout)); err != nil {
break
}
nw, writeErr := dst.Write(buf[:nr])
written += int64(nw)
if writeErr != nil {
break
}
if nw != nr {
break
}
}
if readErr != nil {
// Source hit EOF or error: signal half-close on destination.
if tcpDst, ok := dst.(*net.TCPConn); ok {
_ = tcpDst.CloseWrite()
}
if readErr != io.EOF {
// Unexpected error; connection may have timed out.
if r.debug {
fmt.Fprintf(os.Stderr, "[greywall:relay] Copy error: %v\n", readErr)
}
}
break
}
}
return written
}
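The half-close behavior copyWithHalfClose relies on can be demonstrated with a tiny client/server pair: the client closes only its write side via CloseWrite, the server's io.ReadAll sees EOF, and the server's reply still reaches the client because the client's read side stays open. The helper name is illustrative, not part of the package:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// halfCloseDemo exercises the TCP half-close pattern: write, CloseWrite,
// then read the peer's response on the still-open read side.
func halfCloseDemo() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()

	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		data, _ := io.ReadAll(conn) // returns once the client half-closes
		_, _ = conn.Write(append([]byte("got "), data...))
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if _, err := conn.Write([]byte("ping")); err != nil {
		return "", err
	}
	// Half-close: EOF for the server, our read direction remains usable.
	if err := conn.(*net.TCPConn).CloseWrite(); err != nil {
		return "", err
	}
	reply, err := io.ReadAll(conn)
	return string(reply), err
}

func main() {
	reply, err := halfCloseDemo()
	if err != nil {
		panic(err)
	}
	fmt.Println(reply) // got ping
}
```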

@@ -0,0 +1,373 @@
//go:build darwin || linux
package daemon
import (
"bytes"
"fmt"
"io"
"net"
"sync"
"testing"
"time"
)
// startEchoServer starts a TCP server that echoes back everything it receives.
// It returns the listener and its address.
func startEchoServer(t *testing.T) net.Listener {
t.Helper()
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to start echo server: %v", err)
}
go func() {
for {
conn, err := ln.Accept()
if err != nil {
return
}
go func(c net.Conn) {
defer c.Close() //nolint:errcheck // test cleanup
_, _ = io.Copy(c, c)
}(conn)
}
}()
return ln
}
// startBlackHoleServer accepts connections but never reads/writes, then closes.
func startBlackHoleServer(t *testing.T) net.Listener {
t.Helper()
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to start black hole server: %v", err)
}
go func() {
for {
conn, err := ln.Accept()
if err != nil {
return
}
_ = conn.Close()
}
}()
return ln
}
func TestRelayBidirectionalForward(t *testing.T) {
// Start a mock upstream (echo server) acting as the "SOCKS5 proxy".
echo := startEchoServer(t)
defer echo.Close() //nolint:errcheck // test cleanup
echoAddr := echo.Addr().String()
proxyURL := fmt.Sprintf("socks5://%s", echoAddr)
relay, err := NewRelay(proxyURL, true)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
// Connect through the relay.
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
t.Fatalf("failed to connect to relay: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
// Send data and verify it echoes back.
msg := []byte("hello, relay!")
if _, err := conn.Write(msg); err != nil {
t.Fatalf("write failed: %v", err)
}
buf := make([]byte, len(msg))
_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
if _, err := io.ReadFull(conn, buf); err != nil {
t.Fatalf("read failed: %v", err)
}
if !bytes.Equal(buf, msg) {
t.Fatalf("expected %q, got %q", msg, buf)
}
}
func TestRelayMultipleMessages(t *testing.T) {
echo := startEchoServer(t)
defer echo.Close() //nolint:errcheck // test cleanup
proxyURL := fmt.Sprintf("socks5://%s", echo.Addr().String())
relay, err := NewRelay(proxyURL, false)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
t.Fatalf("failed to connect to relay: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
// Send multiple messages and verify each echoes back.
for i := 0; i < 10; i++ {
msg := []byte(fmt.Sprintf("message-%d", i))
if _, err := conn.Write(msg); err != nil {
t.Fatalf("write %d failed: %v", i, err)
}
buf := make([]byte, len(msg))
_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
if _, err := io.ReadFull(conn, buf); err != nil {
t.Fatalf("read %d failed: %v", i, err)
}
if !bytes.Equal(buf, msg) {
t.Fatalf("message %d: expected %q, got %q", i, msg, buf)
}
}
}
func TestRelayUpstreamConnectionFailure(t *testing.T) {
// Find a port that is not listening.
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatal(err)
}
deadPort := ln.Addr().(*net.TCPAddr).Port
_ = ln.Close() // close immediately so nothing is listening
proxyURL := fmt.Sprintf("socks5://127.0.0.1:%d", deadPort)
relay, err := NewRelay(proxyURL, true)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
// Connect to the relay. The relay should accept the connection but then
// fail to reach the upstream, causing the local side to be closed.
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
t.Fatalf("failed to connect to relay: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
// The relay should close the connection after failing upstream dial.
_ = conn.SetReadDeadline(time.Now().Add(2 * time.Second))
buf := make([]byte, 1)
_, readErr := conn.Read(buf)
if readErr == nil {
t.Fatal("expected read error (connection should be closed), got nil")
}
}
func TestRelayConcurrentConnections(t *testing.T) {
echo := startEchoServer(t)
defer echo.Close() //nolint:errcheck // test cleanup
proxyURL := fmt.Sprintf("socks5://%s", echo.Addr().String())
relay, err := NewRelay(proxyURL, false)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
const numConns = 50
var wg sync.WaitGroup
errors := make(chan error, numConns)
for i := 0; i < numConns; i++ {
wg.Add(1)
go func(idx int) {
defer wg.Done()
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
errors <- fmt.Errorf("conn %d: dial failed: %w", idx, err)
return
}
defer conn.Close() //nolint:errcheck // test cleanup
msg := []byte(fmt.Sprintf("concurrent-%d", idx))
if _, err := conn.Write(msg); err != nil {
errors <- fmt.Errorf("conn %d: write failed: %w", idx, err)
return
}
buf := make([]byte, len(msg))
_ = conn.SetReadDeadline(time.Now().Add(5 * time.Second))
if _, err := io.ReadFull(conn, buf); err != nil {
errors <- fmt.Errorf("conn %d: read failed: %w", idx, err)
return
}
if !bytes.Equal(buf, msg) {
errors <- fmt.Errorf("conn %d: expected %q, got %q", idx, msg, buf)
}
}(i)
}
wg.Wait()
close(errors)
for err := range errors {
t.Error(err)
}
}
func TestRelayMaxConnsLimit(t *testing.T) {
// Use a black hole server so connections stay open.
bh := startBlackHoleServer(t)
defer bh.Close() //nolint:errcheck // test cleanup
proxyURL := fmt.Sprintf("socks5://%s", bh.Addr().String())
relay, err := NewRelay(proxyURL, true)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
// Set a very low limit for testing.
relay.maxConns = 2
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
// The black hole server closes connections immediately, so handleConn
// finishes fast and the limit is never actually saturated here; an upstream
// that holds connections open would be needed to exercise rejection.
// This test only verifies the relay starts and stops cleanly with a low limit.
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
t.Fatalf("failed to connect: %v", err)
}
_ = conn.Close()
}
func TestRelayTCPHalfClose(t *testing.T) {
// Start a server that reads everything, then sends a response, then closes.
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to listen: %v", err)
}
defer ln.Close() //nolint:errcheck // test cleanup
response := []byte("server-response-after-client-close")
go func() {
conn, err := ln.Accept()
if err != nil {
return
}
defer conn.Close() //nolint:errcheck // test cleanup
// Read all data from client until EOF (client did CloseWrite).
data, err := io.ReadAll(conn)
if err != nil {
return
}
_ = data
// Now send a response back (the write direction is still open).
_, _ = conn.Write(response)
// Signal we're done writing.
if tc, ok := conn.(*net.TCPConn); ok {
_ = tc.CloseWrite()
}
}()
proxyURL := fmt.Sprintf("socks5://%s", ln.Addr().String())
relay, err := NewRelay(proxyURL, true)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
if err := relay.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer relay.Stop()
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", relay.Port()))
if err != nil {
t.Fatalf("failed to connect to relay: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
// Send data to the server.
clientMsg := []byte("client-data")
if _, err := conn.Write(clientMsg); err != nil {
t.Fatalf("write failed: %v", err)
}
// Half-close our write side; the server should now receive EOF and send its response.
tcpConn, ok := conn.(*net.TCPConn)
if !ok {
t.Fatal("expected *net.TCPConn")
}
if err := tcpConn.CloseWrite(); err != nil {
t.Fatalf("CloseWrite failed: %v", err)
}
// Read the server's response through the relay.
_ = conn.SetReadDeadline(time.Now().Add(3 * time.Second))
got, err := io.ReadAll(conn)
if err != nil {
t.Fatalf("ReadAll failed: %v", err)
}
if !bytes.Equal(got, response) {
t.Fatalf("expected %q, got %q", response, got)
}
}
func TestRelayPort(t *testing.T) {
echo := startEchoServer(t)
defer echo.Close() //nolint:errcheck // test cleanup
proxyURL := fmt.Sprintf("socks5://%s", echo.Addr().String())
relay, err := NewRelay(proxyURL, false)
if err != nil {
t.Fatalf("NewRelay failed: %v", err)
}
defer relay.Stop()
port := relay.Port()
if port <= 0 || port > 65535 {
t.Fatalf("invalid port: %d", port)
}
}
func TestNewRelayInvalidURL(t *testing.T) {
tests := []struct {
name string
proxyURL string
}{
{"missing port", "socks5://127.0.0.1"},
{"missing host", "socks5://:1080"},
{"empty", ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewRelay(tt.proxyURL, false)
if err == nil {
t.Fatal("expected error, got nil")
}
})
}
}

internal/daemon/server.go (new file)
@@ -0,0 +1,613 @@
package daemon
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"net"
"os"
"os/exec"
"os/user"
"strings"
"sync"
"syscall"
"time"
)
// Protocol types for JSON communication over Unix socket (newline-delimited).
// Request from CLI to daemon.
type Request struct {
Action string `json:"action"` // "create_session", "destroy_session", "status", "start_learning", "stop_learning"
ProxyURL string `json:"proxy_url,omitempty"` // for create_session
DNSAddr string `json:"dns_addr,omitempty"` // for create_session
SessionID string `json:"session_id,omitempty"` // for destroy_session
LearningID string `json:"learning_id,omitempty"` // for stop_learning
}
// Response from daemon to CLI.
type Response struct {
OK bool `json:"ok"`
Error string `json:"error,omitempty"`
SessionID string `json:"session_id,omitempty"`
TunDevice string `json:"tun_device,omitempty"`
SandboxUser string `json:"sandbox_user,omitempty"`
SandboxGroup string `json:"sandbox_group,omitempty"`
// Status response fields.
Running bool `json:"running,omitempty"`
ActiveSessions int `json:"active_sessions,omitempty"`
// Learning response fields.
LearningID string `json:"learning_id,omitempty"`
LearningLog string `json:"learning_log,omitempty"`
}
// Session tracks an active sandbox session.
type Session struct {
ID string
ProxyURL string
DNSAddr string
CreatedAt time.Time
}
// Server listens on a Unix socket and manages sandbox sessions. It orchestrates
// TunManager (utun + pf) and DNSRelay lifecycle for each session.
type Server struct {
socketPath string
listener net.Listener
tunManager *TunManager
dnsRelay *DNSRelay
sessions map[string]*Session
mu sync.Mutex
done chan struct{}
wg sync.WaitGroup
debug bool
tun2socksPath string
sandboxGID string // cached numeric GID for the sandbox group
// Learning mode state
esloggerCmd *exec.Cmd // running eslogger process
esloggerLogPath string // temp file path for eslogger output
esloggerDone chan error // receives result of cmd.Wait() (set once, reused for stop)
learningID string // current learning session ID
}
// NewServer creates a new daemon server that will listen on the given Unix socket path.
func NewServer(socketPath, tun2socksPath string, debug bool) *Server {
return &Server{
socketPath: socketPath,
tun2socksPath: tun2socksPath,
sessions: make(map[string]*Session),
done: make(chan struct{}),
debug: debug,
}
}
// Start begins listening on the Unix socket and accepting connections.
// It removes any stale socket file before binding.
func (s *Server) Start() error {
// Pre-resolve the sandbox group GID so session creation is fast
// and doesn't depend on OpenDirectory latency.
grp, err := user.LookupGroup(SandboxGroupName)
if err != nil {
Logf("Warning: could not resolve group %s at startup: %v (will retry per-session)", SandboxGroupName, err)
} else {
s.sandboxGID = grp.Gid
Logf("Resolved group %s → GID %s", SandboxGroupName, s.sandboxGID)
}
// Remove stale socket file if it exists.
if _, err := os.Stat(s.socketPath); err == nil {
s.logDebug("Removing stale socket file %s", s.socketPath)
if err := os.Remove(s.socketPath); err != nil {
return fmt.Errorf("failed to remove stale socket %s: %w", s.socketPath, err)
}
}
ln, err := net.Listen("unix", s.socketPath)
if err != nil {
return fmt.Errorf("failed to listen on %s: %w", s.socketPath, err)
}
s.listener = ln
// Set socket permissions so any local user can connect to the daemon.
// The socket is localhost-only (Unix domain socket); access control is
// handled at the daemon protocol level, not file permissions.
if err := os.Chmod(s.socketPath, 0o666); err != nil { //nolint:gosec // daemon socket needs 0666 so non-root CLI can connect
_ = ln.Close()
_ = os.Remove(s.socketPath)
return fmt.Errorf("failed to set socket permissions: %w", err)
}
s.logDebug("Listening on %s", s.socketPath)
s.wg.Add(1)
go s.acceptLoop()
return nil
}
// Stop gracefully shuts down the server. It stops accepting new connections,
// tears down all active sessions, and removes the socket file.
func (s *Server) Stop() error {
// Signal shutdown.
select {
case <-s.done:
// Already closed.
default:
close(s.done)
}
// Close the listener to unblock acceptLoop.
if s.listener != nil {
_ = s.listener.Close()
}
// Wait for the accept loop and any in-flight handlers to finish.
s.wg.Wait()
// Tear down all active sessions and learning.
s.mu.Lock()
var errs []string
// Stop learning session if active
if s.esloggerCmd != nil && s.esloggerCmd.Process != nil {
s.logDebug("Stopping eslogger during shutdown")
_ = s.esloggerCmd.Process.Kill()
if s.esloggerDone != nil {
<-s.esloggerDone
}
s.esloggerCmd = nil
s.esloggerDone = nil
s.learningID = ""
}
if s.esloggerLogPath != "" {
_ = os.Remove(s.esloggerLogPath)
s.esloggerLogPath = ""
}
for id := range s.sessions {
s.logDebug("Stopping session %s during shutdown", id)
}
if s.tunManager != nil {
if err := s.tunManager.Stop(); err != nil {
errs = append(errs, fmt.Sprintf("stop tun manager: %v", err))
}
s.tunManager = nil
}
if s.dnsRelay != nil {
s.dnsRelay.Stop()
s.dnsRelay = nil
}
s.sessions = make(map[string]*Session)
s.mu.Unlock()
// Remove the socket file.
if err := os.Remove(s.socketPath); err != nil && !os.IsNotExist(err) {
errs = append(errs, fmt.Sprintf("remove socket: %v", err))
}
if len(errs) > 0 {
return fmt.Errorf("stop errors: %s", join(errs, "; "))
}
s.logDebug("Server stopped")
return nil
}
// ActiveSessions returns the number of currently active sessions.
func (s *Server) ActiveSessions() int {
s.mu.Lock()
defer s.mu.Unlock()
return len(s.sessions)
}
// acceptLoop runs the main accept loop for the Unix socket listener.
func (s *Server) acceptLoop() {
defer s.wg.Done()
for {
conn, err := s.listener.Accept()
if err != nil {
select {
case <-s.done:
return
default:
}
s.logDebug("Accept error: %v", err)
continue
}
s.wg.Add(1)
go s.handleConnection(conn)
}
}
// handleConnection reads a single JSON request from the connection, dispatches
// it to the appropriate handler, and writes the JSON response back.
func (s *Server) handleConnection(conn net.Conn) {
defer s.wg.Done()
defer conn.Close() //nolint:errcheck // best-effort close after handling request
// Set a read deadline to prevent hung connections.
if err := conn.SetReadDeadline(time.Now().Add(30 * time.Second)); err != nil {
s.logDebug("Failed to set read deadline: %v", err)
return
}
decoder := json.NewDecoder(conn)
encoder := json.NewEncoder(conn)
var req Request
if err := decoder.Decode(&req); err != nil {
s.logDebug("Failed to decode request: %v", err)
resp := Response{OK: false, Error: fmt.Sprintf("invalid request: %v", err)}
_ = encoder.Encode(resp) // best-effort error response
return
}
Logf("Received request: action=%s", req.Action)
var resp Response
switch req.Action {
case "create_session":
resp = s.handleCreateSession(req)
case "destroy_session":
resp = s.handleDestroySession(req)
case "start_learning":
resp = s.handleStartLearning()
case "stop_learning":
resp = s.handleStopLearning(req)
case "status":
resp = s.handleStatus()
default:
resp = Response{OK: false, Error: fmt.Sprintf("unknown action: %q", req.Action)}
}
if err := encoder.Encode(resp); err != nil {
s.logDebug("Failed to encode response: %v", err)
}
}
// handleCreateSession creates a new sandbox session with a utun tunnel,
// optional DNS relay, and pf rules for the sandbox group.
func (s *Server) handleCreateSession(req Request) Response {
s.mu.Lock()
defer s.mu.Unlock()
if req.ProxyURL == "" {
return Response{OK: false, Error: "proxy_url is required"}
}
// Phase 1: only one session at a time.
if len(s.sessions) > 0 {
Logf("Rejecting create_session: %d session(s) already active", len(s.sessions))
return Response{OK: false, Error: "a session is already active (only one session supported in Phase 1)"}
}
Logf("Creating session: proxy=%s dns=%s", req.ProxyURL, req.DNSAddr)
// Step 1: Create and start TunManager.
tm := NewTunManager(s.tun2socksPath, req.ProxyURL, s.debug)
if err := tm.Start(); err != nil {
return Response{OK: false, Error: fmt.Sprintf("failed to start tunnel: %v", err)}
}
// Step 2: Create DNS relay if dns_addr is provided.
var dr *DNSRelay
if req.DNSAddr != "" {
var err error
dr, err = NewDNSRelay(dnsRelayIP+":"+dnsRelayPort, req.DNSAddr, s.debug)
if err != nil {
_ = tm.Stop() // best-effort cleanup
return Response{OK: false, Error: fmt.Sprintf("failed to create DNS relay: %v", err)}
}
if err := dr.Start(); err != nil {
_ = tm.Stop() // best-effort cleanup
return Response{OK: false, Error: fmt.Sprintf("failed to start DNS relay: %v", err)}
}
}
// Step 3: Resolve the sandbox group GID. pfctl in the LaunchDaemon
// context cannot resolve group names via OpenDirectory, so we use the
// cached GID (resolved at startup) or look it up now.
sandboxGID := s.sandboxGID
if sandboxGID == "" {
grp, err := user.LookupGroup(SandboxGroupName)
if err != nil {
_ = tm.Stop()
if dr != nil {
dr.Stop()
}
return Response{OK: false, Error: fmt.Sprintf("failed to resolve group %s: %v", SandboxGroupName, err)}
}
sandboxGID = grp.Gid
s.sandboxGID = sandboxGID
}
Logf("Loading pf rules for group %s (GID %s)", SandboxGroupName, sandboxGID)
if err := tm.LoadPFRules(sandboxGID); err != nil {
if dr != nil {
dr.Stop()
}
_ = tm.Stop() // best-effort cleanup
return Response{OK: false, Error: fmt.Sprintf("failed to load pf rules: %v", err)}
}
// Step 4: Generate session ID and store.
sessionID, err := generateSessionID()
if err != nil {
if dr != nil {
dr.Stop()
}
_ = tm.UnloadPFRules() // best-effort cleanup
_ = tm.Stop() // best-effort cleanup
return Response{OK: false, Error: fmt.Sprintf("failed to generate session ID: %v", err)}
}
session := &Session{
ID: sessionID,
ProxyURL: req.ProxyURL,
DNSAddr: req.DNSAddr,
CreatedAt: time.Now(),
}
s.sessions[sessionID] = session
s.tunManager = tm
s.dnsRelay = dr
Logf("Session created: id=%s device=%s group=%s(GID %s)", sessionID, tm.TunDevice(), SandboxGroupName, sandboxGID)
return Response{
OK: true,
SessionID: sessionID,
TunDevice: tm.TunDevice(),
SandboxUser: SandboxUserName,
SandboxGroup: SandboxGroupName,
}
}
// handleDestroySession tears down an existing session by unloading pf rules,
// stopping the tunnel, and stopping the DNS relay.
func (s *Server) handleDestroySession(req Request) Response {
s.mu.Lock()
defer s.mu.Unlock()
if req.SessionID == "" {
return Response{OK: false, Error: "session_id is required"}
}
Logf("Destroying session: id=%s", req.SessionID)
session, ok := s.sessions[req.SessionID]
if !ok {
Logf("Session %q not found (active sessions: %d)", req.SessionID, len(s.sessions))
return Response{OK: false, Error: fmt.Sprintf("session %q not found", req.SessionID)}
}
var errs []string
// Step 1: Unload pf rules.
if s.tunManager != nil {
if err := s.tunManager.UnloadPFRules(); err != nil {
errs = append(errs, fmt.Sprintf("unload pf rules: %v", err))
}
}
// Step 2: Stop tun manager.
if s.tunManager != nil {
if err := s.tunManager.Stop(); err != nil {
errs = append(errs, fmt.Sprintf("stop tun manager: %v", err))
}
s.tunManager = nil
}
// Step 3: Stop DNS relay.
if s.dnsRelay != nil {
s.dnsRelay.Stop()
s.dnsRelay = nil
}
// Step 4: Remove session.
delete(s.sessions, session.ID)
if len(errs) > 0 {
Logf("Session %s destroyed with errors: %v", session.ID, errs)
return Response{OK: false, Error: fmt.Sprintf("session destroyed with errors: %s", join(errs, "; "))}
}
Logf("Session destroyed: id=%s (remaining: %d)", session.ID, len(s.sessions))
return Response{OK: true}
}
// handleStartLearning starts an eslogger trace for learning mode.
// eslogger uses the Endpoint Security framework and reports real Unix PIDs
// via audit_token.pid, plus fork events for process tree tracking.
func (s *Server) handleStartLearning() Response {
s.mu.Lock()
defer s.mu.Unlock()
// Only one learning session at a time
if s.learningID != "" {
return Response{OK: false, Error: "a learning session is already active"}
}
// Create temp file for eslogger output.
// The daemon runs as root but the CLI reads this file as a normal user,
// so we must make it world-readable.
logFile, err := os.CreateTemp("", "greywall-eslogger-*.log")
if err != nil {
return Response{OK: false, Error: fmt.Sprintf("failed to create temp file: %v", err)}
}
logPath := logFile.Name()
if err := os.Chmod(logPath, 0o644); err != nil { //nolint:gosec // intentionally world-readable so non-root CLI can parse the log
_ = logFile.Close()
_ = os.Remove(logPath) //nolint:gosec // logPath from os.CreateTemp, not user input
return Response{OK: false, Error: fmt.Sprintf("failed to set log file permissions: %v", err)}
}
// Create a separate file for eslogger stderr so we can diagnose failures.
stderrFile, err := os.CreateTemp("", "greywall-eslogger-stderr-*.log")
if err != nil {
_ = logFile.Close()
_ = os.Remove(logPath) //nolint:gosec // logPath from os.CreateTemp, not user input
return Response{OK: false, Error: fmt.Sprintf("failed to create stderr file: %v", err)}
}
stderrPath := stderrFile.Name()
// Start eslogger with filesystem events + fork for process tree tracking.
// eslogger outputs one JSON object per line to stdout.
cmd := exec.Command("eslogger", "open", "create", "write", "unlink", "rename", "link", "truncate", "fork") //nolint:gosec // daemon-controlled command
cmd.Stdout = logFile
cmd.Stderr = stderrFile
if err := cmd.Start(); err != nil {
_ = logFile.Close()
_ = stderrFile.Close()
_ = os.Remove(logPath) //nolint:gosec // logPath from os.CreateTemp, not user input
_ = os.Remove(stderrPath) //nolint:gosec // stderrPath from os.CreateTemp, not user input
return Response{OK: false, Error: fmt.Sprintf("failed to start eslogger: %v", err)}
}
// Generate learning ID
learningID, err := generateSessionID()
if err != nil {
_ = cmd.Process.Kill()
_ = logFile.Close()
_ = stderrFile.Close()
_ = os.Remove(logPath) //nolint:gosec // logPath from os.CreateTemp, not user input
_ = os.Remove(stderrPath) //nolint:gosec // stderrPath from os.CreateTemp, not user input
return Response{OK: false, Error: fmt.Sprintf("failed to generate learning ID: %v", err)}
}
// Wait briefly for eslogger to initialize, then check if it exited early
// (e.g., missing Full Disk Access permission).
exitCh := make(chan error, 1)
go func() {
exitCh <- cmd.Wait()
}()
select {
case waitErr := <-exitCh:
// eslogger exited during startup — read stderr for the error message
_ = stderrFile.Close()
stderrContent, _ := os.ReadFile(stderrPath) //nolint:gosec // stderrPath from os.CreateTemp
_ = os.Remove(stderrPath) //nolint:gosec
_ = logFile.Close()
_ = os.Remove(logPath) //nolint:gosec
errMsg := strings.TrimSpace(string(stderrContent))
if errMsg == "" {
errMsg = fmt.Sprintf("eslogger exited: %v", waitErr)
}
if strings.Contains(errMsg, "Full Disk Access") {
errMsg += "\n\nGrant Full Disk Access to /usr/local/bin/greywall:\n" +
" System Settings → Privacy & Security → Full Disk Access → add /usr/local/bin/greywall\n" +
"Then reinstall the daemon: sudo greywall daemon uninstall -f && sudo greywall daemon install"
}
return Response{OK: false, Error: fmt.Sprintf("eslogger failed to start: %s", errMsg)}
case <-time.After(500 * time.Millisecond):
// eslogger is still running after 500ms — good, it initialized successfully
}
s.esloggerCmd = cmd
s.esloggerLogPath = logPath
s.esloggerDone = exitCh
s.learningID = learningID
// Clean up stderr file now that eslogger is running
_ = stderrFile.Close()
_ = os.Remove(stderrPath) //nolint:gosec
Logf("Learning session started: id=%s log=%s pid=%d", learningID, logPath, cmd.Process.Pid)
return Response{
OK: true,
LearningID: learningID,
LearningLog: logPath,
}
}
// handleStopLearning stops the eslogger trace for a learning session.
func (s *Server) handleStopLearning(req Request) Response {
s.mu.Lock()
defer s.mu.Unlock()
if req.LearningID == "" {
return Response{OK: false, Error: "learning_id is required"}
}
if s.learningID == "" || s.learningID != req.LearningID {
return Response{OK: false, Error: fmt.Sprintf("learning session %q not found", req.LearningID)}
}
if s.esloggerCmd != nil && s.esloggerCmd.Process != nil {
// Send SIGINT to eslogger for graceful shutdown (flushes buffers)
_ = s.esloggerCmd.Process.Signal(syscall.SIGINT)
// Reuse the wait channel from startup (cmd.Wait already called there)
if s.esloggerDone != nil {
select {
case <-s.esloggerDone:
// Exited cleanly
case <-time.After(5 * time.Second):
// Force kill after timeout
_ = s.esloggerCmd.Process.Kill()
<-s.esloggerDone
}
}
}
Logf("Learning session stopped: id=%s", s.learningID)
s.esloggerCmd = nil
s.esloggerDone = nil
s.learningID = ""
// Don't remove the log file — the CLI needs to read it
s.esloggerLogPath = ""
return Response{OK: true}
}
// handleStatus returns the current daemon status including whether it is running
// and how many sessions are active.
func (s *Server) handleStatus() Response {
s.mu.Lock()
defer s.mu.Unlock()
return Response{
OK: true,
Running: true,
ActiveSessions: len(s.sessions),
}
}
// generateSessionID produces a cryptographically random hex session identifier.
func generateSessionID() (string, error) {
b := make([]byte, 16)
if _, err := rand.Read(b); err != nil {
return "", fmt.Errorf("failed to read random bytes: %w", err)
}
return hex.EncodeToString(b), nil
}
// join concatenates parts with a separator. The strings package is already
// imported by this file (for TrimSpace/Contains), so delegate to strings.Join.
func join(parts []string, sep string) string {
return strings.Join(parts, sep)
}
// logDebug writes a timestamped debug message to stderr.
func (s *Server) logDebug(format string, args ...interface{}) {
if s.debug {
Logf(format, args...)
}
}


@@ -0,0 +1,527 @@
package daemon
import (
"encoding/json"
"net"
"os"
"path/filepath"
"testing"
"time"
)
// testSocketPath returns a temporary Unix socket path for testing.
// macOS limits Unix socket paths to 104 bytes, so we use a short temp directory
// under /tmp rather than the longer t.TempDir() paths.
func testSocketPath(t *testing.T) string {
t.Helper()
dir, err := os.MkdirTemp("/tmp", "gw-")
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
sockPath := filepath.Join(dir, "d.sock")
t.Cleanup(func() {
_ = os.RemoveAll(dir)
})
return sockPath
}
func TestServerStartStop(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
// Verify socket file exists.
info, err := os.Stat(sockPath)
if err != nil {
t.Fatalf("Socket file not found: %v", err)
}
// Verify socket permissions (0666 — any local user can connect).
perm := info.Mode().Perm()
if perm != 0o666 {
t.Errorf("Expected socket permissions 0666, got %o", perm)
}
// Verify no active sessions at start.
if n := srv.ActiveSessions(); n != 0 {
t.Errorf("Expected 0 active sessions, got %d", n)
}
if err := srv.Stop(); err != nil {
t.Fatalf("Stop failed: %v", err)
}
// Verify socket file is removed after stop.
if _, err := os.Stat(sockPath); !os.IsNotExist(err) {
t.Error("Socket file should be removed after stop")
}
}
func TestServerStartRemovesStaleSocket(t *testing.T) {
sockPath := testSocketPath(t)
// Create a stale socket file.
if err := os.WriteFile(sockPath, []byte("stale"), 0o600); err != nil {
t.Fatalf("Failed to create stale file: %v", err)
}
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed with stale socket: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Verify the server is listening by connecting.
conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
if err != nil {
t.Fatalf("Failed to connect to server: %v", err)
}
_ = conn.Close()
}
func TestServerDoubleStop(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", false)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
// First stop should succeed.
if err := srv.Stop(); err != nil {
t.Fatalf("First stop failed: %v", err)
}
// Second stop should not panic (socket already removed).
_ = srv.Stop()
}
func TestProtocolStatus(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Send a status request.
resp := sendTestRequest(t, sockPath, Request{Action: "status"})
if !resp.OK {
t.Fatalf("Expected OK=true, got error: %s", resp.Error)
}
if !resp.Running {
t.Error("Expected Running=true")
}
if resp.ActiveSessions != 0 {
t.Errorf("Expected 0 active sessions, got %d", resp.ActiveSessions)
}
}
func TestProtocolUnknownAction(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
resp := sendTestRequest(t, sockPath, Request{Action: "unknown_action"})
if resp.OK {
t.Fatal("Expected OK=false for unknown action")
}
if resp.Error == "" {
t.Error("Expected error message for unknown action")
}
}
func TestProtocolCreateSessionMissingProxy(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Create session without proxy_url should fail.
resp := sendTestRequest(t, sockPath, Request{
Action: "create_session",
})
if resp.OK {
t.Fatal("Expected OK=false for missing proxy URL")
}
if resp.Error == "" {
t.Error("Expected error message for missing proxy URL")
}
}
func TestProtocolCreateSessionTunFailure(t *testing.T) {
sockPath := testSocketPath(t)
// Use a nonexistent tun2socks path so TunManager.Start() will fail.
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Create session should fail because tun2socks binary does not exist.
resp := sendTestRequest(t, sockPath, Request{
Action: "create_session",
ProxyURL: "socks5://127.0.0.1:1080",
})
if resp.OK {
t.Fatal("Expected OK=false when tun2socks is not available")
}
if resp.Error == "" {
t.Error("Expected error message when tun2socks fails")
}
// Verify no session was created.
if srv.ActiveSessions() != 0 {
t.Error("Expected 0 active sessions after failed create")
}
}
func TestProtocolDestroySessionMissingID(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
resp := sendTestRequest(t, sockPath, Request{
Action: "destroy_session",
})
if resp.OK {
t.Fatal("Expected OK=false for missing session ID")
}
if resp.Error == "" {
t.Error("Expected error message for missing session ID")
}
}
func TestProtocolDestroySessionNotFound(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
resp := sendTestRequest(t, sockPath, Request{
Action: "destroy_session",
SessionID: "nonexistent-session-id",
})
if resp.OK {
t.Fatal("Expected OK=false for nonexistent session")
}
if resp.Error == "" {
t.Error("Expected error message for nonexistent session")
}
}
func TestProtocolInvalidJSON(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Send invalid JSON to the server.
conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
if err != nil {
t.Fatalf("Failed to connect: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
if _, err := conn.Write([]byte("not valid json\n")); err != nil {
t.Fatalf("Failed to write: %v", err)
}
// Read error response.
_ = conn.SetReadDeadline(time.Now().Add(5 * time.Second))
decoder := json.NewDecoder(conn)
var resp Response
if err := decoder.Decode(&resp); err != nil {
t.Fatalf("Failed to decode error response: %v", err)
}
if resp.OK {
t.Fatal("Expected OK=false for invalid JSON")
}
if resp.Error == "" {
t.Error("Expected error message for invalid JSON")
}
}
func TestClientIsRunning(t *testing.T) {
sockPath := testSocketPath(t)
client := NewClient(sockPath, true)
// Server not started yet.
if client.IsRunning() {
t.Error("Expected IsRunning=false when server is not started")
}
// Start the server.
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Now the client should detect the server.
if !client.IsRunning() {
t.Error("Expected IsRunning=true when server is running")
}
}
func TestClientStatus(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
client := NewClient(sockPath, true)
resp, err := client.Status()
if err != nil {
t.Fatalf("Status failed: %v", err)
}
if !resp.OK {
t.Fatalf("Expected OK=true, got error: %s", resp.Error)
}
if !resp.Running {
t.Error("Expected Running=true")
}
if resp.ActiveSessions != 0 {
t.Errorf("Expected 0 active sessions, got %d", resp.ActiveSessions)
}
}
func TestClientDestroySessionNotFound(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
client := NewClient(sockPath, true)
err := client.DestroySession("nonexistent-id")
if err == nil {
t.Fatal("Expected error for nonexistent session")
}
}
func TestClientConnectionRefused(t *testing.T) {
sockPath := testSocketPath(t)
// No server running.
client := NewClient(sockPath, true)
_, err := client.Status()
if err == nil {
t.Fatal("Expected error when server is not running")
}
_, err = client.CreateSession("socks5://127.0.0.1:1080", "")
if err == nil {
t.Fatal("Expected error when server is not running")
}
err = client.DestroySession("some-id")
if err == nil {
t.Fatal("Expected error when server is not running")
}
}
func TestProtocolMultipleStatusRequests(t *testing.T) {
sockPath := testSocketPath(t)
srv := NewServer(sockPath, "/nonexistent/tun2socks", true)
if err := srv.Start(); err != nil {
t.Fatalf("Start failed: %v", err)
}
defer srv.Stop() //nolint:errcheck // test cleanup
// Send multiple status requests sequentially (each on a new connection).
for i := 0; i < 5; i++ {
resp := sendTestRequest(t, sockPath, Request{Action: "status"})
if !resp.OK {
t.Fatalf("Request %d: expected OK=true, got error: %s", i, resp.Error)
}
}
}
func TestProtocolRequestResponseJSON(t *testing.T) {
// Test that protocol types serialize/deserialize correctly.
req := Request{
Action: "create_session",
ProxyURL: "socks5://127.0.0.1:1080",
DNSAddr: "1.1.1.1:53",
SessionID: "test-session",
}
data, err := json.Marshal(req)
if err != nil {
t.Fatalf("Failed to marshal request: %v", err)
}
var decoded Request
if err := json.Unmarshal(data, &decoded); err != nil {
t.Fatalf("Failed to unmarshal request: %v", err)
}
if decoded.Action != req.Action {
t.Errorf("Action: got %q, want %q", decoded.Action, req.Action)
}
if decoded.ProxyURL != req.ProxyURL {
t.Errorf("ProxyURL: got %q, want %q", decoded.ProxyURL, req.ProxyURL)
}
if decoded.DNSAddr != req.DNSAddr {
t.Errorf("DNSAddr: got %q, want %q", decoded.DNSAddr, req.DNSAddr)
}
if decoded.SessionID != req.SessionID {
t.Errorf("SessionID: got %q, want %q", decoded.SessionID, req.SessionID)
}
resp := Response{
OK: true,
SessionID: "abc123",
TunDevice: "utun7",
SandboxUser: "_greywall",
SandboxGroup: "_greywall",
Running: true,
ActiveSessions: 1,
}
data, err = json.Marshal(resp)
if err != nil {
t.Fatalf("Failed to marshal response: %v", err)
}
var decodedResp Response
if err := json.Unmarshal(data, &decodedResp); err != nil {
t.Fatalf("Failed to unmarshal response: %v", err)
}
if decodedResp.OK != resp.OK {
t.Errorf("OK: got %v, want %v", decodedResp.OK, resp.OK)
}
if decodedResp.SessionID != resp.SessionID {
t.Errorf("SessionID: got %q, want %q", decodedResp.SessionID, resp.SessionID)
}
if decodedResp.TunDevice != resp.TunDevice {
t.Errorf("TunDevice: got %q, want %q", decodedResp.TunDevice, resp.TunDevice)
}
if decodedResp.SandboxUser != resp.SandboxUser {
t.Errorf("SandboxUser: got %q, want %q", decodedResp.SandboxUser, resp.SandboxUser)
}
if decodedResp.SandboxGroup != resp.SandboxGroup {
t.Errorf("SandboxGroup: got %q, want %q", decodedResp.SandboxGroup, resp.SandboxGroup)
}
if decodedResp.Running != resp.Running {
t.Errorf("Running: got %v, want %v", decodedResp.Running, resp.Running)
}
if decodedResp.ActiveSessions != resp.ActiveSessions {
t.Errorf("ActiveSessions: got %d, want %d", decodedResp.ActiveSessions, resp.ActiveSessions)
}
}
func TestProtocolResponseOmitEmpty(t *testing.T) {
// Verify omitempty works: error-only response should not include session fields.
resp := Response{OK: false, Error: "something went wrong"}
data, err := json.Marshal(resp)
if err != nil {
t.Fatalf("Failed to marshal: %v", err)
}
var raw map[string]interface{}
if err := json.Unmarshal(data, &raw); err != nil {
t.Fatalf("Failed to unmarshal to map: %v", err)
}
// These fields should be omitted due to omitempty.
for _, key := range []string{"session_id", "tun_device", "sandbox_user", "sandbox_group"} {
if _, exists := raw[key]; exists {
t.Errorf("Expected %q to be omitted from JSON, but it was present", key)
}
}
// Error should be present.
if _, exists := raw["error"]; !exists {
t.Error("Expected 'error' field in JSON")
}
}
func TestGenerateSessionID(t *testing.T) {
// Verify session IDs are unique and properly formatted.
seen := make(map[string]bool)
for i := 0; i < 100; i++ {
id, err := generateSessionID()
if err != nil {
t.Fatalf("generateSessionID failed: %v", err)
}
if len(id) != 32 { // 16 bytes = 32 hex chars
t.Errorf("Expected 32-char hex ID, got %d chars: %q", len(id), id)
}
if seen[id] {
t.Errorf("Duplicate session ID: %s", id)
}
seen[id] = true
}
}
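The test pins down the contract: 16 random bytes, hex-encoded to 32 characters, unique across calls. A minimal generateSessionID consistent with that contract (a sketch; the real implementation is not shown in this diff) would be:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateSessionID returns a 32-character hex ID built from 16
// cryptographically random bytes, matching the test's expectations.
func generateSessionID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := generateSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(id)) // 32
}
```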
// sendTestRequest connects to the server, sends a JSON request, and returns
// the JSON response. This is a low-level helper that bypasses the Client
// to test the raw protocol.
func sendTestRequest(t *testing.T, sockPath string, req Request) Response {
t.Helper()
conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
if err != nil {
t.Fatalf("Failed to connect to server: %v", err)
}
defer conn.Close() //nolint:errcheck // test cleanup
_ = conn.SetDeadline(time.Now().Add(5 * time.Second))
encoder := json.NewEncoder(conn)
if err := encoder.Encode(req); err != nil {
t.Fatalf("Failed to encode request: %v", err)
}
decoder := json.NewDecoder(conn)
var resp Response
if err := decoder.Decode(&resp); err != nil {
t.Fatalf("Failed to decode response: %v", err)
}
return resp
}

570
internal/daemon/tun.go Normal file

@@ -0,0 +1,570 @@
//go:build darwin
package daemon
import (
"bufio"
"fmt"
"io"
"os"
"os/exec"
"regexp"
"strings"
"sync"
"time"
)
const (
tunIP = "198.18.0.1"
dnsRelayIP = "127.0.0.2"
dnsRelayPort = "15353" // high port to avoid conflicts with system DNS (mDNSResponder, Docker/Lima)
pfAnchorName = "co.greyhaven.greywall"
// tun2socksStopGracePeriod is the time to wait for tun2socks to exit
// after SIGTERM before sending SIGKILL.
tun2socksStopGracePeriod = 5 * time.Second
)
// utunDevicePattern matches "utunN" device names in tun2socks output or ifconfig.
var utunDevicePattern = regexp.MustCompile(`(utun\d+)`)
// TunManager handles utun device creation via tun2socks, tun2socks process
// lifecycle, and pf (packet filter) rule management for routing sandboxed
// traffic through the tunnel on macOS.
type TunManager struct {
tunDevice string // e.g., "utun7"
tun2socksPath string // path to tun2socks binary
tun2socksCmd *exec.Cmd // running tun2socks process
proxyURL string // SOCKS5 proxy URL for tun2socks
pfAnchor string // pf anchor name
debug bool
done chan struct{}
mu sync.Mutex
}
// NewTunManager creates a new TunManager that will use the given tun2socks
// binary and SOCKS5 proxy URL. The pf anchor is set to "co.greyhaven.greywall".
func NewTunManager(tun2socksPath string, proxyURL string, debug bool) *TunManager {
return &TunManager{
tun2socksPath: tun2socksPath,
proxyURL: proxyURL,
pfAnchor: pfAnchorName,
debug: debug,
done: make(chan struct{}),
}
}
// Start brings up the full tunnel stack:
// 1. Start tun2socks with "-device utun" (it auto-creates a utunN device)
// 2. Discover which utunN device was created
// 3. Configure the utun interface IP
// 4. Set up a loopback alias for the DNS relay
// 5. Load pf anchor rules (deferred until LoadPFRules is called explicitly)
func (t *TunManager) Start() error {
t.mu.Lock()
defer t.mu.Unlock()
if t.tun2socksCmd != nil {
return fmt.Errorf("tun manager already started")
}
// Step 1: Start tun2socks. It creates the utun device automatically.
if err := t.startTun2Socks(); err != nil {
return fmt.Errorf("failed to start tun2socks: %w", err)
}
// Step 2: Configure the utun interface with a point-to-point IP.
if err := t.configureInterface(); err != nil {
_ = t.stopTun2Socks()
return fmt.Errorf("failed to configure interface %s: %w", t.tunDevice, err)
}
// Step 3: Add a loopback alias for the DNS relay address.
if err := t.addLoopbackAlias(); err != nil {
_ = t.stopTun2Socks()
return fmt.Errorf("failed to add loopback alias: %w", err)
}
t.logDebug("Tunnel stack started: device=%s proxy=%s", t.tunDevice, t.proxyURL)
return nil
}
// Stop tears down the tunnel stack in reverse order:
// 1. Unload pf rules
// 2. Stop tun2socks (SIGTERM, then SIGKILL after grace period)
// 3. Remove loopback alias
// 4. The utun device is destroyed automatically when tun2socks exits
func (t *TunManager) Stop() error {
t.mu.Lock()
defer t.mu.Unlock()
var errs []string
// Signal the monitoring goroutine to stop.
select {
case <-t.done:
// Already closed.
default:
close(t.done)
}
// Step 1: Unload pf rules (best effort).
if err := t.unloadPFRulesLocked(); err != nil {
errs = append(errs, fmt.Sprintf("unload pf rules: %v", err))
}
// Step 2: Stop tun2socks.
if err := t.stopTun2Socks(); err != nil {
errs = append(errs, fmt.Sprintf("stop tun2socks: %v", err))
}
// Step 3: Remove loopback alias (best effort).
if err := t.removeLoopbackAlias(); err != nil {
errs = append(errs, fmt.Sprintf("remove loopback alias: %v", err))
}
if len(errs) > 0 {
return fmt.Errorf("stop errors: %s", strings.Join(errs, "; "))
}
t.logDebug("Tunnel stack stopped")
return nil
}
// TunDevice returns the name of the utun device (e.g., "utun7").
// Returns an empty string if the tunnel has not been started.
func (t *TunManager) TunDevice() string {
t.mu.Lock()
defer t.mu.Unlock()
return t.tunDevice
}
// LoadPFRules loads pf anchor rules that route traffic from the given sandbox
// group through the utun device. The rules:
// - Route all TCP from the sandbox group through the utun interface
// - Redirect DNS (UDP port 53) from the sandbox group to the local DNS relay
//
// This requires root privileges and an active pf firewall.
func (t *TunManager) LoadPFRules(sandboxGroup string) error {
t.mu.Lock()
defer t.mu.Unlock()
if t.tunDevice == "" {
return fmt.Errorf("tunnel not started, no device available")
}
// Ensure the anchor reference exists in the main pf.conf.
if err := t.ensureAnchorInPFConf(); err != nil {
return fmt.Errorf("failed to ensure pf anchor: %w", err)
}
// Build the anchor rules. pf requires strict ordering:
// translation (rdr) before filtering (pass).
// Note: macOS pf does not support "group" in rdr rules, so DNS
// redirection uses a two-step approach:
// 1. rdr on lo0 — redirects DNS arriving on loopback to our relay
// 2. pass out route-to lo0 — sends sandbox group's DNS to loopback
// 3. pass out route-to utun — sends sandbox group's TCP through tunnel
rules := fmt.Sprintf(
"rdr on lo0 proto udp from any to any port 53 -> %s port %s\n"+
"pass out on !lo0 route-to (lo0 127.0.0.1) proto udp from any to any port 53 group %s\n"+
"pass out route-to (%s %s) proto tcp from any to any group %s\n",
dnsRelayIP, dnsRelayPort,
sandboxGroup,
t.tunDevice, tunIP, sandboxGroup,
)
t.logDebug("Loading pf rules into anchor %s:\n%s", t.pfAnchor, rules)
// Load the rules into the anchor.
//nolint:gosec // arguments are controlled internal constants, not user input
cmd := exec.Command("pfctl", "-a", t.pfAnchor, "-f", "-")
cmd.Stdin = strings.NewReader(rules)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("pfctl load rules failed: %w (output: %s)", err, string(output))
}
// Enable pf if it is not already enabled.
if err := t.enablePF(); err != nil {
// Non-fatal: pf may already be enabled.
t.logDebug("Warning: failed to enable pf (may already be active): %v", err)
}
t.logDebug("pf rules loaded for group %s on %s", sandboxGroup, t.tunDevice)
return nil
}
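To make the rule ordering concrete, rendering the same format string with example values (utun7, _greywall, and the DNS relay constants are illustrative here, not live daemon state) produces the anchor's rule text:

```go
package main

import "fmt"

// renderPFRules mirrors the format string used in LoadPFRules; every
// argument below is an example value, not real daemon state.
func renderPFRules(tunDevice, tunIP, group, dnsRelayIP, dnsRelayPort string) string {
	return fmt.Sprintf(
		"rdr on lo0 proto udp from any to any port 53 -> %s port %s\n"+
			"pass out on !lo0 route-to (lo0 127.0.0.1) proto udp from any to any port 53 group %s\n"+
			"pass out route-to (%s %s) proto tcp from any to any group %s\n",
		dnsRelayIP, dnsRelayPort,
		group,
		tunDevice, tunIP, group,
	)
}

func main() {
	// rdr (translation) comes first, then the two pass rules, matching
	// pf's required ordering of translation before filtering.
	fmt.Print(renderPFRules("utun7", "198.18.0.1", "_greywall", "127.0.0.2", "15353"))
}
```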
// UnloadPFRules removes the pf rules from the anchor.
func (t *TunManager) UnloadPFRules() error {
t.mu.Lock()
defer t.mu.Unlock()
return t.unloadPFRulesLocked()
}
// startTun2Socks launches the tun2socks process with "-device utun" so that it
// auto-creates a utun device. The device name is discovered by scanning tun2socks
// stderr output for the utunN identifier.
func (t *TunManager) startTun2Socks() error {
//nolint:gosec // tun2socksPath is an internal path, not user input
cmd := exec.Command(t.tun2socksPath, "-device", "utun", "-proxy", t.proxyURL)
// Capture both stdout and stderr to discover the device name.
// tun2socks may log the device name on either stream depending on version.
stderrPipe, err := cmd.StderrPipe()
if err != nil {
return fmt.Errorf("failed to create stderr pipe: %w", err)
}
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
return fmt.Errorf("failed to create stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return fmt.Errorf("failed to start tun2socks: %w", err)
}
t.tun2socksCmd = cmd
// Read both stdout and stderr to discover the utun device name.
// tun2socks logs the device name shortly after startup
// (e.g., "level=INFO msg=[STACK] tun://utun7 <-> ...").
deviceCh := make(chan string, 2) // buffered for both goroutines
stderrLines := make(chan string, 100)
// scanPipe scans lines from a pipe, looking for the utun device name.
scanPipe := func(pipe io.Reader, label string) {
scanner := bufio.NewScanner(pipe)
for scanner.Scan() {
line := scanner.Text()
fmt.Fprintf(os.Stderr, "[greywall:tun] tun2socks(%s): %s\n", label, line) //nolint:gosec // logging tun2socks output
if match := utunDevicePattern.FindString(line); match != "" {
select {
case deviceCh <- match:
default:
// Already found by the other pipe.
}
}
select {
case stderrLines <- line:
default:
}
}
}
go scanPipe(stderrPipe, "stderr")
go scanPipe(stdoutPipe, "stdout")
// Wait for the device name with a timeout.
select {
case device := <-deviceCh:
if device == "" {
t.logDebug("Empty device from tun2socks output, trying ifconfig")
device, err = t.discoverUtunFromIfconfig()
if err != nil {
_ = cmd.Process.Kill()
return fmt.Errorf("failed to discover utun device: %w", err)
}
}
t.tunDevice = device
case <-time.After(10 * time.Second):
// Timeout: try ifconfig fallback.
t.logDebug("Timeout waiting for tun2socks device name, trying ifconfig")
device, err := t.discoverUtunFromIfconfig()
if err != nil {
_ = cmd.Process.Kill()
return fmt.Errorf("tun2socks did not report device name within timeout: %w", err)
}
t.tunDevice = device
}
t.logDebug("tun2socks started (pid=%d, device=%s)", cmd.Process.Pid, t.tunDevice)
// Monitor tun2socks in the background.
go t.monitorTun2Socks(stderrLines)
return nil
}
// discoverUtunFromIfconfig runs ifconfig and looks for a utun device. This is
// used as a fallback when we cannot parse the device name from tun2socks output.
func (t *TunManager) discoverUtunFromIfconfig() (string, error) {
out, err := exec.Command("ifconfig").Output()
if err != nil {
return "", fmt.Errorf("ifconfig failed: %w", err)
}
// Look for utun interfaces. We scan for lines starting with "utunN:"
// and return the highest-numbered one (most recently created).
ifPattern := regexp.MustCompile(`^(utun\d+):`)
var lastDevice string
for _, line := range strings.Split(string(out), "\n") {
if m := ifPattern.FindStringSubmatch(line); m != nil {
lastDevice = m[1]
}
}
if lastDevice == "" {
return "", fmt.Errorf("no utun device found in ifconfig output")
}
return lastDevice, nil
}
// monitorTun2Socks watches the tun2socks process and logs if it exits unexpectedly.
func (t *TunManager) monitorTun2Socks(stderrLines <-chan string) {
if t.tun2socksCmd == nil || t.tun2socksCmd.Process == nil {
return
}
// Drain any remaining stderr lines.
go func() {
for range stderrLines {
// Already logged in the scanner goroutine when debug is on.
}
}()
err := t.tun2socksCmd.Wait()
select {
case <-t.done:
// Expected shutdown.
t.logDebug("tun2socks exited (expected shutdown)")
default:
// Unexpected exit.
fmt.Fprintf(os.Stderr, "[greywall:tun] ERROR: tun2socks exited unexpectedly: %v\n", err)
}
}
// stopTun2Socks sends SIGTERM to the tun2socks process and waits for it to exit.
// If it does not exit within the grace period, SIGKILL is sent.
func (t *TunManager) stopTun2Socks() error {
if t.tun2socksCmd == nil || t.tun2socksCmd.Process == nil {
return nil
}
t.logDebug("Stopping tun2socks (pid=%d)", t.tun2socksCmd.Process.Pid)
// Send SIGTERM.
if err := t.tun2socksCmd.Process.Signal(os.Interrupt); err != nil {
// Process may have already exited.
t.logDebug("SIGTERM failed (process may have exited): %v", err)
t.tun2socksCmd = nil
return nil
}
// Wait for exit with a timeout.
exited := make(chan error, 1)
go func() {
// The monitor goroutine may already be in Wait; in that case this
// second Wait returns an error immediately, which we only log.
exited <- t.tun2socksCmd.Wait()
}()
select {
case err := <-exited:
if err != nil {
t.logDebug("tun2socks exited with: %v", err)
}
case <-time.After(tun2socksStopGracePeriod):
t.logDebug("tun2socks did not exit after SIGTERM, sending SIGKILL")
_ = t.tun2socksCmd.Process.Kill()
}
t.tun2socksCmd = nil
return nil
}
// configureInterface sets up the utun interface with a point-to-point IP address.
func (t *TunManager) configureInterface() error {
t.logDebug("Configuring interface %s with IP %s", t.tunDevice, tunIP)
//nolint:gosec // tunDevice and tunIP are controlled internal values
cmd := exec.Command("ifconfig", t.tunDevice, tunIP, tunIP, "up")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("ifconfig %s failed: %w (output: %s)", t.tunDevice, err, string(output))
}
return nil
}
// addLoopbackAlias adds an alias IP on lo0 for the DNS relay.
func (t *TunManager) addLoopbackAlias() error {
t.logDebug("Adding loopback alias %s on lo0", dnsRelayIP)
cmd := exec.Command("ifconfig", "lo0", "alias", dnsRelayIP, "up")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("ifconfig lo0 alias failed: %w (output: %s)", err, string(output))
}
return nil
}
// removeLoopbackAlias removes the DNS relay alias from lo0.
func (t *TunManager) removeLoopbackAlias() error {
t.logDebug("Removing loopback alias %s from lo0", dnsRelayIP)
cmd := exec.Command("ifconfig", "lo0", "-alias", dnsRelayIP)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("ifconfig lo0 -alias failed: %w (output: %s)", err, string(output))
}
return nil
}
// ensureAnchorInPFConf checks whether the pf anchor reference exists in
// /etc/pf.conf. If not, it inserts the anchor lines at the correct positions
// (pf requires strict ordering: rdr-anchor before anchor, both before load anchor)
// and reloads the main ruleset.
func (t *TunManager) ensureAnchorInPFConf() error {
const pfConfPath = "/etc/pf.conf"
anchorLine := fmt.Sprintf(`anchor "%s"`, t.pfAnchor)
rdrAnchorLine := fmt.Sprintf(`rdr-anchor "%s"`, t.pfAnchor)
data, err := os.ReadFile(pfConfPath)
if err != nil {
return fmt.Errorf("failed to read %s: %w", pfConfPath, err)
}
lines := strings.Split(string(data), "\n")
// Line-level presence check avoids substring false positives
// (e.g. 'anchor "X"' matching inside 'rdr-anchor "X"').
hasAnchor := false
hasRdrAnchor := false
lastRdrIdx := -1
lastAnchorIdx := -1
for i, line := range lines {
trimmed := strings.TrimSpace(line)
if trimmed == rdrAnchorLine {
hasRdrAnchor = true
}
if trimmed == anchorLine {
hasAnchor = true
}
if strings.HasPrefix(trimmed, "rdr-anchor ") {
lastRdrIdx = i
}
// Standalone "anchor" lines — not rdr-anchor, nat-anchor, etc.
if strings.HasPrefix(trimmed, "anchor ") {
lastAnchorIdx = i
}
}
if hasAnchor && hasRdrAnchor {
t.logDebug("pf anchor already present in %s", pfConfPath)
return nil
}
t.logDebug("Adding pf anchor to %s", pfConfPath)
// Insert each missing line immediately after the last existing line of
// its kind, building a new slice so indices stay stable.
var result []string
for i, line := range lines {
result = append(result, line)
if !hasRdrAnchor && i == lastRdrIdx {
result = append(result, rdrAnchorLine)
}
if !hasAnchor && i == lastAnchorIdx {
result = append(result, anchorLine)
}
}
// Fallback: if no existing rdr-anchor/anchor found, append at end.
if !hasRdrAnchor && lastRdrIdx == -1 {
result = append(result, rdrAnchorLine)
}
if !hasAnchor && lastAnchorIdx == -1 {
result = append(result, anchorLine)
}
newContent := strings.Join(result, "\n")
//nolint:gosec // pf.conf must be writable by root; the daemon runs as root
if err := os.WriteFile(pfConfPath, []byte(newContent), 0o644); err != nil {
return fmt.Errorf("failed to write %s: %w", pfConfPath, err)
}
// Reload the main pf.conf so the anchor reference is recognized.
//nolint:gosec // pfConfPath is a constant
reloadCmd := exec.Command("pfctl", "-f", pfConfPath)
if output, err := reloadCmd.CombinedOutput(); err != nil {
return fmt.Errorf("pfctl reload failed: %w (output: %s)", err, string(output))
}
t.logDebug("pf anchor added and pf.conf reloaded")
return nil
}
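The insertion strategy above can be exercised on an in-memory pf.conf. This sketch reimplements just the placement logic (the sample conf mimics the stock macOS pf.conf layout; it is illustrative, not a copy):

```go
package main

import (
	"fmt"
	"strings"
)

// insertAnchorLines mirrors ensureAnchorInPFConf's placement: each new
// line goes right after the last existing line of its kind, so the
// rdr-anchor lands among rdr-anchors and the anchor among anchors.
func insertAnchorLines(conf, rdrAnchorLine, anchorLine string) string {
	lines := strings.Split(conf, "\n")
	lastRdr, lastAnchor := -1, -1
	for i, line := range lines {
		t := strings.TrimSpace(line)
		if strings.HasPrefix(t, "rdr-anchor ") {
			lastRdr = i
		}
		// "anchor " does not match "rdr-anchor"/"nat-anchor" prefixes.
		if strings.HasPrefix(t, "anchor ") {
			lastAnchor = i
		}
	}
	var out []string
	for i, line := range lines {
		out = append(out, line)
		if i == lastRdr {
			out = append(out, rdrAnchorLine)
		}
		if i == lastAnchor {
			out = append(out, anchorLine)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	conf := "nat-anchor \"com.apple/*\"\nrdr-anchor \"com.apple/*\"\nanchor \"com.apple/*\"\nload anchor \"com.apple\" from \"/etc/pf.anchors/com.apple\""
	fmt.Println(insertAnchorLines(conf,
		`rdr-anchor "co.greyhaven.greywall"`,
		`anchor "co.greyhaven.greywall"`))
}
```

The result keeps pf's required ordering: the new rdr-anchor precedes the new anchor, and both precede the load anchor line.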
// enablePF enables the pf firewall if it is not already active.
func (t *TunManager) enablePF() error {
// Check current pf status.
out, err := exec.Command("pfctl", "-s", "info").CombinedOutput()
if err == nil && strings.Contains(string(out), "Status: Enabled") {
t.logDebug("pf is already enabled")
return nil
}
t.logDebug("Enabling pf")
cmd := exec.Command("pfctl", "-e")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("pfctl -e failed: %w (output: %s)", err, string(output))
}
return nil
}
// unloadPFRulesLocked flushes all rules from the pf anchor. Must be called
// with t.mu held.
func (t *TunManager) unloadPFRulesLocked() error {
t.logDebug("Flushing pf anchor %s", t.pfAnchor)
//nolint:gosec // pfAnchor is a controlled internal constant
cmd := exec.Command("pfctl", "-a", t.pfAnchor, "-F", "all")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("pfctl flush anchor failed: %w (output: %s)", err, string(output))
}
return nil
}
// removeAnchorFromPFConf removes greywall anchor lines from /etc/pf.conf.
// Called during uninstall to clean up.
func removeAnchorFromPFConf(debug bool) error {
const pfConfPath = "/etc/pf.conf"
anchorLine := fmt.Sprintf(`anchor "%s"`, pfAnchorName)
rdrAnchorLine := fmt.Sprintf(`rdr-anchor "%s"`, pfAnchorName)
data, err := os.ReadFile(pfConfPath)
if err != nil {
return fmt.Errorf("failed to read %s: %w", pfConfPath, err)
}
lines := strings.Split(string(data), "\n")
var filtered []string
removed := 0
for _, line := range lines {
trimmed := strings.TrimSpace(line)
if trimmed == anchorLine || trimmed == rdrAnchorLine {
removed++
continue
}
filtered = append(filtered, line)
}
if removed == 0 {
logDebug(debug, "No pf anchor lines to remove from %s", pfConfPath)
return nil
}
//nolint:gosec // pf.conf must be writable by root; the daemon runs as root
if err := os.WriteFile(pfConfPath, []byte(strings.Join(filtered, "\n")), 0o644); err != nil {
return fmt.Errorf("failed to write %s: %w", pfConfPath, err)
}
logDebug(debug, "Removed %d pf anchor lines from %s", removed, pfConfPath)
return nil
}
// logDebug writes a debug message to stderr with the [greywall:tun] prefix.
func (t *TunManager) logDebug(format string, args ...interface{}) {
if t.debug {
fmt.Fprintf(os.Stderr, "[greywall:tun] "+format+"\n", args...)
}
}


@@ -0,0 +1,38 @@
//go:build !darwin
package daemon
import "fmt"
// TunManager is a stub for non-macOS platforms.
type TunManager struct{}
// NewTunManager returns a stub TunManager on non-macOS platforms; all of
// its operations return errors.
func NewTunManager(tun2socksPath string, proxyURL string, debug bool) *TunManager {
return &TunManager{}
}
// Start returns an error on non-macOS platforms.
func (t *TunManager) Start() error {
return fmt.Errorf("tun manager is only available on macOS")
}
// Stop returns an error on non-macOS platforms.
func (t *TunManager) Stop() error {
return fmt.Errorf("tun manager is only available on macOS")
}
// TunDevice returns an empty string on non-macOS platforms.
func (t *TunManager) TunDevice() string {
return ""
}
// LoadPFRules returns an error on non-macOS platforms.
func (t *TunManager) LoadPFRules(sandboxGroup string) error {
return fmt.Errorf("pf rules are only available on macOS")
}
// UnloadPFRules returns an error on non-macOS platforms.
func (t *TunManager) UnloadPFRules() error {
return fmt.Errorf("pf rules are only available on macOS")
}


@@ -333,7 +333,7 @@ func execBenchCommand(b *testing.B, command string, workDir string) {
shell = "/bin/bash"
}
-cmd := exec.CommandContext(ctx, shell, "-c", command)
cmd := exec.CommandContext(ctx, shell, "-c", command) //nolint:gosec // test helper running shell commands
cmd.Dir = workDir
cmd.Stdout = &bytes.Buffer{}
cmd.Stderr = &bytes.Buffer{}


@@ -28,6 +28,30 @@ var DangerousDirectories = []string{
".claude/agents",
}
// SensitiveProjectFiles lists files within the project directory that should be
// denied for both read and write access. These commonly contain secrets.
var SensitiveProjectFiles = []string{
".env",
".env.local",
".env.development",
".env.production",
".env.staging",
".env.test",
}
// GetSensitiveProjectPaths returns concrete paths for sensitive files within the
// given directory. Only returns paths for files that actually exist.
func GetSensitiveProjectPaths(cwd string) []string {
var paths []string
for _, f := range SensitiveProjectFiles {
p := filepath.Join(cwd, f)
if _, err := os.Stat(p); err == nil {
paths = append(paths, p)
}
}
return paths
}
// GetDefaultWritePaths returns system paths that should be writable for commands to work.
func GetDefaultWritePaths() []string {
home, _ := os.UserHomeDir()


@@ -123,13 +123,17 @@ func assertContains(t *testing.T, haystack, needle string) {
// ============================================================================
// testConfig creates a test configuration with sensible defaults.
// Uses legacy mode (defaultDenyRead=false) for predictable testing of
// existing integration tests. Use testConfigDenyByDefault() for tests
// that specifically test deny-by-default behavior.
func testConfig() *config.Config {
return &config.Config{
Network: config.NetworkConfig{},
Filesystem: config.FilesystemConfig{
-DenyRead: []string{},
-AllowWrite: []string{},
-DenyWrite: []string{},
DefaultDenyRead: boolPtr(false), // Legacy mode for existing tests
DenyRead: []string{},
AllowWrite: []string{},
DenyWrite: []string{},
},
Command: config.CommandConfig{
Deny: []string{},
@@ -241,7 +245,7 @@ func executeShellCommandWithTimeout(t *testing.T, command string, workDir string
shell = "/bin/bash"
}
-cmd := exec.CommandContext(ctx, shell, "-c", command)
cmd := exec.CommandContext(ctx, shell, "-c", command) //nolint:gosec // test helper running shell commands
cmd.Dir = workDir
var stdout, stderr bytes.Buffer


@@ -10,6 +10,13 @@ import (
"strings"
)
// TraceResult holds parsed read and write paths from a system trace log
// (strace on Linux, eslogger on macOS).
type TraceResult struct {
WritePaths []string
ReadPaths []string
}
// wellKnownParents are directories under $HOME where applications typically
// create their own subdirectory (e.g., ~/.cache/opencode, ~/.config/opencode).
var wellKnownParents = []string{
@@ -52,14 +59,9 @@ func SanitizeTemplateName(name string) string {
return sanitized
}
-// GenerateLearnedTemplate parses an strace log, collapses paths, and saves a template.
// GenerateLearnedTemplate takes a parsed trace result, collapses paths, and saves a template.
// Returns the path where the template was saved.
-func GenerateLearnedTemplate(straceLogPath, cmdName string, debug bool) (string, error) {
-result, err := ParseStraceLog(straceLogPath, debug)
-if err != nil {
-return "", fmt.Errorf("failed to parse strace log: %w", err)
-}
func GenerateLearnedTemplate(result *TraceResult, cmdName string, debug bool) (string, error) {
home, _ := os.UserHomeDir()
// Filter write paths: remove default writable and sensitive paths
@@ -87,6 +89,43 @@ func GenerateLearnedTemplate(straceLogPath, cmdName string, debug bool) (string,
allowWrite = append(allowWrite, toTildePath(p, home))
}
// Filter read paths: remove system defaults, CWD subtree, and sensitive paths
cwd, _ := os.Getwd()
var filteredReads []string
defaultReadable := GetDefaultReadablePaths()
for _, p := range result.ReadPaths {
// Skip system defaults
isDefault := false
for _, dp := range defaultReadable {
if p == dp || strings.HasPrefix(p, dp+"/") {
isDefault = true
break
}
}
if isDefault {
continue
}
// Skip CWD subtree (auto-included)
if cwd != "" && (p == cwd || strings.HasPrefix(p, cwd+"/")) {
continue
}
// Skip sensitive paths
if isSensitivePath(p, home) {
if debug {
fmt.Fprintf(os.Stderr, "[greywall] Skipping sensitive read path: %s\n", p)
}
continue
}
filteredReads = append(filteredReads, p)
}
// Collapse read paths and convert to tilde-relative
collapsedReads := CollapsePaths(filteredReads)
var allowRead []string
for _, p := range collapsedReads {
allowRead = append(allowRead, toTildePath(p, home))
}
// Convert read paths to tilde-relative for display
var readDisplay []string
for _, p := range result.ReadPaths {
@@ -103,6 +142,13 @@ func GenerateLearnedTemplate(straceLogPath, cmdName string, debug bool) (string,
}
}
if len(allowRead) > 0 {
fmt.Fprintf(os.Stderr, "[greywall] Additional read paths (beyond system + CWD):\n")
for _, p := range allowRead {
fmt.Fprintf(os.Stderr, "[greywall] %s\n", p)
}
}
if len(allowWrite) > 1 { // >1 because "." is always included
fmt.Fprintf(os.Stderr, "[greywall] Discovered write paths (collapsed):\n")
for _, p := range allowWrite {
@@ -118,15 +164,15 @@ func GenerateLearnedTemplate(straceLogPath, cmdName string, debug bool) (string,
fmt.Fprintf(os.Stderr, "\n")
// Build template
-template := buildTemplate(cmdName, allowWrite)
template := buildTemplate(cmdName, allowRead, allowWrite)
// Save template
templatePath := LearnedTemplatePath(cmdName)
-if err := os.MkdirAll(filepath.Dir(templatePath), 0o755); err != nil {
if err := os.MkdirAll(filepath.Dir(templatePath), 0o750); err != nil {
return "", fmt.Errorf("failed to create template directory: %w", err)
}
-if err := os.WriteFile(templatePath, []byte(template), 0o644); err != nil {
if err := os.WriteFile(templatePath, []byte(template), 0o600); err != nil {
return "", fmt.Errorf("failed to write template: %w", err)
}
@@ -176,13 +222,20 @@ func CollapsePaths(paths []string) []string {
}
}
-// For standalone paths, use their parent directory
// For standalone paths, use their parent directory — but never collapse to $HOME
for _, p := range standalone {
-result = append(result, filepath.Dir(p))
parent := filepath.Dir(p)
if parent == home {
// Keep exact file path to avoid opening entire home directory
result = append(result, p)
} else {
result = append(result, parent)
}
}
-// Sort and deduplicate (remove sub-paths of other paths)
// Sort, remove exact duplicates, then remove sub-paths of other paths
sort.Strings(result)
result = removeDuplicates(result)
result = deduplicateSubPaths(result)
return result
@@ -255,11 +308,7 @@ func isSensitivePath(path, home string) bool {
// Check GPG
gnupgDir := filepath.Join(home, ".gnupg")
-if strings.HasPrefix(path, gnupgDir+"/") {
-return true
-}
-return false
return strings.HasPrefix(path, gnupgDir+"/")
}
// getDangerousFilePatterns returns denyWrite entries for DangerousFiles.
@@ -318,6 +367,20 @@ func ListLearnedTemplates() ([]LearnedTemplateInfo, error) {
return templates, nil
}
// removeDuplicates removes exact duplicate strings from a sorted slice.
func removeDuplicates(paths []string) []string {
if len(paths) <= 1 {
return paths
}
result := []string{paths[0]}
for i := 1; i < len(paths); i++ {
if paths[i] != paths[i-1] {
result = append(result, paths[i])
}
}
return result
}
// deduplicateSubPaths removes paths that are sub-paths of other paths in the list.
// Assumes the input is sorted.
func deduplicateSubPaths(paths []string) []string {
@@ -345,9 +408,18 @@ func deduplicateSubPaths(paths []string) []string {
return result
}
// getSensitiveProjectDenyPatterns returns denyRead entries for sensitive project files.
func getSensitiveProjectDenyPatterns() []string {
return []string{
".env",
".env.*",
}
}
// buildTemplate generates the JSONC template content for a learned config.
-func buildTemplate(cmdName string, allowWrite []string) string {
func buildTemplate(cmdName string, allowRead, allowWrite []string) string {
type fsConfig struct {
AllowRead []string `json:"allowRead,omitempty"`
AllowWrite []string `json:"allowWrite"`
DenyWrite []string `json:"denyWrite"`
DenyRead []string `json:"denyRead"`
@@ -356,19 +428,23 @@ func buildTemplate(cmdName string, allowWrite []string) string {
Filesystem fsConfig `json:"filesystem"`
}
// Combine sensitive read patterns with .env project patterns
denyRead := append(getSensitiveReadPatterns(), getSensitiveProjectDenyPatterns()...)
cfg := templateConfig{
Filesystem: fsConfig{
AllowRead: allowRead,
AllowWrite: allowWrite,
DenyWrite: getDangerousFilePatterns(),
-DenyRead: getSensitiveReadPatterns(),
DenyRead: denyRead,
},
}
data, _ := json.MarshalIndent(cfg, "", " ")
var sb strings.Builder
-sb.WriteString(fmt.Sprintf("// Learned template for %q\n", cmdName))
-sb.WriteString(fmt.Sprintf("// Generated by: greywall --learning -- %s\n", cmdName))
fmt.Fprintf(&sb, "// Learned template for %q\n", cmdName)
fmt.Fprintf(&sb, "// Generated by: greywall --learning -- %s\n", cmdName)
sb.WriteString("// Review and adjust paths as needed\n")
sb.Write(data)
sb.WriteString("\n")


@@ -0,0 +1,459 @@
//go:build darwin
package sandbox
import (
"bufio"
"encoding/json"
"fmt"
"os"
"strings"
"gitea.app.monadical.io/monadical/greywall/internal/daemon"
)
// opClass classifies a filesystem operation.
type opClass int
const (
opSkip opClass = iota
opRead
opWrite
)
// fwriteFlag is the macOS FWRITE flag value (O_WRONLY or O_RDWR includes this).
const fwriteFlag = 0x0002
// eslogger JSON types — mirrors the real Endpoint Security framework output.
// eslogger emits one JSON object per line to stdout.
//
// Key structural details from real eslogger output:
// - event_type is an integer (e.g., 10=open, 11=fork, 13=create, 32=unlink, 33=write, 41=truncate)
// - Event data is nested under event.{event_name} (e.g., event.open, event.fork)
// - write/unlink/truncate use "target" not "file"
// - create uses destination.existing_file
// - fork child has full process info including audit_token
// esloggerEvent is the top-level event from eslogger.
type esloggerEvent struct {
EventType int `json:"event_type"`
Process esloggerProcess `json:"process"`
Event map[string]json.RawMessage `json:"event"`
}
type esloggerProcess struct {
AuditToken esloggerAuditToken `json:"audit_token"`
Executable esloggerExec `json:"executable"`
PPID int `json:"ppid"`
}
type esloggerAuditToken struct {
PID int `json:"pid"`
}
type esloggerExec struct {
Path string `json:"path"`
PathTruncated bool `json:"path_truncated"`
}
// Event-specific types.
type esloggerOpenEvent struct {
File esloggerFile `json:"file"`
Fflag int `json:"fflag"`
}
type esloggerTargetEvent struct {
Target esloggerFile `json:"target"`
}
type esloggerCreateEvent struct {
DestinationType int `json:"destination_type"`
Destination esloggerCreateDest `json:"destination"`
}
type esloggerCreateDest struct {
ExistingFile *esloggerFile `json:"existing_file,omitempty"`
NewPath *esloggerNewPath `json:"new_path,omitempty"`
}
type esloggerNewPath struct {
Dir esloggerFile `json:"dir"`
Filename string `json:"filename"`
}
type esloggerRenameEvent struct {
Source esloggerFile `json:"source"`
Destination esloggerFile `json:"destination_new_path"` // TODO: verify actual field name
}
type esloggerForkEvent struct {
Child esloggerForkChild `json:"child"`
}
type esloggerForkChild struct {
AuditToken esloggerAuditToken `json:"audit_token"`
Executable esloggerExec `json:"executable"`
PPID int `json:"ppid"`
}
type esloggerLinkEvent struct {
Source esloggerFile `json:"source"`
TargetDir esloggerFile `json:"target_dir"`
}
type esloggerFile struct {
Path string `json:"path"`
PathTruncated bool `json:"path_truncated"`
}
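To show how these nested types line up with the one-object-per-line format, here is a sketch that parses an open event from an illustrative line shaped like the struct tags above (real eslogger output carries many more fields, e.g. process and audit_token):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fwriteFlag mirrors the constant above: set when a file is open for writing.
const fwriteFlag = 0x0002

type event struct {
	EventType int                        `json:"event_type"`
	Event     map[string]json.RawMessage `json:"event"`
}

type openEvent struct {
	File struct {
		Path string `json:"path"`
	} `json:"file"`
	Fflag int `json:"fflag"`
}

// parseOpen extracts (path, isWrite) from one "open" line: the event data
// is nested under event.open, and FWRITE in fflag marks a write open.
func parseOpen(line string) (string, bool, error) {
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		return "", false, err
	}
	raw, ok := ev.Event["open"]
	if !ok {
		return "", false, fmt.Errorf("not an open event")
	}
	var op openEvent
	if err := json.Unmarshal(raw, &op); err != nil {
		return "", false, err
	}
	return op.File.Path, op.Fflag&fwriteFlag != 0, nil
}

func main() {
	line := `{"event_type":10,"event":{"open":{"file":{"path":"/tmp/out"},"fflag":2}}}`
	path, isWrite, err := parseOpen(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(path, isWrite) // /tmp/out true
}
```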
// CheckLearningAvailable verifies that eslogger exists and the daemon is running.
func CheckLearningAvailable() error {
if _, err := os.Stat("/usr/bin/eslogger"); err != nil {
return fmt.Errorf("eslogger not found at /usr/bin/eslogger (requires macOS 13+): %w", err)
}
client := daemon.NewClient(daemon.DefaultSocketPath, false)
if !client.IsRunning() {
return fmt.Errorf("greywall daemon is not running (required for macOS learning mode)\n\n" +
" Install and start: sudo greywall daemon install\n" +
" Check status: greywall daemon status")
}
return nil
}
// eventName extracts the event name from the event map. eslogger nests
// event data under a single key (e.g. event.open, event.fork), so the
// map's lone key names the event.
func eventName(ev *esloggerEvent) string {
for key := range ev.Event {
return key
}
return ""
}
// ParseEsloggerLog reads an eslogger JSON log in two passes: pass 1 builds
// the process tree (PID set) from fork events starting at rootPID; pass 2
// filters filesystem events by that PID set.
func ParseEsloggerLog(logPath string, rootPID int, debug bool) (*TraceResult, error) {
home, _ := os.UserHomeDir()
seenWrite := make(map[string]bool)
seenRead := make(map[string]bool)
result := &TraceResult{}
// Pass 1: Build the PID set from fork events.
pidSet := map[int]bool{rootPID: true}
forkEvents, err := scanForkEvents(logPath)
if err != nil {
return nil, err
}
// Fixed-point expansion: grow the PID set via fork parent→child
// relationships. Iterate until stable so chained forks (a child that
// itself forks) are picked up regardless of record order.
changed := true
for changed {
changed = false
for _, fe := range forkEvents {
if pidSet[fe.parentPID] && !pidSet[fe.childPID] {
pidSet[fe.childPID] = true
changed = true
}
}
}
if debug {
fmt.Fprintf(os.Stderr, "[greywall] eslogger PID tree from root %d: %d PIDs\n", rootPID, len(pidSet))
}
// Pass 2: Scan filesystem events, filter by PID set.
f, err := os.Open(logPath) //nolint:gosec // daemon-controlled temp file path
if err != nil {
return nil, fmt.Errorf("failed to open eslogger log: %w", err)
}
defer func() { _ = f.Close() }()
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 0, 256*1024), 4*1024*1024)
lineCount := 0
matchedLines := 0
writeCount := 0
readCount := 0
for scanner.Scan() {
line := scanner.Bytes()
lineCount++
var ev esloggerEvent
if err := json.Unmarshal(line, &ev); err != nil {
continue
}
name := eventName(&ev)
// Skip fork events (already processed in pass 1)
if name == "fork" {
continue
}
// Filter by PID set
pid := ev.Process.AuditToken.PID
if !pidSet[pid] {
continue
}
matchedLines++
// Extract path and classify operation
paths, class := classifyEsloggerEvent(&ev, name)
if class == opSkip || len(paths) == 0 {
continue
}
for _, path := range paths {
if shouldFilterPathMacOS(path, home) {
continue
}
switch class {
case opWrite:
writeCount++
if !seenWrite[path] {
seenWrite[path] = true
result.WritePaths = append(result.WritePaths, path)
}
case opRead:
readCount++
if !seenRead[path] {
seenRead[path] = true
result.ReadPaths = append(result.ReadPaths, path)
}
}
}
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("error reading eslogger log: %w", err)
}
if debug {
fmt.Fprintf(os.Stderr, "[greywall] Parsed eslogger log: %d lines, %d matched PIDs, %d writes, %d reads, %d unique write paths, %d unique read paths\n",
lineCount, matchedLines, writeCount, readCount, len(result.WritePaths), len(result.ReadPaths))
}
return result, nil
}
// forkRecord stores a parent→child PID relationship from a fork event.
type forkRecord struct {
parentPID int
childPID int
}
// scanForkEvents reads the log and extracts all fork parent→child PID pairs.
func scanForkEvents(logPath string) ([]forkRecord, error) {
f, err := os.Open(logPath) //nolint:gosec // daemon-controlled temp file path
if err != nil {
return nil, fmt.Errorf("failed to open eslogger log: %w", err)
}
defer func() { _ = f.Close() }()
scanner := bufio.NewScanner(f)
scanner.Buffer(make([]byte, 0, 256*1024), 4*1024*1024)
var forks []forkRecord
for scanner.Scan() {
line := scanner.Bytes()
// Quick pre-check to avoid parsing non-fork lines.
// Fork events have "fork" as a key in the event object.
if !strings.Contains(string(line), `"fork"`) {
continue
}
var ev esloggerEvent
if err := json.Unmarshal(line, &ev); err != nil {
continue
}
forkRaw, ok := ev.Event["fork"]
if !ok {
continue
}
var fe esloggerForkEvent
if err := json.Unmarshal(forkRaw, &fe); err != nil {
continue
}
forks = append(forks, forkRecord{
parentPID: ev.Process.AuditToken.PID,
childPID: fe.Child.AuditToken.PID,
})
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("error reading eslogger log for fork events: %w", err)
}
return forks, nil
}
// classifyEsloggerEvent extracts paths and classifies the operation from an eslogger event.
// The event name is the key inside the event map (e.g., "open", "fork", "write").
func classifyEsloggerEvent(ev *esloggerEvent, name string) ([]string, opClass) {
eventRaw, ok := ev.Event[name]
if !ok {
return nil, opSkip
}
switch name {
case "open":
var oe esloggerOpenEvent
if err := json.Unmarshal(eventRaw, &oe); err != nil {
return nil, opSkip
}
path := oe.File.Path
if path == "" || oe.File.PathTruncated {
return nil, opSkip
}
if oe.Fflag&fwriteFlag != 0 {
return []string{path}, opWrite
}
return []string{path}, opRead
case "create":
var ce esloggerCreateEvent
if err := json.Unmarshal(eventRaw, &ce); err != nil {
return nil, opSkip
}
// create events use destination.existing_file or destination.new_path
if ce.Destination.ExistingFile != nil {
path := ce.Destination.ExistingFile.Path
if path != "" && !ce.Destination.ExistingFile.PathTruncated {
return []string{path}, opWrite
}
}
if ce.Destination.NewPath != nil {
dir := ce.Destination.NewPath.Dir.Path
filename := ce.Destination.NewPath.Filename
if dir != "" && filename != "" {
return []string{dir + "/" + filename}, opWrite
}
}
return nil, opSkip
case "write", "unlink", "truncate":
// These events use "target" not "file"
var te esloggerTargetEvent
if err := json.Unmarshal(eventRaw, &te); err != nil {
return nil, opSkip
}
path := te.Target.Path
if path == "" || te.Target.PathTruncated {
return nil, opSkip
}
return []string{path}, opWrite
case "rename":
var re esloggerRenameEvent
if err := json.Unmarshal(eventRaw, &re); err != nil {
return nil, opSkip
}
var paths []string
if re.Source.Path != "" && !re.Source.PathTruncated {
paths = append(paths, re.Source.Path)
}
if re.Destination.Path != "" && !re.Destination.PathTruncated {
paths = append(paths, re.Destination.Path)
}
if len(paths) == 0 {
return nil, opSkip
}
return paths, opWrite
case "link":
var le esloggerLinkEvent
if err := json.Unmarshal(eventRaw, &le); err != nil {
return nil, opSkip
}
var paths []string
if le.Source.Path != "" && !le.Source.PathTruncated {
paths = append(paths, le.Source.Path)
}
if le.TargetDir.Path != "" && !le.TargetDir.PathTruncated {
paths = append(paths, le.TargetDir.Path)
}
if len(paths) == 0 {
return nil, opSkip
}
return paths, opWrite
default:
return nil, opSkip
}
}
// shouldFilterPathMacOS returns true if a path should be excluded from macOS learning results.
func shouldFilterPathMacOS(path, home string) bool {
if path == "" || !strings.HasPrefix(path, "/") {
return true
}
// macOS system path prefixes to filter
systemPrefixes := []string{
"/dev/",
"/private/var/run/",
"/private/var/db/",
"/private/var/folders/",
"/System/",
"/Library/",
"/usr/lib/",
"/usr/share/",
"/private/etc/",
"/tmp/",
"/private/tmp/",
}
for _, prefix := range systemPrefixes {
if strings.HasPrefix(path, prefix) {
return true
}
}
// Filter .dylib files (macOS shared libraries)
if strings.HasSuffix(path, ".dylib") {
return true
}
// Filter greywall infrastructure files
if strings.Contains(path, "greywall-") {
return true
}
// Filter paths outside home directory
if home != "" && !strings.HasPrefix(path, home+"/") {
return true
}
// Filter exact home directory match
if path == home {
return true
}
// Filter shell infrastructure directories (PATH lookups, plugin dirs)
if home != "" {
shellInfraPrefixes := []string{
home + "/.antigen/",
home + "/.oh-my-zsh/",
home + "/.pyenv/shims/",
home + "/.bun/bin/",
home + "/.local/bin/",
}
for _, prefix := range shellInfraPrefixes {
if strings.HasPrefix(path, prefix) {
return true
}
}
}
return false
}

View File

@@ -0,0 +1,557 @@
//go:build darwin
package sandbox
import (
"encoding/json"
"os"
"path/filepath"
"strings"
"testing"
)
// makeEsloggerLine builds a single JSON line matching real eslogger output format.
// event_type is an int, and event data is nested under event.{eventName}.
func makeEsloggerLine(eventName string, eventTypeInt int, pid int, eventData interface{}) string {
eventJSON, _ := json.Marshal(eventData)
ev := map[string]interface{}{
"event_type": eventTypeInt,
"process": map[string]interface{}{
"audit_token": map[string]interface{}{
"pid": pid,
},
"executable": map[string]interface{}{
"path": "/usr/bin/test",
"path_truncated": false,
},
"ppid": 1,
},
"event": map[string]json.RawMessage{
eventName: json.RawMessage(eventJSON),
},
}
data, _ := json.Marshal(ev)
return string(data)
}
func TestClassifyEsloggerEvent(t *testing.T) {
tests := []struct {
name string
eventName string
eventData interface{}
expectPaths []string
expectClass opClass
}{
{
name: "open read-only",
eventName: "open",
eventData: map[string]interface{}{
"file": map[string]interface{}{"path": "/Users/test/file.txt", "path_truncated": false},
"fflag": 0x0001, // FREAD only
},
expectPaths: []string{"/Users/test/file.txt"},
expectClass: opRead,
},
{
name: "open with write flag",
eventName: "open",
eventData: map[string]interface{}{
"file": map[string]interface{}{"path": "/Users/test/file.txt", "path_truncated": false},
"fflag": 0x0003, // FREAD | FWRITE
},
expectPaths: []string{"/Users/test/file.txt"},
expectClass: opWrite,
},
{
name: "create event with existing_file",
eventName: "create",
eventData: map[string]interface{}{
"destination_type": 0,
"destination": map[string]interface{}{
"existing_file": map[string]interface{}{"path": "/Users/test/new.txt", "path_truncated": false},
},
},
expectPaths: []string{"/Users/test/new.txt"},
expectClass: opWrite,
},
{
name: "write event uses target",
eventName: "write",
eventData: map[string]interface{}{
"target": map[string]interface{}{"path": "/Users/test/data.db", "path_truncated": false},
},
expectPaths: []string{"/Users/test/data.db"},
expectClass: opWrite,
},
{
name: "unlink event uses target",
eventName: "unlink",
eventData: map[string]interface{}{
"target": map[string]interface{}{"path": "/Users/test/old.txt", "path_truncated": false},
},
expectPaths: []string{"/Users/test/old.txt"},
expectClass: opWrite,
},
{
name: "truncate event uses target",
eventName: "truncate",
eventData: map[string]interface{}{
"target": map[string]interface{}{"path": "/Users/test/trunc.log", "path_truncated": false},
},
expectPaths: []string{"/Users/test/trunc.log"},
expectClass: opWrite,
},
{
name: "rename event with source and destination",
eventName: "rename",
eventData: map[string]interface{}{
"source": map[string]interface{}{"path": "/Users/test/old.txt", "path_truncated": false},
"destination_new_path": map[string]interface{}{"path": "/Users/test/new.txt", "path_truncated": false},
},
expectPaths: []string{"/Users/test/old.txt", "/Users/test/new.txt"},
expectClass: opWrite,
},
{
name: "truncated path is skipped",
eventName: "open",
eventData: map[string]interface{}{
"file": map[string]interface{}{"path": "/Users/test/very/long/path", "path_truncated": true},
"fflag": 0x0001,
},
expectPaths: nil,
expectClass: opSkip,
},
{
name: "empty path is skipped",
eventName: "write",
eventData: map[string]interface{}{
"target": map[string]interface{}{"path": "", "path_truncated": false},
},
expectPaths: nil,
expectClass: opSkip,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
eventJSON, _ := json.Marshal(tt.eventData)
ev := &esloggerEvent{
EventType: 0,
Event: map[string]json.RawMessage{
tt.eventName: json.RawMessage(eventJSON),
},
}
paths, class := classifyEsloggerEvent(ev, tt.eventName)
if class != tt.expectClass {
t.Errorf("class = %d, want %d", class, tt.expectClass)
}
if tt.expectPaths == nil {
if len(paths) != 0 {
t.Errorf("paths = %v, want nil", paths)
}
} else {
if len(paths) != len(tt.expectPaths) {
t.Errorf("paths = %v, want %v", paths, tt.expectPaths)
} else {
for i, p := range paths {
if p != tt.expectPaths[i] {
t.Errorf("paths[%d] = %q, want %q", i, p, tt.expectPaths[i])
}
}
}
}
})
}
}
func TestParseEsloggerLog(t *testing.T) {
home, _ := os.UserHomeDir()
// Root PID is 100; it forks child PID 101, which forks grandchild 102.
// PID 200 is an unrelated process.
lines := []string{
// Fork: root (100) -> child (101)
makeEsloggerLine("fork", 11, 100, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 101},
"executable": map[string]interface{}{"path": "/usr/bin/child", "path_truncated": false},
"ppid": 100,
},
}),
// Fork: child (101) -> grandchild (102)
makeEsloggerLine("fork", 11, 101, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 102},
"executable": map[string]interface{}{"path": "/usr/bin/grandchild", "path_truncated": false},
"ppid": 101,
},
}),
// Write by root process (should be included) — write uses "target"
makeEsloggerLine("write", 33, 100, map[string]interface{}{
"target": map[string]interface{}{"path": filepath.Join(home, ".cache/testapp/db.sqlite"), "path_truncated": false},
}),
// Create by child (should be included) — create uses destination.existing_file
makeEsloggerLine("create", 13, 101, map[string]interface{}{
"destination_type": 0,
"destination": map[string]interface{}{
"existing_file": map[string]interface{}{"path": filepath.Join(home, ".config/testapp/conf.json"), "path_truncated": false},
},
}),
// Open (read-only) by grandchild (should be included as read)
makeEsloggerLine("open", 10, 102, map[string]interface{}{
"file": map[string]interface{}{"path": filepath.Join(home, ".config/testapp/extra.json"), "path_truncated": false},
"fflag": 0x0001,
}),
// Open (write) by grandchild (should be included as write)
makeEsloggerLine("open", 10, 102, map[string]interface{}{
"file": map[string]interface{}{"path": filepath.Join(home, ".cache/testapp/version"), "path_truncated": false},
"fflag": 0x0003,
}),
// Write by unrelated PID 200 (should NOT be included)
makeEsloggerLine("write", 33, 200, map[string]interface{}{
"target": map[string]interface{}{"path": filepath.Join(home, ".cache/otherapp/data"), "path_truncated": false},
}),
// System path write by root PID (should be filtered)
makeEsloggerLine("write", 33, 100, map[string]interface{}{
"target": map[string]interface{}{"path": "/dev/null", "path_truncated": false},
}),
// Unlink by child (should be included) — unlink uses "target"
makeEsloggerLine("unlink", 32, 101, map[string]interface{}{
"target": map[string]interface{}{"path": filepath.Join(home, ".cache/testapp/old.tmp"), "path_truncated": false},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "eslogger.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
result, err := ParseEsloggerLog(logFile, 100, false)
if err != nil {
t.Fatalf("ParseEsloggerLog() error: %v", err)
}
// Check write paths
expectedWrites := map[string]bool{
filepath.Join(home, ".cache/testapp/db.sqlite"): false,
filepath.Join(home, ".config/testapp/conf.json"): false,
filepath.Join(home, ".cache/testapp/version"): false,
filepath.Join(home, ".cache/testapp/old.tmp"): false,
}
for _, p := range result.WritePaths {
if _, ok := expectedWrites[p]; ok {
expectedWrites[p] = true
}
}
for p, found := range expectedWrites {
if !found {
t.Errorf("WritePaths missing expected: %q, got: %v", p, result.WritePaths)
}
}
// Check that unrelated PID 200 paths were not included
for _, p := range result.WritePaths {
if strings.Contains(p, "otherapp") {
t.Errorf("WritePaths should not contain otherapp path: %q", p)
}
}
// Check read paths
expectedReads := map[string]bool{
filepath.Join(home, ".config/testapp/extra.json"): false,
}
for _, p := range result.ReadPaths {
if _, ok := expectedReads[p]; ok {
expectedReads[p] = true
}
}
for p, found := range expectedReads {
if !found {
t.Errorf("ReadPaths missing expected: %q, got: %v", p, result.ReadPaths)
}
}
}
func TestParseEsloggerLogForkChaining(t *testing.T) {
home, _ := os.UserHomeDir()
// Test deep fork chains: 100 -> 101 -> 102 -> 103
lines := []string{
makeEsloggerLine("fork", 11, 100, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 101},
"executable": map[string]interface{}{"path": "/bin/sh", "path_truncated": false},
"ppid": 100,
},
}),
makeEsloggerLine("fork", 11, 101, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 102},
"executable": map[string]interface{}{"path": "/usr/bin/node", "path_truncated": false},
"ppid": 101,
},
}),
makeEsloggerLine("fork", 11, 102, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 103},
"executable": map[string]interface{}{"path": "/usr/bin/ruby", "path_truncated": false},
"ppid": 102,
},
}),
// Write from the deepest child
makeEsloggerLine("write", 33, 103, map[string]interface{}{
"target": map[string]interface{}{"path": filepath.Join(home, ".cache/app/deep.log"), "path_truncated": false},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "eslogger.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
result, err := ParseEsloggerLog(logFile, 100, false)
if err != nil {
t.Fatalf("ParseEsloggerLog() error: %v", err)
}
// The deep child's write should be included
found := false
for _, p := range result.WritePaths {
if strings.Contains(p, "deep.log") {
found = true
break
}
}
if !found {
t.Errorf("WritePaths should include deep child write, got: %v", result.WritePaths)
}
}
func TestShouldFilterPathMacOS(t *testing.T) {
home := "/Users/testuser"
tests := []struct {
path string
expected bool
}{
{"/dev/null", true},
{"/private/var/run/syslog", true},
{"/private/var/db/something", true},
{"/private/var/folders/xx/yy", true},
{"/System/Library/Frameworks/foo", true},
{"/Library/Preferences/com.apple.foo", true},
{"/usr/lib/libSystem.B.dylib", true},
{"/usr/share/zoneinfo/UTC", true},
{"/private/etc/hosts", true},
{"/tmp/somefile", true},
{"/private/tmp/somefile", true},
{"/usr/local/lib/libfoo.dylib", true}, // .dylib
{"/other/user/file", true}, // outside home
{"/Users/testuser", true}, // exact home match
{"", true}, // empty
{"relative/path", true}, // relative
{"/Users/testuser/.cache/app/db", false},
{"/Users/testuser/project/main.go", false},
{"/Users/testuser/.config/app/conf.json", false},
{"/tmp/greywall-eslogger-abc.log", true}, // greywall infrastructure
{"/Users/testuser/.antigen/bundles/rupa/z/zig", true}, // shell infra
{"/Users/testuser/.oh-my-zsh/plugins/git/git.plugin.zsh", true}, // shell infra
{"/Users/testuser/.pyenv/shims/ruby", true}, // shell infra
{"/Users/testuser/.bun/bin/node", true}, // shell infra
{"/Users/testuser/.local/bin/rg", true}, // shell infra
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
got := shouldFilterPathMacOS(tt.path, home)
if got != tt.expected {
t.Errorf("shouldFilterPathMacOS(%q, %q) = %v, want %v", tt.path, home, got, tt.expected)
}
})
}
}
func TestCheckLearningAvailable(t *testing.T) {
err := CheckLearningAvailable()
if err != nil {
t.Logf("learning not available (expected when daemon not running): %v", err)
}
}
func TestParseEsloggerLogEmpty(t *testing.T) {
logFile := filepath.Join(t.TempDir(), "empty.log")
if err := os.WriteFile(logFile, []byte(""), 0o600); err != nil {
t.Fatal(err)
}
result, err := ParseEsloggerLog(logFile, 100, false)
if err != nil {
t.Fatalf("ParseEsloggerLog() error: %v", err)
}
if len(result.WritePaths) != 0 {
t.Errorf("expected 0 write paths, got %d", len(result.WritePaths))
}
if len(result.ReadPaths) != 0 {
t.Errorf("expected 0 read paths, got %d", len(result.ReadPaths))
}
}
func TestParseEsloggerLogMalformedJSON(t *testing.T) {
lines := []string{
"not valid json at all",
"{partial json",
makeEsloggerLine("write", 33, 100, map[string]interface{}{
"target": map[string]interface{}{"path": "/Users/test/.cache/app/good.txt", "path_truncated": false},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "malformed.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
// Should not error — malformed lines are skipped
if _, err := ParseEsloggerLog(logFile, 100, false); err != nil {
t.Fatalf("ParseEsloggerLog() error: %v", err)
}
}
func TestScanForkEvents(t *testing.T) {
lines := []string{
makeEsloggerLine("fork", 11, 100, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 101},
"executable": map[string]interface{}{"path": "/bin/sh", "path_truncated": false},
"ppid": 100,
},
}),
makeEsloggerLine("write", 33, 100, map[string]interface{}{
"target": map[string]interface{}{"path": "/Users/test/file.txt", "path_truncated": false},
}),
makeEsloggerLine("fork", 11, 101, map[string]interface{}{
"child": map[string]interface{}{
"audit_token": map[string]interface{}{"pid": 102},
"executable": map[string]interface{}{"path": "/usr/bin/node", "path_truncated": false},
"ppid": 101,
},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "forks.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
forks, err := scanForkEvents(logFile)
if err != nil {
t.Fatalf("scanForkEvents() error: %v", err)
}
if len(forks) != 2 {
t.Fatalf("expected 2 fork records, got %d", len(forks))
}
expected := []forkRecord{
{parentPID: 100, childPID: 101},
{parentPID: 101, childPID: 102},
}
for i, f := range forks {
if f.parentPID != expected[i].parentPID || f.childPID != expected[i].childPID {
t.Errorf("fork[%d] = {parent:%d, child:%d}, want {parent:%d, child:%d}",
i, f.parentPID, f.childPID, expected[i].parentPID, expected[i].childPID)
}
}
}
func TestFwriteFlag(t *testing.T) {
if fwriteFlag != 0x0002 {
t.Errorf("fwriteFlag = 0x%04x, want 0x0002", fwriteFlag)
}
tests := []struct {
name string
fflag int
isWrite bool
}{
{"FREAD only", 0x0001, false},
{"FWRITE only", 0x0002, true},
{"FREAD|FWRITE", 0x0003, true},
{"FREAD|FWRITE|O_CREAT", 0x0203, true},
{"zero", 0x0000, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.fflag&fwriteFlag != 0
if got != tt.isWrite {
t.Errorf("fflag 0x%04x & FWRITE = %v, want %v", tt.fflag, got, tt.isWrite)
}
})
}
}
func TestParseEsloggerLogLink(t *testing.T) {
home, _ := os.UserHomeDir()
lines := []string{
makeEsloggerLine("link", 42, 100, map[string]interface{}{
"source": map[string]interface{}{"path": filepath.Join(home, ".cache/app/source.txt"), "path_truncated": false},
"target_dir": map[string]interface{}{"path": filepath.Join(home, ".cache/app/links"), "path_truncated": false},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "link.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
result, err := ParseEsloggerLog(logFile, 100, false)
if err != nil {
t.Fatalf("ParseEsloggerLog() error: %v", err)
}
expectedWrites := map[string]bool{
filepath.Join(home, ".cache/app/source.txt"): false,
filepath.Join(home, ".cache/app/links"): false,
}
for _, p := range result.WritePaths {
if _, ok := expectedWrites[p]; ok {
expectedWrites[p] = true
}
}
for p, found := range expectedWrites {
if !found {
t.Errorf("WritePaths missing expected: %q, got: %v", p, result.WritePaths)
}
}
}
func TestParseEsloggerLogDebugOutput(t *testing.T) {
home, _ := os.UserHomeDir()
lines := []string{
makeEsloggerLine("write", 33, 100, map[string]interface{}{
"target": map[string]interface{}{"path": filepath.Join(home, ".cache/app/test.txt"), "path_truncated": false},
}),
}
logContent := strings.Join(lines, "\n")
logFile := filepath.Join(t.TempDir(), "debug.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
// Just verify debug=true doesn't panic
_, err := ParseEsloggerLog(logFile, 100, true)
if err != nil {
t.Fatalf("ParseEsloggerLog() with debug=true error: %v", err)
}
}

View File

@@ -20,14 +20,8 @@ var straceSyscallRegex = regexp.MustCompile(
// openatWriteFlags matches O_WRONLY, O_RDWR, O_CREAT, O_TRUNC, O_APPEND flags in strace output.
var openatWriteFlags = regexp.MustCompile(`O_(?:WRONLY|RDWR|CREAT|TRUNC|APPEND)`)
// StraceResult holds parsed read and write paths from an strace log.
type StraceResult struct {
WritePaths []string
ReadPaths []string
}
// CheckStraceAvailable verifies that strace is installed and accessible.
func CheckStraceAvailable() error {
// CheckLearningAvailable verifies that strace is installed and accessible.
func CheckLearningAvailable() error {
_, err := exec.LookPath("strace")
if err != nil {
return fmt.Errorf("strace is required for learning mode but not found: %w\n\nInstall it with: sudo apt install strace (Debian/Ubuntu) or sudo pacman -S strace (Arch)", err)
@@ -36,17 +30,17 @@ func CheckStraceAvailable() error {
}
// ParseStraceLog reads an strace output file and extracts unique read and write paths.
func ParseStraceLog(logPath string, debug bool) (*StraceResult, error) {
func ParseStraceLog(logPath string, debug bool) (*TraceResult, error) {
f, err := os.Open(logPath) //nolint:gosec // user-controlled path from temp file - intentional
if err != nil {
return nil, fmt.Errorf("failed to open strace log: %w", err)
}
defer f.Close()
defer func() { _ = f.Close() }()
home, _ := os.UserHomeDir()
seenWrite := make(map[string]bool)
seenRead := make(map[string]bool)
result := &StraceResult{}
result := &TraceResult{}
scanner := bufio.NewScanner(f)
// Increase buffer for long strace lines

View File

@@ -127,7 +127,7 @@ func TestParseStraceLog(t *testing.T) {
}, "\n")
logFile := filepath.Join(t.TempDir(), "strace.log")
if err := os.WriteFile(logFile, []byte(logContent), 0o644); err != nil {
if err := os.WriteFile(logFile, []byte(logContent), 0o600); err != nil {
t.Fatal(err)
}
@@ -233,10 +233,10 @@ func TestExtractReadPath(t *testing.T) {
}
}
func TestCheckStraceAvailable(t *testing.T) {
func TestCheckLearningAvailable(t *testing.T) {
// This test just verifies the function doesn't panic.
// The result depends on whether strace is installed on the test system.
err := CheckStraceAvailable()
err := CheckLearningAvailable()
if err != nil {
t.Logf("strace not available (expected in some CI environments): %v", err)
}

View File

@@ -1,21 +1,10 @@
//go:build !linux
//go:build !linux && !darwin
package sandbox
import "fmt"
// StraceResult holds parsed read and write paths from an strace log.
type StraceResult struct {
WritePaths []string
ReadPaths []string
}
// CheckStraceAvailable returns an error on non-Linux platforms.
func CheckStraceAvailable() error {
return fmt.Errorf("learning mode is only available on Linux (requires strace and bubblewrap)")
}
// ParseStraceLog returns an error on non-Linux platforms.
func ParseStraceLog(logPath string, debug bool) (*StraceResult, error) {
return nil, fmt.Errorf("strace log parsing is only available on Linux")
// CheckLearningAvailable returns an error on unsupported platforms.
func CheckLearningAvailable() error {
return fmt.Errorf("learning mode is only available on Linux (requires strace) and macOS (requires eslogger + daemon)")
}

View File

@@ -70,14 +70,13 @@ func TestFindApplicationDirectory(t *testing.T) {
func TestCollapsePaths(t *testing.T) {
// Temporarily override home for testing
origHome := os.Getenv("HOME")
os.Setenv("HOME", "/home/testuser")
defer os.Setenv("HOME", origHome)
t.Setenv("HOME", "/home/testuser")
tests := []struct {
name string
paths []string
contains []string // paths that should be in the result
name string
paths []string
contains []string // paths that should be in the result
notContains []string // paths that must NOT be in the result
}{
{
name: "multiple paths under same app dir",
@@ -111,6 +110,33 @@ func TestCollapsePaths(t *testing.T) {
"/home/testuser/.config/opencode",
},
},
{
name: "files directly under home stay as exact paths",
paths: []string{
"/home/testuser/.gitignore",
"/home/testuser/.npmrc",
},
contains: []string{
"/home/testuser/.gitignore",
"/home/testuser/.npmrc",
},
notContains: []string{"/home/testuser"},
},
{
name: "mix of home files and app dir paths",
paths: []string{
"/home/testuser/.gitignore",
"/home/testuser/.cache/opencode/db/main.sqlite",
"/home/testuser/.cache/opencode/version",
"/home/testuser/.npmrc",
},
contains: []string{
"/home/testuser/.gitignore",
"/home/testuser/.npmrc",
"/home/testuser/.cache/opencode",
},
notContains: []string{"/home/testuser"},
},
}
for _, tt := range tests {
@@ -134,6 +160,13 @@ func TestCollapsePaths(t *testing.T) {
t.Errorf("CollapsePaths() = %v, missing expected path %q", got, want)
}
}
for _, bad := range tt.notContains {
for _, g := range got {
if g == bad {
t.Errorf("CollapsePaths() = %v, should NOT contain %q", got, bad)
}
}
}
})
}
}
@@ -284,9 +317,7 @@ func TestToTildePath(t *testing.T) {
func TestListLearnedTemplates(t *testing.T) {
// Use a temp dir to isolate from real user config
tmpDir := t.TempDir()
origConfigDir := os.Getenv("XDG_CONFIG_HOME")
os.Setenv("XDG_CONFIG_HOME", tmpDir)
defer os.Setenv("XDG_CONFIG_HOME", origConfigDir)
t.Setenv("XDG_CONFIG_HOME", tmpDir)
// Initially empty
templates, err := ListLearnedTemplates()
@@ -299,10 +330,18 @@ func TestListLearnedTemplates(t *testing.T) {
// Create some templates
dir := LearnedTemplateDir()
os.MkdirAll(dir, 0o755)
os.WriteFile(filepath.Join(dir, "opencode.json"), []byte("{}"), 0o644)
os.WriteFile(filepath.Join(dir, "myapp.json"), []byte("{}"), 0o644)
os.WriteFile(filepath.Join(dir, "notjson.txt"), []byte(""), 0o644) // should be ignored
if err := os.MkdirAll(dir, 0o750); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(dir, "opencode.json"), []byte("{}"), 0o600); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(dir, "myapp.json"), []byte("{}"), 0o600); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(dir, "notjson.txt"), []byte(""), 0o600); err != nil {
t.Fatal(err)
}
templates, err = ListLearnedTemplates()
if err != nil {
@@ -325,8 +364,9 @@ func TestListLearnedTemplates(t *testing.T) {
}
func TestBuildTemplate(t *testing.T) {
allowRead := []string{"~/external-data"}
allowWrite := []string{".", "~/.cache/opencode", "~/.config/opencode"}
result := buildTemplate("opencode", allowWrite)
result := buildTemplate("opencode", allowRead, allowWrite)
// Check header comments
if !strings.Contains(result, `Learned template for "opencode"`) {
@@ -340,6 +380,12 @@ func TestBuildTemplate(t *testing.T) {
}
// Check content
if !strings.Contains(result, `"allowRead"`) {
t.Error("template missing allowRead field")
}
if !strings.Contains(result, `"~/external-data"`) {
t.Error("template missing expected allowRead path")
}
if !strings.Contains(result, `"allowWrite"`) {
t.Error("template missing allowWrite field")
}
@@ -352,31 +398,44 @@ func TestBuildTemplate(t *testing.T) {
if !strings.Contains(result, `"denyRead"`) {
t.Error("template missing denyRead field")
}
// Check .env patterns are included in denyRead
if !strings.Contains(result, `".env"`) {
t.Error("template missing .env in denyRead")
}
if !strings.Contains(result, `".env.*"`) {
t.Error("template missing .env.* in denyRead")
}
}
func TestBuildTemplateNoAllowRead(t *testing.T) {
result := buildTemplate("simple-cmd", nil, []string{"."})
// When allowRead is nil, it should be omitted from JSON
if strings.Contains(result, `"allowRead"`) {
t.Error("template should omit allowRead when nil")
}
}
func TestGenerateLearnedTemplate(t *testing.T) {
// Create a temp dir for templates
tmpDir := t.TempDir()
origConfigDir := os.Getenv("XDG_CONFIG_HOME")
os.Setenv("XDG_CONFIG_HOME", tmpDir)
defer os.Setenv("XDG_CONFIG_HOME", origConfigDir)
t.Setenv("XDG_CONFIG_HOME", tmpDir)
// Create a fake strace log
home, _ := os.UserHomeDir()
// Build a TraceResult directly (platform-independent test)
result := &TraceResult{
WritePaths: []string{
filepath.Join(home, ".cache/testapp/db.sqlite"),
filepath.Join(home, ".cache/testapp/version"),
filepath.Join(home, ".config/testapp"),
},
ReadPaths: []string{
filepath.Join(home, ".config/testapp/conf.json"),
},
}
templatePath, err := GenerateLearnedTemplate(result, "testapp", false)
if err != nil {
t.Fatalf("GenerateLearnedTemplate() error: %v", err)
}
@@ -386,7 +445,7 @@ func TestGenerateLearnedTemplate(t *testing.T) {
}
// Read and verify template
data, err := os.ReadFile(templatePath) //nolint:gosec // reading test-generated template file
if err != nil {
t.Fatalf("failed to read template: %v", err)
}

View File

@@ -371,6 +371,12 @@ func getMandatoryDenyPaths(cwd string) []string {
paths = append(paths, p)
}
// Sensitive project files (e.g. .env) in cwd
for _, f := range SensitiveProjectFiles {
p := filepath.Join(cwd, f)
paths = append(paths, p)
}
// Git hooks in cwd
paths = append(paths, filepath.Join(cwd, ".git/hooks"))
@@ -389,6 +395,203 @@ func getMandatoryDenyPaths(cwd string) []string {
return paths
}
// buildDenyByDefaultMounts builds bwrap arguments for deny-by-default filesystem isolation.
// Starts with --tmpfs / (empty root), then selectively mounts system paths read-only,
// CWD read-write, and user tooling paths read-only. Sensitive files within CWD are masked.
func buildDenyByDefaultMounts(cfg *config.Config, cwd string, debug bool) []string {
var args []string
home, _ := os.UserHomeDir()
// Start with empty root
args = append(args, "--tmpfs", "/")
// System paths (read-only) - on modern distros (Arch, Fedora, etc.),
// /bin, /sbin, /lib, /lib64 are often symlinks to /usr/*. We must
// recreate these as symlinks via --symlink so the dynamic linker
// and shell can be found. Real directories get bind-mounted.
systemPaths := []string{"/usr", "/bin", "/sbin", "/lib", "/lib64", "/etc", "/opt", "/run"}
for _, p := range systemPaths {
if !fileExists(p) {
continue
}
if isSymlink(p) {
// Recreate the symlink inside the sandbox (e.g., /bin -> usr/bin)
target, err := os.Readlink(p)
if err == nil {
args = append(args, "--symlink", target, p)
}
} else {
args = append(args, "--ro-bind", p, p)
}
}
// /sys needs to be accessible for system info
if fileExists("/sys") && canMountOver("/sys") {
args = append(args, "--ro-bind", "/sys", "/sys")
}
// CWD: create intermediary dirs and bind read-write
if cwd != "" && fileExists(cwd) {
for _, dir := range intermediaryDirs("/", cwd) {
// Skip dirs that are already mounted as system paths
if isSystemMountPoint(dir) {
continue
}
args = append(args, "--dir", dir)
}
args = append(args, "--bind", cwd, cwd)
}
// User tooling paths from GetDefaultReadablePaths() (read-only)
// Filter out paths already mounted (system dirs, /dev, /proc, /tmp, macOS-specific)
if home != "" {
boundDirs := make(map[string]bool)
for _, p := range GetDefaultReadablePaths() {
// Skip system paths (already bound above), special mounts, and macOS paths
if isSystemMountPoint(p) || p == "/dev" || p == "/proc" || p == "/sys" ||
p == "/tmp" || p == "/private/tmp" ||
strings.HasPrefix(p, "/System") || strings.HasPrefix(p, "/Library") ||
strings.HasPrefix(p, "/Applications") || strings.HasPrefix(p, "/private/") ||
strings.HasPrefix(p, "/nix") || strings.HasPrefix(p, "/snap") ||
p == "/usr/local" || p == "/opt/homebrew" {
continue
}
if !strings.HasPrefix(p, home) {
continue // Only user tooling paths need intermediary dirs
}
if !fileExists(p) || !canMountOver(p) {
continue
}
// Create intermediary dirs between root and this path
for _, dir := range intermediaryDirs("/", p) {
if !boundDirs[dir] && !isSystemMountPoint(dir) && dir != cwd {
boundDirs[dir] = true
args = append(args, "--dir", dir)
}
}
args = append(args, "--ro-bind", p, p)
}
// Shell config files in home (read-only, literal files)
shellConfigs := []string{".bashrc", ".bash_profile", ".profile", ".zshrc", ".zprofile", ".zshenv", ".inputrc"}
homeIntermediaryAdded := boundDirs[home]
for _, f := range shellConfigs {
p := filepath.Join(home, f)
if fileExists(p) && canMountOver(p) {
if !homeIntermediaryAdded {
for _, dir := range intermediaryDirs("/", home) {
if !boundDirs[dir] && !isSystemMountPoint(dir) {
boundDirs[dir] = true
args = append(args, "--dir", dir)
}
}
homeIntermediaryAdded = true
}
args = append(args, "--ro-bind", p, p)
}
}
// Home tool caches (read-only, for package managers/configs)
homeCaches := []string{".cache", ".npm", ".cargo", ".rustup", ".local", ".config"}
for _, d := range homeCaches {
p := filepath.Join(home, d)
if fileExists(p) && canMountOver(p) {
if !homeIntermediaryAdded {
for _, dir := range intermediaryDirs("/", home) {
if !boundDirs[dir] && !isSystemMountPoint(dir) {
boundDirs[dir] = true
args = append(args, "--dir", dir)
}
}
homeIntermediaryAdded = true
}
args = append(args, "--ro-bind", p, p)
}
}
}
// User-specified allowRead paths (read-only)
if cfg != nil && cfg.Filesystem.AllowRead != nil {
boundPaths := make(map[string]bool)
expandedPaths := ExpandGlobPatterns(cfg.Filesystem.AllowRead)
for _, p := range expandedPaths {
if fileExists(p) && canMountOver(p) &&
!strings.HasPrefix(p, "/dev/") && !strings.HasPrefix(p, "/proc/") && !boundPaths[p] {
boundPaths[p] = true
// Create intermediary dirs if needed.
// For files, only create dirs up to the parent to avoid
// creating a directory at the file's path.
dirTarget := p
if !isDirectory(p) {
dirTarget = filepath.Dir(p)
}
for _, dir := range intermediaryDirs("/", dirTarget) {
if !isSystemMountPoint(dir) {
args = append(args, "--dir", dir)
}
}
args = append(args, "--ro-bind", p, p)
}
}
for _, p := range cfg.Filesystem.AllowRead {
normalized := NormalizePath(p)
if !ContainsGlobChars(normalized) && fileExists(normalized) && canMountOver(normalized) &&
!strings.HasPrefix(normalized, "/dev/") && !strings.HasPrefix(normalized, "/proc/") && !boundPaths[normalized] {
boundPaths[normalized] = true
dirTarget := normalized
if !isDirectory(normalized) {
dirTarget = filepath.Dir(normalized)
}
for _, dir := range intermediaryDirs("/", dirTarget) {
if !isSystemMountPoint(dir) {
args = append(args, "--dir", dir)
}
}
args = append(args, "--ro-bind", normalized, normalized)
}
}
}
// Mask sensitive project files within CWD by overlaying an empty regular file.
// We use an empty file instead of /dev/null because Landlock's READ_FILE right
// doesn't cover character devices, causing "Permission denied" on /dev/null mounts.
if cwd != "" {
var emptyFile string
for _, f := range SensitiveProjectFiles {
p := filepath.Join(cwd, f)
if fileExists(p) {
if emptyFile == "" {
emptyFile = filepath.Join(os.TempDir(), "greywall", "empty")
_ = os.MkdirAll(filepath.Dir(emptyFile), 0o750)
_ = os.WriteFile(emptyFile, nil, 0o444) //nolint:gosec // intentionally world-readable empty file for bind-mount masking
}
args = append(args, "--ro-bind", emptyFile, p)
if debug {
fmt.Fprintf(os.Stderr, "[greywall:linux] Masking sensitive file: %s\n", p)
}
}
}
}
return args
}
// isSystemMountPoint returns true if the path is a top-level system directory
// that gets mounted directly under --tmpfs / (bwrap auto-creates these).
func isSystemMountPoint(path string) bool {
switch path {
case "/usr", "/bin", "/sbin", "/lib", "/lib64", "/etc", "/opt", "/run", "/sys",
"/dev", "/proc", "/tmp",
// macOS
"/System", "/Library", "/Applications", "/private",
// Package managers
"/nix", "/snap", "/usr/local", "/opt/homebrew":
return true
}
return false
}
// WrapCommandLinux wraps a command with Linux bubblewrap sandbox.
// It uses available security features (Landlock, seccomp) with graceful fallback.
func WrapCommandLinux(cfg *config.Config, command string, proxyBridge *ProxyBridge, dnsBridge *DnsBridge, reverseBridge *ReverseBridge, tun2socksPath string, debug bool) (string, error) {
@@ -478,54 +681,29 @@ func WrapCommandLinuxWithOptions(cfg *config.Config, command string, proxyBridge
bwrapArgs = append(bwrapArgs, "--bind", cwd, cwd)
}
// Make XDG_RUNTIME_DIR writable so dconf and other runtime services
// (Wayland, PulseAudio, D-Bus) work inside the sandbox.
// Writes to /run/ are already filtered out by the learning parser.
xdgRuntime := os.Getenv("XDG_RUNTIME_DIR")
if xdgRuntime != "" && fileExists(xdgRuntime) {
bwrapArgs = append(bwrapArgs, "--bind", xdgRuntime, xdgRuntime)
}
}
defaultDenyRead := cfg != nil && cfg.Filesystem.IsDefaultDenyRead()
switch {
case opts.Learning:
// Skip defaultDenyRead logic in learning mode (already set up above)
case defaultDenyRead:
// Deny-by-default mode: start with empty root, then whitelist system paths + CWD
if opts.Debug {
fmt.Fprintf(os.Stderr, "[greywall:linux] DefaultDenyRead mode enabled - tmpfs root with selective mounts\n")
}
bwrapArgs = append(bwrapArgs, buildDenyByDefaultMounts(cfg, cwd, opts.Debug)...)
default:
// Legacy mode: bind entire root filesystem read-only
bwrapArgs = append(bwrapArgs, "--ro-bind", "/", "/")
}
@@ -679,10 +857,20 @@ func WrapCommandLinuxWithOptions(cfg *config.Config, command string, proxyBridge
// subdirectory dangerous files without full tree walks that hang on large dirs.
mandatoryDeny := getMandatoryDenyPaths(cwd)
// In deny-by-default mode, sensitive project files are already masked
// with an empty-file --ro-bind overlay by buildDenyByDefaultMounts().
// Skip them here to avoid overriding that mask with a real ro-bind.
maskedPaths := make(map[string]bool)
if defaultDenyRead {
for _, f := range SensitiveProjectFiles {
maskedPaths[filepath.Join(cwd, f)] = true
}
}
// Deduplicate
seen := make(map[string]bool)
for _, p := range mandatoryDeny {
if !seen[p] && fileExists(p) {
if !seen[p] && fileExists(p) && !maskedPaths[p] {
seen[p] = true
bwrapArgs = append(bwrapArgs, "--ro-bind", p, p)
}
@@ -750,7 +938,7 @@ func WrapCommandLinuxWithOptions(cfg *config.Config, command string, proxyBridge
// Supported by glibc, Go 1.21+, c-ares, and most DNS resolver libraries.
_, _ = tmpResolv.WriteString("nameserver 1.1.1.1\nnameserver 8.8.8.8\noptions use-vc\n")
}
_ = tmpResolv.Close()
dnsRelayResolvConf = tmpResolv.Name()
bwrapArgs = append(bwrapArgs, "--ro-bind", dnsRelayResolvConf, "/etc/resolv.conf")
if opts.Debug {
@@ -788,6 +976,14 @@ func WrapCommandLinuxWithOptions(cfg *config.Config, command string, proxyBridge
fmt.Fprintf(os.Stderr, "[greywall:linux] Skipping Landlock wrapper (running as library, not greywall CLI)\n")
}
// Bind-mount the greywall binary into the sandbox so the Landlock wrapper
// can re-execute it. Without this, running greywall from a directory that
// isn't the CWD (e.g., ~/bin/greywall from /home/user/project) would fail
// because the binary path doesn't exist inside the sandbox.
if useLandlockWrapper && greywallExePath != "" {
bwrapArgs = append(bwrapArgs, "--ro-bind", greywallExePath, greywallExePath)
}
bwrapArgs = append(bwrapArgs, "--", shellPath, "-c")
// Build the inner command that sets up tun2socks and runs the user command
@@ -898,7 +1094,8 @@ sleep 0.3
// after the main command exits; the user can Ctrl+C to stop it.
// A SIGCHLD trap kills strace once its direct child exits, handling
// the common case of background daemons (LSP servers, watchers).
switch {
case opts.Learning && opts.StraceLogPath != "":
innerScript.WriteString(fmt.Sprintf(`# Learning mode: trace filesystem access (foreground for terminal access)
strace -f -qq -I2 -e trace=openat,open,creat,mkdir,mkdirat,unlinkat,renameat,renameat2,symlinkat,linkat -o %s -- %s
GREYWALL_STRACE_EXIT=$?
@@ -910,7 +1107,7 @@ exit $GREYWALL_STRACE_EXIT
`,
ShellQuoteSingle(opts.StraceLogPath), command,
))
case useLandlockWrapper:
// Use Landlock wrapper if available
// Pass config via environment variable (serialized as JSON)
// This ensures allowWrite/denyWrite rules are properly applied
@@ -931,7 +1128,7 @@ exit $GREYWALL_STRACE_EXIT
// Use exec to replace bash with the wrapper (which will exec the command)
innerScript.WriteString(fmt.Sprintf("exec %s\n", ShellQuote(wrapperArgs)))
default:
innerScript.WriteString(command)
innerScript.WriteString("\n")
}
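The inner script above embeds the log path via ShellQuoteSingle before handing it to strace. That helper's body isn't in this diff; single-quote escaping for POSIX shells conventionally uses the close-escape-reopen trick, sketched here under that assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteSingle wraps s in single quotes for a POSIX shell, replacing every
// embedded single quote with '\'' (close the quotes, emit an escaped quote,
// reopen). Sketch of what a helper like ShellQuoteSingle is assumed to do.
func quoteSingle(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
}

func main() {
	fmt.Println(quoteSingle("/tmp/it's a log.txt"))
}
```

Inside single quotes the shell performs no expansion at all, which is why this form is preferred for untrusted paths over double quoting.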

View File

@@ -370,7 +370,7 @@ func suggestInstallCmd(features *LinuxFeatures) string {
}
func readSysctl(name string) string {
data, err := os.ReadFile("/proc/sys/" + name) //nolint:gosec // reading sysctl values - trusted kernel path
if err != nil {
return ""
}

View File

@@ -81,17 +81,55 @@ func ApplyLandlockFromConfig(cfg *config.Config, cwd string, socketPaths []strin
}
}
// Current working directory - read+write access (project directory)
if cwd != "" {
if err := ruleset.AllowReadWrite(cwd); err != nil && debug {
fmt.Fprintf(os.Stderr, "[greywall:landlock] Warning: failed to add cwd read/write path: %v\n", err)
}
}
// Home directory - read access only when not in deny-by-default mode.
// In deny-by-default mode, only specific user tooling paths are allowed,
// not the entire home directory. Landlock can't selectively deny files
// within an allowed directory, so we rely on bwrap mount overlays for
// .env file masking.
defaultDenyRead := cfg != nil && cfg.Filesystem.IsDefaultDenyRead()
if !defaultDenyRead {
if home, err := os.UserHomeDir(); err == nil {
if err := ruleset.AllowRead(home); err != nil && debug {
fmt.Fprintf(os.Stderr, "[greywall:landlock] Warning: failed to add home read path: %v\n", err)
}
}
} else {
// In deny-by-default mode, allow specific user tooling paths
if home, err := os.UserHomeDir(); err == nil {
for _, p := range GetDefaultReadablePaths() {
if strings.HasPrefix(p, home) {
if err := ruleset.AllowRead(p); err != nil && debug {
fmt.Fprintf(os.Stderr, "[greywall:landlock] Warning: failed to add user tooling path %s: %v\n", p, err)
}
}
}
// Shell configs
shellConfigs := []string{".bashrc", ".bash_profile", ".profile", ".zshrc", ".zprofile", ".zshenv", ".inputrc"}
for _, f := range shellConfigs {
p := filepath.Join(home, f)
if err := ruleset.AllowRead(p); err != nil && debug {
if !os.IsNotExist(err) {
fmt.Fprintf(os.Stderr, "[greywall:landlock] Warning: failed to add shell config %s: %v\n", p, err)
}
}
}
// Home caches
homeCaches := []string{".cache", ".npm", ".cargo", ".rustup", ".local", ".config"}
for _, d := range homeCaches {
p := filepath.Join(home, d)
if err := ruleset.AllowRead(p); err != nil && debug {
if !os.IsNotExist(err) {
fmt.Fprintf(os.Stderr, "[greywall:landlock] Warning: failed to add home cache %s: %v\n", p, err)
}
}
}
}
}
@@ -163,7 +201,7 @@ type LandlockRuleset struct {
func NewLandlockRuleset(debug bool) (*LandlockRuleset, error) {
features := DetectLinuxFeatures()
if !features.CanUseLandlock() {
return nil, fmt.Errorf("landlock not available (kernel %d.%d, need 5.13+)",
features.KernelMajor, features.KernelMinor)
}
@@ -400,7 +438,7 @@ func (l *LandlockRuleset) addPathRule(path string, access uint64) error {
// Apply applies the Landlock ruleset to the current process.
func (l *LandlockRuleset) Apply() error {
if !l.initialized {
return fmt.Errorf("landlock ruleset not initialized")
}
// Set NO_NEW_PRIVS first (required for Landlock)

View File

@@ -64,12 +64,12 @@ func (b *ReverseBridge) Cleanup() {}
// WrapCommandLinux returns an error on non-Linux platforms.
func WrapCommandLinux(cfg *config.Config, command string, proxyBridge *ProxyBridge, dnsBridge *DnsBridge, reverseBridge *ReverseBridge, tun2socksPath string, debug bool) (string, error) {
return "", fmt.Errorf("linux sandbox not available on this platform")
}
// WrapCommandLinuxWithOptions returns an error on non-Linux platforms.
func WrapCommandLinuxWithOptions(cfg *config.Config, command string, proxyBridge *ProxyBridge, dnsBridge *DnsBridge, reverseBridge *ReverseBridge, tun2socksPath string, opts LinuxSandboxOptions) (string, error) {
return "", fmt.Errorf("linux sandbox not available on this platform")
}
// StartLinuxMonitor returns nil on non-Linux platforms.

View File

@@ -37,6 +37,7 @@ type MacOSSandboxParams struct {
AllowLocalBinding bool
AllowLocalOutbound bool
DefaultDenyRead bool
Cwd string // Current working directory (for deny-by-default CWD allowlisting)
ReadAllowPaths []string
ReadDenyPaths []string
WriteAllowPaths []string
@@ -44,6 +45,8 @@ type MacOSSandboxParams struct {
AllowPty bool
AllowGitConfig bool
Shell string
DaemonMode bool // When true, pf handles network routing; Seatbelt allows network-outbound
DaemonSocketPath string // Daemon socket to deny access to from sandboxed process
}
// GlobToRegex converts a glob pattern to a regex for macOS sandbox profiles.
@@ -146,13 +149,13 @@ func getTmpdirParent() []string {
}
// generateReadRules generates filesystem read rules for the sandbox profile.
func generateReadRules(defaultDenyRead bool, cwd string, allowPaths, denyPaths []string, logTag string) []string {
var rules []string
if defaultDenyRead {
// When defaultDenyRead is enabled:
// 1. Allow file-read-metadata globally (needed for directory traversal, stat, etc.)
// 2. Allow file-read-data only for system paths + CWD + user-specified allowRead paths
// This lets programs see what files exist but not read their contents.
// Allow metadata operations globally (stat, readdir, etc.) and root dir (for path resolution)
@@ -167,6 +170,44 @@ func generateReadRules(defaultDenyRead bool, allowPaths, denyPaths []string, log
)
}
// Allow reading CWD (full recursive read access)
if cwd != "" {
rules = append(rules,
"(allow file-read-data",
fmt.Sprintf(" (subpath %s))", escapePath(cwd)),
)
// Allow ancestor directory traversal (literal only, so programs can resolve CWD path)
for _, ancestor := range getAncestorDirectories(cwd) {
rules = append(rules,
fmt.Sprintf("(allow file-read-data (literal %s))", escapePath(ancestor)),
)
}
}
// Allow home shell configs and tool caches (read-only)
home, _ := os.UserHomeDir()
if home != "" {
// Shell config files (literal access)
shellConfigs := []string{".bashrc", ".bash_profile", ".profile", ".zshrc", ".zprofile", ".zshenv", ".inputrc"}
for _, f := range shellConfigs {
p := filepath.Join(home, f)
rules = append(rules,
fmt.Sprintf("(allow file-read-data (literal %s))", escapePath(p)),
)
}
// Home tool caches (subpath access for package managers/configs)
homeCaches := []string{".cache", ".npm", ".cargo", ".rustup", ".local", ".config", ".nvm", ".pyenv", ".rbenv", ".asdf"}
for _, d := range homeCaches {
p := filepath.Join(home, d)
rules = append(rules,
"(allow file-read-data",
fmt.Sprintf(" (subpath %s))", escapePath(p)),
)
}
}
// Allow reading data from user-specified paths
for _, pathPattern := range allowPaths {
normalized := NormalizePath(pathPattern)
@@ -184,6 +225,24 @@ func generateReadRules(defaultDenyRead bool, allowPaths, denyPaths []string, log
)
}
}
// Deny sensitive files within CWD (Seatbelt evaluates deny before allow)
if cwd != "" {
for _, f := range SensitiveProjectFiles {
p := filepath.Join(cwd, f)
rules = append(rules,
"(deny file-read*",
fmt.Sprintf(" (literal %s)", escapePath(p)),
fmt.Sprintf(" (with message %q))", logTag),
)
}
// Also deny .env.* pattern via regex
rules = append(rules,
"(deny file-read*",
fmt.Sprintf(" (regex %s)", escapePath("^"+regexp.QuoteMeta(cwd)+"/\\.env\\..*$")),
fmt.Sprintf(" (with message %q))", logTag),
)
}
} else {
// Allow all reads by default
rules = append(rules, "(allow file-read*)")
@@ -220,9 +279,19 @@ func generateReadRules(defaultDenyRead bool, allowPaths, denyPaths []string, log
}
// generateWriteRules generates filesystem write rules for the sandbox profile.
// When cwd is non-empty, it is automatically included in the write allow paths.
func generateWriteRules(cwd string, allowPaths, denyPaths []string, allowGitConfig bool, logTag string) []string {
var rules []string
// Auto-allow CWD for writes (project directory should be writable)
if cwd != "" {
rules = append(rules,
"(allow file-write*",
fmt.Sprintf(" (subpath %s)", escapePath(cwd)),
fmt.Sprintf(" (with message %q))", logTag),
)
}
// Allow TMPDIR parent on macOS
for _, tmpdirParent := range getTmpdirParent() {
normalized := NormalizePath(tmpdirParent)
@@ -254,8 +323,11 @@ func generateWriteRules(allowPaths, denyPaths []string, allowGitConfig bool, log
}
// Combine user-specified and mandatory deny patterns
mandatoryCwd := cwd
if mandatoryCwd == "" {
mandatoryCwd, _ = os.Getwd()
}
mandatoryDeny := GetMandatoryDenyPatterns(mandatoryCwd, allowGitConfig)
allDenyPaths := make([]string, 0, len(denyPaths)+len(mandatoryDeny))
allDenyPaths = append(allDenyPaths, denyPaths...)
allDenyPaths = append(allDenyPaths, mandatoryDeny...)
@@ -352,8 +424,8 @@ func GenerateSandboxProfile(params MacOSSandboxParams) string {
// Header
profile.WriteString("(version 1)\n")
fmt.Fprintf(&profile, "(deny default (with message %q))\n\n", logTag)
fmt.Fprintf(&profile, "; LogTag: %s\n\n", logTag)
// Essential permissions - based on Chrome sandbox policy
profile.WriteString(`; Essential permissions - based on Chrome sandbox policy
@@ -484,6 +556,7 @@ func GenerateSandboxProfile(params MacOSSandboxParams) string {
(allow file-ioctl (literal "/dev/urandom"))
(allow file-ioctl (literal "/dev/dtracehelper"))
(allow file-ioctl (literal "/dev/tty"))
(allow file-ioctl (regex #"^/dev/ttys"))
(allow file-ioctl file-read-data file-write-data
(require-all
@@ -492,13 +565,34 @@ func GenerateSandboxProfile(params MacOSSandboxParams) string {
)
)
; Inherited terminal access (TUI apps need read/write on the actual PTY device)
(allow file-read-data file-write-data (regex #"^/dev/ttys"))
`)
// Network rules
profile.WriteString("; Network\n")
switch {
case params.DaemonMode:
// In daemon mode, pf handles network routing: all traffic from the
// _greywall user is routed through utun → tun2socks → proxy.
// Seatbelt must allow network-outbound so packets reach pf.
// The proxy allowlist is enforced by the external SOCKS5 proxy.
profile.WriteString("(allow network-outbound)\n")
// Allow local binding for servers if configured.
if params.AllowLocalBinding {
profile.WriteString(`(allow network-bind (local ip "localhost:*"))
(allow network-inbound (local ip "localhost:*"))
`)
}
// Explicitly deny access to the daemon socket to prevent the
// sandboxed process from manipulating daemon sessions.
if params.DaemonSocketPath != "" {
fmt.Fprintf(&profile, "(deny network-outbound (remote unix-socket (path-literal %s)))\n", escapePath(params.DaemonSocketPath))
}
case !params.NeedsNetworkRestriction:
profile.WriteString("(allow network*)\n")
default:
if params.AllowLocalBinding {
// Allow binding and inbound connections on localhost (for servers)
profile.WriteString(`(allow network-bind (local ip "localhost:*"))
@@ -516,44 +610,37 @@ func GenerateSandboxProfile(params MacOSSandboxParams) string {
} else if len(params.AllowUnixSockets) > 0 {
for _, socketPath := range params.AllowUnixSockets {
normalized := NormalizePath(socketPath)
fmt.Fprintf(&profile, "(allow network* (subpath %s))\n", escapePath(normalized))
}
}
// Allow outbound to the external proxy host:port
if params.ProxyHost != "" && params.ProxyPort != "" {
fmt.Fprintf(&profile, "(allow network-outbound (remote ip \"%s:%s\"))\n", params.ProxyHost, params.ProxyPort)
}
}
profile.WriteString("\n")
// Read rules
profile.WriteString("; File read\n")
for _, rule := range generateReadRules(params.DefaultDenyRead, params.Cwd, params.ReadAllowPaths, params.ReadDenyPaths, logTag) {
profile.WriteString(rule + "\n")
}
profile.WriteString("\n")
// Write rules
profile.WriteString("; File write\n")
for _, rule := range generateWriteRules(params.Cwd, params.WriteAllowPaths, params.WriteDenyPaths, params.AllowGitConfig, logTag) {
profile.WriteString(rule + "\n")
}
// PTY allocation support (creating new pseudo-terminals)
if params.AllowPty {
profile.WriteString(`
; Pseudo-terminal allocation (pty) support
(allow pseudo-tty)
(allow file-ioctl (literal "/dev/ptmx"))
(allow file-read* file-write* (literal "/dev/ptmx"))
`)
}
@@ -561,7 +648,11 @@ func GenerateSandboxProfile(params MacOSSandboxParams) string {
}
// WrapCommandMacOS wraps a command with macOS sandbox restrictions.
// When daemonSession is non-nil, the command runs as the _greywall user
// with network-outbound allowed (pf routes traffic through utun → proxy).
func WrapCommandMacOS(cfg *config.Config, command string, exposedPorts []int, daemonSession *DaemonSession, debug bool) (string, error) {
cwd, _ := os.Getwd()
// Build allow paths: default + configured
allowPaths := append(GetDefaultWritePaths(), cfg.Filesystem.AllowWrite...)
@@ -585,9 +676,13 @@ func WrapCommandMacOS(cfg *config.Config, command string, exposedPorts []int, de
}
}
// Determine if we're using daemon-mode (transparent proxying via pf + utun)
daemonMode := daemonSession != nil
// Restrict network unless proxy is configured to an external host
// If no proxy: block all outbound. If proxy: allow outbound only to proxy.
// In daemon mode, network restriction is handled by pf, not Seatbelt.
needsNetworkRestriction := !daemonMode
params := MacOSSandboxParams{
Command: command,
@@ -599,13 +694,16 @@ func WrapCommandMacOS(cfg *config.Config, command string, exposedPorts []int, de
AllowAllUnixSockets: cfg.Network.AllowAllUnixSockets,
AllowLocalBinding: allowLocalBinding,
AllowLocalOutbound: allowLocalOutbound,
DefaultDenyRead: cfg.Filesystem.IsDefaultDenyRead(),
Cwd: cwd,
ReadAllowPaths: cfg.Filesystem.AllowRead,
ReadDenyPaths: cfg.Filesystem.DenyRead,
WriteAllowPaths: allowPaths,
WriteDenyPaths: cfg.Filesystem.DenyWrite,
AllowPty: cfg.AllowPty,
AllowGitConfig: cfg.Filesystem.AllowGitConfig,
DaemonMode: daemonMode,
DaemonSocketPath: "/var/run/greywall.sock",
}
if debug && len(exposedPorts) > 0 {
@@ -614,6 +712,10 @@ func WrapCommandMacOS(cfg *config.Config, command string, exposedPorts []int, de
if debug && allowLocalBinding && !allowLocalOutbound {
fmt.Fprintf(os.Stderr, "[greywall:macos] Blocking localhost outbound (AllowLocalOutbound=false)\n")
}
if debug && daemonMode {
fmt.Fprintf(os.Stderr, "[greywall:macos] Daemon mode: transparent proxying via pf + utun (group=%s, device=%s)\n",
daemonSession.SandboxGroup, daemonSession.TunDevice)
}
profile := GenerateSandboxProfile(params)
@@ -627,14 +729,30 @@ func WrapCommandMacOS(cfg *config.Config, command string, exposedPorts []int, de
return "", fmt.Errorf("shell %q not found: %w", shell, err)
}
var parts []string
if daemonMode {
// In daemon mode: run as the real user but with EGID=_greywall via sudo.
// pf routes all traffic from group _greywall through utun → tun2socks → proxy.
// Using -u #<uid> preserves the user's identity (home dir, SSH keys, etc.)
// while -g _greywall sets the effective GID for pf matching.
//
// sudo resets the environment, so we use `env` after sudo to re-inject
// terminal vars (TERM, COLORTERM, etc.) needed for TUI apps and proxy vars.
uid := fmt.Sprintf("#%d", os.Getuid())
proxyEnvs := GenerateProxyEnvVars(cfg.Network.ProxyURL)
termEnvs := getTerminalEnvVars()
parts = append(parts, "sudo", "-u", uid, "-g", daemonSession.SandboxGroup, "env")
parts = append(parts, proxyEnvs...)
parts = append(parts, termEnvs...)
parts = append(parts, "sandbox-exec", "-p", profile, shellPath, "-c", command)
} else {
// Non-daemon mode: use proxy env vars for best-effort proxying.
proxyEnvs := GenerateProxyEnvVars(cfg.Network.ProxyURL)
parts = append(parts, "env")
parts = append(parts, proxyEnvs...)
parts = append(parts, "sandbox-exec", "-p", profile, shellPath, "-c", command)
}
return ShellQuote(parts), nil
}

View File

@@ -1,6 +1,7 @@
package sandbox
import (
"fmt"
"strings"
"testing"
@@ -108,7 +109,8 @@ func buildMacOSParamsForTest(cfg *config.Config) MacOSSandboxParams {
AllowAllUnixSockets: cfg.Network.AllowAllUnixSockets,
AllowLocalBinding: allowLocalBinding,
AllowLocalOutbound: allowLocalOutbound,
DefaultDenyRead: cfg.Filesystem.DefaultDenyRead,
DefaultDenyRead: cfg.Filesystem.IsDefaultDenyRead(),
Cwd: "/tmp/test-project",
ReadAllowPaths: cfg.Filesystem.AllowRead,
ReadDenyPaths: cfg.Filesystem.DenyRead,
WriteAllowPaths: allowPaths,
@@ -175,38 +177,46 @@ func TestMacOS_DefaultDenyRead(t *testing.T) {
tests := []struct {
name string
defaultDenyRead bool
cwd string
allowRead []string
wantContainsBlanketAllow bool
wantContainsMetadataAllow bool
wantContainsSystemAllows bool
wantContainsUserAllowRead bool
wantContainsCwdAllow bool
}{
{
name: "default mode - blanket allow read",
name: "legacy mode - blanket allow read",
defaultDenyRead: false,
cwd: "/home/user/project",
allowRead: nil,
wantContainsBlanketAllow: true,
wantContainsMetadataAllow: false,
wantContainsSystemAllows: false,
wantContainsUserAllowRead: false,
wantContainsCwdAllow: false,
},
{
name: "defaultDenyRead enabled - metadata allow, system data allows",
name: "defaultDenyRead enabled - metadata allow, system data allows, CWD allow",
defaultDenyRead: true,
cwd: "/home/user/project",
allowRead: nil,
wantContainsBlanketAllow: false,
wantContainsMetadataAllow: true,
wantContainsSystemAllows: true,
wantContainsUserAllowRead: false,
wantContainsCwdAllow: true,
},
{
name: "defaultDenyRead with allowRead paths",
defaultDenyRead: true,
allowRead: []string{"/home/user/project"},
cwd: "/home/user/project",
allowRead: []string{"/home/user/other"},
wantContainsBlanketAllow: false,
wantContainsMetadataAllow: true,
wantContainsSystemAllows: true,
wantContainsUserAllowRead: true,
wantContainsCwdAllow: true,
},
}
@@ -215,6 +225,7 @@ func TestMacOS_DefaultDenyRead(t *testing.T) {
params := MacOSSandboxParams{
Command: "echo test",
DefaultDenyRead: tt.defaultDenyRead,
Cwd: tt.cwd,
ReadAllowPaths: tt.allowRead,
}
@@ -236,6 +247,13 @@ func TestMacOS_DefaultDenyRead(t *testing.T) {
t.Errorf("system path allows = %v, want %v\nProfile:\n%s", hasSystemAllows, tt.wantContainsSystemAllows, profile)
}
if tt.wantContainsCwdAllow && tt.cwd != "" {
hasCwdAllow := strings.Contains(profile, fmt.Sprintf(`(subpath %q)`, tt.cwd))
if !hasCwdAllow {
t.Errorf("CWD path %q not found in profile", tt.cwd)
}
}
if tt.wantContainsUserAllowRead && len(tt.allowRead) > 0 {
hasUserAllow := strings.Contains(profile, tt.allowRead[0])
if !hasUserAllow {


@@ -5,9 +5,20 @@ import (
"os"
"gitea.app.monadical.io/monadical/greywall/internal/config"
"gitea.app.monadical.io/monadical/greywall/internal/daemon"
"gitea.app.monadical.io/monadical/greywall/internal/platform"
)
// DaemonSession holds the state from an active daemon session on macOS.
// When a daemon session is active, traffic is routed through pf + utun
// instead of using env-var proxy settings.
type DaemonSession struct {
SessionID string
TunDevice string
SandboxUser string
SandboxGroup string
}
// Manager handles sandbox initialization and command wrapping.
type Manager struct {
config *config.Config
@@ -19,9 +30,16 @@ type Manager struct {
debug bool
monitor bool
initialized bool
learning bool // learning mode: permissive sandbox with strace
straceLogPath string // host-side temp file for strace output
learning bool // learning mode: permissive sandbox with strace/eslogger
straceLogPath string // host-side temp file for strace output (Linux)
commandName string // name of the command being learned
// macOS daemon session fields
daemonClient *daemon.Client
daemonSession *DaemonSession
// macOS learning mode fields
learningID string // daemon learning session ID
learningLog string // eslogger log file path
learningRootPID int // root PID of the command being learned
}
// NewManager creates a new sandbox manager.
@@ -63,11 +81,58 @@ func (m *Manager) Initialize() error {
return fmt.Errorf("sandbox is not supported on platform: %s", platform.Detect())
}
// On macOS in learning mode, use the daemon for eslogger tracing only.
// No TUN/pf/DNS session needed — the command runs unsandboxed.
if platform.Detect() == platform.MacOS && m.learning {
client := daemon.NewClient(daemon.DefaultSocketPath, m.debug)
if !client.IsRunning() {
return fmt.Errorf("greywall daemon is not running (required for macOS learning mode)\n\n" +
" Install and start: sudo greywall daemon install\n" +
" Check status: greywall daemon status")
}
m.logDebug("Daemon is running, requesting learning session")
resp, err := client.StartLearning()
if err != nil {
return fmt.Errorf("failed to start learning session: %w", err)
}
m.daemonClient = client
m.learningID = resp.LearningID
m.learningLog = resp.LearningLog
m.logDebug("Learning session started: id=%s log=%s", m.learningID, m.learningLog)
m.initialized = true
return nil
}
// On macOS, the daemon is required for transparent proxying.
// Without it, env-var proxying is unreliable (only works for tools that
// honor HTTP_PROXY) and gives users a false sense of security.
if platform.Detect() == platform.MacOS && m.config.Network.ProxyURL != "" {
client := daemon.NewClient(daemon.DefaultSocketPath, m.debug)
if !client.IsRunning() {
return fmt.Errorf("greywall daemon is not running (required for macOS network sandboxing)\n\n" +
" Install and start: sudo greywall daemon install\n" +
" Check status: greywall daemon status")
}
m.logDebug("Daemon is running, requesting session")
resp, err := client.CreateSession(m.config.Network.ProxyURL, m.config.Network.DnsAddr)
if err != nil {
return fmt.Errorf("failed to create daemon session: %w", err)
}
m.daemonClient = client
m.daemonSession = &DaemonSession{
SessionID: resp.SessionID,
TunDevice: resp.TunDevice,
SandboxUser: resp.SandboxUser,
SandboxGroup: resp.SandboxGroup,
}
m.logDebug("Daemon session created: id=%s device=%s user=%s group=%s", resp.SessionID, resp.TunDevice, resp.SandboxUser, resp.SandboxGroup)
}
// On Linux, set up proxy bridge and tun2socks if proxy is configured
if platform.Detect() == platform.Linux {
if m.config.Network.ProxyURL != "" {
// Extract embedded tun2socks binary
tun2socksPath, err := extractTun2Socks()
tun2socksPath, err := ExtractTun2Socks()
if err != nil {
m.logDebug("Failed to extract tun2socks: %v (will fall back to env-var proxying)", err)
} else {
@@ -78,7 +143,7 @@ func (m *Manager) Initialize() error {
bridge, err := NewProxyBridge(m.config.Network.ProxyURL, m.debug)
if err != nil {
if m.tun2socksPath != "" {
os.Remove(m.tun2socksPath)
_ = os.Remove(m.tun2socksPath)
}
return fmt.Errorf("failed to initialize proxy bridge: %w", err)
}
@@ -90,7 +155,7 @@ func (m *Manager) Initialize() error {
if err != nil {
m.proxyBridge.Cleanup()
if m.tun2socksPath != "" {
os.Remove(m.tun2socksPath)
_ = os.Remove(m.tun2socksPath)
}
return fmt.Errorf("failed to initialize DNS bridge: %w", err)
}
@@ -108,7 +173,7 @@ func (m *Manager) Initialize() error {
m.proxyBridge.Cleanup()
}
if m.tun2socksPath != "" {
os.Remove(m.tun2socksPath)
_ = os.Remove(m.tun2socksPath)
}
return fmt.Errorf("failed to initialize reverse bridge: %w", err)
}
@@ -148,7 +213,11 @@ func (m *Manager) WrapCommand(command string) (string, error) {
plat := platform.Detect()
switch plat {
case platform.MacOS:
return WrapCommandMacOS(m.config, command, m.exposedPorts, m.debug)
if m.learning {
// In learning mode, run command directly (no sandbox-exec wrapping)
return command, nil
}
return WrapCommandMacOS(m.config, command, m.exposedPorts, m.daemonSession, m.debug)
case platform.Linux:
if m.learning {
return m.wrapCommandLearning(command)
@@ -166,7 +235,7 @@ func (m *Manager) wrapCommandLearning(command string) (string, error) {
if err != nil {
return "", fmt.Errorf("failed to create strace log file: %w", err)
}
tmpFile.Close()
_ = tmpFile.Close()
m.straceLogPath = tmpFile.Name()
m.logDebug("Strace log file: %s", m.straceLogPath)
@@ -181,26 +250,42 @@ func (m *Manager) wrapCommandLearning(command string) (string, error) {
})
}
// GenerateLearnedTemplate generates a config template from the strace log collected during learning.
// GenerateLearnedTemplate generates a config template from the trace log collected during learning.
// Platform-specific implementation in manager_linux.go / manager_darwin.go.
func (m *Manager) GenerateLearnedTemplate(cmdName string) (string, error) {
if m.straceLogPath == "" {
return "", fmt.Errorf("no strace log available (was learning mode enabled?)")
}
return m.generateLearnedTemplatePlatform(cmdName)
}
templatePath, err := GenerateLearnedTemplate(m.straceLogPath, cmdName, m.debug)
if err != nil {
return "", err
}
// Clean up strace log since we've processed it
os.Remove(m.straceLogPath)
m.straceLogPath = ""
return templatePath, nil
// SetLearningRootPID records the root PID of the command being learned.
// The eslogger log parser uses this to build the process tree from fork events.
func (m *Manager) SetLearningRootPID(pid int) {
m.learningRootPID = pid
m.logDebug("Set learning root PID: %d", pid)
}
// Cleanup stops the proxies and cleans up resources.
func (m *Manager) Cleanup() {
// Stop macOS learning session if active
if m.daemonClient != nil && m.learningID != "" {
m.logDebug("Stopping learning session %s", m.learningID)
if err := m.daemonClient.StopLearning(m.learningID); err != nil {
m.logDebug("Warning: failed to stop learning session: %v", err)
}
m.learningID = ""
}
// Destroy macOS daemon session if active.
if m.daemonClient != nil && m.daemonSession != nil {
m.logDebug("Destroying daemon session %s", m.daemonSession.SessionID)
if err := m.daemonClient.DestroySession(m.daemonSession.SessionID); err != nil {
m.logDebug("Warning: failed to destroy daemon session: %v", err)
}
m.daemonSession = nil
}
// Clear daemon client after all daemon interactions
m.daemonClient = nil
if m.reverseBridge != nil {
m.reverseBridge.Cleanup()
}
@@ -211,12 +296,16 @@ func (m *Manager) Cleanup() {
m.proxyBridge.Cleanup()
}
if m.tun2socksPath != "" {
os.Remove(m.tun2socksPath)
_ = os.Remove(m.tun2socksPath)
}
if m.straceLogPath != "" {
os.Remove(m.straceLogPath)
_ = os.Remove(m.straceLogPath)
m.straceLogPath = ""
}
if m.learningLog != "" {
_ = os.Remove(m.learningLog)
m.learningLog = ""
}
m.logDebug("Sandbox manager cleaned up")
}


@@ -0,0 +1,42 @@
//go:build darwin
package sandbox
import (
"fmt"
"os"
)
// generateLearnedTemplatePlatform stops the daemon eslogger session,
// parses the eslogger log with PID-based process tree filtering,
// and generates a template (macOS).
func (m *Manager) generateLearnedTemplatePlatform(cmdName string) (string, error) {
if m.learningLog == "" {
return "", fmt.Errorf("no eslogger log available (was learning mode enabled?)")
}
// Stop daemon learning session
if m.daemonClient != nil && m.learningID != "" {
if err := m.daemonClient.StopLearning(m.learningID); err != nil {
m.logDebug("Warning: failed to stop learning session: %v", err)
}
}
// Parse eslogger log with root PID for process tree tracking
result, err := ParseEsloggerLog(m.learningLog, m.learningRootPID, m.debug)
if err != nil {
return "", fmt.Errorf("failed to parse eslogger log: %w", err)
}
templatePath, err := GenerateLearnedTemplate(result, cmdName, m.debug)
if err != nil {
return "", err
}
// Clean up eslogger log
_ = os.Remove(m.learningLog)
m.learningLog = ""
m.learningID = ""
return templatePath, nil
}


@@ -0,0 +1,31 @@
//go:build linux
package sandbox
import (
"fmt"
"os"
)
// generateLearnedTemplatePlatform parses the strace log and generates a template (Linux).
func (m *Manager) generateLearnedTemplatePlatform(cmdName string) (string, error) {
if m.straceLogPath == "" {
return "", fmt.Errorf("no strace log available (was learning mode enabled?)")
}
result, err := ParseStraceLog(m.straceLogPath, m.debug)
if err != nil {
return "", fmt.Errorf("failed to parse strace log: %w", err)
}
templatePath, err := GenerateLearnedTemplate(result, cmdName, m.debug)
if err != nil {
return "", err
}
// Clean up strace log since we've processed it
_ = os.Remove(m.straceLogPath)
m.straceLogPath = ""
return templatePath, nil
}


@@ -0,0 +1,10 @@
//go:build !linux && !darwin
package sandbox
import "fmt"
// generateLearnedTemplatePlatform returns an error on unsupported platforms.
func (m *Manager) generateLearnedTemplatePlatform(cmdName string) (string, error) {
return "", fmt.Errorf("learning mode is not supported on this platform")
}


@@ -66,7 +66,7 @@ func (m *LogMonitor) Start() error {
for scanner.Scan() {
line := scanner.Text()
if violation := parseViolation(line); violation != "" {
fmt.Fprintf(os.Stderr, "%s\n", violation)
fmt.Fprintf(os.Stderr, "%s\n", violation) //nolint:gosec // logging to stderr, not web output
}
}
}()
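`parseViolation` itself is not shown in this diff; a hypothetical version might look like the sketch below, which matches sandbox denial lines of the rough form `Sandbox: curl(123) deny(1) file-read-data /etc/hosts`. The line format and the `[greywall]` prefix are assumptions for illustration, not the shipped behavior.

```go
package main

import (
	"fmt"
	"regexp"
)

// denyRe matches an assumed unified-log sandbox denial format:
// process name, pid, denied operation, and target path.
var denyRe = regexp.MustCompile(`Sandbox: (\S+)\((\d+)\) deny\(\d+\) (\S+) (.*)`)

// parseViolation returns a human-readable summary of a denial line,
// or "" for lines that are not sandbox denials.
func parseViolation(line string) string {
	m := denyRe.FindStringSubmatch(line)
	if m == nil {
		return ""
	}
	return fmt.Sprintf("[greywall] %s (pid %s) denied %s on %s", m[1], m[2], m[3], m[4])
}

func main() {
	fmt.Println(parseViolation("Sandbox: curl(123) deny(1) file-read-data /etc/hosts"))
}
```

Returning `""` for non-matching lines is what lets the scanner loop above print only actual violations.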


@@ -13,9 +13,9 @@ import (
//go:embed bin/tun2socks-linux-*
var tun2socksFS embed.FS
// extractTun2Socks writes the embedded tun2socks binary to a temp file and returns its path.
// ExtractTun2Socks writes the embedded tun2socks binary to a temp file and returns its path.
// The caller is responsible for removing the file when done.
func extractTun2Socks() (string, error) {
func ExtractTun2Socks() (string, error) {
var arch string
switch runtime.GOARCH {
case "amd64":
@@ -38,14 +38,14 @@ func extractTun2Socks() (string, error) {
}
if _, err := tmpFile.Write(data); err != nil {
tmpFile.Close()
os.Remove(tmpFile.Name())
_ = tmpFile.Close()
_ = os.Remove(tmpFile.Name())
return "", fmt.Errorf("tun2socks: failed to write binary: %w", err)
}
tmpFile.Close()
_ = tmpFile.Close()
if err := os.Chmod(tmpFile.Name(), 0o755); err != nil {
os.Remove(tmpFile.Name())
if err := os.Chmod(tmpFile.Name(), 0o755); err != nil { //nolint:gosec // executable binary needs execute permission
_ = os.Remove(tmpFile.Name())
return "", fmt.Errorf("tun2socks: failed to make executable: %w", err)
}


@@ -0,0 +1,53 @@
//go:build darwin
package sandbox
import (
"embed"
"fmt"
"io/fs"
"os"
"runtime"
)
//go:embed bin/tun2socks-darwin-*
var tun2socksFS embed.FS
// ExtractTun2Socks writes the embedded tun2socks binary to a temp file and returns its path.
// The caller is responsible for removing the file when done.
func ExtractTun2Socks() (string, error) {
var arch string
switch runtime.GOARCH {
case "amd64":
arch = "amd64"
case "arm64":
arch = "arm64"
default:
return "", fmt.Errorf("tun2socks: unsupported architecture %s", runtime.GOARCH)
}
name := fmt.Sprintf("bin/tun2socks-darwin-%s", arch)
data, err := fs.ReadFile(tun2socksFS, name)
if err != nil {
return "", fmt.Errorf("tun2socks: embedded binary not found for %s: %w", arch, err)
}
tmpFile, err := os.CreateTemp("", "greywall-tun2socks-*")
if err != nil {
return "", fmt.Errorf("tun2socks: failed to create temp file: %w", err)
}
if _, err := tmpFile.Write(data); err != nil {
_ = tmpFile.Close()
_ = os.Remove(tmpFile.Name()) //nolint:gosec // path from os.CreateTemp, not user input
return "", fmt.Errorf("tun2socks: failed to write binary: %w", err)
}
_ = tmpFile.Close()
if err := os.Chmod(tmpFile.Name(), 0o755); err != nil { //nolint:gosec // executable binary needs execute permission
_ = os.Remove(tmpFile.Name()) //nolint:gosec // path from os.CreateTemp, not user input
return "", fmt.Errorf("tun2socks: failed to make executable: %w", err)
}
return tmpFile.Name(), nil
}


@@ -1,10 +1,10 @@
//go:build !linux
//go:build !linux && !darwin
package sandbox
import "fmt"
// extractTun2Socks is not available on non-Linux platforms.
func extractTun2Socks() (string, error) {
return "", fmt.Errorf("tun2socks is only available on Linux")
// ExtractTun2Socks is not available on unsupported platforms.
func ExtractTun2Socks() (string, error) {
return "", fmt.Errorf("tun2socks is only available on Linux and macOS")
}


@@ -86,6 +86,31 @@ func GenerateProxyEnvVars(proxyURL string) []string {
return envVars
}
// getTerminalEnvVars returns KEY=VALUE entries for terminal-related environment
// variables that are set in the current process. These must be re-injected after
// sudo (which resets the environment) so that TUI apps can detect terminal
// capabilities, size, and color support.
func getTerminalEnvVars() []string {
termVars := []string{
"TERM",
"COLORTERM",
"COLUMNS",
"LINES",
"TERMINFO",
"TERMINFO_DIRS",
"LANG",
"LC_ALL",
"LC_CTYPE",
}
var envs []string
for _, key := range termVars {
if val := os.Getenv(key); val != "" {
envs = append(envs, key+"="+val)
}
}
return envs
}
// EncodeSandboxedCommand encodes a command for sandbox monitoring.
func EncodeSandboxedCommand(command string) string {
if len(command) > 100 {


@@ -149,5 +149,5 @@ git push origin "$NEW_VERSION"
echo ""
info "✓ Released $NEW_VERSION"
info "GitHub Actions will now build and publish the release."
info "Gitea Actions will now build and publish the release."
info "Watch progress at: https://gitea.app.monadical.io/monadical/greywall/actions"