diff --git a/KNOWN_ISSUES.md b/KNOWN_ISSUES.md index a3e0487..6af6b4d 100644 --- a/KNOWN_ISSUES.md +++ b/KNOWN_ISSUES.md @@ -13,7 +13,7 @@ Technical debt identified during codebase analysis. Address these before adding **Discovered**: 2026-03-24 during v0.13.0 pre-release audit **Files exceeding 400-line limit:** -- `backends/openshift/sync.py` — 595 lines +- `backends/openshift/sync.py` — dead code for OpenShift (kept for Podman base class) - `cli/commands.py` — 580 lines - `backends/podman/backend.py` — 504 lines - `backends/openshift/proxy.py` — 484 lines @@ -23,7 +23,7 @@ Technical debt identified during codebase analysis. Address these before adding **Methods exceeding 50-line limit:** - `workflow.py` — `harvest_session()` (~102 lines), `status_sessions()` (~84), `reset_session()` (~72) - `cli/commands.py` — `session_cp()` (~75 lines) -- `backends/openshift/sync.py` — `_sync_agent_config()` (~90 lines) +- `backends/openshift/sync.py` — dead code (see above) - `backends/podman/backend.py` — `create_session()` (~95 lines) **Classes exceeding 20-method limit:** @@ -83,14 +83,8 @@ Declarative resources (already applied as JSON via `oc apply`): - 3 NetworkPolicies (agent egress, proxy egress, proxy ingress) — `backends/openshift/proxy.py` - 1 StatefulSet with PVC template (agent pod, runs `tini -- sleep infinity`) — `backends/openshift/resources.py` -Post-apply imperative steps (the blocking problem): -- Poll `oc get pod` for readiness -- `oc exec mkdir -p /credentials` -- `oc cp` stub GCP ADC, gitconfig, gitignore, sandbox config script -- `oc rsync` agent config directory (~/.claude/) -- `oc exec` jq to rewrite plugin install paths -- `oc exec touch /credentials/.ready` (signals entrypoint to proceed) -- `oc exec entrypoint-session.sh` (starts agent headless) +Post-apply imperative steps (resolved — config now mounted via ConfigMap): +- Poll `oc get pod` for readiness (still present, but standard K8s usage) Build resources (shared, coupled to create): - 
BuildConfig + ImageStream — `backends/openshift/build.py` (binary build from local dir) @@ -99,13 +93,13 @@ Build resources (shared, coupled to create): **Gap 1 — No manifest export layer.** Each resource builder calls `oc apply -f -` inline. There is no way to collect all resource specs and write them to disk as YAML. Fix: add a `ManifestCollector` that accumulates resource dicts and can either apply them or write to a directory. Resource builders return dicts instead of applying directly. -**Gap 2 — Config injected into running pods via `oc cp`/`oc exec`.** `sync.py:ConfigSyncer.sync_full_config()` pushes files into a `/credentials/` tmpfs mount after the pod starts. The entrypoint polls `/credentials/.ready` for 300 seconds. Fix: prepare the config directory locally before container start, then mount it as a volume (ConfigMap in K8s, bind mount in Podman). The entrypoint runs directly with config already present — same code path for both backends, no conditional branching needed. +**Gap 2 — Config injected into running pods via `oc cp`/`oc exec`.** (Resolved) All config files (stub GCP ADC, gitconfig user.name/email, sandbox config script) are now packaged into a ConfigMap and mounted at `/credentials` before the container starts. No `oc cp`/`oc exec` is needed. Cursor auth and global gitignore syncing were removed entirely. **Gap 3 — Secrets created inline during `paude create`.** CA cert is generated via openssl and credentials are gathered from the host environment, both stored as K8s Secrets during `paude create`. Fix: users pre-create secrets out-of-band (`oc create secret`, sealed-secrets, ESO, vault) and pass names via `--ca-secret` / `--creds-secret` flags. CA generation becomes a helper command (`paude setup-proxy-ca`). Paude manifests just reference secret names, never contain secret data. **Gap 4 — Image builds coupled to session creation.** `build.py` creates BuildConfig/ImageStream and runs `oc start-build --from-dir=...` which uploads local files. 
Fix: separate `paude build` from `paude create`. Emitted YAML references a pre-built image by tag or digest. -**Gap 5 — Container starts with `sleep infinity`, agent launched via `oc exec`.** The StatefulSet command is `tini -- sleep infinity` because the entrypoint can't run until config is pushed. Fix: once config is mounted as volumes (Gap 2), the StatefulSet command becomes `entrypoint-session.sh` directly. No `sleep infinity` + `oc exec` dance. +**Gap 5 — Container starts with `sleep infinity`, agent launched via `oc exec`.** (Resolved) The StatefulSet command now runs `entrypoint-session.sh` directly under `tini` with `PAUDE_HEADLESS=1`; a trailing `exec sleep infinity` keeps the container alive after the headless entrypoint exits. Config is pre-mounted via ConfigMap (Gap 2), so no `oc exec` launch step remains. **Gap 6 — Interactive operations (`oc exec`, `oc port-forward`, connect).** No fix needed. These are operational commands that work against running resources. They are orthogonal to GitOps — declarative manages the desired state, interactive commands are for human access.
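The `ManifestCollector` proposed under Gap 1 could look roughly like the following. This is a minimal sketch, not paude's actual API: the class name comes from the gap description, but `add`, `write_dir`, and `apply_all` are assumed method names, and JSON output (a valid YAML subset) stands in to avoid a YAML dependency.

```python
# Hypothetical sketch of the ManifestCollector from Gap 1: resource
# builders return dicts; the collector either applies them via
# `oc apply -f -` or writes them to a directory for GitOps.
import json
import subprocess
from pathlib import Path
from typing import Any


class ManifestCollector:
    def __init__(self) -> None:
        self._resources: list[dict[str, Any]] = []

    def add(self, resource: dict[str, Any]) -> None:
        """Collect a resource dict instead of applying it immediately."""
        self._resources.append(resource)

    def write_dir(self, out_dir: Path) -> list[Path]:
        """Write each resource as <kind>-<name>.json (JSON is valid YAML)."""
        out_dir.mkdir(parents=True, exist_ok=True)
        paths: list[Path] = []
        for res in self._resources:
            name = res.get("metadata", {}).get("name", "unnamed")
            path = out_dir / f"{res['kind'].lower()}-{name}.json"
            path.write_text(json.dumps(res, indent=2))
            paths.append(path)
        return paths

    def apply_all(self) -> None:
        """Pipe every collected resource to `oc apply -f -` (imperative path)."""
        for res in self._resources:
            subprocess.run(
                ["oc", "apply", "-f", "-"],
                input=json.dumps(res),
                text=True,
                check=True,
            )
```

With this shape, `paude create` can default to `apply_all()` while a `--export-manifests <dir>` flag would call `write_dir()` instead, which is the GitOps split the gap describes.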
@@ -128,13 +122,13 @@ Phase 3 — Externalize secrets (low-medium effort, high value): - Paude manifests reference secret names, never generate secret data inline - Existing inline secret creation remains as default for backward compatibility -Phase 4 — Config as mounted volumes (high effort, high value): -- Prepare config directory locally before container start -- Package as ConfigMap (K8s) or bind mount (Podman) — same entrypoint for both -- Move plugin path rewriting from jq/oc-exec to pure Python at prep time -- Remove `sleep infinity` + `oc exec` pattern; entrypoint runs directly as container command -- Remove `/credentials/.ready` polling from entrypoint (config always present at start) -- Files: `sync.py`, `resources.py`, `entrypoint-session.sh`, Podman backend +Phase 4 — Config as mounted volumes (done for OpenShift): +- Agent config sync and plugin path rewriting already removed (done) +- Remaining config (stub ADC, gitconfig user.name/email, sandbox script) packaged as ConfigMap (done) +- `sleep infinity` + `oc exec` pattern removed; entrypoint runs directly (done) +- ConfigMap includes `.ready` marker so entrypoint skips wait (done) +- Cursor auth and global gitignore syncing removed (done) +- Podman backend still uses old `podman cp`/`podman exec` pattern (future work) ## Security Hardening Backlog diff --git a/containers/paude/entrypoint-lib-config.sh b/containers/paude/entrypoint-lib-config.sh index af45ccf..af09c4a 100644 --- a/containers/paude/entrypoint-lib-config.sh +++ b/containers/paude/entrypoint-lib-config.sh @@ -1,54 +1,7 @@ #!/bin/bash -# Agent config copy and PVC persistence utilities for the paude entrypoint. +# Agent config PVC persistence utilities for the paude entrypoint. # Sourced by entrypoint-session.sh — not run standalone. -# Copy agent config from a source directory into $HOME. -# Handles: recursive copy, config file relocation, plugin permissions, OpenShift ownership. 
-# Skips runtime directories/files that are generated by the agent inside the -# container — these may be persisted on PVC and must not be overwritten by -# (stale or empty) host-side copies. This is defense-in-depth: the host-side -# rsync already excludes these, but we guard here too. -# Args: source_path (directory containing agent config files) -copy_agent_config() { - local source_path="$1" - local dest="$HOME/$AGENT_CONFIG_DIR" - - mkdir -p "$dest" - chmod g+rwX "$dest" 2>/dev/null || true - - # Copy items individually, skipping runtime state that the agent generates - # inside the container. Keep in sync with _CLAUDE_CONFIG_EXCLUDES in - # src/paude/agents/claude.py — this is defense-in-depth (host-side rsync - # already excludes these). - for item in "$source_path"/* "$source_path"/.*; do - [[ ! -e "$item" ]] && continue - local name - name="${item##*/}" - case "$name" in - .|..) continue ;; - backups|cache|debug|downloads|file-history|paste-cache|plans|\ - session-env|sessions|shell-snapshots|statsig|tasks|todos|projects|.git) - [[ -d "$item" ]] && continue ;; - history.jsonl|stats-cache.json) - [[ -f "$item" ]] && continue ;; - esac - cp -dR --preserve=mode,timestamps "$item" "$dest/" 2>/dev/null || true - done - - # Handle config file specially - goes to ~/. - # Use cp instead of mv to write through symlinks (on re-execution, - # persist_agent_config may have already symlinked paths to /pvc). 
- if [[ -n "$AGENT_CONFIG_FILE" ]] && [[ -n "$AGENT_CONFIG_FILE_BASENAME" ]] && [[ -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" ]]; then - cp -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true - rm -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" 2>/dev/null || true - chmod g+rw "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true - fi - - # g+rwX sets read/write and execute on directories (X = execute only if dir) - # This covers plugins/ and all other subdirectories in one pass - chmod -R g+rwX "$HOME/$AGENT_CONFIG_DIR" 2>/dev/null || true -} - # Persist agent config on the PVC volume so it survives container recreation. # Creates symlinks: $HOME/$AGENT_CONFIG_DIR -> /pvc/$AGENT_CONFIG_DIR # $HOME/$AGENT_CONFIG_FILE -> /pvc/$AGENT_CONFIG_FILE diff --git a/containers/paude/entrypoint-lib-credentials.sh b/containers/paude/entrypoint-lib-credentials.sh index b923fcb..9335763 100644 --- a/containers/paude/entrypoint-lib-credentials.sh +++ b/containers/paude/entrypoint-lib-credentials.sh @@ -98,11 +98,6 @@ setup_credentials() { ln -sf "$config_path/gcloud" "$HOME/.config/gcloud" fi - # Copy agent config (need to be writable, so copy instead of symlink) - if [[ -d "$config_path/$AGENT_NAME" ]]; then - copy_agent_config "$config_path/$AGENT_NAME" - fi - # Set up gitconfig via symlink if [[ -f "$config_path/gitconfig" ]]; then rm -f "$HOME/.gitconfig" 2>/dev/null || true diff --git a/containers/paude/entrypoint-session.sh b/containers/paude/entrypoint-session.sh index c19471a..f3a521e 100644 --- a/containers/paude/entrypoint-session.sh +++ b/containers/paude/entrypoint-session.sh @@ -13,10 +13,6 @@ AGENT_CONFIG_FILE="${PAUDE_AGENT_CONFIG_FILE:-.claude.json}" AGENT_INSTALL_SCRIPT="${PAUDE_AGENT_INSTALL_SCRIPT:-curl -fsSL https://claude.ai/install.sh | bash}" AGENT_SESSION_NAME="${PAUDE_AGENT_SESSION_NAME:-claude}" AGENT_LAUNCH_CMD="${PAUDE_AGENT_LAUNCH_CMD:-claude}" 
-AGENT_SEED_DIR="${PAUDE_AGENT_SEED_DIR:-/tmp/claude.seed}" -AGENT_SEED_FILE="${PAUDE_AGENT_SEED_FILE:-/tmp/claude.json.seed}" -# Derive basename for config file (e.g., ".claude.json" -> "claude.json") -AGENT_CONFIG_FILE_BASENAME="${AGENT_CONFIG_FILE#.}" # Backward compat: PAUDE_AGENT_ARGS > PAUDE_CLAUDE_ARGS > positional args AGENT_ARGS="${PAUDE_AGENT_ARGS:-${PAUDE_CLAUDE_ARGS:-$*}}" @@ -135,34 +131,25 @@ attach_to_session() { exec tmux -u attach -t "$AGENT_SESSION_NAME" } -# On reconnect (tmux session already exists), skip config copy and sandbox -# config — re-copying from host seed mounts would overwrite in-container -# state (prompt history, project data, conversation context). +# On reconnect (tmux session already exists), skip sandbox config — +# reapplying would overwrite in-container state. if tmux -u has-session -t "$AGENT_SESSION_NAME" 2>/dev/null; then exit_if_headless "already running" attach_to_session reconnect fi -# Legacy: Copy seed files if provided via Secret mount (Podman backend fallback) -if [[ -d "$AGENT_SEED_DIR" ]] && [[ ! -d /credentials ]]; then - copy_agent_config "$AGENT_SEED_DIR" -fi - -# Also check for separate config file seed mount (Podman backend) -if [[ -n "$AGENT_SEED_FILE" ]] && { [[ -f "$AGENT_SEED_FILE" ]] || [[ -L "$AGENT_SEED_FILE" ]]; }; then - if [[ -n "$AGENT_CONFIG_FILE" ]]; then - cp -L "$AGENT_SEED_FILE" "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true - chmod g+rw "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true +# Apply agent sandbox config (generated by Python, mounted via ConfigMap or synced) +if [[ "${PAUDE_SUPPRESS_PROMPTS:-}" == "1" ]]; then + _SANDBOX_CFG="" + for _candidate in "$HOME/.paude/agent-sandbox-config.sh" "/credentials/agent-sandbox-config.sh"; do + [[ -f "$_candidate" ]] && _SANDBOX_CFG="$_candidate" && break + done + if [[ -n "$_SANDBOX_CFG" ]]; then + source "$_SANDBOX_CFG" 2>>/tmp/sandbox-config.log \ + || echo "agent-sandbox-config.sh failed: $?" 
>> /tmp/sandbox-config.log fi fi -# Apply agent sandbox config (generated by Python, synced before entrypoint runs) -_SANDBOX_CFG="$HOME/.paude/agent-sandbox-config.sh" -if [[ "${PAUDE_SUPPRESS_PROMPTS:-}" == "1" ]] && [[ -f "$_SANDBOX_CFG" ]]; then - source "$_SANDBOX_CFG" 2>>/tmp/sandbox-config.log \ - || echo "agent-sandbox-config.sh failed: $?" >> /tmp/sandbox-config.log -fi - # Session workspace setup # For persistent sessions, workspace is at /workspace (mounted volume) WORKSPACE="${PAUDE_WORKSPACE:-/workspace}" diff --git a/docs/OPENSHIFT.md b/docs/OPENSHIFT.md index 7984d65..7899885 100644 --- a/docs/OPENSHIFT.md +++ b/docs/OPENSHIFT.md @@ -118,31 +118,19 @@ Credentials are stored in RAM-only storage for enhanced security: **Configuration Sync:** -Configuration is synced via `oc cp` to tmpfs on session start and reconnect: +Minimal configuration is mounted at `/credentials` from a per-session ConfigMap before the container starts: -**Synced from host:** -- `~/.config/gcloud` → gcloud credentials for Vertex AI authentication +- Stub GCP ADC (sentinel values — real auth handled by proxy sidecar) - `~/.gitconfig` → Git identity configuration -- `~/.claude/` → Agent config directory (for Claude Code), including: - - `settings.json`, `settings.local.json` - Core settings - - `plugins/` - Installed plugins and marketplace metadata - - `CLAUDE.md` - Global instructions -- `~/.claude.json` → Agent preferences +- Agent sandbox config script (onboarding/trust prompt suppression) -**Excluded (session-specific):** -- `history.jsonl`, `tasks/`, `todos/`, `plans/` - Session state -- `sessions/`, `session-env/`, `projects/` - Session metadata -- `cache/`, `stats-cache.json`, `statsig/` - Caches -- `debug/`, `file-history/`, `shell-snapshots/` - Debug logs -- `backups/`, `downloads/`, `paste-cache/`, `.git/` - Misc - -Plugin paths are automatically rewritten from host paths to container paths.
+Agent config directories (`~/.claude/`, `~/.gemini/`, etc.) are **not** synced from the host. The agent starts with vanilla config — the sandbox config script generates the minimum viable settings. This improves security (no host credentials leak into containers) and simplifies the path to GitOps-compatible session creation. **Credential Refresh:** -- **First connect** (after pod start): Full sync of gcloud, claude config, and gitconfig -- **Reconnect** (subsequent connects): Only gcloud credentials refreshed (fast) -- This ensures fresh OAuth tokens propagate if you re-authenticate locally -- Long-running pods stay current with local credential changes +- **First connect** (after pod start): Config (stub ADC, gitconfig, sandbox script) is already present via the session ConfigMap mount +- **Reconnect** (subsequent connects): Nothing is re-synced; the client simply reattaches to the tmux session ### Network Filtering @@ -261,9 +249,9 @@ For merge conflicts, use normal git workflows (rebase, merge, etc.). │ │ │ │ + tmux │ │ ┌───────────────────────┐ │ │ │ │ │ └────────────┘ │ │ tmpfs: /credentials │ │ │ │ │ │ │ │ (RAM-only, ephemeral) │ │ │ -│ │ │ Mounts: │ │ - gcloud creds │ │ │ -│ │ │ - /pvc (PVC) │ │ - ~/.claude/ dir │ │ │ -│ │ │ - /credentials │ │ - gitconfig │ │ │ +│ │ │ Mounts: │ │ - stub gcloud creds │ │ │ +│ │ │ - /pvc (PVC) │ │ - gitconfig │ │ │ +│ │ │ - /credentials │ │ - sandbox config │ │ │ │ │ │ (tmpfs) │ └───────────────────────┘ │ │ │ │ └──────────────────┘ │ │ │ │ ↑ │ │ @@ -275,7 +263,6 @@ For merge conflicts, use normal git workflows (rebase, merge, etc.). ┌────────┴────────┐ │ Local Machine │ │ - workspace │ - │ - ~/.claude/ │ │ - credentials │ │ - paude CLI │ └─────────────────┘ @@ -288,7 +275,7 @@ For merge conflicts, use normal git workflows (rebase, merge, etc.).
| Session Persistence | Yes (named volumes) | Yes (tmux + PVC) | | Network Disconnect | Session lost | Session preserved | | Code Sync | git push/pull | git push/pull | -| Config Sync | Mounted at start | oc cp at connect | +| Config Sync | podman cp at create | ConfigMap mounted at start | | Multi-machine | No | Yes | | Resource Isolation | Container | Pod + namespace | | Setup Complexity | Low | Medium | diff --git a/src/paude/agents/base.py b/src/paude/agents/base.py index 1bfc605..cff079b 100644 --- a/src/paude/agents/base.py +++ b/src/paude/agents/base.py @@ -26,9 +26,6 @@ class AgentConfig: passthrough_env_prefixes: Host env var prefixes to forward. config_dir_name: Config directory under HOME (e.g., ".claude"). config_file_name: Config file under HOME (e.g., ".claude.json"), or None. - config_excludes: Rsync excludes for config sync. - config_sync_files_only: When non-empty, only these files (relative to config dir) are copied instead of rsyncing the entire directory. activity_files: Paths (relative to config dir) for activity detection. yolo_flag: CLI flag to skip permissions (e.g., "--dangerously-skip-permissions"). @@ -53,8 +50,6 @@ class AgentConfig: passthrough_env_prefixes: list[str] = field(default_factory=list) config_dir_name: str = ".claude" config_file_name: str | None = ".claude.json" - config_excludes: list[str] = field(default_factory=list) - config_sync_files_only: list[str] = field(default_factory=list) activity_files: list[str] = field(default_factory=list) yolo_flag: str | None = "--dangerously-skip-permissions" clear_command: str | None = "/clear" diff --git a/src/paude/agents/claude.py b/src/paude/agents/claude.py index c9e4d30..2f3f1cb 100644 --- a/src/paude/agents/claude.py +++ b/src/paude/agents/claude.py @@ -10,29 +10,6 @@ build_provider_credentials, pipefail_install_lines, ) -from paude.mounts import resolve_path -# Keep in sync with the case statement in copy_agent_config() in -# containers/paude/entrypoint-session.sh (defense-in-depth).
-_CLAUDE_CONFIG_EXCLUDES = [ - "/backups", - "/cache", - "/debug", - "/downloads", - "/file-history", - "/history.jsonl", - "/paste-cache", - "/plans", - "/session-env", - "/sessions", - "/shell-snapshots", - "/stats-cache.json", - "/statsig", - "/tasks", - "/todos", - "/projects", - "/.git", -] _CLAUDE_ACTIVITY_FILES = [ "history.jsonl", @@ -60,7 +37,6 @@ def __init__(self, provider: str | None = None) -> None: passthrough_env_prefixes=creds.passthrough_env_prefixes, config_dir_name=".claude", config_file_name=".claude.json", - config_excludes=list(_CLAUDE_CONFIG_EXCLUDES), activity_files=list(_CLAUDE_ACTIVITY_FILES), yolo_flag="--dangerously-skip-permissions", clear_command="/clear", @@ -136,26 +112,7 @@ def launch_command(self, args: str) -> str: return "claude" def host_config_mounts(self, home: Path) -> list[str]: - mounts: list[str] = [] - - # Claude seed directory (ro) - claude_dir = home / ".claude" - resolved_claude = resolve_path(claude_dir) - if resolved_claude and resolved_claude.is_dir(): - mounts.extend(["-v", f"{resolved_claude}:/tmp/claude.seed:ro"]) - - # Plugins at original host path (ro) - plugins_dir = resolved_claude / "plugins" - if plugins_dir.is_dir(): - mounts.extend(["-v", f"{plugins_dir}:{plugins_dir}:ro"]) - - # claude.json seed (ro) - claude_json = home / ".claude.json" - resolved_claude_json = resolve_path(claude_json) - if resolved_claude_json and resolved_claude_json.is_file(): - mounts.extend(["-v", f"{resolved_claude_json}:/tmp/claude.json.seed:ro"]) - - return mounts + return [] def build_environment(self) -> dict[str, str]: return build_environment_from_config(self._config) diff --git a/src/paude/agents/cursor.py b/src/paude/agents/cursor.py index cf01612..3f12174 100644 --- a/src/paude/agents/cursor.py +++ b/src/paude/agents/cursor.py @@ -33,8 +33,6 @@ def __init__(self, provider: str | None = None) -> None: passthrough_env_prefixes=creds.passthrough_env_prefixes, config_dir_name=".cursor", config_file_name=None, - 
config_excludes=[], - config_sync_files_only=["cli-config.json"], activity_files=[], yolo_flag="--yolo", clear_command="/clear", @@ -74,14 +72,7 @@ def apply_sandbox_config( cli_config="{home}/.cursor/cli-config.json" mkdir -p "{home}/.cursor" 2>/dev/null || true -# Seed from host cli-config.json if available (carries auth tokens) -if [ -f /tmp/cursor-cli-config.seed ]; then - cp /tmp/cursor-cli-config.seed "$cli_config" - chmod g+rw "$cli_config" 2>/dev/null || true -fi - -# Ensure version field exists so CLI doesn't prompt for setup, -# and force HTTP/1.1 for agent inference (HTTP/2 bypasses proxy). +# Create minimal cli-config with version and HTTP/1.1 proxy settings. if [ -f "$cli_config" ]; then jq '. * {{"version": (.version // 1), "network": {{"useHttp1ForAgent": true}}}}' \ "$cli_config" > "${{cli_config}}.tmp" \ @@ -125,15 +116,6 @@ def launch_command(self, args: str) -> str: def host_config_mounts(self, home: Path) -> list[str]: mounts: list[str] = [] - # IMPORTANT: Only mount cli-config.json, NEVER the entire .cursor directory. - # ~/.cursor contains Cursor IDE data (extensions/, worktrees/, etc.) that - # can exceed 1 GB and 26k+ files. Only cli-config.json is needed for CLI - # auth tokens from `agent login`. 
- cli_config = home / ".cursor" / "cli-config.json" - resolved = resolve_path(cli_config) - if resolved and resolved.is_file(): - mounts.extend(["-v", f"{resolved}:/tmp/cursor-cli-config.seed:ro"]) - # Mount auth.json (accessToken/refreshToken) from ~/.config/cursor/ auth_json = home / ".config" / "cursor" / "auth.json" resolved_auth = resolve_path(auth_json) diff --git a/src/paude/agents/gemini.py b/src/paude/agents/gemini.py index fd31820..3668ff5 100644 --- a/src/paude/agents/gemini.py +++ b/src/paude/agents/gemini.py @@ -9,7 +9,6 @@ build_environment_from_config, build_provider_credentials, ) -from paude.mounts import resolve_path class GeminiAgent: @@ -33,7 +32,6 @@ def __init__(self, provider: str | None = None) -> None: passthrough_env_prefixes=creds.passthrough_env_prefixes, config_dir_name=".gemini", config_file_name=None, - config_excludes=[], activity_files=[], yolo_flag="--yolo", clear_command="/clear", @@ -86,15 +84,7 @@ def launch_command(self, args: str) -> str: return "gemini" def host_config_mounts(self, home: Path) -> list[str]: - mounts: list[str] = [] - - # Gemini seed directory (ro) - gemini_dir = home / ".gemini" - resolved_gemini = resolve_path(gemini_dir) - if resolved_gemini and resolved_gemini.is_dir(): - mounts.extend(["-v", f"{resolved_gemini}:/tmp/gemini.seed:ro"]) - - return mounts + return [] def build_environment(self) -> dict[str, str]: return build_environment_from_config(self._config) diff --git a/src/paude/agents/openclaw.py b/src/paude/agents/openclaw.py index 0bcaff1..3c90ac9 100644 --- a/src/paude/agents/openclaw.py +++ b/src/paude/agents/openclaw.py @@ -9,7 +9,6 @@ build_environment_from_config, build_provider_credentials, ) -from paude.mounts import resolve_path class OpenClawAgent: @@ -38,8 +37,6 @@ def __init__(self, provider: str | None = None) -> None: passthrough_env_prefixes=creds.passthrough_env_prefixes, config_dir_name=".openclaw", config_file_name=None, - config_excludes=[], - config_sync_files_only=[], 
activity_files=[], yolo_flag=None, clear_command=None, @@ -204,14 +201,7 @@ def launch_command(self, args: str) -> str: return cmd def host_config_mounts(self, home: Path) -> list[str]: - mounts: list[str] = [] - - config_dir = home / ".openclaw" - resolved = resolve_path(config_dir) - if resolved and resolved.is_dir(): - mounts.extend(["-v", f"{resolved}:/tmp/openclaw.seed:ro"]) - - return mounts + return [] def build_environment(self) -> dict[str, str]: return build_environment_from_config(self._config) diff --git a/src/paude/backends/openshift/backend.py b/src/paude/backends/openshift/backend.py index f9b22ba..669c8a8 100644 --- a/src/paude/backends/openshift/backend.py +++ b/src/paude/backends/openshift/backend.py @@ -18,7 +18,6 @@ from paude.backends.openshift.session_domains import SessionDomainManager from paude.backends.openshift.session_lifecycle import SessionLifecycleManager from paude.backends.openshift.session_lookup import SessionLookup -from paude.backends.openshift.sync import ConfigSyncer class OpenShiftBackend: @@ -41,7 +40,6 @@ def __init__(self, config: OpenShiftConfig | None = None) -> None: # Lazy-initialized collaborators self._lookup_instance: SessionLookup | None = None - self._syncer_instance: ConfigSyncer | None = None self._builder_instance: BuildOrchestrator | None = None self._proxy_instance: ProxyManager | None = None self._pod_waiter_instance: PodWaiter | None = None @@ -68,12 +66,6 @@ def _lookup(self) -> SessionLookup: self._lookup_instance = SessionLookup(self._oc, self.namespace) return self._lookup_instance - @property - def _syncer(self) -> ConfigSyncer: - if self._syncer_instance is None: - self._syncer_instance = ConfigSyncer(self._oc, self.namespace) - return self._syncer_instance - @property def _builder(self) -> BuildOrchestrator: if self._builder_instance is None: @@ -98,7 +90,7 @@ def _pod_waiter(self) -> PodWaiter: def _connector(self) -> SessionConnector: if self._connector_instance is None: self._connector_instance = 
SessionConnector( - self._oc, self.namespace, self._config, self._lookup, self._syncer + self._oc, self.namespace, self._config, self._lookup ) return self._connector_instance @@ -110,7 +102,6 @@ def _lifecycle(self) -> SessionLifecycleManager: self.namespace, self._config, self._lookup, - self._syncer, self._builder, self._proxy, self._pod_waiter, @@ -205,9 +196,7 @@ def stop_session(self, name: str) -> None: self._lifecycle.stop_session(name) def start_agent_headless(self, name: str) -> None: - from paude.backends.shared import pod_name - - self._lifecycle.start_agent_headless_in_pod(pod_name(name)) + """No-op: agent starts automatically via entrypoint.""" # ------------------------------------------------------------------------- # Connection and exec diff --git a/src/paude/backends/openshift/resources.py b/src/paude/backends/openshift/resources.py index 62dc0be..de948b4 100644 --- a/src/paude/backends/openshift/resources.py +++ b/src/paude/backends/openshift/resources.py @@ -4,6 +4,7 @@ import hashlib import os +import subprocess from datetime import UTC, datetime from pathlib import Path from typing import Any @@ -41,6 +42,83 @@ def _generate_session_name(workspace: Path) -> str: return f"{sanitized}-{path_hash}" +def config_map_name(session_name: str) -> str: + """Return the ConfigMap name for a session.""" + return f"paude-config-{session_name}" + + +def _read_git_user_config() -> str: + """Read user.name and user.email from host git config. + + Returns a minimal gitconfig string with only these two fields. 
+ """ + try: + result = subprocess.run( + ["git", "config", "--global", "--list"], + capture_output=True, + text=True, + check=False, + ) + except FileNotFoundError: + return "" + if result.returncode != 0: + return "" + + lines = ["[user]"] + for line in result.stdout.splitlines(): + if line.startswith("user.name="): + lines.append(f"\tname = {line.split('=', 1)[1]}") + elif line.startswith("user.email="): + lines.append(f"\temail = {line.split('=', 1)[1]}") + if len(lines) == 1: + return "" + return "\n".join(lines) + "\n" + + +def build_config_map( + session_name: str, + namespace: str, + agent_name: str = "claude", + provider: str | None = None, + workspace: str = "", + args: str = "", + yolo: bool = False, +) -> dict[str, Any]: + """Build a ConfigMap spec containing session config files. + + The ConfigMap replaces the old ``oc cp``/``oc exec`` config sync + by pre-mounting all config files before the container starts. + """ + from paude.backends.shared import STUB_ADC_JSON, generate_sandbox_config_script + from paude.constants import CONTAINER_WORKSPACE + + ws = workspace or CONTAINER_WORKSPACE + + data: dict[str, str] = { + "gcloud-adc": STUB_ADC_JSON, + "agent-sandbox-config.sh": generate_sandbox_config_script( + agent_name, ws, args, provider=provider, yolo=yolo + ), + ".ready": "", + } + + data["gitconfig"] = _read_git_user_config() + + return { + "apiVersion": "v1", + "kind": "ConfigMap", + "metadata": { + "name": config_map_name(session_name), + "namespace": namespace, + "labels": { + "app": "paude", + "paude.io/session-name": session_name, + }, + }, + "data": data, + } + + class StatefulSetBuilder: """Builder for Kubernetes StatefulSet specifications. @@ -85,6 +163,7 @@ def __init__( self._pvc_size = "10Gi" self._storage_class: str | None = None self._ca_secret_name: str | None = None + self._config_map_name: str | None = None def with_otel_endpoint(self, endpoint: str | None) -> StatefulSetBuilder: """Set the OTEL endpoint (for annotation). 
@@ -137,6 +216,23 @@ def with_ca_secret(self, secret_name: str) -> StatefulSetBuilder: self._ca_secret_name = secret_name return self + def with_config_map(self, name: str) -> StatefulSetBuilder: + """Mount a ConfigMap at /credentials instead of emptyDir. + + When set, the container command runs ``entrypoint-session.sh`` + (with ``sleep infinity`` to keep the container alive after the + entrypoint exits in headless mode) since all config is available + at mount time. + + Args: + name: Name of the ConfigMap to mount. + + Returns: + Self for method chaining. + """ + self._config_map_name = name + return self + def with_pvc( self, size: str = "10Gi", @@ -190,15 +286,35 @@ def _build_metadata(self, created_at: str) -> dict[str, Any]: def _build_volumes(self) -> list[dict[str, Any]]: """Build the volumes list for the pod spec.""" - volumes: list[dict[str, Any]] = [ - { + if self._config_map_name: + cred_volume: dict[str, Any] = { + "name": "credentials", + "configMap": { + "name": self._config_map_name, + "defaultMode": 0o644, + "items": [ + { + "key": "gcloud-adc", + "path": "gcloud/application_default_credentials.json", + }, + {"key": "gitconfig", "path": "gitconfig"}, + { + "key": "agent-sandbox-config.sh", + "path": "agent-sandbox-config.sh", + }, + {"key": ".ready", "path": ".ready"}, + ], + }, + } + else: + cred_volume = { "name": "credentials", "emptyDir": { "medium": "Memory", "sizeLimit": "100Mi", }, - }, - ] + } + volumes: list[dict[str, Any]] = [cred_volume] if self._ca_secret_name: volumes.append( { @@ -267,11 +383,23 @@ def _build_container_spec(self) -> dict[str, Any]: k: {**v, "nvidia.com/gpu": gpu_count} for k, v in resources.items() } + if self._config_map_name: + command = [ + "tini", + "--", + "bash", + "-c", + "/usr/local/bin/entrypoint-session.sh && exec sleep infinity", + ] + env_list.append({"name": "PAUDE_HEADLESS", "value": "1"}) + else: + command = ["tini", "--", "sleep", "infinity"] + return { "name": "paude", "image": self._image, 
"imagePullPolicy": image_pull_policy, - "command": ["tini", "--", "sleep", "infinity"], + "command": command, "stdin": True, "tty": True, "env": env_list, diff --git a/src/paude/backends/openshift/session_connection.py b/src/paude/backends/openshift/session_connection.py index afd6fe0..9e5b6d1 100644 --- a/src/paude/backends/openshift/session_connection.py +++ b/src/paude/backends/openshift/session_connection.py @@ -15,12 +15,10 @@ launch_port_forward, ) from paude.backends.openshift.session_lookup import SessionLookup -from paude.backends.openshift.sync import ConfigSyncer from paude.backends.port_forward_utils import check_running_pid, log_file from paude.backends.shared import ( PAUDE_LABEL_AGENT, PAUDE_LABEL_PROVIDER, - PAUDE_LABEL_YOLO, resource_name, ) @@ -135,20 +133,15 @@ def __init__( namespace: str, config: OpenShiftConfig, lookup: SessionLookup, - syncer: ConfigSyncer, ) -> None: self._oc = oc self._namespace = namespace self._config = config self._lookup = lookup - self._syncer = syncer def connect_session(self, name: str, github_token: str | None = None) -> int: """Attach to a running session. - On first connect: syncs full configuration. - On reconnect: only refreshes credentials (fast). - Returns: Exit code from the attached session. 
""" @@ -156,7 +149,6 @@ def connect_session(self, name: str, github_token: str | None = None) -> int: if pname is None: return 1 - self._sync_for_connect(pname, name) pf_result, port_urls = self._start_port_forward(name, pname) stop_event = threading.Event() @@ -253,66 +245,10 @@ def _provider_from_sts(sts: dict[str, object] | None) -> str | None: value = labels.get(PAUDE_LABEL_PROVIDER) return str(value) if value is not None else None - @staticmethod - def _yolo_from_sts(sts: dict[str, object] | None) -> bool: - """Extract the YOLO flag from a StatefulSet's labels.""" - if not sts: - return False - metadata: dict[str, object] = sts.get("metadata", {}) # type: ignore[assignment] - labels: dict[str, object] = metadata.get("labels", {}) # type: ignore[assignment] - return labels.get(PAUDE_LABEL_YOLO) == "1" - def _get_session_agent_name(self, session_name: str) -> str: """Look up the agent name from StatefulSet labels.""" return self._lookup.get_session_agent_name(session_name) - def _get_session_provider(self, session_name: str) -> str | None: - """Look up the provider name from StatefulSet labels.""" - return self._lookup.get_session_provider(session_name) - - def _sync_for_connect(self, pname: str, name: str) -> None: - """Sync config for a connect operation (no credentials synced).""" - sts = self._lookup.get_statefulset(name) - agent_name = self._agent_name_from_sts(sts) - provider = self._provider_from_sts(sts) - yolo = self._yolo_from_sts(sts) - agent_args = self._extract_env_from_sts(sts, "PAUDE_AGENT_ARGS") - - if self._syncer.is_config_synced(pname): - self._syncer.sync_credentials( - pname, - verbose=False, - agent_name=agent_name, - provider=provider, - args=agent_args, - yolo=yolo, - ) - else: - self._syncer.sync_full_config( - pname, - verbose=False, - agent_name=agent_name, - provider=provider, - args=agent_args, - yolo=yolo, - ) - - @staticmethod - def _extract_env_from_sts(sts: dict[str, object] | None, var_name: str) -> str: - """Extract an env var 
value from a StatefulSet spec.""" - if not sts: - return "" - spec: dict[str, object] = sts.get("spec", {}) # type: ignore[assignment] - template: dict[str, object] = spec.get("template", {}) # type: ignore[assignment] - pod_spec: dict[str, object] = template.get("spec", {}) # type: ignore[assignment] - containers: list[dict[str, object]] = pod_spec.get("containers", []) # type: ignore[assignment] - for container in containers: - env_list: list[dict[str, str]] = container.get("env", []) # type: ignore[assignment] - for env in env_list: - if env.get("name") == var_name: - return str(env.get("value", "")) - return "" - def _read_openclaw_token(self, pname: str, ns: str) -> str | None: """Read the OpenClaw auth token from the pod's config file.""" from paude.backends.shared import OPENCLAW_AUTH_READER_SCRIPT diff --git a/src/paude/backends/openshift/session_lifecycle.py b/src/paude/backends/openshift/session_lifecycle.py index 09daada..eb7497b 100644 --- a/src/paude/backends/openshift/session_lifecycle.py +++ b/src/paude/backends/openshift/session_lifecycle.py @@ -23,9 +23,10 @@ from paude.backends.openshift.resources import ( StatefulSetBuilder, _generate_session_name, + build_config_map, + config_map_name, ) from paude.backends.openshift.session_lookup import SessionLookup -from paude.backends.openshift.sync import ConfigSyncer from paude.backends.shared import ( PAUDE_LABEL_AGENT, PAUDE_LABEL_PROVIDER, @@ -58,7 +59,6 @@ def __init__( namespace: str, config: OpenShiftConfig, lookup: SessionLookup, - syncer: ConfigSyncer, builder: BuildOrchestrator, proxy: ProxyManager, pod_waiter: PodWaiter, @@ -67,7 +67,6 @@ def __init__( self._namespace = namespace self._config = config self._lookup = lookup - self._syncer = syncer self._builder = builder self._proxy = proxy self._pod_waiter = pod_waiter @@ -243,8 +242,27 @@ def _apply_and_wait( *, ca_secret: str | None = None, ) -> None: - """Generate StatefulSet spec, apply it, wait for readiness, sync config.""" + """Create 
ConfigMap and StatefulSet, then wait for readiness.""" ns = self._namespace + + # Create ConfigMap with all config files so the entrypoint can + # run directly without post-apply oc cp/exec. + cm_spec = build_config_map( + session_name, + ns, + agent_name=config.agent, + provider=config.provider, + workspace=str(config.workspace), + args=session_env.get("PAUDE_AGENT_ARGS", ""), + yolo=config.yolo, + ) + cm_name = config_map_name(session_name) + print( + f"Creating ConfigMap/{cm_name} in namespace {ns}...", + file=sys.stderr, + ) + self._oc.run("apply", "-f", "-", input_data=json.dumps(cm_spec)) + sts_spec = self._generate_statefulset_spec( session_name=session_name, image=config.image, @@ -258,6 +276,7 @@ def _apply_and_wait( yolo=config.yolo, otel_endpoint=config.otel_endpoint, ca_secret=ca_secret, + config_map=cm_name, ) print( @@ -273,52 +292,6 @@ def _apply_and_wait( print(f"Waiting for pod {pname} to be ready...", file=sys.stderr) self._pod_waiter.wait_for_ready(pname) - self._syncer.sync_full_config( - pname, - agent_name=config.agent, - provider=config.provider, - args=session_env.get("PAUDE_AGENT_ARGS", ""), - yolo=config.yolo, - ) - - def start_agent_headless_in_pod(self, pname: str) -> None: - """Start the agent in headless mode inside the pod.""" - from paude.backends.openshift.exceptions import OcTimeoutError - from paude.backends.openshift.oc import OC_EXEC_TIMEOUT - from paude.constants import CONTAINER_ENTRYPOINT - - try: - result = self._oc.run( - "exec", - pname, - "-n", - self._namespace, - "--", - "env", - "PAUDE_HEADLESS=1", - CONTAINER_ENTRYPOINT, - check=False, - timeout=OC_EXEC_TIMEOUT, - ) - if result.returncode != 0: - print( - f"Warning: headless agent start failed (exit {result.returncode}). " - f"Agent will start on next 'paude connect'.", - file=sys.stderr, - ) - except OcTimeoutError: - print( - "Warning: headless agent start timed out. 
" - "Agent will start on next 'paude connect'.", - file=sys.stderr, - ) - except Exception as exc: - print( - f"Warning: headless agent start failed ({exc}). " - f"Agent will start on next 'paude connect'.", - file=sys.stderr, - ) - def delete_session(self, name: str, confirm: bool = False) -> None: """Delete a session and all its resources. @@ -394,6 +367,17 @@ def delete_session(self, name: str, confirm: bool = False) -> None: check=False, ) + cm = config_map_name(name) + print(f"Deleting ConfigMap/{cm}...", file=sys.stderr) + self._oc.run( + "delete", + "configmap", + cm, + "-n", + ns, + check=False, + ) + self._proxy.delete_resources(name) print( @@ -513,6 +497,7 @@ def _generate_statefulset_spec( yolo: bool = False, otel_endpoint: str | None = None, ca_secret: str | None = None, + config_map: str | None = None, ) -> dict[str, Any]: """Generate a Kubernetes StatefulSet specification.""" builder = ( @@ -533,4 +518,6 @@ def _generate_statefulset_spec( ) if ca_secret: builder = builder.with_ca_secret(ca_secret) + if config_map: + builder = builder.with_config_map(config_map) return builder.build() diff --git a/src/paude/backends/openshift/sync.py b/src/paude/backends/openshift/sync.py index 2bbc65b..3f198c7 100644 --- a/src/paude/backends/openshift/sync.py +++ b/src/paude/backends/openshift/sync.py @@ -5,7 +5,6 @@ import sys import tempfile from pathlib import Path -from typing import TYPE_CHECKING from paude.backends.openshift.exceptions import OcTimeoutError, OpenShiftError from paude.backends.openshift.oc import ( @@ -15,10 +14,7 @@ OcClient, ) from paude.backends.sync_base import CONFIG_PATH, BaseConfigSyncer -from paude.constants import CONTAINER_HOME, GCP_ADC_FILENAME, SANDBOX_CONFIG_TARGET - -if TYPE_CHECKING: - from paude.agents.base import Agent +from paude.constants import GCP_ADC_FILENAME, SANDBOX_CONFIG_TARGET class ConfigSyncer(BaseConfigSyncer): @@ -55,81 +51,6 @@ def _copy_file(self, local_path: str, container_path: str, *, context: str) -> b except 
Exception: # noqa: S110 return False - def _copy_dir( - self, - local_dir: str, - container_path: str, - *, - excludes: list[str] | None = None, - context: str, - ) -> bool: - exclude_args: list[str] = [] - if excludes: - for pattern in excludes: - exclude_args.extend(["--exclude", pattern]) - - success = self.rsync_with_retry( - f"{local_dir}/", - f"{self._target}:{container_path}", - exclude_args, - ) - if not success: - print( - f" Warning: Failed to {context} ({local_dir}/) - plugins may not work", - file=sys.stderr, - ) - return success - - def _rewrite_plugin_paths(self, agent_path: str, agent: Agent, home: Path) -> None: - config_dir_name = agent.config.config_dir_name - container_plugins_path = f"{CONTAINER_HOME}/{config_dir_name}/plugins" - - installed_plugins = f"{agent_path}/plugins/installed_plugins.json" - jq_expr = ( - ".plugins |= with_entries(.value |= map(" - "if .installPath then " - '.installPath = ($prefix + "/" + ' - '(.installPath | split("/") | .[-3:] | join("/"))) ' - "else . end))" - ) - self._oc.run( - "exec", - self._target, - "-n", - self._namespace, - "--", - "bash", - "-c", - f'if [ -f "{installed_plugins}" ]; then ' - f"jq --arg prefix \"{container_plugins_path}/cache\" '{jq_expr}' " - f'"{installed_plugins}" > "{installed_plugins}.tmp" && ' - f'mv "{installed_plugins}.tmp" "{installed_plugins}"; fi', - check=False, - timeout=OC_EXEC_TIMEOUT, - ) - - known_marketplaces = f"{agent_path}/plugins/known_marketplaces.json" - jq_expr2 = ( - "with_entries(if .value.installLocation then " - '.value.installLocation = ($prefix + "/marketplaces/" + .key) ' - "else . 
end)"
-        )
-        self._oc.run(
-            "exec",
-            self._target,
-            "-n",
-            self._namespace,
-            "--",
-            "bash",
-            "-c",
-            f'if [ -f "{known_marketplaces}" ]; then '
-            f"jq --arg prefix \"{container_plugins_path}\" '{jq_expr2}' "
-            f'"{known_marketplaces}" > "{known_marketplaces}.tmp" && '
-            f'mv "{known_marketplaces}.tmp" "{known_marketplaces}"; fi',
-            check=False,
-            timeout=OC_EXEC_TIMEOUT,
-        )
-
     # -- OpenShift-specific public API -------------------------------------

     def rsync_with_retry(
@@ -231,7 +152,7 @@ def sync_full_config(
     ) -> None:
         """Sync all configuration to pod /credentials/ directory.

-        Full sync including stub gcloud credentials, agent config, gitconfig,
-        global gitignore, and sandbox config. No real credentials are synced —
+        Full sync including stub gcloud credentials, gitconfig, and sandbox
+        config. No real credentials are synced —
         all authentication is handled by the proxy sidecar.
@@ -247,7 +168,7 @@
         print("Syncing configuration to pod...", file=sys.stderr)

-        self._prepare_config_directory(agent_name=agent_name)
+        self._prepare_config_directory()
         self._sync_stub_gcloud_credentials()
         self._sync_config_files(agent_name)
         self._sync_sandbox_config(
@@ -358,7 +279,7 @@ def _cp_content_to_pod(self, content: str, dest_path: str) -> None:
             file=sys.stderr,
         )

-    def _prepare_config_directory(self, agent_name: str = "claude") -> None:
+    def _prepare_config_directory(self) -> None:
         """Prepare the config directory on the pod."""
         prep_result = self._oc.run(
             "exec",
@@ -368,7 +289,7 @@
             "--",
             "bash",
             "-c",
-            f"mkdir -p {CONFIG_PATH}/gcloud {CONFIG_PATH}/{agent_name} && "
+            f"mkdir -p {CONFIG_PATH}/gcloud && "
             f"(chmod -R g+rwX {CONFIG_PATH} 2>/dev/null || true)",
             check=False,
             timeout=OC_EXEC_TIMEOUT,
         )
diff --git a/src/paude/backends/podman/sync.py b/src/paude/backends/podman/sync.py
index 6432781..1485378 100644
--- a/src/paude/backends/podman/sync.py
+++ b/src/paude/backends/podman/sync.py
@@ -7,16 +7,10 @@
 from __future__ import
annotations import sys -from pathlib import Path -from typing import TYPE_CHECKING from paude.backends.sync_base import CONFIG_PATH, BaseConfigSyncer -from paude.constants import CONTAINER_HOME from paude.container.engine import ContainerEngine -if TYPE_CHECKING: - from paude.agents.base import Agent - class ConfigSyncer(BaseConfigSyncer): """Podman-specific config syncer using podman cp/exec.""" @@ -34,9 +28,8 @@ def sync(self, cname: str, agent_name: str) -> None: return self._target = cname - agent_path = f"{CONFIG_PATH}/{agent_name}" - self._prepare_directory(agent_path) + self._prepare_directory() self._sync_config_files(agent_name) self._finalize() @@ -62,40 +55,7 @@ def _copy_file(self, local_path: str, container_path: str, *, context: str) -> b context=context, ) - def _copy_dir( - self, - local_dir: str, - container_path: str, - *, - excludes: list[str] | None = None, - context: str, - ) -> bool: - if excludes: - import shutil - import tempfile - - patterns = {e.strip("/") for e in excludes} - - def _ignore(_dir: str, entries: list[str]) -> set[str]: - return {e for e in entries if e in patterns} - - with tempfile.TemporaryDirectory() as tmp: - filtered = str(Path(tmp) / "filtered") - shutil.copytree(local_dir, filtered, ignore=_ignore) - return self._run_step( - "cp", - f"{filtered}/.", - f"{self._target}:{container_path}", - context=context, - ) - return self._run_step( - "cp", - f"{local_dir}/.", - f"{self._target}:{container_path}", - context=context, - ) - - def _prepare_directory(self, agent_path: str) -> None: + def _prepare_directory(self) -> None: t = self._target self._run_step( "exec", @@ -117,62 +77,6 @@ def _prepare_directory(self, agent_path: str) -> None: CONFIG_PATH, context="set credentials directory ownership", ) - self._run_step( - "exec", - "--user", - "root", - t, - "mkdir", - "-p", - agent_path, - context="create agent credentials directory", - ) - - def _rewrite_plugin_paths(self, agent_path: str, agent: Agent, home: Path) -> None: - 
host_home = str(home) - container_home = CONTAINER_HOME - if host_home == container_home: - return - - plugin_files = [ - f"{agent_path}/plugins/installed_plugins.json", - f"{agent_path}/plugins/known_marketplaces.json", - ] - t = self._target - for plugin_file in plugin_files: - exists = self._run_step( - "exec", - "--user", - "root", - t, - "test", - "-f", - plugin_file, - context=f"check plugin file exists: {plugin_file}", - ) - if not exists: - continue - - python_script = ( - "from pathlib import Path\n" - "p = Path(__import__('sys').argv[1])\n" - "old = __import__('sys').argv[2]\n" - "new = __import__('sys').argv[3]\n" - "p.write_text(p.read_text().replace(old, new))\n" - ) - self._run_step( - "exec", - "--user", - "root", - t, - "python3", - "-c", - python_script, - plugin_file, - host_home, - container_home, - context=f"rewrite plugin home paths in {plugin_file}", - ) def _finalize(self) -> None: t = self._target diff --git a/src/paude/backends/shared.py b/src/paude/backends/shared.py index befbbbb..4e097e5 100644 --- a/src/paude/backends/shared.py +++ b/src/paude/backends/shared.py @@ -95,14 +95,6 @@ def enrich_port_url(url: str, token: str | None) -> str: return f"{url}/#token={token}" if token else url -def config_file_basename(config_file_name: str) -> str: - """Strip leading dot from config file name. 
- - Example: '.claude.json' -> 'claude.json' - """ - return config_file_name.lstrip(".") - - def build_agent_env(config: AgentConfig) -> dict[str, str]: """Build agent env vars for container entrypoint parameterization.""" env: dict[str, str] = { @@ -113,13 +105,8 @@ def build_agent_env(config: AgentConfig) -> dict[str, str]: "PAUDE_AGENT_SESSION_NAME": config.session_name, "PAUDE_AGENT_LAUNCH_CMD": config.process_name, } - env["PAUDE_AGENT_SEED_DIR"] = f"/tmp/{config.name}.seed" # noqa: S108 if config.config_file_name: - basename = config_file_basename(config.config_file_name) env["PAUDE_AGENT_CONFIG_FILE"] = config.config_file_name - env["PAUDE_AGENT_SEED_FILE"] = f"/tmp/{basename}.seed" # noqa: S108 - else: - env["PAUDE_AGENT_SEED_FILE"] = "" return env diff --git a/src/paude/backends/sync_base.py b/src/paude/backends/sync_base.py index 0f88f76..477e52d 100644 --- a/src/paude/backends/sync_base.py +++ b/src/paude/backends/sync_base.py @@ -9,12 +9,6 @@ from abc import ABC, abstractmethod from pathlib import Path -from typing import TYPE_CHECKING - -from paude.backends.shared import config_file_basename - -if TYPE_CHECKING: - from paude.agents.base import Agent CONFIG_PATH = "/credentials" @@ -28,10 +22,6 @@ class BaseConfigSyncer(ABC): Subclasses store ``_target`` (container/pod name) as instance state set at the start of each public sync call. This is not thread-safe; each syncer instance should be used from a single thread at a time. - - Note: ``_copy_dir`` receives ``excludes`` from the shared orchestration. - Transports that lack native exclude support (e.g. podman cp) should - filter locally before copying. """ # -- abstract transport methods ---------------------------------------- @@ -40,99 +30,20 @@ class BaseConfigSyncer(ABC): def _copy_file(self, local_path: str, container_path: str, *, context: str) -> bool: """Copy a single file into the container. 
Returns True on success.""" - @abstractmethod - def _copy_dir( - self, - local_dir: str, - container_path: str, - *, - excludes: list[str] | None = None, - context: str, - ) -> bool: - """Copy directory contents into the container. Returns True on success.""" - - @abstractmethod - def _rewrite_plugin_paths(self, agent_path: str, agent: Agent, home: Path) -> None: - """Rewrite absolute host paths in plugin metadata files.""" - # -- shared orchestration ---------------------------------------------- def _sync_config_files(self, agent_name: str) -> None: """Sync config files common to all backends. - Copies agent config directory/files, cursor auth, gitconfig, - and rewrites plugin paths. Subclasses call this from their - public sync methods, wrapping with backend-specific prepare - and finalize steps. + Copies gitconfig to the container. + Subclasses call this from their public sync methods, wrapping + with backend-specific prepare and finalize steps. """ - from paude.agents import get_agent - - agent = get_agent(agent_name) home = Path.home() - agent_path = f"{CONFIG_PATH}/{agent_name}" - - config_synced = self._sync_agent_config(agent_path, agent, home) - self._sync_agent_config_file(agent_path, agent, home) - if agent_name == "cursor": - self._sync_cursor_auth(home) self._sync_gitconfig(home) - self._sync_global_gitignore(home) - if config_synced: - plugins_dir = home / agent.config.config_dir_name / "plugins" - if plugins_dir.is_dir(): - self._rewrite_plugin_paths(agent_path, agent, home) # -- shared step implementations --------------------------------------- - def _sync_agent_config(self, agent_path: str, agent: Agent, home: Path) -> bool: - """Sync agent config directory. 
Returns True on success."""
-        config_dir = home / agent.config.config_dir_name
-        if not config_dir.is_dir():
-            return True
-
-        if agent.config.config_sync_files_only:
-            for filename in agent.config.config_sync_files_only:
-                filepath = config_dir / filename
-                if filepath.exists():
-                    self._copy_file(
-                        str(filepath),
-                        f"{agent_path}/{filename}",
-                        context=f"copy agent config file {filename}",
-                    )
-            return True
-
-        return self._copy_dir(
-            str(config_dir),
-            agent_path,
-            excludes=list(agent.config.config_excludes),
-            context="copy agent config directory",
-        )
-
-    def _sync_agent_config_file(
-        self, agent_path: str, agent: Agent, home: Path
-    ) -> None:
-        """Sync agent config file (e.g., .claude.json)."""
-        if not agent.config.config_file_name:
-            return
-        config_file = home / agent.config.config_file_name
-        if config_file.is_file():
-            basename = config_file_basename(agent.config.config_file_name)
-            self._copy_file(
-                str(config_file),
-                f"{agent_path}/{basename}",
-                context=f"copy agent config file {agent.config.config_file_name}",
-            )
-
-    def _sync_cursor_auth(self, home: Path) -> None:
-        """Sync Cursor auth.json from ~/.config/cursor/."""
-        auth_json = home / ".config" / "cursor" / "auth.json"
-        if auth_json.is_file():
-            self._copy_file(
-                str(auth_json),
-                f"{CONFIG_PATH}/cursor-auth.json",
-                context="copy cursor auth.json",
-            )
-
     def _sync_gitconfig(self, home: Path) -> None:
         """Sync ~/.gitconfig."""
         gitconfig = home / ".gitconfig"
@@ -142,13 +53,3 @@ def _sync_gitconfig(self, home: Path) -> None:
             f"{CONFIG_PATH}/gitconfig",
             context="copy gitconfig",
         )
-
-    def _sync_global_gitignore(self, home: Path) -> None:
-        """Sync ~/.config/git/ignore (global gitignore)."""
-        global_gitignore = home / ".config" / "git" / "ignore"
-        if global_gitignore.is_file():
-            self._copy_file(
-                str(global_gitignore),
-                f"{CONFIG_PATH}/gitignore-global",
-                context="copy global gitignore",
-            )
diff --git a/src/paude/cli/create_openshift.py b/src/paude/cli/create_openshift.py
index
78df393..cf5ceba 100644 --- a/src/paude/cli/create_openshift.py +++ b/src/paude/cli/create_openshift.py @@ -155,6 +155,3 @@ def create_openshift_session( if config and config.post_create_command: _run_post_create_command(os_backend, session.name, config.post_create_command) - - # Start the agent after git push so wait_for_git finds .git immediately - os_backend.start_agent_headless(session.name) diff --git a/src/paude/cli/upgrade.py b/src/paude/cli/upgrade.py index 31b6ee9..7343a2a 100644 --- a/src/paude/cli/upgrade.py +++ b/src/paude/cli/upgrade.py @@ -621,12 +621,3 @@ def _upgrade_openshift( pname = pod_name(name) typer.echo(f"Waiting for pod {pname} to be ready...", err=True) backend._pod_waiter.wait_for_ready(pname) - - # Re-sync config (no credentials — proxy handles all auth) - backend._syncer.sync_full_config( - pname, - agent_name=agent_name, - provider=provider_name, - ) - - backend.start_agent_headless(name) diff --git a/tests/integration/test_podman_backend.py b/tests/integration/test_podman_backend.py index 6cbb241..087aac7 100644 --- a/tests/integration/test_podman_backend.py +++ b/tests/integration/test_podman_backend.py @@ -686,9 +686,6 @@ def test_gemini_agent_env_vars_set( check=True, ) - assert _get_container_env(container_name, "PAUDE_AGENT_SEED_DIR") == ( - "/tmp/gemini.seed" - ) assert _get_container_env(container_name, "PAUDE_AGENT_NAME") == "gemini" finally: diff --git a/tests/integration/test_upgrade_openshift.py b/tests/integration/test_upgrade_openshift.py index 65fb9da..4dc8d8c 100644 --- a/tests/integration/test_upgrade_openshift.py +++ b/tests/integration/test_upgrade_openshift.py @@ -4,7 +4,7 @@ import json from pathlib import Path -from unittest.mock import MagicMock, patch +from unittest.mock import patch import pytest @@ -58,6 +58,7 @@ def test_upgrade_preserves_volume_and_labels( yolo=True, agent="gemini", wait_for_ready=False, + env={"PAUDE_SKIP_AGENT_INSTALL": "1"}, ) openshift_backend.create_session(config) @@ -105,7 +106,7 @@ 
def test_upgrade_preserves_volume_and_labels( pre_labels = sts_json.get("metadata", {}).get("labels", {}) assert pre_labels.get(PAUDE_LABEL_VERSION) == OLD_VERSION - # 4. Run upgrade with mocked image building and config sync + # 4. Run upgrade with mocked image building with ( patch.object( openshift_backend, @@ -117,7 +118,6 @@ def test_upgrade_preserves_volume_and_labels( "ensure_proxy_image_via_build", return_value=kubernetes_test_image, ), - patch.object(openshift_backend, "_syncer_instance", MagicMock()), patch.object(openshift_backend, "start_agent_headless"), ): from paude.cli.upgrade import _upgrade_openshift diff --git a/tests/test_agents.py b/tests/test_agents.py index ec7b4bf..4d5b400 100644 --- a/tests/test_agents.py +++ b/tests/test_agents.py @@ -84,7 +84,6 @@ def test_defaults(self) -> None: assert cfg.env_vars == {} assert cfg.passthrough_env_vars == [] assert cfg.passthrough_env_prefixes == [] - assert cfg.config_excludes == [] assert cfg.activity_files == [] assert cfg.extra_domain_aliases == ["claude"] assert cfg.exposed_ports == [] @@ -130,14 +129,6 @@ def test_yolo_flag(self) -> None: def test_clear_command(self) -> None: assert ClaudeAgent().config.clear_command == "/clear" - def test_config_excludes_not_empty(self) -> None: - cfg = ClaudeAgent().config - assert len(cfg.config_excludes) > 0 - assert "/projects" in cfg.config_excludes - - def test_config_sync_files_only_empty(self) -> None: - assert ClaudeAgent().config.config_sync_files_only == [] - def test_passthrough_vars(self) -> None: cfg = ClaudeAgent().config assert "ANTHROPIC_VERTEX_PROJECT_ID" in cfg.passthrough_env_vars @@ -216,22 +207,7 @@ def test_mounts_claude_dir(self, tmp_path: Path) -> None: claude_dir = tmp_path / ".claude" claude_dir.mkdir() mounts = ClaudeAgent().host_config_mounts(tmp_path) - assert "-v" in mounts - assert any("/tmp/claude.seed:ro" in m for m in mounts) - - def test_mounts_plugins(self, tmp_path: Path) -> None: - claude_dir = tmp_path / ".claude" - 
claude_dir.mkdir() - plugins = claude_dir / "plugins" - plugins.mkdir() - mounts = ClaudeAgent().host_config_mounts(tmp_path) - assert any("plugins" in m and ":ro" in m for m in mounts) - - def test_mounts_claude_json(self, tmp_path: Path) -> None: - claude_json = tmp_path / ".claude.json" - claude_json.write_text("{}") - mounts = ClaudeAgent().host_config_mounts(tmp_path) - assert any("/tmp/claude.json.seed:ro" in m for m in mounts) + assert mounts == [] class TestClaudeAgentBuildEnvironment: @@ -348,12 +324,6 @@ def test_extra_domain_aliases(self) -> None: def test_env_vars_empty(self) -> None: assert GeminiAgent().config.env_vars == {} - def test_config_excludes_empty(self) -> None: - assert GeminiAgent().config.config_excludes == [] - - def test_config_sync_files_only_empty(self) -> None: - assert GeminiAgent().config.config_sync_files_only == [] - def test_activity_files_empty(self) -> None: assert GeminiAgent().config.activity_files == [] @@ -408,8 +378,7 @@ def test_mounts_gemini_dir(self, tmp_path: Path) -> None: gemini_dir = tmp_path / ".gemini" gemini_dir.mkdir() mounts = GeminiAgent().host_config_mounts(tmp_path) - assert "-v" in mounts - assert any("/tmp/gemini.seed:ro" in m for m in mounts) + assert mounts == [] def test_no_config_file_mount(self, tmp_path: Path) -> None: gemini_json = tmp_path / ".gemini.json" @@ -523,12 +492,6 @@ def test_env_vars(self) -> None: "NODE_USE_ENV_PROXY": "1", } - def test_config_excludes_empty(self) -> None: - assert CursorAgent().config.config_excludes == [] - - def test_config_sync_files_only(self) -> None: - assert CursorAgent().config.config_sync_files_only == ["cli-config.json"] - def test_activity_files_empty(self) -> None: assert CursorAgent().config.activity_files == [] @@ -610,23 +573,6 @@ def test_empty_when_dir_exists_but_no_cli_config(self, tmp_path: Path) -> None: mounts = CursorAgent().host_config_mounts(tmp_path) assert mounts == [] - def test_mounts_cli_config_json(self, tmp_path: Path) -> None: - 
cursor_dir = tmp_path / ".cursor" - cursor_dir.mkdir() - cli_config = cursor_dir / "cli-config.json" - cli_config.write_text("{}") - mounts = CursorAgent().host_config_mounts(tmp_path) - assert "-v" in mounts - assert any("/tmp/cursor-cli-config.seed:ro" in m for m in mounts) - - def test_does_not_mount_entire_cursor_dir(self, tmp_path: Path) -> None: - cursor_dir = tmp_path / ".cursor" - cursor_dir.mkdir() - (cursor_dir / "cli-config.json").write_text("{}") - (cursor_dir / "extensions").mkdir() - mounts = CursorAgent().host_config_mounts(tmp_path) - assert not any("cursor.seed" in m for m in mounts) - def test_mounts_auth_json_when_exists(self, tmp_path: Path) -> None: config_cursor = tmp_path / ".config" / "cursor" config_cursor.mkdir(parents=True) @@ -639,7 +585,7 @@ def test_no_auth_json_mount_when_missing(self, tmp_path: Path) -> None: mounts = CursorAgent().host_config_mounts(tmp_path) assert not any("cursor-auth.seed" in m for m in mounts) - def test_mounts_both_cli_config_and_auth_json(self, tmp_path: Path) -> None: + def test_mounts_only_auth_json_when_both_exist(self, tmp_path: Path) -> None: cursor_dir = tmp_path / ".cursor" cursor_dir.mkdir() (cursor_dir / "cli-config.json").write_text("{}") @@ -647,7 +593,7 @@ def test_mounts_both_cli_config_and_auth_json(self, tmp_path: Path) -> None: config_cursor.mkdir(parents=True) (config_cursor / "auth.json").write_text("{}") mounts = CursorAgent().host_config_mounts(tmp_path) - assert any("/tmp/cursor-cli-config.seed:ro" in m for m in mounts) + assert not any("cursor-cli-config.seed" in m for m in mounts) assert any("/tmp/cursor-auth.seed:ro" in m for m in mounts) @@ -693,10 +639,6 @@ def test_uses_jq(self) -> None: script = CursorAgent().apply_sandbox_config("/home/paude", "/workspace", "") assert "jq" in script - def test_seeds_from_host_cli_config(self) -> None: - script = CursorAgent().apply_sandbox_config("/home/paude", "/workspace", "") - assert "/tmp/cursor-cli-config.seed" in script - def 
test_home_path_parameterized(self) -> None: script = CursorAgent().apply_sandbox_config("/custom/home", "/workspace", "") assert "/custom/home/.cursor" in script @@ -1016,8 +958,7 @@ def test_mounts_openclaw_dir(self, tmp_path: Path) -> None: openclaw_dir = tmp_path / ".openclaw" openclaw_dir.mkdir() mounts = OpenClawAgent().host_config_mounts(tmp_path) - assert "-v" in mounts - assert any("/tmp/openclaw.seed:ro" in m for m in mounts) + assert mounts == [] class TestOpenClawAgentBuildEnvironment: diff --git a/tests/test_build_agent_env.py b/tests/test_build_agent_env.py index bae7d0d..793dd5f 100644 --- a/tests/test_build_agent_env.py +++ b/tests/test_build_agent_env.py @@ -3,7 +3,6 @@ from __future__ import annotations from paude.agents.base import AgentConfig -from paude.backends.shared import build_agent_env def _make_config(**overrides: object) -> AgentConfig: @@ -17,38 +16,3 @@ def _make_config(**overrides: object) -> AgentConfig: } defaults.update(overrides) return AgentConfig(**defaults) # type: ignore[arg-type] - - -class TestBuildAgentEnvSeedPaths: - """Tests for PAUDE_AGENT_SEED_DIR and PAUDE_AGENT_SEED_FILE env vars.""" - - def test_claude_config_seed_paths(self) -> None: - """Claude-like config produces correct seed paths.""" - config = _make_config( - name="claude", - config_file_name=".claude.json", - ) - env = build_agent_env(config) - assert env["PAUDE_AGENT_SEED_DIR"] == "/tmp/claude.seed" - assert env["PAUDE_AGENT_SEED_FILE"] == "/tmp/claude.json.seed" - - def test_no_config_file_produces_empty_seed_file(self) -> None: - """Agent with no config_file_name produces empty PAUDE_AGENT_SEED_FILE.""" - config = _make_config( - name="gemini", - config_file_name=None, - ) - env = build_agent_env(config) - assert env["PAUDE_AGENT_SEED_DIR"] == "/tmp/gemini.seed" - assert env["PAUDE_AGENT_SEED_FILE"] == "" - - def test_seed_dir_uses_agent_name(self) -> None: - """Seed dir is derived from agent name, not config dir.""" - config = _make_config( - 
name="codex",
-            config_dir_name=".codex-config",
-            config_file_name=".codex.json",
-        )
-        env = build_agent_env(config)
-        assert env["PAUDE_AGENT_SEED_DIR"] == "/tmp/codex.seed"
-        assert env["PAUDE_AGENT_SEED_FILE"] == "/tmp/codex.json.seed"
diff --git a/tests/test_entrypoint_seed_copy.py b/tests/test_entrypoint_seed_copy.py
index 83c943f..6049ab1 100644
--- a/tests/test_entrypoint_seed_copy.py
+++ b/tests/test_entrypoint_seed_copy.py
@@ -1,10 +1,8 @@
-"""Tests for entrypoint-session.sh seed copy logic (Podman backend).
+"""Tests for entrypoint-session.sh config logic (Podman backend).

-These tests exercise the bash seed copy block by extracting it into a
-minimal script, running it in a temporary directory, and verifying results.
-
-A contract test also validates that entrypoint-session.sh itself contains the
-expected cp -a pattern and not the old file-by-file loop.
+These tests exercise the bash config blocks (persist, sandbox) by
+extracting them into minimal scripts, running them in a temporary
+directory, and verifying results.
 """

 from __future__ import annotations
@@ -42,53 +40,6 @@ def _read_all_entrypoint_files() -> str:
     )


-def _build_script(home_dir: str, seed_dir: str, credentials_dir: str | None) -> str:
-    """Build a minimal bash script that replicates the seed copy logic.
-
-    Args:
-        home_dir: Path to use as HOME.
-        seed_dir: Path to use as /tmp/claude.seed.
-        credentials_dir: Path to use as /credentials, or None to skip.
-            When None, CRED_DIR is set to a non-existent path under home_dir.
- """ - # Guard: if credentials_dir is set, create it so the -d test passes - credentials_check = "" - if credentials_dir is not None: - credentials_check = f'mkdir -p "{credentials_dir}"' - - # When no credentials_dir, use a guaranteed-nonexistent path under tmp_path - cred_dir_value = credentials_dir or f"{home_dir}/.no-credentials" - - return textwrap.dedent(f"""\ - #!/bin/bash - set -e - export HOME="{home_dir}" - SEED_DIR="{seed_dir}" - CRED_DIR="{cred_dir_value}" - {credentials_check} - - # Replicate the seed copy block from entrypoint-session.sh - if [[ -d "$SEED_DIR" ]] && [[ ! -d "$CRED_DIR" ]]; then - mkdir -p "$HOME/.claude" - chmod g+rwX "$HOME/.claude" 2>/dev/null || true - - cp -Rp "$SEED_DIR/." "$HOME/.claude/" 2>/dev/null || true - - if [[ -f "$HOME/.claude/claude.json" ]]; then - cp -f "$HOME/.claude/claude.json" "$HOME/.claude.json" 2>/dev/null || true - rm -f "$HOME/.claude/claude.json" 2>/dev/null || true - chmod g+rw "$HOME/.claude.json" 2>/dev/null || true - fi - - if [[ -d "$HOME/.claude/plugins" ]]; then - chmod -R g+rwX "$HOME/.claude/plugins" 2>/dev/null || true - fi - - chmod -R g+rwX "$HOME/.claude" 2>/dev/null || true - fi - """) - - def _run_script(script: str) -> subprocess.CompletedProcess[str]: """Run a bash script and return the result.""" return subprocess.run( @@ -106,16 +57,6 @@ class TestEntrypointContract: If the entrypoint is reverted, these tests catch it. 
""" - def test_entrypoint_uses_recursive_copy(self) -> None: - """The entrypoint must use recursive cp for seed copy, not a file loop.""" - content = _read_all_entrypoint_files() - assert "cp -dR" in content, ( - "entrypoint files must use 'cp -dR' for recursive seed copy" - ) - assert "$AGENT_SEED_DIR" in content or "/tmp/claude.seed" in content, ( - "entrypoint files must reference seed directory variable" - ) - def test_entrypoint_sources_sandbox_config_script(self) -> None: """The entrypoint must source the Python-generated sandbox config script.""" content = ENTRYPOINT_PATH.read_text() @@ -126,17 +67,6 @@ def test_entrypoint_sources_sandbox_config_script(self) -> None: "entrypoint-session.sh must check PAUDE_SUPPRESS_PROMPTS before sourcing" ) - def test_entrypoint_checks_tmux_before_seed_copy(self) -> None: - """tmux has-session check must appear before the seed copy block.""" - content = ENTRYPOINT_PATH.read_text() - tmux_check_pos = content.find("tmux -u has-session") - seed_copy_pos = content.find('copy_agent_config "$AGENT_SEED_DIR"') - assert tmux_check_pos != -1, "entrypoint must check for existing tmux session" - assert seed_copy_pos != -1, "entrypoint must have seed copy block" - assert tmux_check_pos < seed_copy_pos, ( - "tmux session check must come before seed config copy" - ) - def test_entrypoint_checks_tmux_before_sandbox_config(self) -> None: """tmux has-session check must appear before sandbox config sourcing.""" content = ENTRYPOINT_PATH.read_text() @@ -171,254 +101,6 @@ def test_entrypoint_has_selinux_remediation(self) -> None: "chcon must use --reference=/pvc to inherit PVC SELinux context" ) - def test_entrypoint_no_old_file_loop(self) -> None: - """The old file-by-file loop pattern must not be present.""" - content = _read_all_entrypoint_files() - assert "for f in /tmp/claude.seed/*" not in content, ( - "entrypoint files still contain the old file-by-file loop" - ) - - def test_entrypoint_handles_claude_json_after_copy(self) -> None: - 
"""Config file must be moved (not copied separately) after recursive copy.""" - content = ENTRYPOINT_LIB_CONFIG_PATH.read_text() - # Find the recursive copy in copy_agent_config function - cp_pos = content.find("cp -dR --preserve=mode,timestamps") - if cp_pos == -1: - cp_pos = content.find('cp -a "$AGENT_SEED_DIR/."') - if cp_pos == -1: - cp_pos = content.find("cp -a /tmp/claude.seed/.") - assert cp_pos != -1, "Missing recursive copy command for seed dir" - # Find the mv that comes after this specific cp -a - mv_pos = max( - content.find("AGENT_CONFIG_FILE_BASENAME", cp_pos + 1), - content.find("claude.json", cp_pos + 1), - ) - assert mv_pos != -1, "Missing mv command for config file after cp -a" - assert mv_pos > cp_pos, "mv must come after cp -a" - - -class TestSeedCopyRegularFiles: - """Test that regular files are copied from seed.""" - - def test_copies_regular_files(self, tmp_path: Path) -> None: - """Regular files like settings.json are copied to ~/.claude/.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "settings.json").write_text('{"key": "value"}') - (seed / "projects.json").write_text("[]") - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude" / "settings.json").read_text() == '{"key": "value"}' - assert (home / ".claude" / "projects.json").read_text() == "[]" - - -class TestSeedCopyDirectories: - """Test that directories (like commands/) are recursively copied.""" - - def test_copies_directories_recursively(self, tmp_path: Path) -> None: - """Directories like commands/ with nested subdirs are fully copied.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - # Create commands/ with nested structure - commands = seed / "commands" - commands.mkdir() - (commands / "skill1.md").write_text("# Skill 1") - - subdir = commands / "subdir" - subdir.mkdir() - (subdir / 
"skill2.md").write_text("# Skill 2") - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude" / "commands" / "skill1.md").read_text() == "# Skill 1" - assert ( - home / ".claude" / "commands" / "subdir" / "skill2.md" - ).read_text() == "# Skill 2" - - -class TestSeedCopyHiddenFiles: - """Test that hidden files (dotfiles) are copied. - - The old glob-based loop (for f in seed/*) skipped hidden files. - cp -a copies everything including dotfiles, which is the desired behavior. - """ - - def test_copies_dotfiles(self, tmp_path: Path) -> None: - """Hidden files like .gitignore inside seed are copied.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / ".some-hidden-config").write_text("hidden") - (seed / "settings.json").write_text("{}") - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude" / ".some-hidden-config").read_text() == "hidden" - assert (home / ".claude" / "settings.json").read_text() == "{}" - - -class TestSeedCopySymlinks: - """Test symlink handling with cp -a. - - cp -a preserves symlinks (unlike the old cp -L which dereferenced them). - This matches the OpenShift backend behavior. Symlinks to files within the - seed tree should work; symlinks pointing outside will be preserved as-is. 
- """ - - def test_copies_symlinks_to_local_targets(self, tmp_path: Path) -> None: - """Symlinks pointing within the seed tree are preserved and functional.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "real-file.json").write_text('{"real": true}') - (seed / "link-to-file.json").symlink_to("real-file.json") - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - link_dest = home / ".claude" / "link-to-file.json" - assert link_dest.is_symlink() - assert link_dest.read_text() == '{"real": true}' - - -class TestSeedCopyClaudeJson: - """Test claude.json special handling.""" - - def test_claude_json_moved_to_home_root(self, tmp_path: Path) -> None: - """claude.json ends up at ~/.claude.json, not ~/.claude/claude.json.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "claude.json").write_text('{"config": true}') - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude.json").read_text() == '{"config": true}' - assert not (home / ".claude" / "claude.json").exists() - - def test_other_files_unaffected_by_claude_json_move(self, tmp_path: Path) -> None: - """Other files aren't disturbed when claude.json is moved.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "claude.json").write_text('{"config": true}') - (seed / "settings.json").write_text('{"settings": true}') - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude" / "settings.json").read_text() == '{"settings": true}' - assert (home / ".claude.json").read_text() == '{"config": true}' - - -class TestSeedCopySkipsWithCredentials: - """Test that seed copy is skipped when /credentials 
exists.""" - - def test_skips_when_credentials_dir_exists(self, tmp_path: Path) -> None: - """No copy happens when credentials directory exists (OpenShift path).""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - cred = tmp_path / "credentials" - # cred dir will be created by the script - - (seed / "settings.json").write_text('{"key": "value"}') - - script = _build_script(str(home), str(seed), str(cred)) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert not (home / ".claude").exists() - - -class TestSeedCopyEmptySeed: - """Test behavior with an empty seed directory.""" - - def test_empty_seed_creates_claude_dir_without_error(self, tmp_path: Path) -> None: - """Empty seed directory should succeed and create ~/.claude/.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - # seed is intentionally empty - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - assert (home / ".claude").is_dir() - # No claude.json should appear - assert not (home / ".claude.json").exists() - - -class TestSeedCopyMixedContent: - """Test copying a mix of files and directories.""" - - def test_copies_files_and_directories_together(self, tmp_path: Path) -> None: - """Mix of files, directories, and nested content all get copied.""" - home = tmp_path / "home" - home.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - # Regular files - (seed / "settings.json").write_text('{"settings": true}') - (seed / "claude.json").write_text('{"claude": true}') - - # Directory with files - commands = seed / "commands" - commands.mkdir() - (commands / "my-skill.md").write_text("# My Skill") - - # Plugins directory - plugins = seed / "plugins" - plugins.mkdir() - (plugins / "plugin.json").write_text('{"plugin": true}') - - script = _build_script(str(home), str(seed), None) - result = _run_script(script) - assert 
result.returncode == 0, result.stderr - - # Regular file copied - assert (home / ".claude" / "settings.json").read_text() == '{"settings": true}' - # claude.json moved to home root - assert (home / ".claude.json").read_text() == '{"claude": true}' - assert not (home / ".claude" / "claude.json").exists() - # Directory copied - assert ( - home / ".claude" / "commands" / "my-skill.md" - ).read_text() == "# My Skill" - # Plugins directory copied - assert ( - home / ".claude" / "plugins" / "plugin.json" - ).read_text() == '{"plugin": true}' - def _build_gemini_sandbox_script( home_dir: str, @@ -830,50 +512,6 @@ def _build_persist_script( """) -def _build_persist_and_copy_script( - home_dir: str, - pvc_dir: str, - seed_dir: str, - agent_config_dir: str = ".claude", - agent_config_file: str = ".claude.json", -) -> str: - """Build a script that runs persist_agent_config then copy_agent_config.""" - persist_fn = _persist_bash_function(pvc_dir) - return textwrap.dedent(f"""\ - #!/bin/bash - set -e - export HOME="{home_dir}" - AGENT_CONFIG_DIR="{agent_config_dir}" - AGENT_CONFIG_FILE="{agent_config_file}" - AGENT_CONFIG_FILE_BASENAME="${{AGENT_CONFIG_FILE#.}}" - - {persist_fn} - copy_agent_config() {{ - local source_path="$1" - - mkdir -p "$HOME/$AGENT_CONFIG_DIR" - chmod g+rwX "$HOME/$AGENT_CONFIG_DIR" 2>/dev/null || true - - cp -Rp "$source_path/." 
"$HOME/$AGENT_CONFIG_DIR/" 2>/dev/null || true - - if [[ -n "$AGENT_CONFIG_FILE" ]] && [[ -n "$AGENT_CONFIG_FILE_BASENAME" ]] && [[ -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" ]]; then - cp -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true - rm -f "$HOME/$AGENT_CONFIG_DIR/$AGENT_CONFIG_FILE_BASENAME" 2>/dev/null || true - chmod g+rw "$HOME/$AGENT_CONFIG_FILE" 2>/dev/null || true - fi - - if [[ -d "$HOME/$AGENT_CONFIG_DIR/plugins" ]]; then - chmod -R g+rwX "$HOME/$AGENT_CONFIG_DIR/plugins" 2>/dev/null || true - fi - - chmod -R g+rwX "$HOME/$AGENT_CONFIG_DIR" 2>/dev/null || true - }} - - persist_agent_config - copy_agent_config "{seed_dir}" - """) - - class TestPersistAgentConfig: """Tests for persist_agent_config() — symlinks config to PVC.""" @@ -1052,36 +690,6 @@ def test_setup_credentials_called_before_persist(self) -> None: "so host config is merged into PVC without clobbering runtime state" ) - def test_copy_agent_config_skips_runtime_dirs(self) -> None: - """copy_agent_config skip list must match Python _CLAUDE_CONFIG_EXCLUDES.""" - from paude.agents.claude import _CLAUDE_CONFIG_EXCLUDES - - content = ENTRYPOINT_LIB_CONFIG_PATH.read_text() - func_start = content.find("copy_agent_config()") - func_end = content.find("\n}", func_start) - func_body = content[func_start:func_end] - - for pattern in _CLAUDE_CONFIG_EXCLUDES: - name = pattern.lstrip("/") - assert name in func_body, ( - f"copy_agent_config must skip '{name}' — present in " - f"_CLAUDE_CONFIG_EXCLUDES but missing from entrypoint case statement" - ) - - def test_entrypoint_uses_cp_not_mv_for_config_file(self) -> None: - """copy_agent_config must use cp -f (not mv) for config file relocation.""" - content = ENTRYPOINT_LIB_CONFIG_PATH.read_text() - # Find copy_agent_config function body - func_start = content.find("copy_agent_config()") - func_end = content.find("\n}", func_start) - func_body = content[func_start:func_end] - assert "cp -f" 
in func_body, ( - "copy_agent_config must use 'cp -f' to write through symlinks" - ) - assert 'mv "$HOME/$AGENT_CONFIG_DIR' not in func_body, ( - "copy_agent_config must not use 'mv' which breaks symlinks" - ) - def test_sandbox_config_python_uses_cp_for_claude_json(self) -> None: """Claude agent's apply_sandbox_config must use cp+rm, not mv.""" from paude.agents.claude import ClaudeAgent @@ -1096,86 +704,6 @@ def test_sandbox_config_python_uses_cp_for_claude_json(self) -> None: ) -class TestCopyThroughSymlinks: - """Tests that copy_agent_config works correctly through symlinks.""" - - def test_seed_copy_writes_through_symlink(self, tmp_path: Path) -> None: - """Seed config copy writes into PVC through the symlink.""" - home = tmp_path / "home" - home.mkdir() - pvc = tmp_path / "pvc" - pvc.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "settings.json").write_text('{"from": "seed"}') - (seed / "claude.json").write_text('{"config": true}') - - script = _build_persist_and_copy_script(str(home), str(pvc), str(seed)) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - # Data lives on PVC - assert (pvc / ".claude" / "settings.json").read_text() == '{"from": "seed"}' - # Config file written through symlink - assert json.loads((pvc / ".claude.json").read_text())["config"] is True - # Accessible through HOME symlinks - assert (home / ".claude" / "settings.json").read_text() == '{"from": "seed"}' - assert json.loads((home / ".claude.json").read_text())["config"] is True - - def test_seed_copy_preserves_existing_pvc_files(self, tmp_path: Path) -> None: - """Seed copy is additive: existing PVC files not in seed survive.""" - home = tmp_path / "home" - home.mkdir() - pvc = tmp_path / "pvc" - pvc.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - # Pre-existing PVC state (from previous container) - pvc_claude = pvc / ".claude" - pvc_claude.mkdir() - (pvc_claude / "history.jsonl").write_text("old-history\n") - projects = pvc_claude 
/ "projects" - projects.mkdir() - (projects / "session.json").write_text('{"old": "session"}') - - # Seed has some config files - (seed / "settings.json").write_text('{"new": "settings"}') - - script = _build_persist_and_copy_script(str(home), str(pvc), str(seed)) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - # New seed content is applied - assert (pvc / ".claude" / "settings.json").read_text() == '{"new": "settings"}' - # Old PVC state survives (additive copy) - assert (pvc / ".claude" / "history.jsonl").read_text() == "old-history\n" - assert ( - pvc / ".claude" / "projects" / "session.json" - ).read_text() == '{"old": "session"}' - - def test_config_file_symlink_preserved_after_copy(self, tmp_path: Path) -> None: - """Config file symlink is not broken by copy_agent_config's cp -f.""" - home = tmp_path / "home" - home.mkdir() - pvc = tmp_path / "pvc" - pvc.mkdir() - seed = tmp_path / "seed" - seed.mkdir() - - (seed / "claude.json").write_text('{"seeded": true}') - - script = _build_persist_and_copy_script(str(home), str(pvc), str(seed)) - result = _run_script(script) - assert result.returncode == 0, result.stderr - - # Symlink is preserved (not replaced by a regular file) - assert (home / ".claude.json").is_symlink() - # Data went through to PVC - assert json.loads((pvc / ".claude.json").read_text())["seeded"] is True - - class TestCursorSandboxConfig: """Tests for Cursor agent sandbox config generation and execution.""" diff --git a/tests/test_mounts.py b/tests/test_mounts.py index 0fe59bc..bef2124 100644 --- a/tests/test_mounts.py +++ b/tests/test_mounts.py @@ -35,8 +35,8 @@ def test_gcloud_not_bind_mounted(self, tmp_path: Path): assert ".config/gcloud" not in mount_str - def test_claude_seed_mount_read_only(self, tmp_path: Path): - """Claude seed mount is read-only when present.""" + def test_claude_no_seed_mount(self, tmp_path: Path): + """Claude agent no longer bind-mounts ~/.claude as a seed.""" home = tmp_path / "home" 
home.mkdir() claude = home / ".claude" @@ -45,23 +45,7 @@ def test_claude_seed_mount_read_only(self, tmp_path: Path): mounts = build_mounts(home) mount_str = " ".join(mounts) - assert "/tmp/claude.seed:ro" in mount_str - - def test_plugins_mounted_at_original_path(self, tmp_path: Path): - """Plugins mounted at original host path.""" - home = tmp_path / "home" - home.mkdir() - claude = home / ".claude" - claude.mkdir() - plugins = claude / "plugins" - plugins.mkdir() - - mounts = build_mounts(home) - mount_str = " ".join(mounts) - - # Plugins should be mounted at their original path, not /tmp/ - assert str(plugins) in mount_str - assert f"{plugins}:{plugins}:ro" in mount_str + assert "claude.seed" not in mount_str def test_gitconfig_mount_read_only(self, tmp_path: Path): """gitconfig mount is read-only when present.""" @@ -75,18 +59,6 @@ def test_gitconfig_mount_read_only(self, tmp_path: Path): assert "/home/paude/.gitconfig:ro" in mount_str - def test_claude_json_mount_read_only(self, tmp_path: Path): - """claude.json mount is read-only when present.""" - home = tmp_path / "home" - home.mkdir() - claude_json = home / ".claude.json" - claude_json.write_text('{"settings": {}}') - - mounts = build_mounts(home) - mount_str = " ".join(mounts) - - assert "/tmp/claude.json.seed:ro" in mount_str - def test_include_config_false_skips_all_config_mounts(self, tmp_path: Path): """include_config=False returns no config or gitconfig mounts.""" home = tmp_path / "home" @@ -103,7 +75,7 @@ def test_include_config_true_is_default(self, tmp_path: Path): """Default behavior includes config mounts.""" home = tmp_path / "home" home.mkdir() - (home / ".claude").mkdir() + (home / ".gitconfig").write_text("[user]\n name = Test\n") mounts_default = build_mounts(home) mounts_explicit = build_mounts(home, include_config=True) diff --git a/tests/test_openshift_backend.py b/tests/test_openshift_backend.py index 1896df5..0431d3d 100644 --- a/tests/test_openshift_backend.py +++ 
b/tests/test_openshift_backend.py @@ -17,6 +17,7 @@ OpenShiftBackend, OpenShiftConfig, ) +from paude.backends.openshift.resources import StatefulSetBuilder _FAKE_CA = ("FAKE_CERT_PEM", "FAKE_KEY_PEM") @@ -332,15 +333,16 @@ def run_side_effect(*args, **kwargs): return_value=_FAKE_CA, ) @patch("subprocess.run") - def test_create_session_waits_for_pod_and_syncs_config( + def test_create_session_waits_for_pod_and_creates_configmap( self, mock_run: MagicMock, mock_ca: MagicMock ) -> None: - """Create session waits for pod ready and syncs config.""" - calls_log = [] + """Create session waits for pod ready and creates ConfigMap.""" + calls_log: list[Any] = [] def run_side_effect(*args, **kwargs): cmd = args[0] if args else kwargs.get("args", []) - calls_log.append(cmd) + input_data = kwargs.get("input") + calls_log.append((cmd, input_data)) # Return "Running" for pod status check if "get" in cmd and "pod" in cmd and "jsonpath" in str(cmd): return MagicMock(returncode=0, stdout="Running", stderr="") @@ -364,13 +366,24 @@ def run_side_effect(*args, **kwargs): # Verify pod status check was called (waiting for pod ready) pod_status_calls = [ - c for c in calls_log if "get" in c and "pod" in c and "jsonpath" in str(c) + c + for c, _ in calls_log + if "get" in c and "pod" in c and "jsonpath" in str(c) ] assert len(pod_status_calls) >= 1, "Should check pod status" - # Verify sync was called (exec mkdir for config directory) - sync_calls = [c for c in calls_log if "exec" in c and "mkdir" in str(c)] - assert len(sync_calls) >= 1, "Should sync config to pod" + # Verify ConfigMap was applied (contains config data) + apply_inputs = [inp for c, inp in calls_log if "apply" in c and inp is not None] + configmap_applied = any( + '"kind": "ConfigMap"' in inp or '"kind":"ConfigMap"' in inp + for inp in apply_inputs + if isinstance(inp, str) + ) + assert configmap_applied, "Should apply ConfigMap" + + # Verify no oc exec/cp calls (no imperative sync) + exec_calls = [c for c, _ in calls_log 
if "exec" in c] + assert len(exec_calls) == 0, "Should not use oc exec for config sync" # Verify session is returned as running assert session.status == "running" @@ -1060,7 +1073,6 @@ def _make_connector(self, context: str | None = None) -> Any: namespace="test-ns", config=config, lookup=MagicMock(), - syncer=MagicMock(), ) def test_exec_cmd_without_port_urls(self) -> None: @@ -2255,51 +2267,12 @@ def run_side_effect(*args, **kwargs): assert "check_proxy_ready" not in call_order -class TestConnectSessionRefreshesCredentials: - """Tests for connect_session syncing credentials/config.""" +class TestConnectSessionNoSync: + """Tests for connect_session without sync (config mounted via ConfigMap).""" @patch("subprocess.run") - def test_connect_session_full_sync_on_first_connect( - self, mock_run: MagicMock - ) -> None: - """connect_session calls _syncer.sync_full_config on first connect. - - When .ready doesn't exist (first connect after start), full config - sync is performed including gcloud, claude config, and gitconfig. 
- """ - - def run_side_effect(*args, **kwargs): - cmd = args[0] if args else kwargs.get("args", []) - - if "get" in cmd and "pod" in cmd: - return MagicMock(returncode=0, stdout="Running", stderr="") - return MagicMock(returncode=0, stdout="", stderr="") - - mock_run.side_effect = run_side_effect - - backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - - # Mock _syncer.is_config_synced to return False (first connect) - with patch.object(backend._syncer, "is_config_synced", return_value=False): - with patch.object(backend._syncer, "sync_full_config") as mock_full_sync: - with patch.object(backend._syncer, "sync_credentials") as mock_creds: - with patch("subprocess.run", mock_run): - backend.connect_session("test") - - # First connect: full config sync - mock_full_sync.assert_called_once() - mock_creds.assert_not_called() - assert "paude-test-0" in str(mock_full_sync.call_args) - - @patch("subprocess.run") - def test_connect_session_credentials_only_on_reconnect( - self, mock_run: MagicMock - ) -> None: - """connect_session calls _syncer.sync_credentials on reconnect. - - When .ready exists (reconnect), only gcloud credentials are refreshed - for faster reconnection. 
- """ + def test_connect_session_does_not_sync(self, mock_run: MagicMock) -> None: + """connect_session does not sync config — ConfigMap handles it.""" def run_side_effect(*args, **kwargs): cmd = args[0] if args else kwargs.get("args", []) @@ -2311,64 +2284,43 @@ def run_side_effect(*args, **kwargs): mock_run.side_effect = run_side_effect backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) + backend.connect_session("test") - # Mock _syncer.is_config_synced to return True (reconnect) - with patch.object(backend._syncer, "is_config_synced", return_value=True): - with patch.object(backend._syncer, "sync_full_config") as mock_full_sync: - with patch.object(backend._syncer, "sync_credentials") as mock_creds: - with patch("subprocess.run", mock_run): - backend.connect_session("test") - - # Reconnect: credentials only - mock_creds.assert_called_once() - mock_full_sync.assert_not_called() - assert "paude-test-0" in str(mock_creds.call_args) + # No oc exec/cp calls for sync + exec_calls = [ + c for c in mock_run.call_args_list if "exec" in str(c) and "mkdir" in str(c) + ] + assert len(exec_calls) == 0 @patch("subprocess.run") - def test_connect_session_does_not_sync_when_pod_not_running( + def test_connect_session_returns_1_when_pod_not_running( self, mock_run: MagicMock ) -> None: - """connect_session does not sync if pod is not running.""" + """connect_session returns 1 if pod is not running.""" mock_run.return_value = MagicMock(returncode=0, stdout="Pending", stderr="") backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - - with patch.object(backend._syncer, "sync_full_config") as mock_full_sync: - with patch.object(backend._syncer, "sync_credentials") as mock_creds: - result = backend.connect_session("test") - - # Should return 1 (error) without syncing - assert result == 1 - mock_full_sync.assert_not_called() - mock_creds.assert_not_called() + result = backend.connect_session("test") + assert result == 1 @patch("subprocess.run") 
- def test_connect_session_does_not_sync_when_pod_not_found( + def test_connect_session_returns_1_when_pod_not_found( self, mock_run: MagicMock ) -> None: - """connect_session does not sync if pod doesn't exist.""" - # Simulate pod not found - returncode != 0 + """connect_session returns 1 if pod doesn't exist.""" mock_run.return_value = MagicMock( returncode=1, stdout="", stderr="pod not found" ) backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - - with patch.object(backend._syncer, "sync_full_config") as mock_full_sync: - with patch.object(backend._syncer, "sync_credentials") as mock_creds: - result = backend.connect_session("test") - - # Should return 1 (error) without syncing - assert result == 1 - mock_full_sync.assert_not_called() - mock_creds.assert_not_called() + result = backend.connect_session("test") + assert result == 1 @patch("subprocess.run") def test_connect_session_shows_empty_workspace_message( self, mock_run: MagicMock, capsys: Any ) -> None: """connect_session shows message when workspace is empty.""" - call_order = [] def run_side_effect(*args, **kwargs): cmd = args[0] if args else kwargs.get("args", []) @@ -2378,17 +2330,13 @@ def run_side_effect(*args, **kwargs): return MagicMock(returncode=0, stdout="Running", stderr="") # Empty workspace - no .git directory if "test" in cmd and "-d" in cmd and ".git" in cmd_str: - call_order.append("check_git_dir") return MagicMock(returncode=1, stdout="", stderr="") return MagicMock(returncode=0, stdout="", stderr="") mock_run.side_effect = run_side_effect backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - - with patch.object(backend._syncer, "is_config_synced", return_value=True): - with patch.object(backend._syncer, "sync_credentials"): - backend.connect_session("test") + backend.connect_session("test") captured = capsys.readouterr() assert "Workspace is empty" in captured.err @@ -2396,182 +2344,6 @@ def run_side_effect(*args, **kwargs): assert "git push 
paude-test main" in captured.err -class TestIsConfigSynced: - """Tests for _syncer.is_config_synced method.""" - - @patch("subprocess.run") - def test_returns_true_when_ready_exists(self, mock_run: MagicMock) -> None: - """_syncer.is_config_synced returns True when .ready file exists.""" - mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="") - - backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - result = backend._syncer.is_config_synced("paude-test-0") - - assert result is True - # Verify it runs test -f /credentials/.ready - cmd_str = str(mock_run.call_args) - assert "test" in cmd_str - assert "-f" in cmd_str - assert "/credentials/.ready" in cmd_str - - @patch("subprocess.run") - def test_returns_false_when_ready_missing(self, mock_run: MagicMock) -> None: - """_syncer.is_config_synced returns False when .ready file doesn't exist.""" - mock_run.return_value = MagicMock(returncode=1, stdout="", stderr="") - - backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - result = backend._syncer.is_config_synced("paude-test-0") - - assert result is False - - -class TestSyncCredentialsToPod: - """Tests for _syncer.sync_credentials method (fast credential refresh).""" - - @patch("subprocess.run") - def test_syncs_gcloud_files(self, mock_run: MagicMock, tmp_path: Path) -> None: - """_syncer.sync_credentials syncs gcloud credential files.""" - mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="") - - # Create fake gcloud credentials - gcloud_dir = tmp_path / ".config" / "gcloud" - gcloud_dir.mkdir(parents=True) - (gcloud_dir / "application_default_credentials.json").write_text("{}") - (gcloud_dir / "credentials.db").write_text("") - (gcloud_dir / "access_tokens.db").write_text("") - - backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns")) - - with patch("pathlib.Path.home", return_value=tmp_path): - backend._syncer.sync_credentials("paude-test-0", verbose=True) - - # Verify oc cp was 
called for gcloud files
-        cp_calls = [c for c in mock_run.call_args_list if "cp" in str(c)]
-        gcloud_cp_calls = [c for c in cp_calls if "gcloud" in str(c)]
-        assert len(gcloud_cp_calls) >= 1
-
-    @patch("subprocess.run")
-    def test_touches_ready_marker(self, mock_run: MagicMock, tmp_path: Path) -> None:
-        """_syncer.sync_credentials touches .ready marker."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer.sync_credentials("paude-test-0")
-
-        # Verify touch .ready was called
-        touch_calls = [
-            c
-            for c in mock_run.call_args_list
-            if "touch" in str(c) and ".ready" in str(c)
-        ]
-        assert len(touch_calls) >= 1
-
-    @patch("subprocess.run")
-    def test_does_not_sync_claude_or_git_config(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_credentials only syncs gcloud, not claude/git config."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create fake configs
-        gcloud_dir = tmp_path / ".config" / "gcloud"
-        gcloud_dir.mkdir(parents=True)
-        (gcloud_dir / "application_default_credentials.json").write_text("{}")
-
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-
-        (tmp_path / ".gitconfig").write_text("[user]\nname = Test")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer.sync_credentials("paude-test-0")
-
-        # Verify no claude or git syncing
-        all_calls_str = str(mock_run.call_args_list)
-        assert "rsync" not in all_calls_str  # No rsync for claude dir
-        assert "gitconfig" not in all_calls_str  # No gitconfig sync
-
-
-class TestSyncCursorAuthJson:
-    """Tests for Cursor auth.json sync in ConfigSyncer."""
-
-    @patch("subprocess.run")
-    def test_sync_config_files_syncs_auth_json_for_cursor(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_sync_config_files syncs auth.json for cursor agent."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create fake cursor config and auth.json
-        cursor_dir = tmp_path / ".cursor"
-        cursor_dir.mkdir()
-        (cursor_dir / "cli-config.json").write_text("{}")
-        config_cursor = tmp_path / ".config" / "cursor"
-        config_cursor.mkdir(parents=True)
-        (config_cursor / "auth.json").write_text('{"accessToken": "test"}')
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer._sync_config_files("cursor")
-
-        # Verify oc cp was called for cursor-auth.json
-        cp_calls = [c for c in mock_run.call_args_list if "cp" in str(c)]
-        auth_cp_calls = [c for c in cp_calls if "cursor-auth.json" in str(c)]
-        assert len(auth_cp_calls) >= 1
-
-    @patch("subprocess.run")
-    def test_sync_config_files_does_not_sync_auth_json_for_claude(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_sync_config_files does NOT sync auth.json for non-cursor agents."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create fake claude config
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-
-        # Create fake cursor auth.json (should NOT be synced for claude agent)
-        config_cursor = tmp_path / ".config" / "cursor"
-        config_cursor.mkdir(parents=True)
-        (config_cursor / "auth.json").write_text('{"accessToken": "test"}')
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer._sync_config_files("claude")
-
-        # Verify cursor-auth.json was NOT synced
-        all_calls_str = str(mock_run.call_args_list)
-        assert "cursor-auth.json" not in all_calls_str
-
-    @patch("subprocess.run")
-    def test_sync_credentials_does_not_sync_auth_json_for_claude(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """sync_credentials does NOT sync auth.json for non-cursor agents."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        config_cursor = tmp_path / ".config" / "cursor"
-        config_cursor.mkdir(parents=True)
-        (config_cursor / "auth.json").write_text('{"accessToken": "test"}')
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer.sync_credentials("test-pod-0", agent_name="claude")
-
-        all_calls_str = str(mock_run.call_args_list)
-        assert "cursor-auth.json" not in all_calls_str
-
     @patch(
         "paude.backends.openshift.certs.generate_ca_cert",
         return_value=_FAKE_CA,
@@ -2619,623 +2391,125 @@ def run_side_effect(*args, **kwargs):
         assert len(proxy_policy_calls) >= 1


-class TestSyncConfigToPod:
-    """Tests for _syncer.sync_full_config method (tmpfs-based credential sync)."""
-
-    @patch("subprocess.run")
-    def test_creates_config_directory_structure(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config creates /credentials directory structure."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find the exec call that creates the directory structure
-        exec_calls = [
-            c
-            for c in mock_run.call_args_list
-            if "exec" in str(c) and "mkdir -p" in str(c)
-        ]
-        assert len(exec_calls) >= 1
-
-        # Verify the command includes mkdir (idempotent) and chmod, but NOT rm -rf
-        # Using mkdir -p instead of rm -rf preserves working directories
-        exec_cmd = str(exec_calls[0])
-        assert "rm -rf /credentials" not in exec_cmd
-        assert "mkdir -p /credentials/gcloud /credentials/claude" in exec_cmd
-        assert "chmod -R g+rwX /credentials" in exec_cmd
-
-    @patch("subprocess.run")
-    def test_syncs_claude_config_files(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config syncs claude config directory via rsync."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create mock claude files
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir(parents=True)
-        (claude_dir / "settings.json").write_text("{}")
-        (claude_dir / "credentials.json").write_text("{}")
-        (tmp_path / ".claude.json").write_text("{}")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find rsync calls (now using rsync for ~/.claude/)
-        rsync_calls = [c for c in mock_run.call_args_list if "rsync" in str(c)]
-        assert len(rsync_calls) >= 1, "Should use rsync for ~/.claude/ directory"
-
-        # Verify rsync targets claude directory
-        rsync_calls_str = str(rsync_calls)
-        assert ".claude" in rsync_calls_str
-
-        # .claude.json is still synced separately via cp
-        cp_calls = [c for c in mock_run.call_args_list if "cp" in str(c)]
-        cp_calls_str = str(cp_calls)
-        assert ".claude.json" in cp_calls_str
-
-    @patch("subprocess.run")
-    def test_syncs_gitconfig(self, mock_run: MagicMock, tmp_path: Path) -> None:
-        """_syncer.sync_full_config syncs gitconfig."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create mock gitconfig
-        (tmp_path / ".gitconfig").write_text("[user]\nname = Test")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find oc cp call for gitconfig
-        cp_calls = [c for c in mock_run.call_args_list if "cp" in str(c)]
-        cp_calls_str = str(cp_calls)
+class TestConfigMapBuilder:
+    """Tests for build_config_map and ConfigMap-based StatefulSet configuration."""

-        assert ".gitconfig" in cp_calls_str
-        assert "/credentials/gitconfig" in cp_calls_str
-
-    @patch("subprocess.run")
-    def test_syncs_global_gitignore(self, mock_run: MagicMock, tmp_path: Path) -> None:
-        """_syncer.sync_full_config syncs global gitignore from ~/.config/git/ignore."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create mock global gitignore
-        git_config_dir = tmp_path / ".config" / "git"
-        git_config_dir.mkdir(parents=True)
-        (git_config_dir / "ignore").write_text("**/.claude/settings.local.json\n")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find oc cp call for global gitignore
-        cp_calls = [c for c in mock_run.call_args_list if "cp" in str(c)]
-        cp_calls_str = str(cp_calls)
-
-        assert ".config/git/ignore" in cp_calls_str
-        assert "/credentials/gitignore-global" in cp_calls_str
-
-    @patch("subprocess.run")
-    def test_skips_global_gitignore_when_missing(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config skips global gitignore when it doesn't exist."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Don't create the global gitignore file
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Should not have a cp call for gitignore-global
-        cp_calls_str = str(mock_run.call_args_list)
-        assert "gitignore-global" not in cp_calls_str
-
-    @patch("subprocess.run")
-    def test_sync_full_config_copies_global_gitignore(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config copies global gitignore."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # Create mock global gitignore
-        git_config_dir = tmp_path / ".config" / "git"
-        git_config_dir.mkdir(parents=True)
-        (git_config_dir / "ignore").write_text("*.log\n")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Verify oc cp was called for gitignore-global
-        cp_calls = [c for c in mock_run.call_args_list if "gitignore-global" in str(c)]
-        assert len(cp_calls) >= 1
-
-    @patch("subprocess.run")
-    def test_creates_ready_marker(self, mock_run: MagicMock, tmp_path: Path) -> None:
-        """_syncer.sync_full_config creates .ready marker file."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find the exec call that creates the .ready marker
-        exec_calls = [
-            c
-            for c in mock_run.call_args_list
-            if "exec" in str(c) and ".ready" in str(c)
-        ]
-        # Should have at least 2 calls: one to create .ready, one to verify
-        assert len(exec_calls) >= 2
-
-        # Verify touch .ready is in the create command
-        create_cmd = str(exec_calls[0])
-        assert "touch /credentials/.ready" in create_cmd
-
-        # Verify chmod is wrapped with error suppression (non-fatal)
-        assert "2>/dev/null || true" in create_cmd
-
-        # Verify there's a test -f call to verify .ready was created
-        verify_cmd = str(exec_calls[1])
-        assert "test" in verify_cmd
-        assert "/credentials/.ready" in verify_cmd
-
-    @patch("subprocess.run")
-    def test_warns_when_ready_marker_fails(
-        self, mock_run: MagicMock, tmp_path: Path, capsys: Any
-    ) -> None:
-        """_syncer.sync_full_config warns if .ready marker creation fails."""
-
-        def run_side_effect(*args, **kwargs):
-            cmd = args[0] if args else kwargs.get("args", [])
-            cmd_str = " ".join(cmd) if isinstance(cmd, list) else str(cmd)
+    def test_build_config_map_contains_required_keys(self) -> None:
+        """ConfigMap contains gcloud-adc, agent-sandbox-config.sh, and .ready."""
+        from paude.backends.openshift.resources import build_config_map

-            # Fail the test -f verification
-            if "test" in cmd and "-f" in cmd and ".ready" in cmd_str:
-                return MagicMock(returncode=1, stdout="", stderr="")
-            return MagicMock(returncode=0, stdout="", stderr="")
+        cm = build_config_map("test-session", "test-ns")
+        data = cm["data"]

-        mock_run.side_effect = run_side_effect
+        assert "gcloud-adc" in data
+        assert "agent-sandbox-config.sh" in data
+        assert ".ready" in data
+        assert data[".ready"] == ""

-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
+    def test_build_config_map_includes_gitconfig_when_available(self) -> None:
+        """ConfigMap includes gitconfig when git user config is available."""
+        from paude.backends.openshift.resources import build_config_map

-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
+        with patch(
+            "paude.backends.openshift.resources._read_git_user_config",
+            return_value="[user]\n\tname = Test\n\temail = test@example.com\n",
+        ):
+            cm = build_config_map("test-session", "test-ns")

-        captured = capsys.readouterr()
-        assert "Warning: Failed to create" in captured.err
-        assert ".ready" in captured.err
+        assert "gitconfig" in cm["data"]
+        assert "Test" in cm["data"]["gitconfig"]

-    @patch("subprocess.run")
-    def test_handles_missing_files_gracefully(
-        self, mock_run: MagicMock, tmp_path: Path
+    def test_build_config_map_includes_empty_gitconfig_when_no_host_config(
+        self,
     ) -> None:
-        """_syncer.sync_full_config doesn't fail when files are missing."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        # No credential files exist in tmp_path
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        # Should not raise
-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Should still create the directory structure and .ready marker
-        calls_str = str(mock_run.call_args_list)
-        assert "mkdir -p /credentials/gcloud /credentials/claude" in calls_str
-        assert ".ready" in calls_str
-
-    @patch("subprocess.run")
-    def test_raises_on_mkdir_failure(self, mock_run: MagicMock, tmp_path: Path) -> None:
-        """_syncer.sync_full_config raises OpenShiftError when mkdir fails."""
-        from paude.backends.openshift import OpenShiftError
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            cmd = args[0] if args else []
-            # Fail the mkdir command (first exec call)
-            if "exec" in cmd and "mkdir" in str(cmd):
-                return MagicMock(
-                    returncode=1,
-                    stdout="",
-                    stderr="mkdir: cannot create directory: No space left",
-                )
-            return MagicMock(returncode=0, stdout="", stderr="")
+        """ConfigMap includes empty gitconfig when no git user config exists."""
+        from paude.backends.openshift.resources import build_config_map

-        mock_run.side_effect = mock_run_side_effect
+        with patch(
+            "paude.backends.openshift.resources._read_git_user_config",
+            return_value="",
+        ):
+            cm = build_config_map("test-session", "test-ns")

-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
+        assert cm["data"]["gitconfig"] == ""

-        with patch.object(Path, "home", return_value=tmp_path):
-            with pytest.raises(OpenShiftError) as exc_info:
-                backend._syncer.sync_full_config("test-pod-0")
+    def test_build_config_map_metadata(self) -> None:
+        """ConfigMap has correct metadata and labels."""
+        from paude.backends.openshift.resources import build_config_map

-        assert "Failed to prepare config directory" in str(exc_info.value)
+        cm = build_config_map("my-session", "my-ns")

-    @patch("subprocess.run")
-    def test_exec_calls_use_extended_timeout(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config exec calls use OC_EXEC_TIMEOUT (not default)."""
-        mock_run.return_value = MagicMock(returncode=0, stdout="", stderr="")
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
+        assert cm["kind"] == "ConfigMap"
+        assert cm["metadata"]["name"] == "paude-config-my-session"
+        assert cm["metadata"]["namespace"] == "my-ns"
+        assert cm["metadata"]["labels"]["paude.io/session-name"] == "my-session"

-        with patch.object(Path, "home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find all exec calls (mkdir/chmod operations)
-        exec_calls = [
-            c for c in mock_run.call_args_list if len(c[0]) > 0 and "exec" in c[0][0]
-        ]
-
-        # There should be at least 2 exec calls:
-        # 1. mkdir + chmod for config directory prep
-        # 2. chmod + touch for .ready marker
-        assert len(exec_calls) >= 2, (
-            f"Expected at least 2 exec calls, got {len(exec_calls)}"
+    def test_statefulset_with_config_map_uses_entrypoint_command(self) -> None:
+        """StatefulSet with ConfigMap uses entrypoint-session.sh as command."""
+        builder = StatefulSetBuilder(
+            session_name="test",
+            namespace="ns",
+            image="img:latest",
+            resources={"requests": {"cpu": "1"}, "limits": {"cpu": "2"}},
         )
+        spec = builder.with_config_map("paude-config-test").build()

-        # All exec calls should use the extended timeout
-        for call in exec_calls:
-            timeout = call[1].get("timeout")
-            assert timeout == OpenShiftBackend.OC_EXEC_TIMEOUT, (
-                f"exec call should use OC_EXEC_TIMEOUT ({OpenShiftBackend.OC_EXEC_TIMEOUT}), "
-                f"got {timeout}. Call: {call}"
-            )
-
-
-class TestSyncConfigWithPlugins:
-    """Tests for _sync_config_to_pod with full ~/.claude/ sync including plugins."""
-
-    @patch("subprocess.run")
-    def test_sync_config_uses_rsync_with_excludes(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config uses rsync with config excludes for ~/.claude/."""
-        # Create mock claude directory
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Find rsync calls
-        rsync_calls = [
-            c
-            for c in mock_run.call_args_list
-            if c[0] and len(c[0]) > 0 and "rsync" in c[0][0]
-        ]
-
-        # Should have at least one rsync call for claude directory
-        assert len(rsync_calls) >= 1
-
-        # Verify excludes are passed
-        rsync_cmd = rsync_calls[0][0][0]
-        assert "--exclude" in rsync_cmd
-
-    @patch("subprocess.run")
-    def test_sync_config_calls_rewrite_plugin_paths(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config calls _rewrite_plugin_paths after rsync."""
-        # Create mock claude directory with plugins subdirectory
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-        (claude_dir / "plugins").mkdir()
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            with patch.object(backend._syncer, "_rewrite_plugin_paths") as mock_rewrite:
-                backend._syncer.sync_full_config("test-pod-0")
-                mock_rewrite.assert_called_once()
-
-    @patch("subprocess.run")
-    def test_sync_config_skips_rewrite_when_no_plugins_dir(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config skips _rewrite_plugin_paths when no plugins dir."""
-        # Create mock claude directory WITHOUT plugins subdirectory
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            with patch.object(backend._syncer, "_rewrite_plugin_paths") as mock_rewrite:
-                backend._syncer.sync_full_config("test-pod-0")
-                mock_rewrite.assert_not_called()
-
-    @patch("subprocess.run")
-    def test_sync_config_handles_missing_claude_dir(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config handles missing ~/.claude/ gracefully."""
-        # Don't create claude directory
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            # Should not raise
-            backend._syncer.sync_full_config("test-pod-0")
-
-        # Should not have rsync calls for claude directory
-        rsync_calls = [
-            c
-            for c in mock_run.call_args_list
-            if c[0] and len(c[0]) > 0 and "rsync" in c[0][0] and ".claude" in str(c)
-        ]
-        assert len(rsync_calls) == 0
-
-    @patch("subprocess.run")
-    def test_sync_config_skips_rewrite_on_rsync_failure(
-        self, mock_run: MagicMock, tmp_path: Path
-    ) -> None:
-        """_syncer.sync_full_config does NOT call _rewrite_plugin_paths when rsync fails."""
-        # Create mock claude directory
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            cmd = args[0] if args else []
-            # Fail rsync calls
-            if "rsync" in cmd:
-                return MagicMock(
-                    returncode=1,
-                    stdout="",
-                    stderr="error: rsync failed",
-                )
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            with patch.object(backend._syncer, "_rewrite_plugin_paths") as mock_rewrite:
-                backend._syncer.sync_full_config("test-pod-0")
-                # Should NOT be called because rsync failed
-                mock_rewrite.assert_not_called()
-
-    @patch("subprocess.run")
-    def test_sync_config_prints_warning_on_rsync_failure(
-        self, mock_run: MagicMock, tmp_path: Path, capsys: Any
-    ) -> None:
-        """_syncer.sync_full_config prints warning when rsync fails."""
-        # Create mock claude directory
-        claude_dir = tmp_path / ".claude"
-        claude_dir.mkdir()
-        (claude_dir / "settings.json").write_text("{}")
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            cmd = args[0] if args else []
-            if "rsync" in cmd:
-                return MagicMock(
-                    returncode=1,
-                    stdout="",
-                    stderr="error: rsync failed",
-                )
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-
-        with patch("pathlib.Path.home", return_value=tmp_path):
-            backend._syncer.sync_full_config("test-pod-0")
-
-        captured = capsys.readouterr()
-        assert "Failed to copy agent config directory" in captured.err
-        assert "plugins may not work" in captured.err
-
-
-class TestRewritePluginPaths:
-    """Tests for _rewrite_plugin_paths method."""
-
-    @staticmethod
-    def _make_claude_agent() -> Any:
-        """Create a claude agent for testing _rewrite_plugin_paths."""
-        from paude.agents import get_agent
-
-        return get_agent("claude")
-
-    @patch("subprocess.run")
-    def test_rewrite_plugin_paths_uses_jq(self, mock_run: MagicMock) -> None:
-        """_rewrite_plugin_paths uses jq to rewrite installed_plugins.json."""
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-        agent = self._make_claude_agent()
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-        backend._syncer._rewrite_plugin_paths("/credentials/claude", agent, Path.home())
-
-        # Find exec calls with jq
-        jq_calls = [
-            c
-            for c in mock_run.call_args_list
-            if c[0] and len(c[0]) > 0 and "exec" in c[0][0] and "jq" in str(c)
-        ]
-
-        # Should have two jq calls (installed_plugins.json and known_marketplaces.json)
-        assert len(jq_calls) >= 2
-
-    @patch("subprocess.run")
-    def test_rewrite_plugin_paths_targets_correct_files(
-        self, mock_run: MagicMock
-    ) -> None:
-        """_rewrite_plugin_paths rewrites both plugin metadata files."""
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-        agent = self._make_claude_agent()
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-        backend._syncer._rewrite_plugin_paths("/credentials/claude", agent, Path.home())
-
-        # Check for installed_plugins.json rewrite
-        installed_plugins_calls = [
-            c for c in mock_run.call_args_list if "installed_plugins.json" in str(c)
-        ]
-        assert len(installed_plugins_calls) >= 1
-
-        # Check for known_marketplaces.json rewrite
-        known_marketplaces_calls = [
-            c for c in mock_run.call_args_list if "known_marketplaces.json" in str(c)
+        container = spec["spec"]["template"]["spec"]["containers"][0]
+        assert container["command"] == [
+            "tini",
+            "--",
+            "bash",
+            "-c",
+            "/usr/local/bin/entrypoint-session.sh && exec sleep infinity",
         ]
-        assert len(known_marketplaces_calls) >= 1
-
-    @patch("subprocess.run")
-    def test_rewrite_plugin_paths_uses_correct_container_path(
-        self, mock_run: MagicMock
-    ) -> None:
-        """_rewrite_plugin_paths rewrites to /home/paude/.claude/plugins/."""
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-        agent = self._make_claude_agent()
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-        backend._syncer._rewrite_plugin_paths("/credentials/claude", agent, Path.home())
-
-        # Check that the container path is used
-        all_calls_str = str(mock_run.call_args_list)
-        assert "/home/paude/.claude/plugins" in all_calls_str
-
-    @patch("subprocess.run")
-    def test_rewrite_plugin_paths_handles_null_installpath(
-        self, mock_run: MagicMock
-    ) -> None:
-        """_rewrite_plugin_paths jq expression handles null/missing installPath."""
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-        agent = self._make_claude_agent()
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-        backend._syncer._rewrite_plugin_paths("/credentials/claude", agent, Path.home())
-
-        # The jq expression should include null-safety check
-        all_calls_str = str(mock_run.call_args_list)
-        # Check for the conditional that guards against null installPath
-        assert "if .installPath then" in all_calls_str
-
-    @patch("subprocess.run")
-    def test_rewrite_plugin_paths_handles_null_installlocation(
-        self, mock_run: MagicMock
-    ) -> None:
-        """_rewrite_plugin_paths jq handles null/missing installLocation."""
-
-        def mock_run_side_effect(*args: Any, **kwargs: Any) -> MagicMock:
-            return MagicMock(returncode=0, stdout="", stderr="")
-
-        mock_run.side_effect = mock_run_side_effect
-        agent = self._make_claude_agent()
-
-        backend = OpenShiftBackend(config=OpenShiftConfig(namespace="test-ns"))
-        backend._syncer._target = "test-pod-0"
-        backend._syncer._rewrite_plugin_paths("/credentials/claude", agent, Path.home())
-
-        # The jq expression for known_marketplaces should include null-safety
-        all_calls_str = str(mock_run.call_args_list)
-        assert "if .value.installLocation then" in all_calls_str
-
-
-class TestClaudeConfigExcludes:
-    """Tests for ClaudeAgent config_excludes (canonical source of truth)."""
-
-    def test_config_excludes_contains_expected_patterns(self) -> None:
-        """config_excludes contains session-specific and cache patterns."""
-        from paude.agents.claude import ClaudeAgent
-
-        excludes = ClaudeAgent().config.config_excludes
-
-        # Session-specific patterns (anchored with leading /)
-        assert "/history.jsonl" in excludes
-        assert "/tasks" in excludes
-        assert "/todos" in excludes
-        assert "/session-env" in excludes
-
-        # Cache patterns (anchored to only match top-level cache)
-        assert "/cache" in excludes
-        assert "/stats-cache.json" in excludes
-
-        # Git metadata
-        assert "/.git" in excludes
-
-    def test_config_excludes_uses_anchored_patterns(self) -> None:
-        """config_excludes uses anchored patterns to not exclude plugins/cache."""
-        from paude.agents.claude import ClaudeAgent
+    def test_statefulset_with_config_map_sets_headless_env(self) -> None:
+        """StatefulSet with ConfigMap sets PAUDE_HEADLESS=1."""
+        builder = StatefulSetBuilder(
+            session_name="test",
+            namespace="ns",
+            image="img:latest",
+            resources={"requests": {"cpu": "1"}, "limits": {"cpu": "2"}},
+        )
+        spec = builder.with_config_map("paude-config-test").build()

-        excludes = ClaudeAgent().config.config_excludes
+        container = spec["spec"]["template"]["spec"]["containers"][0]
+        env_dict = {e["name"]: e["value"] for e in container["env"]}
+        assert env_dict["PAUDE_HEADLESS"] == "1"

-        # All patterns should be anchored (start with /) to prevent
-        # accidentally excluding nested directories like plugins/cache
-        for pattern in excludes:
-            assert pattern.startswith("/"), (
-                f"Pattern '{pattern}' should be anchored with leading / "
-                "to prevent excluding nested directories"
-            )
+    def test_statefulset_with_config_map_uses_configmap_volume(self) -> None:
+        """StatefulSet with ConfigMap uses configMap volume instead of emptyDir."""
+        builder = StatefulSetBuilder(
+            session_name="test",
+            namespace="ns",
+            image="img:latest",
+            resources={"requests": {"cpu": "1"}, "limits": {"cpu": "2"}},
+        )
+        spec = builder.with_config_map("paude-config-test").build()

-    def test_config_excludes_does_not_contain_plugins(self) -> None:
-        """config_excludes does not exclude plugins directory."""
-        from paude.agents.claude import ClaudeAgent
+        volumes = spec["spec"]["template"]["spec"]["volumes"]
+        cred_vol = next(v for v in volumes if v["name"] == "credentials")
+        assert "configMap" in cred_vol
+        assert cred_vol["configMap"]["name"] == "paude-config-test"
+        assert "emptyDir" not in cred_vol
+
+    def test_statefulset_without_config_map_uses_emptydir(self) -> None:
+        """StatefulSet without ConfigMap uses emptyDir (legacy behavior)."""
+        builder = StatefulSetBuilder(
+            session_name="test",
+            namespace="ns",
+            image="img:latest",
+            resources={"requests": {"cpu": "1"}, "limits": {"cpu": "2"}},
+        )
+        spec = builder.build()

-        excludes = ClaudeAgent().config.config_excludes
+        volumes = spec["spec"]["template"]["spec"]["volumes"]
+        cred_vol = next(v for v in volumes if v["name"] == "credentials")
+        assert "emptyDir" in cred_vol
+        assert "configMap" not in cred_vol

-        assert "plugins" not in excludes
+        container = spec["spec"]["template"]["spec"]["containers"][0]
+        assert container["command"] == ["tini", "--", "sleep", "infinity"]


 class TestEnsureProxyImageViaBuild:
diff --git a/tests/test_podman_session.py b/tests/test_podman_session.py
index f05583c..d5dc118 100644
--- a/tests/test_podman_session.py
+++ b/tests/test_podman_session.py
@@ -1576,8 +1576,8 @@ def test_includes_proxy_health_check(self) -> None:
 class TestPodmanBackendSyncHostConfig:
     """Tests for PodmanBackend._sync_host_config."""

-    def test_sync_copies_agent_config_dir(self, tmp_path: Path) -> None:
-        """Sync copies agent config directory to /credentials/."""
+    def test_sync_does_not_copy_agent_config_dir(self, tmp_path: Path) -> None:
+        """Sync does not copy agent config directory."""
         mock_runner = MagicMock()
         mock_runner.engine.binary = "podman"
         mock_runner.engine.supports_multi_network_create = True
@@ -1589,20 +1589,20 @@ def test_sync_copies_agent_config_dir(self, tmp_path: Path) -> None:
         backend = _make_backend(mock_runner)
         backend._engine = mock_runner.engine

-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
+        with patch("paude.backends.sync_base.Path.home", return_value=tmp_path):
             claude_dir = tmp_path / ".claude"
             claude_dir.mkdir()
             (claude_dir / "settings.json").write_text("{}")

             backend._sync_host_config("paude-test", "claude")

-        # Should have called podman cp for config dir contents
+        # Should NOT have cp calls for agent config dir
         cp_calls = [
             c
             for c in mock_runner.engine.run.call_args_list
-            if len(c[0]) >= 2 and c[0][0] == "cp"
+            if len(c[0]) >= 2 and c[0][0] == "cp" and ".claude" in str(c[0][1])
         ]
-        assert len(cp_calls) > 0
+        assert len(cp_calls) == 0

     def test_sync_copies_gitconfig(self, tmp_path: Path) -> None:
         """Sync copies .gitconfig to /credentials/gitconfig."""
@@ -1617,7 +1617,7 @@ def test_sync_copies_gitconfig(self, tmp_path: Path) -> None:
         backend = _make_backend(mock_runner)
         backend._engine = mock_runner.engine

-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
+        with patch("paude.backends.sync_base.Path.home", return_value=tmp_path):
             (tmp_path / ".gitconfig").write_text("[user]\n name = Test\n")

             backend._sync_host_config("paude-test", "claude")
@@ -1643,7 +1643,7 @@ def test_sync_creates_ready_marker(self, tmp_path: Path) -> None:
         backend = _make_backend(mock_runner)
         backend._engine = mock_runner.engine

-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
+        with patch("paude.backends.sync_base.Path.home", return_value=tmp_path):
             backend._sync_host_config("paude-test", "claude")

         # Should have called exec touch /credentials/.ready
@@ -1670,7 +1670,7 @@ def test_sync_skipped_for_remote_engine(self, tmp_path: Path) -> None:
         backend = _make_backend(mock_runner)
         backend._engine = mock_runner.engine

-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
+        with patch("paude.backends.sync_base.Path.home", return_value=tmp_path):
             (tmp_path / ".claude").mkdir()
             (tmp_path / ".gitconfig").write_text("[user]\n name = Test\n")
@@ -1679,63 +1679,6 @@ def test_sync_skipped_for_remote_engine(self, tmp_path: Path) -> None:
         # Should NOT have called any podman commands
         mock_runner.engine.run.assert_not_called()

-    def test_sync_copies_config_file(self, tmp_path: Path) -> None:
-        """Sync copies agent config file (e.g., .claude.json)."""
-        mock_runner = MagicMock()
-        mock_runner.engine.binary = "podman"
-        mock_runner.engine.supports_multi_network_create = True
-        mock_runner.engine.default_bridge_network = "podman"
-        mock_runner.engine.is_remote = False
-        mock_runner.engine.run.return_value = MagicMock(
-            returncode=0, stdout="", stderr=""
-        )
-        backend = _make_backend(mock_runner)
-        backend._engine = mock_runner.engine
-
-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
-            (tmp_path / ".claude.json").write_text("{}")
-
-            backend._sync_host_config("paude-test", "claude")
-
-        # Should have called podman cp for .claude.json
-        cp_calls = [
-            c
-            for c in mock_runner.engine.run.call_args_list
-            if len(c[0]) >= 2 and c[0][0] == "cp" and ".claude.json" in str(c[0][1])
-        ]
-        assert len(cp_calls) == 1
-        # Dest should be /credentials/claude/claude.json
-        assert "paude-test:/credentials/claude/claude.json" in str(cp_calls[0])
-
-    def test_sync_cursor_copies_auth_json(self, tmp_path: Path) -> None:
-        """Sync copies cursor auth.json for cursor agent."""
-        mock_runner = MagicMock()
-        mock_runner.engine.binary = "podman"
-        mock_runner.engine.supports_multi_network_create = True
-        mock_runner.engine.default_bridge_network = "podman"
-        mock_runner.engine.is_remote = False
-        mock_runner.engine.run.return_value = MagicMock(
-            returncode=0, stdout="", stderr=""
-        )
-        backend = _make_backend(mock_runner)
-        backend._engine = mock_runner.engine
-
-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
-            cursor_config = tmp_path / ".config" / "cursor"
-            cursor_config.mkdir(parents=True)
-            (cursor_config / "auth.json").write_text("{}")
-
-            backend._sync_host_config("paude-test", "cursor")
-
-        # Should have called podman cp for auth.json
-        cp_calls = [
-            c
-            for c in mock_runner.engine.run.call_args_list
-            if len(c[0]) >= 2 and c[0][0] == "cp" and "auth.json" in str(c[0][1])
-        ]
-        assert len(cp_calls) == 1
-        assert "paude-test:/credentials/cursor-auth.json" in str(cp_calls[0])
-
     def test_sync_logs_warning_when_step_fails(self, tmp_path: Path, capsys) -> None:
         """Sync logs a warning when a podman sync step fails."""
         mock_runner = MagicMock()
@@ -1761,7 +1704,7 @@ def run_side_effect(*args, **kwargs):
         backend = _make_backend(mock_runner)
         backend._engine = mock_runner.engine

-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
+        with patch("paude.backends.sync_base.Path.home", return_value=tmp_path):
             backend._sync_host_config("paude-test", "claude")

         captured = capsys.readouterr()
@@ -1824,78 +1767,6 @@ def test_connect_session_calls_sync(self) -> None:
             backend.connect_session("my-session")
             mock_sync.assert_called_once()

-    def test_sync_excludes_config_excludes(self, tmp_path: Path) -> None:
-        """Sync filters out config_excludes (e.g., projects/, todos/)."""
-        mock_runner = MagicMock()
-        mock_runner.engine.binary = "podman"
-        mock_runner.engine.supports_multi_network_create = True
-        mock_runner.engine.default_bridge_network = "podman"
-        mock_runner.engine.is_remote = False
-        mock_runner.engine.run.return_value = MagicMock(
-            returncode=0, stdout="", stderr=""
-        )
-        backend = _make_backend(mock_runner)
-        backend._engine = mock_runner.engine
-
-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
-            claude_dir = tmp_path / ".claude"
-            claude_dir.mkdir()
-            (claude_dir / "settings.json").write_text("{}")
-            # Create dirs that should be excluded
-            (claude_dir / "projects").mkdir()
-            (claude_dir / "projects" / "big_file.json").write_text("{}")
-            (claude_dir / "todos").mkdir()
-            (claude_dir / "todos" / "todo.json").write_text("{}")
-            (claude_dir / "cache").mkdir()
-            (claude_dir / "cache" / "data.bin").write_text("x" * 100)
-
-            backend._sync_host_config("paude-test", "claude")
-
-        # Find the cp call that copies the config dir
-        cp_calls = [
-            c
-            for c in mock_runner.engine.run.call_args_list
-            if len(c[0]) >= 2
-            and c[0][0] == "cp"
-            and "/credentials/claude" in str(c[0][2])
-        ]
-        assert len(cp_calls) == 1
-        # The source path should be a filtered temp dir, not the original
-        source = cp_calls[0][0][1]
-        assert "filtered" in source
-        # Verify the temp dir would not contain excluded dirs
-        assert "projects" not in source
-        assert "todos" not in source
-
-    def test_sync_copies_global_gitignore(self, tmp_path: Path) -> None:
-        """Sync copies global gitignore to /credentials/gitignore-global."""
-        mock_runner = MagicMock()
-        mock_runner.engine.binary = "podman"
-        mock_runner.engine.supports_multi_network_create = True
-        mock_runner.engine.default_bridge_network = "podman"
-        mock_runner.engine.is_remote = False
-        mock_runner.engine.run.return_value = MagicMock(
-            returncode=0, stdout="", stderr=""
-        )
-        backend = _make_backend(mock_runner)
-        backend._engine = mock_runner.engine
-
-        with patch("paude.backends.podman.sync.Path.home", return_value=tmp_path):
-            git_config = tmp_path / ".config" / "git"
-            git_config.mkdir(parents=True)
-            (git_config / "ignore").write_text(".DS_Store\n*.swp\n")
-
-            backend._sync_host_config("paude-test", "claude")
-
-        # Should have called podman cp for gitignore-global
-        cp_calls = [
-            c
-            for c in mock_runner.engine.run.call_args_list
-            if len(c[0]) >= 2 and c[0][0] == "cp" and "gitignore-global" in str(c[0][2])
-        ]
-        assert len(cp_calls) == 1
-        assert "paude-test:/credentials/gitignore-global" in str(cp_calls[0])
-

 class TestPodmanPortUrls:
     """Tests for port URL helpers and env var injection."""
diff --git a/tests/test_port_forward.py b/tests/test_port_forward.py
index 558c854..bd009fc 100644
--- a/tests/test_port_forward.py
+++ b/tests/test_port_forward.py
@@ -440,20 +440,16 @@ class TestSessionConnectorCleanup:
     @patch.object(SessionConnector, "_stop_port_forward")
     @patch.object(SessionConnector, "_attach_to_pod", return_value=0)
     @patch.object(SessionConnector, "_start_port_forward", return_value=(None, []))
-    @patch.object(SessionConnector, "_sync_for_connect")
     @patch.object(SessionConnector, "_verify_pod_running", return_value=("pod-0", "ns"))
     def test_connect_stops_port_forward_on_success(
         self,
         mock_verify: MagicMock,  # noqa: ARG002
-        mock_sync: MagicMock,  # noqa: ARG002
         mock_start_pf: MagicMock,  # noqa: ARG002
         mock_attach: MagicMock,  # noqa: ARG002
         mock_stop_pf: MagicMock,
         mock_diag: MagicMock,  # noqa: ARG002
     ) -> None:
-        connector = SessionConnector(
-            MagicMock(), "ns", MagicMock(), MagicMock(), MagicMock()
-        )
+        connector = SessionConnector(MagicMock(), "ns", MagicMock(), MagicMock())
         connector.connect_session("test-session")

         mock_stop_pf.assert_called_once_with("test-session")
@@ -461,20 +457,16 @@ def test_connect_stops_port_forward_on_success(
     @patch.object(SessionConnector, "_stop_port_forward")
     @patch.object(SessionConnector, "_attach_to_pod",
side_effect=RuntimeError("boom")) @patch.object(SessionConnector, "_start_port_forward", return_value=(None, [])) - @patch.object(SessionConnector, "_sync_for_connect") @patch.object(SessionConnector, "_verify_pod_running", return_value=("pod-0", "ns")) def test_connect_stops_port_forward_on_error( self, mock_verify: MagicMock, # noqa: ARG002 - mock_sync: MagicMock, # noqa: ARG002 mock_start_pf: MagicMock, # noqa: ARG002 mock_attach: MagicMock, # noqa: ARG002 mock_stop_pf: MagicMock, mock_diag: MagicMock, # noqa: ARG002 ) -> None: - connector = SessionConnector( - MagicMock(), "ns", MagicMock(), MagicMock(), MagicMock() - ) + connector = SessionConnector(MagicMock(), "ns", MagicMock(), MagicMock()) with pytest.raises(RuntimeError): connector.connect_session("test-session") mock_stop_pf.assert_called_once_with("test-session") @@ -482,12 +474,10 @@ def test_connect_stops_port_forward_on_error( @patch("paude.backends.openshift.session_connection._show_port_forward_diagnostics") @patch.object(SessionConnector, "_stop_port_forward") @patch.object(SessionConnector, "_attach_to_pod", return_value=0) - @patch.object(SessionConnector, "_sync_for_connect") @patch.object(SessionConnector, "_verify_pod_running", return_value=("pod-0", "ns")) def test_connect_passes_restart_info_to_monitor( self, mock_verify: MagicMock, # noqa: ARG002 - mock_sync: MagicMock, # noqa: ARG002 mock_attach: MagicMock, # noqa: ARG002 mock_stop_pf: MagicMock, # noqa: ARG002 mock_diag: MagicMock, # noqa: ARG002 @@ -501,9 +491,7 @@ def test_connect_passes_restart_info_to_monitor( log_path=Path("/tmp/test.log"), ) - connector = SessionConnector( - MagicMock(), "ns", MagicMock(), MagicMock(), MagicMock() - ) + connector = SessionConnector(MagicMock(), "ns", MagicMock(), MagicMock()) with ( patch.object( SessionConnector, diff --git a/tests/test_upgrade.py b/tests/test_upgrade.py index 99d07cc..c53d902 100644 --- a/tests/test_upgrade.py +++ b/tests/test_upgrade.py @@ -508,7 +508,7 @@ def 
test_upgrade_openshift_patches_image( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "registry.example.com/paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -549,7 +549,7 @@ def test_upgrade_openshift_updates_version_label( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -584,7 +584,7 @@ def test_upgrade_openshift_scales_proxy_when_present( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -617,7 +617,7 @@ def test_upgrade_openshift_no_proxy_scaling_without_proxy( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -634,41 +634,10 @@ def test_upgrade_openshift_no_proxy_scaling_without_proxy( backend._proxy.wait_for_ready.assert_not_called() @patch("paude.config.detector.detect_config", return_value=None) - def test_upgrade_openshift_resyncs_config( + def test_upgrade_openshift_statefulset_not_found( self, - mock_detect_config: MagicMock, + mock_detect_config: MagicMock, # noqa: ARG002 ) -> None: - """Agent config is re-synced into the pod after upgrade.""" - from paude.backends.openshift import OpenShiftBackend - - backend = MagicMock(spec=OpenShiftBackend) - backend.namespace = "test-ns" - backend._lookup = MagicMock() - backend._lookup.get_statefulset.return_value = self._make_statefulset() - backend._lookup.has_proxy_deployment.return_value = False 
- backend._lifecycle = MagicMock() - backend._lifecycle._oc = MagicMock() - backend._proxy = MagicMock() - backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() - backend.ensure_image_via_build.return_value = "paude:new" - - from paude.cli.upgrade import _upgrade_openshift - - _upgrade_openshift( - "test-session", - backend, - rebuild=False, - openshift_context=None, - overrides=_NO_OVERRIDES, - ) - - backend._syncer.sync_full_config.assert_called_once() - call_kwargs = backend._syncer.sync_full_config.call_args - assert call_kwargs[0][0] == "paude-test-session-0" # pod_name - assert call_kwargs[1]["agent_name"] == "claude" - - def test_upgrade_openshift_statefulset_not_found(self) -> None: """Error when StatefulSet not found.""" from paude.backends.openshift import OpenShiftBackend @@ -709,7 +678,7 @@ def test_upgrade_openshift_otel_updates_proxy_domains( backend._proxy = MagicMock() backend._proxy.get_deployment_domains.return_value = [".googleapis.com"] backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" backend.ensure_proxy_image_via_build.return_value = "paude-proxy:new" @@ -759,7 +728,7 @@ def test_upgrade_openshift_otel_clear_removes_proxy_domain( "old-collector.example.com", ] backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" backend.ensure_proxy_image_via_build.return_value = "paude-proxy:new" @@ -801,7 +770,7 @@ def test_upgrade_openshift_otel_no_proxy_no_domain_update( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -838,7 +807,7 @@ def test_upgrade_openshift_rebuilds_proxy_image_without_otel( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - 
backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" backend.ensure_proxy_image_via_build.return_value = "paude-proxy:new" @@ -878,7 +847,7 @@ def test_upgrade_openshift_rebuild_flag_forces_proxy_rebuild( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" backend.ensure_proxy_image_via_build.return_value = "paude-proxy:rebuilt" @@ -915,7 +884,7 @@ def test_upgrade_openshift_resolves_proxy_image_without_script_dir( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = "paude:new" from paude.cli.upgrade import _upgrade_openshift @@ -955,7 +924,7 @@ def test_upgrade_openshift_resolves_proxy_image_from_base_image_name( backend._lifecycle._oc = MagicMock() backend._proxy = MagicMock() backend._pod_waiter = MagicMock() - backend._syncer = MagicMock() + backend.ensure_image_via_build.return_value = ( "quay.io/bbrowning/paude-base-centos10:0.15.0rc4" )