Life of a Build

When you run `just run`, Capsem cross-compiles guest binaries, repacks the initrd, builds the host app, codesigns it, and boots a VM, all in ~10 seconds. This page explains what each stage produces and which tools do the work.

```mermaid
flowchart TD
    subgraph stage1["1. Guest binaries"]
        CARGO_CROSS["cargo build --target\naarch64-unknown-linux-musl"]
        AGENT["capsem-pty-agent"]
        NETPROXY["capsem-net-proxy"]
        MCP["capsem-mcp-server"]
        CARGO_CROSS --> AGENT & NETPROXY & MCP
    end

    subgraph stage2["2. Initrd repack"]
        INITRD_IN["initrd.img\n(from build-assets)"]
        SCRIPTS["capsem-init + doctor\n+ bench + snapshots"]
        REPACK["cpio + gzip repack"]
        INITRD_IN --> REPACK
        AGENT & NETPROXY & MCP --> REPACK
        SCRIPTS --> REPACK
        REPACK --> INITRD_OUT["initrd.img\n(repacked)"]
    end

    subgraph stage3["3. Host binary"]
        PNPM["pnpm install"]
        ASTRO["astro build\n(Astro + Svelte + Tailwind)"]
        DIST["frontend/dist/\n(static HTML/JS/CSS)"]
        CARGO_HOST["cargo build -p capsem\n(Tauri embeds frontend)"]
        PNPM --> ASTRO --> DIST --> CARGO_HOST
        CARGO_HOST --> SIGN["codesign\n(com.apple.security.virtualization)"]
    end

    subgraph stage0["0. VM images (first-time only)"]
        TOML["guest/config/*.toml"]
        BUILDER["capsem-builder\n(Python CLI)"]
        DOCKER["Docker / Podman"]
        TOML --> BUILDER --> DOCKER
        DOCKER --> VMLINUZ["vmlinuz"]
        DOCKER --> ROOTFS["rootfs.squashfs"]
        DOCKER --> INITRD_BASE["initrd.img (base)"]
    end

    INITRD_BASE -.-> INITRD_IN

    subgraph stage4["4. Boot"]
        SIGN --> BOOT["capsem binary"]
        INITRD_OUT --> BOOT
        VMLINUZ --> BOOT
        ROOTFS --> BOOT
        BOOT --> VM["Linux VM running"]
    end
```

Stage 1: Guest binaries (cross-compilation)

The guest agent crate (`crates/capsem-agent/`) produces three binaries that run inside the Linux VM. They are cross-compiled on the host using musl targets:

| Binary | Purpose | Target |
| --- | --- | --- |
| `capsem-pty-agent` | Bridges terminal I/O over vsock | `aarch64-unknown-linux-musl` |
| `capsem-net-proxy` | Relays HTTPS to host MITM proxy over vsock | `aarch64-unknown-linux-musl` |
| `capsem-mcp-server` | MCP tool relay over vsock | `aarch64-unknown-linux-musl` |

Cross-compilation uses `rust-lld` (from the `llvm-tools` rustup component). The linker config lives in `.cargo/config.toml`:

```toml
[target.aarch64-unknown-linux-musl]
linker = "rust-lld"

[target.x86_64-unknown-linux-musl]
linker = "rust-lld"
```

If you see `ld: unknown options: --as-needed`, run `rustup component add llvm-tools`.

Stage 2: Initrd repack

The initrd is a gzipped cpio archive that the kernel unpacks into RAM at boot. The `_pack-initrd` recipe:

  1. Extracts the base initrd (produced by `just build-assets`)
  2. Copies in the freshly cross-compiled guest binaries (`chmod 555`, read-only)
  3. Copies in shell scripts: `capsem-init` (PID 1), `capsem-doctor`, `capsem-bench`, snapshots
  4. Repacks with `cpio` + `gzip`
  5. Regenerates BLAKE3 checksums (`B3SUMS` + `manifest.json`)
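Step 5 produces a checksum manifest for the assets. BLAKE3 is not in the Python standard library, so this sketch substitutes `hashlib.blake2b` purely to show the manifest shape; the real recipe uses `b3sum`, and the directory layout here is illustrative:

```python
import hashlib
import json
from pathlib import Path

def write_manifest(asset_dir: Path) -> dict[str, str]:
    """Hash every file in asset_dir and write a manifest.json of name -> digest.
    NOTE: blake2b stands in for BLAKE3, which needs a third-party package."""
    sums = {p.name: hashlib.blake2b(p.read_bytes()).hexdigest()
            for p in sorted(asset_dir.iterdir()) if p.is_file()}
    (asset_dir / "manifest.json").write_text(json.dumps(sums, indent=2))
    return sums

demo = Path("demo-assets")
demo.mkdir(exist_ok=True)
(demo / "initrd.img").write_bytes(b"not a real initrd")  # dummy asset
sums = write_manifest(demo)
print(list(sums))  # -> ['initrd.img']
```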

This is why `just run` is fast (~10 s): it only rebuilds what changed, not the full rootfs.
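The "newc" cpio format the kernel expects is simple enough to sketch in full. A minimal, self-contained illustration of step 4 — not Capsem's actual recipe, which uses the `cpio` and `gzip` tools:

```python
import gzip

def _entry(name: bytes, data: bytes, mode: int) -> bytes:
    """One "newc"-format cpio entry: 110-byte ASCII header, then name, then data."""
    # Header fields: ino, mode, uid, gid, nlink, mtime, filesize,
    # devmajor, devminor, rdevmajor, rdevminor, namesize (incl. NUL), check
    fields = [0, mode, 0, 0, 1, 0, len(data), 0, 0, 0, 0, len(name) + 1, 0]
    out = b"070701" + b"".join(b"%08X" % f for f in fields) + name + b"\0"
    out += b"\0" * (-len(out) % 4)          # pad header+name to a 4-byte boundary
    out += data + b"\0" * (-len(data) % 4)  # pad file data likewise
    return out

def pack_initrd(files: dict[bytes, bytes]) -> bytes:
    """Pack {path: contents} into a gzipped newc archive a kernel can unpack."""
    body = b"".join(_entry(n, d, 0o100555) for n, d in files.items())  # mode 555
    return gzip.compress(body + _entry(b"TRAILER!!!", b"", 0), 9)

img = pack_initrd({b"usr/local/bin/capsem-pty-agent": b"\x7fELF(dummy)"})
print(gzip.decompress(img)[:6].decode())  # -> 070701
```

The `070701` magic marks each newc header, and the `TRAILER!!!` entry terminates the archive — which is why step 1 can simply decompress and re-extract the base initrd before appending fresh binaries.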

Stage 3: Host binary

This stage has two parts: the frontend build and the Rust compilation.

The UI lives in `frontend/` and is built by pnpm before Rust compilation starts. The build chain:

  1. `pnpm install` — installs npm dependencies (Astro, Svelte, Tailwind, DaisyUI, xterm.js, LayerChart, sql.js, Tauri API bindings)
  2. `astro build` — compiles `.astro` and `.svelte` files into static HTML/JS/CSS in `frontend/dist/`
  3. Tauri’s build step copies `frontend/dist/` into the Rust binary as embedded assets
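Step 3 is driven by Tauri's configuration. Assuming Tauri 2's schema, the relevant keys in `tauri.conf.json` would look something like this (the exact values in this repo may differ):

```json
{
  "build": {
    "beforeBuildCommand": "pnpm build",
    "frontendDist": "../frontend/dist"
  }
}
```

`frontendDist` points Tauri at the static output, which it bundles into the binary at compile time.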

The frontend stack:

| Technology | Role |
| --- | --- |
| Astro 5 | Static site generator — page routing, builds the app shell |
| Svelte 5 | Reactive components — terminal view, stats charts, settings panels |
| Tailwind v4 + DaisyUI v5 | Styling — utility classes + themed component library |
| xterm.js 6 | Terminal emulator — renders the in-VM shell |
| LayerChart 2 | Charts — session stats, cost tracking (D3-based Svelte library) |
| sql.js | SQLite in the browser — queries session DBs client-side |

For frontend iteration without booting a VM, use `just ui` (Astro dev server with mock data on port 5173). For the full Tauri app with hot-reload, use `just dev`.

The Rust workspace compiles into a single `capsem` binary:

| Crate | Role |
| --- | --- |
| `capsem-core` | All business logic: VM config, boot, vsock, MITM proxy, MCP gateway, network policy, telemetry |
| `capsem-app` | Thin Tauri shell: IPC commands, CLI, state management |
| `capsem-proto` | Shared protocol types between host and guest |
| `capsem-logger` | Session DB schema and async writer (SQLite) |

On macOS, the binary must be codesigned with the `com.apple.security.virtualization` entitlement, or Virtualization.framework crashes at launch. The justfile handles this automatically via the `_sign` recipe.
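The entitlement is granted through a plist passed to `codesign`. A minimal entitlements file (the filename below is illustrative) looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>com.apple.security.virtualization</key>
  <true/>
</dict>
</plist>
```

It would be applied with something like `codesign --force --sign - --entitlements capsem.entitlements path/to/capsem` (ad-hoc signing with `-`); the real invocation lives in the `_sign` recipe.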

Stage 4: Boot

The `capsem` binary loads three assets from `assets/{arch}/`:

| Asset | Produced by | What it is |
| --- | --- | --- |
| `vmlinuz` | `just build-assets` | Custom Linux kernel (no modules, no IP stack, 7 MB) |
| `initrd.img` | `just run` (repacked each time) | Guest binaries + init scripts |
| `rootfs.squashfs` | `just build-assets` | Debian bookworm base + AI CLIs + tools |

Boot sequence: the kernel unpacks the initrd into RAM; `capsem-init` (PID 1) sets up overlayfs and air-gapped networking, then launches the PTY agent and net proxy. The host connects over vsock.
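Vsock addressing is just a (CID, port) pair, and CID 2 always means the host. A guest-side connection can be sketched in a few lines of Python — the port number here is made up; the real constants live in `capsem-proto`:

```python
import socket

PTY_PORT = 5000  # hypothetical -- the real port is defined by capsem-proto

def connect_to_host(port: int) -> socket.socket:
    """Guest-side sketch: dial the host over vsock (Linux-only address family)."""
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((socket.VMADDR_CID_HOST, port))  # VMADDR_CID_HOST == 2
    return s
```

Because vsock bypasses the IP stack entirely, the guest needs no network configuration to reach the host — which is what lets the kernel ship without an IP stack at all.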

Stage 0: VM images (first-time only)

This is the slow path (~10 min). The `capsem-builder` Python CLI reads TOML configs from `guest/config/` and produces the kernel and rootfs via Docker/Podman.

```sh
uv run capsem-builder build guest/ --arch arm64   # build everything
uv run capsem-builder validate guest/             # lint configs
uv run capsem-builder doctor guest/               # check prerequisites
```

The builder needs Docker or Podman.

macOS — both Docker and Podman run containers inside a Linux VM, so that VM's memory limit is what matters. The default (2 GB for Podman) is too small: give it at least 4 GB, ideally 8 GB.

```sh
# Podman setup
brew install podman
podman machine init --memory 8192 --cpus 8
podman machine start

# Fix an existing machine
podman machine stop
podman machine set --memory 8192 --cpus 8
podman machine start
```

Docker Desktop: Settings -> Resources -> set Memory to 8 GB, CPUs to 8.

Linux — Containers run natively, no memory tuning needed.

```sh
# Debian/Ubuntu
sudo apt install podman

# Fedora/RHEL
sudo dnf install podman
```

Everything below is checked by `bootstrap.sh` and `just doctor`. You don’t need to install these manually — the bootstrap script tells you exactly what’s missing.

| Tool | What it does in the build |
| --- | --- |
| Rust (stable) | Compiles host + guest binaries |
| `rust-lld` | Linker for musl cross-compilation |
| `just` | Task runner — single entry point for all workflows |
| Node.js 24+ / pnpm | Builds the Astro + Svelte frontend |
| Python 3.11+ / uv | Runs capsem-builder (image builds, schema generation) |
| Docker or Podman | Container runtime for kernel + rootfs builds |
| `cargo-llvm-cov` | Code coverage (`just test`) |
| `cargo-audit` | Dependency vulnerability scanning |
| `cargo-tauri` | Tauri CLI for app builds |
| `b3sum` | BLAKE3 checksums for asset integrity |
| `codesign` (macOS) | Signs binary with virtualization entitlement |
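The same presence checks are easy to approximate by hand. A sketch in the spirit of `just doctor` — the real checks live in `bootstrap.sh`, and this tool list is only a subset:

```python
import shutil

# A subset of the table above; the real doctor checks more than this.
TOOLS = ["cargo", "rust-lld", "just", "node", "pnpm", "uv", "b3sum"]

# shutil.which returns None when a tool is not on PATH.
missing = [t for t in TOOLS if shutil.which(t) is None]
for tool in TOOLS:
    print(f"{'MISSING' if tool in missing else 'ok':>7}  {tool}")
```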