Life of a Build
When you run `just run`, Capsem cross-compiles guest binaries, repacks the initrd, builds the host app, codesigns it, and boots a VM — all in ~10 seconds. This page explains what each stage produces and which tools do the work.
The build pipeline
```mermaid
flowchart TD
    subgraph stage1["1. Guest binaries"]
        CARGO_CROSS["cargo build --target\naarch64-unknown-linux-musl"]
        AGENT["capsem-pty-agent"]
        NETPROXY["capsem-net-proxy"]
        MCP["capsem-mcp-server"]
        CARGO_CROSS --> AGENT & NETPROXY & MCP
    end
    subgraph stage2["2. Initrd repack"]
        INITRD_IN["initrd.img\n(from build-assets)"]
        SCRIPTS["capsem-init + doctor\n+ bench + snapshots"]
        REPACK["cpio + gzip repack"]
        INITRD_IN --> REPACK
        AGENT & NETPROXY & MCP --> REPACK
        SCRIPTS --> REPACK
        REPACK --> INITRD_OUT["initrd.img\n(repacked)"]
    end
    subgraph stage3["3. Host binary"]
        PNPM["pnpm install"]
        ASTRO["astro build\n(Astro + Svelte + Tailwind)"]
        DIST["frontend/dist/\n(static HTML/JS/CSS)"]
        CARGO_HOST["cargo build -p capsem\n(Tauri embeds frontend)"]
        PNPM --> ASTRO --> DIST --> CARGO_HOST
        CARGO_HOST --> SIGN["codesign\n(com.apple.security.virtualization)"]
    end
    subgraph stage0["0. VM images (first-time only)"]
        TOML["guest/config/*.toml"]
        BUILDER["capsem-builder\n(Python CLI)"]
        DOCKER["Docker / Podman"]
        TOML --> BUILDER --> DOCKER
        DOCKER --> VMLINUZ["vmlinuz"]
        DOCKER --> ROOTFS["rootfs.squashfs"]
        DOCKER --> INITRD_BASE["initrd.img (base)"]
    end
    INITRD_BASE -.-> INITRD_IN
    subgraph stage4["4. Boot"]
        SIGN --> BOOT["capsem binary"]
        INITRD_OUT --> BOOT
        VMLINUZ --> BOOT
        ROOTFS --> BOOT
        BOOT --> VM["Linux VM running"]
    end
```
Stage 1: Guest binaries (cross-compilation)
The guest agent crate (`crates/capsem-agent/`) produces three binaries that run inside the Linux VM. They are cross-compiled on the host using musl targets:
| Binary | Purpose | Target |
|---|---|---|
| `capsem-pty-agent` | Bridges terminal I/O over vsock | `aarch64-unknown-linux-musl` |
| `capsem-net-proxy` | Relays HTTPS to host MITM proxy over vsock | `aarch64-unknown-linux-musl` |
| `capsem-mcp-server` | MCP tool relay over vsock | `aarch64-unknown-linux-musl` |
Cross-compilation uses `rust-lld` (from the `llvm-tools` rustup component). The linker config lives in `.cargo/config.toml`:
```toml
[target.aarch64-unknown-linux-musl]
linker = "rust-lld"

[target.x86_64-unknown-linux-musl]
linker = "rust-lld"
```

If you see `ld: unknown options: --as-needed`, run `rustup component add llvm-tools`.
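A sketch of the per-target loop behind this stage. The real driver is a justfile recipe, and the `-p capsem-agent` package flag is an assumption based on the crate's directory name:

```python
# Hypothetical sketch of the cross-compilation driver; the actual recipe
# lives in the justfile and may pass different flags.
MUSL_TARGETS = [
    "aarch64-unknown-linux-musl",
    "x86_64-unknown-linux-musl",
]

def cross_build_cmd(target: str) -> list[str]:
    # rust-lld is picked up from .cargo/config.toml, so no linker flags here.
    return ["cargo", "build", "--release", "--target", target, "-p", "capsem-agent"]

for t in MUSL_TARGETS:
    print(" ".join(cross_build_cmd(t)))
    # A real driver would run: subprocess.run(cross_build_cmd(t), check=True)
```

One `cargo build` per musl target produces all three guest binaries, since they live in the same crate.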
Stage 2: Initrd repack
The initrd is a gzipped cpio archive that the kernel unpacks into RAM at boot. The `_pack-initrd` recipe:
- Extracts the base initrd (produced by `just build-assets`)
- Copies in the freshly cross-compiled guest binaries (`chmod 555`, read-only)
- Copies in shell scripts: `capsem-init` (PID 1), `capsem-doctor`, `capsem-bench`, snapshots
- Repacks with `cpio` + `gzip`
- Regenerates BLAKE3 checksums (`B3SUMS` + `manifest.json`)
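The guest binaries and scripts become entries in a newc-format cpio archive. As a self-contained illustration of that format (not Capsem's actual recipe, which shells out to `cpio` and `gzip`), here is a minimal packer in Python; the entry path and payload bytes are made up:

```python
import gzip

def newc_entry(name: str, data: bytes, mode: int) -> bytes:
    """One 'newc' cpio record: 110-byte ASCII header, NUL-terminated name, data."""
    fields = [
        0,              # inode
        mode,           # file type + permission bits
        0, 0,           # uid, gid
        1,              # nlink
        0,              # mtime
        len(data),      # filesize
        0, 0, 0, 0,     # devmajor, devminor, rdevmajor, rdevminor
        len(name) + 1,  # namesize, counting the trailing NUL
        0,              # checksum (always 0 for "070701" archives)
    ]
    rec = b"070701" + b"".join(b"%08X" % f for f in fields)
    rec += name.encode() + b"\x00"
    rec += b"\x00" * (-len(rec) % 4)          # header + name padded to 4 bytes
    rec += data + b"\x00" * (-len(data) % 4)  # file data padded to 4 bytes
    return rec

# Pack one stand-in guest binary (mode 0555, as the recipe does) plus the trailer.
archive = newc_entry("usr/bin/capsem-pty-agent", b"\x7fELF\x02\x01\x01", 0o100555)
archive += newc_entry("TRAILER!!!", b"", 0)
initrd_img = gzip.compress(archive)
print(initrd_img[:2] == b"\x1f\x8b")  # gzip magic: what the kernel decompresses at boot
```

The 4-byte alignment and the `TRAILER!!!` sentinel entry are what the kernel's initramfs unpacker expects.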
This is why `just run` is fast (~10s) — it only rebuilds what changed, not the full rootfs.
Stage 3: Host binary
This stage has two parts: the frontend build and the Rust compilation.
Frontend (pnpm build)
The UI lives in `frontend/` and is built by pnpm before Rust compilation starts. The build chain:
- `pnpm install` — installs npm dependencies (Astro, Svelte, Tailwind, DaisyUI, xterm.js, LayerChart, sql.js, Tauri API bindings)
- `astro build` — compiles `.astro` and `.svelte` files into static HTML/JS/CSS in `frontend/dist/`
- Tauri’s build step copies `frontend/dist/` into the Rust binary as embedded assets
The frontend stack:
| Technology | Role |
|---|---|
| Astro 5 | Static site generator — page routing, builds the app shell |
| Svelte 5 | Reactive components — terminal view, stats charts, settings panels |
| Tailwind v4 + DaisyUI v5 | Styling — utility classes + themed component library |
| xterm.js 6 | Terminal emulator — renders the in-VM shell |
| LayerChart 2 | Charts — session stats, cost tracking (D3-based Svelte library) |
| sql.js | SQLite in the browser — queries session DBs client-side |
For frontend iteration without booting a VM, use `just ui` (Astro dev server with mock data on port 5173). For the full Tauri app with hot-reload, use `just dev`.
Rust compilation (cargo build)
The Rust workspace compiles into a single `capsem` binary:
| Crate | Role |
|---|---|
| `capsem-core` | All business logic: VM config, boot, vsock, MITM proxy, MCP gateway, network policy, telemetry |
| `capsem-app` | Thin Tauri shell: IPC commands, CLI, state management |
| `capsem-proto` | Shared protocol types between host and guest |
| `capsem-logger` | Session DB schema and async writer (SQLite) |
On macOS, the binary must be codesigned with the `com.apple.security.virtualization` entitlement or Virtualization.framework crashes. The justfile handles this automatically via the `_sign` recipe.
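A sketch of the kind of `codesign` invocation the `_sign` recipe performs. The binary path, entitlements file name, and the ad-hoc identity are assumptions, not Capsem's actual values:

```python
# Hypothetical equivalent of the justfile's _sign recipe on macOS.
def sign_command(binary: str, entitlements: str) -> list[str]:
    return [
        "codesign", "--force",
        "--sign", "-",                   # "-" = ad-hoc signing identity
        "--entitlements", entitlements,  # plist granting com.apple.security.virtualization
        binary,
    ]

print(" ".join(sign_command("target/release/capsem", "capsem.entitlements")))
```

Without the entitlement in the signed binary, Virtualization.framework refuses to start a VM, which is why the build signs on every run rather than leaving it to release packaging.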
Stage 4: Boot
The `capsem` binary loads three assets from `assets/{arch}/`:
| Asset | Produced by | What it is |
|---|---|---|
| `vmlinuz` | `just build-assets` | Custom Linux kernel (no modules, no IP stack, 7MB) |
| `initrd.img` | `just run` (repacked each time) | Guest binaries + init scripts |
| `rootfs.squashfs` | `just build-assets` | Debian bookworm base + AI CLIs + tools |
Boot sequence: the kernel unpacks the initrd into RAM, then `capsem-init` (PID 1) sets up overlayfs and air-gapped networking and launches the PTY agent and net proxy. The host connects over vsock.
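The overlayfs step amounts to a single `mount` invocation. This sketch only constructs the command; every path here is hypothetical, since the real `capsem-init` layout isn't shown in this doc:

```python
# Illustrative only: builds the overlayfs mount command capsem-init (PID 1)
# would run to layer a writable tmpfs over the read-only squashfs root.
def overlay_mount_cmd(lower: str, upper: str, work: str, target: str) -> list[str]:
    opts = f"lowerdir={lower},upperdir={upper},workdir={work}"
    return ["mount", "-t", "overlay", "overlay", "-o", opts, target]

print(" ".join(overlay_mount_cmd("/mnt/rootfs", "/mnt/upper", "/mnt/work", "/newroot")))
```

The lower layer is the squashfs rootfs; all writes land in the upper layer, so the VM always boots from a pristine image.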
VM image builds (just build-assets)
The slow path (~10 min, first-time only). The `capsem-builder` Python CLI reads TOML configs from `guest/config/` and produces the kernel + rootfs via Docker/Podman.
```sh
uv run capsem-builder build guest/ --arch arm64   # build everything
uv run capsem-builder validate guest/             # lint configs
uv run capsem-builder doctor guest/               # check prerequisites
```

Container runtime
The builder needs Docker or Podman.
macOS — Both run inside a Linux VM. The default memory (2GB for Podman) is too small. Minimum 4GB, recommended 8GB.
```sh
# Podman setup
brew install podman
podman machine init --memory 8192 --cpus 8
podman machine start
```

```sh
# Fix existing machine
podman machine stop
podman machine set --memory 8192 --cpus 8
podman machine start
```

Docker Desktop: Settings -> Resources -> set Memory to 8GB, CPUs to 8.
Linux — Containers run natively, no memory tuning needed.
```sh
# Debian/Ubuntu
sudo apt install podman
```

```sh
# Fedora/RHEL
sudo dnf install podman
```

Tools summary
Everything below is checked by `bootstrap.sh` and `just doctor`. You don’t need to install these manually — the bootstrap script tells you exactly what’s missing.
| Tool | What it does in the build |
|---|---|
| Rust (stable) | Compiles host + guest binaries |
| `rust-lld` | Linker for musl cross-compilation |
| just | Task runner — single entry point for all workflows |
| Node.js 24+ / pnpm | Builds the Astro + Svelte frontend |
| Python 3.11+ / uv | Runs capsem-builder (image builds, schema generation) |
| Docker or Podman | Container runtime for kernel + rootfs builds |
| cargo-llvm-cov | Code coverage (just test) |
| cargo-audit | Dependency vulnerability scanning |
| cargo-tauri | Tauri CLI for app builds |
| b3sum | BLAKE3 checksums for asset integrity |
| codesign (macOS) | Signs binary with virtualization entitlement |
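A miniature of the kind of presence check those scripts perform, using tool names from the table above. The real `bootstrap.sh` and `just doctor` also check versions and platform specifics, which this sketch skips:

```python
import shutil

# Subset of the tools table; presence-only check, no version pinning.
REQUIRED = ["cargo", "just", "node", "pnpm", "uv", "b3sum"]

missing = [tool for tool in REQUIRED if shutil.which(tool) is None]
print("missing:", ", ".join(missing) if missing else "none")
```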