Documentation menu

Security Architecture

Overview

Printhouse gives an AI agent a real computer with real capabilities. That’s the whole point. But “real capabilities” means the security model has to be genuinely good — not security theater, not a checkbox exercise.

Here’s how it works.

Container isolation

Every user gets their own isolated workspace. These aren’t shared VMs or containers running side-by-side with weak boundaries. Each workspace runs in gVisor — the same container runtime Google uses for Cloud Run and other multi-tenant services.

gVisor intercepts system calls in a user-space kernel rather than relying on Linux kernel namespace isolation alone. This provides defense in depth: even if an application escapes the container’s namespace, gVisor’s syscall layer limits what can reach the host kernel.
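For illustration, this is how gVisor’s `runsc` runtime is typically wired into a container engine — a generic setup sketch, not Printhouse’s actual orchestration (which this doc doesn’t describe):

```shell
# Register gVisor's runsc binary as a container runtime
# (in /etc/docker/daemon.json):
#
# {
#   "runtimes": {
#     "runsc": { "path": "/usr/local/bin/runsc" }
#   }
# }

# After restarting the daemon, launch a container under gVisor
# instead of the default runc runtime:
docker run --rm --runtime=runsc alpine dmesg
# Inside gVisor, dmesg shows the user-space kernel's boot messages,
# confirming syscalls are not hitting the host kernel directly.
```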

Your workspace is yours. Other users’ workspaces are completely isolated — different processes, different filesystems, different network namespaces.

Credential proxy

This is the most important security property in the system: your API keys and OAuth tokens never touch the agent.

Here’s the architecture:

Each workspace has two isolated components:

  1. Sandbox — where the agent runs. Has the AI harness, your files, your tools. Does NOT have your real credentials. Instead, it has dummy placeholder keys.
  2. Credential proxy — a separate process that intercepts all outbound network traffic from the sandbox. It holds the real credentials in memory.

When the agent makes an outbound request:

  1. The request includes a dummy placeholder key (the only key the agent knows about)
  2. All outbound traffic is transparently routed through the credential proxy
  3. The proxy matches the request to a connected app and its configured credential
  4. It swaps the dummy key for the real credential in the appropriate header
  5. The request continues to the external service with valid credentials
  6. The response flows back to the agent
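The swap in steps 3–5 can be sketched as a header rewrite. This is a hypothetical illustration — `ConnectedApp`, `DUMMY_KEY`, and `inject_credentials` are invented names, not Printhouse’s real internals:

```python
# Hypothetical sketch of the credential proxy's header-rewriting step.
from dataclasses import dataclass

DUMMY_KEY = "pk-dummy-0000"  # the only "credential" the sandbox ever sees


@dataclass
class ConnectedApp:
    host: str           # e.g. "api.github.com"
    real_token: str     # held only in the proxy's memory
    header: str = "Authorization"


def inject_credentials(host: str, headers: dict,
                       apps: list[ConnectedApp]) -> dict:
    """Swap the sandbox's dummy key for the real credential when the
    request targets a connected app; otherwise strip the dummy key."""
    for app in apps:
        if host == app.host and headers.get(app.header) == f"Bearer {DUMMY_KEY}":
            rewritten = dict(headers)
            rewritten[app.header] = f"Bearer {app.real_token}"
            return rewritten
    # Not a connected app: never forward the placeholder upstream.
    return {k: v for k, v in headers.items() if v != f"Bearer {DUMMY_KEY}"}


apps = [ConnectedApp("api.github.com", "ghp_real_secret")]
out = inject_credentials("api.github.com",
                         {"Authorization": f"Bearer {DUMMY_KEY}"}, apps)
```

Note that the sandbox side of this exchange is unchanged either way — the agent sends the same dummy key whether or not the request matches a connected app.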

The agent cannot see, log, or exfiltrate your real API keys because they literally don’t exist in its environment. They only exist in the credential proxy’s memory, which the agent cannot access — the agent runs in a restricted sandbox with limited capabilities, so it can’t tamper with or bypass the proxy.

Credentials are encrypted at rest using AES-256-GCM before being stored in the database. They’re only decrypted when loaded into the credential proxy.
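The doc specifies AES-256-GCM but not the library or key-management scheme, so the following is a minimal sketch using the third-party `cryptography` package’s AEAD primitive, with a random 96-bit nonce stored alongside the ciphertext:

```python
# Illustrative AES-256-GCM at-rest encryption; key handling is assumed,
# not taken from Printhouse's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_credential(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                # 96-bit nonce, unique per encryption
    ct = AESGCM(key).encrypt(nonce, plaintext, None)  # ciphertext || GCM tag
    return nonce + ct                     # store the nonce with the blob


def decrypt_credential(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None)  # raises on any tampering


key = AESGCM.generate_key(bit_length=256)  # 32-byte key for AES-256
blob = encrypt_credential(key, b"ghp_real_secret")
```

Because GCM is authenticated, decryption fails loudly if the stored blob is modified — the tag check catches it before any plaintext is returned.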

Network access controls

You control what your agent can reach on the internet. There are two modes:

Open access (default)

The agent can reach any domain. Requests to connected app domains get automatic credential injection. Requests to other domains pass through without credentials.

This is the right default for exploration and general use — the agent can search the web, access documentation, hit public APIs, etc.

Connected apps only

Requests must go through a connected app, limited to the permissions you granted. It’s not just a domain allowlist — every request must match a specific connected app credential. Even if you’ve connected GitHub, the agent can only use your GitHub credential with the scopes you authorized. It can’t forge a request to a different account or use a different token.

This is the stricter option — the agent can only talk to services you’ve explicitly authorized, using only the credentials you’ve granted. If you didn’t connect it, the request doesn’t leave the sandbox.
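The difference between the two modes comes down to a single routing decision in the proxy. A rough sketch — the enum names and decision function are hypothetical, not Printhouse’s real configuration surface:

```python
# Hypothetical sketch of the proxy's per-request network-mode check.
from enum import Enum


class NetworkMode(Enum):
    OPEN = "open"                  # default: any domain reachable
    CONNECTED_ONLY = "connected"   # only connected-app hosts reachable


def allow_request(mode: NetworkMode, host: str,
                  connected_hosts: set[str]) -> bool:
    if mode is NetworkMode.OPEN:
        # Everything is reachable; credentials are injected only for
        # connected-app hosts, everything else passes through bare.
        return True
    # Stricter mode: anything that isn't a connected app never leaves
    # the sandbox.
    return host in connected_hosts


connected = {"api.github.com"}
```

In `CONNECTED_ONLY` mode the check runs before any credential lookup, so an unmatched request is dropped rather than forwarded without credentials.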

What this means in practice

Compared to other AI agent platforms:

  • Self-hosted agents (OpenClaw, etc.) typically give the agent raw API keys in environment variables. If the agent is compromised or hallucinating, your keys are exposed.
  • Claude Code / local agents run on your personal machine with your local credentials. The blast radius of a mistake is your entire development environment.
  • Printhouse keeps real credentials in an isolated proxy. The agent works with placeholders. Even in the worst case — a jailbroken model, a prompt injection attack, a malicious skill — the agent cannot access your actual credentials.

This isn’t a minor architectural detail. It’s the foundation that makes it safe to give an AI agent real power.