Infrastructure · 9 min read · Apr 13, 2026

Building an Authenticated AI Gateway: How We Put OpenClaw Behind Enterprise SSO

AI assistants are powerful, but deploying one inside an organization without authentication is like leaving the front door wide open. Here's how we built authsec-openclaw — a Go reverse proxy that wraps OpenClaw with enterprise-grade SSO and gives every chat session a verified identity.

OpenClaw · SSO · OAuth2 · Reverse Proxy

Ritam Kumar Kundu

Engineering

The Problem

OpenClaw is great at orchestrating AI models, tools, and plugins. But out of the box, it doesn't know who is using it. That's a non-starter for any team or enterprise deployment where every request needs to be tied back to a real person.

We needed a solution that could sit in front of OpenClaw, handle authentication, inject user identity into the AI context, and do it all without modifying OpenClaw itself. Specifically, the gaps we had to close were:

  • Identity — know which user is behind every request.
  • Access control — restrict who can use the system.
  • Audit trail — trace actions back to a real person.
  • Multi-tenancy — serve multiple teams from one deployment.

The Architecture

The answer was a Go reverse proxy that acts as the single entry point. The browser talks to the AuthSec proxy, the proxy talks to the OpenClaw gateway, and OpenClaw in turn talks to LLM providers (OpenAI, Anthropic, etc.), tools (browser, exec, Discord), an optional paired Windows node, and a shared workspace volume.

On the proxy side we handle OAuth2/OIDC login, session management, user identity injection, and rate limiting. On the OpenClaw side, the gateway focuses on what it's good at — orchestrating LLMs, executing tools, and managing the workspace.

Here's what happens when a user hits the public URL on port 8080:

  • The proxy checks for a valid session cookie.
  • If there is no session, it redirects to AuthSec for OAuth2/OIDC login.
  • After login, the proxy creates an HMAC-signed session with the user's identity.
  • On every proxied request, it writes a USER.md file into OpenClaw's workspace with the authenticated user's email and ID.
  • When the user asks 'who am I?', OpenClaw reads that file and responds with the real identity.

Three Authentication Modes

We built the proxy to support three auth modes, because different environments have different needs. The mode is set via a single environment variable (AUTHSEC_MODE), and the proxy wires up the correct adapter at startup. In production, you'd use OIDC or Native mode. For hacking locally, stub mode gets you running in seconds.

  • Stub — for local development. A hardcoded allowlist with no external IdP needed.
  • OIDC — for standard deployments. Works with any OAuth2/OIDC provider such as Google, Okta, or Azure AD.
  • Native AuthSec — for the full platform. Uses AuthSec's own multi-tenant SSO with RBAC.

This is a zero-modification integration — OpenClaw doesn't need to know about AuthSec at all. It just reads user context from its workspace like it would any other file.

Session Management Done Right

Sessions are stored in-memory with HMAC-SHA256 signed cookies. We deliberately avoided JWTs for session tokens here — the proxy is the only consumer, so there's no need for a self-contained token. The session store also runs a background goroutine that sweeps expired entries every five minutes — simple, effective, and no external dependency like Redis is needed for a single-node deployment.

  • Server-side revocation — kill a session instantly, no waiting for token expiry.
  • Small cookies — just a session ID, not a full JWT payload.
  • TTL-based cleanup — expired sessions are garbage-collected automatically.
  • Secure defaults — HttpOnly, SameSite=Lax, and an optional Secure flag for HTTPS.

Identity Injection: The Bridge Between Auth and AI

The most interesting piece is how authenticated identity flows into the AI context. The auth middleware does four things on every request:

  • Validates the session.
  • Extracts user claims such as email, subject, and admin status.
  • Writes a USER.md into OpenClaw's shared workspace volume.
  • Stores identity in the request context for logging.

Why USER.md works

When you chat with OpenClaw and ask 'who am I?', it doesn't hallucinate — it reads real, verified identity data. The AI knows who it's talking to, backed by an OAuth2 flow, not a guess.

There's no custom API surface between the proxy and OpenClaw. The proxy just writes a markdown file. OpenClaw already reads workspace files — we just gave it one more.

Security Layers

We didn't just bolt on a login page. The proxy enforces several security policies in front of the AI gateway.

  • Token leak prevention — query parameters containing tokens are rejected outright, so secrets can't end up in URLs, logs, or browser history.
  • Request body limits — oversized payloads are blocked before they reach the backend.
  • Rate limiting — auth endpoints are throttled to prevent brute-force attacks.
  • Admin enforcement — certain routes require admin claims in the session.
  • Network isolation — in Docker, OpenClaw lives on an internal network with the proxy as the only ingress; in Kubernetes, NetworkPolicies enforce the same boundary.

Windows Node Integration

One of the more unique features: OpenClaw can pair with a Windows machine as a 'node' to execute real desktop actions — opening Notepad, launching a browser, managing files. The proxy doesn't interfere with this; it authenticates the web session and then proxies WebSocket connections transparently to the gateway, which communicates with the paired node.

When a user says 'open Notepad on my machine', the request flows from the authenticated browser session, through the proxy as a WebSocket passthrough, to the OpenClaw gateway, and finally to the paired Windows node where Notepad actually launches.

The node pairing flow uses a device approval model: the node requests pairing, an operator approves it from the gateway, and then exec permissions are unlocked. For a single-user local setup, a helper PowerShell script automates this entire flow.

From Docker Compose to Kubernetes

We ship two deployment paths. For local development, Docker Compose runs three services on a bridge network — OpenClaw as the internal AI gateway, the proxy as the public-facing auth layer, and Caddy for optional TLS termination. One 'docker compose up -d --build' and you're running. The .env file holds all configuration: AuthSec credentials, LLM provider keys, and session secrets.
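A rough shape of that Compose file, as a hedged sketch: service names, image tags, and the env keys are illustrative, not the shipped configuration.

```yaml
services:
  openclaw:
    image: openclaw/openclaw:latest     # internal AI gateway, never exposed
    networks: [internal]
    volumes: [workspace:/workspace]
  authsec-proxy:
    build: ./proxy
    env_file: .env                      # AUTHSEC_MODE, session secret, IdP creds
    ports: ["8080:8080"]                # the only public ingress
    networks: [internal]
    volumes: [workspace:/workspace]     # shared so USER.md is visible to OpenClaw
  caddy:
    image: caddy:2
    ports: ["443:443"]                  # optional TLS termination
    networks: [internal]

networks:
  internal:

volumes:
  workspace:
```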

For production, we ship a full-featured Helm chart. The chart also supports multi-tenant routing, where different tenants hit different OpenClaw instances based on hostname or URL path.

  • Separate Deployments for the proxy and OpenClaw with independent scaling.
  • Horizontal Pod Autoscaler on the proxy — the proxy logic is stateless, though with in-memory sessions a multi-replica deployment needs sticky sessions or an external session store.
  • NetworkPolicy ensuring only proxy pods can reach OpenClaw on port 18789.
  • PodDisruptionBudget for zero-downtime upgrades.
  • Pod security: non-root user, read-only root filesystem, dropped capabilities, seccomp profiles.
  • Ingress with cert-manager integration for automatic TLS.
  • Existing Secrets support — credentials never live in values.yaml.

The Installer

For teams that want a one-command setup, we built an idempotent bash installer (install.sh) that handles the full bootstrap. It supports a non-interactive mode for CI/CD and a dry-run mode for previewing changes. All sensitive files are written with 0600 permissions.

  • Checks prerequisites (Docker, curl, jq).
  • Detects or installs OpenClaw.
  • Generates secrets and writes the .env file.
  • Bootstraps AuthSec RBAC by creating roles and bindings via the admin API.
  • Runs a device-code authentication flow that opens the user's browser.
  • Starts Docker Compose.
  • Runs health checks to verify everything is working.
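Idempotency in a bootstrap script mostly means "check before you write." A hypothetical excerpt in the spirit of install.sh (the function name and file layout are illustrative), showing the secret-generation step:

```shell
#!/usr/bin/env bash
set -euo pipefail

ENV_FILE=".env"

ensure_session_secret() {
  # Skip if the secret already exists, so re-running install.sh is safe.
  if grep -q '^SESSION_SECRET=' "$ENV_FILE" 2>/dev/null; then
    return
  fi
  umask 177  # new files are created with 0600 permissions
  secret="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"
  printf 'SESSION_SECRET=%s\n' "$secret" >> "$ENV_FILE"
}

ensure_session_secret
ensure_session_secret  # second call is a no-op
grep -c '^SESSION_SECRET=' "$ENV_FILE"
```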

What We Learned

Four lessons stood out from building this integration.

  • Keep the proxy dumb about AI. The proxy doesn't parse prompts, filter responses, or understand what OpenClaw does — it handles auth, sessions, and proxying. That separation lets OpenClaw evolve independently while the auth layer stays stable.
  • USER.md is a surprisingly effective bridge. Rather than building a complex API integration between the proxy and OpenClaw, writing a markdown file to a shared volume turned out to be the simplest and most reliable approach.
  • WebSocket proxying is the hard part. HTTP proxying is straightforward, but OpenClaw uses WebSockets for real-time chat, and getting the upgrade handshake, session validation, and connection lifecycle right took more iteration than the rest of the proxy combined.
  • Stub mode saves hours. A zero-dependency auth mode for local development meant we could iterate on the proxy logic without needing a running AuthSec instance, and the mode switch is clean enough that there's no risk of stub mode leaking into production.

What's Next

We've got a roadmap of features that build on this foundation.

  • Social login at the client level — Google and Microsoft login configured per-tenant through AuthSec's admin panel.
  • MCP SDK integration — connecting the Python-based AuthSec MCP SDK so individual AI tools can be gated by RBAC, not just the gateway as a whole.
  • Full Hydra-backed flows — wiring claw-auth through AuthSec's Ory Hydra backend for enterprise OIDC and SAML federation.

Try It

The project is open source and lives at github.com/authsec-ai/claw-auth. To run an authenticated OpenClaw instance locally, clone the repo, copy the example .env, fill in your AuthSec credentials and an LLM API key, and bring the stack up with Docker Compose. Then open the proxy on port 8080 and ask 'who am I?' — and this time, the AI will actually know.

authsec-openclaw is part of the AuthSec platform — enterprise identity and access management for AI applications.

  • Clone github.com/authsec-ai/claw-auth.
  • Change into deploy/docker-compose and copy .env.example to .env.
  • Edit .env with your AuthSec credentials and an LLM API key.
  • Run 'docker compose up -d --build'.
  • Open http://localhost:8080 and sign in.