NemoClaw - NVIDIA's Enterprise Security Layer for AI Agents
Sandboxed execution, default-deny policies, privacy routing, and human-in-the-loop approvals for OpenClaw agents.
NemoClaw wraps OpenClaw agents in enterprise-grade security - Landlock, seccomp, and network namespace isolation
What Is NemoClaw?
NemoClaw is NVIDIA's open-source enterprise security and runtime stack for OpenClaw autonomous AI agents, announced by Jensen Huang at GTC 2026 on March 16, 2026.
The relationship is simple: OpenClaw = the agent OS. NemoClaw = the enterprise security hardening layer on top.
| Component | What It Does |
|---|---|
| OpenClaw | Agent framework - skills, orchestration, memory, channels (Telegram, CLI, etc.) |
| NemoClaw | Security layer - sandboxing, default-deny policies, privacy routing, HITL approvals, audit trails |
Why NemoClaw Exists
Autonomous AI agents with broad system access are powerful but dangerous. SecurityScorecard found tens of thousands of exposed OpenClaw instances running on the open internet. One incident involved an agent deleting an entire inbox for a Meta AI executive. NemoClaw provides the missing security infrastructure.
Architecture
NemoClaw layers three core components on top of OpenClaw:
| Layer | Purpose | Implementation |
|---|---|---|
| OpenShell Runtime | Sandboxed execution | Landlock + seccomp + netns (Linux kernel isolation) |
| Privacy Router | Data locality control | Routes to local or cloud inference based on data sensitivity |
| Nemotron Models | Local inference | On-premise via NVIDIA NIM or Ollama |
Key Design Principles
- Sandbox Supervisor runs out-of-process - the agent cannot access, modify, or terminate its own security controls
- Default-deny - if an action isn't explicitly permitted in YAML policy, it's blocked
- Human-in-the-loop - blocked actions surface in the OpenShell TUI for approve (a) or reject (r)
- Blueprint system - deployments defined by Python scripts for repeatability and auditability
Installation & Setup
Prerequisites
| Requirement | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPU | 4+ vCPU |
| RAM | 8 GB | 16 GB |
| Disk | 20 GB free | 40 GB free |
| Node.js | 22.16+ | Latest LTS |
| Docker | Required | Linux native (macOS: Colima/Docker Desktop) |
| NVIDIA API Key | Required | Generate at build.nvidia.com |
One-Command Install
# Install NemoClaw
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
# If OpenShell CLI fails (known issue in alpha):
curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | OPENSHELL_VERSION=dev sh
# Then re-run nemoclaw.sh
Onboarding Wizard
# The wizard runs automatically after install, or manually:
nemoclaw onboard
# It will:
# 1. Run preflight checks (Docker running, Node.js version, etc.)
# 2. Prompt for sandbox name (default: my-assistant)
# 3. Ask for NVIDIA API key
# 4. Offer model selection (recommended: Nemotron 3 Super 120B)
# 5. Offer policy presets (pypi, npm, telegram, discord)
# 6. Build the sandbox image with Landlock + seccomp + netns
Verify Installation
# Check status
nemoclaw my-assistant status
# Connect to sandbox
nemoclaw my-assistant connect
# Launch the interactive TUI
openclaw tui
# Send a test message
openclaw agent --agent main --local \
-m "What is NemoClaw?" \
--session-id test
Expected output: the banner 🦞 OpenClaw 2026.3.11 (29dc654) - Hot reload for config, cold sweat for deploys. followed by the agent's response.
Policy Configuration
NemoClaw uses declarative YAML policies with a default-deny posture. If it's not in the allowlist, it's blocked.
# Network policy - control which endpoints the agent can reach
network:
  default: deny
  allow:
    - domain: "api.telegram.org"
      ports: [443]
      protocol: https
    - domain: "pypi.org"
      ports: [443]
      protocol: https
    - domain: "registry.npmjs.org"
      ports: [443]
      protocol: https
    - domain: "api.nvidia.com"
      ports: [443]
      protocol: https

# Filesystem policy - restrict reads/writes
filesystem:
  default: deny
  allow:
    - path: "/workspace"
      permissions: [read, write]
    - path: "/tmp"
      permissions: [read, write]
  deny:
    - path: "/etc"
      permissions: [write]
    - path: "/root/.ssh"
      permissions: [read, write]

# Process policy - control what the agent can execute
process:
  default: deny
  allow:
    - command: "python3"
    - command: "node"
    - command: "git"
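The default-deny semantics of the network policy above can be sketched as a small evaluator. This is an illustrative sketch, not NemoClaw's actual policy engine; the `NetworkPolicy` type and `isAllowed` helper are hypothetical names:

```typescript
// Hypothetical types mirroring the YAML network policy shown above.
interface NetworkRule {
  domain: string;
  ports: number[];
  protocol: string;
}

interface NetworkPolicy {
  default: "deny" | "allow";
  allow: NetworkRule[];
}

// Default-deny: a request passes only if some allow rule matches it.
function isAllowed(policy: NetworkPolicy, domain: string, port: number): boolean {
  const match = policy.allow.some(
    (rule) => rule.domain === domain && rule.ports.includes(port)
  );
  return match ? true : policy.default === "allow";
}

const policy: NetworkPolicy = {
  default: "deny",
  allow: [{ domain: "api.telegram.org", ports: [443], protocol: "https" }],
};

console.log(isAllowed(policy, "api.telegram.org", 443)); // true
console.log(isAllowed(policy, "evil.example.com", 443)); // false
```

The key property is that the deny branch is the fall-through: anything the allowlist does not explicitly match is blocked by default.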
Policy Presets
During onboarding, NemoClaw offers presets for common services:
- pypi - allows pip install from pypi.org
- npm - allows npm install from registry.npmjs.org
- telegram - allows Telegram Bot API access
- discord - allows Discord API access
Dynamic Policies
# Apply a policy to a running sandbox
nemoclaw my-assistant policy add --domain "api.openai.com" --port 443
# Remove a policy
nemoclaw my-assistant policy remove --domain "api.openai.com"
# List active policies
nemoclaw my-assistant policy list
Building Agent Workflows
NemoClaw inherits OpenClaw's workflow architecture. Agents follow a think → plan → act → observe loop:
1. Ingestion - Messages arrive via channel adapters (Telegram, CLI, API)
2. Context - Session history + memory search + tool schemas + system prompt
3. Model Call - LLM decides which skills to invoke
4. Execution - Skills execute within NemoClaw sandbox (policy-checked)
5. Response - Results routed back through the channel
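The five stages above can be sketched as a single function. The names here (`handleMessage`, `Decision`, `SkillFn`) are hypothetical illustrations, not OpenClaw's real API:

```typescript
// Hypothetical shape of the model's decision: either invoke a skill or reply directly.
interface Decision {
  skill: string | null; // null means the model answered without a tool
  args: Record<string, string>;
  reply: string;
}

type SkillFn = (args: Record<string, string>) => string;

// One pass of the think → plan → act → observe loop.
function handleMessage(
  message: string,
  history: string[],
  callModel: (context: string) => Decision,
  skills: Record<string, SkillFn>
): string {
  // 1-2. Ingestion + context assembly (history, memory, schemas would go here)
  const context = [...history, message].join("\n");
  // 3. Model call decides which skill, if any, to invoke
  const decision = callModel(context);
  // 4. Execution - in NemoClaw every invocation is policy-checked first
  if (decision.skill !== null) {
    const skill = skills[decision.skill];
    if (!skill) throw new Error(`skill not permitted: ${decision.skill}`);
    return skill(decision.args);
  }
  // 5. Response routed back through the channel
  return decision.reply;
}
```

In a real deployment the `skills` map would only contain entries the policy engine has approved, so an unlisted skill name fails closed.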
Multi-Agent Coordination
// Agent-to-agent messaging inside the NemoClaw sandbox
await manager.sessionSendTo('planner', 'coder',
'Auth module needs rate limiting');
// Broadcast to all agents
await manager.sessionSendTo('monitor', '*',
'Build failed - all agents pause current tasks');
Heartbeat Loop
At a configurable interval (default: 5 minutes), the agent checks for scheduled actions, external triggers, and background tasks. This enables time-based automation like "check email every hour" or "run security scan at midnight."
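A heartbeat of this kind can be sketched with a plain timer tick. The `ScheduledTask` shape and `runDueTasks` helper are assumptions for illustration, not NemoClaw's internals:

```typescript
// Hypothetical scheduled-task shape.
interface ScheduledTask {
  name: string;
  nextRunMs: number;   // epoch ms when the task is next due
  intervalMs: number;  // e.g. 3_600_000 for "check email every hour"
  run: () => void;
}

// One heartbeat tick: run everything that is due, then reschedule it.
function runDueTasks(tasks: ScheduledTask[], nowMs: number): string[] {
  const ran: string[] = [];
  for (const task of tasks) {
    if (nowMs >= task.nextRunMs) {
      task.run();
      task.nextRunMs = nowMs + task.intervalMs;
      ran.push(task.name);
    }
  }
  return ran;
}

// A real agent would call this from setInterval(tick, 5 * 60 * 1000).
```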
Skills & MCP
- 31,000+ community skills on Clawhub
- Skills are TypeScript modules with defined schemas
- MCP (Model Context Protocol) integration for dynamic tool discovery
- All skill invocations pass through the NemoClaw policy engine
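A TypeScript skill in this spirit might look like the following. The `SkillDefinition` interface and `weatherSkill` example are hypothetical, since the exact OpenClaw skill interface isn't reproduced here:

```typescript
// Hypothetical skill definition: a schema the model sees plus a handler.
interface SkillDefinition {
  name: string;
  description: string;
  parameters: Record<string, { type: string; description: string }>;
  handler: (args: Record<string, unknown>) => string;
}

const weatherSkill: SkillDefinition = {
  name: "get_weather",
  description: "Look up current weather for a city",
  parameters: {
    city: { type: "string", description: "City name" },
  },
  // In NemoClaw, the policy engine would check any outbound request
  // this handler makes before it leaves the sandbox.
  handler: (args) => `Weather lookup requested for ${String(args.city)}`,
};
```

The schema half is what MCP-style tool discovery exposes to the model; the handler half is what executes inside the sandbox.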
Production Deployment
Local Development
nemoclaw my-assistant connect
openclaw tui # Interactive terminal UI
openclaw agent --agent main --local -m "hello" --session-id dev
Remote GPU Instance (Always-On)
# NVIDIA's recommended path: Brev.dev
# 1. Go to NVIDIA NemoClaw page → "Try It Now" → Brev
# 2. Deploy the pre-configured launchable
# 3. Connect:
brev shell nemoclaw-xxxxxxx
# 4. Run installer inside the instance
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash
Docker
# The sandbox runs in Docker with kernel-level isolation
docker run -d \
  --name nemoclaw \
  --gpus all \
  -p 8080:8080 \
  -v /your/policies:/config:ro \
  -e NVIDIA_API_KEY=$NVIDIA_API_KEY \
  nvidia/nemoclaw:latest
Hardened Deployment (Ansible)
# The openclaw-ansible repo provides automated hardened setup:
# - Tailscale VPN for secure access
# - UFW firewall rules
# - Docker isolation
# - Automatic updates
git clone https://github.com/openclaw/openclaw-ansible
cd openclaw-ansible
ansible-playbook -i inventory.yml deploy.yml
Telegram Bridge (Common Pattern)
# Set bot token from BotFather
export TELEGRAM_BOT_TOKEN=your_token_here
# Start all services including Telegram bridge
nemoclaw start
# Monitor in second terminal - approve/deny policy requests
openshell term
# Press 'a' to approve, 'r' to reject
Platform Support
| Platform | Status |
|---|---|
| Linux + Docker | ✅ Primary tested path |
| macOS (Apple Silicon) + Colima | ⚠️ Tested with limitations |
| NVIDIA DGX Spark | ✅ Tested |
| Windows WSL2 + Docker Desktop | ⚠️ Tested with limitations |
LLM Provider Integration
NemoClaw is model-agnostic through OpenClaw's AI Gateway:
| Provider | Models | Notes |
|---|---|---|
| NVIDIA Nemotron | Nemotron 3 Super 120B | Native integration. Local or API. Free tier available. |
| OpenAI | GPT-4, GPT-4o | Via AI Gateway, routed through privacy router. |
| Anthropic | Claude 3 Opus/Sonnet | Cloud routing based on data sensitivity policy. |
| Local (Ollama) | Llama, Mistral, any GGUF | Zero-cost, fully offline execution. |
| Google | Gemini Pro | Multimodal tasks. |
The Privacy Router makes per-request decisions: sensitive data (medical, financial, PII) stays on local inference; less sensitive workloads route to cloud. All routing decisions are logged for audit.
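The per-request decision described above can be sketched as a classifier plus a routing rule. This is a simplified illustration; the patterns and the `routeRequest` helper are assumptions, not the real Privacy Router:

```typescript
type Route = "local" | "cloud";

// Very rough sensitivity check; a real router would combine policy rules
// with proper PII/PHI classifiers rather than a few regexes.
function classifySensitivity(text: string): "sensitive" | "normal" {
  const sensitivePatterns = [/\bssn\b/i, /\bdiagnosis\b/i, /\baccount number\b/i];
  return sensitivePatterns.some((p) => p.test(text)) ? "sensitive" : "normal";
}

// Sensitive data stays on local inference; every decision is logged for audit.
function routeRequest(text: string, auditLog: string[]): Route {
  const route: Route = classifySensitivity(text) === "sensitive" ? "local" : "cloud";
  auditLog.push(`route=${route} len=${text.length}`);
  return route;
}
```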
# Switch inference provider post-setup
nemoclaw my-assistant config set model ollama/llama3.3:70b
nemoclaw my-assistant config set model nvidia/nemotron-3-super-120b
nemoclaw my-assistant config set model openai/gpt-4o
Monitoring & Observability
# Real-time sandbox monitoring
openshell term
# Stream logs
nemoclaw my-assistant logs --follow
# Check status
nemoclaw my-assistant status
What Gets Logged
- All policy decisions (approved, denied, escalated)
- All tool/skill invocations with parameters
- All network requests with routing decisions (local vs cloud)
- All file system access attempts
- Full audit trails for compliance (HIPAA, GDPR, financial regulations)
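An audit entry covering these categories might be shaped like the record below. This is a hypothetical schema for illustration, not NemoClaw's actual log format:

```typescript
// Hypothetical audit record covering the logged categories above.
interface AuditRecord {
  timestamp: string;                                  // ISO 8601
  kind: "policy" | "skill" | "network" | "filesystem";
  decision: "approved" | "denied" | "escalated";
  detail: Record<string, unknown>;                    // e.g. domain, path, skill args
}

function makeRecord(
  kind: AuditRecord["kind"],
  decision: AuditRecord["decision"],
  detail: Record<string, unknown>
): AuditRecord {
  return { timestamp: new Date().toISOString(), kind, decision, detail };
}
```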
NemoClaw vs Alternatives
| Feature | NemoClaw | OpenClaw (standalone) | CrewAI | LangGraph |
|---|---|---|---|---|
| Focus | Enterprise agent security | Autonomous personal AI | Multi-agent orchestration | Stateful agent graphs |
| Security | Landlock+seccomp+netns, default-deny, HITL | Basic permission scoping | None built-in | None built-in |
| Privacy | Privacy router (local vs cloud) | None | None | None |
| Audit trails | Full compliance logging | None | None | None |
| HITL | Built-in TUI | Optional | Optional | Manual |
| Maturity | Alpha (Mar 2026) | Production (Nov 2025+) | Stable | Stable |
| Language | TypeScript | TypeScript | Python | Python |
| Built by | NVIDIA | Peter Steinberger | CrewAI Inc | LangChain |
The Bottom Line
NemoClaw is the only agent framework with comprehensive security sandboxing - default-deny policies, kernel-level isolation, privacy routing, and audit trails. If you're deploying OpenClaw agents in any environment where security matters (enterprise, regulated industries, production systems), NemoClaw is the hardening layer you need. It's alpha software, so expect rough edges - but the security architecture is sound and fills a critical gap that no other agent framework addresses.