# NemoClaw

NVIDIA NemoClaw is a plugin for secure, sandboxed installation and orchestration of OpenClaw always-on AI assistants via OpenShell.
## Install

Via the skill4agent registry:

```sh
npx skill4agent add aradotso/trending-skills nvidia-nemoclaw
```

Skill by ara.so — Daily 2026 Skills collection.
Via install script:

```sh
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
```

Or from source:

```sh
git clone https://github.com/NVIDIA/NemoClaw.git
cd NemoClaw
npm install
npm run build
npm link  # makes `nemoclaw` available globally
```

## Configuration

```sh
# Required: NVIDIA cloud API key for Nemotron inference
export NVIDIA_API_KEY="nvapi-xxxxxxxxxxxx"

# Optional: override default model
export NEMOCLAW_MODEL="nvidia/nemotron-3-super-120b-a12b"

# Optional: custom sandbox data directory
export NEMOCLAW_SANDBOX_DIR="/var/nemoclaw/sandboxes"
```

## Onboard an assistant

Run `nemoclaw onboard`. It prompts for an assistant name (e.g. `my-assistant`), reads `$NVIDIA_API_KEY` from the environment, and on success prints:
```
──────────────────────────────────────────────────
Sandbox  my-assistant (Landlock + seccomp + netns)
Model    nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:     nemoclaw my-assistant connect
Status:  nemoclaw my-assistant status
Logs:    nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────
[INFO] === Installation complete ===
```

## Connect

```sh
nemoclaw my-assistant connect
```

Inside the sandbox, the OpenClaw CLI is available:

```sh
sandbox@my-assistant:~$ openclaw tui
sandbox@my-assistant:~$ openclaw agent --agent main --local -m "hello" --session-id test
```

## Commands

### `nemoclaw` (host CLI)

| Command | Description |
|---|---|
| `nemoclaw onboard` | Interactive setup: gateway, providers, sandbox |
| `nemoclaw <name> connect` | Open interactive shell inside sandbox |
| `nemoclaw <name> status` | Show NemoClaw-level sandbox health |
| `nemoclaw <name> logs --follow` | Stream sandbox logs |
| | Start auxiliary services (Telegram bridge, tunnel) |
| | Stop auxiliary services |
| `nemoclaw deploy <instance> --sandbox <name>` | Deploy to remote GPU instance via Brev |
| | Launch OpenShell TUI for monitoring and approvals |
### `openclaw nemoclaw` (inside the sandbox)

Note: These are under active development — use the host `nemoclaw` CLI as the primary interface.
| Command | Description |
|---|---|
| `openclaw nemoclaw onboard` | Bootstrap OpenClaw inside OpenShell sandbox |
| `openclaw nemoclaw status` | Show sandbox health, blueprint state, and inference provider |
| `openclaw nemoclaw logs` | Stream blueprint execution and sandbox logs |
You can also inspect sandboxes directly at the OpenShell layer:

```sh
# List all sandboxes at the OpenShell layer
openshell sandbox list

# Check specific sandbox
openshell sandbox inspect my-assistant
```

## Architecture

| Component | Role |
|---|---|
| Plugin | TypeScript CLI: launch, connect, status, logs |
| Blueprint | Versioned Python artifact: sandbox creation, policy, inference setup |
| Sandbox | Isolated OpenShell container running OpenClaw with policy-enforced egress/filesystem |
| Inference | NVIDIA cloud model calls routed through OpenShell gateway |
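The SDK examples in this README all revolve around a small set of configuration shapes. The following is an inferred sketch of those shapes, assembled from the fields used in this document's examples — not the published typings of `@nvidia/nemoclaw`:

```typescript
// Inferred shapes, assembled from the fields this README's examples use;
// the real @nvidia/nemoclaw typings may differ.
interface NetworkPolicy {
  allowedEgressHosts?: string[]; // hosts the sandbox may reach
  blockUnlisted?: boolean;       // deny any host not listed above
}

interface FilesystemPolicy {
  allowedPaths?: string[]; // paths visible inside the sandbox
  readOnly?: boolean;      // mount the allowed paths read-only
}

interface SandboxPolicy {
  network?: NetworkPolicy;
  filesystem?: FilesystemPolicy;
}

interface SandboxConfig {
  name: string;   // sandbox identifier, e.g. "my-assistant"
  model: string;  // inference model routed through the OpenShell gateway
  policy?: SandboxPolicy;
}

// Example: the locked-down production preset shown later in this README.
const prodConfig: SandboxConfig = {
  name: 'prod-assistant',
  model: 'nvidia/nemotron-3-super-120b-a12b',
  policy: {
    network: { allowedEgressHosts: ['build.nvidia.com'], blockUnlisted: true },
    filesystem: { allowedPaths: ['/sandbox', '/tmp'], readOnly: false },
  },
};
```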
## SDK usage

### Initialize the client

```ts
import { NemoClawClient } from '@nvidia/nemoclaw';

const client = new NemoClawClient({
  apiKey: process.env.NVIDIA_API_KEY!,
  model: process.env.NEMOCLAW_MODEL ?? 'nvidia/nemotron-3-super-120b-a12b',
});
```

### Create a sandbox

```ts
import { NemoClawClient, SandboxConfig } from '@nvidia/nemoclaw';

async function createSandbox() {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const config: SandboxConfig = {
    name: 'my-assistant',
    model: 'nvidia/nemotron-3-super-120b-a12b',
    policy: {
      network: {
        allowedEgressHosts: ['build.nvidia.com'],
        blockUnlisted: true,
      },
      filesystem: {
        allowedPaths: ['/sandbox', '/tmp'],
        readOnly: false,
      },
    },
  };

  const sandbox = await client.sandbox.create(config);
  console.log(`Sandbox created: ${sandbox.id}`);
  return sandbox;
}
```

### Chat with an agent

```ts
import { NemoClawClient } from '@nvidia/nemoclaw';

async function chatWithAgent(sandboxName: string, message: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const sandbox = await client.sandbox.get(sandboxName);
  const session = await sandbox.connect();

  const response = await session.agent.send({
    agentId: 'main',
    message,
    sessionId: `session-${Date.now()}`,
  });

  console.log('Agent response:', response.content);
  await session.disconnect();
}

chatWithAgent('my-assistant', 'Summarize the latest NVIDIA earnings report.');
```

### Check sandbox status

```ts
import { NemoClawClient } from '@nvidia/nemoclaw';

async function checkStatus(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const status = await client.sandbox.status(sandboxName);
  console.log({
    sandbox: status.name,
    healthy: status.healthy,
    blueprint: status.blueprintState,
    inference: status.inferenceProvider,
    policyVersion: status.policyVersion,
  });
}
```

### Stream logs

```ts
import { NemoClawClient } from '@nvidia/nemoclaw';

async function streamLogs(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  const logStream = client.sandbox.logs(sandboxName, { follow: true });
  for await (const entry of logStream) {
    console.log(`[${entry.timestamp}] ${entry.level}: ${entry.message}`);
  }
}
```

### Update a network policy

```ts
import { NemoClawClient, NetworkPolicy } from '@nvidia/nemoclaw';

async function updateNetworkPolicy(sandboxName: string) {
  const client = new NemoClawClient({
    apiKey: process.env.NVIDIA_API_KEY!,
  });

  // Network policies are hot-reloadable at runtime
  const updatedPolicy: NetworkPolicy = {
    allowedEgressHosts: [
      'build.nvidia.com',
      'api.github.com',
    ],
    blockUnlisted: true,
  };

  await client.sandbox.updatePolicy(sandboxName, {
    network: updatedPolicy,
  });
  console.log('Network policy updated (hot reload applied).');
}
```

## Policy layers

| Layer | What it protects | Hot-reloadable? |
|---|---|---|
| Network | Blocks unauthorized outbound connections | ✅ Yes |
| Filesystem | Prevents reads/writes outside allowed paths | ❌ Locked at creation |
| Process | Blocks privilege escalation and dangerous syscalls | ❌ Locked at creation |
| Inference | Reroutes model API calls to controlled backends | ✅ Yes |
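As a mental model for the network layer, the egress decision implied by `allowedEgressHosts` and `blockUnlisted` can be sketched as a pure function. This is only an illustration of the semantics described above, not the gateway's actual implementation; the default-deny behavior when `blockUnlisted` is unspecified is an assumption:

```typescript
interface NetworkPolicy {
  allowedEgressHosts?: string[];
  blockUnlisted?: boolean;
}

// Illustrative egress check: a host passes if it is explicitly listed,
// or if the policy does not block unlisted hosts.
function isEgressAllowed(policy: NetworkPolicy, host: string): boolean {
  const listed = (policy.allowedEgressHosts ?? []).includes(host);
  if (listed) return true;
  // Assumption: treat an unspecified blockUnlisted as default-deny.
  return !(policy.blockUnlisted ?? true);
}

const prod: NetworkPolicy = {
  allowedEgressHosts: ['build.nvidia.com'],
  blockUnlisted: true,
};

isEgressAllowed(prod, 'build.nvidia.com'); // listed host passes
isEgressAllowed(prod, 'example.com');      // unlisted host is blocked
```

Because this layer is hot-reloadable, tightening or relaxing the host list takes effect without recreating the sandbox, unlike the filesystem and process layers.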
## Policy presets

A permissive configuration for development:

```ts
import { SandboxConfig } from '@nvidia/nemoclaw';

const config: SandboxConfig = {
  name: 'dev-sandbox',
  model: 'nvidia/nemotron-3-super-120b-a12b',
  policy: {
    network: { blockUnlisted: false }, // permissive for dev
    filesystem: { allowedPaths: ['/sandbox', '/tmp', '/home/dev'] },
  },
};
```

A locked-down configuration for production:

```ts
import { SandboxConfig } from '@nvidia/nemoclaw';

const config: SandboxConfig = {
  name: 'prod-assistant',
  model: 'nvidia/nemotron-3-super-120b-a12b',
  policy: {
    network: {
      allowedEgressHosts: ['build.nvidia.com'],
      blockUnlisted: true,
    },
    filesystem: {
      allowedPaths: ['/sandbox', '/tmp'],
      readOnly: false,
    },
  },
};
```

## Deploy to a remote GPU instance

Via the CLI:

```sh
nemoclaw deploy my-gpu-instance --sandbox my-assistant
```

Or via the SDK:

```ts
await client.deploy({
  instance: 'my-gpu-instance',
  sandboxName: 'my-assistant',
  provider: 'brev',
});
```

## Troubleshooting

**Error: Sandbox 'my-assistant' not found**

```sh
openshell sandbox list
nemoclaw my-assistant status
```

**Error: Inference provider authentication failed**

```sh
export NVIDIA_API_KEY="nvapi-xxxxxxxxxxxx"
nemoclaw onboard  # re-run to reconfigure
```

**Error: Cannot connect to Docker daemon**

```sh
sudo systemctl start docker
sudo usermod -aG docker $USER  # add current user to docker group
newgrp docker
```

**Error: 'openshell' command not found**

**An egress request is blocked or awaiting approval**

```sh
openshell term  # open TUI to approve/deny the request
# OR update policy to allow the host:
nemoclaw my-assistant policy update --allow-host api.example.com
```

When in doubt, stream the logs:

```sh
nemoclaw my-assistant logs --follow
# or with verbose flag
nemoclaw my-assistant logs --follow --level debug
```
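Several of the failures above (an expired key mid-rotation, a gateway restart) are transient, so it can help to wrap SDK calls in a small retry helper. This is a generic usage pattern sketch, not something `@nvidia/nemoclaw` ships:

```typescript
// Generic retry helper with exponential backoff; wrap any SDK call in it.
// Usage-pattern sketch only — not part of the @nvidia/nemoclaw API.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // back off: 200 ms, 400 ms, 800 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Example with a flaky operation that succeeds on the second attempt.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 2) throw new Error('transient failure');
  return 'ok';
};

withRetry(flaky).then((result) => console.log(result)); // prints "ok"
```

Keep the attempt count low for authentication errors: a genuinely bad `NVIDIA_API_KEY` will never succeed on retry, and re-running `nemoclaw onboard` is the real fix.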