Guidance for creating standalone CLI tools that perform neural network inference by extracting PyTorch model weights and reimplementing inference in C/C++. This skill applies when tasks involve converting PyTorch models to standalone executables, extracting model weights to portable formats (JSON), implementing neural network forward passes in C/C++, or creating CLI tools that load images and run inference without Python dependencies.
```
npx skill4agent add letta-ai/skills pytorch-model-cli
```

Extract the model weights to a portable JSON file (`model.py`):

```python
import torch
import json

# Load the state dict on CPU so extraction works without a GPU
state_dict = torch.load('model.pth', map_location='cpu')

# Convert each tensor to nested Python lists for JSON serialization
weights = {}
for key, tensor in state_dict.items():
    weights[key] = tensor.numpy().tolist()

with open('weights.json', 'w') as f:
    json.dump(weights, f)
```

Run the reference inference in Python (useful for checking the C/C++ reimplementation against known outputs):

```python
model.eval()
with torch.no_grad():
    output = model(input_tensor)
    prediction = output.argmax().item()
```

Compile the standalone CLI tool:

```sh
g++ -o cli_tool main.cpp lodepng.cpp cJSON.c -std=c++11 -lm
```

Here `-std=c++11` selects the C++11 standard, `-lm` links the math library, and `map_location='cpu'` in the extraction script ensures the weights load on machines without a GPU.
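The forward pass that the C++ tool reimplements is, at its core, repeated matrix-vector products with activations. Below is a minimal sketch of one fully connected layer followed by ReLU and a final argmax; the function names and the row-major weight layout (matching a flattened PyTorch `nn.Linear` weight from `weights.json`) are illustrative assumptions, not part of the skill itself:

```cpp
#include <vector>
#include <cstddef>
#include <algorithm>

// One fully connected layer: y = W * x + b. W is stored row-major as
// out_features rows of in_features columns, which is how a PyTorch
// nn.Linear weight flattens when dumped to JSON (assumption).
std::vector<float> linear(const std::vector<float>& W,
                          const std::vector<float>& b,
                          const std::vector<float>& x) {
    std::size_t out_features = b.size();
    std::size_t in_features = x.size();
    std::vector<float> y(out_features);
    for (std::size_t i = 0; i < out_features; ++i) {
        float acc = b[i];
        for (std::size_t j = 0; j < in_features; ++j)
            acc += W[i * in_features + j] * x[j];
        y[i] = acc;
    }
    return y;
}

// In-place ReLU activation
void relu(std::vector<float>& v) {
    for (float& f : v) f = std::max(f, 0.0f);
}

// Index of the largest logit, mirroring output.argmax() in the
// Python reference inference
std::size_t argmax(const std::vector<float>& v) {
    return std::max_element(v.begin(), v.end()) - v.begin();
}
```

Chaining `linear` and `relu` per layer, then `argmax` on the final logits, reproduces the Python reference prediction without any Python dependency; comparing both outputs on the same input image is a straightforward way to validate the port.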