Edge-optimized RAG memory system for OpenClaw with semantic search. Automatically loads memory files, provides intelligent recall, and enhances conversations with relevant context. Perfect for Jetson and edge devices (<10MB memory).
```bash
npx skill4agent add aaaaqwq/claude-code-skills openclaw-memory-enhancer
```

| Capability | Description |
|---|---|
| 🔍 Semantic Search | Vector similarity search, understanding intent not just keywords |
| 📂 Auto Load | Automatically reads all memory files from the knowledge base directory |
| 💡 Smart Recall | Finds relevant historical memory during conversations |
| 🔗 Memory Graph | Builds connections between related memories |
| 💾 Local Storage | 100% local, no cloud, complete privacy |
| 🚀 Edge Optimized | <10MB memory, runs on Jetson/Raspberry Pi |
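The README does not spell out how a stdlib-only semantic search can fit in a 128-dimension vector. One common approach, sketched here purely for illustration (this is not the skill's actual implementation), is feature hashing plus cosine similarity:

```python
import hashlib
import math

DIM = 128  # matches the edge version's advertised vector size

def embed(text: str) -> list[float]:
    """Hash each keyword into one of DIM buckets (feature hashing)."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        if len(word) < 2:  # skip very short tokens
            continue
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 for empty vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["user prefers dark mode", "voice-call plugin setup guide"]
query = "dark mode preference"
scores = [cosine(embed(query), embed(d)) for d in docs]
best = max(range(len(docs)), key=lambda i: scores[i])
print(docs[best])
```

Because the query shares tokens only with the first document, it scores highest despite no exact-phrase match, which is the "intent, not just keywords" behavior described above in miniature.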
| Task | Command (Edge Version) | Command (Standard Version) |
|---|---|---|
| Load memories | `python3 memory_enhancer_edge.py --load` | `python3 memory_enhancer.py --load` |
| Search | `python3 memory_enhancer_edge.py --search "query"` | `python3 memory_enhancer.py --search "query"` |
| Add memory | `python3 memory_enhancer_edge.py --add "content"` | `python3 memory_enhancer.py --add "content"` |
| Export | `python3 memory_enhancer_edge.py --export` | `python3 memory_enhancer.py --export` |
| Stats | `python3 memory_enhancer_edge.py --stats` | `python3 memory_enhancer.py --stats` |
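The flags in the table map onto a conventional argparse interface. The sketch below is hypothetical (the skill's real argument handling may differ) and just shows how such a CLI is typically wired:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI mirroring the flags listed in the table above."""
    parser = argparse.ArgumentParser(description="OpenClaw memory enhancer CLI")
    parser.add_argument("--load", action="store_true", help="load existing OpenClaw memories")
    parser.add_argument("--search", metavar="QUERY", help="semantic search over stored memories")
    parser.add_argument("--add", metavar="CONTENT", help="store a new memory")
    parser.add_argument("--stats", action="store_true", help="show memory statistics")
    parser.add_argument("--export", action="store_true", help="export memories to Markdown")
    return parser

args = build_parser().parse_args(["--search", "voice-call plugin setup"])
print(args.search)
```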
Run the edge version (zero dependencies, Python stdlib only):

```bash
python3 memory_enhancer_edge.py --load
```

Or install the dependencies and run the standard version:

```bash
pip install sentence-transformers numpy
python3 memory_enhancer.py --load
```

Install via ClawHub:

```bash
clawhub install openclaw-memory-enhancer
```

Or clone manually:

```bash
git clone https://github.com/henryfcb/openclaw-memory-enhancer.git \
  ~/.openclaw/skills/openclaw-memory-enhancer
```

CLI examples:

```bash
# Load existing OpenClaw memories
cd ~/.openclaw/skills/openclaw-memory-enhancer
python3 memory_enhancer_edge.py --load

# Search for memories
python3 memory_enhancer_edge.py --search "voice-call plugin setup"

# Add a new memory
python3 memory_enhancer_edge.py --add "User prefers dark mode"

# Show statistics
python3 memory_enhancer_edge.py --stats

# Export to Markdown
python3 memory_enhancer_edge.py --export
```

Python API:

```python
from memory_enhancer_edge import MemoryEnhancerEdge

# Initialize
memory = MemoryEnhancerEdge()

# Load existing memories
memory.load_openclaw_memory()

# Search for relevant memories
results = memory.search_memory("AI trends report", top_k=3)
for r in results:
    print(f"[{r['similarity']:.2f}] {r['content'][:100]}...")

# Recall context for a conversation; returns formatted memory context
context = memory.recall_for_prompt("Help me check billing")

# Add a new memory
memory.add_memory(
    content="User prefers direct results",
    source="chat",
    memory_type="preference",
)
```

Agent integration:

```python
# In your OpenClaw agent
from skills.openclaw_memory_enhancer.memory_enhancer_edge import MemoryEnhancerEdge

class EnhancedAgent:
    def __init__(self):
        self.memory = MemoryEnhancerEdge()
        self.memory.load_openclaw_memory()

    def process(self, user_input: str) -> str:
        # 1. Recall relevant memories
        memory_context = self.memory.recall_for_prompt(user_input)

        # 2. Enhance the prompt with that context
        enhanced_prompt = f"""
{memory_context}

User: {user_input}
"""

        # 3. Call the LLM with the enhanced context
        response = call_llm(enhanced_prompt)
        return response
```

| Type | Description | Example |
|---|---|---|
| Daily memory files | | |
| Capability records | Skills, tools | |
| Core conventions | Important rules | |
| Question & Answer | | Q: How to... A: You should... |
| Direct instructions | | "Remember: always do X" |
| Technical solutions | Step-by-step guides | |
| User preferences | | "User likes dark mode" |
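The table above suggests these types can often be inferred from surface cues in the content. A simple heuristic tagger along those lines (purely illustrative; the skill's real classification logic, if any, may differ, and the type names here are hypothetical):

```python
def classify_memory(content: str) -> str:
    """Heuristically map raw content to a memory type using surface cues."""
    lowered = content.strip().lower()
    if lowered.startswith("remember:"):
        return "instruction"               # "Remember: always do X"
    if "q:" in lowered and "a:" in lowered:
        return "qa"                        # question-and-answer pairs
    if "prefers" in lowered or "likes" in lowered:
        return "preference"                # "User likes dark mode"
    if any(cue in lowered for cue in ("step 1", "first,")):
        return "solution"                  # step-by-step guides
    return "note"                          # fallback for everything else

print(classify_memory("Remember: always do X"))  # instruction
```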
Knowledge base directory: `~/.openclaw/workspace/knowledge-base/`

| Spec | Edge Version | Standard Version |
|---|---|---|
| Vector Dimensions | 128 | 384 |
| Memory Usage | < 10MB | 50-100MB |
| Dependencies | None (Python stdlib) | sentence-transformers, numpy |
| Storage Format | JSON | NumPy + JSON |
| Max Memories | 1000 (configurable) | |
| Model Size | | ~50MB download |
| Query Latency | < 100ms | < 50ms |

Configuration:

```python
self.config = {
    "vector_dim": 128,        # Vector dimensions
    "max_memory_size": 1000,  # Max number of memories
    "chunk_size": 500,        # Content chunk size
    "min_keyword_len": 2,     # Minimum keyword length
}
```

If a search returns too few results, lower the similarity threshold or raise `top_k`:

```python
# Lower the threshold
results = memory.search_memory(query, threshold=0.2)  # Default 0.3

# Increase top_k
results = memory.search_memory(query, top_k=10)  # Default 5
```

To store more memories, raise `max_memory_size`:

```python
self.config["max_memory_size"] = 5000  # Increase from 1000
```
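The README does not say what happens when the store exceeds `max_memory_size`. One plausible eviction policy, sketched here as an assumption rather than the actual implementation, is to drop the oldest entries first, which a bounded deque gives for free:

```python
from collections import deque

MAX_MEMORY_SIZE = 3  # small for illustration; the skill defaults to 1000

# deque with maxlen silently evicts the oldest item when full
memories: deque = deque(maxlen=MAX_MEMORY_SIZE)

for i in range(5):
    memories.append({"id": i, "content": f"memory {i}"})

print([m["id"] for m in memories])  # [2, 3, 4] -- entries 0 and 1 were evicted
```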