Configure Ollama as the embedding provider for GrepAI. Use this skill for local, private embedding generation.

```bash
npx skill4agent add yoanbernabeu/grepai-skills grepai-embeddings-ollama
```

## Why Ollama?

| Advantage | Description |
|---|---|
| 🔒 Privacy | Code never leaves your machine |
| 💰 Free | No API costs or usage limits |
| ⚡ Speed | No network latency |
| 🔌 Offline | Works without internet |
| 🔧 Control | Choose your model |
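Whichever model you choose, semantic search ultimately compares embedding vectors by similarity. A minimal cosine-similarity sketch (illustrative only, not GrepAI's internal implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
```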
## Installation

```bash
# Install Ollama
brew install ollama                            # macOS
# or
curl -fsSL https://ollama.com/install.sh | sh  # Linux

# Start Ollama
ollama serve

# Download the default embedding model
ollama pull nomic-embed-text
```

## Configuration

Basic setup:

```yaml
# .grepai/config.yaml
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://localhost:11434
```

Pointing at a remote server:

```yaml
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://192.168.1.100:11434  # Remote Ollama server
```

With explicit dimensions:

```yaml
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://localhost:11434
  dimensions: 768  # Usually auto-detected
```

## Models

### nomic-embed-text

```bash
ollama pull nomic-embed-text
```

| Property | Value |
|---|---|
| Dimensions | 768 |
| Size | ~274 MB |
| Speed | Fast |
| Quality | Excellent for code |
| Language | English-optimized |
```yaml
embedder:
  provider: ollama
  model: nomic-embed-text
```

### nomic-embed-text-v2-moe

```bash
ollama pull nomic-embed-text-v2-moe
```

| Property | Value |
|---|---|
| Dimensions | 768 |
| Size | ~500 MB |
| Speed | Medium |
| Quality | Excellent |
| Language | Multilingual |
```yaml
embedder:
  provider: ollama
  model: nomic-embed-text-v2-moe
```

### bge-m3

```bash
ollama pull bge-m3
```

| Property | Value |
|---|---|
| Dimensions | 1024 |
| Size | ~1.2 GB |
| Speed | Slower |
| Quality | Very high |
| Language | Multilingual |
```yaml
embedder:
  provider: ollama
  model: bge-m3
  dimensions: 1024
```

### mxbai-embed-large

```bash
ollama pull mxbai-embed-large
```

| Property | Value |
|---|---|
| Dimensions | 1024 |
| Size | ~670 MB |
| Speed | Medium |
| Quality | Highest |
| Language | English |
```yaml
embedder:
  provider: ollama
  model: mxbai-embed-large
  dimensions: 1024
```

## Model Comparison

| Model | Dims | Size | Speed | Quality | Use Case |
|---|---|---|---|---|---|
| nomic-embed-text | 768 | 274MB | ⚡⚡⚡ | ⭐⭐⭐ | General use |
| nomic-embed-text-v2-moe | 768 | 500MB | ⚡⚡ | ⭐⭐⭐⭐ | Multilingual |
| bge-m3 | 1024 | 1.2GB | ⚡ | ⭐⭐⭐⭐⭐ | Large codebases |
| mxbai-embed-large | 1024 | 670MB | ⚡⚡ | ⭐⭐⭐⭐⭐ | Maximum accuracy |
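The dimension column translates directly into index storage: each indexed chunk stores one vector of that many values. A back-of-the-envelope estimate (assuming float32 values; GrepAI's actual index format may add overhead, and `index_size_mb` is an illustrative helper, not a GrepAI function):

```python
def index_size_mb(num_chunks: int, dims: int, bytes_per_value: int = 4) -> float:
    """Rough raw storage for num_chunks float32 embedding vectors, in MB."""
    return num_chunks * dims * bytes_per_value / 1_000_000

# For a 50,000-chunk codebase:
#   768-dim models  -> 153.6 MB of raw vectors
#   1024-dim models -> 204.8 MB of raw vectors
```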
## Memory Requirements

| Model | RAM Required |
|---|---|
| nomic-embed-text | ~500 MB |
| nomic-embed-text-v2-moe | ~800 MB |
| bge-m3 | ~1.5 GB |
| mxbai-embed-large | ~1 GB |
## Managing the Ollama Server

Check which models are currently loaded:

```bash
ollama ps
```

```bash
# Keep model loaded indefinitely
curl http://localhost:11434/api/generate -d '{
  "model": "nomic-embed-text",
  "keep_alive": -1
}'
```

List available models:

```bash
curl http://localhost:11434/api/tags
# or
ollama list
```

Test embedding generation directly:

```bash
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "function authenticate(user, password)"
}'
```

## Running as a Service

On Linux with systemd:

```bash
# Enable service
sudo systemctl enable ollama

# Start service
sudo systemctl start ollama

# Check status
sudo systemctl status ollama
```

Without systemd, run in the background:

```bash
nohup ollama serve > /dev/null 2>&1 &
```

To serve other machines on the network:

```bash
# Allow remote connections
OLLAMA_HOST=0.0.0.0 ollama serve
```

Then point GrepAI at the server:

```yaml
# .grepai/config.yaml
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://server-ip:11434
```

## Troubleshooting

If GrepAI cannot reach Ollama:

```bash
# Start Ollama
ollama serve
```

If the model is missing:

```bash
# Pull the model
ollama pull nomic-embed-text
```

Confirm `nomic-embed-text` appears in `ollama ps`, then rebuild the index:

```bash
rm .grepai/index.gob
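The `curl` check against `/api/embeddings` can also be scripted. A minimal Python client sketch (assumes Ollama on `localhost:11434`; `build_payload` and `embed` are illustrative helper names, not part of GrepAI or Ollama):

```python
import json
from urllib import request

def build_payload(model: str, prompt: str) -> dict:
    """JSON body for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": prompt}

def embed(prompt: str, model: str = "nomic-embed-text",
          endpoint: str = "http://localhost:11434") -> list:
    """Return the embedding vector for `prompt` from a running Ollama server."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(f"{endpoint}/api/embeddings", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]
```

Calling `len(embed("function authenticate(user, password)"))` against a live server should match the model's dimension (768 for `nomic-embed-text`).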
```bash
grepai watch
```

With `nomic-embed-text` configured, a successful setup reports:

```
✅ Ollama Embedding Provider Configured

Provider: Ollama
Model: nomic-embed-text
Endpoint: http://localhost:11434
Dimensions: 768 (auto-detected)
Status: Connected

Model Info:
- Size: 274 MB
- Loaded: Yes
- GPU: Apple Metal
```