Install and configure Ollama for local embeddings with GrepAI. Use this skill when setting up private, local embedding generation.
Install the skill:

```shell
npx skill4agent add yoanbernabeu/grepai-skills grepai-ollama-setup
```

## Why Local Embeddings?

| Benefit | Description |
|---|---|
| 🔒 Privacy | Code never leaves your machine |
| 💰 Free | No API costs |
| ⚡ Fast | Local processing, no network latency |
| 🔌 Offline | Works without internet |
## Install Ollama

### macOS

```shell
# Install Ollama via Homebrew
brew install ollama

# Start the Ollama service
ollama serve
```

Alternatively, download the macOS `.dmg` installer from the Ollama website.

### Linux

```shell
# One-line installer
curl -fsSL https://ollama.com/install.sh | sh

# Start the service
ollama serve
```

## Pull an Embedding Model

```shell
# Download the recommended model (768 dimensions)
ollama pull nomic-embed-text
```

Other options:

```shell
# Multilingual support (better for non-English code/comments)
ollama pull nomic-embed-text-v2-moe

# Larger, more accurate
ollama pull bge-m3

# Maximum quality
ollama pull mxbai-embed-large
```

| Model | Dimensions | Size | Best For |
|---|---|---|---|
| `nomic-embed-text` | 768 | 274 MB | General code search |
| `nomic-embed-text-v2-moe` | 768 | 500 MB | Multilingual codebases |
| `bge-m3` | 1024 | 1.2 GB | Large codebases |
| `mxbai-embed-large` | 1024 | 670 MB | Maximum accuracy |
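Whichever model you choose, switching GrepAI over is just a config change; a minimal sketch, assuming the same `.grepai/config.yaml` schema shown in this guide, with `bge-m3` swapped in:

```yaml
# .grepai/config.yaml — using the larger bge-m3 model (1024 dimensions)
embedder:
  provider: ollama
  model: bge-m3
  endpoint: http://localhost:11434
```

Note that changing models changes the embedding dimensions (768 → 1024 here), so an existing index has to be rebuilt.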
## Verify the Setup

```shell
# Check if the Ollama server is responding
curl http://localhost:11434/api/tags
# Expected output: JSON listing the available models

# List installed models
ollama list
# Output:
# NAME                     ID         SIZE    MODIFIED
# nomic-embed-text:latest  abc123...  274 MB  2 hours ago

# Quick test (should return an embedding vector)
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "function hello() { return world; }"
}'
```
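The quick test above returns a JSON object containing an `embedding` array; counting its entries confirms the model produces vectors of the expected dimensions. A small sketch (the `embedding_dim` helper is hypothetical and assumes `python3` is available):

```shell
# Print the number of dimensions in an Ollama /api/embeddings response.
# The JSON response is passed as the first argument.
embedding_dim() {
  printf '%s' "$1" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)["embedding"]))'
}

# With a running server you would feed it the live response, e.g.:
# embedding_dim "$(curl -s http://localhost:11434/api/embeddings \
#   -d '{"model":"nomic-embed-text","prompt":"hello"}')"
# For nomic-embed-text this should print 768.
```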
## Configure GrepAI

```yaml
# .grepai/config.yaml
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://localhost:11434
```

Then run `grepai init` so GrepAI indexes the project with the `nomic-embed-text` embedder.

## Keep Ollama Running

```shell
# Run in the current terminal (see logs)
ollama serve

# Or run in the background with nohup
nohup ollama serve &
```
```shell
# Or as a systemd service (Linux)
sudo systemctl enable ollama
sudo systemctl start ollama

# Check if running
pgrep -f ollama

# Or test the API
curl -s http://localhost:11434/api/tags | head -1
```

## Troubleshooting

- `connection refused`: the Ollama server is not running; start it with `ollama serve`.
- Model not found: download it with `ollama pull nomic-embed-text` (the same applies to `bge-m3` and `mxbai-embed-large`).
- Model not loaded: check running models with `ollama ps`, then restart `ollama serve`.

On success, the setup reports:

```
✅ Ollama Setup Complete
Ollama Version: 0.1.x
Endpoint: http://localhost:11434
Model: nomic-embed-text (768 dimensions)
Status: Running
```

GrepAI is ready to use with local embeddings. Your code never leaves your machine.
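The checks in this guide can be folded into a single script to re-run any time; a minimal sketch, where the `check_ollama` helper and the `OLLAMA_ENDPOINT`/`OLLAMA_MODEL` overrides are assumptions of this example, not GrepAI features:

```shell
# Health check for the local Ollama setup described in this guide.
ENDPOINT="${OLLAMA_ENDPOINT:-http://localhost:11434}"
MODEL="${OLLAMA_MODEL:-nomic-embed-text}"

check_ollama() {
  # 1. Is the server answering?
  if ! curl -sf "$ENDPOINT/api/tags" >/dev/null 2>&1; then
    echo "Ollama not reachable at $ENDPOINT (is 'ollama serve' running?)"
    return 1
  fi
  # 2. Is the embedding model installed?
  if ! curl -sf "$ENDPOINT/api/tags" | grep -q "$MODEL"; then
    echo "Model $MODEL not installed (run: ollama pull $MODEL)"
    return 1
  fi
  echo "Ollama OK: $MODEL available at $ENDPOINT"
}
```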