Netryx Street-Level Geolocation

Skill by ara.so — Daily 2026 Skills collection.
Netryx is a locally-hosted geolocation engine that identifies the precise GPS coordinates of any street-level photograph. It crawls street-view panoramas into a searchable index, then matches a query image against that index using a three-stage computer vision pipeline: global retrieval (CosPlace), local feature extraction (ALIKED/DISK), and deep feature matching (LightGlue). Sub-50m accuracy, no internet presence of the target image required, runs entirely on local hardware.
Installation
```bash
git clone https://github.com/sparkyniner/Netryx-OpenSource-Next-Gen-Street-Level-Geolocation.git
cd Netryx-OpenSource-Next-Gen-Street-Level-Geolocation
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
pip install git+https://github.com/cvg/LightGlue.git  # required
pip install kornia  # optional: Ultra Mode (LoFTR)
```

Optional: Gemini API key for AI Coarse mode
```bash
export GEMINI_API_KEY="your_key_here"  # never hard-code; use an env var
```

macOS tkinter fix (blank GUI)
```bash
brew install python-tk@3.11  # match your Python version
```

Launch the GUI
```bash
python test_super.py
```

All indexing and searching is driven from this single GUI entry point.
Project Structure
```
netryx/
├── test_super.py       # Main app: GUI + indexing + search pipeline
├── cosplace_utils.py   # CosPlace model loading & descriptor extraction
├── build_index.py      # Standalone high-performance index builder
├── requirements.txt
├── cosplace_parts/     # Raw embedding chunks written during indexing
└── index/
    ├── cosplace_descriptors.npy  # All 512-dim descriptors (compiled)
    └── metadata.npz              # Coordinates, headings, panoid IDs
```

Core Workflow
Step 1 — Create an Index
Index an area before searching. The GUI does this interactively; you can also drive it programmatically.
GUI steps:
- Select Create mode
- Enter center latitude, longitude
- Set radius (km) and grid resolution (default 300)
- Click Create Index
Indexing time reference:
| Radius | ~Panoramas | Time (M2 Max) | Index size |
|---|---|---|---|
| 0.5 km | 500 | 30 min | ~60 MB |
| 1 km | 2 000 | 1–2 h | ~250 MB |
| 5 km | 30 000 | 8–12 h | ~3 GB |
| 10 km | 100 000 | 24–48 h | ~7 GB |
Indexing is incremental — safe to interrupt and resume.
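The interrupt/resume behaviour comes from writing one embedding chunk per grid cell and skipping chunks that already exist on disk. A minimal sketch of the same pattern (the chunk naming inside `cosplace_parts/` is an assumption, not the project's actual scheme):

```python
import numpy as np
from pathlib import Path

PARTS_DIR = Path("cosplace_parts")
PARTS_DIR.mkdir(exist_ok=True)

def index_cell(cell_id, descriptors, lats, lons):
    """Write one embedding chunk; skip cells a previous run already finished."""
    out = PARTS_DIR / f"chunk_{cell_id}.npz"  # hypothetical chunk naming
    if out.exists():
        return False  # resumed run: this cell is done, skip it
    np.savez(out, descriptors=descriptors, latitudes=lats, longitudes=lons)
    return True

desc = np.zeros((4, 512), dtype=np.float32)  # stand-in CosPlace descriptors
print(index_cell("grid_0_0", desc, np.zeros(4), np.zeros(4)))  # True  (written)
print(index_cell("grid_0_0", desc, np.zeros(4), np.zeros(4)))  # False (skipped)
```

Because each cell is written atomically as its own file, killing the process loses at most one in-flight chunk.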
Step 2 — Search
GUI steps:
- Select Search mode
- Upload a street-level photo
- Choose Manual (provide center coords + radius) or AI Coarse (Gemini infers region)
- Click Run Search → Start Full Search
- Result: GPS pin on map + confidence score
Programmatic Usage
Extract a CosPlace descriptor
```python
from cosplace_utils import load_cosplace_model, extract_descriptor
from PIL import Image
import torch

device = torch.device("cuda" if torch.cuda.is_available() else
                      "mps" if torch.backends.mps.is_available() else "cpu")
model = load_cosplace_model(device=device)
img = Image.open("query.jpg").convert("RGB")
descriptor = extract_descriptor(model, img, device=device)  # shape: (512,)
print("Descriptor shape:", descriptor.shape)
```

Search the index against a query descriptor
```python
import numpy as np
from math import radians, sin, cos, sqrt, atan2

# Load compiled index
descriptors = np.load("index/cosplace_descriptors.npy")  # (N, 512)
meta = np.load("index/metadata.npz", allow_pickle=True)
latitudes = meta["latitudes"]    # (N,)
longitudes = meta["longitudes"]  # (N,)
headings = meta["headings"]      # (N,)
panoids = meta["panoids"]        # (N,)

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

def search_index(query_descriptor, center_lat, center_lon,
                 radius_km=2.0, top_k=500):
    """Return top-k candidate indices ranked by cosine similarity within radius."""
    # Radius mask
    dists = np.array([
        haversine_km(center_lat, center_lon, lat, lon)
        for lat, lon in zip(latitudes, longitudes)
    ])
    mask = dists <= radius_km
    # Cosine similarity (descriptors assumed L2-normalised)
    q = query_descriptor / (np.linalg.norm(query_descriptor) + 1e-8)
    sims = descriptors[mask] @ q  # cosine scores
    local_indices = np.where(mask)[0]
    ranked = local_indices[np.argsort(sims)[::-1]]  # descending
    return ranked[:top_k]

candidates = search_index(descriptor, center_lat=48.8566,
                          center_lon=2.3522, radius_km=1.0)
print(f"Top candidate panoid: {panoids[candidates[0]]}")
print(f"  lat={latitudes[candidates[0]]:.6f}  lon={longitudes[candidates[0]]:.6f}")
```

Build / rebuild the compiled index from parts
```python
import subprocess

# Run after adding new cosplace_parts/*.npz chunks
subprocess.run(["python", "build_index.py"], check=True)
```

Or directly from Python, if `build_index.py` exposes a function:

```python
import importlib.util, pathlib

spec = importlib.util.spec_from_file_location("build_index",
                                              pathlib.Path("build_index.py"))
build_index = importlib.util.module_from_spec(spec)
spec.loader.exec_module(build_index)
build_index.build()  # adjust to the actual function name in the file
```

Pipeline Stages in Detail
Stage 1 — Global Retrieval (CosPlace)
- Extracts 512-dim descriptor from query and its horizontal flip
- Both descriptors compared via cosine similarity against the index
- Haversine radius filter restricts candidates to the target area
- Returns top 500–1 000 candidates
- Runs in < 1 second (single matrix multiply)
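Scoring the query and its horizontal flip amounts to keeping the better of two cosine scores per index entry before taking the top-K. A numpy sketch (taking the element-wise max is an assumption about how the two views are combined; random vectors stand in for real CosPlace descriptors):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

index_desc = l2norm(rng.standard_normal((1000, 512)))  # stand-in index (N, 512)
q = l2norm(rng.standard_normal(512))        # query descriptor
q_flip = l2norm(rng.standard_normal(512))   # descriptor of the flipped query

# One matrix multiply per view; keep the better score per panorama
sims = np.maximum(index_desc @ q, index_desc @ q_flip)
top = np.argsort(sims)[::-1][:500]  # top-500 candidate indices, best first
print(top.shape)  # (500,)
```

The whole stage is two matrix multiplies and a sort, which is why it finishes in under a second even for large indexes.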
Stage 2 — Geometric Verification (ALIKED/DISK + LightGlue)
For each candidate:
1. Download panorama tiles from Street View (8 tiles, stitched)
2. Crop rectilinear view at indexed heading
3. Generate multi-FOV crops: 70°, 90°, 110°
4. Extract keypoints:
   - CUDA → ALIKED (1024 keypoints)
   - MPS/CPU → DISK (768 keypoints)
5. LightGlue deep feature matching vs. query keypoints
6. RANSAC geometric verification → inlier count

Best match = the candidate with the most RANSAC-verified inliers.
Processing 300–500 candidates takes 2–5 minutes depending on hardware.
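The inlier count in step 6 is the number of keypoint matches consistent with a single geometric model. A self-contained toy version using a 2-D translation model (the real pipeline fits a proper homography/epipolar model; this only illustrates the sample-score-keep-best loop):

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_translation_inliers(pts_a, pts_b, iters=200, tol=3.0):
    """Toy RANSAC: count matches consistent with one 2-D translation."""
    best = 0
    for _ in range(iters):
        i = rng.integers(len(pts_a))     # minimal sample: one match
        t = pts_b[i] - pts_a[i]          # hypothesised translation
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        best = max(best, int((err < tol).sum()))
    return best

# 80 true matches offset by (10, -5) px, plus 20 random outlier matches
a = rng.uniform(0, 640, (100, 2))
b = a + np.array([10.0, -5.0])
b[80:] = rng.uniform(0, 640, (20, 2))
print(ransac_translation_inliers(a, b))  # ~80: the geometrically consistent matches
```

Outliers rarely agree on any single model, so the inlier count is a robust match score even when most raw LightGlue matches are wrong.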
Stage 3 — Refinement
- Heading refinement: top-15 candidates × ±45° at 15° steps × 3 FOVs
- Spatial consensus: cluster matches into 50 m cells; prefer clustered matches
- Confidence score: geographic clustering tightness + uniqueness ratio
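The spatial-consensus idea can be sketched by snapping candidate coordinates onto a ~50 m grid and voting for the densest cell (a simplification, assuming unweighted votes; Netryx also factors in match quality):

```python
import numpy as np
from collections import Counter

CELL_M = 50.0
M_PER_DEG_LAT = 111_320.0  # metres per degree of latitude (approx.)

def consensus_cell(lats, lons):
    """Bin candidates into ~50 m cells; return the most populated cell and its votes."""
    m_per_deg_lon = M_PER_DEG_LAT * np.cos(np.radians(np.mean(lats)))
    cells = [(int(lat * M_PER_DEG_LAT // CELL_M), int(lon * m_per_deg_lon // CELL_M))
             for lat, lon in zip(lats, lons)]
    cell, votes = Counter(cells).most_common(1)[0]
    return cell, votes

# 8 candidates clustered within a few metres, 2 scattered kilometres away
lats = [48.85660, 48.85661, 48.85662, 48.85660, 48.85663,
        48.85661, 48.85662, 48.85660, 48.90000, 48.80000]
lons = [2.35220, 2.35221, 2.35222, 2.35220, 2.35223,
        2.35221, 2.35222, 2.35220, 2.40000, 2.30000]
cell, votes = consensus_cell(lats, lons)
print(votes)  # 8 — the tight cluster wins
```

A tight cluster of independently verified matches is much stronger evidence than one isolated high-inlier match, which is what the confidence score rewards.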
Ultra Mode (optional, slower)
Enable the Ultra Mode checkbox in the GUI for difficult images (night, motion blur, low texture).
What it adds:
- LoFTR — detector-free dense matching (handles blur/low-contrast)
- Descriptor hopping — re-searches index using a descriptor from the matched panorama if initial match is weak (< 50 inliers)
- Neighbourhood expansion — searches all panoramas within 100 m of the best match
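Descriptor hopping is essentially one extra retrieval round seeded by the best panorama's own descriptor. A control-flow sketch with toy stand-ins (`search`, `verify`, and the `Match` record are hypothetical, not Netryx's API):

```python
from dataclasses import dataclass

@dataclass
class Match:
    inliers: int
    descriptor: str  # stand-in for the matched panorama's 512-dim descriptor

def geolocate_with_hopping(query_desc, search, verify, weak=50):
    """One retrieval round; if the best match is weak (< `weak` inliers),
    re-search the index seeded with the matched panorama's own descriptor."""
    best = verify(search(query_desc))
    if best.inliers < weak:
        hopped = verify(search(best.descriptor))  # the "hop"
        if hopped.inliers > best.inliers:
            best = hopped
    return best

# Toy stand-ins: the first search is weak, the hopped search is strong
table = {"q": Match(12, "p1"), "p1": Match(140, "p2")}
result = geolocate_with_hopping("q", search=lambda d: d, verify=lambda d: table[d])
print(result.inliers)  # 140 — the hop recovered a strong match
```

The hop helps when the query photo looks unlike any indexed view, but a nearby panorama (found via a weak match) resembles the rest of the index much better.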
Configuration Reference
All configuration is passed through the GUI or by modifying constants in `test_super.py`. Key parameters:

| Parameter | Default | Effect |
|---|---|---|
| Grid resolution | 300 | Panorama density during indexing; don't change |
| Radius (search) | user-set | Haversine filter radius in km |
| Top-K candidates | 500–1000 | Candidates passed to Stage 2 |
| Heading steps | ±45° / 15° | Refinement sweep range |
| Spatial cell size | 50 m | Consensus clustering granularity |
| Neighbourhood expansion | 100 m | Ultra Mode only |
| Weak match threshold | 50 inliers | Triggers descriptor hopping in Ultra Mode |
Hardware & Device Selection
Netryx auto-selects the device; mirror this logic in custom scripts:

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")  # ALIKED, full speed
elif torch.backends.mps.is_available():
    device = torch.device("mps")   # Mac M-series, DISK
else:
    device = torch.device("cpu")   # works, but significantly slower
```

**Minimum requirements:** 4 GB GPU VRAM, 8 GB RAM, Python 3.9+
**Recommended:** NVIDIA GPU with 8 GB+ VRAM (CUDA) or Apple M1+ (MPS)

---

Index Design Patterns
Multi-city indexing
All cities share one unified index. The radius filter at search time isolates results:
```python
# Index Paris, London, Tokyo — all into the same index/
# Then search by specifying center + radius:

# Paris only
candidates = search_index(desc, center_lat=48.8566, center_lon=2.3522, radius_km=5)

# London only
candidates = search_index(desc, center_lat=51.5074, center_lon=-0.1278, radius_km=5)
```

Incremental indexing
New `cosplace_parts/*.npz` files are appended automatically during indexing.
Rebuild the compiled index after adding new areas:

```bash
python build_index.py
```

Troubleshooting
GUI appears blank on macOS
```bash
brew install python-tk@3.11  # match your exact Python version
```

`import lightglue` fails

```bash
pip install git+https://github.com/cvg/LightGlue.git
```

`import kornia` fails (Ultra Mode unavailable)

```bash
pip install kornia
```

CUDA out of memory
- Reduce `top_k` candidates (e.g. 300 instead of 500)
- Switch to DISK instead of ALIKED by forcing the MPS/CPU device
- Reduce the FOV count if modifying the pipeline directly
Indexing stops / resumes incorrectly
The index writes incrementally to `cosplace_parts/*.npz`. Delete corrupted files in that folder and re-run; completed chunks are skipped.

Low confidence score / wrong result
- Enable Ultra Mode for degraded images
- Increase search radius if location estimate is uncertain
- Use a higher grid resolution index for the target area (re-index)
- Try AI Coarse mode if manual center coordinates are uncertain
Gemini AI Coarse mode not available
```bash
export GEMINI_API_KEY="your_key_here"

# Verify it is set:
echo $GEMINI_API_KEY
```

Key Dependencies
| Package | Role |
|---|---|
| `torch` | Model inference backbone |
| `lightglue` | Deep feature matching |
| `kornia` | LoFTR dense matching (Ultra Mode) |
| `numpy` | Index storage and cosine similarity |
| `Pillow` | Image loading and preprocessing |
| `tkinter` | GUI (stdlib, may need upgrade on macOS) |
Quick-Start Checklist
- [ ] Clone the repo and create a venv
- [ ] `pip install -r requirements.txt`
- [ ] `pip install git+https://github.com/cvg/LightGlue.git`
- [ ] (optional) `pip install kornia`
- [ ] (optional) `export GEMINI_API_KEY=...`
- [ ] `python test_super.py`
- [ ] Create mode → set coords + radius → Create Index (wait for completion)
- [ ] `python build_index.py` (if not auto-built)
- [ ] Search mode → upload photo → Manual/AI Coarse → Run Search
- [ ] Read the GPS result + confidence score on the map