netryx-street-level-geolocation

Netryx Street-Level Geolocation


Skill by ara.so — Daily 2026 Skills collection.
Netryx is a locally-hosted geolocation engine that identifies the precise GPS coordinates of any street-level photograph. It crawls street-view panoramas into a searchable index, then matches a query image against that index using a three-stage computer vision pipeline: global retrieval (CosPlace), local feature extraction (ALIKED/DISK), and deep feature matching (LightGlue). Sub-50m accuracy, no internet presence of the target image required, runs entirely on local hardware.


Installation


```bash
git clone https://github.com/sparkyniner/Netryx-OpenSource-Next-Gen-Street-Level-Geolocation.git
cd Netryx-OpenSource-Next-Gen-Street-Level-Geolocation

python3 -m venv venv
source venv/bin/activate          # Windows: venv\Scripts\activate

pip install -r requirements.txt
pip install git+https://github.com/cvg/LightGlue.git   # required
pip install kornia                                     # optional: Ultra Mode (LoFTR)
```

Optional: Gemini API key for AI Coarse mode


```bash
export GEMINI_API_KEY="your_key_here"   # never hard-code; use env var
```

macOS tkinter fix (blank GUI)


```bash
brew install python-tk@3.11   # match your Python version
```


Launch the GUI


```bash
python test_super.py
```

All indexing and searching is driven from this single GUI entry point.


Project Structure


```
netryx/
├── test_super.py           # Main app: GUI + indexing + search pipeline
├── cosplace_utils.py       # CosPlace model loading & descriptor extraction
├── build_index.py          # Standalone high-performance index builder
├── requirements.txt
├── cosplace_parts/         # Raw embedding chunks written during indexing
└── index/
    ├── cosplace_descriptors.npy   # All 512-dim descriptors (compiled)
    └── metadata.npz               # Coordinates, headings, panoid IDs
```


Core Workflow


Step 1 — Create an Index


Index an area before searching. The GUI does this interactively; you can also drive it programmatically.

GUI steps:
  1. Select Create mode
  2. Enter center latitude, longitude
  3. Set radius (km) and grid resolution (default 300)
  4. Click Create Index

Indexing time reference:

| Radius | ~Panoramas | Time (M2 Max) | Index size |
|--------|------------|---------------|------------|
| 0.5 km | 500        | 30 min        | ~60 MB     |
| 1 km   | 2 000      | 1–2 h         | ~250 MB    |
| 5 km   | 30 000     | 8–12 h        | ~3 GB      |
| 10 km  | 100 000    | 24–48 h       | ~7 GB      |

Indexing is incremental; it is safe to interrupt and resume.

Step 2 — Search


GUI steps:
  1. Select Search mode
  2. Upload a street-level photo
  3. Choose Manual (provide center coords + radius) or AI Coarse (Gemini infers region)
  4. Click Run Search → Start Full Search
  5. Result: GPS pin on map + confidence score


Programmatic Usage


Extract a CosPlace descriptor


```python
from cosplace_utils import load_cosplace_model, extract_descriptor
from PIL import Image
import torch

device = torch.device("cuda" if torch.cuda.is_available() else
                      "mps"  if torch.backends.mps.is_available() else "cpu")

model = load_cosplace_model(device=device)

img = Image.open("query.jpg").convert("RGB")
descriptor = extract_descriptor(model, img, device=device)  # shape: (512,)
print("Descriptor shape:", descriptor.shape)
```

Search the index against a query descriptor


```python
import numpy as np
from math import radians, sin, cos, sqrt, atan2

# Load compiled index
descriptors = np.load("index/cosplace_descriptors.npy")   # (N, 512)
meta = np.load("index/metadata.npz", allow_pickle=True)
latitudes  = meta["latitudes"]    # (N,)
longitudes = meta["longitudes"]   # (N,)
headings   = meta["headings"]     # (N,)
panoids    = meta["panoids"]      # (N,)

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

def search_index(query_descriptor, center_lat, center_lon, radius_km=2.0, top_k=500):
    """Return top-k candidate indices ranked by cosine similarity within radius."""
    # Radius mask
    dists = np.array([
        haversine_km(center_lat, center_lon, lat, lon)
        for lat, lon in zip(latitudes, longitudes)
    ])
    mask = dists <= radius_km

    # Cosine similarity (descriptors assumed L2-normalised)
    q = query_descriptor / (np.linalg.norm(query_descriptor) + 1e-8)
    sims = descriptors[mask] @ q                    # cosine scores

    local_indices = np.where(mask)[0]
    ranked = local_indices[np.argsort(sims)[::-1]]  # descending
    return ranked[:top_k]

candidates = search_index(descriptor, center_lat=48.8566, center_lon=2.3522, radius_km=1.0)
print(f"Top candidate panoid: {panoids[candidates[0]]}")
print(f"  lat={latitudes[candidates[0]]:.6f}  lon={longitudes[candidates[0]]:.6f}")
```

Build / rebuild the compiled index from parts


```python
import subprocess

# Run after adding new cosplace_parts/*.npz chunks
subprocess.run(["python", "build_index.py"], check=True)
```

Or directly from Python if `build_index.py` exposes a function:

```python
import importlib.util, pathlib

spec = importlib.util.spec_from_file_location("build_index",
                                              pathlib.Path("build_index.py"))
build_index = importlib.util.module_from_spec(spec)
spec.loader.exec_module(build_index)
build_index.build()   # adjust to actual function name in the file
```

Pipeline Stages in Detail


Stage 1 — Global Retrieval (CosPlace)


  • Extracts a 512-dim descriptor from the query image and its horizontal flip
  • Both descriptors are compared via cosine similarity against the index
  • A Haversine radius filter restricts candidates to the target area
  • Returns the top 500–1000 candidates
  • Runs in < 1 second (a single matrix multiply)
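This retrieval step can be sketched in a few lines of NumPy. The combination rule below (take the better cosine score of the query and its flip per index entry) is an assumption for illustration; the actual logic lives in `test_super.py`:

```python
import numpy as np

def global_retrieval(index_desc, q, q_flip, top_k=500):
    """Rank index entries by the best cosine score of query vs. its flip.

    index_desc: (N, 512) L2-normalised descriptors
    q, q_flip:  (512,) descriptors of the query and its horizontal flip
    """
    q = q / (np.linalg.norm(q) + 1e-8)
    q_flip = q_flip / (np.linalg.norm(q_flip) + 1e-8)
    scores = np.maximum(index_desc @ q, index_desc @ q_flip)  # best of the two views
    order = np.argsort(scores)[::-1]                          # descending similarity
    return order[:top_k], scores[order[:top_k]]

# Toy check: the index entry identical to the query should rank first
rng = np.random.default_rng(0)
index_desc = rng.normal(size=(100, 512))
index_desc /= np.linalg.norm(index_desc, axis=1, keepdims=True)
q = index_desc[42].copy()
top, scores = global_retrieval(index_desc, q, rng.normal(size=512), top_k=5)
print(top[0])   # 42
```

Because the candidate shortlist is a single matrix multiply over the whole index, this stage stays under a second even for city-scale indexes.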

Stage 2 — Geometric Verification (ALIKED/DISK + LightGlue)


For each candidate:
  1. Download panorama tiles from Street View (8 tiles, stitched)
  2. Crop rectilinear view at indexed heading
  3. Generate multi-FOV crops: 70°, 90°, 110°
  4. Extract keypoints:
       CUDA  → ALIKED (1024 keypoints)
       MPS/CPU → DISK (768 keypoints)
  5. LightGlue deep feature matching vs. query keypoints
  6. RANSAC geometric verification → inlier count
Best match = candidate with most RANSAC-verified inliers.
Processing 300–500 candidates: 2–5 minutes depending on hardware.
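The RANSAC idea in step 6 can be illustrated with a toy model. The real pipeline verifies LightGlue matches geometrically (a full homography fit); the sketch below substitutes a translation-only model, which is a deliberate simplification, to show how inlier counting separates true matches from coincidental ones:

```python
import numpy as np

def ransac_translation_inliers(pts_a, pts_b, iters=200, tol=3.0, seed=0):
    """Count inliers under a translation-only model (simplified stand-in
    for the homography RANSAC used in the real pipeline).

    pts_a, pts_b: (M, 2) matched keypoint coordinates in the two images
    """
    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(iters):
        i = rng.integers(len(pts_a))            # a 1-point sample fits a translation
        t = pts_b[i] - pts_a[i]
        residuals = np.linalg.norm(pts_a + t - pts_b, axis=1)
        best = max(best, int(np.sum(residuals < tol)))
    return best

# Toy data: 80 points shifted by (10, 5) plus 20 corrupted matches
rng = np.random.default_rng(1)
a = rng.uniform(0, 640, size=(100, 2))
b = a + np.array([10.0, 5.0])
b[80:] = rng.uniform(0, 640, size=(20, 2))      # outlier matches
print(ransac_translation_inliers(a, b))         # ~80 inliers
```

The candidate whose matches yield the most verified inliers wins, which is exactly how the best panorama is selected above.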

Stage 3 — Refinement


Heading refinement : top-15 candidates × ±45° at 15° steps × 3 FOVs
Spatial consensus  : cluster matches into 50 m cells; prefer clusters
Confidence score   : geographic clustering tightness + uniqueness ratio
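The spatial-consensus step can be sketched as follows. The 50 m grid binning mirrors the description above, but the confidence formula used here (share of hits in the dominant cell) is an assumption; Netryx's actual score also folds in a uniqueness ratio:

```python
import numpy as np

def spatial_consensus(lats, lons, cell_m=50.0):
    """Cluster candidate hits into ~cell_m grid cells; return the dominant
    cell and a confidence in [0, 1] (share of hits in that cell)."""
    lat0 = np.mean(lats)
    m_per_deg_lat = 111_320.0                              # metres per degree latitude
    m_per_deg_lon = m_per_deg_lat * np.cos(np.radians(lat0))
    ix = np.floor(np.asarray(lats) * m_per_deg_lat / cell_m).astype(int)
    iy = np.floor(np.asarray(lons) * m_per_deg_lon / cell_m).astype(int)
    cells, counts = np.unique(np.stack([ix, iy], axis=1), axis=0, return_counts=True)
    best = np.argmax(counts)
    return cells[best], counts[best] / len(lats)

# 8 hits clustered within a few metres + 2 stragglers far away
lats = [48.85660 + d for d in [0, 1e-5, 2e-5, -1e-5, 0, 1e-5, -2e-5, 0]] + [48.9, 48.7]
lons = [2.35220] * 8 + [2.4, 2.3]
cell, conf = spatial_consensus(lats, lons)
print(round(conf, 2))   # 0.8
```

A tight cluster of verified matches in one cell is strong evidence; matches scattered across many cells drive the confidence down.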

Ultra Mode (optional, slower)


Enable the Ultra Mode checkbox in the GUI for difficult images (night, motion blur, low texture).
What it adds:
  • LoFTR — detector-free dense matching (handles blur/low-contrast)
  • Descriptor hopping — re-searches index using a descriptor from the matched panorama if initial match is weak (< 50 inliers)
  • Neighbourhood expansion — searches all panoramas within 100 m of the best match

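Descriptor hopping can be sketched like this. The helper names and the exclude-current-best rule are assumptions, and Stage 2 verification is stubbed out by `verify_fn`; only the threshold (50 inliers) comes from the configuration table:

```python
import numpy as np

WEAK_MATCH_INLIERS = 50   # weak-match threshold from the configuration table

def search_with_hopping(query_desc, index_desc, verify_fn):
    """If the best match verifies weakly, re-search the index using that
    panorama's own descriptor (one hop). verify_fn(idx) stands in for Stage 2."""
    def retrieve(d, exclude=None):
        d = d / (np.linalg.norm(d) + 1e-8)
        sims = index_desc @ d
        if exclude is not None:
            sims[exclude] = -np.inf          # don't return the same panorama
        return int(np.argmax(sims))

    best = retrieve(query_desc)
    if verify_fn(best) < WEAK_MATCH_INLIERS:
        best = retrieve(index_desc[best], exclude=best)
    return best

# Toy index: the query matches entry 0 weakly; entry 0's nearest neighbour is 1
index_desc = np.array([[1.0, 0.0, 0.0],
                       [0.8, 0.6, 0.0],
                       [0.0, 0.0, 1.0]])
inliers = {0: 10, 1: 200, 2: 5}              # pretend Stage 2 inlier counts
print(search_with_hopping(np.array([1.0, 0.05, 0.0]), index_desc, inliers.get))  # 1
```

The intuition: a panorama that nearly matched the query is often photographed from the same spot as the true match, so its descriptor is a better probe than the original query.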

Configuration Reference


All configuration is passed through the GUI or by modifying constants in `test_super.py`. Key parameters:

| Parameter | Default | Effect |
|-----------|---------|--------|
| Grid resolution | 300 | Panorama density during indexing; don't change |
| Radius (search) | user-set | Haversine filter radius in km |
| Top-K candidates | 500–1000 | Candidates passed to Stage 2 |
| Heading steps | ±45° / 15° | Refinement sweep range |
| Spatial cell size | 50 m | Consensus clustering granularity |
| Neighbourhood expansion | 100 m | Ultra Mode only |
| Weak match threshold | 50 inliers | Triggers descriptor hopping in Ultra Mode |

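A constants block mirroring the table might look like the following. The names here are hypothetical; check the actual constant names in `test_super.py` before editing:

```python
# Hypothetical constant names mirroring the table above; the real names
# in test_super.py may differ, so verify before editing.
GRID_RESOLUTION = 300        # panorama density during indexing; don't change
TOP_K_CANDIDATES = 500       # candidates passed to Stage 2 (500-1000)
HEADING_SWEEP_DEG = 45       # refinement sweep range: +/-45 degrees
HEADING_STEP_DEG = 15        # ...in 15-degree steps
SPATIAL_CELL_M = 50.0        # consensus clustering granularity (metres)
NEIGHBOURHOOD_M = 100.0      # Ultra Mode neighbourhood expansion (metres)
WEAK_MATCH_INLIERS = 50      # triggers descriptor hopping in Ultra Mode
```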

Hardware & Device Selection


```python
import torch

# Netryx auto-selects; mirror this logic in custom scripts
if torch.cuda.is_available():
    device = torch.device("cuda")   # ALIKED, full speed
elif torch.backends.mps.is_available():
    device = torch.device("mps")    # Mac M-series, DISK
else:
    device = torch.device("cpu")    # Works, significantly slower
```

**Minimum requirements:** 4 GB GPU VRAM, 8 GB RAM, Python 3.9+  
**Recommended:** NVIDIA GPU with 8 GB+ VRAM (CUDA) or Apple M1+ (MPS)

---

Index Design Patterns


Multi-city indexing


All cities share one unified index. The radius filter at search time isolates results:

```python
# Index Paris, London, Tokyo into the same index/
# Then search by specifying center + radius:

# Paris only
candidates = search_index(desc, center_lat=48.8566, center_lon=2.3522, radius_km=5)

# London only
candidates = search_index(desc, center_lat=51.5074, center_lon=-0.1278, radius_km=5)
```

Incremental indexing


New `cosplace_parts/*.npz` files are appended automatically during indexing. Rebuild the compiled index after adding new areas:

```bash
python build_index.py
```

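Conceptually, the rebuild concatenates the per-chunk files into the compiled arrays. A sketch of that step, assuming each chunk stores `descriptors`, `latitudes`, `longitudes`, `headings`, and `panoids` keys (the actual schema in `build_index.py` may differ):

```python
import glob
import numpy as np

def compile_index(parts_dir="cosplace_parts", out_dir="index"):
    """Concatenate per-chunk .npz files into the compiled index files.
    Sketch only: the chunk key names are assumptions, not the verified
    schema used by build_index.py."""
    descs, lats, lons, heads, ids = [], [], [], [], []
    for path in sorted(glob.glob(f"{parts_dir}/*.npz")):
        chunk = np.load(path, allow_pickle=True)
        descs.append(chunk["descriptors"])
        lats.append(chunk["latitudes"])
        lons.append(chunk["longitudes"])
        heads.append(chunk["headings"])
        ids.append(chunk["panoids"])
    np.save(f"{out_dir}/cosplace_descriptors.npy", np.concatenate(descs))
    np.savez(f"{out_dir}/metadata.npz",
             latitudes=np.concatenate(lats), longitudes=np.concatenate(lons),
             headings=np.concatenate(heads), panoids=np.concatenate(ids))
```

Because chunks are concatenated rather than rewritten, adding a new area only appends to the compiled arrays.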

Troubleshooting


GUI appears blank on macOS


```bash
brew install python-tk@3.11   # match your exact Python version
```

`import lightglue` fails


```bash
pip install git+https://github.com/cvg/LightGlue.git
```

`import kornia` fails (Ultra Mode unavailable)


```bash
pip install kornia
```

CUDA out of memory


  • Reduce `top_k` candidates (e.g. 300 instead of 500)
  • Switch to DISK instead of ALIKED by forcing the MPS/CPU device
  • Reduce the FOV count if modifying the pipeline directly

Indexing stops / resumes incorrectly


The index writes incrementally to `cosplace_parts/`. Delete corrupted `.npz` files in that folder and re-run; completed chunks are skipped.
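Finding the corrupted chunks can be automated with a small helper (hypothetical; not part of Netryx) that simply tries to load each archive:

```python
import glob
import numpy as np

def find_corrupt_chunks(parts_dir="cosplace_parts"):
    """Return paths of .npz chunks that fail to load. Delete these and
    re-run indexing; completed chunks are skipped automatically."""
    bad = []
    for path in sorted(glob.glob(f"{parts_dir}/*.npz")):
        try:
            with np.load(path, allow_pickle=True) as chunk:
                _ = list(chunk.keys())        # force the archive to be read
        except Exception:
            bad.append(path)
    return bad
```

Run it, remove the reported files, and restart indexing from the GUI.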

Low confidence score / wrong result


  1. Enable Ultra Mode for degraded images
  2. Increase search radius if location estimate is uncertain
  3. Use a higher grid resolution index for the target area (re-index)
  4. Try AI Coarse mode if manual center coordinates are uncertain

Gemini AI Coarse mode not available


```bash
export GEMINI_API_KEY="your_key_here"
```

Verify it is set:

```bash
echo $GEMINI_API_KEY
```


Key Dependencies


| Package | Role |
|---------|------|
| `torch` | Model inference backbone |
| `lightglue` (GitHub) | Deep feature matching |
| `kornia` | LoFTR dense matching (Ultra Mode) |
| `numpy` | Index storage and cosine similarity |
| `Pillow` | Image loading and preprocessing |
| `tkinter` | GUI (stdlib, may need upgrade on macOS) |


Quick-Start Checklist


- [ ] Clone repo and create venv
- [ ] `pip install -r requirements.txt`
- [ ] `pip install git+https://github.com/cvg/LightGlue.git`
- [ ] (optional) `pip install kornia`
- [ ] (optional) `export GEMINI_API_KEY=...`
- [ ] `python test_super.py`
- [ ] Create mode → set coords + radius → Create Index (wait for completion)
- [ ] `python build_index.py` (if not auto-built)
- [ ] Search mode → upload photo → Manual/AI Coarse → Run Search
- [ ] Read GPS result + confidence score on map