content-hash-cache-pattern
# Content-Hash File Cache Pattern
Cache expensive file processing results (PDF parsing, text extraction, image analysis) using SHA-256 content hashes as cache keys. Unlike path-based caching, this approach survives file moves/renames and auto-invalidates when content changes.
## When to Activate
- Building file processing pipelines (PDF, images, text extraction)
- Processing cost is high and same files are processed repeatedly
- Need a `--cache`/`--no-cache` CLI option
- Want to add caching to existing pure functions without modifying them
## Core Pattern
### 1. Content-Hash Based Cache Key
Use file content (not path) as the cache key:
```python
import hashlib
from pathlib import Path

_HASH_CHUNK_SIZE = 65536  # 64KB chunks for large files

def compute_file_hash(path: Path) -> str:
    """SHA-256 of file contents (chunked for large files)."""
    if not path.is_file():
        raise FileNotFoundError(f"File not found: {path}")
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(_HASH_CHUNK_SIZE)
            if not chunk:
                break
            sha256.update(chunk)
    return sha256.hexdigest()
```

**Why content hash?** File rename/move = cache hit. Content change = automatic invalidation. No index file needed.
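A quick check of that claim, assuming the `compute_file_hash` above (the file names here are hypothetical):

```python
import shutil
from pathlib import Path

# Path-independence: moving/renaming the file yields the same key.
src = Path("report.pdf")             # hypothetical input file
h1 = compute_file_hash(src)
dst = Path("archive") / "report-v1.pdf"
dst.parent.mkdir(exist_ok=True)
shutil.move(src, dst)
assert compute_file_hash(dst) == h1  # rename/move -> same hash -> cache hit

# Content change produces a new hash -> automatic invalidation.
dst.write_bytes(dst.read_bytes() + b"\n")
assert compute_file_hash(dst) != h1
```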
### 2. Frozen Dataclass for Cache Entry
```python
from dataclasses import dataclass

@dataclass(frozen=True, slots=True)
class CacheEntry:
    file_hash: str
    source_path: str
    document: ExtractedDocument  # The cached result
```
### 3. File-Based Cache Storage
Each cache entry is stored as `{hash}.json` — O(1) lookup by hash, no index file required.

```python
import json

def write_cache(cache_dir: Path, entry: CacheEntry) -> None:
    cache_dir.mkdir(parents=True, exist_ok=True)
    cache_file = cache_dir / f"{entry.file_hash}.json"
    data = serialize_entry(entry)
    cache_file.write_text(json.dumps(data, ensure_ascii=False), encoding="utf-8")

def read_cache(cache_dir: Path, file_hash: str) -> CacheEntry | None:
    cache_file = cache_dir / f"{file_hash}.json"
    if not cache_file.is_file():
        return None
    try:
        raw = cache_file.read_text(encoding="utf-8")
        data = json.loads(raw)
        return deserialize_entry(data)
    except (json.JSONDecodeError, ValueError, KeyError):
        return None  # Treat corruption as cache miss
```
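`serialize_entry` and `deserialize_entry` are the manual JSON helpers referenced above; the pattern leaves them to the caller. A minimal sketch, assuming `ExtractedDocument` is a frozen dataclass whose `text` and `page_count` fields are hypothetical:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True, slots=True)
class ExtractedDocument:  # hypothetical shape, for illustration only
    text: str
    page_count: int

def serialize_entry(entry: CacheEntry) -> dict[str, Any]:
    # Manual, field-by-field serialization: explicit and stable across refactors.
    return {
        "file_hash": entry.file_hash,
        "source_path": entry.source_path,
        "document": {
            "text": entry.document.text,
            "page_count": entry.document.page_count,
        },
    }

def deserialize_entry(data: dict[str, Any]) -> CacheEntry:
    doc = data["document"]  # missing keys raise KeyError -> read_cache treats it as a miss
    return CacheEntry(
        file_hash=data["file_hash"],
        source_path=data["source_path"],
        document=ExtractedDocument(text=doc["text"], page_count=doc["page_count"]),
    )
```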
### 4. Service Layer Wrapper (SRP)
Keep the processing function pure. Add caching as a separate service layer.
```python
import logging

logger = logging.getLogger(__name__)

def extract_with_cache(
    file_path: Path,
    *,
    cache_enabled: bool = True,
    cache_dir: Path = Path(".cache"),
) -> ExtractedDocument:
    """Service layer: cache check -> extraction -> cache write."""
    if not cache_enabled:
        return extract_text(file_path)  # Pure function, no cache knowledge
    file_hash = compute_file_hash(file_path)

    # Check cache
    cached = read_cache(cache_dir, file_hash)
    if cached is not None:
        logger.info("Cache hit: %s (hash=%s)", file_path.name, file_hash[:12])
        return cached.document

    # Cache miss -> extract -> store
    logger.info("Cache miss: %s (hash=%s)", file_path.name, file_hash[:12])
    doc = extract_text(file_path)
    entry = CacheEntry(file_hash=file_hash, source_path=str(file_path), document=doc)
    write_cache(cache_dir, entry)
    return doc
```
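Typical call sites (`report.pdf` is a hypothetical input):

```python
doc = extract_with_cache(Path("report.pdf"))       # first run: miss -> extract -> store
doc = extract_with_cache(Path("report.pdf"))       # second run: hit, extraction skipped
doc = extract_with_cache(Path("report.pdf"), cache_enabled=False)  # bypass entirely
```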
## Key Design Decisions
| Decision | Rationale |
|---|---|
| SHA-256 content hash | Path-independent, auto-invalidates on content change |
| One `{hash}.json` file per entry | O(1) lookup, no index file needed |
| Service layer wrapper | SRP: extraction stays pure, cache is a separate concern |
| Manual JSON serialization | Full control over frozen dataclass serialization |
| Corruption returns `None` | Graceful degradation, re-processes on next run |
| `mkdir(parents=True, exist_ok=True)` in `write_cache` | Lazy directory creation on first write |
## Best Practices
- Hash content, not paths — paths change, content identity doesn't
- Chunk large files when hashing — avoid loading entire files into memory (a stdlib shortcut is sketched after this list)
- Keep processing functions pure — they should know nothing about caching
- Log cache hit/miss with truncated hashes for debugging
- Handle corruption gracefully — treat invalid cache entries as misses, never crash
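On Python 3.11+, the manual read loop in `compute_file_hash` can be replaced with the standard library's `hashlib.file_digest`, which streams the file in chunks internally (a drop-in sketch):

```python
import hashlib
from pathlib import Path

def compute_file_hash(path: Path) -> str:
    """SHA-256 of file contents; file_digest handles the chunked reads."""
    if not path.is_file():
        raise FileNotFoundError(f"File not found: {path}")
    with open(path, "rb") as f:
        return hashlib.file_digest(f, "sha256").hexdigest()
```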
## Anti-Patterns to Avoid
```python
# BAD: Path-based caching (breaks on file move/rename)
cache = {"/path/to/file.pdf": result}

# BAD: Adding cache logic inside the processing function (SRP violation)
def extract_text(path, *, cache_enabled=False, cache_dir=None):
    if cache_enabled:  # Now this function has two responsibilities
        ...

# BAD: Using dataclasses.asdict() with nested frozen dataclasses
# (can cause issues with complex nested types)
data = dataclasses.asdict(entry)  # Use manual serialization instead
```

## When to Use
- File processing pipelines (PDF parsing, OCR, text extraction, image analysis)
- CLI tools that benefit from `--cache`/`--no-cache` options (see the sketch after this list)
- Batch processing where the same files appear across runs
- Adding caching to existing pure functions without modifying them
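The `--cache`/`--no-cache` pair maps directly onto `argparse.BooleanOptionalAction` (Python 3.9+); a minimal sketch with a hypothetical tool name:

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(prog="extract")  # hypothetical CLI name
parser.add_argument("file", type=Path)
parser.add_argument(
    "--cache",
    action=argparse.BooleanOptionalAction,  # generates both --cache and --no-cache
    default=True,
    help="enable the content-hash cache (default: on)",
)
args = parser.parse_args()
doc = extract_with_cache(args.file, cache_enabled=args.cache)
```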
## When NOT to Use
- Data that must always be fresh (real-time feeds)
- Cache entries that would be extremely large (consider streaming instead)
- Results that depend on parameters beyond file content (e.g., different extraction configs); a workaround is sketched after this list
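For that last case, one workaround is to fold a canonical serialization of the config into the key alongside the content hash, so different configs get separate cache entries. A sketch, assuming the config is a JSON-serializable dict (hypothetical, not part of the base pattern):

```python
import hashlib
import json
from pathlib import Path

def compute_cache_key(path: Path, config: dict) -> str:
    """Content hash mixed with a canonical JSON dump of the config."""
    config_blob = json.dumps(config, sort_keys=True).encode("utf-8")
    content_hash = compute_file_hash(path).encode("ascii")
    return hashlib.sha256(content_hash + b"\x00" + config_blob).hexdigest()
```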