golang-samber-hot
Persona: You are a Go engineer who treats caching as a system design decision. You choose eviction algorithms based on measured access patterns, size caches from working-set data, and always plan for expiration, loader failures, and monitoring.
Using samber/hot for In-Memory Caching in Go
Generic, type-safe in-memory caching library for Go 1.22+ with 9 eviction algorithms, TTL, loader chains with singleflight deduplication, sharding, stale-while-revalidate, and Prometheus metrics.
Official Resources:
This skill is not exhaustive. Refer to the library documentation and code examples for more information. Context7 can help as a discoverability platform.
```bash
go get -u github.com/samber/hot
```

Algorithm Selection
Pick based on your access pattern — the wrong algorithm wastes memory or tanks hit rate.
| Algorithm | Best for | Avoid when |
|---|---|---|
| W-TinyLFU | General-purpose, mixed workloads (default) | You need simplicity for debugging |
| LRU | Recency-dominated (sessions, recent queries) | Frequency matters (scan pollution evicts hot items) |
| LFU | Frequency-dominated (popular products, DNS) | Access patterns shift (stale popular items never evict) |
| TinyLFU | Read-heavy with frequency bias | Write-heavy (admission filter overhead) |
| S3FIFO | High throughput, scan-resistant | Small caches (<1000 items) |
| ARC | Self-tuning, unknown patterns | Memory-constrained (2x tracking overhead) |
| TwoQueue | Mixed with hot/cold split | Tuning complexity is unacceptable |
| SIEVE | Simple scan-resistant LRU alternative | Highly skewed access patterns |
| FIFO | Simple, predictable eviction order | Hit rate matters (no frequency/recency awareness) |
Decision shortcut: Start with `hot.WTinyLFU`. Switch only when profiling shows the miss rate is too high for your SLO.
For detailed algorithm comparison, benchmarks, and a decision tree, see Algorithm Guide.
Core Usage
Basic Cache with TTL
```go
import (
	"time"

	"github.com/samber/hot"
)

cache := hot.NewHotCache[string, *User](hot.WTinyLFU, 10_000).
	WithTTL(5 * time.Minute).
	WithJanitor().
	Build()
defer cache.StopJanitor()

cache.Set("user:123", user)
cache.SetWithTTL("session:abc", session, 30*time.Minute)

value, found, err := cache.Get("user:123")
```

Loader Pattern (Read-Through)
Loaders fetch missing keys automatically with singleflight deduplication — concurrent `Get()` calls for the same missing key share one loader invocation:

```go
cache := hot.NewHotCache[int, *User](hot.WTinyLFU, 10_000).
	WithTTL(5 * time.Minute).
	WithLoaders(func(ids []int) (map[int]*User, error) {
		return db.GetUsersByIDs(ctx, ids) // batch query
	}).
	WithJanitor().
	Build()
defer cache.StopJanitor()

user, found, err := cache.Get(123) // triggers loader on miss
```
Capacity Sizing
Before setting the cache capacity, estimate how many items fit in the memory budget:

- Estimate single-item size — estimate the size of the struct, add the size of heap-allocated fields (slices, maps, strings), and include the key size. A rough per-entry overhead of ~100 bytes covers internal bookkeeping (pointers, expiry timestamps, algorithm metadata).
- Ask the developer how much memory is dedicated to this cache in production (e.g., 256 MB, 1 GB). This depends on the service's total memory and what else shares the process.
- Compute capacity — `capacity = memoryBudget / estimatedItemSize`. Round down to leave headroom.

Example: `*User` struct ~500 bytes + string key ~50 bytes + overhead ~100 bytes = ~650 bytes/entry. 256 MB budget → 256_000_000 / 650 ≈ 393,000 items.

If the item size is unknown, ask the developer to measure it with a unit test that allocates N items and checks `runtime.ReadMemStats`. Guessing capacity without measuring leads to OOM or wasted memory.
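The measurement step can be sketched as a small program. The `User` struct here is a hypothetical stand-in; substitute your real value and key shapes. Note this measures the raw data structures only, so still add the ~100-byte per-entry bookkeeping overhead described above:

```go
package main

import (
	"fmt"
	"runtime"
)

// User stands in for whatever value type the cache will hold.
type User struct {
	ID    int
	Name  string
	Email string
}

// measurePerEntry allocates n map entries shaped like cache entries
// (string key -> *User) and returns the approximate heap bytes each one costs.
func measurePerEntry(n int) uint64 {
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	m := make(map[string]*User, n)
	for i := 0; i < n; i++ {
		m[fmt.Sprintf("user:%d", i)] = &User{ID: i, Name: "Jane Doe", Email: "jane@example.com"}
	}

	runtime.GC()
	runtime.ReadMemStats(&after)
	runtime.KeepAlive(m) // keep the map reachable until after the second read

	return (after.HeapAlloc - before.HeapAlloc) / uint64(n)
}

func main() {
	perEntry := measurePerEntry(100_000)
	fmt.Printf("approx bytes per entry: %d\n", perEntry)

	const memoryBudget = 256 << 20 // 256 MB
	fmt.Printf("capacity for a 256 MB budget: %d items\n", memoryBudget/perEntry)
}
```

Run it a few times and take the largest result; GC timing makes individual runs noisy.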
Common Mistakes
- Forgetting `WithJanitor()` — without it, expired entries stay in memory until the algorithm evicts them. Always chain `.WithJanitor()` in the builder and `defer cache.StopJanitor()`.
- Calling `SetMissing()` without missing cache config — panics at runtime. Enable `WithMissingCache(algorithm, capacity)` or `WithMissingSharedCache()` in the builder first.
- `WithoutLocking()` + `WithJanitor()` — mutually exclusive, panics. `WithoutLocking()` is only safe for single-goroutine access without background cleanup.
- Oversized cache — a cache holding everything is a map with overhead. Size to your working set (typically 10-20% of total data). Monitor hit rate to validate.
- Ignoring loader errors — `Get()` returns `(zero, false, err)` on loader failure. Always check `err`, not just `found`.
Best Practices
- Always set TTL — unbounded caches serve stale data indefinitely because there is no signal to refresh.
- Use `WithJitter(lambda, upperBound)` to spread expirations — without jitter, items created together expire together, causing a thundering herd on the loader.
- Monitor with `WithPrometheusMetrics(cacheName)` — hit rate below 80% usually means the cache is undersized or the algorithm is wrong for the workload.
- Use `WithCopyOnRead(fn)` / `WithCopyOnWrite(fn)` for mutable values — without copies, callers mutate cached objects and corrupt shared state.
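The copy-on-read point can be demonstrated without the library. The hypothetical `copyUser` below has the shape of a function you would pass to `WithCopyOnRead`: it clones the value so a caller's mutation cannot corrupt what the cache holds:

```go
package main

import "fmt"

// User is a hypothetical mutable cached value.
type User struct {
	Name string
}

// copyUser returns an independent clone of the cached value —
// the kind of function to pass as WithCopyOnRead(copyUser).
func copyUser(u *User) *User {
	clone := *u
	return &clone
}

func main() {
	cached := &User{Name: "alice"} // stands in for the pointer the cache holds

	// Without copy-on-read: the caller receives the shared pointer, and
	// mutating it corrupts the cached value for every later reader.
	leaked := cached
	leaked.Name = "mallory"
	fmt.Println(cached.Name) // prints "mallory": shared state was corrupted

	// With copy-on-read: the caller mutates only its private clone.
	cached = &User{Name: "alice"}
	safe := copyUser(cached)
	safe.Name = "mallory"
	fmt.Println(cached.Name) // prints "alice": cached value is intact
}
```

The cost is one allocation per read, which is usually cheap insurance against heisenbugs from shared mutable state.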
For advanced patterns (revalidation, sharding, missing cache, monitoring setup), see Production Patterns.
For the complete API surface, see API Reference.
If you encounter a bug or unexpected behavior in samber/hot, open an issue at https://github.com/samber/hot/issues.
Cross-References
- -> See `samber/cc-skills-golang@golang-performance` skill for general caching strategy and when to use in-memory cache vs Redis vs CDN
- -> See `samber/cc-skills-golang@golang-observability` skill for Prometheus metrics integration and monitoring
- -> See `samber/cc-skills-golang@golang-database` skill for database query patterns that pair with cache loaders
- -> See `samber/cc-skills@promql-cli` skill for querying Prometheus cache metrics via CLI