In-memory caching in Golang using samber/hot — eviction algorithms (LRU, LFU, TinyLFU, W-TinyLFU, S3FIFO, ARC, TwoQueue, SIEVE, FIFO), TTL, cache loaders, sharding, stale-while-revalidate, missing key caching, and Prometheus metrics. Apply when using or adopting samber/hot, when the codebase imports github.com/samber/hot, or when the project repeatedly loads the same medium-to-low cardinality resources at high frequency and needs to reduce latency or backend pressure.
## Installation

```sh
npx skill4agent add samber/cc-skills-golang golang-samber-hot
go get -u github.com/samber/hot
```

## Choosing an eviction algorithm

| Algorithm | Constant | Best for | Avoid when |
|---|---|---|---|
| W-TinyLFU | `hot.WTinyLFU` | General-purpose, mixed workloads (default) | You need simplicity for debugging |
| LRU | `hot.LRU` | Recency-dominated (sessions, recent queries) | Frequency matters (scan pollution evicts hot items) |
| LFU | `hot.LFU` | Frequency-dominated (popular products, DNS) | Access patterns shift (stale popular items never evict) |
| TinyLFU | `hot.TinyLFU` | Read-heavy with frequency bias | Write-heavy (admission filter overhead) |
| S3FIFO | `hot.S3FIFO` | High throughput, scan-resistant | Small caches (<1000 items) |
| ARC | `hot.ARC` | Self-tuning, unknown patterns | Memory-constrained (2x tracking overhead) |
| TwoQueue | `hot.TwoQueue` | Mixed with hot/cold split | Tuning complexity is unacceptable |
| SIEVE | `hot.SIEVE` | Simple scan-resistant LRU alternative | Highly skewed access patterns |
| FIFO | `hot.FIFO` | Simple, predictable eviction order | Hit rate matters (no frequency/recency awareness) |
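To make the scan-pollution caveat in the LRU row concrete, here is a minimal toy LRU in plain Go (illustrative only, not hot's implementation): a single sequential pass over cold keys evicts every hot entry, which is exactly what the frequency-aware algorithms above are designed to resist.

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a toy fixed-capacity LRU cache: most recently used keys sit
// at the front of the list, eviction happens from the back.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> list element holding the key
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Get reports whether key is cached, refreshing its recency on a hit.
func (c *lru) Get(key string) bool {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return true
	}
	return false
}

// Set inserts key, evicting the least recently used entry when full.
func (c *lru) Set(key string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	c.items[key] = c.order.PushFront(key)
}

func main() {
	c := newLRU(3)
	c.Set("hot1")
	c.Set("hot2")
	c.Set("hot3")
	// A one-off scan of three cold keys evicts all three hot entries.
	for _, k := range []string{"scan1", "scan2", "scan3"} {
		c.Set(k)
	}
	fmt.Println(c.Get("hot1"), c.Get("scan3")) // false true
}
```

This is why the table steers scan-exposed workloads toward W-TinyLFU, S3FIFO, or SIEVE instead of plain LRU.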
Unless you know your access pattern, start with the default, `hot.WTinyLFU`.

## Quick start

```go
import "github.com/samber/hot"

cache := hot.NewHotCache[string, *User](hot.WTinyLFU, 10_000).
	WithTTL(5 * time.Minute).
	WithJanitor().
	Build()
defer cache.StopJanitor()

cache.Set("user:123", user)
cache.SetWithTTL("session:abc", session, 30*time.Minute)

value, found, err := cache.Get("user:123")
```

`Get()` returns the value, a boolean indicating whether the key was found, and an error.

## Cache loaders

```go
cache := hot.NewHotCache[int, *User](hot.WTinyLFU, 10_000).
	WithTTL(5 * time.Minute).
	WithLoaders(func(ids []int) (map[int]*User, error) {
		return db.GetUsersByIDs(ctx, ids) // batch query
	}).
	WithJanitor().
	Build()
defer cache.StopJanitor()

user, found, err := cache.Get(123) // triggers loader on miss
```

## Sizing capacity

```
capacity = memoryBudget / estimatedItemSize
```

Example: a `*User` struct (~500 bytes) + string key (~50 bytes) + overhead (~100 bytes) ≈ 650 bytes per entry.
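The sizing arithmetic above can be sanity-checked with a few lines of plain Go (the byte figures are the rough estimates from the example, not measured values):

```go
package main

import "fmt"

// estimateCapacity converts a memory budget into an item count using a
// rough per-entry size: value + key + bookkeeping overhead, in bytes.
func estimateCapacity(budgetBytes, valueBytes, keyBytes, overheadBytes int) int {
	return budgetBytes / (valueBytes + keyBytes + overheadBytes)
}

func main() {
	// 256 MB budget, ~500 B *User value, ~50 B string key, ~100 B overhead.
	capacity := estimateCapacity(256_000_000, 500, 50, 100)
	fmt.Println(capacity) // 393846, i.e. roughly 393,000 items
}
```

Treat the result as a starting point and round down; measure real memory use before committing to a capacity.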
A 256 MB budget → 256_000_000 / 650 ≈ 393,000 items. Validate the estimate under real load with `runtime.ReadMemStats`.

## TTL and the janitor

`WithJanitor()` starts a background goroutine that evicts expired entries. Always pair it with `defer cache.StopJanitor()` so the goroutine does not leak.

## Missing-key caching

Use `SetMissing()` to record that a key does not exist upstream. Configure where missing keys are stored with `WithMissingCache(algorithm, capacity)` (a dedicated cache) or `WithMissingSharedCache()` (alongside regular values). This protects the backend from repeated lookups of nonexistent resources.

## Locking

`WithoutLocking()` disables internal synchronization for single-goroutine use. Do not combine `WithoutLocking()` with `WithJanitor()`: the janitor runs on its own goroutine.

## Error handling

On a loader failure, `Get()` returns `(zero, false, err)`. Check `err` before trusting `found`.

## Other options

- `WithJitter(lambda, upperBound)` — randomizes TTLs so entries written together do not all expire together.
- `WithPrometheusMetrics(cacheName)` — exports cache metrics under the given cache name.
- `WithCopyOnRead(fn)` / `WithCopyOnWrite(fn)` — copy values on the way out/in so callers cannot mutate shared cache entries.

## Related skills

- samber/cc-skills-golang@golang-performance
- samber/cc-skills-golang@golang-observability
- samber/cc-skills-golang@golang-database
- samber/cc-skills@promql-cli