Go Concurrency Skill
Operator Context
This skill operates as an operator for Go concurrency workflows, configuring Claude's behavior for correct, leak-free concurrent code. It implements the Domain Intelligence architectural pattern -- encoding Go concurrency idioms, sync primitives, and channel patterns as non-negotiable constraints rather than suggestions.
Hardcoded Behaviors (Always Apply)
- CLAUDE.md Compliance: Read and follow repository CLAUDE.md before writing concurrent code
- Over-Engineering Prevention: Use concurrency only when justified by I/O, CPU parallelism, or measured bottleneck. Sequential code is correct by default
- Context First Parameter: All cancellable or I/O operations accept `ctx context.Context` as the first parameter
- No Goroutine Leaks: Every goroutine must have a guaranteed exit path via context, channel close, or explicit shutdown
- Race Detector Required: Run `go test -race` on all concurrent code during development
- Channel Ownership: Only the sender closes a channel. Never close from receiver side
- Select With Context: Every blocking `select` statement in concurrent code must include a `<-ctx.Done()` case
Default Behaviors (ON unless disabled)
- errgroup Over WaitGroup: Prefer `golang.org/x/sync/errgroup` for goroutine management with error collection
- Buffered Channel Sizing: Buffer size matches expected backpressure, not arbitrary large numbers
- Directional Channel Returns: Return `<-chan T` (receive-only) channels from producer functions to prevent caller misuse
- Mutex Scope Minimization: Lock only the critical section; `defer mu.Unlock()` immediately after `mu.Lock()`
- Loop Variable Safety: Use Go 1.22+ loop variable semantics; remove legacy shadows in new code
- Graceful Shutdown: Workers and servers implement clean shutdown with drain timeout
- Atomic for Counters: Use `sync/atomic` (e.g. `atomic.Int64`) for simple shared counters instead of a mutex
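As a sketch of the atomic-counter default, the helper below (`countConcurrently` is an illustrative name, not part of the skill) increments a shared counter from many goroutines without any mutex:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently increments an atomic counter from n goroutines.
// A simple shared counter needs no mutex: atomic.Int64 (Go 1.19+)
// makes every Add and Load race-free.
func countConcurrently(n int) int64 {
	var hits atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits.Add(1) // atomic increment, safe across goroutines
		}()
	}
	wg.Wait()
	return hits.Load()
}

func main() {
	fmt.Println(countConcurrently(100)) // 100
}
```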
Optional Behaviors (OFF unless enabled)
- Gopls MCP Analysis: Use gopls MCP tools after concurrent code edits to trace channel flow, context propagation, and goroutine spawn sites. Fallback: the gopls CLI or LSP tooling
- Container GOMAXPROCS Tuning: Configure GODEBUG flags for container CPU limit overrides
- Performance Profiling: Profile goroutine counts and channel contention under load
- Custom Rate Limiter: Build a token-bucket rate limiter instead of using `golang.org/x/time/rate`
What This Skill CAN Do
- Guide implementation of worker pools, fan-out/fan-in, and pipeline patterns
- Apply correct context propagation through concurrent call chains
- Select appropriate sync primitives (Mutex, RWMutex, WaitGroup, Once, atomic)
- Implement rate limiting with context-aware waiting
- Diagnose and fix race conditions, deadlocks, and goroutine leaks
- Structure graceful shutdown for background workers and servers
What This Skill CANNOT Do
- Fix general Go bugs unrelated to concurrency (use systematic-debugging instead)
- Optimize non-concurrent performance (use performance optimization workflows instead)
- Write tests for concurrent code (use go-testing skill instead)
- Handle Go error handling patterns (use go-error-handling skill instead)
Instructions
Step 1: Assess Concurrency Need
Before writing concurrent code, answer these questions:
- Is the work I/O-bound? (network, database, filesystem) -- concurrency likely helps
- Is the work CPU-bound? -- concurrency helps only if parallelizable
- Is there a measured bottleneck? -- if not measured, don't assume
If none apply, write sequential code. Concurrency adds complexity; justify it.
Step 2: Choose the Right Primitive
| Need | Primitive | When |
|---|---|---|
| Communicate between goroutines | Channel | Data flows from producer to consumer |
| Protect shared state | `sync.Mutex` | Multiple goroutines read/write same data |
| Read-heavy shared state | `sync.RWMutex` | Many readers, few writers |
| Wait for goroutines to finish | `errgroup.Group` | Need error collection + context cancel |
| Wait without error collection | `sync.WaitGroup` | Fire-and-forget goroutines |
| One-time initialization | `sync.Once` | Lazy singleton, config loading |
| Simple shared counter | `sync/atomic` | Increment/read without mutex overhead |
Step 3: Context Propagation
Always pass context as first parameter for I/O or cancellable operations.
```go
func FetchData(ctx context.Context, id string) (*Data, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	// Buffered channels (size 1) let the goroutine complete its send
	// even if we return early on ctx.Done() -- no goroutine leak.
	resultCh := make(chan *Data, 1)
	errCh := make(chan error, 1)

	go func() {
		data, err := slowOperation(id)
		if err != nil {
			errCh <- err
			return
		}
		resultCh <- data
	}()

	select {
	case data := <-resultCh:
		return data, nil
	case err := <-errCh:
		return nil, fmt.Errorf("fetch failed: %w", err)
	case <-ctx.Done():
		return nil, fmt.Errorf("fetch cancelled: %w", ctx.Err())
	}
}
```
When to use context vs not:
```go
// USE context: I/O, cancellable operations, request-scoped values
func FetchUserData(ctx context.Context, userID string) (*User, error) { ... }

// NO context needed: pure computation
func CalculateTotal(prices []float64) float64 { ... }
```
Step 4: Implement the Pattern
Sync Primitives
```go
// Mutex for state protection
type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SafeCounter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count++
}

// RWMutex for read-heavy workloads
type Cache struct {
	mu    sync.RWMutex
	items map[string]any
}

func (c *Cache) Get(key string) (any, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	item, ok := c.items[key]
	return item, ok
}

func (c *Cache) Set(key string, value any) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = value
}
```
errgroup for concurrent work with error handling (preferred over WaitGroup)
```go
import "golang.org/x/sync/errgroup"

func ProcessAll(ctx context.Context, items []Item) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, item := range items {
		g.Go(func() error {
			return process(ctx, item) // Go 1.22+: item captured correctly
		})
	}
	return g.Wait()
}
```
sync.Once for one-time initialization
```go
type Config struct {
	once   sync.Once
	config *AppConfig
	err    error
}

func (c *Config) Load() (*AppConfig, error) {
	c.once.Do(func() {
		c.config, c.err = loadConfigFromFile()
	})
	return c.config, c.err
}
```
Channel patterns: buffered vs unbuffered
```go
// Unbuffered: synchronous, the sender blocks until a receiver is ready
unbuffered := make(chan int)

// Buffered: asynchronous up to the buffer size
buffered := make(chan int, 100)

// Guidelines:
// - Use unbuffered when you need synchronization
// - Use buffered to decouple sender/receiver timing
// - Buffer size should match expected backpressure
```
For worker pool, fan-out/fan-in, pipeline, rate limiter, and graceful shutdown patterns, see `references/concurrency-patterns.md`.
Step 5: Run Race Detector
```bash
# ALWAYS run with race detector during development
go test -race -count=1 -v ./...

# Run a specific test with race detection
go test -race -run TestConcurrentOperation ./...
```
Step 6: Concurrency Checklist
Before declaring concurrent code complete, verify:
- Every goroutine has a guaranteed exit path (context, channel close, or stop signal)
- Context is passed as the first parameter through the concurrent call chain
- Every blocking `select` includes a `<-ctx.Done()` case
- Channels are closed only by their sender
- Every `Lock()` is paired with a `defer Unlock()`
- `go test -race` passes on all affected packages
Error Handling
Error: "DATA RACE detected by race detector"
Cause: Multiple goroutines access shared variable without synchronization
Solution:
- Identify the variable from the race detector output (it shows goroutine stacks)
- Protect with `sync.Mutex` for complex state, or `sync/atomic` for simple counters
- If using channels, ensure the variable is only accessed by one goroutine at a time
- Re-run `go test -race` to confirm the fix
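The third fix above (only one goroutine touches the variable) is confinement. A minimal sketch, with the illustrative name `sumConfined`:

```go
package main

import "fmt"

// sumConfined eliminates a data race by confinement: only the
// counting goroutine ever touches total, so no lock is needed and
// the race detector stays quiet.
func sumConfined(nums []int) int {
	values := make(chan int)
	result := make(chan int)

	go func() {
		total := 0
		for v := range values { // sole owner of total
			total += v
		}
		result <- total
	}()

	for _, n := range nums {
		values <- n
	}
	close(values) // sender closes; the counting goroutine finishes
	return <-result
}

func main() {
	fmt.Println(sumConfined([]int{1, 2, 3, 4})) // 10
}
```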
Error: "all goroutines are asleep - deadlock!"
Cause: Circular wait on channels or mutexes; no goroutine can make progress
Solution:
- Check for unbuffered channel sends with no receiver ready
- Check for mutex lock ordering inconsistencies
- Ensure channels are closed when done to unblock loops
- Add buffering to channels where appropriate
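The first and last fixes above can be sketched side by side. The helper names (`sendBuffered`, `sendWithReceiver`) are illustrative; the unbuffered single-goroutine variant of either would deadlock:

```go
package main

import "fmt"

// sendBuffered completes without a concurrent receiver because the
// buffer absorbs the value; `ch := make(chan int); ch <- v` in a
// single goroutine would deadlock here.
func sendBuffered(v int) int {
	ch := make(chan int, 1)
	ch <- v // does not block: buffer has room
	return <-ch
}

// sendWithReceiver fixes the same deadlock differently: the send
// runs in its own goroutine, so a receiver is ready for it.
func sendWithReceiver(v int) int {
	ch := make(chan int)
	go func() { ch <- v }() // sender runs concurrently
	return <-ch             // receiver unblocks the sender
}

func main() {
	fmt.Println(sendBuffered(1))     // 1
	fmt.Println(sendWithReceiver(2)) // 2
}
```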
Error: "context deadline exceeded" in concurrent operations
Cause: Operations not completing within timeout, or context cancelled upstream
Solution:
- Check if timeout is realistic for the operation
- Verify context is propagated correctly (not using when a parent context exists)
- Ensure goroutines check in their select loops
- Consider increasing timeout or adding per-operation timeouts with
Anti-Patterns
Anti-Pattern 1: Goroutine Without Exit Path
What it looks like: `go func() { for { doWork() } }()` with no context check or stop channel
Why wrong: Goroutine runs forever, leaking memory. Cannot be cancelled or shut down gracefully.
Do instead: Always include `case <-ctx.Done(): return` or a stop channel in goroutine loops.
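The stop-channel variant of the fix looks like this (the names `pollLoop`, `stop`, and `ticks` are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// pollLoop has a guaranteed exit path via a stop channel: the owner
// closes stop, and the loop's select observes it and returns.
func pollLoop(stop <-chan struct{}, ticks chan<- int) {
	n := 0
	for {
		select {
		case <-stop:
			return // exit path: owner signalled shutdown
		case <-time.After(10 * time.Millisecond):
			n++
			ticks <- n
		}
	}
}

func main() {
	stop := make(chan struct{})
	ticks := make(chan int)
	go pollLoop(stop, ticks)

	fmt.Println(<-ticks) // 1
	close(stop)          // loop observes stop and returns
}
```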
Anti-Pattern 2: Unnecessary Concurrency
What it looks like: Spawning goroutines for sequential work that does not benefit from parallelism
Why wrong: Adds complexity (error channels, WaitGroups, race risks) without performance gain. Sequential code is simpler and correct by default.
Do instead: Measure first. Use concurrency only for I/O-bound, CPU-parallel, or proven bottleneck scenarios.
Anti-Pattern 3: Closing Channel From Receiver Side
What it looks like: Consumer goroutine calling `close(ch)` on a channel it reads from
Why wrong: Sender may still send, causing panic. Multiple receivers may double-close.
Do instead: Only the sender (producer) closes the channel. Use `defer close(ch)` in the goroutine that writes.
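A sketch of correct ownership, combining `defer close` with a receive-only return type (`generate` is an illustrative name):

```go
package main

import "fmt"

// generate is the producer: it owns the channel and is the only side
// that closes it, via defer in the writing goroutine. The <-chan int
// return type stops callers from sending or closing.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out) // sender closes; receivers just range
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func main() {
	sum := 0
	for n := range generate(1, 2, 3) { // range ends when the sender closes
		sum += n
	}
	fmt.Println(sum) // 6
}
```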
Anti-Pattern 4: Mutex Lock Without Defer Unlock
What it looks like: `mu.Lock()` followed by complex logic before `mu.Unlock()`, with early returns in between
Why wrong: Early returns or panics skip the `Unlock`, causing deadlocks.
Do instead: Always `defer mu.Unlock()` immediately after `mu.Lock()`.
Anti-Pattern 5: Ignoring Context in Select
What it looks like: `select { case msg := <-ch: handle(msg) }` without a `case <-ctx.Done()`
Why wrong: Goroutine blocks forever if the channel never receives and the context is cancelled.
Do instead: Every blocking `select` in concurrent code must include `case <-ctx.Done(): return`.
References
This skill uses these shared patterns:
- Anti-Rationalization - Prevents shortcut rationalizations
- Verification Checklist - Pre-completion checks
Domain-Specific Anti-Rationalization
| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "No need for context, this is fast" | Fast today, slow tomorrow under load | Pass context to all I/O operations |
| "Race detector is slow, skip it" | Races are silent until production | Run every time |
| "One goroutine leak won't matter" | Leaks compound; OOM in production | Verify every goroutine has exit path |
| "Sequential is too slow" | Assumption without measurement | Profile first, then add concurrency |
| "Buffer of 1000 should be enough" | Arbitrary buffers hide backpressure bugs | Size buffers to actual throughput |
Reference Files
- `${CLAUDE_SKILL_DIR}/references/concurrency-patterns.md`: Worker pool, fan-out/fan-in, pipeline, rate limiter, and graceful shutdown patterns with full code examples