2. Concurrency Deep Dive
Structured concurrency with contexts, channels, synchronization, and bounded parallelism without leaks.
Question: What is a goroutine leak, and how can you prevent one?
Answer: A goroutine leak occurs when a goroutine remains active in memory but is no longer doing useful work and has no path to termination. This typically happens when a goroutine blocks indefinitely on a channel or other synchronization primitive.
Explanation: Always ensure every goroutine you launch has a clear exit path. The most common way to prevent leaks is by using context.Context to signal cancellation. When the context is canceled, the goroutine should detect this (usually in a select statement) and return.
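A minimal sketch of that exit path (the function and channel names here are illustrative, and the usual context import is assumed): the worker selects on ctx.Done() alongside its work channel, so cancellation always unblocks it.

// Example: a worker that cannot leak because cancellation always unblocks it
func worker(ctx context.Context, jobs <-chan int) {
    for {
        select {
        case <-ctx.Done():
            return // context canceled: exit instead of blocking forever
        case j, ok := <-jobs:
            if !ok {
                return // channel closed: no more work will arrive
            }
            _ = j // process the job here
        }
    }
}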
Question: Explain the difference between a buffered and an unbuffered channel.
Answer:
Unbuffered channel (make(chan T)): Has no capacity. A sender will block until a receiver is ready to take the value, and a receiver will block until a sender is ready to provide one. Unbuffered channels guarantee synchronization.
Buffered channel (make(chan T, N)): Has a capacity of N. A sender will block only when the buffer is full, and a receiver will block only when the buffer is empty. Buffered channels decouple the sender and receiver and can smooth out bursts of work.
Explanation: Unbuffered channels are for coordinating and synchronizing goroutines, ensuring a handoff occurs. Buffered channels act more like a queue, allowing producers and consumers to work at different rates, up to the buffer's capacity. Using an unbounded buffer (a very large buffered channel) is often an anti-pattern that can hide downstream performance issues and lead to high memory consumption.
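A small sketch of the difference in blocking behavior (values and buffer size are arbitrary; assumes fmt is imported):

// Example: unbuffered handoff vs. buffered queueing
func channelDemo() {
    unbuffered := make(chan int)
    go func() { unbuffered <- 1 }() // the send completes only when the receive below runs
    fmt.Println(<-unbuffered)       // synchronizing handoff

    buffered := make(chan int, 2)
    buffered <- 1 // does not block: buffer has room
    buffered <- 2 // does not block: buffer is now full
    // a third send would block here until a receiver drains a value
    fmt.Println(<-buffered, <-buffered)
}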
Question: What is the purpose of the select statement in Go?
Answer: A select statement allows a goroutine to wait on multiple channel operations simultaneously. It blocks until one of its cases can proceed, then executes that case. If multiple cases are ready at the same time, one is chosen at random.
Explanation: select is fundamental to implementing complex concurrent patterns. It's used for multiplexing, implementing timeouts (with time.After), and handling cancellation signals from a context.Context. A default case can be added to make the select non-blocking.
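A sketch of those three uses, assuming the context, time, and errors packages are imported; the function and channel names are made up for illustration:

// Example: multiplexing, a timeout, and a non-blocking receive with select
func waitForResult(ctx context.Context, results <-chan string) (string, error) {
    select {
    case r := <-results:
        return r, nil
    case <-time.After(2 * time.Second): // timeout case
        return "", errors.New("timed out waiting for result")
    case <-ctx.Done(): // cancellation case
        return "", ctx.Err()
    }
}

func tryReceive(results <-chan string) (string, bool) {
    select {
    case r := <-results:
        return r, true
    default: // nothing ready: return immediately instead of blocking
        return "", false
    }
}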
Question: How should context.Context be used correctly in Go applications?
Answer: context.Context should be passed as the first argument to functions in a call chain, conventionally named ctx. It is used to propagate cancellation signals, deadlines, and request-scoped values. Never store a context inside a struct; pass it explicitly.
Explanation: The primary use of context is cancellation. For example, in an HTTP server, a context is created for each incoming request. If the client disconnects, the context is canceled, signaling all downstream operations (database queries, RPC calls) to stop work and free up resources. Using a context to pass optional parameters is an anti-pattern.
// Example: worker pool with context cancellation
type Task func(ctx context.Context) error

func RunWorkerPool(
    ctx context.Context, workers int, tasks <-chan Task,
) <-chan error {
    errs := make(chan error)
    var wg sync.WaitGroup
    wg.Add(workers)
    for i := 0; i < workers; i++ {
        go func() {
            defer wg.Done()
            for {
                select {
                case <-ctx.Done():
                    return
                case t, ok := <-tasks:
                    if !ok {
                        return
                    }
                    if err := t(ctx); err != nil {
                        select {
                        case errs <- err:
                        case <-ctx.Done():
                            return
                        }
                    }
                }
            }
        }()
    }
    go func() {
        wg.Wait()
        close(errs)
    }()
    return errs
}
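One possible way to drive the pool above (illustrative only; assumes the log package for reporting). Note that the producer also selects on ctx.Done() so it cannot block forever if the workers exit early:

// Example: driving RunWorkerPool and draining its error channel
func runTasks(ctx context.Context) {
    tasks := make(chan Task)
    go func() {
        defer close(tasks) // the producer owns the channel and closes it
        for i := 0; i < 10; i++ {
            select {
            case tasks <- func(ctx context.Context) error { return nil }:
            case <-ctx.Done():
                return // stop producing if the pool has been canceled
            }
        }
    }()
    for err := range RunWorkerPool(ctx, 4, tasks) {
        log.Println("task failed:", err)
    }
}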
Question: When should you use atomics (sync/atomic) versus a mutex?
Answer: Use sync/atomic for simple, primitive operations like incrementing a counter or swapping a pointer. Atomics provide lock-free guarantees and can be much faster than mutexes for these specific use cases. Use a mutex (sync.Mutex) to protect more complex critical sections involving multiple operations.
Explanation: Atomic operations are handled directly by the hardware and avoid the scheduler overhead that can come with mutex contention. However, they are limited in scope. If you need to perform a sequence of actions that must all appear to happen as a single, atomic unit (e.g., read a value, modify it, write it back), a mutex is the correct tool. Atomic operations include the necessary happens-before guarantees; never mix atomic and non-atomic reads/writes on the same variable.
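A rough illustration of the split, assuming Go 1.19+ for the typed atomic.Int64; the Metrics type and its fields are invented for the example:

// Example: an atomic counter vs. a mutex-protected compound update
type Metrics struct {
    hits atomic.Int64 // a single value: an atomic is enough

    mu    sync.Mutex // several fields updated together: use a mutex
    total int64
    max   int64
}

func (m *Metrics) Hit() { m.hits.Add(1) }

func (m *Metrics) Record(v int64) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.total += v // read-modify-write across fields must be one critical section
    if v > m.max {
        m.max = v
    }
}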
Question: What are best practices for using sync.WaitGroup?
Answer: Always call wg.Add(n) before starting the goroutines, call defer wg.Done() as the first line in each goroutine, and call wg.Wait() in the orchestrating goroutine.
Explanation: Do not copy a WaitGroup. Avoid calling Add concurrently with Wait or after the counter has already dropped back to zero; doing so is a race. Prefer a context.Context to signal cancellation rather than trying to forcefully stop goroutines.
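A minimal sketch of those practices (the fetchAll name and its body are placeholders):

// Example: Add before launching, Done deferred inside, Wait in the caller
func fetchAll(urls []string) {
    var wg sync.WaitGroup
    wg.Add(len(urls)) // the count is known up front, before any goroutine starts
    for _, u := range urls {
        go func(u string) {
            defer wg.Done() // runs even if the work below returns early
            _ = u           // fetch u here
        }(u)
    }
    wg.Wait() // blocks until every Done has been called
}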
Question: Who should close a channel, and what are safe patterns?
Answer: The sending side owns the channel and should close it to signal no further values. Receivers should range over the channel and stop when it closes.
Explanation: Never close a channel from the receiving side and never close a channel that you did not create. Sending on a closed channel panics; receiving from a closed channel first yields any values remaining in the buffer, then the zero value.
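A small sketch of that ownership pattern; the produce/consume names are illustrative:

// Example: the producer owns and closes the channel; the consumer ranges over it
func produce(nums []int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out) // only the sending side closes
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func consume(in <-chan int) int {
    sum := 0
    for n := range in { // range exits automatically when the channel is closed
        sum += n
    }
    return sum
}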
Question: How do you coordinate goroutines that produce errors?
Answer: Use errgroup from golang.org/x/sync/errgroup, which runs functions concurrently, cancels the shared context on the first error, and returns the first non-nil error from Wait.
Explanation: errgroup.WithContext provides a context that is canceled when any function returns an error, helping to avoid goroutine leaks and wasted work.
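A sketch of the pattern, assuming golang.org/x/sync/errgroup and net/http are imported; the fetchPages helper is invented for illustration:

// Example: fan-out with errgroup; the shared ctx is canceled on the first error
func fetchPages(ctx context.Context, urls []string) error {
    g, ctx := errgroup.WithContext(ctx)
    for _, u := range urls {
        u := u // not needed on Go 1.22+, harmless on older versions
        g.Go(func() error {
            req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
            if err != nil {
                return err
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return err // the first error cancels ctx for the other goroutines
            }
            return resp.Body.Close()
        })
    }
    return g.Wait() // returns the first non-nil error, if any
}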
Question: How do you implement rate limiting or throttling?
Answer: Use time.Ticker for steady throughput or golang.org/x/time/rate.Limiter for token-bucket rate limiting with bursts.
Explanation: A Ticker sends at fixed intervals but can drift under load. A Limiter provides more control over sustained rate and burst capacity and integrates well with context.Context.
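A sketch using rate.Limiter, assuming golang.org/x/time/rate is imported; the limit and burst values are arbitrary:

// Example: token-bucket limiting with rate.Limiter (10 ops/s, bursts of 5)
func callWithLimit(ctx context.Context, work []func() error) error {
    lim := rate.NewLimiter(rate.Limit(10), 5)
    for _, fn := range work {
        if err := lim.Wait(ctx); err != nil {
            return err // ctx canceled or deadline exceeded while waiting for a token
        }
        if err := fn(); err != nil {
            return err
        }
    }
    return nil
}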
Question: When should you use sync.Cond?
Answer: Use sync.Cond to signal state changes to goroutines waiting on a predicate (e.g., producer/consumer) when channels are a poor fit.
Explanation: Hold the lock while checking the predicate in a loop; call Signal or Broadcast after changing the state.
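A rough sketch of a condition-variable-backed queue; the Queue type is invented for illustration:

// Example: a simple queue where consumers wait on a condition variable
type Queue struct {
    mu    sync.Mutex
    cond  *sync.Cond
    items []int
}

func NewQueue() *Queue {
    q := &Queue{}
    q.cond = sync.NewCond(&q.mu)
    return q
}

func (q *Queue) Push(v int) {
    q.mu.Lock()
    q.items = append(q.items, v)
    q.mu.Unlock()
    q.cond.Signal() // wake one waiting consumer after the state change
}

func (q *Queue) Pop() int {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == 0 { // re-check the predicate in a loop, not an if
        q.cond.Wait() // releases the lock while waiting, re-acquires on wake
    }
    v := q.items[0]
    q.items = q.items[1:]
    return v
}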
Question: What is atomic.Value and when to use it?
Answer: atomic.Value gives lock-free, type-safe loads and stores of a single value (e.g., config snapshots), with happens-before guarantees for readers.
Explanation: All stored values must share the exact concrete type. Ideal for read-mostly data that is occasionally swapped.
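A minimal sketch of the config-snapshot pattern; the Config type and default values are placeholders (assumes sync/atomic and time are imported):

// Example: read-mostly config snapshots swapped atomically
type Config struct {
    Timeout time.Duration
}

var current atomic.Value // always stores a *Config

func init() { current.Store(&Config{Timeout: 5 * time.Second}) } // seed before readers start

func LoadConfig() *Config { return current.Load().(*Config) }

func UpdateConfig(c *Config) {
    current.Store(c) // every stored value must have the same concrete type (*Config)
}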
Question: What is context.WithCancelCause and how do you retrieve the cause?
Answer: context.WithCancelCause attaches a specific error to cancellation; callers use context.Cause(ctx) to inspect it.
Explanation: This improves error reporting compared to the generic context.Canceled error.
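A short sketch, assuming Go 1.20+ where context.WithCancelCause and context.Cause are available; the error value is illustrative:

// Example: canceling with a cause and reading it back
var ErrShuttingDown = errors.New("server is shutting down")

func runWithCause() {
    ctx, cancel := context.WithCancelCause(context.Background())
    cancel(ErrShuttingDown) // record why the context was canceled

    <-ctx.Done()
    fmt.Println(ctx.Err())          // context.Canceled
    fmt.Println(context.Cause(ctx)) // server is shutting down
}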