4. Performance & Memory

Measure first; reduce allocations/copies; leverage compiler optimizations; tune GC only after fixes.

Question: What is escape analysis and how can you use it to optimize Go code?

Answer: Escape analysis is a compile-time analysis that determines whether a variable can be safely allocated on the goroutine's stack or must "escape" to the heap. Stack allocation is much faster, essentially a pointer bump that is reclaimed automatically when the function returns, while heap allocation goes through the memory allocator and adds work for the garbage collector.

Explanation: You can view escape analysis decisions by using the build flag go build -gcflags='-m'. Common reasons for escaping include: returning a pointer to a variable, sending a pointer over a channel, or storing a pointer in a slice that outlives the current function's stack frame. By understanding why variables escape, you can sometimes refactor code to favor stack allocation, reducing GC pressure and improving performance.
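As a minimal sketch (function names are illustrative), compiling the following with go build -gcflags='-m' should report that x in newCounter escapes to the heap, while localSum's variables stay on the stack:

```go
package main

import "fmt"

// newCounter returns a pointer to a local variable; the pointer
// outlives the stack frame, so x is moved ("escapes") to the heap.
func newCounter() *int {
	x := 0
	return &x
}

// localSum's variables never leave the frame, so they can stay
// on the stack and cost no GC work.
func localSum(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i
	}
	return total
}

func main() {
	c := newCounter()
	*c++
	fmt.Println(*c, localSum(5)) // 1 10
}
```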

Question: What is sync.Pool and when is it appropriate to use it?

Answer: sync.Pool is a concurrent-safe, temporary object cache that can be used to reuse objects between goroutines, reducing the number of allocations.

Explanation: sync.Pool is most effective for short-lived, high-throughput objects, such as temporary buffers or per-request scratch objects, where it reduces pressure on the garbage collector. However, objects in the pool can be reclaimed by the GC at any time without notice, so it should not be used for long-term storage or for objects that require explicit cleanup, like database connections.
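A minimal sketch of pooling temporary buffers (render is an illustrative function):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values; New runs only
// when the pool has nothing to reuse.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // the buffer may still hold a previous caller's data
	defer bufPool.Put(buf)
	buf.WriteString("hello, ")
	buf.WriteString(name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher")) // hello, gopher
}
```

Note that buf.String() copies the bytes out before the buffer is returned to the pool, so the result stays valid after the buffer is reused.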

Question: What are common strategies to minimize memory allocations in Go?

Answer:

  1. Pre-allocate slices and maps: If you know the approximate size, use make([]T, 0, size) and make(map[K]V, size) to set the capacity up front and avoid repeated growth and reallocation.

  2. Reuse buffers: Use bytes.Buffer or strings.Builder for building strings. Use sync.Pool to reuse buffers in high-throughput code.

  3. Be mindful of string vs. []byte conversions: Each conversion creates a copy. Avoid them in tight loops.

  4. Avoid retaining large backing arrays: When re-slicing, if you only need a small part of a large slice, the original large array will be kept in memory. Create a copy if necessary to release the old array.
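The pre-allocation and copy strategies above can be sketched as follows (head is an illustrative helper):

```go
package main

import "fmt"

// head copies the first n bytes into a fresh small slice so the
// large backing array of data can be garbage collected.
func head(data []byte, n int) []byte {
	out := make([]byte, n)
	copy(out, data[:n])
	return out
}

func main() {
	// Pre-allocate with a known capacity: 100 appends, zero regrowth.
	items := make([]int, 0, 100)
	for i := 0; i < 100; i++ {
		items = append(items, i)
	}
	fmt.Println(len(items), cap(items)) // 100 100

	big := make([]byte, 1<<20) // 1 MiB backing array
	big[0], big[1] = 'h', 'i'
	fmt.Println(string(head(big, 2))) // hi
}
```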

Question: What is bounds check elimination (BCE) and how can you help the compiler?

Answer: BCE removes redundant slice/array bounds checks when the compiler can prove indices are safe.

Explanation: Iterate with patterns like for i := 0; i < len(s); i++ { _ = s[i] }, where the index is visibly bounded by the slice length. Hoist len(s) into a local variable, and avoid complex index arithmetic that hides invariants from the compiler.
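A sketch of the hoisted-length pattern; in the gc compiler, the debug flag go build -gcflags='-d=ssa/check_bce' can report which accesses still carry bounds checks:

```go
package main

import "fmt"

// sum hoists len(s) into n; because the loop condition i < n
// bounds the index, the compiler can prove s[i] is in range and
// eliminate the per-iteration bounds check.
func sum(s []int) int {
	n := len(s)
	total := 0
	for i := 0; i < n; i++ {
		total += s[i]
	}
	return total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3, 4})) // 10
}
```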

Question: When does the compiler inline functions and why does it matter?

Answer: The compiler inlines small, simple functions to reduce call overhead.

Explanation: Inlining can enable further optimizations (like constant propagation) and reduce allocations, but can also increase binary size. Use go build -gcflags=all='-m' to inspect decisions.
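A minimal sketch: add is small enough that go build -gcflags='-m' typically reports it as inlinable, so the call in main compiles down to a plain addition (exact compiler messages vary by Go version):

```go
package main

import "fmt"

// add is trivially small; the compiler inlines it at the call
// site, removing call overhead and enabling constant folding.
func add(a, b int) int { return a + b }

func main() {
	fmt.Println(add(2, 3)) // 5
}
```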

Question: How do you tune the garbage collector?

Answer: Control GC aggressiveness with GOGC (default 100) and observe with GODEBUG=gctrace=1.

Explanation: Higher GOGC reduces GC frequency at the cost of higher memory usage; lower GOGC reduces memory footprint but increases GC CPU. Since Go 1.19, GOMEMLIMIT additionally sets a soft memory limit. Prefer reducing allocations before tuning GC knobs.

Question: How do you capture and interpret mutex/block profiles?

Answer: Enable with runtime.SetMutexProfileFraction(n) and runtime.SetBlockProfileRate(ns); collect via pprof endpoints or go test -mutexprofile/-blockprofile.

Explanation: Inspect where goroutines contend on locks (mutex profile) or block on channel and synchronization operations (block profile); hotspots often point to shrinking critical sections, sharding locks, or redesigning communication.
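A minimal sketch of enabling both profiles (contendedCount is an illustrative workload; in a real server you would expose the profiles through net/http/pprof):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// contendedCount runs workers goroutines that all take the same
// mutex, producing contention events for the profiles to record.
func contendedCount(workers int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	runtime.SetMutexProfileFraction(1) // sample every contention event
	runtime.SetBlockProfileRate(1)     // record blocking events >= 1ns
	fmt.Println(contendedCount(4))     // 4
}
```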

Question: When to use runtime/trace vs pprof?

Answer: Use pprof for CPU/heap hotspots; use runtime/trace to analyze causal timelines of goroutines, syscalls, network, and scheduler.

Explanation: go tool trace trace.out visualizes tasks/regions/lifecycles for latency investigations.
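A minimal sketch of capturing a trace to a temporary file, which can then be opened with go tool trace (busyWork is an illustrative workload):

```go
package main

import (
	"fmt"
	"os"
	"runtime/trace"
)

// busyWork is a placeholder for the code you want traced.
func busyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i
	}
	return sum
}

func main() {
	f, err := os.CreateTemp("", "trace-*.out")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		panic(err)
	}
	result := busyWork(1_000_000)
	trace.Stop() // flush the trace before inspecting the file

	fmt.Println(result, "trace written to", f.Name())
	// Inspect with: go tool trace <file>
}
```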