Detect goroutine leaks with runtime.NumGoroutine() trending, net/http/pprof
goroutine dumps, or Uber's goleak in tests. Fix by ensuring every goroutine has a
guaranteed exit path via context.Context cancellation, buffered channels, or
errgroup.

**Recommended tooling:** go.uber.org/goleak with defer goleak.VerifyNone(t) in tests;
http://localhost:6060/debug/pprof/goroutine?debug=2 in production.

**Version notes:** goleak v1.3.0 supports the two most recent Go minor versions.
synctest is available in Go 1.24+. An experimental goroutineleak pprof profile
ships in Go 1.26+. There is no goroutine.Kill(); goroutines must exit on their own.

**Caveats:**

- goleak.VerifyNone(t) is incompatible with t.Parallel(). Use
  goleak.VerifyTestMain(m) for parallel test packages. [src2, src5]
- runtime.NumGoroutine() includes runtime-internal goroutines (GC, finalizer, signal
  handler), so raw count comparisons can produce false positives. [src6]
- Don't rely on go vet or the race detector to find goroutine leaks; neither detects
  them. [src4]
- The synctest package (Go 1.24+) only detects goroutines blocked on synchronization
  primitives. CPU-bound infinite loops are not detected. [src3]

| # | Cause | Likelihood | Signature | Fix |
|---|---|---|---|---|
| 1 | Unbuffered channel — forgotten sender | ~35% | Goroutine blocked on ch <- with no receiver | Use buffered channel make(chan T, 1) or drain before return [src1] |
| 2 | Unbuffered channel — forgotten receiver | ~20% | Goroutine blocked on <-ch with no sender/close | Close channel when sender is done: defer close(ch) [src1] |
| 3 | Missing context cancellation | ~20% | Goroutine in select{} without ctx.Done() case | Always pass and check context.Context [src7] |
| 4 | Early return skipping channel reads | ~10% | Multiple goroutines spawned, only first result consumed | Use errgroup.Group or drain all channels [src4] |
| 5 | Range over unclosed channel | ~5% | for v := range ch blocks forever | Sender must close(ch) when done [src3] |
| 6 | Infinite loop without exit condition | ~5% | Goroutine in for{} with no return | Add ctx.Done() or done channel check [src7] |
| 7 | Leaked time.Ticker goroutine | ~3% | time.NewTicker without ticker.Stop() | Always defer ticker.Stop() [src7] |
| 8 | Orphaned background worker | ~2% | Library starts goroutine; caller forgets Stop()/Close() | Follow library contract — call cleanup methods [src3] |
```
START — suspected goroutine leak
├── In tests?
│   ├── YES → Add defer goleak.VerifyNone(t) [src2]
│   │   ├── Using t.Parallel()? → Use goleak.VerifyTestMain(m) [src5]
│   │   └── Go 1.24+? → Consider synctest.Test() [src3]
│   └── NO ↓
├── In production/development?
│   ├── Can add pprof endpoint?
│   │   ├── YES → import _ "net/http/pprof"; /debug/pprof/goroutine?debug=2 [src6]
│   │   └── NO → Log runtime.NumGoroutine() periodically [src7]
│   └── Have Prometheus? → Export go_goroutines; alert on sustained increase [src7]
│
├── Leak confirmed — what is blocked?
│   ├── Channel send (ch <-) → Receiver missing or returned early
│   │   ├── Can buffer? → make(chan T, 1) [src1]
│   │   └── Multiple senders? → errgroup or drain all channels [src4]
│   ├── Channel receive (<-ch) → Sender never sends or closes
│   │   └── Ensure sender calls close(ch) [src1]
│   ├── select{} without ctx.Done() → Add cancellation case [src7]
│   ├── Mutex/lock → Check for deadlock (separate issue)
│   └── I/O operation → Add timeout via context or Deadline [src7]
│
└── DEFAULT → Use pprof goroutine dump to identify blocked stack [src6]
```
The fastest way to catch goroutine leaks is in tests. [src2, src5]
```shell
go get -u go.uber.org/goleak
```
```go
package mypackage

import (
	"testing"

	"go.uber.org/goleak"
)

func TestNoLeak(t *testing.T) {
	defer goleak.VerifyNone(t)
	// Your test code here.
	// goleak checks for unexpected goroutines when the test exits.
}
```
Verify: go test -v ./... — if a goroutine leaks, goleak prints the leaked
goroutine's stack trace and fails the test.
For parallel tests or package-wide coverage. [src2, src5]
```go
package mypackage

import (
	"testing"

	"go.uber.org/goleak"
)

func TestMain(m *testing.M) {
	goleak.VerifyTestMain(m)
}
```
Verify: go test -v ./... — all tests checked for leaks after the entire suite
finishes.
For production or development environments. [src6, src7]
```go
import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
	// Bind pprof to a separate internal port in production
	go func() {
		http.ListenAndServe("localhost:6060", nil)
	}()
	// ... rest of application
}
```
Verify:
curl http://localhost:6060/debug/pprof/goroutine?debug=2 — shows full stack traces. Look for
goroutines blocked on channel operations or select statements.
For quick sanity checks and alerting. [src6, src7]
```go
import (
	"log"
	"runtime"
	"time"
)

func monitorGoroutines(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	baseline := runtime.NumGoroutine()
	for range ticker.C {
		current := runtime.NumGoroutine()
		if current > baseline*2 {
			log.Printf("WARNING: goroutine count %d exceeds 2x baseline %d",
				current, baseline)
		}
	}
}
```
Verify: Run under load, then idle. Goroutine count should return to baseline.
No third-party dependencies needed. [src3]
```go
package mypackage

import (
	"testing"
	"testing/synctest"
)

func TestNoLeakSynctest(t *testing.T) {
	synctest.Test(t, func(t *testing.T) {
		// Test code with goroutines
		synctest.Wait()
	})
	// Panics if blocked goroutines remain
}
```
Verify: GOEXPERIMENT=synctest go test -v ./... on Go 1.24; from Go 1.25 no
experiment flag is needed. Note that the API changed between releases: Go 1.24's
experimental entry point was synctest.Run(func()), while the synctest.Test form
shown above is the stable Go 1.25 API.
Identify the specific blocked stacks. [src6]
```shell
# Full stack traces
curl -s http://localhost:6060/debug/pprof/goroutine?debug=2

# Grouped summary
curl -s http://localhost:6060/debug/pprof/goroutine?debug=1

# Interactive analysis
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Commands: top10, list funcName, web
```
Verify: Blocked goroutines show stacks ending in chan send,
chan receive, select, or sync.Mutex.Lock.
```go
// Input: Function that spawns a goroutine to do work with a timeout
// Output: Result or timeout error — no goroutine leak
func fetchWithTimeout(ctx context.Context) (Result, error) {
	ch := make(chan Result, 1) // buffered — sender never blocks
	go func() {
		time.Sleep(2 * time.Second)
		ch <- Result{Data: "success", Err: nil}
		// Even if ctx cancelled, send completes into buffer
	}()
	select {
	case res := <-ch:
		return res, res.Err
	case <-ctx.Done():
		return Result{}, ctx.Err()
	}
}
```
```go
// Input: Multiple concurrent tasks where failure cancels the rest
// Output: First error encountered, all goroutines cleaned up
func fetchAll(ctx context.Context, urls []string) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, url := range urls { // Go 1.22+: url is per-iteration; older Go needs url := url
		g.Go(func() error {
			req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
			if err != nil {
				return err
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return err // cancels ctx for others
			}
			defer resp.Body.Close()
			if resp.StatusCode != 200 {
				return fmt.Errorf("%s returned %d", url, resp.StatusCode)
			}
			return nil
		})
	}
	return g.Wait()
}
```
```go
// Input: Stream of jobs
// Output: Processed results with bounded goroutine count
func workerPool(ctx context.Context, jobs <-chan int, n int) <-chan string {
	results := make(chan string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				select {
				case job, ok := <-jobs:
					if !ok {
						return // channel closed
					}
					results <- fmt.Sprintf("worker %d: job %d", id, job)
				case <-ctx.Done():
					return // cancelled
				}
			}
		}(i)
	}
	go func() { wg.Wait(); close(results) }()
	return results
}
```
```go
// BAD — goroutine writing to ch2 leaks when ch1 returns error [src4]
func fetchTwo() error {
	ch1 := make(chan error)
	ch2 := make(chan error)
	go func() { ch1 <- doWork1() }()
	go func() { ch2 <- doWork2() }()
	if err := <-ch1; err != nil {
		return err // ch2 sender blocks forever — LEAKED
	}
	return <-ch2
}
```
```go
// GOOD — errgroup waits for all goroutines; no leak [src4]
func fetchTwo() error {
	var g errgroup.Group
	g.Go(func() error { return doWork1() })
	g.Go(func() error { return doWork2() })
	return g.Wait() // waits for both; returns first error
}
```
```go
// BAD — goroutine sending to ch blocks forever after timeout [src1]
func fetchWithTimeout() (string, error) {
	ch := make(chan string) // unbuffered!
	go func() {
		result := slowOperation()
		ch <- result // blocks forever if main timed out
	}()
	select {
	case r := <-ch:
		return r, nil
	case <-time.After(1 * time.Second):
		return "", fmt.Errorf("timeout") // goroutine LEAKED
	}
}
```
```go
// GOOD — buffered channel lets goroutine complete [src1]
func fetchWithTimeout() (string, error) {
	ch := make(chan string, 1) // buffered!
	go func() {
		result := slowOperation()
		ch <- result // always succeeds into buffer
	}()
	select {
	case r := <-ch:
		return r, nil
	case <-time.After(1 * time.Second):
		return "", fmt.Errorf("timeout") // goroutine exits cleanly
	}
}
```
```go
// BAD — consumer blocks forever because nobody closes ch [src3]
func process(items []int) <-chan int {
	ch := make(chan int)
	go func() {
		for _, item := range items {
			ch <- item * 2
		}
		// BUG: forgot close(ch)
	}()
	return ch
}

// for result := range process(items) { ... } // blocks forever
```
```go
// GOOD — close(ch) signals consumer to exit range loop [src3]
func process(items []int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch) // always close when done sending
		for _, item := range items {
			ch <- item * 2
		}
	}()
	return ch
}

// for result := range process(items) { ... } // exits cleanly
```
**Common gotchas:**

- Creating ctx, cancel := context.WithCancel(parentCtx) (or WithTimeout) and never
  calling cancel() leaks the context's resources (for WithTimeout, its timer) until
  the parent is cancelled. Always defer cancel() immediately. [src7]
- time.After in a loop: each call in a select inside a loop creates a new timer that
  isn't collected until it fires (before Go 1.23). Use time.NewTimer/timer.Reset()
  instead. [src7]
- Third-party libraries may keep long-lived background goroutines. Use
  goleak.IgnoreTopFunction("google.golang.org/grpc...") to whitelist known safe
  goroutines. [src2, src5]
- select without a done channel: an empty select{} blocks the goroutine forever.
  Always include case <-ctx.Done(): return. [src7]
- A producer that checks ctx.Done() can still leak: if the consumer has already
  returned, the producer's final send still blocks. Use buffered channels or a drain
  goroutine. [src1, src4] When in doubt, add defer goleak.VerifyNone(t). [src2]

```shell
# === pprof goroutine dump (full stack traces) ===
curl -s http://localhost:6060/debug/pprof/goroutine?debug=2

# === pprof goroutine summary (grouped by stack) ===
curl -s http://localhost:6060/debug/pprof/goroutine?debug=1

# === Interactive pprof analysis ===
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Commands: top10, list funcName, web, traces

# === Compare profiles (before/after load test) ===
go tool pprof -base goroutine_before.pb.gz goroutine_after.pb.gz

# === Run tests with goleak ===
go test -v -run TestMyFunc ./...

# === Check goroutine count in tests ===
go test -v -count=1 ./... 2>&1 | grep -i "goroutine\|leak"

# === Go 1.24+ synctest ===
GOEXPERIMENT=synctest go test -v ./...
```
| Feature / Tool | Available Since | Notes |
|---|---|---|
| runtime.NumGoroutine() | Go 1.0 | Built-in; includes runtime goroutines [src7] |
| net/http/pprof goroutine profile | Go 1.0 | debug=1 (summary), debug=2 (full stacks) [src6] |
| context.WithCancel / WithTimeout | Go 1.7 | Standard cancellation pattern [src7] |
| golang.org/x/sync/errgroup | Go 1.7 (module) | Coordinated goroutine groups [src4] |
| go.uber.org/goleak v1.0 | 2018 | VerifyNone, VerifyTestMain [src2] |
| goleak v1.1.0 (IgnoreCurrent) | 2021 | Filter pre-existing goroutines [src5] |
| goleak v1.3.0 (IgnoreAnyFunction) | 2024 | Match function anywhere in stack [src5] |
| testing/synctest | Go 1.24 (2025-02) | Built-in leak detection in tests [src3] |
| runtime/pprof goroutineleak profile | Go 1.26 (experimental) | GC-based leak detection [src3, src8] |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Goroutine count grows during idle | Memory grows but goroutine count stable | Memory profiler (pprof heap) |
| Test occasionally hangs or times out | Test fails deterministically with stack trace | Standard debugging / go vet |
| pprof shows blocked goroutine stacks | All goroutines are active (CPU-bound) | CPU profiler (pprof profile) |
| Channel operations block indefinitely | Mutex deadlock (all goroutines blocked) | Runtime deadlock error / pprof mutex profile |
| Need CI-integrated leak detection | Production monitoring only | Prometheus go_goroutines metric |
- **goleak uses polling with exponential backoff:** it checks up to 20 times
  with delays from 1us to 100ms. Very fast goroutines that start and stop between
  checks may not be caught. [src2]
- **synctest is still evolving:** the API changed between Go 1.24 and 1.25. Pin
  your Go version and check release notes before upgrading. [src3]
- **The goroutineleak pprof profile (Go 1.26) uses GC marking:** it cannot detect
  goroutines blocked on reachable synchronization objects (e.g., a global channel).
  It only catches goroutines blocked on unreachable objects. [src3, src8]
- **The runtime.NumGoroutine() baseline varies:** HTTP servers, database connection
  pools, and gRPC clients all maintain background goroutines. Establish a
  per-application baseline before alerting. [src6]