How Do I Detect and Fix Goroutine Leaks in Go?

Type: Software Reference Confidence: 0.93 Sources: 8 Verified: 2026-02-20 Freshness: stable

TL;DR

Goroutines leak when they block forever: a channel send with no receiver, a receive with no sender or close, or a select/loop with no cancellation path. Catch leaks in tests with goleak (or testing/synctest on recent Go), confirm them at runtime with the pprof goroutine profile, and fix them by buffering channels, closing channels when the sender finishes, and propagating context.Context.

Constraints

Quick Reference

| # | Cause | Likelihood | Signature | Fix |
|---|-------|------------|-----------|-----|
| 1 | Unbuffered channel — forgotten sender | ~35% | Goroutine blocked on ch <- with no receiver | Use buffered channel make(chan T, 1) or drain before return [src1] |
| 2 | Unbuffered channel — forgotten receiver | ~20% | Goroutine blocked on <-ch with no sender/close | Close channel when sender is done: defer close(ch) [src1] |
| 3 | Missing context cancellation | ~20% | Goroutine in select{} without ctx.Done() case | Always pass and check context.Context [src7] |
| 4 | Early return skipping channel reads | ~10% | Multiple goroutines spawned, only first result consumed | Use errgroup.Group or drain all channels [src4] |
| 5 | Range over unclosed channel | ~5% | for v := range ch blocks forever | Sender must close(ch) when done [src3] |
| 6 | Infinite loop without exit condition | ~5% | Goroutine in for{} with no return | Add ctx.Done() or done channel check [src7] |
| 7 | Unstopped time.Ticker | ~3% | time.NewTicker without ticker.Stop() (leaks the timer resource) | Always defer ticker.Stop() [src7] |
| 8 | Orphaned background worker | ~2% | Library starts goroutine; caller forgets Stop()/Close() | Follow library contract — call cleanup methods [src3] |

Decision Tree

START — suspected goroutine leak
├── In tests?
│   ├── YES → Add defer goleak.VerifyNone(t) [src2]
│   │   ├── Using t.Parallel()? → Use goleak.VerifyTestMain(m) [src5]
│   │   └── Go 1.24+? → Consider testing/synctest (stable API in Go 1.25) [src3]
│   └── NO ↓
├── In production/development?
│   ├── Can add pprof endpoint?
│   │   ├── YES → import _ "net/http/pprof"; /debug/pprof/goroutine?debug=2 [src6]
│   │   └── NO → Log runtime.NumGoroutine() periodically [src7]
│   └── Have Prometheus? → Export go_goroutines; alert on sustained increase [src7]
│
├── Leak confirmed — what is blocked?
│   ├── Channel send (ch <-) → Receiver missing or returned early
│   │   ├── Can buffer? → make(chan T, 1) [src1]
│   │   └── Multiple senders? → errgroup or drain all channels [src4]
│   ├── Channel receive (<-ch) → Sender never sends or closes
│   │   └── Ensure sender calls close(ch) [src1]
│   ├── select{} without ctx.Done() → Add cancellation case [src7]
│   ├── Mutex/lock → Check for deadlock (separate issue)
│   └── I/O operation → Add timeout via context or Deadline [src7]
│
└── DEFAULT → Use pprof goroutine dump to identify blocked stack [src6]

Step-by-Step Guide

1. Add goleak to your test suite

The fastest way to catch goroutine leaks is in tests. [src2, src5]

go get -u go.uber.org/goleak

package mypackage

import (
    "testing"
    "go.uber.org/goleak"
)

func TestNoLeak(t *testing.T) {
    defer goleak.VerifyNone(t)
    // Your test code here.
    // goleak checks for unexpected goroutines when the test exits.
}

Verify: go test -v ./... — if a goroutine leaks, goleak prints the leaked goroutine's stack trace and fails the test.

2. Use goleak.VerifyTestMain for package-wide detection

For parallel tests or package-wide coverage. [src2, src5]

package mypackage

import (
    "testing"
    "go.uber.org/goleak"
)

func TestMain(m *testing.M) {
    goleak.VerifyTestMain(m)
}

Verify: go test -v ./... — all tests checked for leaks after the entire suite finishes.

3. Enable pprof for runtime detection

For production or development environments. [src6, src7]

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
    // Bind pprof to a separate internal port in production
    go func() {
        if err := http.ListenAndServe("localhost:6060", nil); err != nil {
            log.Println("pprof server:", err)
        }
    }()
    // ... rest of application
}

Verify: curl http://localhost:6060/debug/pprof/goroutine?debug=2 — shows full stack traces. Look for goroutines blocked on channel operations or select statements.

4. Monitor runtime.NumGoroutine() over time

For quick sanity checks and alerting. [src6, src7]

import (
    "log"
    "runtime"
    "time"
)

func monitorGoroutines(interval time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    baseline := runtime.NumGoroutine()
    for range ticker.C {
        current := runtime.NumGoroutine()
        if current > baseline*2 {
            log.Printf("WARNING: goroutine count %d exceeds 2x baseline %d",
                current, baseline)
        }
    }
}

Verify: Run under load, then idle. Goroutine count should return to baseline.

5. Use synctest for built-in detection (Go 1.24+)

No third-party dependencies needed. [src3]

package mypackage

import (
    "testing"
    "testing/synctest"
)

func TestNoLeakSynctest(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
        // Test code with goroutines
        synctest.Wait() // blocks until all goroutines in the bubble are durably blocked
    })
    // Fails if blocked goroutines remain when the bubble exits
}

Verify: go test -v ./... on Go 1.25+, where synctest.Test is stable. On Go 1.24 the package is experimental: build with GOEXPERIMENT=synctest and use the experimental synctest.Run(func() { ... }) entry point instead of synctest.Test.

6. Analyze pprof goroutine dump to find the leak

Identify the specific blocked stacks. [src6]

# Full stack traces
curl -s http://localhost:6060/debug/pprof/goroutine?debug=2

# Grouped summary
curl -s http://localhost:6060/debug/pprof/goroutine?debug=1

# Interactive analysis
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Commands: top10, list funcName, web

Verify: Leaked goroutines show wait states such as [chan send], [chan receive], [select], or [semacquire] (mutex) in the debug=2 dump.

Code Examples

Go: Fix forgotten sender with buffered channel

// Input:  Function that spawns a goroutine to do work with a timeout
// Output: Result or timeout error — no goroutine leak

func fetchWithTimeout(ctx context.Context) (Result, error) {
    ch := make(chan Result, 1) // buffered — sender never blocks

    go func() {
        time.Sleep(2 * time.Second)
        ch <- Result{Data: "success", Err: nil}
        // Even if ctx cancelled, send completes into buffer
    }()

    select {
    case res := <-ch:
        return res, res.Err
    case <-ctx.Done():
        return Result{}, ctx.Err()
    }
}

Go: Fix early return with errgroup

// Input:  Multiple concurrent tasks where failure cancels the rest
// Output: First error encountered, all goroutines cleaned up

func fetchAll(ctx context.Context, urls []string) error {
    g, ctx := errgroup.WithContext(ctx)

    for _, url := range urls {
        url := url // capture per iteration (required before Go 1.22)
        g.Go(func() error {
            req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
            if err != nil {
                return err
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return err // first error cancels ctx for the others
            }
            defer resp.Body.Close()
            if resp.StatusCode != http.StatusOK {
                return fmt.Errorf("%s returned %d", url, resp.StatusCode)
            }
            return nil
        })
    }
    return g.Wait()
}

Go: Worker pool preventing unbounded goroutines

// Input:  Stream of jobs
// Output: Processed results with bounded goroutine count

func workerPool(ctx context.Context, jobs <-chan int, n int) <-chan string {
    results := make(chan string, n)
    var wg sync.WaitGroup

    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for {
                select {
                case job, ok := <-jobs:
                    if !ok { return } // channel closed
                    results <- fmt.Sprintf("worker %d: job %d", id, job)
                case <-ctx.Done():
                    return // cancelled
                }
            }
        }(i)
    }

    go func() { wg.Wait(); close(results) }()
    return results
}

Anti-Patterns

Wrong: Unbuffered channel with early return

// BAD — goroutine writing to ch2 leaks when ch1 returns error [src4]
func fetchTwo() error {
    ch1 := make(chan error)
    ch2 := make(chan error)
    go func() { ch1 <- doWork1() }()
    go func() { ch2 <- doWork2() }()
    if err := <-ch1; err != nil {
        return err // ch2 sender blocks forever — LEAKED
    }
    return <-ch2
}

Correct: Use errgroup to coordinate goroutines

// GOOD — errgroup waits for all goroutines; no leak [src4]
func fetchTwo() error {
    var g errgroup.Group
    g.Go(func() error { return doWork1() })
    g.Go(func() error { return doWork2() })
    return g.Wait() // waits for both; returns first error
}

Wrong: Goroutine blocked after timeout

// BAD — goroutine sending to ch blocks forever after timeout [src1]
func fetchWithTimeout() (string, error) {
    ch := make(chan string) // unbuffered!
    go func() {
        result := slowOperation()
        ch <- result // blocks forever if main timed out
    }()
    select {
    case r := <-ch:
        return r, nil
    case <-time.After(1 * time.Second):
        return "", fmt.Errorf("timeout") // goroutine LEAKED
    }
}

Correct: Buffer the channel

// GOOD — buffered channel lets goroutine complete [src1]
func fetchWithTimeout() (string, error) {
    ch := make(chan string, 1) // buffered!
    go func() {
        result := slowOperation()
        ch <- result // always succeeds into buffer
    }()
    select {
    case r := <-ch:
        return r, nil
    case <-time.After(1 * time.Second):
        return "", fmt.Errorf("timeout") // goroutine exits cleanly
    }
}

Wrong: Range over channel with no close

// BAD — consumer blocks forever because nobody closes ch [src3]
func process(items []int) <-chan int {
    ch := make(chan int)
    go func() {
        for _, item := range items {
            ch <- item * 2
        }
        // BUG: forgot close(ch)
    }()
    return ch
}
// for result := range process(items) { ... } // blocks forever

Correct: Always close the channel when done

// GOOD — close(ch) signals consumer to exit range loop [src3]
func process(items []int) <-chan int {
    ch := make(chan int)
    go func() {
        defer close(ch) // always close when done sending
        for _, item := range items {
            ch <- item * 2
        }
    }()
    return ch
}
// for result := range process(items) { ... } // exits cleanly

Common Pitfalls

Diagnostic Commands

# === pprof goroutine dump (full stack traces) ===
curl -s http://localhost:6060/debug/pprof/goroutine?debug=2

# === pprof goroutine summary (grouped by stack) ===
curl -s http://localhost:6060/debug/pprof/goroutine?debug=1

# === Interactive pprof analysis ===
go tool pprof http://localhost:6060/debug/pprof/goroutine
# Commands: top10, list funcName, web, traces

# === Compare profiles (before/after load test) ===
go tool pprof -base goroutine_before.pb.gz goroutine_after.pb.gz

# === Run tests with goleak ===
go test -v -run TestMyFunc ./...

# === Check goroutine count in tests ===
go test -v -count=1 ./... 2>&1 | grep -i "goroutine\|leak"

# === Go 1.24+ synctest ===
GOEXPERIMENT=synctest go test -v ./...

Version History & Compatibility

| Feature / Tool | Available Since | Notes |
|----------------|-----------------|-------|
| runtime.NumGoroutine() | Go 1.0 | Built-in; includes runtime goroutines [src7] |
| net/http/pprof goroutine profile | Go 1.0 | debug=1 (summary), debug=2 (full stacks) [src6] |
| context.WithCancel / WithTimeout | Go 1.7 | Standard cancellation pattern [src7] |
| golang.org/x/sync/errgroup | Go 1.7 (module) | Coordinated goroutine groups [src4] |
| go.uber.org/goleak v1.0 | 2018 | VerifyNone, VerifyTestMain [src2] |
| goleak v1.1.0 (IgnoreCurrent) | 2021 | Filter pre-existing goroutines [src5] |
| goleak v1.3.0 (IgnoreAnyFunction) | 2024 | Match function anywhere in stack [src5] |
| testing/synctest | Go 1.24 (2025-02) | Experimental behind GOEXPERIMENT=synctest; stable (synctest.Test) in Go 1.25 [src3] |
| runtime/pprof goroutineleak profile | Go 1.26 (experimental) | GC-based leak detection [src3, src8] |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|----------|----------------|-------------|
| Goroutine count grows during idle | Memory grows but goroutine count stable | Memory profiler (pprof heap) |
| Test occasionally hangs or times out | Test fails deterministically with stack trace | Standard debugging / go vet |
| pprof shows blocked goroutine stacks | All goroutines are active (CPU-bound) | CPU profiler (pprof profile) |
| Channel operations block indefinitely | Mutex deadlock (all goroutines blocked) | Deadlock detector (go vet, -race) |
| Need CI-integrated leak detection | Production monitoring only | Prometheus go_goroutines metric |

Important Caveats

Related Units