Go Memory and CPU Profiling: Listen to Your Runtime
Mert Tosun
Your Go service is up, tests pass, deploy succeeds, but production starts slowing down after traffic spikes.
CPU jumps. Memory grows. Restarts help for a while, then the same issue appears again.
This is where profiling becomes your best friend.
Why pprof matters
Go ships with a great built-in profiler: pprof.
With pprof you can answer concrete questions:
- Which function burns the most CPU?
- Where are allocations coming from?
- Which objects remain on the heap?
- Are goroutines leaking?
Instead of guessing, you measure.
Enable profiling endpoints
import (
    "net/http"
    _ "net/http/pprof" // side-effect import: registers /debug/pprof handlers on http.DefaultServeMux
)

func main() {
    // Serve pprof on localhost only; never expose this port publicly.
    go http.ListenAndServe("localhost:6060", nil)
    // start your app...
}
Now you have:
- /debug/pprof/profile (CPU)
- /debug/pprof/heap (memory)
- /debug/pprof/goroutine (goroutines)
CPU profiling
Capture 30 seconds from a live service:
go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=30"
Then use:
- top10 for the hottest functions
- list <func> for line-level cost
- web or -http=:8081 for flame graphs
Memory profiling
go tool pprof http://localhost:6060/debug/pprof/heap
Use:
- -inuse_space to inspect currently retained memory
- -alloc_space to inspect total allocation churn
If in-use memory keeps climbing with stable traffic, check retained references and goroutine lifecycles.
Benchmark + profile loop
go test -bench=. -cpuprofile=cpu.prof -memprofile=mem.prof
go tool pprof cpu.prof
go tool pprof mem.prof
This is a reliable way to validate optimizations before production rollout.
Common findings in Go services
- Repeated JSON encoding dominates CPU
- String concatenation inside loops creates allocation pressure
- Missing slice pre-allocation (make(..., 0, n)) causes repeated growth
- Goroutines blocked on channels never exit
Closing thought
Performance engineering in Go is not about folklore.
It is a data problem.
Profile first, optimize second, and keep changes measurable.