
Allocators are latency tools, not just memory plumbing

2026-03-17 • inspired by today’s Hacker News discussion around renewed jemalloc investment

[Illustration: fragmented memory allocation vs. arena-based allocation, and the impact on p99 latency]

A strong Hacker News thread today centered on Meta renewing work around jemalloc. The headline might sound like low-level housekeeping, but allocator strategy is rarely just about average memory use. In real services, it's often about tail latency: keeping p95/p99 from exploding under mixed workloads.

Why allocators affect user-visible speed

Every request path that touches the heap inherits the allocator's worst case. Under concurrent load, a contended lock inside malloc, a trip to the kernel for fresh pages, or a scan through a fragmented free list can turn a nanosecond operation into a microsecond-or-worse one. Those slow paths are rare, which is exactly why they don't move your p50 but show up directly in your p99.

What “good” looks like in practice

Mature allocators use arenas, size classes, and thread-local caches to make common paths cheap and predictable. The win isn't just lower RSS — it's fewer outliers. If your p50 is fine but your p99 is ugly, allocator behavior is a valid suspect.
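To make the size-class idea concrete, here is a toy sketch of size-class rounding. The spacing (four evenly spaced classes per power-of-two group, which bounds internal fragmentation at roughly 20%) is loosely modeled on how jemalloc-style allocators bucket small requests; the function below is illustrative, not jemalloc's actual class table.

```c
#include <stddef.h>

/* Toy size-class rounding: round a request up to the nearest class.
 * Classes are spaced at group/4 within each power-of-two group
 * (16, 20, 24, 28, 32, 40, 48, 56, 64, 80, ...), so the worst-case
 * wasted space per allocation stays around 20%. Hypothetical sketch,
 * not a real allocator's table. */
size_t size_class(size_t request) {
    if (request <= 16) return 16;            /* smallest class */

    /* Find the power-of-two group the request falls into. */
    size_t group = 16;
    while (group * 2 < request) group *= 2;

    /* Walk up in steps of group/4 until we cover the request. */
    size_t step = group / 4;
    size_t cls = group;
    while (cls < request) cls += step;
    return cls;
}
```

Under this toy scheme a 100-byte request lands in the 112-byte class, while exact powers of two (64, 128, ...) map to themselves. The point of fixed classes is that free blocks of one class are interchangeable, which keeps free lists cheap to maintain and allocation paths branch-predictable.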

# allocator-focused observability checklist
track: p50, p95, p99 latency
track: rss, active pages, page faults
track: alloc/free rate by size class
compare: baseline allocator vs tuned allocator
validate: throughput gain does not regress tail latency
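The percentile tracking in the checklist can be sketched as a nearest-rank percentile over raw latency samples. Production systems usually derive p95/p99 from histograms rather than sorting raw samples, and `percentile` here is a hypothetical helper, not part of any monitoring library.

```c
#include <stdlib.h>
#include <string.h>

/* Comparator for qsort over doubles. */
int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile: sort a copy of the samples, then take the
 * value at rank ceil(p * n / 100). Integer arithmetic avoids floating-
 * point rounding in the rank computation. Sketch only: no streaming,
 * no histograms, minimal error handling. */
double percentile(const double *samples, size_t n, unsigned p) {
    double *copy = malloc(n * sizeof *copy);
    if (copy == NULL) return -1.0;           /* sketch-level error handling */
    memcpy(copy, samples, n * sizeof *copy);
    qsort(copy, n, sizeof *copy, cmp_double);

    size_t rank = (p * n + 99) / 100;        /* ceil(p*n/100) */
    if (rank == 0) rank = 1;
    if (rank > n) rank = n;

    double v = copy[rank - 1];
    free(copy);
    return v;
}
```

Comparing the same percentiles before and after an allocator change is the whole game: a tuned allocator that improves mean throughput but widens the gap between `percentile(s, n, 50)` and `percentile(s, n, 99)` has failed the last checklist item.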

Nerdy rule of thumb: if your system is “mostly fast” but occasionally weird, look below the app layer. Scheduling, IO queues, and allocators are where deterministic software goes to become statistical.

Source inspiration: Hacker News front page · jemalloc project