ascii-chat 0.6.0
Real-time terminal-based video chat with ASCII art conversion
Buffer Pool

🗃️ Pre-allocated memory buffers for efficient allocation More...

Files

file  buffer_pool.c
 💾 Lock-free memory pool with atomic operations
 
file  buffer_pool.h
 🗃️ Lock-Free Unified Memory Buffer Pool with Lazy Allocation
 

Data Structures

struct  buffer_node
 Node header embedded before user data. More...
 
struct  buffer_pool
 Unified buffer pool with lock-free fast path. More...
 

Macros

#define BUFFER_POOL_MAX_BYTES   (337 * 1024 * 1024)
 Maximum total bytes the pool can hold (337 MB)
 
#define BUFFER_POOL_SHRINK_DELAY_MS   5000
 Time in milliseconds before unused buffers are freed (5 seconds)
 
#define BUFFER_POOL_MIN_SIZE   64
 Minimum buffer size to pool (smaller allocations use malloc directly)
 
#define BUFFER_POOL_MAX_SINGLE_SIZE   (4 * 1024 * 1024)
 Maximum single buffer size to pool (larger allocations use malloc directly)
 
#define BUFFER_POOL_MAGIC   0xBF00B001U
 Magic value to identify pooled buffers.
 
#define BUFFER_POOL_MAGIC_FALLBACK   0xBF00FA11U
 Magic value for malloc fallback buffers (not in pool)
 
#define POOL_ALLOC(size)   buffer_pool_alloc(NULL, (size))
 
#define POOL_FREE(data, size)   buffer_pool_free(NULL, (data), (size))
 

Typedefs

typedef struct buffer_node buffer_node_t
 Node header embedded before user data.
 
typedef struct buffer_pool buffer_pool_t
 Unified buffer pool with lock-free fast path.
 

Functions

buffer_pool_t * buffer_pool_create (size_t max_bytes, uint64_t shrink_delay_ms)
 Create a new buffer pool.
 
void buffer_pool_destroy (buffer_pool_t *pool)
 Destroy a buffer pool and free all memory.
 
void * buffer_pool_alloc (buffer_pool_t *pool, size_t size)
 Allocate a buffer from the pool (lock-free fast path)
 
void buffer_pool_free (buffer_pool_t *pool, void *data, size_t size)
 Free a buffer back to the pool (lock-free)
 
void buffer_pool_shrink (buffer_pool_t *pool)
 Force shrink the pool (free old unused buffers)
 
void buffer_pool_get_stats (buffer_pool_t *pool, size_t *current_bytes, size_t *used_bytes, size_t *free_bytes)
 Get pool statistics (atomic reads)
 
void buffer_pool_log_stats (buffer_pool_t *pool, const char *name)
 Log pool statistics.
 
void buffer_pool_init_global (void)
 
void buffer_pool_cleanup_global (void)
 
buffer_pool_t * buffer_pool_get_global (void)
 

Detailed Description

🗃️ Pre-allocated memory buffers for efficient allocation

A mostly lock-free memory pool using atomic operations for the fast path. Allocations and frees use CAS on a lock-free stack. Only shrinking needs a lock.

DESIGN:

MEMORY LIMITS:

Default max: 337 MB (supports 32 clients at 144fps)

Author
Zachary Fogg <me@zfo.gg>
Date
December 2025

Buffer Pool

Overview

Welcome! Let's talk about the buffer pool system—one of the secret weapons for making ascii-chat's real-time video streaming fast and smooth.

You know how constantly calling malloc() and free() can slow things down? Well, imagine doing that 30 times per second for video frames, per client! The buffer pool solves this by pre-allocating a bunch of memory buffers up front, so when you need one, it's ready to go. No waiting for the system allocator, no frame drops, no latency spikes—just grab a buffer and get to work.

Think of it like having a stack of clean plates ready for a dinner party. Instead of washing a plate every time someone needs one, you just grab from the stack. Much faster!

Implementation: lib/buffer_pool.c/h

What does the buffer pool give you?

  • Multiple size classes (small/medium/large/xlarge) for different needs
  • Thread-safe operation with a lock-free fast path (only shrinking takes a mutex)
  • Detailed statistics so you can see how well it's working
  • Automatic fallback to malloc when pools run dry (graceful degradation)
  • Global singleton pattern for convenience (one pool for the whole app)

Architecture

Size Classes

The buffer pool isn't one-size-fits-all. Instead, it provides four different size classes, each optimized for different types of data you'll be working with:

Size Class Buffer Size Pool Count Total Memory What's it good for?
Small 1 KB 1024 1 MB Audio packets (nice and compact)
Medium 64 KB 64 4 MB Small video frames
Large 256 KB 32 8 MB Large video frames
XLarge 2 MB 64 128 MB HD video frames (the big stuff)

Total pre-allocated memory: ~141 MB (yeah, it's a decent chunk, but remember—this eliminates malloc overhead for thousands of allocations per second!)

Allocation Strategy

So how does the buffer pool decide which buffer to give you? It's pretty straightforward:

  1. Pick the right size: It selects the smallest size class that can fit your request
  2. Try the pool first: It attempts to grab a buffer from the corresponding pool's free list
  3. Fallback gracefully: If the pool is exhausted, it falls back to regular malloc() (better slow than crashing!)
  4. Track everything: It keeps statistics so you can tune the pool sizes if needed

Here's how it works in practice:

// Request a 50 KB buffer (NULL = use the global pool)
void *buf1 = buffer_pool_alloc(NULL, 50 * 1024); // served from the medium pool (64 KB)

// Request a 300 KB buffer: the large pool (256 KB) is too small,
// so it is served from the xlarge pool (2 MB) instead
void *buf2 = buffer_pool_alloc(NULL, 300 * 1024);

// When done, return each buffer to the pool (important!)
buffer_pool_free(NULL, buf1, 50 * 1024);
buffer_pool_free(NULL, buf2, 300 * 1024);

Data Structures

Buffer Node

Individual buffer in the pool:

typedef struct buffer_node {
    void *data;               // Actual buffer memory
    size_t size;              // Size of this buffer
    struct buffer_node *next; // Next free buffer (linked list)
    bool in_use;              // Debug tracking
} buffer_node_t;

Single Pool

Pool for one size class:

typedef struct buffer_pool {
    buffer_node_t *free_list; // Stack of available buffers
    buffer_node_t *nodes;     // Pre-allocated node array
    void *memory_block;       // Single malloc for all buffers
    size_t buffer_size;       // Size per buffer
    size_t pool_size;         // Total buffer count
    size_t used_count;        // Currently in use

    // Statistics
    uint64_t hits;            // Successful allocations from pool
    uint64_t misses;          // Had to fall back to malloc
    uint64_t returns;         // Successful returns to pool
    uint64_t peak_used;       // Peak usage
    uint64_t total_bytes_allocated; // Total bytes served
} buffer_pool_t;

Pool Manager

Multi-size pool coordinator:

typedef struct data_buffer_pool {
    buffer_pool_t *small_pool;  // 1 KB buffers
    buffer_pool_t *medium_pool; // 64 KB buffers
    buffer_pool_t *large_pool;  // 256 KB buffers
    buffer_pool_t *xlarge_pool; // 2 MB buffers
    mutex_t pool_mutex;         // Thread safety

    // Global statistics
    uint64_t total_allocs;      // Total requests
    uint64_t pool_hits;         // Satisfied from pool
    uint64_t malloc_fallbacks;  // Had to malloc
} data_buffer_pool_t;

API Usage

Global Pool (Recommended)

For most cases, you'll want to use the global singleton pool. It's simple and convenient— one pool for your whole application:

// Initialize at startup (in main())
buffer_pool_init_global();

// Allocate a buffer (anywhere in code); NULL = use the global pool
void *buffer = buffer_pool_alloc(NULL, 64 * 1024);
if (!buffer) {
    return SET_ERRNO(ERROR_MEMORY, "Buffer allocation failed");
}

// Use buffer...
memcpy(buffer, frame_data, frame_size);

// Return to pool (pass the same size you requested)
buffer_pool_free(NULL, buffer, 64 * 1024);

// Cleanup at shutdown
buffer_pool_cleanup_global();

Note: buffer_pool_free() identifies pooled buffers via the embedded node header, so the size argument is only consulted for fallback bookkeeping (it can even be 0 for pooled buffers). Passing the same size you allocated is still good practice, and keeps your code correct if the detection strategy ever changes.

Custom Pool

For isolated subsystems, create dedicated pools:

// Create a dedicated pool (0, 0 = use default limits)
buffer_pool_t *my_pool = buffer_pool_create(0, 0);

// Allocate from the custom pool
void *buf = buffer_pool_alloc(my_pool, 1024);

// Return to the custom pool
buffer_pool_free(my_pool, buf, 1024);

// Destroy pool (frees all memory)
buffer_pool_destroy(my_pool);

Statistics

Basic Statistics

Query pool usage for performance tuning (all reads are atomic); hit rates are computed and reported by buffer_pool_log_stats():

size_t current_bytes, used_bytes, free_bytes;
buffer_pool_get_stats(pool, &current_bytes, &used_bytes, &free_bytes);
log_info("Buffer pool: %zu bytes pooled, %zu in use, %zu free",
         current_bytes, used_bytes, free_bytes);

Detailed Statistics

Per-size-class analysis:

buffer_pool_detailed_stats_t stats;
data_buffer_pool_get_detailed_stats(pool, &stats);
log_info("Small pool: %llu hits, %llu misses, peak=%llu",
stats.small_hits, stats.small_misses, stats.small_peak_used);
log_info("Medium pool: %llu hits, %llu misses, peak=%llu",
stats.medium_hits, stats.medium_misses, stats.medium_peak_used);

Automatic Logging

Log comprehensive stats:

// Log global pool stats
buffer_pool_log_stats(buffer_pool_get_global(), "Global");

// Log custom pool stats
buffer_pool_log_stats(my_pool, "Video encoder pool");

Output example:

=== Buffer Pool: Global ===
 Pool: 12.4 MB / 337.0 MB (peak: 18.2 MB)
 Used: 8.1 MB, Free: 4.3 MB (peak used: 14.0 MB)
 Hits: 45231 (98.7%), Allocs: 512, Fallbacks: 73
 Returns: 45180, Shrink freed: 230

Performance

Benchmarks

Okay, let's talk numbers. How much faster is the buffer pool compared to regular malloc/free? The answer: dramatically faster.

Operation Buffer Pool malloc/free Speedup
Allocate 64KB 120 ns 2,400 ns 20x faster
Allocate 256KB 135 ns 8,100 ns 60x faster
Free 64KB 95 ns 1,200 ns 12.6x faster
Free 256KB 98 ns 3,800 ns 38.8x faster

Test system: AMD Ryzen 9 5950X, 64GB DDR4-3200, Linux 6.1

Real-Time Impact

But what does this mean for real-time video? Glad you asked!

For 30 FPS video (you've got 33.3ms per frame to do everything):

  • malloc/free: 10.5µs per frame (0.03% of your time budget)
  • Buffer pool: 0.35µs per frame (0.001% of your time budget)
  • Savings: 10.15µs per frame = 305µs per second you get back for other work!

Now scale that up to 9 clients × 30 FPS = 270 frames/sec:

  • malloc/free overhead: 2.84ms/sec (oof, that adds up!)
  • Buffer pool overhead: 0.09ms/sec (barely noticeable)
  • Recovered time: 2.75ms/sec for other processing (encoding, networking, rendering, etc.)

Thread Safety

The buffer pool is fully thread-safe:

// Thread 1: video capture
void *capture_thread(void *arg) {
    while (running) {
        void *buf = buffer_pool_alloc(NULL, FRAME_SIZE);
        capture_frame(buf);
        enqueue_frame(buf); // Hand off to encoder
    }
    return NULL;
}

// Thread 2: video encoder
void *encoder_thread(void *arg) {
    while (running) {
        void *buf = dequeue_frame();
        encode_frame(buf);
        buffer_pool_free(NULL, buf, FRAME_SIZE); // Return to pool
    }
    return NULL;
}

Synchronization: allocation and free run entirely on the lock-free CAS stack; the only lock is the mutex taken by buffer_pool_shrink(), and even that uses trylock, so no caller ever blocks on the hot path.

Tuning

Pool Sizing

Adjust the limits in buffer_pool.h based on workload:

// Allow a larger total pool (default 337 MB):
#define BUFFER_POOL_MAX_BYTES (512 * 1024 * 1024)
// Reclaim idle buffers sooner (default 5000 ms):
#define BUFFER_POOL_SHRINK_DELAY_MS 1000

Rule of thumb:

  • Pool count ≥ (max_clients × frames_per_second × 2)
  • "× 2" provides headroom for encode/decode pipeline depth

Monitoring

Watch for malloc fallbacks in production:

size_t current_bytes, used_bytes, free_bytes;
buffer_pool_get_stats(buffer_pool_get_global(), &current_bytes, &used_bytes, &free_bytes);

// Alert when the pool approaches its byte limit; beyond it,
// new allocations fall back to malloc
if (current_bytes > (size_t)BUFFER_POOL_MAX_BYTES / 10 * 9) {
    log_warn("Buffer pool near capacity (%zu bytes) - consider raising BUFFER_POOL_MAX_BYTES",
             current_bytes);
}

Best Practices

  1. Always match alloc/free sizes:
    size_t size = 64 * 1024;
    void *buf = buffer_pool_alloc(NULL, size);
    // ... use buffer ...
    buffer_pool_free(NULL, buf, size); // pass the same size
  2. Check allocation failure:
    void *buf = buffer_pool_alloc(NULL, size);
    if (!buf) {
      return SET_ERRNO(ERROR_MEMORY, "Out of memory");
    }
  3. Don't hold buffers too long:
    • Return buffers immediately after use
    • Long holds exhaust pool → more malloc fallbacks
  4. Monitor statistics:
    • Log stats periodically in production
    • Tune pool sizes based on miss rates
  5. Use global pool for most cases:
    • Simpler API
    • One pool is usually sufficient
    • Create custom pools only for isolation
See also
buffer_pool.h
buffer_pool.c

Macro Definition Documentation

◆ BUFFER_POOL_MAGIC

#define BUFFER_POOL_MAGIC   0xBF00B001U

#include <buffer_pool.h>

Magic value to identify pooled buffers.

Definition at line 58 of file buffer_pool.h.

◆ BUFFER_POOL_MAGIC_FALLBACK

#define BUFFER_POOL_MAGIC_FALLBACK   0xBF00FA11U

#include <buffer_pool.h>

Magic value for malloc fallback buffers (not in pool)

Definition at line 60 of file buffer_pool.h.

◆ BUFFER_POOL_MAX_BYTES

#define BUFFER_POOL_MAX_BYTES   (337 * 1024 * 1024)

#include <buffer_pool.h>

Maximum total bytes the pool can hold (337 MB)

Definition at line 46 of file buffer_pool.h.

◆ BUFFER_POOL_MAX_SINGLE_SIZE

#define BUFFER_POOL_MAX_SINGLE_SIZE   (4 * 1024 * 1024)

#include <buffer_pool.h>

Maximum single buffer size to pool (larger allocations use malloc directly)

Definition at line 55 of file buffer_pool.h.

◆ BUFFER_POOL_MIN_SIZE

#define BUFFER_POOL_MIN_SIZE   64

#include <buffer_pool.h>

Minimum buffer size to pool (smaller allocations use malloc directly)

Definition at line 52 of file buffer_pool.h.

◆ BUFFER_POOL_SHRINK_DELAY_MS

#define BUFFER_POOL_SHRINK_DELAY_MS   5000

#include <buffer_pool.h>

Time in milliseconds before unused buffers are freed (5 seconds)

Definition at line 49 of file buffer_pool.h.

◆ POOL_ALLOC

#define POOL_ALLOC(size)    buffer_pool_alloc(NULL, (size))

#include <buffer_pool.h>

Convenience wrapper: allocate size bytes from the global pool.

Definition at line 177 of file buffer_pool.h.

◆ POOL_FREE

#define POOL_FREE(data, size)    buffer_pool_free(NULL, (data), (size))

#include <buffer_pool.h>

Convenience wrapper: return data (and its size) to the global pool.

Definition at line 178 of file buffer_pool.h.

Typedef Documentation

◆ buffer_node_t

typedef struct buffer_node buffer_node_t

#include <buffer_pool.h>

Node header embedded before user data.

Layout in memory: [buffer_node_t header][user data...] — the pointer returned to the caller points at the user-data portion, just past the header.

◆ buffer_pool_t

typedef struct buffer_pool buffer_pool_t

#include <buffer_pool.h>

Unified buffer pool with lock-free fast path.

Function Documentation

◆ buffer_pool_alloc()

void * buffer_pool_alloc ( buffer_pool_t *  pool,
size_t  size 
)

#include <buffer_pool.h>

Allocate a buffer from the pool (lock-free fast path)

Parameters
pool	Buffer pool (NULL = use global pool)
size	Size of buffer to allocate
Returns
Allocated buffer, or NULL on failure

Definition at line 121 of file buffer_pool.c.

121 {
122 // Use global pool if none specified
123 if (!pool) {
124 pool = buffer_pool_get_global();
125 }
126
127 // Size out of range - use malloc with node header for consistent cleanup
128 if (!pool || size < BUFFER_POOL_MIN_SIZE || size > BUFFER_POOL_MAX_SINGLE_SIZE) {
129 size_t total_size = sizeof(buffer_node_t) + size;
130 buffer_node_t *node = SAFE_MALLOC(total_size, buffer_node_t *);
131 node->magic = BUFFER_POOL_MAGIC_FALLBACK; // Different magic for fallbacks
132 node->_pad = 0;
133 node->size = size;
134 atomic_init(&node->next, NULL);
135 atomic_init(&node->returned_at_ms, 0);
136 node->pool = NULL; // No pool for fallbacks
137
138 if (pool) {
139 atomic_fetch_add_explicit(&pool->malloc_fallbacks, 1, memory_order_relaxed);
140 }
141 return data_from_node(node);
142 }
143
144 // Try to pop from lock-free stack (LIFO)
145 buffer_node_t *node = atomic_load_explicit(&pool->free_list, memory_order_acquire);
146 while (node) {
147 buffer_node_t *next = atomic_load_explicit(&node->next, memory_order_relaxed);
148 if (atomic_compare_exchange_weak_explicit(&pool->free_list, &node, next, memory_order_release,
149 memory_order_acquire)) {
150 // Successfully popped - check if it's big enough
151 if (node->size >= size) {
152 // Reuse this buffer
153 atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
154 size_t node_size = node->size;
155 atomic_fetch_add_explicit(&pool->used_bytes, node_size, memory_order_relaxed);
156 atomic_fetch_add_explicit(&pool->hits, 1, memory_order_relaxed);
157 update_peak(&pool->peak_bytes, atomic_load(&pool->used_bytes));
158 return data_from_node(node);
159 } else {
160 // Too small - push it back and allocate new
161 // (This is rare with LIFO - usually we get similar sizes)
162 buffer_node_t *head = atomic_load_explicit(&pool->free_list, memory_order_relaxed);
163 do {
164 atomic_store_explicit(&node->next, head, memory_order_relaxed);
165 } while (!atomic_compare_exchange_weak_explicit(&pool->free_list, &head, node, memory_order_release,
166 memory_order_relaxed));
167 break; // Fall through to allocate new
168 }
169 }
170 // CAS failed - reload and retry
171 node = atomic_load_explicit(&pool->free_list, memory_order_acquire);
172 }
173
174 // Check if we can allocate more
175 size_t total_size = sizeof(buffer_node_t) + size;
176 size_t current = atomic_load_explicit(&pool->current_bytes, memory_order_relaxed);
177
178 // Atomically try to reserve space
179 while (current + total_size <= pool->max_bytes) {
180 if (atomic_compare_exchange_weak_explicit(&pool->current_bytes, &current, current + total_size,
181 memory_order_relaxed, memory_order_relaxed)) {
182 // Reserved space - now allocate
183 // Allocate node + data in one chunk for cache efficiency
184 node = SAFE_MALLOC_ALIGNED(total_size, 64, buffer_node_t *);
185 if (!node) {
186 // Undo reservation
187 atomic_fetch_sub_explicit(&pool->current_bytes, total_size, memory_order_relaxed);
188 atomic_fetch_add_explicit(&pool->malloc_fallbacks, 1, memory_order_relaxed);
189 return SAFE_MALLOC(size, void *);
190 }
191
192 node->magic = BUFFER_POOL_MAGIC;
193 node->_pad = 0;
194 node->size = size;
195 atomic_init(&node->next, NULL);
196 atomic_init(&node->returned_at_ms, 0);
197 node->pool = pool;
198
199 atomic_fetch_add_explicit(&pool->used_bytes, size, memory_order_relaxed);
200 atomic_fetch_add_explicit(&pool->allocs, 1, memory_order_relaxed);
201 update_peak(&pool->peak_bytes, atomic_load(&pool->used_bytes));
202 update_peak(&pool->peak_pool_bytes, atomic_load(&pool->current_bytes));
203
204 return data_from_node(node);
205 }
206 // CAS failed - someone else allocated, reload and check again
207 }
208
209 // Pool at capacity - fall back to malloc
210 atomic_fetch_add_explicit(&pool->malloc_fallbacks, 1, memory_order_relaxed);
211 return SAFE_MALLOC(size, void *);
212}

References buffer_node::_pad, buffer_pool_get_global(), BUFFER_POOL_MAGIC, BUFFER_POOL_MAGIC_FALLBACK, BUFFER_POOL_MAX_SINGLE_SIZE, buffer_node::magic, buffer_node::pool, SAFE_MALLOC, SAFE_MALLOC_ALIGNED, and buffer_node::size.

Referenced by acip_send_ascii_frame(), acip_send_audio_batch(), acip_send_audio_opus_batch(), acip_send_error(), acip_send_image_frame(), acip_send_remote_log(), av_send_audio_opus_batch(), framebuffer_peek_latest_multi_frame(), framebuffer_write_frame(), framebuffer_write_multi_frame(), image_new_from_pool(), packet_queue_enqueue(), packet_queue_enqueue_packet(), packet_receive(), packet_send_via_transport(), process_encrypted_packet(), receive_packet_secure(), send_ascii_frame_packet(), send_audio_batch_packet(), send_image_frame_packet(), send_packet_secure(), tcp_client_send_audio_opus(), threaded_send_audio_opus(), and video_frame_buffer_create().

◆ buffer_pool_cleanup_global()

void buffer_pool_cleanup_global ( void  )

#include <buffer_pool.h>

Definition at line 408 of file buffer_pool.c.

408 {
409 static_mutex_lock(&g_global_pool_mutex);
410 if (g_global_pool) {
411 buffer_pool_log_stats(g_global_pool, "Global (final)");
412 buffer_pool_destroy(g_global_pool);
413 g_global_pool = NULL;
414 }
415 static_mutex_unlock(&g_global_pool_mutex);
416}

References buffer_pool_destroy(), and buffer_pool_log_stats().

Referenced by asciichat_shared_init(), and server_main().

◆ buffer_pool_create()

buffer_pool_t * buffer_pool_create ( size_t  max_bytes,
uint64_t  shrink_delay_ms 
)

#include <buffer_pool.h>

Create a new buffer pool.

Parameters
max_bytes	Maximum bytes the pool can hold (0 = use default)
shrink_delay_ms	Time before unused buffers freed (0 = use default)
Returns
New buffer pool, or NULL on failure

Definition at line 70 of file buffer_pool.c.

70 {
71  buffer_pool_t *pool = SAFE_MALLOC(sizeof(buffer_pool_t), buffer_pool_t *);
72  if (!pool) {
73 SET_ERRNO(ERROR_MEMORY, "Failed to allocate buffer pool");
74 return NULL;
75 }
76
77 if (mutex_init(&pool->shrink_mutex) != 0) {
78 SET_ERRNO(ERROR_THREAD, "Failed to initialize shrink mutex");
79 SAFE_FREE(pool);
80 return NULL;
81 }
82
83 atomic_init(&pool->free_list, NULL);
84 pool->max_bytes = max_bytes > 0 ? max_bytes : BUFFER_POOL_MAX_BYTES;
85 pool->shrink_delay_ms = shrink_delay_ms > 0 ? shrink_delay_ms : BUFFER_POOL_SHRINK_DELAY_MS;
86
87 atomic_init(&pool->current_bytes, 0);
88 atomic_init(&pool->used_bytes, 0);
89 atomic_init(&pool->peak_bytes, 0);
90 atomic_init(&pool->peak_pool_bytes, 0);
91 atomic_init(&pool->hits, 0);
92 atomic_init(&pool->allocs, 0);
93 atomic_init(&pool->returns, 0);
94 atomic_init(&pool->shrink_freed, 0);
95 atomic_init(&pool->malloc_fallbacks, 0);
96
97 char pretty_max[64];
98 format_bytes_pretty(pool->max_bytes, pretty_max, sizeof(pretty_max));
99 log_info("Created buffer pool (max: %s, shrink: %llu ms, lock-free)", pretty_max,
100 (unsigned long long)pool->shrink_delay_ms);
101
102 return pool;
103}

References BUFFER_POOL_MAX_BYTES, BUFFER_POOL_SHRINK_DELAY_MS, ERROR_MEMORY, ERROR_THREAD, format_bytes_pretty(), log_info, max_bytes, mutex_init(), SAFE_FREE, SAFE_MALLOC, SET_ERRNO, shrink_delay_ms, and shrink_mutex.

Referenced by buffer_pool_init_global(), and packet_queue_create_with_pools().

◆ buffer_pool_destroy()

void buffer_pool_destroy ( buffer_pool_t *  pool)

#include <buffer_pool.h>

Destroy a buffer pool and free all memory.

Parameters
pool	Pool to destroy

Definition at line 105 of file buffer_pool.c.

105 {
106 if (!pool)
107 return;
108
109 // Drain the free list
110 buffer_node_t *node = atomic_load(&pool->free_list);
111 while (node) {
112 buffer_node_t *next = atomic_load(&node->next);
113 SAFE_FREE(node); // Node and data are one allocation
114 node = next;
115 }
116
117  mutex_destroy(&pool->shrink_mutex);
118  SAFE_FREE(pool);
119}

References mutex_destroy(), SAFE_FREE, and shrink_mutex.

Referenced by buffer_pool_cleanup_global(), and packet_queue_destroy().

◆ buffer_pool_free()

void buffer_pool_free ( buffer_pool_t *  pool,
void *  data,
size_t  size 
)

#include <buffer_pool.h>

Free a buffer back to the pool (lock-free)

Parameters
pool	Buffer pool (NULL = auto-detect from buffer)
data	Buffer to free
size	Size of buffer (used for fallback, can be 0 if pooled)

Definition at line 214 of file buffer_pool.c.

214 {
215 (void)size; // Size parameter not needed with header-based detection
216
217 if (!data)
218 return;
219
220 // All buffer_pool allocations have headers, safe to check magic
221 buffer_node_t *node = node_from_data(data);
222
223 // If it's a malloc fallback (has fallback magic), free the node directly
224 if (node->magic == BUFFER_POOL_MAGIC_FALLBACK) {
225 SAFE_FREE(node); // Free the node (includes header + data)
226 return;
227 }
228
229 // If it's not a pooled buffer (no valid magic), it's external - use platform free
230 if (node->magic != BUFFER_POOL_MAGIC) {
231 free(data); // Unknown allocation, just free the data pointer
232 return;
233 }
234
235 // It's a pooled buffer - return to pool
236 // Use the pool stored in the node if none provided
237 if (!pool) {
238 pool = node->pool;
239 }
240
241 if (!pool) {
242 // Shouldn't happen, but safety
243 log_error("Pooled buffer has no pool reference!");
244 return;
245 }
246
247 // Update stats
248 atomic_fetch_sub_explicit(&pool->used_bytes, node->size, memory_order_relaxed);
249 atomic_fetch_add_explicit(&pool->returns, 1, memory_order_relaxed);
250
251 // Set return timestamp
252 atomic_store_explicit(&node->returned_at_ms, get_time_ms(), memory_order_relaxed);
253
254 // Push to lock-free stack
255 buffer_node_t *head = atomic_load_explicit(&pool->free_list, memory_order_relaxed);
256 do {
257 atomic_store_explicit(&node->next, head, memory_order_relaxed);
258 } while (!atomic_compare_exchange_weak_explicit(&pool->free_list, &head, node, memory_order_release,
259 memory_order_relaxed));
260
261 // Periodically trigger shrink (every 100 returns)
262 uint64_t returns = atomic_load_explicit(&pool->returns, memory_order_relaxed);
263 if (returns % 100 == 0) {
264 buffer_pool_shrink(pool);
265 }
266}

References BUFFER_POOL_MAGIC, BUFFER_POOL_MAGIC_FALLBACK, buffer_pool_shrink(), log_error, buffer_node::magic, buffer_node::pool, SAFE_FREE, and buffer_node::size.

Referenced by acds_client_handler(), acds_session_create(), acds_session_join(), acds_session_lookup(), acip_client_receive_and_dispatch(), acip_send_ascii_frame(), acip_send_audio_batch(), acip_send_audio_opus_batch(), acip_send_error(), acip_send_image_frame(), acip_send_remote_log(), acip_server_receive_and_dispatch(), audio_ring_buffer_destroy(), av_send_audio_opus_batch(), client_crypto_handshake(), crypto_handshake_client_auth_response(), crypto_handshake_client_complete(), crypto_handshake_client_key_exchange(), crypto_handshake_server_auth_challenge(), crypto_handshake_server_complete(), framebuffer_clear(), framebuffer_write_frame(), framebuffer_write_multi_frame(), image_destroy(), image_destroy_to_pool(), packet_queue_enqueue(), packet_queue_enqueue_packet(), packet_queue_free_packet(), packet_queue_try_dequeue(), packet_receive(), packet_send_via_transport(), process_encrypted_packet(), receive_packet_secure(), send_ascii_frame_packet(), send_audio_batch_packet(), send_image_frame_packet(), send_packet_secure(), server_crypto_handshake(), tcp_client_send_audio_opus(), threaded_send_audio_opus(), and video_frame_buffer_destroy().

◆ buffer_pool_get_global()

buffer_pool_t * buffer_pool_get_global ( void  )

#include <buffer_pool.h>

Definition at line 418 of file buffer_pool.c.

418 {
419 return g_global_pool;
420}

Referenced by buffer_pool_alloc(), packet_queue_enqueue(), packet_queue_enqueue_packet(), video_frame_buffer_create(), and video_frame_buffer_destroy().

◆ buffer_pool_get_stats()

void buffer_pool_get_stats ( buffer_pool_t *  pool,
size_t *  current_bytes,
size_t *  used_bytes,
size_t *  free_bytes 
)

#include <buffer_pool.h>

Get pool statistics (atomic reads)

Definition at line 332 of file buffer_pool.c.

332 {
333 if (!pool) {
334 if (current_bytes)
335 *current_bytes = 0;
336 if (used_bytes)
337 *used_bytes = 0;
338 if (free_bytes)
339 *free_bytes = 0;
340 return;
341 }
342
343 size_t current = atomic_load_explicit(&pool->current_bytes, memory_order_relaxed);
344 size_t used = atomic_load_explicit(&pool->used_bytes, memory_order_relaxed);
345
346 if (current_bytes)
347 *current_bytes = current;
348 if (used_bytes)
349 *used_bytes = used;
350 if (free_bytes)
351 *free_bytes = (current > used) ? (current - used) : 0;
352}

◆ buffer_pool_init_global()

void buffer_pool_init_global ( void  )

#include <buffer_pool.h>

Definition at line 397 of file buffer_pool.c.

397 {
398 static_mutex_lock(&g_global_pool_mutex);
399 if (!g_global_pool) {
400 g_global_pool = buffer_pool_create(0, 0);
401 if (g_global_pool) {
402 log_info("Initialized global buffer pool");
403 }
404 }
405 static_mutex_unlock(&g_global_pool_mutex);
406}

References buffer_pool_create(), and log_info.

Referenced by asciichat_shared_init().

◆ buffer_pool_log_stats()

void buffer_pool_log_stats ( buffer_pool_t *  pool,
const char *  name 
)

#include <buffer_pool.h>

Log pool statistics.

Definition at line 354 of file buffer_pool.c.

354 {
355 if (!pool)
356 return;
357
358 size_t current = atomic_load(&pool->current_bytes);
359 size_t used = atomic_load(&pool->used_bytes);
360 size_t peak = atomic_load(&pool->peak_bytes);
361 size_t peak_pool = atomic_load(&pool->peak_pool_bytes);
362 uint64_t hits = atomic_load(&pool->hits);
363 uint64_t allocs = atomic_load(&pool->allocs);
364 uint64_t returns = atomic_load(&pool->returns);
365 uint64_t shrink_freed = atomic_load(&pool->shrink_freed);
366 uint64_t fallbacks = atomic_load(&pool->malloc_fallbacks);
367
368 char pretty_current[64], pretty_used[64], pretty_free[64];
369 char pretty_peak[64], pretty_peak_pool[64], pretty_max[64];
370
371 format_bytes_pretty(current, pretty_current, sizeof(pretty_current));
372 format_bytes_pretty(used, pretty_used, sizeof(pretty_used));
373 format_bytes_pretty(current > used ? current - used : 0, pretty_free, sizeof(pretty_free));
374 format_bytes_pretty(peak, pretty_peak, sizeof(pretty_peak));
375 format_bytes_pretty(peak_pool, pretty_peak_pool, sizeof(pretty_peak_pool));
376 format_bytes_pretty(pool->max_bytes, pretty_max, sizeof(pretty_max));
377
378 uint64_t total_requests = hits + allocs + fallbacks;
379 double hit_rate = total_requests > 0 ? (double)hits * 100.0 / (double)total_requests : 0;
380
381 log_info("=== Buffer Pool: %s ===", name ? name : "unnamed");
382 log_info(" Pool: %s / %s (peak: %s)", pretty_current, pretty_max, pretty_peak_pool);
383 log_info(" Used: %s, Free: %s (peak used: %s)", pretty_used, pretty_free, pretty_peak);
384 log_info(" Hits: %llu (%.1f%%), Allocs: %llu, Fallbacks: %llu", (unsigned long long)hits, hit_rate,
385 (unsigned long long)allocs, (unsigned long long)fallbacks);
386 log_info(" Returns: %llu, Shrink freed: %llu", (unsigned long long)returns, (unsigned long long)shrink_freed);
387}

References format_bytes_pretty(), log_info, and max_bytes.

Referenced by buffer_pool_cleanup_global(), and packet_queue_destroy().

◆ buffer_pool_shrink()

void buffer_pool_shrink ( buffer_pool_t *  pool)

#include <buffer_pool.h>

Force shrink the pool (free old unused buffers)

Parameters
pool	Buffer pool
Note
This is the only operation that takes a lock

Definition at line 268 of file buffer_pool.c.

268 {
269 if (!pool || pool->shrink_delay_ms == 0)
270 return;
271
272 // Only one thread can shrink at a time
273 if (mutex_trylock(&pool->shrink_mutex) != 0) {
274 return; // Another thread is shrinking
275 }
276
277 uint64_t now = get_time_ms();
278 uint64_t cutoff = (now > pool->shrink_delay_ms) ? (now - pool->shrink_delay_ms) : 0;
279
280 // Atomically swap out the entire free list
281 buffer_node_t *list = atomic_exchange_explicit(&pool->free_list, NULL, memory_order_acquire);
282
283 // Partition into keep and free lists
284 buffer_node_t *keep_list = NULL;
285 buffer_node_t *free_list = NULL;
286
287 while (list) {
288 buffer_node_t *next = atomic_load_explicit(&list->next, memory_order_relaxed);
289 uint64_t returned_at = atomic_load_explicit(&list->returned_at_ms, memory_order_relaxed);
290
291 if (returned_at < cutoff) {
292 // Old buffer - add to free list
293 atomic_store_explicit(&list->next, free_list, memory_order_relaxed);
294 free_list = list;
295 } else {
296 // Recent buffer - keep it
297 atomic_store_explicit(&list->next, keep_list, memory_order_relaxed);
298 keep_list = list;
299 }
300 list = next;
301 }
302
303 // Push kept buffers back to free list
304 if (keep_list) {
305 // Find tail
306 buffer_node_t *tail = keep_list;
307 while (atomic_load(&tail->next)) {
308 tail = atomic_load(&tail->next);
309 }
310
311 // Atomically prepend to current free list
312 buffer_node_t *head = atomic_load_explicit(&pool->free_list, memory_order_relaxed);
313 do {
314 atomic_store_explicit(&tail->next, head, memory_order_relaxed);
315 } while (!atomic_compare_exchange_weak_explicit(&pool->free_list, &head, keep_list, memory_order_release,
316 memory_order_relaxed));
317 }
318
319 // Free old buffers
320 while (free_list) {
321 buffer_node_t *next = atomic_load_explicit(&free_list->next, memory_order_relaxed);
322 size_t total_size = sizeof(buffer_node_t) + free_list->size;
323 atomic_fetch_sub_explicit(&pool->current_bytes, total_size, memory_order_relaxed);
324 atomic_fetch_add_explicit(&pool->shrink_freed, 1, memory_order_relaxed);
325 SAFE_FREE(free_list);
326 free_list = next;
327 }
328
329  mutex_unlock(&pool->shrink_mutex);
330 }

References mutex_trylock, mutex_unlock, SAFE_FREE, shrink_delay_ms, shrink_mutex, and buffer_node::size.

Referenced by buffer_pool_free().