Overview
Welcome! Let's talk about packet queues—the unsung heroes that keep ascii-chat's network communication smooth and organized.
Imagine you're at a busy restaurant. Orders are coming in from multiple tables, and the kitchen needs to process them one at a time. You don't want orders getting mixed up or lost, right? That's exactly what packet queues do for network data—they line up incoming packets so each client's data gets processed in order, without chaos.
Each client connected to the server gets their own dedicated packet queue. When packets arrive over the network, they get placed in the appropriate client's queue. Then, a separate thread can process packets at its own pace without blocking the network thread. It's like having a separate order ticket for each table—clean, organized, and efficient.
Implementation: lib/packet_queue.c/h
What makes packet queues useful?
- Per-client queues: Each client has their own queue (no cross-contamination!)
- Thread-safe: Multiple threads can safely interact with the queue
- Flexible ownership: Choose whether the queue copies data or just references it
- Buffer pool integration: Works with our buffer pool for efficient memory usage
- Statistics tracking: See how well your queues are performing
Architecture
Queue Structure
```c
typedef struct packet_node {
  packet_header_t header;     // parsed packet header
  void *data;                 // payload (owned or referenced; see Data Ownership)
  struct packet_node *next;   // next packet in FIFO order
} packet_node_t;

typedef struct packet_queue {
  packet_node_t *head;        // dequeue end
  packet_node_t *tail;        // enqueue end
  size_t size;                // current number of queued packets
  size_t max_size;            // maximum queue depth
  bool owns_data;             // deep-copy vs. reference mode (see Data Ownership)
  mutex_t mutex;              // protects all fields above

  // Statistics
  uint64_t packets_enqueued;
  uint64_t packets_dequeued;
  uint64_t packets_dropped;
  size_t peak_size;           // high-water mark of size
} packet_queue_t;
```
Data Ownership
Queues can either copy packet data or reference it:
Deep copy mode (owns_data = true):
- Queue allocates its own copy of packet data
- Safe for producer to free original data immediately
- Queue frees data when packet dequeued
- Higher memory usage, but safer
Reference mode (owns_data = false):
- Queue stores pointer to original data
- Producer must keep data alive until dequeued
- Consumer responsible for freeing data
- Lower memory usage, but requires careful lifetime management
```c
// Deep copy mode: the queue makes its own copy of the payload,
// so the producer may free its buffer immediately after enqueueing.
char *data = SAFE_MALLOC(1024, char *);
strcpy(data, "Hello");
packet_queue_enqueue(queue, type, data, strlen(data) + 1, client_id, true);
SAFE_FREE(data); // safe: the queue owns its own copy

// Reference mode: the queue stores only the pointer, so the producer
// must keep the buffer alive until the consumer has processed it.
char *data2 = SAFE_MALLOC(1024, char *);
strcpy(data2, "Hello");
packet_queue_enqueue(queue, type, data2, strlen(data2) + 1, client_id, false);
// Do NOT free data2 here -- the consumer frees it after processing.

// Consumer side:
queued_packet_t *packet = packet_queue_dequeue(queue);
if (packet) {
  process_packet(packet->data);
  packet_queue_free_packet(packet);
}
```
API Reference
Creation/Destruction

```c
packet_queue_t *packet_queue_create(size_t max_size);
void packet_queue_destroy(packet_queue_t *queue);
```

```c
packet_queue_t *queue = packet_queue_create(max_size);
if (!queue) {
  log_error("Failed to create packet queue");
  return NULL;
}
```

Enqueue/Dequeue

```c
int packet_queue_enqueue(packet_queue_t *queue, packet_type_t type, const void *data,
                         size_t data_len, uint32_t client_id, bool copy_data);
queued_packet_t *packet_queue_dequeue(packet_queue_t *queue);
void packet_queue_free_packet(queued_packet_t *packet);
```
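To make the create/enqueue/dequeue/free lifecycle concrete without the project headers, here is a standalone miniature of the same pattern: a mutex-protected FIFO that deep-copies each payload (the owns_data = true mode). All mini_* names are illustrative stand-ins, not the ascii-chat API.

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

// Miniature stand-in for packet_queue_t: a mutex-protected FIFO
// that deep-copies every payload on enqueue.
typedef struct mini_node {
  void *data;
  size_t len;
  struct mini_node *next;
} mini_node_t;

typedef struct {
  mini_node_t *head, *tail;
  size_t size, max_size;
  pthread_mutex_t mutex;
} mini_queue_t;

void mini_queue_init(mini_queue_t *q, size_t max_size) {
  q->head = q->tail = NULL;
  q->size = 0;
  q->max_size = max_size;
  pthread_mutex_init(&q->mutex, NULL);
}

// Returns 0 on success, -1 when the queue is full (packet dropped).
int mini_enqueue(mini_queue_t *q, const void *data, size_t len) {
  pthread_mutex_lock(&q->mutex);
  if (q->size >= q->max_size) {
    pthread_mutex_unlock(&q->mutex);
    return -1;
  }
  mini_node_t *node = malloc(sizeof *node);
  node->data = malloc(len);   // deep copy: caller may free its buffer
  memcpy(node->data, data, len);
  node->len = len;
  node->next = NULL;
  if (q->tail) q->tail->next = node; else q->head = node;
  q->tail = node;
  q->size++;
  pthread_mutex_unlock(&q->mutex);
  return 0;
}

// Returns the oldest packet (caller frees with mini_free_packet), or NULL.
mini_node_t *mini_dequeue(mini_queue_t *q) {
  pthread_mutex_lock(&q->mutex);
  mini_node_t *node = q->head;
  if (node) {
    q->head = node->next;
    if (!q->head) q->tail = NULL;
    q->size--;
  }
  pthread_mutex_unlock(&q->mutex);
  return node;
}

void mini_free_packet(mini_node_t *node) {
  if (node) { free(node->data); free(node); }
}
```

Because the queue copies the payload, the producer can free its own buffer right after mini_enqueue returns, just as with the real API in deep copy mode.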
Integration
Buffer Pool Integration:
- Packet queues use buffer pool for efficient memory allocation
- Node pool pre-allocates packet nodes
- Data pool pre-allocates packet data buffers
- Reduces malloc/free overhead in high-throughput scenarios
Network Integration:
- Each client has a dedicated packet queue
- Network receive thread enqueues packets
- Client processing thread dequeues packets
- Decouples network I/O from packet processing
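The decoupling described above can be sketched with a toy single-producer/single-consumer ring buffer (all names here are hypothetical, not the ascii-chat API): a "network" thread pushes packet ids into one client's queue while the "processing" thread drains them at its own pace, sleeping on a condition variable when the queue is empty.

```c
#include <pthread.h>
#include <stddef.h>

// Toy stand-in for one client's packet queue. Capacity checks are
// omitted for brevity; the producer below sends fewer than RING_CAP items.
#define RING_CAP 64
typedef struct {
  int slots[RING_CAP];
  size_t head, tail, count;
  pthread_mutex_t mutex;
  pthread_cond_t nonempty;
} client_ring_t;

static client_ring_t ring = {
  .mutex = PTHREAD_MUTEX_INITIALIZER,
  .nonempty = PTHREAD_COND_INITIALIZER,
};

static void ring_push(int packet_id) {
  pthread_mutex_lock(&ring.mutex);
  ring.slots[ring.tail] = packet_id;
  ring.tail = (ring.tail + 1) % RING_CAP;
  ring.count++;
  pthread_cond_signal(&ring.nonempty); // wake the processing thread
  pthread_mutex_unlock(&ring.mutex);
}

static int ring_pop(void) {
  pthread_mutex_lock(&ring.mutex);
  while (ring.count == 0)
    pthread_cond_wait(&ring.nonempty, &ring.mutex); // sleep until work arrives
  int id = ring.slots[ring.head];
  ring.head = (ring.head + 1) % RING_CAP;
  ring.count--;
  pthread_mutex_unlock(&ring.mutex);
  return id;
}

// "Network receive" thread: enqueues 32 packet ids, then exits.
static void *net_thread(void *arg) {
  (void)arg;
  for (int i = 0; i < 32; i++) ring_push(i);
  return NULL;
}

// Drains n packets on the calling ("processing") thread and returns
// their sum, so callers can verify nothing was lost.
static long process_n(int n) {
  long sum = 0;
  for (int i = 0; i < n; i++) sum += ring_pop();
  return sum;
}
```

The condition variable is what lets the processing thread block without spinning; the network thread never waits on the consumer, which is the point of the decoupling.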
Performance
Throughput:
- Enqueue: ~1M packets/second (mutex-protected)
- Dequeue: ~1M packets/second (mutex-protected)
- Memory overhead: ~40 bytes per queued packet
Scalability:
- Per-client queues eliminate contention between clients
- Statistics tracking has minimal overhead
- Buffer pool reduces allocation overhead
Thread Safety
Mutex Protection:
- All queue operations are mutex-protected
- Thread-safe for concurrent enqueue/dequeue
- No lock-free optimizations (simplicity over speed)
Concurrent Access:
- Multiple threads can enqueue concurrently
- Only one thread should dequeue (per queue)
- Statistics are updated atomically
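One way to keep counters like packets_enqueued consistent under concurrent producers is C11 atomics with relaxed ordering; this is a sketch of that technique (the queue_stats_t type and stats_* helpers are our illustration, not the actual implementation, which may simply update the counters under the queue mutex).

```c
#include <stdatomic.h>
#include <stdint.h>

// Hypothetical stats block: lock-free counters a queue could bump on
// every enqueue/dequeue/drop. Relaxed ordering is enough because the
// counters are independent tallies, not synchronization points.
typedef struct {
  atomic_uint_fast64_t packets_enqueued;
  atomic_uint_fast64_t packets_dequeued;
  atomic_uint_fast64_t packets_dropped;
} queue_stats_t;

static void stats_on_enqueue(queue_stats_t *s, int dropped) {
  if (dropped)
    atomic_fetch_add_explicit(&s->packets_dropped, 1, memory_order_relaxed);
  else
    atomic_fetch_add_explicit(&s->packets_enqueued, 1, memory_order_relaxed);
}

static void stats_on_dequeue(queue_stats_t *s) {
  atomic_fetch_add_explicit(&s->packets_dequeued, 1, memory_order_relaxed);
}

// Packets still sitting in the queue = enqueued - dequeued.
static uint64_t stats_depth(queue_stats_t *s) {
  return atomic_load_explicit(&s->packets_enqueued, memory_order_relaxed) -
         atomic_load_explicit(&s->packets_dequeued, memory_order_relaxed);
}
```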
See also
- packet_queue.h
- buffer_pool.h