Multi-client real-time video and audio streaming server with per-client threading, end-to-end encryption, and terminal-aware ASCII rendering.
Overview
Welcome to the heart of ascii-chat—the server! This is where all the magic happens.
Picture a video conference with multiple people. Someone needs to coordinate everything: collecting video from each person, mixing the audio streams together, generating those cool ASCII frames, and sending everything back out to everyone. That's exactly what the server does, and it does it all in real-time at 60 frames per second!
The server is designed like a well-oiled machine. Each client gets their own dedicated threads (think of them as personal assistants)—one for receiving data, one for sending, one for video rendering, and one for audio mixing. This means when Client A's connection gets slow, it doesn't affect Client B at all. They're independent! This architecture scales beautifully—we've tested with 9+ clients without breaking a sweat.
And security? Every single packet is encrypted end-to-end using modern cryptography (X25519, XSalsa20-Poly1305—the good stuff). Your video chat stays private.
What makes this server special?
- 4 threads per client: Receive, send, video render, audio render (complete independence)
- Real-time performance: 60fps video per client, 172fps audio mixing (smooth as butter)
- Linear scalability: Add more clients, performance scales proportionally (no bottlenecks!)
- End-to-end encryption: Full cryptographic handshake—your data stays yours
- Terminal-aware: Generates ASCII optimized for each client's terminal capabilities
- Graceful shutdown: Clean resource cleanup even when things go sideways
Implementation: src/server/*.c, src/server/*.h
Architecture
Threading Model
Let's talk about how the server juggles multiple clients simultaneously—it's actually pretty clever!
Think of the server like a restaurant. You've got a host (main thread) greeting customers and showing them to tables. Then each table gets its own team of waiters (4 threads per client) who handle everything for that specific table. One waiter takes orders (receive thread), another delivers food (send thread), one prepares the visual presentation (video render), and another handles drinks (audio render). This way, if one table's order is taking forever, it doesn't slow down the other tables—everyone gets independent service!
The threading setup:
Global Threads (these run regardless of client count):
- Main Thread: The host—greets new clients, handles connections, coordinates everything
- Stats Logger Thread: The accountant—checks performance every 30 seconds, logs metrics
Per-Client Threads (4 dedicated threads for each connected client):
- Receive Thread: Listens for incoming packets from this client (what are they sending?)
- Send Thread: Delivers outgoing packets to this client (here's your ASCII frame!)
- Video Render Thread: Generates personalized ASCII frames at 60fps (smooth video)
- Audio Render Thread: Mixes audio from everyone at 172fps (clear sound)
Why this design rocks:
- Linear scalability: Add clients, performance scales proportionally (no shared chokepoints)
- Fault isolation: Client A's slow connection doesn't affect Client B at all
- Simple synchronization: Each client owns their stuff, minimal lock fighting
- Real-time guarantees: Everyone gets their own CPU time slice—fair and predictable
For the nitty-gritty details on locks, synchronization, and threading internals, check out Concurrency Architecture Documentation.
Server Modules
The server is organized into modular components:
Main Entry Point
File: src/server/main.c, src/server/main.h
Server initialization, signal handling, connection management, and overall orchestration. Handles:
- Socket binding and listening (IPv4 and IPv6)
- Client connection acceptance
- Signal handler setup (SIGINT/SIGTERM)
- Global shutdown coordination
- Resource cleanup
Client Management
File: src/server/client.c, src/server/client.h
Per-client lifecycle management, threading coordination, and state management. Handles:
- Client connection establishment (add_client())
- Client disconnection handling (remove_client())
- Thread creation and management per client
- Client state synchronization
- Hash table management for O(1) client lookups
Protocol Handler
File: src/server/protocol.c, src/server/protocol.h
Network packet processing and protocol implementation. Handles:
- Packet parsing and validation
- Client state updates based on received packets
- Media data storage (video frames, audio samples)
- Protocol compliance checking
- Packet type dispatch
Stream Generation
File: src/server/stream.c, src/server/stream.h
Multi-client video mixing, ASCII frame generation, and personalized rendering. Handles:
- Video frame collection from all active clients
- Composite frame generation (single, 2x2, 3x3 layouts)
- ASCII conversion with terminal capability awareness
- Per-client personalized frame generation
- Buffer pool integration for efficient memory management
Render Threads
File: src/server/render.c, src/server/render.h
Per-client rendering threads with rate limiting and timing control. Handles:
- Video render thread management (60fps per client)
- Audio render thread management (172fps per client)
- Frame generation timing and rate limiting
- Thread lifecycle management
- Platform-specific timing integration
Statistics and Monitoring
File: src/server/stats.c, src/server/stats.h
Performance monitoring, resource utilization tracking, and health reporting. Handles:
- Continuous performance metric collection
- Per-client statistics reporting
- Buffer pool utilization tracking
- Packet queue performance analysis
- Hash table efficiency monitoring
- Periodic statistics logging (every 30 seconds)
Cryptographic Operations
File: src/server/crypto.c, src/server/crypto.h
Per-client cryptographic handshake, key exchange, and session encryption management. Handles:
- Cryptographic handshake with each client
- X25519 key exchange per session
- Session encryption key derivation
- Client authentication (password, SSH key, passwordless)
- Client whitelist integration
- Per-client crypto context management
For complete details on cryptography, see Cryptography Module. For handshake protocol details, see Handshake Protocol.
Concurrency Architecture
Synchronization Primitives
The server uses a carefully designed synchronization architecture to ensure thread safety while maintaining high performance. See Concurrency Documentation for complete details on:
- Global RWLock (g_client_manager_rwlock): Protects global client array
- Per-Client Mutexes: Fine-grained locking for client state
- Atomic Variables: Lock-free thread control flags
- Condition Variables: Thread wakeup during shutdown
Lock Ordering Protocol
Okay, this is super important—like "your program will freeze if you mess this up" important.
Deadlocks are the nightmare of multi-threaded programming. Imagine two threads: Thread A holds Lock 1 and wants Lock 2, while Thread B holds Lock 2 and wants Lock 1. Both threads wait forever. Boom, deadlock. The program freezes. Not fun.
The solution? Always acquire locks in the same order. It's like a traffic rule—if everyone follows it, there are no collisions.
CRITICAL RULE (memorize this!):
Always acquire locks in this exact order:
1. Global RWLock (g_client_manager_rwlock) — The big one, protects the client list
2. Per-Client Mutex (client_state_mutex) — Protects individual client state
3. Specialized Mutexes (video_buffer_mutex, g_stats_mutex, etc.) — Specific subsystems
Never acquire them in a different order. Ever. Seriously. Violating this causes deadlocks, and debugging deadlocks is like finding a needle in a haystack while wearing mittens.
For detailed examples, war stories, and the rationale behind this ordering, check out Concurrency Documentation.
Snapshot Pattern
Here's a clever technique we use throughout the server to keep things fast: the snapshot pattern.
The idea is simple: hold locks for as little time as possible. Instead of keeping a lock while doing expensive work (like rendering a frame), we quickly grab the lock, copy the data we need into local variables (a "snapshot"), release the lock, and then do the work using the snapshot.
It's like taking a photo instead of staring at the original—you get what you need, then other people can look at the original while you work with your copy.
Here's what it looks like in practice (pthread calls shown for illustration; the project's platform layer wraps the actual lock primitives):
/* Hold the lock only long enough to copy what we need */
pthread_mutex_lock(&client->client_state_mutex);
bool should_continue = client->video_render_thread_running && client->active;
uint32_t client_id_snapshot = client->client_id;
unsigned short width_snapshot = client->width;
unsigned short height_snapshot = client->height;
pthread_mutex_unlock(&client->client_state_mutex);

/* Do the expensive work with no lock held */
if (should_continue) {
  generate_frame(client_id_snapshot, width_snapshot, height_snapshot);
}
This pattern is why the server can maintain 60fps video rendering—we minimize lock contention, so threads don't spend time waiting for each other.
Cryptographic Operations
The server supports end-to-end encryption with multiple authentication modes. See Cryptography Module for algorithm details and Handshake Protocol for connection establishment.
Cryptographic Handshake
Each client connection performs a cryptographic handshake before media streaming begins:
Phase 0: Algorithm Negotiation
- Client advertises supported algorithms
- Server selects compatible algorithms
- Both agree on key exchange, cipher, and authentication methods
Phase 1: Key Exchange
- Server generates ephemeral X25519 keypair
- Client generates ephemeral X25519 keypair
- Both exchange public keys and compute shared secret via ECDH
Phase 2: Authentication
- Server verifies client identity (if whitelist enabled)
- Server signs challenge with Ed25519 identity key (if available)
- Client proves identity with password or SSH key
Phase 3: Session Establishment
- Both derive session encryption keys from shared secret
- All subsequent packets are encrypted with XSalsa20-Poly1305
- Perfect forward secrecy (ephemeral keys per session)
Keys Module
The server supports multiple key management approaches:
SSH Key Authentication:
- Server loads Ed25519 private key (via --server-key)
- Server verifies client public keys against whitelist (via --client-keys)
- Keys must be Ed25519 format (modern, secure, fast)
- Native decryption: Encrypted keys are decrypted using bcrypt_pbkdf (libsodium-bcrypt-pbkdf) and BearSSL AES-256-CTR. No external tools required.
Password Authentication:
- Both server and client derive same key from shared password
- Uses Argon2id key derivation for password hashing
- No identity keys required (password-only mode)
Passwordless Mode:
- Ephemeral keys only (no long-term identity)
- Key exchange provides confidentiality but not authentication
- Suitable for trusted networks or testing
For complete key management details, see Keys Module.
Statistics Logger Thread
The stats logger thread (stats_logger_thread() in stats.c) provides continuous performance monitoring and resource utilization tracking.
Statistics Functionality
Monitoring Frequency:
- Statistics collected every 30 seconds
- Thread checks shutdown flag every 10ms for responsive shutdown
- Background processing doesn't affect real-time performance
Metrics Collected:
- Active client count
- Clients with audio capabilities
- Clients with video capabilities
- Per-client packet queue statistics (enqueued, dequeued, dropped)
- Per-client video buffer statistics (total frames, dropped frames, drop rate)
- Buffer pool utilization (global allocation/deallocation rates)
- Lock debugging statistics (mutex/RWLock acquisitions, releases, currently held)
- Hash table efficiency metrics
Statistics Output: The stats logger generates comprehensive performance reports with per-client details:
Stats: Clients: 3, Audio: 2, Video: 3
Client 1 audio queue: 1024 enqueued, 1024 dequeued, 0 dropped
Client 1 video buffer: 1800 frames, 12 dropped (0.7% drop rate)
Client 2 audio queue: 512 enqueued, 512 dequeued, 0 dropped
...
Thread Safety:
- Uses reader locks on shared data structures
- Takes atomic snapshots of volatile counters
- Minimal impact on operational performance
- Safe concurrent access with render threads
Debug Output
The stats logger thread includes extensive debug instrumentation for troubleshooting:
- Thread startup/shutdown logging
- Loop iteration counters
- Shutdown flag state changes
- Sleep cycle progression
- Statistics collection timing
Debug output is enabled via logging system configuration. See Logging System for details.
Error Handling
The server implements comprehensive error handling throughout all modules. See Error Number System for complete error handling details.
Error Handling Patterns
Library Code (server modules):
- Use SET_ERRNO() macros for error reporting
- Provide meaningful context messages
- Capture system errors with SET_ERRNO_SYS()
- Return appropriate error codes
Application Code (main.c):
- Check for library errors with HAS_ERRNO()
- Use FATAL() macros for fatal errors
- Use FATAL_AUTO_CONTEXT() for automatic context detection
- Log errors with full context before shutdown
Fatal Errors
The server uses FATAL() macros to terminate the server on critical errors:
- Network errors: Socket creation, binding, listening failures
- Cryptographic errors: Key loading failures, handshake failures
- Memory errors: Critical allocation failures
- Configuration errors: Invalid command-line options, missing dependencies
All fatal errors include full context including file, line, function, and error message. See Exit Codes for complete exit code reference.
Non-Fatal Errors
Many errors are handled gracefully without server termination:
- Client connection errors: Individual client failures don't crash server
- Packet parsing errors: Invalid packets logged but client not disconnected
- Buffer allocation errors: Frame dropping instead of crash
- Network timeouts: Connection retry logic instead of termination
Shutdown and Exit
The server implements graceful shutdown with proper resource cleanup.
Shutdown Sequence
1. Shutdown Signal (SIGINT/SIGTERM handler):
- Sets atomic g_server_should_exit flag (signal-safe)
- Closes listening sockets to interrupt accept() calls
- Returns immediately (no complex cleanup in signal handler)
2. Main Thread Detection:
- Main loop detects g_server_should_exit flag
- Stops accepting new connections
- Initiates client cleanup sequence
3. Client Cleanup (per client, in remove_client()):
- Sets thread shutdown flags (atomic operations)
- Shuts down packet queues (wakes up blocked threads)
- Joins threads in order: send → receive → video render → audio render
- Cleans up resources: queues, buffers, mutexes
- Closes client sockets
4. Statistics Thread Cleanup:
- Statistics thread detects shutdown flag
- Exits monitoring loop gracefully
- Logs final statistics report
- Thread joined by main thread
5. Resource Cleanup:
- Closes remaining sockets
- Destroys synchronization primitives
- Frees global buffers
- Logs final shutdown message
Exit Codes
The server exits with specific codes to indicate status:
- 0: Normal shutdown (graceful termination)
- 1: Fatal error or forced termination (double Ctrl+C)
- Error codes: See Exit Codes for complete reference
Exit codes are set via FATAL() macros or explicit exit() calls.
Known Quirks and Limitations
Every system has its quirks, and ascii-chat is no exception. Here are some things you should know about—not bugs, just... characteristics!
SSH Keygen Dependency
Native Encrypted Key Support: The server supports native decryption of encrypted Ed25519 private keys without requiring any external tools.
How it works: When you load an encrypted Ed25519 private key (via --server-key), ascii-chat decrypts it using the same algorithms as OpenSSH:
- bcrypt_pbkdf: Key derivation function (via libsodium-bcrypt-pbkdf library)
- AES-256-CTR: Symmetric encryption (via BearSSL)
- OpenSSH format: Full support for openssh-key-v1 format
Supported encryption:
- Cipher: aes256-ctr, aes256-cbc (OpenSSH defaults for Ed25519 keys)
- KDF: bcrypt (bcrypt_pbkdf with configurable rounds)
- Key types: Ed25519 only (RSA/ECDSA not supported)
Password input methods:
- Interactive prompt (default, uses platform_get_password())
- Environment variable: $ASCII_CHAT_KEY_PASSWORD (for automation)
- SSH/GPG agent: $SSH_AUTH_SOCK for password-free operation
Implementation: See lib/crypto/keys/ssh_keys.c:59-134 for the native decryption code and cmake/LibsodiumBcryptPbkdf.cmake for the bcrypt_pbkdf library integration.
Stats Logger Thread Behavior
The stats logger thread (stats_logger_thread() in stats.c) has specific behavior characteristics:
30-Second Intervals:
- Statistics are collected and logged every 30 seconds
- Thread sleeps in 10ms increments to maintain shutdown responsiveness
- Shutdown flag is checked frequently (every 10ms) during sleep periods
Per-Client Details:
- Only logs per-client details if clients have active queues or buffers
- Filters out empty/zero statistics to reduce log spam
- Format: Client N audio queue: X enqueued, Y dequeued, Z dropped
Lock Debug Integration:
- Integrates with lock debugging system for mutex/RWLock statistics
- Reports total locks acquired/released and currently held count
- Helps identify lock contention issues
Final Statistics Report:
- Logs final server statistics on thread exit
- Prints error statistics summary
- Helps diagnose issues during shutdown
Double Ctrl+C Behavior
Historical Issue: Previously required double Ctrl+C to shutdown (fixed).
Root Cause: Signal handler was accessing shared client data structures without proper synchronization, causing race conditions and incomplete shutdown.
Current Behavior (after fix):
- Single Ctrl+C properly shuts down server
- Signal handler is signal-safe (only sets flags, no data structure access)
- Main thread handles all cleanup with proper synchronization
See Concurrency Documentation Bug #1 for complete fix details.
Client Limits
Current Limitation: Server supports up to MAX_CLIENTS concurrent connections (defined in client.h). Default is typically 64 clients.
Why: Client array is statically allocated for performance (no dynamic allocation).
- Faster client lookups (O(1) array access)
- No allocation overhead per connection
- Predictable memory usage
Scaling: Server scales linearly up to 9+ clients in testing. Actual capacity depends on:
- CPU cores available
- Memory bandwidth
- Network bandwidth
- Client frame rates and resolutions
Frame Dropping Under Load
Here's the thing about real-time video: When the system gets overloaded, you have two choices—buffer everything (causing latency to skyrocket) or drop frames (keeping latency low but reducing smoothness). We chose the latter.
Why drop frames instead of buffering? Think about a live conversation. Would you rather:
- See smooth video but with a 5-second delay (buffering approach), or
- See slightly choppy video but respond in real-time (frame dropping approach)?
For a chat application, real-time responsiveness beats smoothness. So when the server gets overwhelmed (lots of clients, slow CPU, whatever), it:
- Always uses the latest available frame (freshest data)
- Drops older frames logarithmically based on buffer occupancy (smart dropping)
- Maintains target frame rate as much as possible
What this means for you: Under heavy load, clients might see 30fps instead of 60fps, but the video will always feel responsive. No weird lag where someone asks a question and you respond 5 seconds later!
Integration with Library Modules
The server integrates with many library modules:
- Network I/O (Network Module): Socket operations, packet protocol
- Cryptography (Crypto Module): Handshake, encryption, key management
- Video Processing (Video to ASCII): RGB to ASCII conversion
- Audio Processing (Audio Module): Audio mixing and playback
- Buffer Management (Buffer Pool): Efficient memory allocation
- Packet Queues (Packet Queue): Thread-safe packet delivery
- Logging (Logging System): Structured logging with levels
- Error Handling (Error System): Typed error codes and context
- Platform Abstraction (Platform Layer): Cross-platform threading/sockets
Performance Characteristics
Linear Scaling:
- Performance scales linearly with number of clients
- No shared bottlenecks between clients
- Each client gets dedicated CPU resources
- Real-time guarantees maintained per client
Frame Rates:
- Video: 60fps per client (16.67ms intervals)
- Audio: 172fps per client (5.8ms intervals)
- Frame rate maintained under normal load
- Frame dropping prevents latency accumulation under heavy load
Memory Usage:
- Per-client memory: ~1-2 MB per client (buffers, queues, state)
- Buffer pool: Shared pool reduces allocation overhead
- Scales linearly with number of clients
CPU Usage:
- Video rendering: ~10-20% CPU per client (depends on resolution)
- Audio mixing: ~1-2% CPU per client
- Network I/O: Minimal CPU (kernel handles most work)
- Statistics: Negligible (<1% CPU)
Best Practices
Here are some hard-won lessons from building and debugging this server. Follow these, and you'll save yourself a lot of headaches!
Thread Safety (avoiding the deadlock nightmare):
- Always follow lock ordering: Global → per-client → specialized (no exceptions!)
- Use snapshot pattern: Copy data while holding locks, process without locks
- Minimize lock time: Hold locks for as little time as possible
- Atomic flags: Use atomics for simple booleans (no lock needed)
Why? Deadlocks are terrible to debug. Following these rules prevents them entirely.
Error Handling (making bugs easy to find):
- Check all returns: Network operations can fail—always check the return value
- Use SET_ERRNO(): In library code, use this for automatic context capture
- Use FATAL(): In main code, fatal errors should be obvious and informative
- Meaningful messages: "Failed to bind" is better than "Error -1"
Why? Good error messages save hours of debugging. You'll thank yourself later.
Performance (keeping it fast):
- Use buffer pools: Hot paths should never call malloc() directly
- Avoid lock fighting: If threads are waiting for locks, rethink your design
- Lock-free when possible: Ring buffers, atomics—they're your friends
- Profile contention: Use the lock debugging system to find bottlenecks
Why? Real-time 60fps doesn't happen by accident. Every microsecond counts.
Shutdown (cleaning up gracefully):
- Flags first: Set shutdown flags before joining threads (they need to know to stop!)
- Join in order: send → receive → render (order matters for cleanup)
- Check frequently: Loops should check shutdown flags often (responsive exit)
- Interruptible sleep: Use sleep that can be interrupted by shutdown signals
Why? Nothing's worse than a program that won't exit. Clean shutdown is a feature.
See also
- src/server/main.c: Server entry point and initialization
- src/server/client.c: Client lifecycle management
- src/server/protocol.c: Packet type processing
- src/server/stream.c: Video mixing and ASCII generation
- src/server/render.c: Rendering threads
- src/server/stats.c: Performance monitoring
- src/server/crypto.c: Cryptographic operations
- CONCURRENCY.md: Complete concurrency architecture
- Cryptography Module: Cryptography details
- Handshake Protocol: Handshake protocol details
- Keys Module: Key management details
- Network Module: Network I/O details