ascii-chat 0.6.0
Real-time terminal-based video chat with ASCII art conversion

🧪 Unit, integration, and performance tests

Files

file  common.c
 Common test utilities implementation.
 
file  common.h
 Common test utilities and environment detection.
 
file  globals.c
 🔗 Global symbol stubs for test executables to satisfy linker dependencies
 
file  logging.c
 🧪 Test utilities for logging: stdout/stderr redirection and capture helpers
 
file  logging.h
 Test logging control utilities.
 
file  test_env.h
 Test environment detection utilities.
 

Macros

#define TEST_LOGGING_SETUP_AND_TEARDOWN()
 Macro to create setup and teardown functions for quiet testing.
 
#define TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS(setup_level, restore_level, disable_stdout, disable_stderr)
 Macro to create setup and teardown functions for quiet testing with custom log level control.
 
#define TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVEL()    TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS(LOG_FATAL, LOG_DEBUG, true, true)
 Macro to create setup and teardown functions for quiet testing with log level control (default levels)
 
#define TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(suite_name, setup_level, restore_level, disable_stdout, disable_stderr, ...)
 Macro to create a complete test suite with quiet logging and custom log levels.
 
#define TEST_SUITE_WITH_QUIET_LOGGING(suite_name, ...)
 Macro to create a complete test suite with quiet logging (default log levels)
 
#define TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVEL(suite_name, ...)
 Macro to create a complete test suite with quiet logging and log level control (default levels)
 
#define TEST_LOGGING_TEMPORARILY_DISABLE()
 Macro to temporarily disable logging for a specific test.
 
#define TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT()
 Macro to temporarily disable only stdout for a specific test.
 
#define TEST_LOGGING_TEMPORARILY_DISABLE_STDERR()
 Macro to temporarily disable only stderr for a specific test.
 
#define TEST_SUITE_WITH_DEBUG_LOGGING(suite_name, ...)    TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(suite_name, LOG_DEBUG, LOG_DEBUG, false, false, ##__VA_ARGS__)
 Macro to create a complete test suite with debug logging and stdout/stderr enabled.
 
#define TEST_SUITE_WITH_VERBOSE_LOGGING(suite_name, ...)   TEST_SUITE_WITH_DEBUG_LOGGING(suite_name, ##__VA_ARGS__)
 Alias for TEST_SUITE_WITH_DEBUG_LOGGING for verbose output.
 

Functions

const char * test_get_binary_path (void)
 Get the path to the ascii-chat binary for integration tests.
 
int test_logging_disable (bool disable_stdout, bool disable_stderr)
 Disable stdout/stderr output for quiet test execution.
 
int test_logging_restore (void)
 Restore stdout/stderr output after test logging disable.
 
bool test_logging_is_disabled (void)
 Check if logging is currently disabled.
 

Detailed Description

🧪 Unit, integration, and performance tests

This header provides common utilities and helpers for writing ascii-chat tests. It includes standard test headers, platform detection, and environment checks to help tests run reliably across different environments (CI, Docker, WSL).

CORE FEATURES:

TEST ENVIRONMENT:

This header automatically includes:

HEADLESS ENVIRONMENT DETECTION:

Tests that require hardware (like webcam) can use test_is_in_headless_environment() to skip when running in CI/Docker/WSL.
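For example, a webcam test might bail out early when no capture device can exist. This is a minimal sketch: the suite and test names are invented, and it assumes Criterion's cr_skip_test() helper is available in your version.

#include "tests/common.h"

Test(webcam, open_default_device) {
  // Skip gracefully in CI, Docker, or WSL where no camera is available.
  if (test_is_in_headless_environment()) {
    cr_skip_test("no webcam in headless environment");
  }

  // ... exercise webcam code here ...
}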

Note
This header must be included before any test code that uses the test environment detection functions.
All test files should include this header for consistent test infrastructure.
Author
Zachary Fogg <me@zfo.gg>
Date
August 2025

This header provides utilities for controlling logging output during tests. It enables tests to temporarily disable or redirect stdout/stderr for quiet test execution, and provides convenient macros for test suite setup and teardown.

CORE FEATURES:

TEST LOGGING MACROS:

USAGE:

// Simple test suite with quiet logging
TEST_SUITE_WITH_QUIET_LOGGING(my_suite);

Test(my_suite, my_test) {
  // Test code runs quietly
}

// Test suite with custom log levels
TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(
    my_suite, LOG_FATAL, LOG_DEBUG, true, true
);

// Debug mode for test development
TEST_SUITE_WITH_DEBUG_LOGGING(my_suite);
Note
Logging redirection is automatically restored when tests complete.
All logging control functions are thread-safe for use in parallel tests.
This header should be included via tests/common.h for consistency.
Author
Zachary Fogg <me@zfo.gg>
Date
September 2025

This header provides test environment detection functions that can be used by both test code and production code (to adjust behavior like timeouts).

Unlike common.h, this header has NO Criterion dependency, so it can be safely included by any code that needs to detect if it's running in a test environment.

Note
This is the canonical location for test environment detection. Do NOT duplicate this logic in other files.

Testing with Criterion

Overview

Welcome! This guide will help you get comfortable with the test suite in ascii-chat. We use Criterion, a modern C testing framework, to make sure everything works correctly. Don't worry if you're new to testing—we'll walk you through everything you need to know.

The test suite is designed to work smoothly across different platforms (Linux, macOS, Windows) using Docker when needed, so you can run tests confidently on your machine.

What types of tests do we have?

  • Unit Tests: Test individual components in isolation (think "does this one function work?")
  • Integration Tests: Test how components work together (think "does the whole system work?")
  • Performance Tests: Make sure things run fast enough (think "is this optimized enough?")

What tools do we use?

  • Criterion test framework (https://github.com/Snaipe/Criterion) for running tests
  • Docker for consistent testing across platforms
  • Automated test runner scripts that make running tests easy
  • Codecov for tracking how much of our code is tested
  • GitHub Actions to automatically run tests when code changes

Criterion Test Framework

We chose Criterion because it's modern, feature-rich, and makes writing tests pleasant. If you're coming from other testing frameworks, you'll find it familiar. If you're new to testing in C, don't worry—it's straightforward once you see a few examples.

Want to learn more?

What features do we use?

  • Standard tests: The basic Test(suite, name) for regular tests
  • Parameterized tests: Run the same test with different inputs (ParameterizedTest, ParameterizedTestParameters)
  • Theory tests: Test properties that should hold for many values (Theory, TheoryDataPoints)
  • Fixtures: Setup and teardown code that runs before/after tests (TestSuite, .init, .fini); a short sketch follows this list
  • Assertions: Check that things work as expected (cr_assert, cr_assert_eq, cr_assert_not_null, etc.)
  • Timeouts: Make sure tests don't hang forever (.timeout = N in seconds)
  • Parallel execution: Run multiple tests at once for speed
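Here's a small sketch that ties several of these together: a fixture-backed suite, an assertion, and a per-test timeout. The suite, variable, and test names are invented for illustration; only the Criterion calls themselves are real.

#include <criterion/criterion.h>

static int shared_counter;

static void suite_setup(void) {
  shared_counter = 0; // runs before each test in the suite
}

static void suite_teardown(void) {
  shared_counter = 0; // runs after each test in the suite
}

TestSuite(counter_suite, .init = suite_setup, .fini = suite_teardown);

Test(counter_suite, starts_at_zero, .timeout = 5) {
  cr_assert_eq(shared_counter, 0, "fixture should reset the counter");
}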

Windows Compatibility

Heads up! If you're on Windows, there's something important you should know.

Criterion has limited Windows support, and many features don't work correctly on native Windows builds. To make sure tests run reliably for everyone, we've set up Docker-based testing.

What to do on Windows:

  • Use Docker: This is the recommended way to run tests on Windows
  • Use the PowerShell script: Run tests/scripts/run-docker-tests.ps1 for easy testing
  • Avoid native builds: Only use native Windows Criterion builds if you absolutely have to, and expect some limitations

The Docker setup gives you the same Linux environment that works perfectly with Criterion, so your tests will run just like they do on Linux or macOS. No need to worry about platform differences!

Test Types

We organize tests into three categories, each serving a different purpose. Understanding when to use each type will help you write better tests.

Unit Tests

Unit tests check individual pieces of code in isolation. Think of them like checking each ingredient before you cook—you want to make sure each component works correctly on its own.

What makes a good unit test?

  • Tests one function or module at a time
  • Uses mocks or stubs for things it depends on (so you're only testing that one piece)
  • Runs quickly (we're talking milliseconds here)
  • Always gives the same result (deterministic)
  • Doesn't affect other tests (isolated)

Where to find them:

All unit tests live in tests/unit/. Each module typically has its own test file.

Examples you can look at:

  • tests/unit/ascii_test.c: Testing ASCII conversion functions
  • tests/unit/buffer_pool_test.c: Testing buffer pool allocation
  • tests/unit/crypto_test.c: Testing cryptographic operations
  • tests/unit/palette_test.c: Testing palette management
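If you'd rather start from a blank file, a minimal unit test looks roughly like this. The suite name and the arithmetic being checked are placeholders rather than real ascii-chat APIs; the quiet-logging macro is described later in this guide.

#include "tests/common.h"

// Keep the suite quiet unless an assertion fails.
TEST_SUITE_WITH_QUIET_LOGGING(example_math);

Test(example_math, addition_is_commutative) {
  int a = 2, b = 3;
  cr_assert_eq(a + b, b + a);
  cr_assert_neq(a + b, 0);
}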

Integration Tests

Integration tests check how different pieces work together. Now you're testing the whole recipe, not just individual ingredients.

What integration tests do:

  • Test multiple components working together
  • May use real resources (actual network connections, real files, etc.)
  • Can take a bit longer to run (seconds instead of milliseconds)
  • Test realistic scenarios that users might actually encounter

Where to find them:

Integration tests live in tests/integration/.

Examples you can look at:

  • tests/integration/crypto_handshake_integration_test.c: Testing the full cryptographic handshake
  • tests/integration/crypto_network_integration_test.c: Testing network encryption end-to-end
  • tests/integration/ascii_simd_integration_test.c: Verifying SIMD optimizations work correctly

Performance Tests

Performance tests make sure critical code paths run fast enough. They're like a speedometer for your code—telling you if things are running as fast as they should.

What performance tests do:

  • Measure how long things take to run
  • Check that optimizations are actually working
  • Catch performance regressions (when code gets slower)
  • Use realistic data sizes and workloads

Where to find them:

Performance tests live in tests/performance/.

Examples you can look at:

  • tests/performance/ascii_performance_test.c: Making sure ASCII conversion is fast
  • tests/performance/crypto_performance_test.c: Making sure encryption/decryption is fast

Environment Variables

We use a couple of environment variables to make tests run faster and behave appropriately during testing. The test runners set these automatically, so you usually don't need to think about them—but it's good to know what they do.

Test environment variables:

  • TESTING=1: A general flag that says "we're in test mode"
    • Code checks this to enable test-friendly behavior
    • Enables shorter timeouts and reduced memory usage for faster tests
    • Automatically set by our test runners
  • CRITERION_TEST=1: Specifically identifies Criterion test execution
    • Network code uses this to shorten timeouts (tests shouldn't wait 30 seconds!)
    • Also automatically set by test runners

How code detects test mode:

Code checks if it's running in a test environment like this:

static int is_test_environment(void) {
  return SAFE_GETENV("CRITERION_TEST") != NULL ||
         SAFE_GETENV("TESTING") != NULL;
}

This lets the code automatically optimize for testing:

  • Network timeouts drop from 5-30 seconds down to 1 second (much faster!)
  • Memory allocations can be limited for faster test execution
  • Expensive initialization steps can be skipped

Real examples:

Here's how we shorten network timeouts during tests (from lib/network/packet.c):

static int calculate_packet_timeout(size_t packet_size) {
  int base_timeout = is_test_environment() ? 1 : SEND_TIMEOUT;
  // ... rest of calculation ...
}

And here's how we limit memory usage in tests (from tests/unit/compression_test.c):

size_t max_size = (getenv("TESTING") || getenv("CRITERION_TEST"))
? 1000
: 1000000;

Network Timeouts

Network operations can take time, and we don't want tests waiting around forever. So when tests are running (when TESTING=1 or CRITERION_TEST=1 is set), network timeouts get automatically shortened.

What changes during testing:

  • Connection timeouts: 3 seconds → 1 second
  • Send timeouts: 5 seconds → 1 second
  • Receive timeouts: 15 seconds → 1 second
  • Accept timeouts: 3 seconds → 1 second

This means integration tests that use the network complete in seconds instead of minutes, while still testing real network behavior. Pretty neat!

Where this is implemented:

The timeout logic lives in the network connect/accept/send/receive helpers, which automatically detect the test environment and use shorter timeouts:

timeout.tv_sec = is_test_environment() ? 1 : timeout_seconds;
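For a sense of how that one-liner typically gets used, here is an illustrative sketch of applying a receive timeout to a socket. The helper name set_recv_timeout() is made up, and is_test_environment() is the detection helper shown earlier; the actual lib/network code may differ.

#include <sys/socket.h>
#include <sys/time.h>

// Illustrative only: pick a short timeout under test, the normal one otherwise.
static int set_recv_timeout(int sockfd, int timeout_seconds) {
  struct timeval timeout;
  timeout.tv_sec = is_test_environment() ? 1 : timeout_seconds;
  timeout.tv_usec = 0;
  return setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
}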

Docker-Based Testing

To make sure tests work the same everywhere, we've set up Docker-based testing. This is especially helpful on Windows where Criterion has limited support, but it's also great for ensuring consistent results across all platforms.

Dockerfile

Our test Dockerfile (tests/Dockerfile) uses Arch Linux as the base. It's lightweight, stays up-to-date, and has everything Criterion needs.

What's included:

  • Arch Linux base: Lightweight and current
  • Criterion: The full test framework, ready to go
  • All build tools: Clang, CMake, Ninja, ccache, and all the libraries we need
  • Pre-built BearSSL: Dependencies are compiled when we build the image (saves time later)
  • ccache configured: Compilation results are cached between runs for speed

The Dockerfile structure is pretty straightforward:

FROM archlinux:latest
# Install compiler, build tools, and dependencies
# Pre-build BearSSL from submodule
# Configure ccache for persistent caching
# Set environment variables (TESTING=1, etc.)
WORKDIR /app
CMD ["sh", "-c", "cmake --preset docker && cmake --build build_docker --target tests && ctest --test-dir build_docker --output-on-failure --parallel 0"]

docker-compose.yml

The Docker Compose setup (tests/docker-compose.yml) makes it easy to run tests without thinking about Docker details.

What it provides:

  • ascii-chat-tests service: The main test container
    • Builds from our tests/Dockerfile
    • Mounts your source code so you can edit files and see changes
    • Sets up ccache volume so compilation is fast between runs
    • Automatically sets all the test environment variables

Volumes:

  • ccache-data: Keeps compilation cache between Docker runs (much faster rebuilds!)
  • Source code: Mounted read-write so build artifacts can be created

Environment variables:

  • TESTING=1: We're in test mode
  • CRITERION_TEST=1: Criterion test flag
  • CCACHE_DIR=/ccache: Where to store compilation cache
  • ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer: For AddressSanitizer support

Using Docker for Testing

On Windows, we have a convenient PowerShell script that makes Docker testing easy:

Using the PowerShell script (recommended on Windows):

# Run all tests
./tests/scripts/run-docker-tests.ps1
# Run just unit tests
./tests/scripts/run-docker-tests.ps1 unit
# Run just integration tests
./tests/scripts/run-docker-tests.ps1 integration
# Run just performance tests
./tests/scripts/run-docker-tests.ps1 performance
# Run a specific test
./tests/scripts/run-docker-tests.ps1 unit options
./tests/scripts/run-docker-tests.ps1 unit options terminal_detect
# Run tests matching a pattern
./tests/scripts/run-docker-tests.ps1 test_unit_buffer_pool -f "creation"
# Run clang-tidy for code analysis
./tests/scripts/run-docker-tests.ps1 -ClangTidy
./tests/scripts/run-docker-tests.ps1 clang-tidy lib/common.c
# Open an interactive shell for debugging
./tests/scripts/run-docker-tests.ps1 -Interactive
# Clean everything and rebuild from scratch
./tests/scripts/run-docker-tests.ps1 -Clean

Using Docker Compose directly:

You can also use Docker Compose commands directly if you prefer:

# Build and run all tests
docker-compose -f tests/docker-compose.yml up --build
# Run tests in an existing container
docker-compose -f tests/docker-compose.yml run ascii-chat-tests
# Get an interactive shell for debugging
docker-compose -f tests/docker-compose.yml run ascii-chat-tests /bin/bash
# Clean rebuild (useful when dependencies change)
docker-compose -f tests/docker-compose.yml build --no-cache

Why use Docker for testing?

  • Consistent environment: Tests run the same way on your machine, CI, and everyone else's machine
  • Full Criterion support: All Criterion features work perfectly in Linux
  • Isolated: Tests don't mess with your system or depend on what you have installed
  • Reproducible: Same setup every time means reliable results
  • Fast: ccache keeps compilation fast between runs

Running Tests with ctest

Tests are run using CMake's ctest tool, which integrates with Criterion to provide parallel execution, filtering, and XML output for CI.

Features

What ctest provides:

  • Parallel execution: Uses --parallel 0 to auto-detect CPU cores
  • Label filtering: Filter tests by category (unit, integration, performance)
  • Name filtering: Filter tests by name pattern
  • Timeout handling: Criterion handles individual test timeouts
  • XML output: Criterion generates XML in build/Testing/criterion-xml/
  • Verbose mode: See lots of detail when something goes wrong

Usage

Basic usage:

# Build tests first
cmake --build build --target tests
# Run all tests
ctest --test-dir build --output-on-failure --parallel 0
# Run a specific category using labels
ctest --test-dir build --label-regex "^unit$" --output-on-failure
ctest --test-dir build --label-regex "^integration$" --output-on-failure
ctest --test-dir build --label-regex "^performance$" --output-on-failure
# Run a specific test by name pattern
ctest --test-dir build -R "buffer_pool" --output-on-failure
ctest --test-dir build -R "test_unit_ascii" --output-on-failure

Execution control:

# Control parallel execution
ctest --test-dir build --parallel 4 --output-on-failure # Use 4 cores
ctest --test-dir build --parallel 1 --output-on-failure # Sequential (debugging)
# Verbose output
ctest --test-dir build --output-on-failure --verbose
# List available tests
ctest --test-dir build -N

Architecture

How it works:

CMake discovers tests from tests/unit/, tests/integration/, and tests/performance/:

  • Test executables are named like: test_{category}_{name} (e.g., test_unit_ascii)
  • Each test is labeled with its category for filtering

Building:

  • Build all tests: cmake --build build --target tests
  • Build specific test: cmake --build build --target test_unit_buffer_pool
  • Uses Ninja for fast incremental builds

XML Output:

Criterion generates XML output in build/Testing/criterion-xml/:

  • Each test executable produces its own XML file
  • XML files are used by CI for test result reporting

Parameterized Tests

Sometimes you want to test the same logic with different inputs. Instead of writing five nearly-identical test functions, you can use parameterized tests!

Basic syntax:

// A simple struct to hold each case's input and expected output
typedef struct {
  int input;
  int expected;
} TestCase;

// First, define your test cases
ParameterizedTestParameters(suite_name, test_name) {
  static TestCase cases[] = {
      { .input = 1, .expected = 2 },
      { .input = 2, .expected = 4 },
      { .input = 3, .expected = 6 },
  };
  return cr_make_param_array(TestCase, cases, sizeof(cases) / sizeof(cases[0]));
}

// Then, write the test that uses those cases
ParameterizedTest(TestCase *tc, suite_name, test_name) {
  cr_assert_eq(process_input(tc->input), tc->expected);
}

⚠️ Important memory note:

If you need to allocate memory in your parameterized test data, you must use Criterion's special memory functions. Regular malloc() won't work correctly because Criterion needs to track the memory to clean it up properly.

Use these Criterion functions:

  • cr_malloc(): Allocate memory (Criterion tracks it)
  • cr_calloc(): Allocate zero-initialized memory
  • cr_realloc(): Reallocate memory
  • cr_free(): Free memory allocated with Criterion functions

But honestly? The easiest approach is to just use static arrays when possible:

// Good: Static array (no memory issues!)
ParameterizedTestParameters(palette, utf8_boundary_property) {
static const char *palettes[] = {
" ░▒▓█",
" .:-=+*#%@",
" 0123456789",
};
return cr_make_param_array(const char *, palettes,
sizeof(palettes) / sizeof(palettes[0]));
}
// Avoid: Dynamic allocation (requires Criterion functions)
ParameterizedTestParameters(palette, utf8_boundary_property) {
char **palettes = cr_malloc(3 * sizeof(char*));
palettes[0] = cr_strdup(" ░▒▓█");
palettes[1] = cr_strdup(" .:-=+*#%@");
palettes[2] = cr_strdup(" 0123456789");
// ... gotta remember to free this ...
}

Examples in our codebase:

Want to see parameterized tests in action? Check these out:

  • tests/unit/terminal_detect_test.c: Testing different COLORTERM and TERM values
  • tests/unit/webcam_test.c: Testing different webcam indices
  • tests/unit/simd_scalar_comparison_test.c: Testing different palettes

Theorized Tests

Theorized tests are Criterion's way of doing property-based testing. Instead of testing specific values, you test that a property holds for a whole range of values. It's like mathematical proof by example—if the property holds for all these values, it probably holds in general.

Basic syntax:

// Define the range of values to test
TheoryDataPoints(suite_name, property_name) = {
TheoryPointsFromRange(0, 100, 1), // Integers from 0 to 100
TheoryPointsFromRange(0.0, 1.0, 0.1), // Floats from 0.0 to 1.0 in 0.1 steps
};
// Write the test that checks the property
Theory((size_t data_size), suite_name, property_name) {
// This runs for every data_size in the range
void *data = malloc(data_size);
cr_assert_not_null(data);
free(data);
}

Theory data sources:

Criterion gives you several ways to generate test values:

  • TheoryPointsFromRange(min, max, step): Generate a range of values
  • TheoryPointsFromArray(array, count): Use your own array of values
  • TheoryPointsFromBitfield(bits): Generate all bit combinations

What are these good for?

Theorized tests are perfect for checking mathematical properties:

  • Roundtrip properties: decompress(compress(x)) == x should always be true (see the sketch after this list)
  • Boundary conditions: Does the code handle values at the limits?
  • Invariant properties: Properties that should always hold no matter what
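As a concrete sketch, here is what a roundtrip-style theory can look like. The memcpy below stands in for a real compress()/decompress() pair, the suite and test names are invented, and DataPoints() is Criterion's standard way to list explicit values.

#include <criterion/theories.h>
#include <stdlib.h>
#include <string.h>

TheoryDataPoints(roundtrip_demo, copy_preserves_bytes) = {
  DataPoints(size_t, 1, 16, 512, 4096, 65536),
};

Theory((size_t size), roundtrip_demo, copy_preserves_bytes) {
  unsigned char *src = malloc(size);
  unsigned char *dst = malloc(size);
  cr_assume(src != NULL && dst != NULL); // discard this case instead of failing

  memset(src, 0xAB, size);
  memcpy(dst, src, size); // stand-in for decompress(compress(src))

  cr_assert_eq(memcmp(src, dst, size), 0, "roundtrip must preserve every byte");

  free(src);
  free(dst);
}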

Examples in our codebase:

We use theorized tests in lots of places:

  • tests/unit/compression_test.c: Compression roundtrip property
  • tests/unit/crypto_test.c: Encryption roundtrip and nonce uniqueness
  • tests/unit/palette_test.c: Palette length and UTF-8 boundary properties
  • tests/unit/ringbuffer_test.c: FIFO ordering property
  • tests/unit/mixer_test.c: Audio bounds property
  • tests/unit/ascii_test.c: Image size property
  • tests/unit/buffer_pool_test.c: Allocation roundtrip and pool reuse
  • tests/unit/aspect_ratio_test.c: Aspect ratio preservation

Test Logging Macros

Tests can generate a lot of output, which makes it hard to see what actually matters. We've got some helpful macros in lib/tests/logging.h (included via lib/tests/common.h) that let you control logging during tests.

Test Suite Macros

Quiet logging for a whole suite:

#include "tests/common.h"
Test(my_suite, my_test) {
// Logging is automatically turned off (goes to /dev/null)
// Log level is set to LOG_FATAL (only fatal errors show up)
}
Common test utilities and environment detection.

Custom log levels:

TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(
    my_suite,
    LOG_FATAL,   // Log level during setup
    LOG_DEBUG,   // Log level to restore after
    true,        // Disable stdout
    true         // Disable stderr
);

Debug logging:

TEST_SUITE_WITH_DEBUG_LOGGING(debug_suite);

Test(debug_suite, debug_test) {
  // Logging is enabled with LOG_DEBUG level
  // You can see stdout/stderr for debugging
}

With timeout:

TEST_SUITE_WITH_QUIET_LOGGING(my_suite, .timeout = 10);

Per-Test Logging Macros

Sometimes you just want to quiet things down for part of a test:

Temporarily disable all logging:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE();
  // ... code that should be quiet ...
  // Logging automatically restored when test ends
}

Disable just stdout:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT();
  // stdout is quiet, but stderr still works
}

Disable just stderr:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDERR();
  // stderr is quiet, but stdout still works
}

Manual Setup/Teardown

If you need more control, you can set things up manually:

TEST_LOGGING_SETUP_AND_TEARDOWN()

TestSuite(my_suite,
          .init = setup_quiet_test_logging,
          .fini = restore_test_logging);

Low-Level Functions

For maximum control, you can call the functions directly:

// Disable logging
test_logging_disable(true, true); // disable stdout, disable stderr

// Restore logging
test_logging_restore();

// Check if logging is disabled
bool is_disabled = test_logging_is_disabled();

Code Coverage

We use Codecov (https://codecov.io/) to track how much of our code is covered by tests. Coverage reports are automatically generated when code is pushed or pull requests are created, and uploaded to Codecov so you can see what's covered (and what isn't).

Our Codecov project:

Configuration:

Coverage settings are in codecov.yml:

  • Different coverage targets for different parts of the codebase
  • Coverage grouped by functionality
  • Automatic PR comments with coverage reports
  • Coverage thresholds enforced per component

Generating coverage locally:

To build and run tests with coverage instrumentation:

# Configure with coverage enabled
cmake -B build -DCMAKE_BUILD_TYPE=Debug -DASCIICHAT_ENABLE_COVERAGE=ON
# Build and run tests
cmake --build build --target tests
ctest --test-dir build --output-on-failure --parallel 0
# Coverage files are generated as .gcov files
# (They're automatically uploaded in CI, but you can check them locally too)

Coverage organization:

Coverage is organized by test type and platform:

  • ascii-chat-tests-ubuntu-debug: Unit tests on Ubuntu (debug build)
  • ascii-chat-tests-macos-debug: Unit tests on macOS (debug build)
  • ascii-chat-integration-ubuntu: Integration tests on Ubuntu
  • ascii-chat-performance-ubuntu: Performance tests on Ubuntu

Coverage targets:

Different parts of the codebase have different coverage goals:

  • Core server/client: 75% target
  • ASCII engine: 80% target (this is the main feature, so we want good coverage)
  • Audio/video systems: 70% target
  • Platform-specific code: 60% target (informational—some platform code is hard to test)

Best Practices

These are some guidelines we've learned from experience. Following them will help you write tests that are fast, reliable, and easy to maintain.

Writing good tests:

  • Always include lib/tests/common.h: It has everything you need
  • Use quiet logging macros: Tests should be quiet unless something fails
  • Test both success and failure paths: Make sure error handling works too
  • Use theorized tests for properties: Great for roundtrip properties, invariants, etc.
  • Use parameterized tests for similar cases: Don't repeat yourself
  • Keep tests independent: One test shouldn't affect another
  • Name tests clearly: Future you (and others) will thank you

Organizing tests:

  • One test file per module: tests/unit/{module}_test.c is the pattern
  • Group related tests: Use test suites to organize
  • Use fixtures for setup/teardown: Don't repeat setup code in every test
  • Document complex logic: If the test is tricky, explain why

Performance:

  • Unit tests should be fast: We're talking milliseconds here
  • Integration tests can be slower: Seconds are fine for these
  • Use test environment variables: Let the code optimize for testing
  • Skip expensive operations: Tests don't need to do everything the real code does

Debugging:

  • Use --verbose for verbose output: See what's actually happening
  • Use --parallel 1 for sequential execution: Easier to see what's going on
  • Use debug logging suite: When you need to see debug output
  • Use interactive Docker shell: When you need to poke around

CI/CD Integration

Our test suite is integrated with GitHub Actions, so tests run automatically on every push and pull request. This means you'll know right away if something breaks!

What happens automatically:

  • Tests run on every push and PR
  • Tests run on multiple platforms (Linux, macOS)
  • Coverage reports are generated and uploaded to Codecov
  • Test results are reported in JUnit XML format
  • Docker is used for consistent test execution

CI workflows:

The main test workflow is in .github/workflows/test.yml:

  • Automatically discovers and runs all tests
  • Runs tests in parallel for speed
  • Uploads coverage to Codecov
  • Saves test result artifacts

Additional Resources

Want to learn more?

Where to look in the codebase:

  • Test scripts: tests/scripts/
  • Test source: tests/unit/, tests/integration/, tests/performance/
  • Test utilities: lib/tests/
  • Docker config: tests/Dockerfile, tests/docker-compose.yml

Test Utilities

The test utilities in lib/tests/ provide common functionality for writing tests:

Files:

  • logging.h - Logging control utilities for quiet test execution
  • logging.c - Implementation of logging redirection
  • common.h - Common test utilities and environment detection
  • globals.c - Global symbol stubs for linker dependencies

These utilities help tests run reliably by providing logging control, environment detection, and common headers. See the individual file documentation for details.

Need help?

If you're stuck or have questions, feel free to ask! We're here to help. Check out the examples in the codebase—seeing real tests is often the best way to learn.

See also
tests/logging.h
tests/logging.c
tests/common.h
tests/globals.c
Author
Zachary Fogg <me@zfo.gg>
Date
September 2025

Macro Definition Documentation

◆ TEST_LOGGING_SETUP_AND_TEARDOWN

#define TEST_LOGGING_SETUP_AND_TEARDOWN ( )

#include <logging.h>

Value:
void setup_quiet_test_logging(void) { \
test_logging_disable(true, true); \
} \
void restore_test_logging(void) { \
test_logging_restore(); \
}

Macro to create setup and teardown functions for quiet testing.

This macro creates two functions:

  • setup_quiet_test_logging() - disables both stdout and stderr
  • restore_test_logging() - restores stdout and stderr

Usage:

TestSuite(my_suite, .init = setup_quiet_test_logging, .fini = restore_test_logging);

Definition at line 119 of file tests/logging.h.


◆ TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVEL

#define TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVEL ( )     TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS(LOG_FATAL, LOG_DEBUG, true, true)

#include <logging.h>

Macro to create setup and teardown functions for quiet testing with log level control (default levels)

This macro creates two functions that also control the log level:

  • setup_quiet_test_logging() - sets log level to FATAL and disables stdout/stderr
  • restore_test_logging() - restores log level to DEBUG and restores stdout/stderr

Usage:

TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVEL()

TestSuite(my_suite, .init = setup_quiet_test_logging, .fini = restore_test_logging);

Definition at line 163 of file tests/logging.h.

◆ TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS

#define TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS (   setup_level,
  restore_level,
  disable_stdout,
  disable_stderr 
)

#include <logging.h>

Value:
void setup_quiet_test_logging(void) { \
log_set_level(setup_level); \
test_logging_disable(disable_stdout, disable_stderr); \
} \
void restore_test_logging(void) { \
log_set_level(restore_level); \
test_logging_restore(); \
}

Macro to create setup and teardown functions for quiet testing with custom log level control.

This macro creates two functions that control the log level:

  • setup_quiet_test_logging() - sets log level to setup_level and disables stdout/stderr
  • restore_test_logging() - restores log level to restore_level and restores stdout/stderr

Usage:

TEST_LOGGING_SETUP_AND_TEARDOWN_WITH_LOG_LEVELS(LOG_FATAL, LOG_DEBUG, true, true)

TestSuite(my_suite, .init = setup_quiet_test_logging, .fini = restore_test_logging);

Definition at line 140 of file tests/logging.h.


◆ TEST_LOGGING_TEMPORARILY_DISABLE

#define TEST_LOGGING_TEMPORARILY_DISABLE ( )

#include <logging.h>

Value:
bool _logging_was_disabled = test_logging_is_disabled(); \
if (!_logging_was_disabled) { \
test_logging_disable(true, true); \
} \
/* Note: Restoration happens automatically when test ends */

Macro to temporarily disable logging for a specific test.

This macro can be used within a test to temporarily disable logging, with automatic restoration when the test ends.

Usage:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE();
  // ... test code that should be quiet ...
}

Definition at line 252 of file tests/logging.h.


◆ TEST_LOGGING_TEMPORARILY_DISABLE_STDERR

#define TEST_LOGGING_TEMPORARILY_DISABLE_STDERR ( )

#include <logging.h>

Value:
bool _logging_was_disabled = test_logging_is_disabled(); \
if (!_logging_was_disabled) { \
test_logging_disable(false, true); \
}

Macro to temporarily disable only stderr for a specific test.

This macro can be used within a test to temporarily disable only stderr, keeping stdout available for normal output.

Usage:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDERR();
  // ... test code that can use stdout but should be quiet on stderr ...
}

Definition at line 293 of file tests/logging.h.


◆ TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT

#define TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT ( )

#include <logging.h>

Value:
bool _logging_was_disabled = test_logging_is_disabled(); \
if (!_logging_was_disabled) { \
test_logging_disable(true, false); \
}

Macro to temporarily disable only stdout for a specific test.

This macro can be used within a test to temporarily disable only stdout, keeping stderr available for error messages.

Usage:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT();
  // ... test code that should be quiet on stdout but can use stderr ...
}

Definition at line 273 of file tests/logging.h.


◆ TEST_SUITE_WITH_DEBUG_LOGGING

#define TEST_SUITE_WITH_DEBUG_LOGGING (   suite_name,
  ... 
)     TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(suite_name, LOG_DEBUG, LOG_DEBUG, false, false, ##__VA_ARGS__)

#include <logging.h>

Macro to create a complete test suite with debug logging and stdout/stderr enabled.

This macro creates unique setup/teardown functions with debug logging enabled and stdout/stderr available for debugging output.

Usage:

TEST_SUITE_WITH_DEBUG_LOGGING(my_suite, .timeout = 10);

Definition at line 311 of file tests/logging.h.

◆ TEST_SUITE_WITH_QUIET_LOGGING

#define TEST_SUITE_WITH_QUIET_LOGGING (   suite_name,
  ... 
)

#include <logging.h>

Value:
void setup_quiet_test_logging_##suite_name(void) { \
test_logging_disable(true, true); \
} \
void restore_test_logging_##suite_name(void) { \
test_logging_restore(); \
} \
TestSuite(suite_name, .init = setup_quiet_test_logging_##suite_name, .fini = restore_test_logging_##suite_name, \
##__VA_ARGS__);

Macro to create a complete test suite with quiet logging (default log levels)

This macro creates unique setup/teardown functions and declares the test suite in one go. It supports additional TestSuite options and avoids redefinition errors when multiple test suites are in the same file.

Usage:

TEST_SUITE_WITH_QUIET_LOGGING(my_suite, .timeout = 10);

Definition at line 204 of file tests/logging.h.


◆ TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVEL

#define TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVEL (   suite_name,
  ... 
)

#include <logging.h>

Value:
void setup_quiet_test_logging_##suite_name(void) { \
log_set_level(LOG_FATAL); \
test_logging_disable(true, true); \
} \
void restore_test_logging_##suite_name(void) { \
log_set_level(LOG_DEBUG); \
test_logging_restore(); \
} \
TestSuite(suite_name, .init = setup_quiet_test_logging_##suite_name, .fini = restore_test_logging_##suite_name, \
##__VA_ARGS__);

Macro to create a complete test suite with quiet logging and log level control (default levels)

This macro creates unique setup/teardown functions with log level control and declares the test suite in one go. It supports additional TestSuite options and avoids redefinition errors.

Usage:

TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVEL(my_suite, .timeout = 10);

Definition at line 226 of file tests/logging.h.


◆ TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS

#define TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS (   suite_name,
  setup_level,
  restore_level,
  disable_stdout,
  disable_stderr,
  ... 
)

#include <logging.h>

Value:
void setup_quiet_test_logging_##suite_name(void) { \
log_set_level(setup_level); \
test_logging_disable(disable_stdout, disable_stderr); \
} \
void restore_test_logging_##suite_name(void) { \
log_set_level(restore_level); \
test_logging_restore(); \
} \
TestSuite(suite_name, .init = setup_quiet_test_logging_##suite_name, .fini = restore_test_logging_##suite_name, \
##__VA_ARGS__);

Macro to create a complete test suite with quiet logging and custom log levels.

This macro creates unique setup/teardown functions with custom log levels and declares the test suite in one go. It supports additional TestSuite options and avoids redefinition errors.

Usage:

TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(my_suite, LOG_FATAL, LOG_DEBUG, true, true, .timeout = 10);

Definition at line 178 of file tests/logging.h.


◆ TEST_SUITE_WITH_VERBOSE_LOGGING

#define TEST_SUITE_WITH_VERBOSE_LOGGING (   suite_name,
  ... 
)    TEST_SUITE_WITH_DEBUG_LOGGING(suite_name, ##__VA_ARGS__)

#include <logging.h>

Alias for TEST_SUITE_WITH_DEBUG_LOGGING for verbose output.

This macro is an alias for TEST_SUITE_WITH_DEBUG_LOGGING, providing verbose logging output for debugging test failures.

Usage:

TEST_SUITE_WITH_VERBOSE_LOGGING(my_suite, .timeout = 10);

Definition at line 325 of file tests/logging.h.

Function Documentation

◆ test_get_binary_path()

const char * test_get_binary_path ( void  )

#include <common.h>

Get the path to the ascii-chat binary for integration tests.

This function finds the ascii-chat binary by trying multiple candidate paths. It handles both direct test invocation from the repo root and ctest invocation from the build directory.

Search order:

  1. BUILD_DIR environment variable (if set)
  2. ./build_docker/bin/ascii-chat (Docker from repo root)
  3. ./build/bin/ascii-chat (local from repo root)
  4. ./bin/ascii-chat (from build directory - ctest)
  5. /app/build_docker/bin/ascii-chat (Docker absolute)
Returns
Path to the ascii-chat binary, or a fallback path if not found
Note
The returned string is static - do not free it.
On Windows, returns paths with .exe extension.
Example
Test(my_suite, integration_test) {
  const char *binary = test_get_binary_path();
  pid_t pid = fork();
  if (pid == 0) {
    execl(binary, "ascii-chat", "server", "--help", NULL);
    exit(127);
  }
  // ...
}

Definition at line 16 of file tests/common.c.

{
  static char binary_path[256];
  static bool initialized = false;

  if (initialized) {
    return binary_path;
  }

#ifdef _WIN32
  // Windows: try several paths
  const char *candidates[] = {
      "./build/bin/ascii-chat.exe",
      "./bin/ascii-chat.exe",
      "ascii-chat.exe",
  };
  const char *fallback = "./build/bin/ascii-chat.exe";
#else
  // Check if we're in Docker (/.dockerenv file exists)
  bool in_docker = (access("/.dockerenv", F_OK) == 0);
  const char *build_dir = getenv("BUILD_DIR");

  // Try BUILD_DIR first if set
  if (build_dir) {
    safe_snprintf(binary_path, sizeof(binary_path), "./%s/bin/ascii-chat", build_dir);
    if (access(binary_path, X_OK) == 0) {
      initialized = true;
      return binary_path;
    }
  }

  // Try several paths in order of preference
  const char *candidates[] = {
      // Relative paths from repo root
      in_docker ? "./build_docker/bin/ascii-chat" : "./build/bin/ascii-chat",
      // Relative paths from build directory (when ctest runs from there)
      "./bin/ascii-chat",
      // Absolute paths for Docker
      in_docker ? "/app/build_docker/bin/ascii-chat" : NULL,
  };
  const char *fallback = in_docker ? "./build_docker/bin/ascii-chat" : "./build/bin/ascii-chat";
#endif

  // Try each candidate path
  for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
    if (candidates[i] && access(candidates[i], X_OK) == 0) {
      safe_snprintf(binary_path, sizeof(binary_path), "%s", candidates[i]);
      initialized = true;
      return binary_path;
    }
  }

  // Fallback to default (will likely fail but gives useful error)
  safe_snprintf(binary_path, sizeof(binary_path), "%s", fallback);
  initialized = true;
  return binary_path;
}

References initialized, and safe_snprintf().

◆ test_logging_disable()

int test_logging_disable ( bool  disable_stdout,
bool  disable_stderr 
)

#include <logging.c>

Disable stdout/stderr output for quiet test execution.

Redirect stdout and/or stderr to /dev/null for quiet testing.

Parameters
disable_stdout   If true, redirect stdout to /dev/null
disable_stderr   If true, redirect stderr to /dev/null
Returns
0 on success, -1 on failure

Redirects stdout and/or stderr to /dev/null to suppress output during tests. This is useful for tests that produce noisy output or test error handling without cluttering the test output.

Note
The original file descriptors are saved and can be restored with test_logging_restore().
This function can be called multiple times safely.

Definition at line 34 of file tests/logging.c.

{
  // If already disabled, return success
  if (logging_disabled) {
    return 0;
  }

  // Open /dev/null for writing
  dev_null_fd = platform_open("/dev/null", PLATFORM_O_WRONLY);
  if (dev_null_fd == -1) {
    return -1;
  }

  // Save original file descriptors if we need to redirect them
  if (disable_stdout) {
    original_stdout_fd = dup(STDOUT_FILENO);
    if (original_stdout_fd == -1) {
      close(dev_null_fd);
      dev_null_fd = -1;
      return -1;
    }
    dup2(dev_null_fd, STDOUT_FILENO);
    // Reopen stdout to use the redirected file descriptor
    (void)freopen("/dev/null", "w", stdout);
    // Make stdout unbuffered to ensure immediate redirection
    (void)setvbuf(stdout, NULL, _IONBF, 0);
  }

  if (disable_stderr) {
    original_stderr_fd = dup(STDERR_FILENO);
    if (original_stderr_fd == -1) {
      // Clean up stdout if it was redirected
      if (disable_stdout && original_stdout_fd != -1) {
        dup2(original_stdout_fd, STDOUT_FILENO);
        close(original_stdout_fd);
        original_stdout_fd = -1;
      }
      close(dev_null_fd);
      dev_null_fd = -1;
      return -1;
    }
    dup2(dev_null_fd, STDERR_FILENO);
    // Reopen stderr to use the redirected file descriptor
    (void)freopen("/dev/null", "w", stderr);
    // Make stderr unbuffered to ensure immediate redirection
    (void)setvbuf(stderr, NULL, _IONBF, 0);
  }

  logging_disabled = true;
  return 0;
}

References PLATFORM_O_WRONLY, and platform_open().

◆ test_logging_is_disabled()

bool test_logging_is_disabled ( void  )

#include <logging.c>

Check if logging is currently disabled.

Returns
true if logging is disabled, false otherwise

Checks whether stdout and/or stderr have been redirected to /dev/null. Useful for conditional logging control or debugging test setup.

Definition at line 133 of file tests/logging.c.

{
  return logging_disabled;
}

◆ test_logging_restore()

int test_logging_restore ( void  )

#include <logging.c>

Restore stdout/stderr output after test logging disable.

Restore stdout and/or stderr to their original state.

Returns
0 on success, -1 on failure

Restores stdout and/or stderr to their original file descriptors that were saved when test_logging_disable() was called.

Note
This function should be called in test teardown to ensure proper cleanup, though the test macros handle this automatically.

Definition at line 90 of file tests/logging.c.

{
  // If not disabled, return success
  if (!logging_disabled) {
    return 0;
  }

  // Restore original stdout
  if (original_stdout_fd != -1) {
    dup2(original_stdout_fd, STDOUT_FILENO);
    // Reopen stdout to use the restored file descriptor
    (void)freopen("/dev/stdout", "w", stdout);
    // Restore line buffering for stdout
    (void)setvbuf(stdout, NULL, _IOLBF, 0);
    close(original_stdout_fd);
    original_stdout_fd = -1;
  }

  // Restore original stderr
  if (original_stderr_fd != -1) {
    dup2(original_stderr_fd, STDERR_FILENO);
    // Reopen stderr to use the restored file descriptor
    (void)freopen("/dev/stderr", "w", stderr);
    // Restore unbuffered mode for stderr
    (void)setvbuf(stderr, NULL, _IONBF, 0);
    close(original_stderr_fd);
    original_stderr_fd = -1;
  }

  // Close /dev/null file descriptor
  if (dev_null_fd != -1) {
    close(dev_null_fd);
    dev_null_fd = -1;
  }

  logging_disabled = false;
  return 0;
}