ascii-chat 0.8.38
Real-time terminal-based video chat with ASCII art conversion

🧪 Unit, integration, and performance tests

Files

file  common.c
 Common test utilities implementation.
 
file  globals.c
 🔗 Global symbol stubs for test executables to satisfy linker dependencies
 
file  logging.c
 🧪 Test utilities for logging: stdout/stderr redirection and capture helpers
 

Functions

int test_logging_disable (bool disable_stdout, bool disable_stderr)
 Disable stdout/stderr output for quiet test execution.
 
int test_logging_restore (void)
 Restore stdout/stderr output after test logging disable.
 
bool test_logging_is_disabled (void)
 Check if logging is currently disabled.
 

Detailed Description

🧪 Unit, integration, and performance tests

Testing with Criterion

Overview

Welcome! This guide will help you get comfortable with the test suite in ascii-chat. We use Criterion, a modern C testing framework, to make sure everything works correctly. Don't worry if you're new to testing—we'll walk you through everything you need to know.

The test suite is designed to work smoothly across different platforms (Linux, macOS, Windows) using Docker when needed, so you can run tests confidently on your machine.

What types of tests do we have?

  • Unit Tests: Test individual components in isolation (think "does this one function work?")
  • Integration Tests: Test how components work together (think "does the whole system work?")
  • Performance Tests: Make sure things run fast enough (think "is this optimized enough?")

What tools do we use?

  • Criterion test framework (https://github.com/Snaipe/Criterion) for running tests
  • Docker for consistent testing across platforms
  • Automated test runner scripts that make running tests easy
  • Codecov for tracking how much of our code is tested
  • GitHub Actions to automatically run tests when code changes

Criterion Test Framework

We chose Criterion because it's modern, feature-rich, and makes writing tests pleasant. If you're coming from other testing frameworks, you'll find it familiar. If you're new to testing in C, don't worry—it's straightforward once you see a few examples.

Want to learn more? The Criterion repository (linked above) has full documentation and examples.

What features do we use?

  • Standard tests: The basic Test(suite, name) for regular tests
  • Parameterized tests: Run the same test with different inputs (ParameterizedTest, ParameterizedTestParameters)
  • Theory tests: Test properties that should hold for many values (Theory, TheoryDataPoints)
  • Fixtures: Setup and teardown code that runs before/after tests (TestSuite, .init, .fini)
  • Assertions: Check that things work as expected (cr_assert, cr_assert_eq, cr_assert_not_null, etc.)
  • Timeouts: Make sure tests don't hang forever (.timeout = N in seconds)
  • Parallel execution: Run multiple tests at once for speed

Windows Compatibility

Heads up! If you're on Windows, there's something important you should know.

Criterion has limited Windows support, and many features don't work correctly on native Windows builds. To make sure tests run reliably for everyone, we've set up Docker-based testing.

What to do on Windows:

  • Use Docker: This is the recommended way to run tests on Windows
  • Use the PowerShell script: Run tests/scripts/run-docker-tests.ps1 for easy testing
  • Avoid native builds: Only use native Windows Criterion builds if you absolutely have to, and expect some limitations

The Docker setup gives you the same Linux environment that works perfectly with Criterion, so your tests will run just like they do on Linux or macOS. No need to worry about platform differences!

Test Types

We organize tests into three categories, each serving a different purpose. Understanding when to use each type will help you write better tests.

Unit Tests

Unit tests check individual pieces of code in isolation. Think of them like checking each ingredient before you cook—you want to make sure each component works correctly on its own.

What makes a good unit test?

  • Tests one function or module at a time
  • Uses mocks or stubs for things it depends on (so you're only testing that one piece)
  • Runs quickly (we're talking milliseconds here)
  • Always gives the same result (deterministic)
  • Doesn't affect other tests (isolated)

Where to find them:

All unit tests live in tests/unit/. Each module typically has its own test file.

Examples you can look at:

  • tests/unit/ascii_test.c: Testing ASCII conversion functions
  • tests/unit/buffer_pool_test.c: Testing buffer pool allocation
  • tests/unit/crypto_test.c: Testing cryptographic operations
  • tests/unit/palette_test.c: Testing palette management

Integration Tests

Integration tests check how different pieces work together. Now you're testing the whole recipe, not just individual ingredients.

What integration tests do:

  • Test multiple components working together
  • May use real resources (actual network connections, real files, etc.)
  • Can take a bit longer to run (seconds instead of milliseconds)
  • Test realistic scenarios that users might actually encounter

Where to find them:

Integration tests live in tests/integration/.

Examples you can look at:

  • tests/integration/crypto_handshake_integration_test.c: Testing the full cryptographic handshake
  • tests/integration/crypto_network_integration_test.c: Testing network encryption end-to-end
  • tests/integration/ascii_simd_integration_test.c: Verifying SIMD optimizations work correctly

Performance Tests

Performance tests make sure critical code paths run fast enough. They're like a speedometer for your code—telling you if things are running as fast as they should.

What performance tests do:

  • Measure how long things take to run
  • Check that optimizations are actually working
  • Catch performance regressions (when code gets slower)
  • Use realistic data sizes and workloads

Where to find them:

Performance tests live in tests/performance/.

Examples you can look at:

  • tests/performance/ascii_performance_test.c: Making sure ASCII conversion is fast
  • tests/performance/crypto_performance_test.c: Making sure encryption/decryption is fast

Environment Variables

We use a couple of environment variables to make tests run faster and behave appropriately during testing. The test runners set these automatically, so you usually don't need to think about them—but it's good to know what they do.

Test environment variables:

  • **TESTING=1**: A general flag that says "we're in test mode"
    • Code checks this to enable test-friendly behavior
    • Enables shorter timeouts and reduced memory usage for faster tests
    • Automatically set by our test runners
  • **CRITERION_TEST=1**: Specifically identifies Criterion test execution
    • Network code uses this to shorten timeouts (tests shouldn't wait 30 seconds!)
    • Also automatically set by test runners

How code detects test mode:

Code checks if it's running in a test environment like this:

static int is_test_environment(void) {
  return SAFE_GETENV("CRITERION_TEST") != NULL ||
         SAFE_GETENV("TESTING") != NULL;
}

This lets the code automatically optimize for testing:

  • Network timeouts drop from 5-30 seconds down to 1 second (much faster!)
  • Memory allocations can be limited for faster test execution
  • Expensive initialization steps can be skipped

Real examples:

Here's how we shorten network timeouts during tests (from lib/network/packet.c):

static int calculate_packet_timeout(size_t packet_size) {
  int base_timeout = is_test_environment() ? 1 : SEND_TIMEOUT;
  // ... rest of calculation ...
}

And here's how we limit memory usage in tests (from tests/unit/compression_test.c):

size_t max_size = (getenv("TESTING") || getenv("CRITERION_TEST")) ? 1000 : 1000000;

Network Timeouts

Network operations can take time, and we don't want tests waiting around forever. So when tests are running (when TESTING=1 or CRITERION_TEST=1 is set), network timeouts get automatically shortened.

What changes during testing:

  • Connection timeouts: 3 seconds → 1 second
  • Send timeouts: 5 seconds → 1 second
  • Receive timeouts: 15 seconds → 1 second
  • Accept timeouts: 3 seconds → 1 second

This means integration tests that use the network complete in seconds instead of minutes, while still testing real network behavior. Pretty neat!

Where this is implemented:

The timeout logic lives in the network layer's socket functions, which automatically detect the test environment and use the shorter value:

timeout.tv_sec = is_test_environment() ? 1 : timeout_seconds;

Docker-Based Testing

To make sure tests work the same everywhere, we've set up Docker-based testing. This is especially helpful on Windows where Criterion has limited support, but it's also great for ensuring consistent results across all platforms.

Dockerfile

Our test Dockerfile (tests/Dockerfile) uses Arch Linux as the base. It's lightweight, stays up-to-date, and has everything Criterion needs.

What's included:

  • Arch Linux base: Lightweight and current
  • Criterion: The full test framework, ready to go
  • All build tools: Clang, CMake, Ninja, ccache, and all the libraries we need
  • Pre-built BearSSL: Dependencies are compiled when we build the image (saves time later)
  • ccache configured: Compilation results are cached between runs for speed

The Dockerfile structure is pretty straightforward:

FROM archlinux:latest
# Install compiler, build tools, and dependencies
# Pre-build BearSSL from submodule
# Configure ccache for persistent caching
# Set environment variables (TESTING=1, etc.)
WORKDIR /app
CMD ["sh", "-c", "cmake --preset docker && cmake --build build_docker --target tests && ctest --test-dir build_docker --output-on-failure --parallel 0"]

docker-compose.yml

The Docker Compose setup (tests/docker-compose.yml) makes it easy to run tests without thinking about Docker details.

What it provides:

  • **ascii-chat-tests service**: The main test container
    • Builds from our tests/Dockerfile
    • Mounts your source code so you can edit files and see changes
    • Sets up ccache volume so compilation is fast between runs
    • Automatically sets all the test environment variables

Volumes:

  • ccache-data: Keeps compilation cache between Docker runs (much faster rebuilds!)
  • Source code: Mounted read-write so build artifacts can be created

Environment variables:

  • TESTING=1: We're in test mode
  • CRITERION_TEST=1: Criterion test flag
  • CCACHE_DIR=/ccache: Where to store compilation cache
  • ASAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer: For AddressSanitizer support

Using Docker for Testing

On Windows, we have a convenient PowerShell script that makes Docker testing easy:

Using the PowerShell script (recommended on Windows):

# Run all tests
./tests/scripts/run-docker-tests.ps1
# Run just unit tests
./tests/scripts/run-docker-tests.ps1 unit
# Run just integration tests
./tests/scripts/run-docker-tests.ps1 integration
# Run just performance tests
./tests/scripts/run-docker-tests.ps1 performance
# Run a specific test
./tests/scripts/run-docker-tests.ps1 unit options
./tests/scripts/run-docker-tests.ps1 unit options terminal_detect
# Run tests matching a pattern
./tests/scripts/run-docker-tests.ps1 test_unit_buffer_pool -f "creation"
# Run clang-tidy for code analysis
./tests/scripts/run-docker-tests.ps1 -ClangTidy
./tests/scripts/run-docker-tests.ps1 clang-tidy lib/common.c
# Open an interactive shell for debugging
./tests/scripts/run-docker-tests.ps1 -Interactive
# Clean everything and rebuild from scratch
./tests/scripts/run-docker-tests.ps1 -Clean

Using Docker Compose directly:

You can also use Docker Compose commands directly if you prefer:

# Build and run all tests
docker-compose -f tests/docker-compose.yml up --build
# Run tests in an existing container
docker-compose -f tests/docker-compose.yml run ascii-chat-tests
# Get an interactive shell for debugging
docker-compose -f tests/docker-compose.yml run ascii-chat-tests /bin/bash
# Clean rebuild (useful when dependencies change)
docker-compose -f tests/docker-compose.yml build --no-cache

Why use Docker for testing?

  • Consistent environment: Tests run the same way on your machine, CI, and everyone else's machine
  • Full Criterion support: All Criterion features work perfectly in Linux
  • Isolated: Tests don't mess with your system or depend on what you have installed
  • Reproducible: Same setup every time means reliable results
  • Fast: ccache keeps compilation fast between runs

Running Tests with ctest

Tests are run using CMake's ctest tool, which integrates with Criterion to provide parallel execution, filtering, and XML output for CI.

Features

What ctest provides:

  • Parallel execution: Uses --parallel 0 to auto-detect CPU cores
  • Label filtering: Filter tests by category (unit, integration, performance)
  • Name filtering: Filter tests by name pattern
  • Timeout handling: Criterion handles individual test timeouts
  • XML output: Criterion generates XML in build/Testing/criterion-xml/
  • Verbose mode: See lots of detail when something goes wrong

Usage

Basic usage:

# Build tests first
cmake --build build --target tests
# Run all tests
ctest --test-dir build --output-on-failure --parallel 0
# Run a specific category using labels
ctest --test-dir build --label-regex "^unit$" --output-on-failure
ctest --test-dir build --label-regex "^integration$" --output-on-failure
ctest --test-dir build --label-regex "^performance$" --output-on-failure
# Run a specific test by name pattern
ctest --test-dir build -R "buffer_pool" --output-on-failure
ctest --test-dir build -R "test_unit_ascii" --output-on-failure

Execution control:

# Control parallel execution
ctest --test-dir build --parallel 4 --output-on-failure # Use 4 cores
ctest --test-dir build --parallel 1 --output-on-failure # Sequential (debugging)
# Verbose output
ctest --test-dir build --output-on-failure --verbose
# List available tests
ctest --test-dir build -N

Architecture

How it works:

CMake discovers tests from tests/unit/, tests/integration/, and tests/performance/:

  • Test executables are named like: test_{category}_{name} (e.g., test_unit_ascii)
  • Each test is labeled with its category for filtering

Building:

  • Build all tests: cmake --build build --target tests
  • Build specific test: cmake --build build --target test_unit_buffer_pool
  • Uses Ninja for fast incremental builds

XML Output:

Criterion generates XML output in build/Testing/criterion-xml/:

  • Each test executable produces its own XML file
  • XML files are used by CI for test result reporting

Parameterized Tests

Sometimes you want to test the same logic with different inputs. Instead of writing five nearly-identical test functions, you can use parameterized tests!

Basic syntax:

// First, define your test cases
ParameterizedTestParameters(suite_name, test_name) {
  static TestCase cases[] = {
      {.input = 1, .expected = 2},
      {.input = 2, .expected = 4},
      {.input = 3, .expected = 6},
  };
  return cr_make_param_array(TestCase, cases, sizeof(cases) / sizeof(cases[0]));
}

// Then, write the test that uses those cases
ParameterizedTest(TestCase *tc, suite_name, test_name) {
  cr_assert_eq(process_input(tc->input), tc->expected);
}

⚠️ Important memory note:

If you need to allocate memory in your parameterized test data, you must use Criterion's special memory functions. Regular malloc() won't work correctly because Criterion needs to track the memory to clean it up properly.

Use these Criterion functions:

  • cr_malloc(): Allocate memory (Criterion tracks it)
  • cr_calloc(): Allocate zero-initialized memory
  • cr_realloc(): Reallocate memory
  • cr_free(): Free memory allocated with Criterion functions

But honestly? The easiest approach is to just use static arrays when possible:

// Good: Static array (no memory issues!)
ParameterizedTestParameters(palette, utf8_boundary_property) {
  static const char *palettes[] = {
      " ░▒▓█",
      " .:-=+*#%@",
      " 0123456789",
  };
  return cr_make_param_array(const char *, palettes,
                             sizeof(palettes) / sizeof(palettes[0]));
}

// Avoid: Dynamic allocation (requires Criterion functions)
ParameterizedTestParameters(palette, utf8_boundary_property) {
  char **palettes = cr_malloc(3 * sizeof(char *));
  palettes[0] = cr_strdup(" ░▒▓█");
  palettes[1] = cr_strdup(" .:-=+*#%@");
  palettes[2] = cr_strdup(" 0123456789");
  // ... and you have to remember to free all of this ...
}

Examples in our codebase:

Want to see parameterized tests in action? Check these out:

  • tests/unit/terminal_detect_test.c: Testing different COLORTERM and TERM values
  • tests/unit/webcam_test.c: Testing different webcam indices
  • tests/unit/simd_scalar_comparison_test.c: Testing different palettes

Theorized Tests

Theorized tests are Criterion's way of doing property-based testing. Instead of testing specific values, you test that a property holds across a whole range of values. If the property holds for every value you throw at it, that's strong evidence it holds in general.

Basic syntax:

// Define the set of values to test (one entry per theory parameter)
TheoryDataPoints(suite_name, property_name) = {
    TheoryPointsFromRange(0, 100, 1), // data_size: integers from 0 to 100
};

// Write the test that checks the property
Theory((size_t data_size), suite_name, property_name) {
  // This runs for every data_size in the range
  void *data = malloc(data_size);
  cr_assert_not_null(data);
  free(data);
}

Theory data sources:

Criterion gives you several ways to generate test values:

  • TheoryPointsFromRange(min, max, step): Generate a range of values
  • TheoryPointsFromArray(array, count): Use your own array of values
  • TheoryPointsFromBitfield(bits): Generate all bit combinations

What are these good for?

Theorized tests are perfect for checking mathematical properties:

  • Roundtrip properties: decompress(compress(x)) == x should always be true
  • Boundary conditions: Does the code handle values at the limits?
  • Invariant properties: Properties that should always hold no matter what

Examples in our codebase:

We use theorized tests in lots of places:

  • tests/unit/compression_test.c: Compression roundtrip property
  • tests/unit/crypto_test.c: Encryption roundtrip and nonce uniqueness
  • tests/unit/palette_test.c: Palette length and UTF-8 boundary properties
  • tests/unit/ringbuffer_test.c: FIFO ordering property
  • tests/unit/mixer_test.c: Audio bounds property
  • tests/unit/ascii_test.c: Image size property
  • tests/unit/buffer_pool_test.c: Allocation roundtrip and pool reuse
  • tests/unit/aspect_ratio_test.c: Aspect ratio preservation

Test Logging Macros

Tests can generate a lot of output, which makes it hard to see what actually matters. We've got some helpful macros in lib/tests/logging.h (included via lib/tests/common.h) that let you control logging during tests.

Test Suite Macros

Quiet logging for a whole suite:

#include "tests/common.h"

TEST_SUITE_WITH_QUIET_LOGGING(my_suite);

Test(my_suite, my_test) {
  // Logging is automatically turned off (goes to /dev/null)
  // Log level is set to LOG_FATAL (only fatal errors show up)
}

Custom log levels:

TEST_SUITE_WITH_QUIET_LOGGING_AND_LOG_LEVELS(
    my_suite,
    LOG_FATAL, // Log level during setup
    LOG_DEBUG, // Log level to restore after
    true,      // Disable stdout
    true       // Disable stderr
);

Debug logging:

TEST_SUITE_WITH_DEBUG_LOGGING(debug_suite);

Test(debug_suite, debug_test) {
  // Logging is enabled with LOG_DEBUG level
  // You can see stdout/stderr for debugging
}

With timeout:

TEST_SUITE_WITH_QUIET_LOGGING(my_suite, .timeout = 10);

Per-Test Logging Macros

Sometimes you just want to quiet things down for part of a test:

Temporarily disable all logging:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE();
  // ... code that should be quiet ...
  // Logging automatically restored when test ends
}

Disable just stdout:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDOUT();
  // stdout is quiet, but stderr still works
}

Disable just stderr:

Test(my_suite, my_test) {
  TEST_LOGGING_TEMPORARILY_DISABLE_STDERR();
  // stderr is quiet, but stdout still works
}

Manual Setup/Teardown

If you need more control, you can set things up manually:

TEST_LOGGING_SETUP_AND_TEARDOWN();

TestSuite(my_suite,
          .init = setup_quiet_test_logging,
          .fini = restore_test_logging);

Low-Level Functions

For maximum control, you can call the functions directly:

// Disable logging
test_logging_disable(true, true); // disable stdout, disable stderr

// Restore logging
test_logging_restore();

// Check if logging is disabled
bool is_disabled = test_logging_is_disabled();

Code Coverage

We use Codecov (https://codecov.io/) to track how much of our code is covered by tests. Coverage reports are automatically generated when code is pushed or pull requests are created, and uploaded to Codecov so you can see what's covered (and what isn't).


Configuration:

Coverage settings are in codecov.yml:

  • Different coverage targets for different parts of the codebase
  • Coverage grouped by functionality
  • Automatic PR comments with coverage reports
  • Coverage thresholds enforced per component

Generating coverage locally:

To build and run tests with coverage instrumentation:

# Configure with coverage enabled
cmake -B build -DCMAKE_BUILD_TYPE=Debug -DASCIICHAT_ENABLE_COVERAGE=ON
# Build and run tests
cmake --build build --target tests
ctest --test-dir build --output-on-failure --parallel 0
# Coverage files are generated as .gcov files
# (They're automatically uploaded in CI, but you can check them locally too)

Coverage organization:

Coverage is organized by test type and platform:

  • ascii-chat-tests-ubuntu-debug: Unit tests on Ubuntu (debug build)
  • ascii-chat-tests-macos-debug: Unit tests on macOS (debug build)
  • ascii-chat-integration-ubuntu: Integration tests on Ubuntu
  • ascii-chat-performance-ubuntu: Performance tests on Ubuntu

Coverage targets:

Different parts of the codebase have different coverage goals:

  • Core server/client: 75% target
  • ASCII engine: 80% target (this is the main feature, so we want good coverage)
  • Audio/video systems: 70% target
  • Platform-specific code: 60% target (informational—some platform code is hard to test)

Best Practices

These are some guidelines we've learned from experience. Following them will help you write tests that are fast, reliable, and easy to maintain.

Writing good tests:

  • Always include lib/tests/common.h: It has everything you need
  • Use quiet logging macros: Tests should be quiet unless something fails
  • Test both success and failure paths: Make sure error handling works too
  • Use theorized tests for properties: Great for roundtrip properties, invariants, etc.
  • Use parameterized tests for similar cases: Don't repeat yourself
  • Keep tests independent: One test shouldn't affect another
  • Name tests clearly: Future you (and others) will thank you

Organizing tests:

  • One test file per module: tests/unit/{module}_test.c is the pattern
  • Group related tests: Use test suites to organize
  • Use fixtures for setup/teardown: Don't repeat setup code in every test
  • Document complex logic: If the test is tricky, explain why

Performance:

  • Unit tests should be fast: We're talking milliseconds here
  • Integration tests can be slower: Seconds are fine for these
  • Use test environment variables: Let the code optimize for testing
  • Skip expensive operations: Tests don't need to do everything the real code does

Debugging:

  • Use -v for verbose output: See what's actually happening
  • Use --no-parallel for sequential execution: Easier to see what's going on
  • Use debug logging suite: When you need to see debug output
  • Use interactive Docker shell: When you need to poke around

CI/CD Integration

Our test suite is integrated with GitHub Actions, so tests run automatically on every push and pull request. This means you'll know right away if something breaks!

What happens automatically:

  • Tests run on every push and PR
  • Tests run on multiple platforms (Linux, macOS)
  • Coverage reports are generated and uploaded to Codecov
  • Test results are reported in JUnit XML format
  • Docker is used for consistent test execution

CI workflows:

The main test workflow is in .github/workflows/test.yml:

  • Automatically discovers and runs all tests
  • Runs tests in parallel for speed
  • Uploads coverage to Codecov
  • Saves test result artifacts

Additional Resources

Want to learn more?

Where to look in the codebase:

  • Test scripts: tests/scripts/
  • Test source: tests/unit/, tests/integration/, tests/performance/
  • Test utilities: lib/tests/
  • Docker config: tests/Dockerfile, tests/docker-compose.yml

Test Utilities

The test utilities in lib/tests/ provide common functionality for writing tests:

Files:

  • logging.h - Logging control utilities for quiet test execution
  • logging.c - Implementation of logging redirection
  • common.h - Common test utilities and environment detection
  • globals.c - Global symbol stubs for linker dependencies

These utilities help tests run reliably by providing logging control, environment detection, and common headers. See the individual file documentation for details.

Need help?

If you're stuck or have questions, feel free to ask! We're here to help. Check out the examples in the codebase—seeing real tests is often the best way to learn.

See also
tests/logging.h
tests/logging.c
tests/common.h
tests/globals.c
Author
Zachary Fogg me@zfo.gg
Date
September 2025

Function Documentation

◆ test_logging_disable()

int test_logging_disable(bool disable_stdout, bool disable_stderr)

#include <logging.c>

Disable stdout/stderr output for quiet test execution.

Parameters
disable_stdout  If true, redirect stdout to /dev/null
disable_stderr  If true, redirect stderr to /dev/null
Returns
0 on success, -1 on error

Definition at line 34 of file tests/logging.c.

{
  // If already disabled, return success
  if (logging_disabled) {
    return 0;
  }

  // Open /dev/null for writing
  dev_null_fd = platform_open("/dev/null", PLATFORM_O_WRONLY);
  if (dev_null_fd == -1) {
    return -1;
  }

  // Save original file descriptors if we need to redirect them
  if (disable_stdout) {
    original_stdout_fd = dup(STDOUT_FILENO);
    if (original_stdout_fd == -1) {
      close(dev_null_fd);
      dev_null_fd = -1;
      return -1;
    }
    dup2(dev_null_fd, STDOUT_FILENO);
    // Reopen stdout to use the redirected file descriptor
    (void)freopen("/dev/null", "w", stdout);
    // Make stdout unbuffered to ensure immediate redirection
    (void)setvbuf(stdout, NULL, _IONBF, 0);
  }

  if (disable_stderr) {
    original_stderr_fd = dup(STDERR_FILENO);
    if (original_stderr_fd == -1) {
      // Clean up stdout if it was redirected
      if (disable_stdout && original_stdout_fd != -1) {
        dup2(original_stdout_fd, STDOUT_FILENO);
        close(original_stdout_fd);
        original_stdout_fd = -1;
      }
      close(dev_null_fd);
      dev_null_fd = -1;
      return -1;
    }
    dup2(dev_null_fd, STDERR_FILENO);
    // Reopen stderr to use the redirected file descriptor
    (void)freopen("/dev/null", "w", stderr);
    // Make stderr unbuffered to ensure immediate redirection
    (void)setvbuf(stderr, NULL, _IONBF, 0);
  }

  logging_disabled = true;
  return 0;
}

References platform_open().

◆ test_logging_is_disabled()

bool test_logging_is_disabled(void)

#include <logging.c>

Check if logging is currently disabled.

Returns
true if logging is disabled, false otherwise

Definition at line 133 of file tests/logging.c.

{
  return logging_disabled;
}

◆ test_logging_restore()

int test_logging_restore(void)

#include <logging.c>

Restore stdout/stderr output after test logging disable.

Returns
0 on success, -1 on error

Definition at line 90 of file tests/logging.c.

{
  // If not disabled, return success
  if (!logging_disabled) {
    return 0;
  }

  // Restore original stdout
  if (original_stdout_fd != -1) {
    dup2(original_stdout_fd, STDOUT_FILENO);
    // Reopen stdout to use the restored file descriptor
    (void)freopen("/dev/stdout", "w", stdout);
    // Restore line buffering for stdout
    (void)setvbuf(stdout, NULL, _IOLBF, 0);
    close(original_stdout_fd);
    original_stdout_fd = -1;
  }

  // Restore original stderr
  if (original_stderr_fd != -1) {
    dup2(original_stderr_fd, STDERR_FILENO);
    // Reopen stderr to use the restored file descriptor
    (void)freopen("/dev/stderr", "w", stderr);
    // Restore unbuffered mode for stderr
    (void)setvbuf(stderr, NULL, _IONBF, 0);
    close(original_stderr_fd);
    original_stderr_fd = -1;
  }

  // Close /dev/null file descriptor
  if (dev_null_fd != -1) {
    close(dev_null_fd);
    dev_null_fd = -1;
  }

  logging_disabled = false;
  return 0;
}