Failure Assessment

Understanding Test Failures

A test fails when pperl’s output differs from perl5’s output. The harness compares stdout (TAP lines), stderr (warnings), and exit codes.
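
The comparison step can be sketched as follows. This is an illustrative model, not the harness's actual code; the type and function names are hypothetical.

```python
# Hypothetical sketch of the harness's pass/fail decision: a test
# fails if any of stdout, stderr, or the exit code differs between
# the pperl run and the perl5 run.
from dataclasses import dataclass

@dataclass
class RunResult:
    stdout: str     # TAP lines
    stderr: str     # warnings
    exit_code: int

def outputs_match(pperl: RunResult, perl5: RunResult) -> bool:
    return (pperl.stdout == perl5.stdout
            and pperl.stderr == perl5.stderr
            and pperl.exit_code == perl5.exit_code)
```

Note that all three channels must match: a test with identical TAP output can still fail on stderr or exit-code differences.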

Interpreting Results

Verbose Mode

Use -v to see the diff on failure:

t/TEST -v 05-op/010-arithmetic.t

The diff shows exactly where pperl’s output diverged from perl5.
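
A unified diff of this kind can be produced with Python's difflib; this sketch only approximates what -v prints, and the real harness's diff format may differ.

```python
# Illustrative: build a unified diff between perl5's and pperl's
# captured stdout, labeled so the direction of divergence is clear.
import difflib

def show_diff(perl5_out: str, pperl_out: str) -> str:
    return "".join(difflib.unified_diff(
        perl5_out.splitlines(keepends=True),
        pperl_out.splitlines(keepends=True),
        fromfile="perl5", tofile="pperl"))
```

Lines prefixed with - are what perl5 produced; lines prefixed with + are what pperl produced instead.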

Result Files

Check t/.results/LATEST.txt for the most recent full run:

grep "FAIL" t/.results/LATEST.txt      # All failures
grep "05-op" t/.results/LATEST.txt     # Status of operator tests

Common Failure Patterns

Wrong numeric output: pperl’s numeric formatting differs from perl5 (e.g., trailing zeros, precision differences). Check if the underlying computation is correct.
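
One quick triage check for this pattern is to parse both strings and compare them numerically: if they agree within a tolerance, the mismatch is formatting-only. This is a triage aid, not part of the harness.

```python
# Hypothetical helper: decide whether two differing output tokens
# represent the same number (i.e., the mismatch is only formatting,
# such as trailing zeros or exponent notation).
def same_number(a: str, b: str, eps: float = 1e-12) -> bool:
    try:
        return abs(float(a) - float(b)) <= eps
    except ValueError:
        return False  # at least one token is not a number
```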

Missing warning: perl5 emits a warning that pperl doesn't (often "Use of uninitialized value" or similar). The TAP output itself may match, but the stderr comparison fails.

Extra output: pperl emits debug output or extra warnings. Check for leftover trace output.

Timeout: The test exceeded the per-test timeout (default 5s). This may indicate an infinite loop in pperl, or the test may genuinely need more time.
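
A per-test timeout of this kind is typically enforced by killing the child process when the limit expires. A minimal sketch, assuming Python's subprocess module (the command is illustrative, not the real harness invocation):

```python
# Run a command with a hard timeout; a None return code signals
# that the process was killed for exceeding the limit.
import subprocess

def run_with_timeout(cmd, timeout=5):
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, b""
```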

Parse error: pperl’s parser doesn’t handle a construct that perl5 accepts. Common with edge-case syntax.

Missing feature: The test uses a feature pperl hasn’t implemented. Check the error message.

Regression Detection

Compare the current run against the previous one:

t/TEST --compare --skip=bytecode,tier,debug

The comparison report shows:

  • New failures (regressions) — previously passing tests that now fail
  • New passes (improvements) — previously failing tests that now pass
  • Stable failures — tests that were already failing
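
The set arithmetic behind this report is straightforward. A sketch, assuming each run is reduced to the set of failing test names (the function name is hypothetical):

```python
# Compare two runs by their failing-test sets: new failures are
# regressions, tests that dropped out of the fail set are
# improvements, and the intersection is the stable-failure set.
def compare_runs(prev_failed: set, curr_failed: set) -> dict:
    return {
        "regressions": curr_failed - prev_failed,   # newly failing
        "improvements": prev_failed - curr_failed,  # newly passing
        "stable": prev_failed & curr_failed,        # still failing
    }
```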

Regressions are the critical concern. A code change should never introduce new failures.

Triage Categories

Must Fix (Regressions)

Tests that previously passed but now fail. These indicate a bug introduced by recent changes.

Should Fix (Known Failures)

Tests in the expected failure set that should eventually pass as features are implemented.

Won’t Fix (Intentional Differences)

Tests that fail because pperl intentionally differs from perl5 (e.g., always-enabled modern features, no XS support). These should be documented in the test file with a comment explaining why.

Timeout Failures

These may need a --timeout override in the test header, or may indicate a real performance regression.