Test::Assertions - a simple set of building blocks for both unit and runtime testing
  #ASSERT does nothing
  use Test::Assertions;

  #ASSERT warns "Assertion failure"...
  use Test::Assertions qw(warn);

  #ASSERT dies with "Assertion failure"...
  use Test::Assertions qw(die);

  #ASSERT warns "Assertion failure"... with stack trace
  use Test::Assertions qw(cluck);

  #ASSERT dies with "Assertion failure"... with stack trace
  use Test::Assertions qw(confess);

  #ASSERT prints ok/not ok
  use Test::Assertions qw(test);

  #Will cause an assertion failure
  ASSERT(1 == 0);

  #Optional message
  ASSERT(0 == 1, "daft");

  #Checks if coderef dies
  ASSERT( DIED( sub {die()} ) );

  #Check if perl compiles OK
  ASSERT( COMPILES('program.pl') );

  #Deep comparisons
  ASSERT( EQUAL(\@a, \@b),
          "lists of widgets match"   # an optional message
  );
  ASSERT( EQUAL(\%a, \%b) );

  #Compare to a canned value
  ASSERT( EQUALS_FILE($foo, 'bar.dat'),
          "value matched stored value" );

  #Compare to a canned value (regex match using file contents as regex)
  ASSERT( MATCHES_FILE($foo, 'bar.regex') );

  #Compare file contents
  ASSERT( FILES_EQUAL('foo.dat', 'bar.dat') );

  #returns 'not ok for Foo::Bar Tests (1 errors in 3 tests)'
  ASSESS( ['ok 1', 'not ok 2', 'A comment', 'ok 3'], 'Foo::Bar Tests', 0 );

  #Collate results from another test script
  ASSESS_FILE("test.pl");

  #File routines
  $success = WRITE_FILE('bar.dat', 'hello world');
  ASSERT( WRITE_FILE('bar.dat', 'hello world'), 'file was written' );

  $string = READ_FILE('example.out');
  ASSERT( READ_FILE('example.out'), 'file has content' );
The helper routines don't need to be used inside ASSERT():
  if ( EQUALS_FILE($string, $filename) ) {
      print "File hasn't changed - skipping\n";
  } else {
      my $rc = run_complex_process($string);
      print "File changed - string was reprocessed with result '$rc'\n";
  }

  ($boolean, $output) = COMPILES('file.pl');

  # or...
  my $string;
  ($boolean, $standard_output) = COMPILES('file.pl', 1, \$string);
  # $string now contains standard error, separate from $standard_output
In test mode:
  use Test::Assertions qw(test);
  plan tests => 4;
  plan tests;    #will attempt to deduce the number
  only(1,2);     #Only report ok/not ok for these tests
  ignore 2;      #Skip this test

  #In test/ok mode...
  use Test::Assertions qw(test/ok);
  ok(1);         #synonym for ASSERT
Test::Assertions provides a convenient set of tools for constructing tests, such as unit tests or run-time assertion checks (like C's ASSERT macro). Unlike some of the Test:: modules available on CPAN, Test::Assertions is not limited to unit test scripts; for example, it can be used to check that output is as expected within a benchmarking script. When it is used for unit tests, it generates output in the standard form for CPAN unit testing (under Test::Harness).
The package's import method is used to control the behaviour of ASSERT: whether it dies, warns, prints 'ok'/'not ok', or does nothing.
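For example, a module might import 'die' mode so that a failed ASSERT throws an exception in production code. The set_width subroutine below is illustrative only, not part of this distribution:

  #A minimal sketch of die-mode as a runtime guard
  use Test::Assertions qw(die);

  sub set_width {
      my ($self, $width) = @_;
      ASSERT(defined $width && $width > 0, "width must be positive");  #dies here on failure
      $self->{width} = $width;
      return $self;
  }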
In 'test' mode the module also exports plan(), only() and ignore() functions. In 'test/ok' mode an ok() function is also exported for compatibility with Test and Test::Harness. The plan() function attempts to count the number of tests if it isn't told a number (this works fine in simple test scripts but not in loops or subroutines). In either mode, a warning is emitted if the planned number of tests differs from the number of tests actually run, e.g.
# Looks like you planned 2 tests but actually ran 1.
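A minimal 'test'-mode script might look like the following sketch; the plan count and the particular assertions are illustrative:

  #!/usr/bin/perl
  use strict;
  use Test::Assertions qw(test);

  plan tests => 3;

  ASSERT(1 + 1 == 2, "arithmetic works");
  ASSERT(DIED(sub { die "boom\n" }), "coderef died as expected");
  ASSERT(EQUAL([1, 2, 3], [1, 2, 3]), "lists match");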
($bool, $desc) = ASSESS(@args)
is equivalent to
($bool, $desc) = INTERPRET(scalar ASSESS(@args))
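For example (the test strings and label below are illustrative):

  #Scalar context: a single result string
  my $summary = ASSESS(['ok 1', 'not ok 2', 'ok 3'], 'Widget tests');

  #List context: a pass/fail flag plus a description
  my ($bool, $desc) = ASSESS(['ok 1', 'not ok 2', 'ok 3'], 'Widget tests');
  print "failed: $desc\n" unless $bool;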
$verbose is an optional boolean; the default timeout is 60 seconds (0 = never time out).
In a scalar context it returns a result string; in a list context it returns 1 = pass or 0 = fail, followed by a description. The timeout uses alarm(), but has no effect on platforms which do not implement alarm().
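For example, assuming the ($file, $verbose, $timeout) calling order implied above (the filename and timeout value are illustrative):

  #Non-verbose, with a 30-second timeout instead of the 60-second default
  my ($ok, $description) = ASSESS_FILE('t/widgets.t', 0, 30);
  warn "test script failed: $description\n" unless $ok;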
In scalar context it returns 1 if the code compiled, 0 otherwise. In list context it returns the same boolean, followed by the output (that is, standard output and standard error combined) of the syntax check.
If $scalar_reference is supplied and is a scalar reference then the standard output and standard error of the syntax check subprocess will be captured separately. Standard error will be put into this scalar - IO::CaptureOutput is loaded on demand to do this - and standard output will be returned as described above.
When Test::Assertions is imported with no arguments, ASSERT is aliased to an empty coderef. If this is still too much runtime overhead for you, you can use a constant to optimise out ASSERT statements at compile time. See the section on runtime testing in Test::Assertions::Manual for a discussion of overheads, some examples and some benchmark results.
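A minimal sketch of the compile-time approach, assuming a TESTING constant and an expensive_check() subroutine of your own choosing:

  use constant TESTING => 0;
  use Test::Assertions qw(die);

  #With TESTING false, perl folds the constant at compile time and removes
  #the whole statement, so neither ASSERT() nor expensive_check() is called.
  ASSERT(expensive_check(), 'invariant holds') if TESTING;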
The following modules are loaded on demand:
Carp, File::Spec, Test::More, File::Compare and IO::CaptureOutput
- Declare ASSERT() with the :assertions attribute in versions of perl >= 5.9 so it can be optimised away at runtime. It should be possible to declare the attribute conditionally in a BEGIN block (with eval) for backwards compatibility.
Test::Assertions::Manual - A guide to using Test::Assertions
$Revision: 1.54 $ on $Date: 2006/08/07 10:44:42 $ by $Author: simonf $
John Alden with additions from Piers Kent and Simon Flack <cpan _at_ bbc _dot_ co _dot_ uk>
(c) BBC 2005. This program is free software; you can redistribute it and/or modify it under the GNU GPL.
See the file COPYING in this distribution, or http://www.gnu.org/licenses/gpl.txt