NAME
Test2::Aggregate - Aggregate tests for increased speed
SYNOPSIS
    use Test2::Aggregate;

    Test2::Aggregate::run_tests(
        dirs => \@test_dirs
    );

    done_testing();
VERSION
Version 0.11_3
DESCRIPTION
Aggregates all tests specified with dirs (which can even be individual tests) to avoid the forking, reloading etc. of separate test processes, which can help performance (dramatically, if you have numerous small tests) and also facilitates group profiling. Test files are expected to end in .t and are run as subtests of a single aggregate test.
A bit similar (mainly in intent) to Test::Aggregate, but no inspiration was drawn from the specific module, so simpler in concept and execution, which makes it more likely to work with your test suite (especially if you use modern tools like Test2). It does not even try to package each test by default, which may be good or bad (e.g. redefines), depending on your requirements.
Generally, the way to use this module is to try to aggregate sets of quick tests (e.g. unit tests). Try to iteratively add tests to the aggregator, dropping those that do not work. Trying an entire suite in one go is a bad idea: an incompatible test can break the run, failing all the subsequent tests (especially when doing things like globally redefining built-ins etc). The module can usually work with Test::More suites, but will have more issues than when you use the more modern Test2::Suite (see notes).
METHODS
run_tests
    my $stats = Test2::Aggregate::run_tests(
        dirs          => \@dirs,              # optional if lists defined
        lists         => \@lists,             # optional if dirs defined
        exclude       => qr/exclude_regex/,   # optional
        include       => qr/include_regex/,   # optional
        root          => '/testroot/',        # optional
        load_modules  => \@modules,           # optional
        package       => 0,                   # optional
        shuffle       => 0,                   # optional
        sort          => 0,                   # optional
        reverse       => 0,                   # optional
        unique        => 1,                   # optional
        repeat        => 1,                   # optional, requires Test2::Plugin::BailOnFail for < 0
        slow          => 0,                   # optional
        override      => \%override,          # optional, requires Sub::Override
        stats_output  => $stats_output_path,  # optional
        extend_stats  => 0,                   # optional
        test_warnings => 0,                   # optional
        pre_eval      => $code_to_eval,       # optional
        dry_run       => 0                    # optional
    );
Runs the aggregate tests. Returns a hashref with stats like this:
    $stats = {
        'test.t' => {
            'test_no'   => 1,                 # numbering starts at 1
            'pass_perc' => 100,               # for single runs pass/fail is 100/0
            'timestamp' => '20190705T145043', # start of test
            'time'      => '0.1732',          # seconds - only with stats_output
            'warnings'  => $STDERR            # only with test_warnings, on non-empty STDERR
        }
    };
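For instance, the returned hashref lends itself to quick reporting. A minimal sketch (the %stats contents below are made up, standing in for an actual run_tests result):

```perl
use strict;
use warnings;

# Hypothetical stats, shaped like the run_tests return value above.
my %stats = (
    'fast.t' => { test_no => 1, pass_perc => 100, time => '0.0210' },
    'slow.t' => { test_no => 2, pass_perc => 0,   time => '1.7320' },
);

# List any tests that did not pass every run.
my @failed = sort grep { $stats{$_}{pass_perc} < 100 } keys %stats;
print "Failed: @failed\n";      # Failed: slow.t

# Find the slowest test (the time key requires stats_output to be set).
my ($slowest) = sort { $stats{$b}{time} <=> $stats{$a}{time} } keys %stats;
print "Slowest: $slowest\n";    # Slowest: slow.t
```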
The parameters to pass:
dirs
(either this or lists is required) An arrayref containing directories which will be searched recursively, or even individual tests. The directories (unless shuffle or reverse are true) will be processed and the tests run in the order specified. Test files are expected to end in .t.

lists
(either this or dirs is required) Arrayref of flat files, each line of which will be pushed to dirs (so they have lower precedence - note that root still applies, so don't include it in the paths inside the list files). If a path does not exist, it will be silently ignored; however, the "official" way to skip a line without checking it as a path is to start it with a # (comment).

exclude
(optional) A regex to filter out tests that you want excluded.
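A flat file passed via lists might look like this hypothetical example (paths relative to root, one per line):

```
# unit tests (comment lines are skipped)
t/unit/basic.t
t/unit/api.t
# t/unit/flaky.t - temporarily disabled
t/integration/fast.t
```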
include
(optional) A regex which the tests have to match in order to be included in the test run. Applied after exclude.

root
(optional) If defined, must be a valid root directory that will prefix all dirs and lists items. You may want to set it to './' if you want dirs relative to the current directory and the dot is not in your @INC.

load_modules
(optional) Arrayref of modules to be loaded (with eval "use ...") at the start of the test. Useful for testing modules with special namespace requirements.

package
(optional) Will package each test in its own namespace. While it will help avoid things like redefine warnings, it may break some tests when aggregating them, so it is disabled by default.
override
(optional) Pass Sub::Override compatible key/values as a hashref.
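Sub::Override maps fully qualified sub names to replacement coderefs, so the hashref could look like this sketch (the module and sub names are hypothetical):

```perl
use strict;
use warnings;

# Hypothetical override hashref: fully qualified sub names mapped to
# replacement coderefs, the key/value shape Sub::Override expects.
my %override = (
    'My::App::Config::env' => sub { 'testing' },
    'My::App::now'         => sub { '20190705T145043' },
);

# Would then be passed as:
#   Test2::Aggregate::run_tests(..., override => \%override);
print $override{'My::App::Config::env'}->(), "\n";    # testing
```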
repeat
(optional) Number of times to repeat the test(s) (default is 1, a single run). If repeat is negative, Test2::Plugin::BailOnFail is required, as the tests will repeat until they bail on a failure. It can be combined with test_warnings, in which case a warning will also cause the test run to end.

unique
(optional) From v0.11, duplicate tests are removed from the running list by default, as duplicates could mess up the stats output. You can still set it to false to allow duplicate tests in the list.
shuffle
(optional) Random order of tests if set to true. Will override sort.

sort
(optional) Sort tests alphabetically if set to true. Provides a way to fix the test order across systems.
reverse
(optional) Reverse order of tests if set to true.
slow
(optional) When true, tests will be skipped if the environment variable SKIP_SLOW is set.

test_warnings
(optional) If set to true, checks for warnings across all the tests: a final test is added which expects zero as the number of tests that had STDERR output. The STDERR output of each test will be printed at the end of the test run (and included in the test run result hash), so if you want to see warnings the moment they are generated, leave this option disabled.
dry_run
(optional) Instead of running the tests, will do ok($testname) for each one. Otherwise, test order, stats files etc. will be produced normally.

pre_eval
(optional) String with code to pass to eval before each test. For example:

    pre_eval => "no warnings 'redefine';"
Passing the above will silence redefine warnings, but only if you don't set warnings subsequently in the test.
stats_output
(optional) stats_output specifies a path where a file will be created to print out the running time per test (averaged if there are multiple iterations) and the passing percentage. Output is sorted from slowest to fastest test. On a negative repeat, the stats of each successful run are written separately instead of the averages. The name of the file is caller_script-YYYYMMDDTHHmmss.txt. If '-' is passed instead of a path, the output is written to STDOUT. The timing stats are useful because the test harness doesn't normally measure time per subtest (remember, your individual aggregated tests become subtests). If you prefer to capture the hash output of the function and use that for your reports, you still need to define stats_output to enable timing (just send the output to /dev/null, /tmp etc).

extend_stats
(optional) This option exists to keep the default output format of stats_output fixed, while still allowing additions in future versions that will only be written when the extend_stats option is enabled. Additions with extend_stats as of the current version:

- starting date/time in ISO 8601.
USAGE NOTES
Not all tests can be modified to run under the aggregator; it is not intended for tests that require an isolated environment, do overrides etc. For other tests, which can potentially run under the aggregator, sometimes very simple changes are needed, like giving unique names to subs (or not warning on redefines, or trying the package option), replacing things that complain, restoring the environment at the end of the test etc.
Unit tests are usually great for aggregating. You could use the hash that run_tests
returns in a script that tries to add more tests automatically to an aggregate list to see which added tests passed and keep them, dropping failures.
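Such a script could be sketched as a greedy loop; grow_aggregate_list and its $run_aggregate coderef below are hypothetical, the latter standing in for a wrapper around Test2::Aggregate::run_tests that returns its stats hashref:

```perl
use strict;
use warnings;

# Greedily grow an aggregate list: keep a candidate only if the trial
# run (current list plus the candidate) has every test passing.
sub grow_aggregate_list {
    my ($candidates, $run_aggregate) = @_;
    my @kept;
    for my $test (@$candidates) {
        my $stats = $run_aggregate->([@kept, $test]);
        my $all_pass = !grep { $_->{pass_perc} < 100 } values %$stats;
        push @kept, $test if $all_pass;
    }
    return \@kept;
}

# Demo with a stub runner that fails any run containing 'bad.t'.
my $stub = sub {
    my ($tests) = @_;
    return { map { $_ => { pass_perc => $_ eq 'bad.t' ? 0 : 100 } } @$tests };
};
my $kept = grow_aggregate_list([qw(a.t bad.t b.t)], $stub);
print "@$kept\n";    # a.t b.t
```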
Trying to aggregate too many tests into a single one can be counter-productive, as you would ideally want to parallelize your test suite (a super-long aggregated test that keeps running after the rest are done will slow down the suite). In general, more tests will manage to run aggregated if they are grouped so that tests which can't be aggregated together end up in different groups.
In general, you can call Test2::Aggregate::run_tests multiple times in a test, and even load run_tests with tests that already contain another run_tests call. The only real issue with multiple calls is that if you use repeat < 0 on one call, Test2::Plugin::BailOnFail is loaded, so any subsequent failure on any following run_tests call will trigger a bail.
Test::More
If you haven't switched to the Test2::Suite you are generally advised to do so for a number of reasons, compatibility with this module being only a very minor one. If you are stuck with a Test::More suite, Test2::Aggregate can still probably help you more than the similarly-named Test::Aggregate...
modules.
Although the module tries to load Test2 with minimal imports so as not to interfere, it is generally better to do use Test::More; in your aggregating test (i.e. alongside use Test2::Aggregate).
One more caveat is that Test2::Aggregate::run_tests uses subtest from the Test2::Suite, which on rare occasions can return a true value when a Test::More subtest fails by running no tests, so you could have a failed test show up with a pass_perc of 100 in the Test2::Aggregate::run_tests output.
$ENV{AGGREGATE_TESTS}
The environment variable AGGREGATE_TESTS will be set while the tests are running, for your convenience. One example use is having a test that you know cannot run under the aggregator check the variable and croak if it is set; another is a module that can only be loaded once, which you load in the aggregating test file and then load conditionally in the individual test files with something like this:
    eval 'use My::Module' unless $ENV{AGGREGATE_TESTS};
You can also make a test abort/die if you know it should not be aggregated by checking the variable.
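For example, a guard like this sketch at the top of a test file that must never be aggregated (the message is made up):

```perl
use strict;
use warnings;

# Abort early when running under Test2::Aggregate, which sets
# the AGGREGATE_TESTS environment variable.
die "This test cannot run aggregated (needs an isolated process)\n"
    if $ENV{AGGREGATE_TESTS};

print "running stand-alone\n";
```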
AUTHOR
Dimitrios Kechagias, <dkechag at cpan.org>
BUGS
Please report any bugs or feature requests to bug-test2-aggregate at rt.cpan.org
, or through the web interface at https://rt.cpan.org/NoAuth/ReportBug.html?Queue=Test2-Aggregate. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
GIT
https://github.com/SpareRoom/Test2-Aggregate
COPYRIGHT & LICENSE
Copyright (C) 2019, SpareRoom.com
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.