NAME

Test::Fetchware - Provides testing subroutines for App::Fetchware.

VERSION

version 1.016

SYNOPSIS

use Test::Fetchware ':TESTING';

eval_ok($code, $expected_exception_text_or_regex, $test_name);
eval_ok(sub { some_code_that_dies()},
    <<EOE, 'checked some_code_that_dies() exception');
some_code_that_dies() died with this message!
EOE
eval_ok(sub { some_code_whose_messages_change() },
    qr/A regex that matches some_code_whose_messages_change() error message/,
    'checked some_code_whose_messages_change() exception');

print_ok(\&printer, $expected, $test_name);
print_ok(sub { some_func_that_prints()},
    \$expected, 'checked some_func_that_prints() printed $expected');
print_ok(sub {some_func_that_prints()},
    qr/some regex that matches what some_func_that_prints() prints/,
    'checked some_func_that_prints() printed matched expected regex');
print_ok(sub { some_func_that_prints()},
    sub { # A coderef that returns true if some_func_that_prints() printed
        # what it should print, and false if it did not.
    }, 'checked some_func_that_prints() printed matched coderef expectations.');

subtest 'some subtest that tests fetchware' => sub {
    skip_all_unless_release_testing();

    # ... Your tests go here that will be skipped unless
    # FETCHWARE_RELEASE_TESTING among other env vars are set properly.
};

make_clean();

my $test_dist_path = make_test_dist(
    file_name => $file_name,
    ver_num => $ver_num,
    # These are all optional...
    destination_directory => rel2abs($destination_directory),
    fetchwarefile => $fetchwarefile,
    # You can only specify fetchwarefile *or* append_option.
    append_option => q{fetchware_option 'some value';},
    configure => <<EOF,
#!/bin/sh

# A test ./configure for testing ./configure failure...it always fails.

echo "fetchware: ./configure failed!
# Return failure exit status to truly indicate failure.
exit 1
EOF
    makefile => <<EOF,
# Test Makefile.
all:
    sh -c 'echo "fetchware make failed!"'
EOF
);

my $md5sum_file_path = md5sum_file($archive_to_md5);


my $expected_filename_listing = expected_filename_listing();

DESCRIPTION

This module provides miscellaneous subroutines that App::Fetchware's test suite uses. Some are quite specific, such as make_test_dist(), while others are simple subroutines replacing entire CPAN modules, such as eval_ok() (similar to Test::Exception) and print_ok() (similar to Test::Output). I wrote them instead of adding the CPAN dependencies, because each amounted to a relatively simple function that I could easily write and test, and because those modules' interfaces disagreed with me.

TESTING SUBROUTINES

eval_ok()

eval_ok($code, $expected_exception_text_or_regex, $test_name);

Executes the $code coderef, and compares its thrown exception, $@, to $expected_exception_text_or_regex, and uses $test_name as the name for the test if provided.

If $expected_exception_text_or_regex is a string then Test::More's is() is used, and if $expected_exception_text_or_regex is a 'Regexp' according to ref(), then like() is used, which will treat $expected_exception_text_or_regex as a regex instead of as just a string.
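
For example, a minimal sketch using hypothetical code that die()s; the messages here are illustrative only:

eval_ok(sub { die "failed to open file!\n" },
    "failed to open file!\n", 'checked exact exception text with is()');

eval_ok(sub { die "failed to open file at line 42.\n" },
    qr/failed to open file/, 'checked exception against a regex with like()');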

print_ok()

print_ok(\&printer, $expected, $test_name);

Tests if $expected is in the output that \&printer->() produces on STDOUT.

It passes $test_name along to the underlying Test::More function that it uses to do the test.

$expected can be a SCALAR, Regexp, or CODEREF as returned by Perl's ref() function.

  • If $expected is a SCALAR according to ref()

    • Then use eq to determine if the test passes.

  • If $expected is a Regexp according to ref()

    • Then use a regex comparison just like Test::More's like() function.

  • If $expected is a CODEREF according to ref()

    • Then execute the coderef with a copy of the $printer's STDOUT and use the result of that expression to determine if the test passed or failed.

    NOTICE: print_ok()'s manipulation of STDOUT only works in the current Perl process. STDOUT may be inherited by fork()ed children, but, for reasons beyond my knowledge of Perl and Unix, print_ok() does not work for testing what fork()ed and exec()ed processes print, such as those executed with run_prog().

    I also have not tested other possibilities, such as using IO::Handle to manipulate STDOUT or tie()ing STDOUT like Test::Output does, but those methods probably would not survive a fork() and exec() either.
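
For example, a sketch of the CODEREF form, assuming (per the description above) that the captured STDOUT is passed as the coderef's first argument; the printing subroutine and its output are hypothetical:

print_ok(sub { print "fetchware: copied 3 files.\n" },
    sub {
        my $stdout = shift; # a copy of what the printer printed
        return $stdout =~ /copied/ && $stdout =~ /3 files/;
    }, 'checked printed output with a coderef');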

fork_ok()

fork_ok(\&code_fork_should_do, $test_name);

Properly fork()s, runs the caller's provided coderef in the child, and tests that the child's exit status is 0 for success using a simple ok() call from Test::More. The child's exit status is controlled by the caller based on what &code_fork_should_do returns: if &code_fork_should_do returns true, then the child exits with 0 for success, and if it returns false, then the child exits with 1 for failure.

Because the fork()ed child is a copy of the current perl process, you can still access whatever Test::More or Test::Fetchware testing subroutines you may have imported for use in the test file that uses fork_ok().

This testing helper subroutine exists only for testing fetchware's command line interface. This interface is fetchware's run() subroutine and the fetchware program as actually executed from the command line, such as fetchware help.
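
A hedged sketch of typical use; the child code and its check are hypothetical:

fork_ok(sub {
    # Runs in the fork()ed child; a true return value makes the child
    # exit with 0 (success), a false one makes it exit with 1 (failure).
    my $output = `fetchware help`;
    return $output =~ /fetchware/;
}, 'checked fetchware help in a forked child');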

WARNING

fork_ok() has a major bug: any tests you attempt to run in &code_fork_should_do never report their failure properly to Test::Builder, and any success is not reported either. This is not fork_ok()'s fault; it is Test::Builder's fault for still not having support for forking. This lack of support for forking may be fixed in Test::Builder 1.5 or perhaps 2.0, but those are still in development.

fork_not_ok()

fork_not_ok(&code_fork_should_do, $test_name);

The exact same thing as fork_ok() except that it expects failure: it reports success when the provided coderef returns false, and reports failure to the test suite when the provided coderef returns true.

The same warnings and problems associated with fork_ok() apply to fork_not_ok().

skip_all_unless_release_testing()

subtest 'some subtest that tests fetchware' => sub {
    skip_all_unless_release_testing();

    # ... Your tests go here that will be skipped unless
    # FETCHWARE_RELEASE_TESTING among other env vars are set properly.
};

Skips all tests in your test file or subtest() if fetchware's testing environment variable, FETCHWARE_RELEASE_TESTING, is not set to its proper value. See "2. Call skip_all_unless_release_testing() as needed" in App::Fetchware for more information.

WARNING

If you call skip_all_unless_release_testing() in your main test file without being enclosed inside a subtest, then skip_all_unless_release_testing() will skip all of your tests from that point on till the end of the file, so be careful where you use it, or just use it only in subtests to be safe.

make_clean()

make_clean();

Runs make clean and then chdirs to the parent directory. This subroutine is used in build() and install()'s test scripts to run make clean between test runs. If you override build() or install(), you may wish to use make_clean() to automate this for you.

make_clean() also makes some simple checks to ensure that you are not running it inside fetchware's own build directory. If it detects this, it calls BAIL_OUT() to indicate that the test file has gone crazy and is about to do something it shouldn't.
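
A minimal sketch of typical use between test runs; the directory name is hypothetical:

chdir $build_dir or fail("chdir to the build directory failed: $!");
# ... run your build() or install() tests here ...
make_clean(); # runs `make clean`, then chdirs back to the parent directory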

make_test_dist()

my $test_dist_path = make_test_dist(
    file_name => $file_name,
    ver_num => $ver_num,
    # These are all optional...
    destination_directory => rel2abs($destination_directory),
    fetchwarefile => $fetchwarefile,
    # You can only specify fetchwarefile *or* append_option.
    append_option => q{fetchware_option 'some value';},
    configure => <<EOF,
#!/bin/sh

# A test ./configure for testing ./configure failure...it always fails.

echo "fetchware: ./configure failed!
# Return failure exit status to truly indicate failure.
exit 1
EOF
    makefile => <<EOF,
# Test Makefile.
all:
    sh -c 'echo "fetchware make failed!"'
EOF
);

Makes a $file_name-$ver_num.fpkg fetchware package that can be used for testing fetchware's functionality without actually installing anything.

Reuses create_tempdir() to create a temp directory that the test dist's files are put in. Then an archive is created in original_cwd() (the current working directory before you called make_test_dist()) or in $destination_directory if provided. After the archive is created, make_test_dist() deletes the $temp_dir using cleanup_tempdir().

If $destination_directory is not provided as an argument, then make_test_dist() will just use tmpdir(), File::Spec's location for your system's temporary directory.

Returns the full path to the created test-dist fetchware package.

make_test_dist() supports customizing the Fetchwarefile, ./configure, and Makefile of the generated test dist (a minimal example follows the list below):

  • fetchwarefile - option takes a string that will be written to disk as that test dist's actual Fetchwarefile.

  • append_option - option conflicts with the fetchwarefile option, so only one or the other can be used at a time. append_option quite literally just appends a fetchware option (or any other string) to the default Fetchwarefile.

  • configure - option takes a string that will completely replace the default ./configure file in your generated test dist. fetchware expects this file to be a shell script, but it will probably transition to being a perl script for better Windows support in the future.

  • makefile - option takes a string that will completely replace the default Makefile that is placed in your generated test dist. This file is expected to actually be a real Makefile.
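
For contrast with the full example above, a minimal call needs only the two required arguments; the values shown are hypothetical:

my $test_dist_path = make_test_dist(
    file_name => 'test-dist',
    ver_num   => '1.00',
);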

WARNING

When you specify your own $destination_directory, you must also ensure that its permissions are 0755, because during testing fetchware may drop_privs(), causing it to lose its ability to access the $destination_directory. Therefore, when specifying your own $destination_directory, please chmod it to 0755 to ensure fetchware's child can still access the test distribution in your $destination_directory.

md5sum_file()

my $md5sum_file_path = md5sum_file($archive_to_md5);

Uses Digest::MD5 to generate an md5sum just like the md5sum program does, but instead of returning the output, it returns the full path to a file containing the md5sum, named "$archive_to_md5.md5".
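
For example; the archive path here is hypothetical:

my $md5sum_file_path = md5sum_file('/tmp/test-dist-1.00.fpkg');
# $md5sum_file_path is now '/tmp/test-dist-1.00.fpkg.md5'.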

expected_filename_listing()

cmp_deeply($got_filelisting, eval(expected_filename_listing()),
    'test name');

Returns a crazy string meant for use with Test::Deep for testing that Apache directory listings have been parsed correctly by lookup().

You must surround expected_filename_listing() with an eval, because Test::Deep's crazy subroutines for creating complex data structure tests are actual subroutines that need to be executed. They are not strings that can just be returned by expected_filename_listing() and then forwarded along to Test::Deep; they must be executed:

cmp_deeply($got_filelisting, eval(expected_filename_listing()),
    'test name');

verbose_on()

verbose_on();

Just turns $fetchware::verbose on by setting it to 1. It does not do anything else. There is no corresponding verbose_off(), just a verbose_on().

Meant to be used in test suites, so that you can see any vmsg()s that print during testing for debugging purposes.

export_ok()

export_ok($sorted_subs, $sorted_export);

my @api_subs
    = qw(start lookup download verify unarchive build install uninstall);
export_ok(\@api_subs, \@TestPackage::EXPORT);

Just loops over @{$sorted_subs}, an array ref, and ensures that each element matches the corresponding element of @{$sorted_export}. You do not have to pre-sort these array refs, because export_ok() will copy them and sort the copies. Uses Test::More's pass() or fail() for each element in the arrays.

end_ok()
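
end_ok($temp_dir);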

Because end() no longer uses File::Temp's cleanup() to delete all File::Temp managed temporary directories when end() is called, you can no longer test end() with a simple ok(not -e $temp_dir, $test_name); instead, you should use this testing subroutine. It tests if the specified $temp_dir still has a locked 'fetchware.sem' fetchware semaphore file. If the file is not locked, then end_ok() reports success, but if it cannot obtain a lock, end_ok() reports failure, simply using ok().

add_prefix_if_nonroot()

my $prefix = add_prefix_if_nonroot();

my $callbacks_return_value = add_prefix_if_nonroot(sub { a callback });

fetchware is designed to be run as root and to install system software in system directories requiring root privileges. But fetchware is flexible enough to let you specify where you want the software you're installing to be installed via the prefix configuration option. When run, this subroutine creates a temporary directory in File::Spec's tmpdir(), and then it directly runs config() itself to create this configuration option for you.

However, if you supply a coderef, add_prefix_if_nonroot() will call your coderef instead of using config() directly. If your callback returns a scalar, such as the temporary directory that add_prefix_if_nonroot() normally returns, that scalar is also returned back to the caller.

It returns the path of the prefix that it configured for use, or it returns false if its conditions were not met, causing it not to add a prefix.
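
A hedged sketch of the callback form, assuming only what is described above; the tempdir() template is illustrative:

use File::Temp 'tempdir';

my $prefix = add_prefix_if_nonroot(sub {
    # Called instead of the default config() call; whatever scalar this
    # callback returns is passed back to add_prefix_if_nonroot()'s caller.
    my $temp_prefix = tempdir('fetchware-test-XXXXXX', TMPDIR => 1);
    return $temp_prefix;
});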

create_test_fetchwarefile()

my $fetchwarefile_path = create_test_fetchwarefile($fetchwarefile_content);

Writes the provided $fetchwarefile_content to a Fetchwarefile inside a File::Temp::tempfile(), and returns that file's path, $fetchwarefile_path.

rmdashr_ok()

rmdashr_ok($dir_to_recursive_delete, $test_message);

Recursively deletes the specified directory using File::Path's remove_tree() subroutine. Returns nothing, but does call Test::More's ok() for you with your $test_message if remove_tree() was successful.

NOTE:

rmdashr_ok() reports its test as PASS if any number of files were successfully deleted. It only reports FAIL if no directories were deleted. Verbose information about exactly which files were deleted, any errors, and the number of errors/warnings and successfully deleted files is printed using Test::More's note(), which only shows its output if prove(1)'s -v switch is used.

ERRORS

As with the rest of App::Fetchware, Test::Fetchware does not return any error codes; instead, all errors are die()'d if it's Test::Fetchware's error, or croak()'d if it's the caller's fault. These exceptions are simple strings, and are usually more than one line long to help further describe the problem and make fixing it easier.

SEE ALSO

Test::Exception is similar to Test::Fetchware's eval_ok().

Test::Output is similar to Test::Fetchware's print_ok().

AUTHOR

David Yingling <deeelwy@gmail.com>

COPYRIGHT AND LICENSE

This software is copyright (c) 2016 by David Yingling.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.