NAME
docs/tests.pod - Testing Parrot
A basic guide to writing tests for Parrot
This is a quick and dirty pointer to how tests for Parrot should be written. The testing system is liable to change in the future, but tests written following the guidelines below should be easy to port into a new test suite.
How to write a test
New tests should be added to *.t files. These test files can be found in the directories t, imcc/t, and languages/*/t. If a new feature is being tested, it might also make sense to create a new *.t file.
The testing framework needs to know how many tests to expect, so the number of planned tests needs to be incremented when adding a new test. This is done near the top of a test file, in a line that looks like:
use Parrot::Test tests => 8;
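For orientation, here is a minimal sketch of what the top of a new test file could look like, using the pasm_output_is function described below; the file name t/op/example.t and the snippet itself are made up for illustration:

# hypothetical t/op/example.t
use strict;
use warnings;

use Parrot::Test tests => 1;    # plan: this file runs exactly one test

pasm_output_is(<<'CODE', <<'OUTPUT', "print an integer");
    print 42
    print "\n"
    end
CODE
42
OUTPUT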
Parrot Assembler
PASM tests are mostly used for testing ops. Appropriate test files for basic ops are t/op/*.t. Parrot Magic Cookies (PMCs) are tested in t/pmc/*.t. Add the new test like this:
pasm_output_is(<<'CODE', <<'OUTPUT', "name for test");
*** a big chunk of assembler, eg:
    print 1
    print "\n"     # you can even comment it if it's obscure
    end            # don't forget this...!
CODE
*** what you expect the output of the chunk to be, eg.
1
OUTPUT
Parrot Intermediate Representation
Tests can also be written in PIR. This is done with pir_output_is and friends.
pir_output_is(<<'CODE', <<'OUT', 'nothing useful');
.include 'library/config.imc'
.sub main @MAIN
    print "hi\n"
.end
CODE
hi
OUT
C source tests
C source tests are usually located in t/src/*.t. A simple test looks like:
c_output_is(<<'CODE', <<'OUTPUT', "name for test");
#include <stdio.h>
#include "parrot/parrot.h"
#include "parrot/embed.h"

static opcode_t *the_test(Parrot_Interp, opcode_t *, opcode_t *);

int main(int argc, char* argv[]) {
    Parrot_Interp interpreter;

    interpreter = Parrot_new(NULL);
    if (!interpreter)
        return 1;

    Parrot_init(interpreter);
    Parrot_run_native(interpreter, the_test);
    printf("done\n");
    fflush(stdout);
    return 0;
}

static opcode_t*
the_test(Parrot_Interp interpreter,
         opcode_t *cur_op, opcode_t *start)
{
    /* Your test goes here. */

    return NULL;    /* always return NULL */
}
CODE
# Anything that might be output prior to "done".
done
OUTPUT
Note that it's always a good idea to output "done" to confirm that the compiled code executed completely. When mixing printf and PIO_printf, always append a fflush(stdout); after the former.
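The following sketch shows that advice in context; the test name, the messages, and the assumption that PIO output is written out before the program exits are illustrative only:

c_output_is(<<'CODE', <<'OUTPUT', "mixing printf and PIO_printf");
#include <stdio.h>
#include "parrot/parrot.h"
#include "parrot/embed.h"

static opcode_t *the_test(Parrot_Interp, opcode_t *, opcode_t *);

int main(int argc, char* argv[]) {
    Parrot_Interp interpreter = Parrot_new(NULL);

    if (!interpreter)
        return 1;

    Parrot_init(interpreter);
    Parrot_run_native(interpreter, the_test);
    printf("done\n");
    fflush(stdout);
    return 0;
}

static opcode_t*
the_test(Parrot_Interp interpreter,
         opcode_t *cur_op, opcode_t *start)
{
    printf("stdio output\n");
    fflush(stdout);     /* flush stdio before switching to PIO output */
    PIO_printf(interpreter, "PIO output\n");
    return NULL;        /* always return NULL */
}
CODE
stdio output
PIO output
done
OUTPUT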
Testing language implementations
Language implementations are usually tested with the test function language_output_is.
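As a sketch only, a test for a hypothetical Scheme implementation could look like this, assuming language_output_is takes the language name first, followed by the usual code, expected output, and description arguments:

language_output_is('scheme', <<'CODE', <<'OUTPUT', "display a string");
(display "hello")
(newline)
CODE
hello
OUTPUT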
Ideal tests:
- Probe the boundaries (including edge cases, errors thrown, etc.) of whatever code they're testing; see the sketch after this list. These should include potentially out-of-band input unless we decide that compilers should check for this themselves.
- Are small and self-contained, so that if the tested feature breaks we can identify where and why quickly.
- Are valid. Essentially, they should conform to the additional documentation that accompanies the feature (if any). [If there isn't any documentation, then feel free to add some and/or complain to the mailing list.]
- Are a chunk of assembler and a chunk of expected output.
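For instance, a small, self-contained test that probes an edge case (an empty string argument) might look like the following sketch; the concat op usage and the expected output are illustrative assumptions rather than a definitive test:

pasm_output_is(<<'CODE', <<'OUTPUT', "concat with an empty string");
    set S0, "abc"
    set S1, ""
    concat S2, S0, S1
    print S2
    print "\n"
    end
CODE
abc
OUTPUT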
TODO tests
In test-driven development, tests are implemented first, so the tests are initially expected to fail. This can be expressed by marking the tests as TODO; see Test::More for how to do that.
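A minimal sketch of what that might look like, using the TODO block mechanism from Test::More (the test, its expected output, and the reason string are made up for illustration, and it is an assumption here that Parrot::Test honours the $TODO variable exactly as Test::More does):

use Parrot::Test tests => 1;

TODO: {
    local $TODO = "feature not yet implemented";

    pasm_output_is(<<'CODE', <<'OUTPUT', "planned behaviour");
    print "expected result\n"
    end
CODE
expected result
OUTPUT
}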