Source is available: or . The second half of this article is mostly implementation discussion.

A well-designed unit test gives one confidence in one's unit, reduces development time, and improves API design. A unit test is a definite long-term asset, as it allows later changes at low risk. If you don't want to write in objects, this tool isn't going to help you. Throughout this document “skelgen” is a contraction of “skeleton-generator”: code that auto-magically builds a class outline and attempts to fill in function definitions where possible; likewise “codegen” is a condensed “code-generator”.

Other Perl tools.

Perl does have a testing culture, although it isn't OO and doesn't match that of other languages. Most modules on CPAN ship with some tests. Why develop punit? Most of the existing tools aren't focussed on objects, and they don't supply “effort saving” features. Test coverage is best supported by reducing the time to create each test, ideally down to just the decisions in the solution-space. I think it is a big win to not leave your decision space when making the API.

The following is a non-exhaustive list (more are available):

  • Test::Skeleton::Generator ~ an alternative library with similar functionality to this project. It is less attached to objects, which I think are necessary for code reuse and therefore cheap development. As I intend to add annotation support to my project, it was faster to start with a fresh codebase. People are encouraged to have a quick look at this library as well ~ if you aren't attached to objects either, it may suit you better.
  • Test::More ~ the basic library for recent versions of Perl; see the sketch after this list.
  • Test::Exception ~ just for exceptions.
  • Test::Common ~ a wrapper module, to reduce the number of specific test libraries that you need to import.
  • Test::Assert ~ a library to provide logical condition statements, with an API similar to xUnit.
  • Test::Class ~ an xUnit library that seems more useful. Unfortunately it lacks the skelgen.
  • Test::Unit ~ a procedural test framework.
  • Test::MockObject ~ I haven't used this, but reflection in Perl is quite easy.
  • Test::Extreme ~ this claims to be a “perlish xUnit” library. It doesn't ship with any type of skelgen.
  • Test::Builder ~ not a codegen, despite the name. It is a framework to manage TAP output and improve inter-operation between test libraries.
  • Test::Subs ~ I went to a hackday, and this is one of the things that I took home with me. Test::Subs looks like a useful API, although I have already coded with Test::Assert. This is a small piece of a solution.
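As a point of reference for the list above, here is a minimal sketch of the Test::More style that most of these libraries build on. The Widget class and its methods are hypothetical testees, used only for illustration:

  # t/widget.t ~ a minimal, hypothetical Test::More script
  use strict;
  use warnings;
  use Test::More;

  use_ok('Widget');                       # the module loads
  my $w = Widget->new( colour => 'red' ); # hypothetical constructor
  isa_ok( $w, 'Widget' );                 # constructor returns the right class
  is( $w->colour, 'red', 'constructor stores the colour' );
  ok( $w->can('resize'), 'the API includes resize()' );

  done_testing();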

Perl vs Other OO languages.

Chronologically Perl isn't my first language, nor is it my first enterprise language. I expect a test framework / platform to be in place before I start using a language. I tend towards writing in objects, as the scoping and responsibility mapping give you a better code structure. In business terms, code like this is cheaper to run, and may flex to the current situation more easily.
The other languages that I use tend to be more forceful about using objects, and so ship with object-based unit testers (Java specifically). I have a lot of commercial experience with PHP, but have only ever written objects with it; I wrote tests, via a variety of methods, for every single class that I made. I write objects in JS, and where it makes sense to have the JS object separate from the DOM, I write unit tests in JS (where the code is too coupled to the DOM, I create a test page, technically my R&D copy).

What I want from a unit test framework is to be able to express decisions in a concise language (e.g. assigning the same password twice to a user object is an error, and should raise an exception; see the sketch below). Tests normally include a lot of bureaucracy to make managing the population easier; a good framework will automate this paperwork. A good test framework will also address things like orthogonality, test isolation, and test repeatability (e.g. a test for a delete function must be callable more than once).
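As a sketch of what “expressing a decision” means in practice, the password rule above might read as follows; User and set_password are hypothetical names, and the exception message is an assumption:

  use strict;
  use warnings;
  use Test::More;
  use Test::Exception;

  use User;   # hypothetical testee class

  my $user = User->new( name => 'alice' );
  lives_ok  { $user->set_password('s3cret') } 'first password assignment is fine';
  throws_ok { $user->set_password('s3cret') } qr/already set/,
            'assigning the same password twice raises an exception';

  done_testing();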


Functional spec for punit

This tool SHOULD :

  • Create tests for units/ classes.
  • Create at least one test per testee-class function.
  • Under the current analysis, ignore private functions.
  • Support Fixture structures.
  • Implement the logic conditions (e.g. assert*) as outlined in other unit-test suites.
  • Where there is output, make this TAP compliant.
  • Be linkable into other Perl tools.
  • As an extension of the previous point, be runnable with prove (see the sketch after this list).
  • Automate as much as possible, so using this tool reduces time to solution.
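To make the TAP and prove points concrete: any test script that prints TAP can be driven by the standard prove harness. The file name below is only an example, and the output is abridged:

  $ prove -l t/01-widget.t
  t/01-widget.t .. ok
  All tests successful.

The raw TAP that such a script emits is plain text of roughly this form:

  1..3
  ok 1 - constructor returns a Widget
  ok 2 - colour is stored
  ok 3 - resize() accepts a positive size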

This tool SHOULD NOT:

  • Force a particular OO style on the operator.
  • Create tests that are fragile and so need a lot of test maintenance. This point applies in particular to comment parsing in the second edition.

Implementation discussion

My goal is to have results, more than to write a lot of code. As such, I will import other libraries where that assists my goals. The presence of Moose and similar libraries will complicate the test framework. Performance of the tool that builds test cases isn't seen as an important issue. This tool shouldn't be run as a part of CI or regression testing, as 100% automation is not a practical goal; the remaining hand-coding will still need to be done.
The mechanisms in Test::Common and Test::Assert fight each other, so those two cannot be used together. I will test each of the classes in this project using existing simpler test tools (which I trust to have already been tested).
A minimal implementation would create the skeleton unit tests, provide the assert-type interface, and provide the harness to execute the test case with. In practical terms unit tests are run over populations of classes, and the harness manages the execution of each. A more useful implementation would import the annotation grammar from jUnit / phpunit-skelgen [1]. This would allow the creation of tests in a more logical context (and it is faster).
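As an illustration only ~ the class, method, and fixture names below are assumptions, not punit's actual output ~ a generated skeleton for a hypothetical Widget class might look something like this, with one test method per public function and a setup() whose constructor parameters still need hand-editing. Test::Class is used here simply as a familiar xUnit-style base:

  package Test::Widget;
  use strict;
  use warnings;
  use parent 'Test::Class';
  use Test::More;

  use Widget;   # hypothetical testee

  # fixture: build a default testee; the operator edits the params
  sub setup : Test(setup) {
      my ($self) = @_;
      $self->{widget} = Widget->new( colour => 'red' );   # TODO: real params
  }

  # one generated test per public method of Widget
  sub test_resize : Test(1) {
      my ($self) = @_;
      ok( $self->{widget}->can('resize'), 'resize() exists' );   # TODO: real assertions
  }

  1;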

The requirement for ignoring private functions is common to test tools. One can argue that there shouldn't be too many private functions, and certainly, as they are implementation details, they should be free to change API. In practical terms, I will probably make this a config item, to allow better inter-operation with different coding standards. If there is an algorithmic way to automate the creation of setup() functions, I haven't seen it. I will create code to build a default testee object, but the parameters fed to this will need to be edited by the operator. It would be very useful to guess these auto-magically, but that is hard for test conditions.

In practical terms, in other languages, I define the API (as interfaces) at the end of the requirements stage. This is mostly assigning responsibilities to the vague concepts of classes. I define the test conditions against this API. Creating this executable test plan frequently requires altering the parameters fed to the API, but this is a valuable verification exercise on the interface. As this is just an interface, change is quite cheap. I then generate the test cases (including filling in the manual bits), so there is an executable unit test covering all the algorithmically important points. People who dislike OO have commented that OO breaks up and hides algorithms. I think that this depends on how you write the code. I do create algorithms, and use objects, and have testable code.

I asked CPAN; apparently one writes one's code (I guess managed via Bitbucket or GitHub, for VCS), then uploads an edition as a tarball to CPAN. One gets a PAUSE account to identify the source. This is different, as every other OSS platform I have used allows access to a VCS.

One of the required features for a skelgen is to map the API of the testee class. Any Perl developer will be aware of UNIVERSAL, and of being able to read a class's symbol table. There will need to be some verification to confirm a method is defined in the current class, rather than imported. I think it is more correct to only build test methods for the current testee class's own methods; if you need to test an ancestor, define a separate unit test for the ancestor. This is safer and easier to manage when APIs change. I may be able to just use something like Symbol::Util ~ to be tested. EDIT: This feature is implemented as discussed.
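A minimal sketch of that mapping, assuming the leading-underscore convention marks private subs; the function name testable_methods is hypothetical and this is illustration rather than punit's actual code:

  use strict;
  use warnings;
  use B ();

  # List the public subs that are defined in $class itself,
  # skipping imports and (by convention) leading-underscore privates.
  sub testable_methods {
      my ($class) = @_;
      no strict 'refs';
      my @methods;
      for my $name ( keys %{"${class}::"} ) {
          next if $name =~ /^_/;                        # config item: skip privates
          my $code = *{"${class}::${name}"}{CODE} or next;
          # B reports the package the sub was compiled in,
          # so subs imported from elsewhere (e.g. via Exporter) are filtered out.
          my $origin = B::svref_2object($code)->GV->STASH->NAME;
          push @methods, $name if $origin eq $class;
      }
      return sort @methods;
  }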

In practical terms, I haven't worked out whether it is better to use a larger number of skinny libraries ~ better testing ~ or fewer external dependencies, so it's easier to start using my code. Unit tests should never be constructed on the actual operational servers, so having a long list of requirements isn't an authorisation headache. Where a library has narrow functionality and there are odd errors, it may be faster to write my own. It is a hard decision: write 30 lines of code, or manage the test cases that ship with an external module. The test cases shipped with such a module will obviously be to the standard of the earlier tools, as that code was written earlier.

Please see the next article.


[1] I am unable to find any docs for the @assert feature of jUnit.

