This content continues from punit. The source is on GitHub.

...Although this code will create complete unit tests as far as it is able, there are likely to be company-specific or project-specific things it cannot know about. For example, some tests I wrote for the MVP edition of this tool (with which the current project shares no code) all needed DBIx::Class and a transaction library imported and instantiated. In normal OO fashion, the skelgen defines some empty methods that child classes may override to inject additional behaviour in a safe fashion.
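
A minimal sketch of such a hook, assuming the generator can be subclassed; every class and method name here is illustrative rather than the tool's real API:

package My::Company::SkelGen;
use strict;
use warnings;
use parent 'punit::SkelGen';    # hypothetical base class name

# Hypothetical hook: return extra 'use' lines for the generator
# to write into the header of every test file it creates.
sub extra_use_lines {
    my ($self) = @_;
    return (
        'use DBIx::Class;',
        'use My::Company::TxnHelper;',
    );
}

1;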

The created test files must be runnable via prove, but this shouldn't prove a challenge: the generated file tests whether it is running in the main context, and when it is, it instantiates and runs itself. Why do I think it valuable to write these tests back out to disk? a) my unit coverage tools require this; b) and mostly b, all of the generated tests I made in PHP needed adjusting a bit by hand. My parser is more forgiving, but I assume the same will happen here. Having test coverage that doesn't prove anything is annoying.
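
A minimal sketch of that run-when-main idiom (often called the "modulino" pattern), assuming the generated test is a class with new() and run() methods; the real generated layout may differ:

package t::Example::Test;
use strict;
use warnings;

sub new { return bless {}, shift }

sub run {
    my ($self) = @_;
    # ... execute the generated test methods here ...
}

# Executed directly (e.g. via prove) there is no caller, so the file
# instantiates and runs itself; loaded as a module, this line does nothing.
__PACKAGE__->new->run unless caller();

1;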

I am programming with exceptions. They are a useful feature and allow you to reduce your code volume by a large amount: all the error handling that amounts to a failure (so the program can't continue) may be done in one trap. Some people think that they slow the platform down too much (see assembly traces with and without the extra stack information), at which point I quote the cost of making software versus the cost of a faster machine. I wish to avoid the comparison of exceptions versus no error handling at all; obviously it costs less not to do the work.
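
A minimal sketch of that one-trap style, using the Try::Tiny and Exception::Class modules from the dependency list below; the pipeline function and exception class name are stand-ins of my own:

use strict;
use warnings;
use Try::Tiny;
use Exception::Class ( 'punit::X::IO' );    # illustrative exception class

sub generate_tests {    # stub standing in for the real pipeline
    my ($file) = @_;
    -r $file or punit::X::IO->throw( error => "cannot read $file" );
    # ... parse the testee, build and write the test file ...
}

try {
    generate_tests( $ARGV[0] // 'lib/Example.pm' );
}
catch {
    # the single trap for everything that amounts to "cannot continue"
    warn "generation failed: $_";
    exit 1;
};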

Some people will consider it limiting that this tool will only map objects. I am an OO developer, but more to the point, this style of testing is purely for OO. I am sure other people could take the tool and make it support non-OO modules; noting that if you don't split code into units, you can't write unit tests.

UPDATE 19th Dec 2014: I have spent quite a few hours in the last several days writing code for this. As ever, a carefully planned, strong OO structure with a lot of tests takes only twice the time of my initial rushed MVP hack. The extra time goes on dealing with edge cases, so the tool is more useful to other people. I am using the Test::Assert library at present, which as stated isn't emitting valid TAP. So far I have spent three working days building this source. I will publish to GitHub today.

UPDATE 21st Dec 2014: I have found no way to make Test::Assert work with TAP, so have stripped it out. I fixed the method enumerator so it can return private functions depending on flags, and so it will only return methods from the current class. The tests that ship with the code demonstrate a unit test with private functions generated as complete tests. I am not allowing existing tests to be overwritten, given the manual work that may have gone into them. I am making everything UTF-8, as the developer's primary language may not be English. As the first iteration is done, I have deleted most of my hidden planning notes from this page.
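
A minimal sketch of the UTF-8 handling, assuming explicit encoding layers on both file handles; the paths are illustrative:

use strict;
use warnings;

my ($source_path, $test_path) = ('lib/Example.pm', 't/Example.t');

# read the testee and write the generated test as UTF-8, so
# non-English identifiers and comments survive the round trip
open my $in,  '<:encoding(UTF-8)', $source_path or die "open $source_path: $!";
open my $out, '>:encoding(UTF-8)', $test_path   or die "open $test_path: $!";
print {$out} $_ while <$in>;
close $out or die "close $test_path: $!";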

UPDATE: More work is needed on the exceptions. As Test::Exception allows matches by exception type, this shouldn't be much effort. Error-text matching is useful for die() statements, but I frequently edit error texts, so matching on text is brittle.
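
Test::Exception supports both styles, so matching by class should be straightforward; the throwaway Demo class below exists only to make the example runnable:

use strict;
use warnings;
use Test::More;
use Test::Exception;
use Exception::Class ( 'punit::X::IO' );    # illustrative exception class

my $obj = bless {}, 'Demo';
sub Demo::funcD { punit::X::IO->throw( error => 'disk gone' ) }
sub Demo::funcE { die "no such file: demo\n" }

# match by exception class: robust against error-text edits
throws_ok { $obj->funcD() } 'punit::X::IO', 'funcD throws the IO exception';

# match die() text with a regex: works, but brittle when messages change
throws_ok { $obj->funcE() } qr/no such file/, 'funcE dies with the expected text';

done_testing();
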
UPDATE: the library path is hacked for my own setup; other users should use the published edition.

The first version of this project that I pushed into GitHub had good test coverage (I think only the exceptions were excluded). Unfortunately the newer features are awkward to test. I am not concerned that there is no test for the HELP_MESSAGE function, as it just prints a text literal. As a “be nice” feature, the tool will now create missing 't' directories. I can test that the directory exists when I run in normal mode, but that test code would have the same level of complexity as the implementation code. I am manually loading the testee class, or the function enumerator doesn't do anything. This wasn't expressed before, as running the code against its own classes obviously hides the effect.
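
A minimal sketch of that manual load step, using the Module::Util module from the dependency list; the load_testee name is mine, not the tool's:

use strict;
use warnings;
use Module::Util qw( find_installed );

# resolve the testee's name to a file and load it, so the function
# enumerator has a populated symbol table to walk
sub load_testee {
    my ($class) = @_;
    my $path = find_installed($class)
        or die "cannot find $class anywhere on \@INC";
    require $path;
    return $class;
}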

UPDATE Jan 2016: I went to a Perl hackday and took this code. I got advice on the best Perl parsing solution, and wrote a second edition which reads @assert comments. The syntax is the same as the @asserts of JUnit or PHPUnit. A test may use the following logical conditions (illustrated just after this list):

  • '==' equals
  • '!=' not equals
  • '===' same as (implemented as a deep compare)
  • '!==' not same as
  • '<', '>', '>=', '<=' logical tests
  • 'isa', '!isa' implemented with the isa logical assert
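
Some illustrative @assert lines covering these operators; the methods named here are made up for the example:

# @assert $obj->count() == 3
# @assert $obj->name() != ""
# @assert $obj->clone() === $obj
# @assert $obj->clone() !== $other
# @assert $obj->size() >= 1
# @assert $obj->maker() isa "My::Factory"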

I attach several samples in the Data directory. According to my normal metrics, I need to split the IOAccess class as it's too large. I couldn't write this section as TDD, as I did not have a clue how I was going to build it at the start. The code supports two common assert formats, but I expect real use will require additional '@assert parsing lines'. When the code is run, a warning of “ADD MORE CODE HERE” means that none of the existing parsers matched your @assert line.

assert_true( ref($ret) eq "" );
assert_true( length($ret) > 0 );

UPDATE April 2016: I have now refactored that class, and added more unit tests. I have added a @NOTEST flag to suppress test generation where you don't think it's useful. I added another unit-test format from another Perl test library (see next):
@assert (1, 3) == 1
That is, it just takes the parameter list from the invocation line, and the expected result. I don't think this is as useful, as it stops you manipulating the return value.
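
Assuming the annotated sub is named funcB, that line might expand to something like the following; the assert function name is a guess in the style of the other generated asserts:

# from: @assert (1, 3) == 1
assert_equals( $obj->funcB(1, 3), 1 );
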
On completion of this refactoring, I note I should remove the B library section, as it duplicates the more relevant PPI. This would make the RAM footprint smaller and probably make the tool faster. As B is a core library, removing it makes no difference to the dependencies. I did a few hours' work, and completed this in April.

UPDATE June 2016: I have improved the output of the attached unit tests a bit. Please note that as this is about test frameworks, some of these tests are supposed to fail, to show the failure code doesn't crash. I don't have a simpler means to test the fail branch of a test framework without doing what I did.
Second note: don't forget you can't have several builds of the same unit test lying around, as it confuses the Perl class loader.

Sample usage

From a module that looks like the following (not full class):


# funcC ~ blither blither
# remember this is test code, it doesn't do anything
# @assert $obj->funcC() === $obj
# @assert $obj->funcC() === $obj "a useful comment on what the test does"
sub funcC {
    return $_[0];
}
would produce the following unit test (not full class):

sub testfuncC {
    my ($self) = @_;

    assert_deep_equals($obj->funcC(), $obj, "punit::t::Data::SampleClass#35");
    assert_deep_equals($obj->funcC(), $obj, "a useful comment on what the test does");
}

Q: Why do I think this is useful?

A: It's very useful for continuous integration, and for code that is mostly decision making.

Current dependencies

All these modules are available from CPAN, and in many cases will already be installed.

  • utf8 ~ core
  • B ~ core
  • Exporter ~ core
  • Exception::Class
  • Try::Tiny
  • PPI
  • Module::Util
  • Data::Dumper ~ for use in the generated tests; my code doesn't need it
  • Test::More ~ used in the generated unit tests
  • Test::Exception ~ used in the generated unit tests
  • Scalar::Util
