The first article I started has gotten too long, so this is the second half. This is written after the fact, but the first thing I did with the feature change on the website is the F series test cases listed at the end. The test cases which were crashing halfway through are the reason the change was needed. When I raised this with the jQuery module author ~ A Wulf ~ he fixed it; secondly, he added basic list renumbering. As I have an en_UK market, I am sticking to en_UK spelling where possible. Obviously the uploads to GitHub are left in en-US.

As I wrote in the previous article, this artifact was an exploratory project. I don't often write like this, as it is hard to do performance benchmarking on an exploratory basis. For this project, I have iterated the requirements a few times, and this process added to the code complexity by a large margin. The simple numbering added by A Wulf is about 10 lines of code, covering the basic use case. As I am using numbered lists as an output artifact (lists where ordering isn't meaningful are un-ordered lists), this isn't acceptable. Until I had the requirements as a written list, I was applying the columnisation to articles and seeing what broke. I find this odd, and unschedulable.
As soon as I had the behaviour requirements, I generated the qunit test cases. The test cases are complicated, as there are different expectations on different screen sizes. It is not acceptable to need to adjust each test case manually, so the test cases had to open browser windows at specific sizes. This step needs to go via an actual browser window because, as far as I can tell, A Wulf is bravely taking the size of the rendered letters after they are in the screen buffer. It took some time to get the test boundary conditions detailed as jQuery selectors.
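Because the module works from rendered pixel sizes, a test boundary condition boils down to "does this element fit in the space left in its column?". The helper below is a minimal sketch of that idea; the function and parameter names are my own, not the module's, and the jQuery lines in the comment show how it would be fed from real rendered sizes.

```javascript
// Hypothetical helper: decide whether an element at the end of a column
// would need to be split, given pixel heights measured from the page.
// willSplit() is my own name, not part of A Wulf's module.
function willSplit(elementHeightPx, columnHeightPx, usedHeightPx) {
  var remaining = columnHeightPx - usedHeightPx;
  // An element taller than the remaining column space must be split.
  return elementHeightPx > remaining;
}

// In a qunit test this would be driven from rendered sizes, e.g.:
//   var h = $('#article p').eq(3).outerHeight();
//   assert.ok(willSplit(h, columnHeight, usedHeight), 'boundary element splits');
```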
The F series was the first fault (elements with a large pixel size at the end of a column make it crash ~ a regression bug in the jQuery module). As mentioned in the first article, I attempted a solution with CSS content, but this broke the layout, so it was useless. After this, the “split” class was incorrectly added to too many elements. The resultant code in the test cases is too complex to be a good test case. I needed to fix the test cases, which then proved my final copy of the library code was correct.
The current edition is 200 lines of code, as one visible function, which I intend to refactor. As a user-centric thing, I need to add another jQuery module to provide a good interface for window resize events, so the columnisation may be un-applied and re-applied (as it is done by looking at rendered text, it is necessary to do it like this, after any other responsive CSS is applied).
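The resize handling described above can be sketched as follows, assuming a hypothetical `columnise()`/`uncolumnise()` pair (the real module's API may differ). The debounce is plain JS, so the re-measure only happens once the user stops dragging the window edge rather than on every resize event.

```javascript
// Minimal debounce: delays fn until waitMs of quiet have passed since
// the last call, collapsing a storm of resize events into one run.
function debounce(fn, waitMs) {
  var timer = null;
  return function () {
    clearTimeout(timer);
    timer = setTimeout(fn, waitMs);
  };
}

// Browser-only wiring (jQuery), shown for context; columnise() and
// uncolumnise() are assumed names, not the module's real API:
// $(window).on('resize', debounce(function () {
//   uncolumnise('#article');  // strip the generated columns
//   // ...responsive CSS settles here...
//   columnise('#article');    // re-measure rendered text, re-apply
// }, 200));
```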

The following failures are against a maximised browser on a 1280x1024 display. This is a boundary edge-case, and will have different failure points on a different-sized screen. The fault is actually a null pointer getting passed into a function when the module is splitting elements.
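The kind of guard that avoids this crash looks roughly like the sketch below. The shape is my guess at the failure mode (the column boundary falling exactly on an element edge leaves the split routine holding a null node); `safeSplit` and its parameters are hypothetical, not the module's real code.

```javascript
// Hypothetical guard: the split routine can be handed a null node when
// the column boundary lands exactly on an element edge. Returning null
// ("no split needed") is safer than letting the split function crash.
function safeSplit(node, offsetPx, splitFn) {
  if (node === null || node === undefined) {
    return null; // nothing to split at this boundary
  }
  return splitFn(node, offsetPx);
}
```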

Having listed failures in my third-party dependency, I thought I ought to be diligent. Thus this is one of the few volunteer modules to have tests. For the purpose of these tests, it is necessary to control window size, and so layout; therefore the first file opens the other ones. Having unit tests that require manual interaction is a contradiction.

  • c1 base file ~ this loads the rest...
  • c2 ~ runs twice with different sizes
  • c3 ~ runs twice with different sizes
  • c4 ~ the basic renumbering done by A Wulf in the newest edition.
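The c1 base file controls layout by opening the other test pages in windows of known size. The sketch below shows the shape of that harness; the run list follows the files above, but the exact sizes and the helper name are my own assumptions.

```javascript
// Build the third argument for window.open() from a target size.
// openFeatures() is a hypothetical helper, not from the test files.
function openFeatures(width, height) {
  return 'width=' + width + ',height=' + height;
}

// c2 and c3 each run twice at different sizes; c4 checks renumbering.
// The 1280x1024 / 800x600 pairing is an assumed example.
var runs = [
  { page: 'c2.html', width: 1280, height: 1024 },
  { page: 'c2.html', width: 800,  height: 600 },
  { page: 'c3.html', width: 1280, height: 1024 },
  { page: 'c3.html', width: 800,  height: 600 },
  { page: 'c4.html', width: 1280, height: 1024 }
];

// Browser-only, shown for context:
// runs.forEach(function (r) {
//   window.open(r.page, '_blank', openFeatures(r.width, r.height));
// });
```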

These tests are proper test cases (rather than the test data of the preceding error reports), and require qunit if you copy them to your server. Everything in the files is done on the client side. When you run the test cases, you may wonder why the test items have these weird names attached to them. I was somewhat tired by the time I got up to making these test cases, and all the tests care about is the numerical prefix ~ they just count things. I thought a corruption of prime minister W Churchill's 1940 “we have nothing but twigs and damp tissues, but we will fight; we are Empire.” speech was funny. Thus “we will count” until my eyes are square.
Then you will ask why one of the tests fails, yet I submitted it. If you read the test description, you will note it is expected to fail when there is a split; the test item afterwards retests that case. To get that test not to fail would invalidate too many conditions.