I recently went on a short course on architectures and engineering tools, so I now have another cert for my file. This is an article about the course and the teaching; I do not intend to write out the course material, only my reflections on the supplied information. Writing these reflections is important ~ otherwise one tends not to apply the course material to one's process ~ but quite time-expensive.
The course notes and quite a few papers are provided with free access via T Gilb's website. If you are interested in your own study, please contact T Gilb, who is focussing more on courses than contracts at present. There was a supplementary course on the Friday on communications, but I needed to get back to work.

Core precepts

I claim to be able to do the majority of the activities T Gilb sets as objectives, and have been paid for doing them. However, T Gilb does them on a scale substantially larger than mine; apparently, at the scale he is contracted for, failure is quite common (I normally justify my expense). He states that failure is mostly to do with the first level of documentation, which people are very busy writing, but writing wrongly, with an ineffective approach.
In all of his experience, it is always necessary to model a large number of stakeholders, each of whom wants different things. His process is numerically focussed, so he can state that he only fails when data is not supplied to him. To clarify: he will focus his budget on the best-known method to build the most important thing; but if he is mis-told what the most important thing is, he will fail to produce a valuable outcome. The large number of stakeholders means any requirements and process will be quite complex to compute. Again, his analytical technique carries this concept: he focusses on the most important stakeholder and their most important ten goals. This provides a known improvement on the biggest thing. As he normally controls his budget situation, further objectives can be pursued afterwards.

I will supply this document to my paperwork-shy “agile” colleagues, as an example of paperwork being a useful tool to generate the right results inside an Agile environment (if you can't type, docs are slow, but so is coding). As an extension of this, correctness is important and accuracy is important, but precision isn't. My idea which is roughly four times faster is still worth investing in; but if working out the gain precisely costs more than the return, it isn't worth investing in ~ not even the paperwork.

He states something I had never analysed before: we are rarely building a new system ~ we are building a better version of what we have. For example, banking hasn't changed since the 1800s; new computer systems are deployed every few years which decrease costs, increase performance and increase the userbase. We ~ the technical people ~ should only sell on these values. The new system MUST perform better than the current one (unlike the five working days for a cheque to clear, done by a PC, which was three working days in the 1970s, done by hand). He also talked about the difference between engineering and science. My first degree is in computer science: I make experiments, and write them up afterwards. Failure is an allowed outcome, as I am doing something new. With engineering, I am building towards definite goals, with a known and tested process. This is spending someone's money to give them something; failure isn't one of the options.

T Gilb's modus operandi is to take the current plan and redraft it: to decompose the majority of sentences into three heaps ~ functions, qualities and solutions. There should be NO solutions in the initial specification. These are choices which should come later in the process. As just written, the functions are largely predictable, and should not be affected by marketplace competition. If you provide a different thing, the users wanting the original thing will ignore you; if you provide a better thing than your competition, they will talk with you. To purge the solutions, which are likely to be listed, keep asking “why” (a la the Japanese “five whys” management technique). To clarify with an example: you do not want a password on your system ~ it's poor usability; you want a secure system, with data integrity. As everything is rendered to a number, there should be consistent quality in the specification. This means it is easier to produce the right thing.
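The three-heap decomposition can be sketched mechanically. This is my own illustration, not Gilb's notation ~ the statements and their tags are invented ~ but it shows the shape of the exercise: sort classified requirement statements into heaps, then flag the solutions for deferral to the design stage.

```python
# Sorting pre-classified requirement statements into Gilb's three heaps.
# The tags and example statements are invented for illustration.
from collections import defaultdict

REQUIREMENTS = [
    ("function", "Transfer money between accounts"),
    ("quality",  "A transfer completes within 5 seconds, 99% of the time"),
    ("solution", "Use a message queue between the ledger services"),
    ("solution", "Protect accounts with a password"),
    ("quality",  "No unauthorised party can read account data"),
]

def split_into_heaps(reqs):
    """Group (kind, text) pairs into a dict of heaps."""
    heaps = defaultdict(list)
    for kind, text in reqs:
        heaps[kind].append(text)
    return heaps

heaps = split_into_heaps(REQUIREMENTS)

# Solutions should NOT appear in the initial specification;
# keep asking "why" until they become qualities, and defer the choice.
for solution in heaps["solution"]:
    print("defer to design stage:", solution)
```

The password statement is the example from the text: asking “why” turns it into the quality “no unauthorised party can read account data”, which stays in the specification.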

There should be no attachment to solutions, and these should not be listed in the requirements. If this process is followed, one may try a solution and, if it doesn't work, swap it out for a second. It was not discussed how this plays with iterative development. T Gilb's process goal is that the biggest known-cost changes come first, so some visible improvement to the system occurs at the start. K Gilb states that all features can be free at a really low quality, so small quick alterations allow “a solution”. The iterative structures were designed for “new systems”, so whatever solution one presented was a new thing.

What are my words/ communication?

This section is under this title because T Gilb uses more structured language than I do. He adds emphasis to certain terms. The words he uses reduce length and, after good definition, are quite orthogonal to each other.

For the last several years, I have been doing implementation planning under the titles “what”, “why” and “how”. About 60 or 70% of “how” is how to test. These documents are as concise as practical; not massively formal; and support the fact that I write better English when I do it at a separate time to writing code. They live either in the code repository along with the source, or in the project tracking software.
I am not in an architect role at present, and so am not doing architecture. As a comparison ~ at ms ~ although under a developer role, I did a large amount of non-feature value creation. I changed what the directors (who also did client work) needed to do in order to present solutions. To take a single example, I spent about 2wd to remove about 0.5wd from every subsequent piece of work, by redefining the process (and automating 50% of it). If one is using a lightweight Agile process, this is as much of an “architect” role as the company needed.
In future, I will try to get the commercial people (representative of the actual external clients) to add a “value” section to larger bits of work: what are the values and goals that I am fulfilling? This is one of the points that T Gilb describes for reducing failure.
T Gilb is actually retired, and so had his education a long time ago. The “time and motion” studies were fashionable in the 1950s/1960s, and his initial understanding of “management” and “engineering” is influenced by that time ~ everything should be calculable as a number. To broaden the scope of his abilities, he also applies the numerical-analysis perspective to words. Weighted terms for the Gilbs are “clear”, “quantitative” and “clarify”; “define” and “concrete” should be, but they didn't mention them. As they are selling a management and concentration process, rather than skills in any domain, detail on the language used is universally important. They claim to push for all key nouns in documents being defined, to reduce ambiguity, and I guess to reduce verbosity (when a word is defined precisely, it may cover several sentences which would otherwise be needed).
The biggest difference between this and other processes that I have read is the enforcement of quantitative output for the initial change request. From a developer's or designer's perspective this generates definite bounds on what is required, so the right solution is justifiable. If you need to effect a large change in outcomes, it is obvious you need to effect a change in the way you do things; numerical analysis at this stage is therefore valuable. Conversely, a small change shouldn't require a rewrite of anything. Being able to use the consultant's skills so that management express their goals in terms of quantifiable results, not emotive ambitions, is very useful. This balances the quantitative analysis on the developers' activities/performance. The quantified changes listed in the specification should also list the quantified changes the competition have made recently / are expected to make.

T Gilb dislikes the terms “MUST” and “SHALL” (with “MUST NOT” and “SHALL NOT”). He states these are inflexible, and fail to support the number of different concurrent requirements that need consideration. Looking at his stakeholder analysis, I would tend to agree. I have only seen these terms used in RFCs, where I think they are justified: if all parties are going to communicate, they MUST talk the same language (pun intentional). Sensible RFCs do not constrain your implementation (e.g. RFC 2866), but some of the commercially created ones are someone's implementation plan with the confidential bits stripped [1]. RFCs are not supposed to be implementation plans, or requirements lists. They are supposed to be detailed contracts for the boundaries between entities: a document on what software MUST do, so that any piece of compliant software may talk with any other compliant software. What use is SQL if we don't agree how to say things?

What do I gain

For writing requirements (not implementation plans), one lists impacts (not transforms). Given a written quantified scale of what change is needed, T Gilb does a few estimates of what changes could be performed to effect it. The documents he uses have many conditional clauses in them, and resemble CSS. Where the proposed solutions and the supplied resources don't match, he renegotiates ~ early, before much is spent. His paperwork process allows this type of discussion after a few days. Another of his terms is “ambition”. Due to there being a large number of stakeholders, renegotiation is quite common. For example, if one is commissioned to rebuild the interior of a public-use mansion, and the owner is thinking of a large spiral staircase in marble, the stakeholder for legal compliance will need to be consulted on the best way to keep them happy regarding wheelchair access. The person paying is clearly the most important stakeholder, but legal requirements cannot be avoided.
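The estimating step ~ impacts of candidate solutions against quantified goals, per unit of resource ~ can be illustrated with a toy calculation. This is a hedged sketch of the idea only: the solution names, impact percentages and costs are invented, and the arithmetic is my simplification, not Gilb's actual impact-estimation tables.

```python
# Toy impact estimation: each candidate solution estimates what percentage
# of each goal's required improvement it delivers, and its cost as a
# percentage of the available budget. All figures are invented.
solutions = {
    "RewriteBatchJob": {"impacts": {"Speed": 60, "Uptime": 10}, "cost": 30},
    "AddCacheLayer":   {"impacts": {"Speed": 40, "Uptime": 0},  "cost": 10},
    "DualDataCentre":  {"impacts": {"Speed": 0,  "Uptime": 80}, "cost": 50},
}

def benefit_to_cost(sol):
    """Summed impact across goals, divided by cost: improvement per unit budget."""
    return sum(sol["impacts"].values()) / sol["cost"]

# Rank so the biggest improvement per unit of budget surfaces first,
# which supports early renegotiation when resources don't match.
ranked = sorted(solutions, key=lambda name: benefit_to_cost(solutions[name]),
                reverse=True)
print(ranked)  # → ['AddCacheLayer', 'RewriteBatchJob', 'DualDataCentre']
```

When the top-ranked solutions still cannot reach the required levels within the supplied resources, that mismatch is visible in the numbers early ~ which is exactly the point at which renegotiation happens.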

T Gilb has a symbol for scales, which looks like an old-technology pole ladder, or a slider dial. As everything is quantified, it is clearer/faster to read in graphical form. His quantified scales look like this:

    Past [2012, en_UK, DataElements=All, Purpose=Testing]
        40% ± 30% percentage of code tested <- TsG

    NewProcessReleaseDate [late2013, en_UK, DataElements=All, Purpose=Testing]
        60% ± 10% percentage of code tested <- OAB1
    NewProcessReleaseDate [late2013, ES, DataElements=All, Purpose=Testing]
        30% ± 10% percentage of code tested <- OAB1
    NewProcessReleaseDate [late2013, cn_HK, DataElements=All, Purpose=Testing]
        30% ± 10% percentage of code tested <- OAB1

To express that in English: the testing must be to a known saturation, or it is not much use. Not everything needs to be tested, which is why the target is only 60%. With the example, there are definite bounds on the change needed, so “does it work” is simple to compute. The big reason for this change is to reduce ambiguity on what is tested, so we can sign things off. My initials are on the new state, as I set the required saturation (it's the ratio between utility code and decision-making code; one needs to test the latter). After all the decision-making code is tested in English, some testing is needed for the other languages, but not as much. I hope it is obvious which requirements representation is most useful, and which can be speed-read.
As previously stated, everything is accurate, but not necessarily precise. A good example of this trend is GPS, when co-ordinates are quoted to 4 or 5 decimal places. Note that every datum is notarised with a source, so we may check its validity. Otherwise “nameless authority” leads to a lack of taking responsibility, and quality dropping off.
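Targets quantified this way lend themselves to a mechanical “does it work” check. A small sketch follows ~ the helper class is my own illustration, not Gilb's tooling, though the numbers mirror the en_UK target above, and the `source` field carries the notarisation so no datum comes from a nameless authority.

```python
# Checking a measurement against a quantified target with a ± tolerance band.
# The Target class is an invented illustration of the idea, not Gilb notation.
from dataclasses import dataclass

@dataclass
class Target:
    qualifier: str    # e.g. "late2013, en_UK, Purpose=Testing"
    level: float      # required percentage of code tested
    tolerance: float  # the ± band around the level
    source: str       # who set the number, so it can be checked

    def met_by(self, measured: float) -> bool:
        """A measurement passes if it is within tolerance of the level."""
        return measured >= self.level - self.tolerance

en_uk = Target("late2013, en_UK, Purpose=Testing", 60, 10, "OAB1")
print(en_uk.met_by(55))  # True: 55% is inside the 60% ± 10% band
print(en_uk.met_by(45))  # False: clearly below the requirement
```

The definite bounds mean sign-off is a comparison, not a discussion; anyone disputing the level itself knows from `source` whom to ask.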

[1] There is an RFC that I am thinking of, written under the Microsoft flag, which is a list of their API calls, probably version-locked to a specific version of Windows. I have read this, but need to find it again to justify my claim.