I am an Agile developer (something many people will claim), but I think the older terminology is useful here, as it shows the progress of objectives better than “it's Agile, innit!”. My focus has spiralled over time. I emphasise here in the intro that this is a real scenario, and not something made up for a blog article. If you have no interest in managing software creation, I suggest you do something other than read this article. As someone working at new startups, I build a product; then another.
- Most of these points fit into one-week sprints, with a clearly defined and testable deliverable at the end of each item. As you would expect, the features that required less work were not in a sprint by themselves.
- Items marked 'feature' are things you talk about to clients/leads.
- Every item with time spent on 'performance' was to scale up to client-expected data volumes, rather than assuming everyone is a patient developer.
- Most items marked 'quality' do one of three things: a) make some observable improvement to features; b) reduce RAM use, so the product can run on a wider variety of client machines; c) increase readability, so later dev work is faster, more responsive and cheaper.
At the start I am adding features (which tends to slow the movement of user stories); at the end I am adding quality (which increases user stickiness and reduces user churn).
At my current employer, I am responsible for making the GUIs, for UX decisions, for the software doing client communication, etc. We have a lot of numerical data, which is best presented as a graph (or just not presented). My CTO is a physics grad, and “does graphs” as an academic accomplishment. On graphing for the second product, I did the following, in chronological order:
- [FEATURE] Chose a graph library which supported responsive web design (RWD), as it was SVG-based. Added enough code that it could render DB records.
- [FEATURE] Added visual features, so that it complied with the CTO's “graphing rules” (e.g., unlike Excel, smart rounding rules on axis labels).
- [QUALITY] At this point, everything relating to data was quick and low quality; add tests to my REST API code (and make changes), and report bugs to the C team, who were generating this data.
- [PERFORMANCE] Most of the time cost of getting a graph is extracting data from the DB. Make the REST API run faster by using more complex SQL; secondly, leverage some more advanced features of the DB schema.
- [PERFORMANCE] Use the REST library/middleware better, so we can support longer periods with a sensible execution delay.
- [FEATURE] Add graphing for more types of data (same DB).
- [FEATURE] Add the ability to update a graph without removing/re-adding it (used when changing the report period).
- [PERFORMANCE] In networking, change from every action fulfilment having its own queue to a shared network queue. I was unable to find OSS for this, so wrote the entire feature. This is a better architecture, gives much higher performance for the overall system, and removes a class of network issues.
- [QUALITY] As a follow-on from the last point, deal with network timeouts (a frequent 'feature' of the RAM-starved VPN endpoint). Implementation 1.
- [QUALITY] Actually design a colour scheme, and add a much better layout with CSS.
- [QUALITY] Stop data coming out of the middleware in random order, as this makes the graphs incorrect. Implementation 1.
- [PERFORMANCE] Juggle work inside the REST API, trimming another 10% off the execution delay.
- [FEATURE] Add an extra composite graph type, which shows trends very clearly; this requires the next change.
- [QUALITY] Improve the use of Models and Collections inside the client-side code. Side effects: it's easier to read, it uses less RAM, another graph type becomes possible, and the logging is more coherent.
- [QUALITY] Demonstrate the graphs' current RWD behaviour (and slightly improve the CSS to make this work nicely).
- [QUALITY] Sort the raw data at the last possible point, as this stops a few edge cases occurring, at the expense of more RAM on the client. Random-order data, implementation 2.
- [FEATURE] Add a 'print colour scheme', immediately to support the marketing team; although I can see the same work being needed at a client's office.
- [PERFORMANCE] Juggle the code layout so it uses permanent processes, and is incidentally faster. Also, developers from other languages find ObjectComposition easier to read.
- [QUALITY] Add another Service layer to graph generation, simplifying the earlier code. This also improves persistence of state with regards to user preferences in graphs.
- [QUALITY] Add an extra layer of events to be able to spot a different set of network timeouts. Implementation 2.
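To make a few of these items concrete, here are some sketches. The “smart rounding rules on axis labels” can be approximated with the classic nice-numbers approach; the function names and API here are mine, not from the codebase:

```javascript
// Round a range to a "nice" number: 1, 2, 5 or 10 times a power of ten.
function niceNum(range, round) {
  const exponent = Math.floor(Math.log10(range));
  const fraction = range / Math.pow(10, exponent);
  let niceFraction;
  if (round) {
    if (fraction < 1.5) niceFraction = 1;
    else if (fraction < 3) niceFraction = 2;
    else if (fraction < 7) niceFraction = 5;
    else niceFraction = 10;
  } else {
    if (fraction <= 1) niceFraction = 1;
    else if (fraction <= 2) niceFraction = 2;
    else if (fraction <= 5) niceFraction = 5;
    else niceFraction = 10;
  }
  return niceFraction * Math.pow(10, exponent);
}

// Produce roughly maxTicks nicely rounded tick values covering [min, max].
function niceTicks(min, max, maxTicks = 6) {
  const range = niceNum(max - min, false);
  const step = niceNum(range / (maxTicks - 1), true);
  const start = Math.floor(min / step) * step;
  const end = Math.ceil(max / step) * step;
  const ticks = [];
  for (let v = start; v <= end + step / 2; v += step) ticks.push(v);
  return ticks;
}
```

So a data range of 0–97 labels at 0, 20, 40, …, 100 rather than Excel-style labels at awkward fractions of the raw extent.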
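The “update a graph without removing/re-adding it” item boils down to giving the chart an `update` method that swaps data and redraws, instead of tearing down DOM nodes and event handlers. A hypothetical stand-in for the real chart component:

```javascript
// Hypothetical sketch: the real component redraws SVG; here render()
// just counts invocations so the behaviour is observable.
class Graph {
  constructor(data) {
    this.data = data;
    this.renderCount = 0;
    this.render();
  }
  render() {
    this.renderCount += 1;
  }
  // Swap in new data and redraw, keeping DOM nodes, event handlers and
  // user preferences (e.g. hidden series) intact.
  update(newData) {
    this.data = newData;
    this.render();
  }
}
```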
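The shared network queue replaces one-queue-per-action with a single FIFO and a fixed concurrency, which serialises load on the fragile endpoint. A minimal sketch under my own naming (the real feature was written from scratch and does considerably more):

```javascript
// All requests funnel through one queue; at most `concurrency` run at once.
class SharedQueue {
  constructor(concurrency = 1) {
    this.concurrency = concurrency;
    this.active = 0;
    this.pending = [];
  }
  // task: () => Promise. Returns a promise for the task's result.
  enqueue(task) {
    return new Promise((resolve, reject) => {
      this.pending.push({ task, resolve, reject });
      this.drain();
    });
  }
  drain() {
    while (this.active < this.concurrency && this.pending.length) {
      const { task, resolve, reject } = this.pending.shift();
      this.active += 1;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => {
          this.active -= 1;
          this.drain(); // start the next queued request, if any
        });
    }
  }
}
```

With concurrency 1 this removes the class of issues where several per-action queues fire simultaneously and swamp the endpoint.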
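“Implementation 1” of the timeout handling amounts to racing each request against a timer, so a stalled call on the RAM-starved VPN endpoint fails fast instead of hanging the UI. `withTimeout` is an illustrative helper, not the production code:

```javascript
// Reject if `promise` has not settled within `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timeout after ${ms}ms`)), ms);
  });
  // Clear the timer either way, so it cannot keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

The rejection can then feed whatever retry or reporting policy sits above it.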
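Finally, “random-order data, implementation 2” (sorting at the last possible point) is simply: never trust the middleware's ordering, and sort the rows immediately before they are handed to the graph. Copying first costs some client RAM but avoids mutating shared collections; the field name is illustrative:

```javascript
// Sort a copy of the raw rows by timestamp just before rendering.
function sortForRender(rows) {
  return [...rows].sort((a, b) => a.timestamp - b.timestamp);
}
```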