Thursday, December 17, 2015

Reflection During Process Improvement

One of the most important activities in process improvement is reflecting upon existing capabilities. Granted, the activities that sustained development may no longer improve it once progress has plateaued.

Not everything needs to change--if the business is producing value, something is being done right. The trick is to tease out the goodness in what already works and identify small improvements to add.

Friday, December 11, 2015

Agile! The Good, the Hype and the Ugly (Test-Driven Development)

I’ve recently purchased Bertrand Meyer’s book “Agile! The Good, the Hype and the Ugly”. Meyer looks at Test-Driven Development (TDD), and his discussion adds clarity to the murkiness surrounding it: no code without tests. Nuff said.

Is no code without tests enough?

Nope.

Testing shows the presence, not the absence of bugs. [Dijkstra]

Meyer also points out that it's impractical to expect that you not start any development until all tests pass. You may justifiably document the existence of bugs during development and rightly spend your time working on other parts of the functionality that are deemed more important.

In "Why Most Unit Test is Waste (A Look at Test Driven Development)" I point out a vagueness surrounding when to begin TDD. This vagueness manifests itself in the form of bootstrapping your architecture. The answer isn't in that article because it goes against Agile principles (Working Software instead of comprehensive documentation). The answer is you create an architecture then move on to the development and testing (in which ever order you prefer).

Practically speaking, there isn't a one-size-fits-all answer to these questions. Experience and knowledge play a big role in deciding how much is enough. I do like Leslie Lamport's position in Thinking for Programmers: create a blueprint with enough detail to permit you to continue.

Wednesday, November 18, 2015

Harness the Power of Done (And Be Free!)

I'm going through an exercise to develop a definition of done with the software team I work with. We have an informal definition implied through our working agreements. A good definition is formally defined, well understood and applied consistently by all team members for all deliverables.

If you want to apply something consistently, you need buy-in.

To enable buy-in I started with the Scrum Guide, the definition of the Retrospective, a champion, and guidance. I stayed out of the decision making and limited input to questions and clarifications on objectives.  I was fortunate. The team wanted to do Retrospectives and had a champion to help develop this capability.

The Retrospective is an activity providing a team with the opportunity to reflect. Its focus is on people, processes, relationships and tools. Its intent is to provide a mechanism for capturing learning and identifying actions for improvement.

Built into the Retrospective is the requirement that the definition of done be improved. It is an explicit manifestation of continual improvement for a Scrum team.

People can bristle at the suggestion they can improve. They equate improvement with deficiency instead of excellence. Athletes intuitively understand that improvement results in better performance.

A champion is key to achieving buy-in.

With support, a champion can engage, educate and identify impediments. I deal with impediments by helping frame questions. Because I make no decisions beyond setting objectives, the champion owns the solution. That restraint is critical to developing a self-organizing team.

Successful champions are self-aware.

The self-aware champion has a pragmatic understanding of their abilities and values differences of opinion. They recognize that like-minded individuals are easy to work with but that they can fail to challenge assumptions.

A champion works in small groups.

A small group is important during the initial stages, when the champion needs to clarify their ideas, approach and issues. A small group provides the champion a way to work through these issues with people who see the problem space differently. It is this small group that helps frame the objectives for the rest of the team.

The critical success factors are a champion and helping them create a work group that provides the diversity needed to do a deep dive on the issues.

It took a self-aware colleague to crystallize the importance of finding people to complement a champion. This colleague improved my understanding of how to increase the chances of successful organizational change.

Examples reinforcing the value of differences of opinion include concerns expressed around the need to formalize the Retrospective and the need to continually improve.

Scrum defines the Retrospective as a meeting. Its value lies in capturing and executing opportunities for continual improvement. It is worth challenging whether a separate meeting is necessary, because there may be other ways to conduct effective Retrospectives.

Can Retrospectives be effective without a formal meeting, especially if the team already has other avenues to conduct conversations? I don't know. We conduct a Lean Coffee each week so this line of enquiry is valuable.

Tying the definition of done to continual improvement may not be obvious to the casual reader of the Scrum Guide. The implications require thought, as does the question of how much to improve.

Upon hearing this, I suggested this linkage might make an excellent agenda item for the work group. Will they discuss it? I may never know. The fact that it's on people's minds is valuable because it seeds a conversation on how the team can grow.

It's too early to tell if the team will succeed. The fact that there is desire, a champion and supportive management implies we are well on our way to getting a definition of done and valuable Retrospectives. I look forward to the outcome.

Thursday, November 12, 2015

Self-Organizing Teams for the Rest of Us

I was pleased to learn Bertrand Meyer's position on self-organization in "Agile! The Good, the Hype and the Ugly". In Meyer's view, self-organization is hype--a widely touted idea that makes little difference, good or bad, to the resulting software.

I believe a team should be empowered and that empowerment should include the ability to organize their own work. I value the input of the people who work with me, and I strive to create an environment where people can contribute to their fullest and provide constructive criticism. It seems foolish and unwise to do otherwise. This is just common sense.

Where self-organization becomes confusing is in the discussion of subtle control through asserting influence. [1, 2] It is devilishly difficult to glean what this really means. How might it be achieved in practice? It might involve management using the Socratic method. It might be a combination of completely different approaches.

Meyer points out that the degree of self-organization achieved is dependent upon the skills of the practitioners within the team. An exceptionally experienced team may work without a manager but until you have such a team it is likely to require one.

Meyer takes exception to the notion that self-organizing teams should be applied to the entire software industry. This doesn't imply that self-organization is a poor goal--on some levels self-organization is just common sense. On other levels, it paves the way towards higher performing teams.

Just because many teams will never become the equivalent of a conductor-less orchestra isn't reason to ignore this idea. It is reason to recognize that such a lofty goal may not be achievable and that the costs in trying to achieve it may not be worthwhile. That's good advice.

A few teams I've worked with have struggled with self-organization. It's refreshing to get another perspective on the topic. Especially when this perspective pushes aside the complexity hidden in notions of self-organization and points out that good software can be created without self-organization.

[1] Organizing Self-Organizing Teams presents a theory for promoting self-organization within a team. This theory assigns six roles, exercised mostly by an Agile Coach who interacts with the team.

[2] Self-Organization and Subtle Control: Friends or Enemies? provides a simple introduction to complex adaptive systems, links the theory behind these systems to a software team and contains a couple of models for introducing positive change into a software team.

Tuesday, October 20, 2015

Blue Yodel (A Book Review)

I've recently purchased a copy of Blue Yodel by Ansel Elkins. I can't say enough about the poems in this book. They have a haunting and disturbing quality that forces you to sit back and really think about what she's saying and, more importantly, the deeper implications of what she is writing about.

The foreword by Carl Phillips does a much better job than I can of dissecting some of Elkins's writing. I'll simply add that the depth Elkins brings to her imagery and her placement of dissimilar images have an effect that is almost magical.

If you are considering purchasing one book of poetry this year consider Blue Yodel. You won't be disappointed.


Wednesday, October 14, 2015

Keep It Simple

In The thing with code clarity: you can't be proud of something I can't read, Santiago L. Valdarrama makes a plea for clarity. I applaud Santiago's position but think clarity misses the point. I prefer simplicity--the removal of non-essential elements.

I'm not convinced clarity leads to simplicity. Clarity can lead to understanding. Is understanding a complex implementation worthwhile? Not if that implementation can be simpler.

Simplicity. Achieve it and the documentation need only explain the essential elements of the solution. It solves the problem of how much to document. It makes it obvious to those maintaining the implementation what it does and where the gaps in their knowledge lie.

On Reddit, Uberhipster makes excellent comments and includes informative links. The first discusses The Little Prince; the second the progression of Picasso's Bull. I like the connection between my notion of simplicity and Picasso's Bull. But it raises troubling questions when applied to software.

The Little Prince and Bull are the work of a single individual. Most useful software is developed by teams.  I'll bet lack of clarity amplified through misunderstanding by multiple people creates a vicious downward spiral.

Clarity won't provide an escape from this spiral. Simplicity will.

Monday, September 21, 2015

The Play's the Thing

I finally finished my Winter 2015 (yes, that's correct) issue of Nautilus. Amazing articles.

One article that stands out is Shakespeare's Genius is Nonsense. It was informative enough to give me a new appreciation of Shakespeare.

The article provided insight into how Shakespeare's style creates a lasting effect on the reader (or listener) through the links that spread out from each word: its sound, the sounds that resemble it, its sense, its potential senses, their homonyms, their cognates, their synonyms and their antonyms.

The emphasis in Shakespeare's writing is not so much on the puns contained therein but on the unexploded puns, which retain their energy and create a lasting effect.

Tuesday, September 15, 2015

Software to Amplify What Your Users Do

In Five thoughts on software, Seth Godin points out that utility and ease of use have been pursued at the expense of power and simplicity, and that it is urgent that software companies create tools that increase the quality of what users create.

This is similar to an argument made by Frederick P. Brooks Jr. in the Computer Scientist as Toolsmith. Brooks made this argument in 1977 and again in 1996. In 1996 Brooks wrote:
If the computer scientist is a toolsmith, and if our delight is to fashion power tools and amplifiers for minds, we must partner with those who will use our tools, those whose intelligences we hope to amplify.
Godin lays some of the problems with software at the feet of customers who accept poor software as the norm. He requests that customers have higher aspirations for what their tools can achieve. I agree.

Good customers are demanding. In my experience, the most demanding customers provide the best insight on what your product roadmap should contain. Finding those customers is challenging because the relationship needs to evolve into a mutually beneficial partnership.

If you have software that doesn't amplify your intelligence contact the vendor. Whether that vendor is interested in helping you create value or not will become apparent very quickly. If you conclude that they aren't interested in helping you create value then spend your money elsewhere.

The same goes for software developers. You can tell whether your company cares about its customers by the quality of input you obtain from those customers, and by how far you are from them. If you are far enough away that you wouldn't recognize a customer if you saw one, then perhaps you are working for the wrong company.

Godin lays down a challenge to the entire software industry: build powerful and simple software that allows him to increase the quality of what he creates. I like building software. I think it's a challenge worth committing to.

What will you do?

Sunday, August 23, 2015

latex2html broke my LaTeX installation on Mac OS X Yosemite

Installed LaTeX2HTML using Mac Ports:

# sudo port install latex2html

It looked good, but it messed up my LaTeX style files so that none of my LaTeX documents would build.
To correct this, I had to take the following actions.

# sudo port uninstall texlive-latex texlive-basic texlive-bin
# sudo port install texlive-latex
# sudo texhash

To no avail.

I had to reinstall several style files (e.g., etoolbox.sty and parskip.sty).
I picked up the style files from ctan.org.

These went into /opt/local/share/texmf-texlive/tex/latex. Then

# sudo texhash

LaTeX restored and operational.

Monday, August 17, 2015

Why Most Unit Test is Waste (Into Modern Times)

In “Why Most Unit Test is Waste (An Exploration)” I summarize how Coplien views an explicit calling structure (or context) for the objects (and methods) as critical to enabling reasoning about the execution of a program. Here, I take an in-depth look at the article’s introduction.

What piques my interest in the introduction to “Why Most Unit Test is Waste” are statements comparing the difficulty of reasoning about programs written in FORTRAN to an object-oriented programming language. Understanding this is critical to understanding the motivation behind Coplien’s arguments on waste.

The object-oriented programming language Coplien refers to isn’t named. I selected C++. Java is an equally good choice.

It turns out that FORTRAN, C++ and Java all support polymorphism. In fact, all three languages support static (early) binding and dynamic dispatch. Static binding ensures that compilation fixes the binding of names. Dynamic dispatch is the process of selecting which implementation of a polymorphic operation (method or function) to call at run time.

FORTRAN introduced polymorphic types in FORTRAN 2003. A review of FORTRAN 2008, J3/10-007r1, states that the class type specifier is used to declare polymorphic entities. A polymorphic entity is a data entity that is able to be of differing dynamic types during program execution.

A review of “Programming Languages — C++” and “The Java® Language Specification Java SE 8 Edition”, shows that both support static binding and dynamic dispatch.

The C++ standard states that virtual functions provide the mechanism for dynamic binding and object-oriented programming. A class that declares or inherits a virtual function is called a polymorphic class.
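As a concrete sketch of the mechanism (the class names below are mine, not the standard's), a call through a base-class reference is bound to a name at compile time but dispatched on the dynamic type at run time:

    #include <iostream>
    #include <memory>

    // A polymorphic class: it declares a virtual function.
    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Circle : Shape {
        explicit Circle(double r) : radius(r) {}
        double area() const override { return 3.14159265 * radius * radius; }
        double radius;
    };

    struct Square : Shape {
        explicit Square(double s) : side(s) {}
        double area() const override { return side * side; }
        double side;
    };

    // The static type of `s` is const Shape&; the implementation of area()
    // invoked here is selected at run time from the dynamic type.
    void report(const Shape& s) {
        std::cout << s.area() << '\n';
    }

    int main() {
        std::unique_ptr<Shape> shapes[] = {
            std::make_unique<Circle>(1.0),
            std::make_unique<Square>(2.0),
        };
        for (const auto& s : shapes) report(*s);  // dynamic dispatch
    }

The compiler fixes the name area during compilation (static binding), but the body executed depends on the dynamic type of the argument. This is exactly the property Coplien’s argument turns on.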

The Java standard states that instance methods provide the mechanism for dynamic dispatch. A method that is not declared static is an instance method, also called a non-static method. In “The Java® Virtual Machine Specification Java SE 8 Edition”, instance methods are likened to virtual methods in C++.

If FORTRAN, C++ and Java all use dynamic dispatch what is Coplien’s thesis regarding reasoning about FORTRAN and object-oriented programs?

Some thoughts:
  • my analysis of FORTRAN, C++ or Java is incorrect.
  • Coplien’s discussion relies on a version of FORTRAN that predates the introduction of polymorphism.
If my analysis is incorrect then this article is done. If Coplien’s thesis relies on an outdated version of FORTRAN then it is worthwhile understanding how (if) this affects the conclusions.

I’ll assume that my analysis is correct and Coplien refers to an outdated version of FORTRAN for purposes of supporting his thesis. Reliance upon an old version of FORTRAN is in line with the article’s introduction, which makes FORTRAN sound ancient.

Relying on an outdated version of FORTRAN requires that it not support polymorphism. FORTRAN 95 does not support polymorphism. FORTRAN 95 is an official standard and no public previews are available. (FORTRAN 77 introduced ad-hoc polymorphism for operators and I’ll ignore this. [Wikipedia])

Assuming an outdated version of FORTRAN, then all three programming languages support early binding. C++ and Java support dynamic dispatch. FORTRAN does not. There is no need to examine early binding as it is common to all three languages.

Dynamic dispatch is the process of selecting which implementation of a polymorphic operation (method or function) to call at run time. It contrasts with static dispatch in which the implementation of a polymorphic operation is selected at compile-time. Its purpose is to support cases where the appropriate implementation of a polymorphic operation can't be determined at compile time because it depends on the runtime type of one or more actual parameters to the operation. [Wikipedia]

In effect, you can’t reason about an object-oriented program without executing it because the behaviour of the program isn’t known until runtime. It isn’t known because the runtime type of one or more method parameters isn’t known until runtime.

FORTRAN is over-simplified in this article. Current implementations of FORTRAN support polymorphism and use dynamic dispatch to do so. This has no bearing on the position that object-oriented programs need to be executed in order to obtain an explicit calling sequence (or context) but it does clear up my confusion regarding the comments on FORTRAN—they illustrate the differences between a procedural and an object-oriented programming language.

My focus on polymorphism may also miss the point. The point may be that inheritance is a key differentiator over procedural programming languages and a key contributor to the complexity of testing object-oriented programs. Inheritance may be a better focus, as it is what enables this kind of polymorphism.


Saturday, July 25, 2015

Bottleneck, Where Art Thou

I've been reviewing the tools we use to help us develop software. Tools affect process and shape workflow. They colour how we view the work we do.  Some examples: Review Board provides support for identifying the reviewers of a patch. Bugzilla provides support for prioritizing tasks.

I identified our use of Bugzilla as a problem almost immediately. We use the priority field of a task to create a workflow. The highest-priority tasks are tackled first. Unfortunately, Bugzilla doesn't manage priority as a list. It forces you to group tasks into a small number of priorities. As a result, multiple tasks can have the same priority.

The problem with multiple tasks having the same priority is that the software team, not the business, ends up determining what to work on first. This is fine if the team is capable of arriving at the same decision as the business. If the team makes a mistake, they produce less business value than they could have.

It took longer to identify Review Board as a tool that undermined our workflow. Peer review is a new process for us. To the team's credit, they embraced it and created a working agreement to ensure its use. Our agreement calls for a review of all code prior to committing it to production and for at least two people to participate in the review.

About six months after we deployed Review Board reviews began to bottleneck. The authors said they couldn't commit to production because the patches hadn't been reviewed. The reviewers said they were too busy to complete the reviews. The drop in commits to production occurred because code reviews stopped. The team was committed to reviews but a bottleneck developed.

In effect, the reviewers had been asked to review the patch, had agreed to do so, but didn't have the time to follow through. The authors identified the reviewers in Review Board and accepted that they would eventually get the promised review. I ultimately concluded that the team's commitment to performing reviews coupled with the working agreement created the problem.

We treated the decision to review a patch as final and never went back to revisit it. This was eye-opening. Two positive behaviours, commitment and peer review, conspired against us because we accepted identifying reviewers ahead of the review as sufficient.

What I learned about Bugzilla is that it's a poor task management system. Using Bugzilla the way we use it encourages a form of lazy prioritization--no one in the decision process is forced to decide which task to do first, so the grouping of tasks is what becomes important. Bugzilla appears designed to encourage this. The loss of business value manifests itself whenever there are multiple high-priority tasks.

With Review Board I discovered that the act of identifying the reviewers before the review created a form of paralysis within the team. Our focus on commitment and delivery made us blind to the fact that we needed to be more agile in our decision making.

Bugzilla made me want a product and sprint backlog. [1] It's great for tracking issues but not in the manner we use it. Review Board reminded me that agility can come in many different forms and a lack of agility can create bottlenecks whose root causes can be surprising.

[1] I am aware of ScrumBugz and the Mozilla hosted version at ScrumBugs. I haven't investigated it.

Sunday, July 19, 2015

The Cost of a Cup of Coffee

I walked into my local coffee shop the other day. I was greeted by someone offering me two free cups of coffee (or tea) if I was willing to sign up for their credit card promotion. I was surprised how this made me feel.

I can respect the desire to extend a business through the use of an in-store promotion. It’s not exploitation. It’s just good business. My local supermarket does the same thing. They have a different credit card and coupled their promotion with chocolate chip cookies.

Coffee, tea or chocolate chip cookies in exchange for filling out a credit application and the privilege of being able to carry a credit card with that merchant’s branding embossed on it. Who benefits from this? More importantly, what’s in this for me?

This is an excellent deal for the institution holding the credit card contract and likely a good deal for the merchant. Credit card interest rates carry an 18-20% annual percentage rate. At 18%, a $100 balance translates into $18 per year for the financial institution holding the credit card contract, assuming there aren’t any fees associated with the credit card.

At my local coffee shop, $100 allows me to purchase over 52 cups of coffee annually—54 if you include the two free cups for filling in the application. At one cup of coffee each day, the interest on a carried balance would exceed $100 annually.

What benefit does this bring me? For consumers, debt is a means of using anticipated income and future purchasing power in the present before earning it. Of course, this credit card is usable with other merchants, so my local coffee shop or grocery store benefits whenever this credit card is used. And that is brilliant business.

So how did this make me feel? It disappointed me. I was at my local coffee shop to purchase a cup of coffee, not to entertain the opportunity to become part of another revenue stream in their business. I was disappointed because I foolishly thought this business and I already had a beneficial relationship, and now they were asking whether I was interested in taking a bet on my anticipated income to help grow their business model.

You’d think the additional revenue available through my credit card is worth more than two cups of coffee. Apparently not. That brought new perspective to the relationship.

This brilliant idea for creating a new revenue stream effectively made me rethink my relationship with this merchant. All of their consumer marketing went to waste the moment they asked me if I was interested in filling in their credit card application. That request put all of their previous marketing into perspective for me. It's really about the bottom line.

My local coffee shop isn’t what it wants me to believe it is. It wants me to believe that it's a familiar place where I can meet friends. In fact, it’s a business in a competitive environment and I am just a participant in an income stream. That is disappointing. It’s a testament to the effectiveness of their marketing.

Their risk in developing this new income stream? Virtually nil.

Their risk in using their existing customer base? Again, virtually nil. Ok, at least one blog entry.

Friday, June 26, 2015

Finally, Something Sensible On Testing

A friend provided a link to a Kode Vicious article from ACM Queue (April 2015) by George Neville-Neil. If you haven't read anything by George you should.

The second comment in April's column covers testing. In a few sentences (ok, two pages) the fundamentals of testing are covered. It's an excellent summary including a strategy and requirements that every test regime needs.

Every test regime must have relevance and repeatability. To be relevant, tests must confirm the software works and attempt to break it. For repeatability, you must maintain complete control over the environment in which the tests are run.

To ensure repeatability in a complex system, the control interface and the test interface must be distinct. I liken the notion of control to a scientific control in an experiment. Scientific experiments are carefully constructed events that seek new information. The same attention to detail and results is needed to construct a test and its environment. An example in the column shows how to do this.
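In the same spirit, here is a hedged sketch of the distinction (the Greeter class and its injected clock are my invention, not the column's example). The control interface pins the environment to a known state; the test interface only observes behaviour:

    #include <cassert>
    #include <functional>
    #include <string>

    // Unit under test: behaviour depends on the time of day.
    class Greeter {
    public:
        // The clock is injected, so a test can control it.
        explicit Greeter(std::function<int()> hour_of_day)
            : hour_(std::move(hour_of_day)) {}

        std::string greet() const {
            return hour_() < 12 ? "Good morning" : "Good afternoon";
        }

    private:
        std::function<int()> hour_;
    };

    int main() {
        // Control interface: fix the environment to a known state...
        Greeter morning([] { return 9; });
        Greeter afternoon([] { return 15; });

        // ...test interface: observe and verify behaviour.
        assert(morning.greet() == "Good morning");
        assert(afternoon.greet() == "Good afternoon");
    }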

I like the article because of its directness. George's explanations are short, concise and simple enough to share with your colleagues if improving your tests is on your mind.

Saturday, June 6, 2015

Wifi MAC address of my Sonos Connect?

It looks like Sonos has two sets of rules for determining the wireless MAC address of their devices. I own a Play 1 and a Play 5.

My Play 5 has a wired MAC address beginning with B8:E9. "Wifi MAC address of my Sonos Connect?" describes how to obtain the wireless MAC address for the Connect. It works for the Play 5 as well.

These instructions say the wireless MAC address = wired MAC address + 1. The wired MAC address is on the label affixed to the bottom of this device.
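As a small sketch of the +1 rule (the wired address below is made up, and I'm assuming the increment treats the six octets as one 48-bit integer so a trailing FF rolls over):

    #include <cstdint>
    #include <cstdio>

    int main() {
        unsigned o[6];
        const char* wired = "B8:E9:37:00:00:FF";  // hypothetical wired MAC
        std::sscanf(wired, "%x:%x:%x:%x:%x:%x",
                    &o[0], &o[1], &o[2], &o[3], &o[4], &o[5]);

        // Pack the octets into a 48-bit value, add one, and unpack.
        std::uint64_t mac = 0;
        for (int i = 0; i < 6; ++i) mac = (mac << 8) | o[i];
        ++mac;  // wireless MAC = wired MAC + 1
        for (int i = 5; i >= 0; --i) { o[i] = mac & 0xFF; mac >>= 8; }

        std::printf("%02X:%02X:%02X:%02X:%02X:%02X\n",
                    o[0], o[1], o[2], o[3], o[4], o[5]);
    }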

My Play 1 has a MAC address beginning with 5C:AA. The wireless MAC address is the same MAC address on the label on this device. (No +1 required.)

Sonos customer support indicated that 5C:AA is a new set of MAC addresses for their devices, and that the wireless MAC address for my Play 1, as reported through my diagnostic submission to them, conforms to the description in "Wifi MAC address of my Sonos Connect?".

Sonos customer support was very helpful throughout this. 

I was able to establish the correct wireless MAC address for my new device through some experimentation with my wireless network. I wanted to point out that "Wifi MAC address of my Sonos Connect?" worked for my older device but not the new one.

Thursday, May 28, 2015

The Wrong Kind of Paranoia

I tend to be pedantic in my approach to coding C++. I like const; I declare data members private. A wider view is presented in The Wrong Kind of Paranoia.
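A tiny example of the kind of pedantry I mean (the class is hypothetical):

    #include <cassert>

    class Odometer {
    public:
        // const: callers can read the value with no chance of modifying it.
        double kilometres() const { return km_; }

        // The only path that mutates state, and it guards its input.
        void advance(double km) {
            if (km > 0) km_ += km;
        }

    private:
        double km_ = 0.0;  // private: nothing outside the class can write it
    };

    int main() {
        Odometer trip;
        trip.advance(12.5);
        trip.advance(-3.0);  // rejected by the guard
        assert(trip.kilometres() == 12.5);
    }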

I don't interpret this discussion as an argument against writing code like I do. I interpret it as a wider discussion on how important it is to look at the quality attributes you want from your architecture.

Using one example in the article, safety is a quality attribute that you want the architecture to provide and that's why you separate the code that irradiates patients from the user interface. I agree with James: programming in the small will not address this type of architectural issue.

In all, the main takeaway I get from James' point is that programming in the small is part of the solution, but don't lose sight of the architecture. If you do, none of the const data you create will make any difference whatsoever.

Friday, May 22, 2015

Why Most Unit Test is Waste (A Look at Test-Driven Development)

I reviewed “Jim Coplien and Bob Martin Debate TDD”, to better understand Coplien’s position on Test-Driven Development (TDD) in “Why Most Unit Test is Waste”.

In defining TDD, Martin states that it is infeasible for a software developer to call themselves professional if they do not practice TDD. He goes on to cite the laws of TDD:
  1. Don’t write a line of production code until you have written a failing unit test.
  2. Do not write more of a unit test than is sufficient to fail. (Not compiling is a failure.)
  3. Do not write more production code than is sufficient to pass the currently failing unit test.
In order to properly understand Martin, it is important to understand his use of “professional”. A professional is an expert. [1] Later, Martin says that he thinks it is irresponsible for a developer to ship a line of code that has not been executed with a unit test.

Essentially, expert software developers test their work using unit tests.
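To make the laws concrete, here is a minimal sketch of the rhythm they impose (the add function is my example, with a bare assert standing in for a test framework):

    #include <cassert>

    // Law 1: the test in main() was written first and failed (it would not
    // even compile) before add() existed. Law 2: the test is no bigger than
    // needed to fail. Law 3: add() is just enough code to make it pass.
    int add(int a, int b) { return a + b; }

    int main() {
        assert(add(2, 2) == 4);  // the failing test that drove add()
    }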

Coplien’s problems with TDD are two-fold.
  1. The use of TDD without an architecture or framework.
  2. The use of TDD without an architecture or framework destroys the GUI.
Both problems have the same root cause: a poor architecture. Coplien provides examples where a lack of domain knowledge contributes to these problems. A lack of domain knowledge has no bearing on the usefulness of TDD.

TDD can be used to drive architecture. Driving architecture is one thing. Bootstrapping it is another matter entirely.

Coplien and Martin agree: do some up-front architecture, but don’t knock yourself out. Let executing code inform future decisions.

How much up-front architecture is required?

Coplien says that a 2 million line program should have constructors and destructors in place, should enforce important relationships between objects, and should have those relationships supported by tests. And you should have executable code for this implementation within 30 minutes.

Coplien’s tests are not unit tests. Coplien defines a unit test as an API test. It tests a subset of the state space of the API arguments. It’s a heuristic. He suggests “Design by Contract” is a better choice.

Design by Contract (DBC) ties an implementation to business requirements. TDD obfuscates this because the emphasis on unit tests can make it difficult to connect functionality to business requirements. That said, it's not clear that the use of TDD actually causes this disconnect.

Martin’s position on DBC is that he prefers unit tests tied to production code instead of contracts embedded within the production code.

Martin implies that something being used (TDD) is better than something that is not being used (DBC). I agree. Lack of use implies low utility, but so does misuse. Low utility does not mean that the ideas in DBC are invalid. Unfortunately, the DBC versus TDD discussion doesn't go very far before the session ends.

This debate provides clarity on bootstrapping an architecture. Bootstrapping is hard and the transition from exploring the problem space and defining an architecture to developing executing code involves many tradeoffs. A balance is required. Perhaps poor judgement creates an imbalance that results in the use of TDD too early.

I like the notion that assertions provide a nice coupling between the semantics of the interface and the code itself. This provides a clear advantage over a separate unit test to enforce these semantics. Employing assertions for pre-conditions, post-conditions or invariants is a clear win. However, using assertions does not eliminate the need for unit tests. Both should be used to create advantage.
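As a sketch of what assertion-based contracts can look like in C++ (the bounded stack is my example, not one from the debate), the pre- and post-conditions sit beside the code they constrain:

    #include <cassert>
    #include <vector>

    class BoundedStack {
    public:
        explicit BoundedStack(std::size_t capacity) : capacity_(capacity) {}

        void push(int value) {
            assert(size() < capacity_);      // precondition
            items_.push_back(value);
            assert(items_.back() == value);  // postcondition
        }

        int pop() {
            assert(!items_.empty());         // precondition
            int top = items_.back();
            items_.pop_back();
            return top;
        }

        std::size_t size() const {
            assert(items_.size() <= capacity_);  // invariant
            return items_.size();
        }

    private:
        std::size_t capacity_;
        std::vector<int> items_;
    };

    int main() {
        BoundedStack s(2);
        s.push(1);
        s.push(2);
        assert(s.pop() == 2);  // a unit test still complements the contracts
    }

The contracts document the interface semantics in place; the unit test at the bottom exercises behaviour the contracts alone can't demonstrate.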

[1] See Professionalism and TDD (Reprise). TDD currently plays a significant role in professional behaviour. Experts exhibit professional behaviour.

Thursday, April 23, 2015

Why Most Unit Testing is Waste (An Exploration)

A team member shared a copy of "Why Most Unit Testing is Waste" by James O. Coplien. I was so impressed that I read it cover to cover. Coplien identifies unit test smells that indicate waste in unit testing. A follow-up article, “Seque”, provides more insight on these ideas.

Coplien isn’t against unit testing. He’s for the intelligent use of unit testing and sees more value in the creation of tests that focus on features. This shifts the focus of the unit under test from a method to a feature.   

Focusing on features is necessary because it is the only system artifact capable of providing an explicit calling structure (or context) for the objects (and methods) it relies upon. An explicit calling structure is required to reason about the execution of a program. 
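A sketch of the shift (the account classes and the transfer feature are my invention, chosen only to illustrate the idea): instead of testing debit() and credit() in isolation, the test exercises them in the calling context of a feature:

    #include <cassert>

    class Account {
    public:
        explicit Account(int balance) : balance_(balance) {}
        void debit(int amount)  { balance_ -= amount; }
        void credit(int amount) { balance_ += amount; }
        int balance() const     { return balance_; }
    private:
        int balance_;
    };

    // The feature: move money between accounts. It supplies the calling
    // structure that gives debit() and credit() their context.
    void transfer(Account& from, Account& to, int amount) {
        from.debit(amount);
        to.credit(amount);
    }

    int main() {
        Account a(100), b(0);
        transfer(a, b, 40);
        assert(a.balance() == 60 && b.balance() == 40);  // money conserved
    }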

Value is tied to Lean Manufacturing: eliminate waste, including overburden, and define value as what customers will pay for. To eliminate waste, evaluate your test mass using criteria based upon business requirements. Focusing on Lean Manufacturing means that fewer unit tests are created and that test effort is directed at activities providing more value (feature testing).

Coplien provides an example of Lean development using a map. It's informative because it shows how generalization introduces waste. Unit tests over-generalize the map to the point where they exceed the requirements of the application. That's waste that would be avoided if feature testing were used.

In “Seque”, Coplien argues against automation in support of continuous improvement: automation provides a one-time benefit, and to enjoy continuous improvement people need to remain involved. A second argument contrasts autonomation with automation; it supports the notion of testing as a heuristic activity and is part of the same case against attempting to automate intellectual tasks.

For other views on Coplien’s articles, review the discussions on Reddit and Hacker News. Many of the links provided in the Hacker News thread are insightful, and the commentary is a little more reasoned than that on Reddit.

The main message I get from Coplien’s articles is that good tests are the result of a focused intellectual activity directed at reducing the risk of program failure, and that this activity should focus on features. Use autonomation, not automation, if you want to continuously improve quality.

Wednesday, March 25, 2015

The Mythical Man-Month (Worth Reading Again)

The blog post "Estimates? We Don't Need No Stinking Estimates!" was passed around my software team. It contains a reference to Frederick Brooks' essay "The Mythical Man-Month". As far as No Estimates is concerned, I like Mike Cohn's view in "Estimates are Not Commitments".

The reference to Brooks prompted me to review his essay and the wonderfully insightful wisdom contained therein.  On introducing Brooks' Law, Brooks writes:
Oversimplifying outrageously, we state Brooks's Law: Adding manpower to a late project only makes it later.
People are often familiar with Brooks' Law but are unaware of the other insights contained in The Mythical Man-Month.

Tuesday, February 24, 2015

Nautilus Magazine

I've recently subscribed to Nautilus. I liken this magazine's depth to that of the Economist but with a scientific focus.

The chief advantage of Nautilus over the Economist, if you want to call this an advantage, is that Nautilus is published quarterly. This ensures that I can usually read it cover to cover before the next issue arrives.

In addition to the low frequency of publication, the articles are engaging and well written. They contain some really great ideas, conveyed with humour and wit, that make the topics all the more interesting.

(I do enjoy the Economist, I just can't absorb all of the content each week. Slow reader, I guess.)

Monday, February 23, 2015

Region of Waterloo - Discussion on Environmentally Sensitive Land (Wilmot Line Area)

The Region of Waterloo is hosting an open house on February 28th to discuss the future of the wetlands, forests and wildlife in the Wilmot Line area. 

If you use and enjoy this area come out and make your position clear.

Monday, January 26, 2015

Working Agreements for Agile Teams

I’ve recently had the opportunity to introduce a software team to working agreements. This prompted me to question my notion of an effective working agreement. In simple terms, an effective working agreement is easy to understand, unambiguous and used by the team.