Wednesday, December 28, 2016

Well-Written Scrum Stories

I work in a domain where Stories in the sense of Well-Written Scrum Stories don't always work well. Think this is heresy? Read on.

My domain involves the construction of a system. My team's responsibilities include writing software for business logic, but they also include writing software for control elements within the system. For example, we might be asked to integrate a new sensor into our application, and that sensor might affect the user's workflow, or we might integrate software in support of mechanical elements (such as motors).

Stories for the sensor integration typically don't sit well with me. The reason is that I emphasize user value in my stories. In simple terms, my customers won't pay for a sensor. They will pay for the utility provided by the workflow change that the sensor enables.

My emphasis on user value has worked well in other domains. I worked on the integration of Cilk Plus into LLVM, where we were able to define stories that embodied user value. For example, one user story introduced a for-loop supporting parallelism into the compiler, and subsequent stories introduced different stepping increments for that loop.

My current domain doesn't have that luxury. Or at least I haven't mastered this domain to that level.

The best description I've seen relating to my problem with stories in my domain comes from Bertrand Meyer. In Agile! The Good, the Hype and the Ugly, he describes how stories are not useful for describing situations with multiplicative complexity. That is, situations where all of the key elements of a system must be taken into account from the beginning.

So I live in a world where I use user stories focused on user value at the business logic level and another solution to handle the multiplicative complexity of the system. That other solution currently relies on traditional methods for requirements and design.

In effect, I am liberal in what constitutes my product backlog. It's not just user stories.

Thursday, December 22, 2016

Code Inflation in /bin/true

A friend provided a pointer to Code Inflation, published in IEEE Software (March/April 2015). It's a look at the growth of software, illustrated through /bin/true and /bin/false over twenty years or so. The statistics are staggering.

This article introduces some laws:
  • Software tends to grow over time whether or not a rational need for this growth exists.
  • All nontrivial code contains defects.
  • The probability of a defect increases with code size.
And some wonderful advice: Instead of just adding more features to the next version of your code, resolve to simplify it.

This article references a paper by Rob Pike and Brian W. Kernighan, titled Program design in the UNIX environment, which discusses the problems with the growth of cat in 1983. Pike and Kernighan discuss the style and use of UNIX:
... the UNIX system provided a new style of computing, a new way of thinking of how to attack a problem with a computer. This style was based on the use of tools: using programs separately or in combination to get a job done, rather than doing it by hand, by monolithic self-sufficient subsystems, or by special-purpose, one-time programs.
This paper contains insight on the style and design of cat. It points out how important it is to identify a single function for each tool, and how doing so leads to a computing environment whose flexibility and utility are simply profound.

It is the single function and purpose of these tools that embodies their power and, more importantly, their inability to astonish. The inability to astonish is important: the tools work as advertised, with no surprises. That's a critically important quality. It leads to another important observation in this paper:
The key to problem-solving on the UNIX system is to identify the right primitive operations and to put them at the right place. UNIX programs tend to solve general problems rather than special cases. In a very loose sense, the programs are orthogonal, spanning the space of jobs to be done (although with a fair amount of overlap for reasons of history, convenience or efficiency). Functions are placed where they will do the most good: there shouldn’t be a pager in every program that produces output any more than there should be filename pattern matching in every program that uses filenames.
A simple read that takes a similar stance is a book by Kernighan and Plauger called Software Tools (and an updated edition titled Software Tools in Pascal). Software Tools provides unique insights into the same philosophy of functionality and combination that made UNIX work so well.

Some different perspectives on UNIX philosophy: Simplicity isn't simple.

Tuesday, November 29, 2016

Story Points and Complexity

In Story Points and Velocity, I reference Mike Cohn's book "Agile Estimating and Planning" and his view that story points are a relative estimate of the complexity, size or risk in a story.

My software team and I moved away from estimating in man-days to story points. In doing so, I placed a heavy emphasis on the relative aspect of story points and used a story point baseline to keep track of the relative measures the team created for the stories they work on.

To move the team away from the notion of man-days, I emphasized complexity as the means to measure stories. This worked for several months, until people began to raise questions about what complexity actually is.

To begin the discussion I floated several perspectives on story points. These included providing the team with the following documents.

I included the last article because of its emphasis on the relative nature of story points and its comments on haggling over definitions of complexity. My prescience proved correct: we spent 30 minutes discussing the matter and ended the meeting with an agreement to revisit story costs during the retrospective. I failed to convince the team that it is the relative ranking of stories that matters to the velocity calculation.

In my description of story points I emphasized that, by selecting powers of 2 as our scale, we need only agree that a story is twice or half as complex relative to the stories in our baseline.

I have a problem with relative costing using Fibonacci numbers because it's difficult for me to judge the difference between a 2, 3 and 5 point story. I am much more confident that halving or doubling the cost is an easier question to answer. If I run into a 3 point story I am happy to count it as a 4 and move on, because I don't place a lot of emphasis on making "accurate" estimates.
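
To make the arithmetic concrete, here is a minimal sketch of the powers-of-2 scale; round_up_to_power_of_two is a hypothetical helper for illustration, not part of any tool we actually use:

# Round a story cost up to the next power of two on our scale.
def round_up_to_power_of_two(points):
    power = 1
    while power < points:
        power *= 2
    return power

assert round_up_to_power_of_two(3) == 4  # count a "3-point" story as a 4
assert round_up_to_power_of_two(8) == 8  # already on the scale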

I emphasized that it's not so much the number we are interested in, although you can't diminish its importance. The costing exercise is an important element of gaining consensus on what a story really encompasses. I tend to drive cost discussions using outliers--not so much to rein in the costs, but to ensure that the outliers explain their thinking and observations on a story and then to build consensus on the cost.

Curiously, the story point baseline has proved invaluable. It ensures that simple questions can be asked about a story being costed. Those questions revolve around whether people are convinced that the cost of the story is like a story (or stories) in the baseline.

Monday, October 31, 2016

RBTLIB - A Client-Side Library for Review Board

I've been experimenting with a simple client-side library for Review Board. I named this library RBTLIB (RBTools Library). I have a basic implementation for obtaining the Root and Review Request resources from a Review Board instance.

RBTLIB started life as a project to answer how many review requests were entered into Review Board during a fixed period of time. I extended this project to see if it could become a passable RBTool replacement.

In creating RBTLIB, I mean no disrespect to the RBTools authors. RBTools provides a rich set of use cases to guide the development of RBTLIB.

During the development of this library I discovered Click, a Python package for writing command-line applications. Click makes it super simple to develop command-line tools.
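
As an illustration, a minimal Click command looks something like this; the command, option and argument names are invented for the example:

import click

@click.command()
@click.option('--count', default=1, help='Number of results to fetch.')
@click.argument('url')
def fetch(count, url):
    """Fetch review requests from the Review Board instance at URL."""
    click.echo('fetching %d review requests from %s' % (count, url))

if __name__ == '__main__':
    fetch()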

I spent time studying Eli Bendersky's PSS tool. If you enjoy learning by example, Eli's code is an excellent place to start.

RBTLIB is primarily an experiment in developing a client-side library for a RESTful API. I set out to develop a small tool to query a Review Board instance for any reviews added during a 24-hour period. The command-line tools built using RBTLIB can answer this question.

Some challenges remain.

First, the command-line tools return raw JSON objects. I don't have an answer to this challenge. It might be reasonable to leave the command-line tools as examples for other client development, but this just skirts the problem.

Second, these tools provide support for only a few Review Board resources. I'm looking at ways to represent resource data programmatically. At present, I'm thinking it might be nice to write "root.json" or "root.capabilities.git" to access the entire root resource as a Python dictionary or just the git capabilities.

I like the layered implementation I created for the Root and Review Request requests. For example:

Source: rbt/rbtlib/root/links.py

It does a nice job of leveraging the structure of the Root resource and its links. So nice that I can write a Review Request getter in a single line:

get = root.links.key(
    'review_requests',
    'application/vnd.reviewboard.org.review-requests+json')

Source: rbt/rbtlib/review_requests.py

Two design objectives I set for myself in writing this library were:

  • Keep the Python implementation simple. RBTLIB is written in a simple style that enables additional getters for resources to be written in a single line.
  • Expose the entirety of the resource to the client. RBTLIB exposes the Python dictionary created from the JSON response to the client. While this approach achieves the objective, the current implementation is deficient because the command-line tools simply pass this dictionary along to the user.
The next revision of RBTLIB may include an implementation of the Composite design pattern that allows the library to dynamically generate a structure for accessing the elements of the Review Board response. I'm going to try to maintain simplicity by avoiding metaclasses; if successful, I should be able to improve the results returned by the command-line tools.
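
A hedged sketch of the direction I have in mind; ResourceNode is hypothetical and not part of RBTLIB today:

# Wrap the dictionary built from the JSON response so that nested
# elements can be reached with attribute access, no metaclasses needed.
class ResourceNode(object):
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        value = self._data[name]
        if isinstance(value, dict):
            return ResourceNode(value)  # recurse into nested resources
        return value

root = ResourceNode({'capabilities': {'git': {'symlinks': True}}})
assert root.capabilities.git.symlinks is True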

Source: RBTLIB v0.1

Tuesday, October 25, 2016

Defensive Programming Done Right

I watched John Lakos' two-part video on Defensive Programming Done Right [1, 2]. The first part provides the motivation for defensive programming. The second shows how to use BSLS to introduce defensive programming into C++. A definition of defensive programming is in Part I.

Part I looks at design by contract and observes that undefined behaviour in a contract can be advantageous, particularly if you structure your implementation so that sensible behaviour occurs whenever preconditions are violated. Sensible behaviour is delegated to the application, which simplifies library construction. For example, the Standard C Library's handling of null pointers provided to string functions is implemented this way.

A model is discussed for pre- and post-conditions as applied to functions and methods.
  • A function's postcondition is simply its return value. 
  • A method's postcondition is subject to the preconditions of the method and the object state when the method is called. 
The extension of pre- and post-conditions to methods introduces the notion of essential behaviour. Essential behaviour includes a method's postconditions but also other behavioural guarantees beyond those postconditions. These behavioural guarantees are essential to ensuring the method's correctness.

Both talks provide an introduction to the C++ proposal for Centralized Defensive-Programming Support for Narrow Contracts--and, indeed, to its implementation in BDE (of which BSLS is a component). The experience gained at Bloomberg using BDE provides the practical element of this proposal.

Centralized Defensive-Programming Support for Narrow Contracts defines a narrow contract as a combination of inputs and object state that can result in undefined behaviour detectable only at runtime. There is an excellent argument in this paper for not artificially widening a contract--an argument that the Standard C Library supports and which the Standard Template Library may have missed (for example, with the introduction of the Vector container's at method).
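
To illustrate the distinction in a few lines of Python (rather than C++ and BSLS), here is a hedged sketch of a narrow contract with a defensive check; the function is invented for the example:

# Narrow contract: behaviour is defined only for 1 <= month <= 12.
# The assert documents and (in debug runs) enforces the precondition
# instead of widening the contract with an in-band error result.
def days_in_month(month):
    assert 1 <= month <= 12, 'precondition: month must be in 1..12'
    return (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)[month - 1]

Running with python -O removes the assertion, mirroring the way defensive checks can be compiled away in release builds.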

In all, I had a great deal of difficulty finding value in Lakos' videos, but I think that this is a result of his presentation style rather than the content contained therein. Lakos is a co-author of Centralized Defensive-Programming Support for Narrow Contracts, and the value of that work and the ideas contained therein made clear what the videos did not.

I haven't done any research on what prompted the Vector container's at method or the notion of artificially widened contracts, but I am convinced that the C Standard Library embodies better solutions.

[1] Defensive Programming Done Right, Part I
[2] Defensive Programming Done Right, Part II

Sunday, October 2, 2016

The Abstract is 'an Enemy' (With a nod to LibAPI)

I discovered The Abstract is ‘an Enemy’: Alternative Perspectives to Computational Thinking in the references to Robert Atkey's talk on Generalising Abstraction.

The Abstract is 'an Enemy' is an argument against creating generic names for abstractions. The paper begins with a module named 'ProcessData'. I laughed on reading this, having encountered a library called 'API' in my own work. The example struck a chord.

The compelling argument in The Abstract is 'an Enemy' is that software should be designed so that names are specific. The rationale is two-fold: a specific name forces the design to encapsulate a single thought, and it aligns what is being defined with something in the real world.

The paper goes on to provide examples of how increasing abstraction in an effort to simplify leads to complexity. In one example, the authors discuss how the concept of a user is generalized to the point where the resulting concept in the implementation embodies two very different users.

The abstraction of the user leads to complexity in the system, but it also diminishes the ability of the software to serve these users. The shared representation of the user in the system resulted in the system not supporting the users' way of thinking about the world.
A misfit is a correspondence problem between abstractions in the device, abstractions in the shared representation (the user interface) and abstractions as the user thinks about them. Here, the abstractions in the ‘shared representation’ (the user interfaces ...) don’t match the users’ way of thinking about the world. Such misfits are known to cause usability difficulties.
The paper provides a description of the tension between the need to model the real world and the need to limit complexity in an implementation. It's a good walk-through of how the design process goes awry, and it offers some insight on how to correct these challenges.

In my experience, I am confounded by the need to create arbitrary abstractions that obscure the real world. In the domain I work in, I am faced with electronic signals and devices that make up the physical interface to the product. In many cases the signal names presented in the schematics are never captured in the software, and abstractions for physical devices (such as a button) are non-existent.

I don't have an answer, other than to suggest that such a software implementation is very out of touch with reality. The resulting complexity in the product, and the simple misunderstandings that follow, are costly.
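
As a thought experiment, capturing the schematic's vocabulary directly might look something like this hedged sketch; the signal name BTN_START_N and the Button class are invented for illustration:

# Tie the software abstraction to the schematic: the (invented)
# active-low signal name is taken straight from the drawing.
class Button(object):
    def __init__(self, signal_name, read_signal):
        self.signal_name = signal_name    # e.g. 'BTN_START_N'
        self._read_signal = read_signal   # callable returning 0 or 1

    def is_pressed(self):
        return self._read_signal() == 0   # active low

start_button = Button('BTN_START_N', read_signal=lambda: 0)
assert start_button.is_pressed()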

Monday, September 26, 2016

Abstract Data Types

I was reading "Contracts, Scenarios and Prototypes" and learned that abstract data types were first presented by Barbara Liskov and Stephen Zilles in their paper "Programming with abstract data types" (requires access to the ACM Digital Library). The main contribution of this paper is identifying an abstract data type as a class of objects completely characterized by the operations that can be performed on it.
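
A stack is an abstract data type in exactly this sense: clients see only its operations, never its representation. A minimal sketch:

# A stack characterized entirely by its operations; the list used to
# represent it is an implementation detail hidden from clients.
class Stack(object):
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(42)
assert s.pop() == 42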

Liskov and Zilles' paper was written in 1974. It lists thirteen references that provide insight into how abstract data types were arrived at. It's an interesting list, including work from Dijkstra, Neumann, Parnas and Wirth.

What is compelling about the introduction of "Contracts, Scenarios and Prototypes" is the depth of the references provided on the development of contracts. In addition to abstract data types, the introduction includes a look at Hoare's "An Axiomatic Basis for Computer Programming", which introduces pre- and post-conditions via Hoare triples, and Parnas' "A Technique for Software Module Specifications with Examples", for its description of good specifications and strongly typed languages.

Saturday, September 3, 2016

Object-Oriented Programming: A Disaster Story

Read Object-Oriented Programming: A Disaster Story, not so much for what it said but for what ended up on Reddit. If you are looking for perspectives on object-oriented programming, the comments are a good read.

One commenter discussed Closures and Objects are Equivalent; another brought in Alan Kay on Object-Oriented Programming. Both are reasonable responses.

In my opinion, the best comment:
The value of objects is in treating systems behaviourally. Inheritance and even immutability are orthogonal. An object is a first-class, dynamically dispatched behaviour.   (/u/discretevent)
The lesson is to know your tools, use them appropriately and recognize their limitations. Orthogonality is a good way to organize your thinking on this.

In Object-Oriented Programming: A Disaster Story, I do agree that shallow object hierarchies are better than deep ones, but I'm not sure if the argument presented there is a response to programs that derive all classes from the same object or an argument against deep class hierarchies.

With respect to
Among OOP practitioners, there are competing schools of thought on the degree to which a program’s behaviors should be expressed as class methods rather than free-floating functions.
Isn't the correct answer to apply what is appropriate to the context?

I've made arguments wherein a free-floating function is the best tool for ensuring consistency between different classes whose behaviour is related only by business rules and logic. For example, do you want a manager class in an automobile that ensures the lights are on and the doors are locked while driving? You can construct arguments for both approaches.

The right approach is the one that leads to the simplest implementation. In this case, I agree with the author and avoid "nonsense Doer" classes.
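
A hedged sketch of the automobile example above; Lights, Doors and prepare_for_driving are invented to show the shape of the free-floating function approach:

# Lights and Doors are related only by a business rule, so the rule
# lives in a free function instead of a 'DrivingManager' class.
class Lights(object):
    def __init__(self):
        self.on = False

class Doors(object):
    def __init__(self):
        self.locked = False

def prepare_for_driving(lights, doors):
    lights.on = True
    doors.locked = True

lights, doors = Lights(), Doors()
prepare_for_driving(lights, doors)
assert lights.on and doors.locked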

There is a great set of comments in the Reddit thread relating to context and object decomposition.

Sunday, August 28, 2016

Good Grief! Good Goals!

Martin Fowler has an essay on An Appropriate Use of Metrics. It's a useful summary of how metrics are abused and often obscure the true intent of the goals they support. It provides guidance on how to improve goals by placing metrics in a supporting role, instead of a deciding role.

If you've done research on this you've likely heard it all before. I like this essay because it's useful to refer to from time to time and it's broad enough that you can use it to educate others on how to get your metrics aligned with and supportive of the intent behind your goals.

I think it important to emphasize something that Fowler touches on when discussing explicitly linking metrics to goals. He states:
A shift towards a more appropriate use of metrics means management cannot come up with measures in isolation. They must no longer delude themselves into thinking they know the best method for monitoring progress and stop enforcing a measure that may or may not be the most relevant to the goal. Instead management is responsible for ensuring the end goal is always kept in sight, working with the people with the most knowledge of the system to come up with measures that make the most sense to monitor for progress.
Management is responsible for ensuring that the focus remains on the end goal and for working with the people who are best positioned to develop meaningful measures of progress.

Absolutely. That means you need to engage.

Friday, August 5, 2016

What's in a Dependency?

In The Right Thing, James Hague discusses the challenges of selecting good libraries to support your application. In James' example it's a Perl module that turns out to be unmaintained and eventually falls victim to a security issue.

I read James' article a couple of times. On the first read, I took it as a comment on the peril of replacing working code with a third-party library. On the second, I took it as a comment on dependencies and over generalization.

I sympathize with James. He doesn't go into detail on his program, but it's hard not to rationalize using a library when one is available. And yet there is the problem of replacing working code with a library.

His comments echo concerns in my own product on the introduction of libraries. The product I work on overuses (some say, abuses) an application framework to the detriment of the product. In response, some team members are suggesting other libraries with simpler APIs but supporting similar functionality.

On the surface, adding libraries is reasonable.

What bothers me is that agreement means I have two libraries with similar functionality but different APIs. These differences introduce costs--additional tests, learning curve, different failure modes and security issues. It also introduces the question of which library to use going forward.

There is no easy answer. A good answer requires understanding the trade-offs involved.

  • I want a consistent product architecture.
  • I want to avoid arbitrary complexity. 
  • I need to be pragmatic--and the pragmatic demands guidelines. For example, replacing the old library with the new one, or using the new one whenever it is sufficient for the task at hand.

The change in perspective provided by the second read was profound. I view James' comments as a warning to evaluate dependencies before accepting them. Caveat emptor, if you will.

So what do you look for in your dependencies?
  • actively maintained
  • recent release
  • good platform support
  • good reputation within the community
  • active user community
  • large user community
  • meaningful tests 
  • documentation
James provides several links to support his article. In one link I was introduced to Huffmanization. I wasn't aware there was a term for making your most common function names meaningfully short.

Saturday, July 30, 2016

Stop Being a Cave Dweller

In Whispers and Cries, Mark Bernstein discusses the dangers of misinformation and how blindly accepting information--the unwillingness to challenge the status quo--leads to problems. I share my experience with misinformation in Feature-Based Development: The Lasagne and the Linguini, where I explored statements made by Bertrand Meyer on user stories and Agile practices. Because of that exploration I corrected an error in my thinking on Scrum.

The insight Mark provides is to build a better network for your questions and to allow yourself to make mistakes. He points out that most software developers work in caves or enclaves. Those in caves use the tools and techniques they already know and acquire new knowledge only as circumstance requires. Those in enclaves use the practices commonly accepted within those enclaves, where wisdom-formation is often erratic.

What prompted Mark's article appears to be a discussion in The Dangers of Misinformation. There is advice in that article on how to share information and avoid spreading misinformation.

The situations in Mark's article and the one he references both identify a form of bias. Misinformation, when accepted by an enclave or by large numbers of people, is a form of social proof. I suspect that those of us working in caves suffer from social loafing.

Thursday, July 7, 2016

What Can You Put in a Refrigerator?

I wanted to call attention to What Can You Put in a Refrigerator?, a blog post by James Hague. If you have struggled with the notion of audience for a specification, James' post on specifying refrigerator content does a great job of drawing out this challenge. The article is great for both its humour and the point it makes.

I'm not sure I'd let James near my fridge...

Friday, July 1, 2016

Product Backlogs: Not Just Stories!

In "Feature-Based Development: The Lasagne and the Linguini", Bertrand Meyer raises the spectre of multiplicative complexity and the failure of the user story to address this complexity. In Meyer's view a user story is too simple to manage requirements except for certain types of systems. User stories become unwieldy if there are feature-based interactions to manage.

The Scrum Guide describes the Product Backlog as providing a single list of requirements for a product. The Product Backlog lists all features, functions, requirements, enhancements, and fixes that constitute the changes to be made to the product in future releases. Backlog items have a description, order, estimate and value.

The Product Backlog doesn't explicitly require stories and acknowledges the existence of other types of requirements. Meyer's book goes on to mention that user stories are the preferred method for expressing requirements within Agile methods. It's obvious how Meyer arrived at this conclusion for XP, but how could he possibly include Scrum in his assessment?

Wednesday, June 8, 2016

Sonos: Permission to Access Music Library

I have two WiFi networks. My Sonos is on one network. Everything else is on the other. Occasionally, my Sonos Controller for Mac reports that it does not have permissions to access a music library located on my MacBook Pro. This occurs when adjusting my Music Library Settings in the Controller.

I corrected this by moving the MacBook hosting the music library to the same WiFi network used by the Sonos.

The music library is still accessible after moving the MacBook back to the other network and restarting the Sonos Controller. This implies that the permissions issue is confined to the initial setup of the music library in the Controller.

A better solution is to put all Sonos devices and the MacBook on the same network.

Thursday, June 2, 2016

Scarcity (A Book Review)

I've been reading Scarcity, a book by Mullainathan and Shafir. Their thesis is that our reaction to scarcity contains a hidden logic that is equally applied to those without enough time and those without enough money.

The effect scarcity has on people is that it tends to increase focus and create tunnelling. The increase in focus brings whatever is perceived as scarce to the forefront of people's thinking, but the effect of tunnelling over-emphasizes this scarcity to the point where it affects other aspects of their lives.

For example, a scarcity of time or money creates a focus on those things to the exclusion of other things, such as personal relationships. The exclusion of other things means that scarcity brings new perspective and that new perspective is often detrimental to resolving the issue. That is why busy people stay busy and poor people stay poor.

Tuesday, May 10, 2016

Scrum Master -- Artist and Clown?

I'm reading The Anatomy of Story, by John Truby. I'm reading it because I like to write and have this vague ambition about writing something worth reading someday. Thus far, I'm published on Twitter and Blogger.

The Anatomy of Story provided a new appreciation of how stories are constructed. This appreciation can be applied to movies, books and people. I was struck by the similarity between the Artist and Clown archetype and what makes a good Scrum Master. I talk about the importance of the Scrum Master in Scrum Master Selection--Critical Success Factor?

Truby describes the strengths of the Artist and Clown:
  • defines excellence for a people to positive effect
  • defines what doesn't work to negative effect
  • shows beauty and a vision for the future or shows what is beauty but is in reality ugly or foolish
And weaknesses:
  • can be the ultimate fascist insisting upon perfection
  • may create a special world where all can be controlled
  • simply tears everything down so that nothing has value
The Scrum Guide discusses the Scrum Master role as embodying the following activities.
The Scrum Master is responsible for ensuring Scrum is understood and enacted. Scrum Masters do this by ensuring that the Scrum Team adheres to Scrum theory, practices, and rules. 
The Scrum Master is a servant-leader for the Scrum Team.
In "Agile Methods: The Good, the Hype and the Ugly" an ACM Webinar by Bertrand Meyers discusses as ugly, the Coach and Method Keeper (e.g., Scrum Master) as a separate role (around 44:50). He says this leads people becoming a political commissars and creates a class of people who wash there hands of the result.

I'll suggest that the Scrum Masters Meyer is thinking of embody the weaknesses of the Artist and Clown, while those in the Scrum Guide embody its strengths. Of course, Truby describes these strengths and weaknesses in the context of a character that embodies this archetype.

I selected the Artist and Clown as the archetype of a good Scrum Master primarily because a good Scrum Master should encourage learning--show the way but provide room for learning and, importantly, mistakes. I wrestled with the notion of a Scrum Master as Clown but ultimately concluded that Clowns can have a positive effect as well.

If you are a Scrum Master and you have lost your Scrum Team I wonder if looking at the archetype you embody provides insight on where you might be going wrong.

Wednesday, May 4, 2016

Scrum Master Selection--Critical Success Factor?

I am a firm believer that a separate Scrum Master role, as required by Scrum, is good for a team. The rationale for separation is documented in the Scrum Guide. I agree with its intent, but like most of Scrum it's hard to get right.

In "Agile Methods: The Good, the Hype and the Ugly" an ACM Webinar by Bertrand Meyers discusses as ugly, the Coach and Method Keeper (e.g., Scrum Master) as a separate role (around 44:50). He says this leads people becoming a political commissars and creates a class of people who wash there hands of the result.

The separation described by Meyer is a Scrum smell whose origin lies in the natural tension between the Scrum Master's responsibility to ensure that the Scrum Team adheres to Scrum theory, practices and rules and the Development Team's need to be self-organizing. The problem arises when this natural tension turns into conflict.

This conflict manifests itself whenever the Development Team runs into situations where the theory, practices or rules conflict with what they perceive as the correct way to organize themselves. If the Scrum Master views this as a lack of buy-in instead of part of the learning process there will be trouble.

A common refrain is that if you aren't following the theory, practices and rules of Scrum then you aren't doing Scrum. You need to worry when you hear this sort of thing. It's problematic on at least two fronts.
  • It implies the process has overtaken the deliverables in importance. Carefully consider whether perfecting the process benefits the customer deliverable before you place a great deal of importance on such statements.
  • It implies the natural tension between the Scrum Master's role and the Development Team's role has turned a corner and may be heading towards the sort of conflict that hurts everyone. Conflict can be good, especially in an environment that promotes constructive criticism and learning, but it can be unhealthy in an environment where positions and opinions have become inflexible.
It takes a special person to be an effective Scrum Master. A good Scrum Master has:
  • a precise and wide-ranging knowledge of Scrum theory, practices and rules.
  • wide experience in applying Scrum theory and practices.
  • patience and ability to allow teams to make mistakes and learn from those mistakes.
  • humility and understanding both in terms of the Scrum Master's and Development Team's abilities.
In a world where everyone is looking for quick and easy answers to hard problems, it is natural to gravitate to something that promises a way forward. The thing to remember is that Scrum is hard to get right because it deals with people and challenges them in ways they may not be used to.

Sorting out these challenges isn't something that comes with a certificate from a course you paid a few thousand dollars for. It's a mixture of experience, personality and wisdom that only comes through practice.

Monday, April 11, 2016

Relearning Design Patterns

In Relearning Design Patterns, Egon Elbre observes that a critical piece of information missing from Design Patterns: Elements of Reusable Object-Oriented Software is a pragmatic outlook on how and when to apply design patterns. Egon points to Christopher Alexander and his idea of pattern languages as a solution.

Pattern languages have the benefit of describing a system of patterns that support each other. It's a nice approach that has the potential to clarify when, and perhaps more importantly, when not to use a design pattern.

For example, the code I work with has a high occurrence of the Singleton pattern. No other patterns are made explicit, except possibly Model-View-Controller. My challenge is that our design uses the Singleton to represent one of something simply because there is currently only one of those things. The use of a Singleton introduces constraints that shouldn't be in the design.

If our code base modelled an airplane, it would have a singleton for the engine on a single engine plane, completely ignoring the fact that many planes have more than one engine. The Singleton places an arbitrary constraint on the design where none should exist and this introduces needless complexity into our domain.

In my view, the designers didn't view the domain in terms of appropriate and inappropriate constraints, and our implementation is the poorer for it. Had they viewed the domain in terms of a pattern language, they might have realized that using a Singleton for a jet engine assumes there will only ever be one engine, and recognized that a jet plane has the potential to support multiple engines. Recognizing this would have identified the Singleton as a poor design choice.
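
A hedged sketch of removing that constraint; Engine and Plane are illustrative only:

# No Singleton: the number of engines is a property of each plane,
# not a constraint baked into the Engine class.
class Engine(object):
    def __init__(self, thrust):
        self.thrust = thrust

class Plane(object):
    def __init__(self, engines):
        self.engines = list(engines)

single_engine = Plane([Engine(thrust=120)])
twin_engine = Plane([Engine(thrust=120), Engine(thrust=120)])
assert len(twin_engine.engines) == 2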

A classic case of: when all you have is an idiom, everything looks like an instance of it. Or perhaps that was a hammer.

In another example, I recently had the opportunity to build a client-side library for a web application. At one point, I played with the notion that a Singleton would be appropriate for managing the client session with the server. My initial rationale was that I knew the client would be run as a standalone application by a person using it on the command line.

I ultimately rejected the Singleton because it placed an arbitrary constraint on the client-side library. I recognized that its presence in the implementation would guarantee limits on the use cases supported by the library.

Tuesday, April 5, 2016

Working Agreements for Agile Teams (Part 3)

Curiously, I'm writing on branching policies again. In Working Agreements for Agile Teams, I discussed the problem with ambiguity. In Working Agreements for Agile Teams (Part 2), I discussed how the best working agreements are principles.

The issue of our prescriptive and ambiguous branching policy was raised and we agreed to rewrite it. This time I proposed
We agree to use topic branches for development.
This led to a discussion on why it's reasonable to disagree with this working agreement. For example, is it necessary to use a topic branch for a one-line change? To improve a comment? My answer to these questions is that it depends upon the code review.

We have another working agreement in which we agreed to peer review all code prior to merging it to the main line of development.

This led to a discussion on the interaction between working agreements. The discussion reinforced my thinking that good working agreements are principles and that reasonable people can construct arguments on when and when not to apply them.

I place a great deal of emphasis on product quality and what is merged into the main line.  Topic branches are irrelevant to product quality, although they can have an important role in ensuring it. In effect, my reasoning reflects my values regarding product quality and this reasoning motivates my choices.

If working agreements are principles and principles reflect values then what do your working agreements say about your team?

Sunday, March 13, 2016

Review Board RBTools Example

I was experimenting with Review Board in an effort to extend RBTools. I was surprised by the lack of examples using the RBTools API.

I can't say that I'm pleased with my implementation but it works.

Sample output:

https://reviews.reviewboard.org/r/7763/
Diff: 14884
rbtools/commands/post.py
rbtools/commands/publish.py
Diff: 14887
rbtools/commands/post.py
rbtools/commands/publish.py

Produced by this code:

On GitHub: bminard/experimental/reviewboard/review_requests
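
For anyone else searching for RBTools API examples, a minimal sketch of the same idea might look like the following; I'm hedging on the exact query arguments, and the server URL is simply the public Review Board instance used above:

from rbtools.api.client import RBClient

client = RBClient('https://reviews.reviewboard.org/')
root = client.get_root()  # the Root resource
for request in root.get_review_requests(max_results=5):
    print('%s %s' % (request.id, request.summary))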

Monday, March 7, 2016

Successful Tests Find Bugs

The software team I work with has been challenging my notion of testing. Fundamentally, I approach testing from the perspective of the constantly dissatisfied: I am dissatisfied whenever a test doesn't produce a failure. Comments have been made that I should be satisfied with a successful test result. I beg to differ.

In my view, the purpose of testing is to find bugs. Thus a successful test is one that identifies a product deficiency relative to its stated requirements. If a test produces a successful result, great. It may be a valuable component of our regression tests but it hardly means we've been successful.

If we run a series of tests and have a series of successful results we are no longer getting value from that testing. We should re-evaluate the requirement and make a decision on whether we have exceeded the requirement or need to alter our testing.

The contrary position is also important: tests which exceed requirements are not valuable either, except perhaps to delineate a boundary on the product's operation.

Saturday, February 13, 2016

Your Very Own Pytest Decorators

I was exploring Pytest to discover a way to add a decorator to my test cases. I went with a solution using pytest_namespace().

In conftest.py:

# A pass-through decorator: _decorator is where per-test set-up or
# tear-down behaviour would go.
def decorator(func):
    def _decorator():
        func()
    return _decorator

# Pytest calls this hook at start-up; each entry in the returned
# dictionary becomes an attribute of the pytest module.
def pytest_namespace():
    return {
        'decorator': decorator
    }

A source file:
import pytest

@pytest.decorator
def test_decorator():
    pass

Simple.

Gist

Friday, January 15, 2016

Working Agreements for Agile Teams (Part 2)

In "Working Agreements for Agile Teams", I discuss how I prefer a working agreement to be unambiguous and not use descriptive language. One example, I provide:
We agree to use topic branches for development and merge our patches to the main line after completing unit testing and code reviews.
I pointed out that the use of "topic branch" was too descriptive but didn't provide any reason why I thought this.

In "Agile! The Good, the Hype and the Ugly", Bertrand Meyers provides an explanation of a principle. His explanation has three parts:
  • abstractness requires that a principle be a general rule and not a specific practice
  • falsifiability makes it possible for reasonable people to disagree with a principle
  • descriptiveness means that a principle does not prescribe a specific behaviour.
What is clear is that my notion of a good working agreement fits the definition of a principle. For example, "use branches" is preferable to "use topic branches" because of its generality. I can disagree with this agreement--do I really want the expense of a branch if I'm updating comments?

In my view a good working agreement is a principle. A quick look at Scrum Alliance shows that my thinking is not in line with many others:
Work agreements are the set of rules/disciplines/processes the team agrees to follow without fail to make themselves more efficient and successful. 
So is a working agreement a principle that reasonable people can disagree with or is it something to follow without fail?

Here is another opinion from the Scrum Alliance that raises the question of the appropriate balance of interaction over process.
Scrum teams are self-organizing and cross-functional. Self-organizing teams choose how best to accomplish their work, rather than being directed by others outside the team. To become self-organized, a team has to go through various stages of team development.
This article then goes on to list agreements for every Scrum Event.  How much prescription is too much?

Mike Cohn has something reasonable to say: Choose your rules wisely. ... Choose carefully.

I agree.

Saturday, January 9, 2016

Giving Up More Than You Realize with Twitter (Part 2)

In "Giving Up More Than You Realize with Twitter", I discussed how using Twitter can lead to privacy leaks through location sharing. There are other hazards awaiting those who share carelessly, and depending upon the circumstances those hazards can lead to life altering events.

The think-before-you-Tweet message in "How One Stupid Tweet Blew Up Justine Sacco's Life" shows how random comments can create unwanted attention. Regardless of where you land in your evaluation of Sacco's Tweets, the circumstances that motivated the New York Times article depict events where multiple lapses of judgement came into play to create a situation where everyone involved lost.

In another case of think-before-you-Tweet, "Why You Should Think Twice Before Shaming Anyone on Social Media" shows how a lapse in judgement can have negative consequences. It's difficult to see how the outcome that motivated the Wired article is justifiable on any level.

The New York Times article links Sacco's situation with public shaming and comments on the effect of public shaming written in 1787 by Benjamin Rush:
“Ignominy is universally acknowledged to be a worse punishment than death,” he wrote. “It would seem strange that ignominy should ever have been adopted as a milder punishment than death, did we not know that the human mind seldom arrives at truth upon any subject till it has first reached the extremity of error.”
We have to make mistakes before we can truly understand the truth of an error.

The Wired article manages to pull out the fine line between shaming and bullying. It points out that the bully is the one punching down and that the power differential can be an important component of how Tweets are perceived. In fact:
Online shaming is a door that swings only one way: You may have the power to open it, but you don’t have the power to close it. And sometimes what rushes through that door can engulf you too.
The amplifying effect of social media adds a new dimension to online posts. The lack of context and the low cost of retaliation to any real or perceived slight brings incredible power.

For example, on the day I wrote this post, a woman Tweeted a comment about a remark that a random stranger made about her appearance. She has 420 followers. Her Tweet made it to @EverydaySexism, who has 193K followers. (@EverydaySexism is referenced in the Wired article.)

I have an interpretation of this Tweet: it's a response to an unappreciated comment by a man. I perceive her response as sarcastic, something I infer from her choice of words. If I accept her comment at face value, that's all I get from it. The responses to her Tweet are informative in terms of the inferences others were able to make. I may simply lack imagination.

It's unclear what the intent was behind the initial Tweet. Perhaps shaming the anonymous man who commented on her appearance. If so, the response wasn't entirely supportive.  If she felt threatened then it's difficult to conclude that the ensuing online discussion was a victory.

Whatever the intent, the parallels between what happened in the Wired and New York Times articles and this woman's Tweet are clear: a random comment, amplified through social media, garnered substantial attention. The amplification creates a new context for the comment by virtue of who amplifies it.

The loss of control over who picks up a Tweet (or any social media post), combined with their social context, changes everything. In this case, a Tweet expressing dissatisfaction about a comment on her appearance prompted more negativity about this woman's appearance and personality. The people making these comments are likely complete strangers.

The lesson, if there is one, is that once you publish something, be prepared to lose control over it.

Other points of view.