Thursday, February 1, 2018

Sunk Cost, Code and Emotional Investment

In A Practical Application of DRY, I discussed sunk costs as part of Sandi Metz's discussion of the Wrong Abstraction. In my work on RBTLIB v0.3.0, I encountered another element of sunk cost: emotional attachment to your implementation.

I put in considerable effort between RBTLIB v0.2 and v0.3.0. This effort included at least two rewrites of the core algorithms for traversing the resource tree returned by Review Board. In my case, the core approach of using the Composite Pattern and named tuples didn't change. Their use did.

The issue was primarily due to grey areas in my knowledge of Python and to constraints I placed upon my implementation: avoiding metaclasses, and my inexperience with using Python's __call__ method effectively. (OK, I didn't know __call__() existed when I started my implementation.)
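
To illustrate the idea (a sketch only, not RBTLIB's actual code; the class and resource names here are hypothetical), __call__ lets a plain class act as a node in a Composite without resorting to metaclasses:

```python
from collections import namedtuple

# Leaf resources are simple immutable records.
Resource = namedtuple("Resource", ["name", "value"])

class Composite:
    """A node in a resource tree; callable, so no metaclasses are needed."""
    def __init__(self, name):
        self.name = name
        self._children = {}

    def add(self, child):
        self._children[child.name] = child
        return self

    def __call__(self, name):
        # __call__ lets a node be used like a lookup function:
        # tree("uri_templates") instead of tree.get_child("uri_templates").
        return self._children[name]

tree = Composite("root")
tree.add(Resource("uri_templates",
                  {"diff": "{review_request_id}/diffs/{diff_revision}/"}))
print(tree("uri_templates").value["diff"])
```

Making nodes callable keeps traversal code terse, while the named tuple keeps the leaves immutable.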

Frankly, the situation drove me to new levels of frustration. Each time my frustration peaked, I had to step back, build up the stamina for another rewrite, and push through.

Interestingly, I thought I was disciplined. My emotions kept telling me my broken implementation would be OK if I just spent more time on it. Rationally, I could tell that I was stuck. Steeling myself for a rewrite took significant effort.

Each time, I created an experimental branch with the idea of exploring what was wrong with the implementation. Every time I did that I had a breakthrough. The two experimental branches have been merged to master and the implementation is better for it.

I'm currently on my third rewrite of RBTLIB v0.3.0. I am more confident that this implementation will work, but I'm procrastinating because I am still unhappy with some aspects of it.

Wednesday, January 3, 2018

Working Agreements for Agile Teams (Part 5)

In Working Agreements for Agile Teams (Part 4), I discuss one side effect of using working agreements as principles alongside individual decision making. I view those examples as growing pains--an adjustment people make when the nature of team engagement changes. Those discussions are healthy for a team because they reinforce a new way of working together.

A recent example of learning to work together arose during a discussion on the interaction required by our working agreement on design reviews. This agreement focuses on a successful outcome--when the design is done, we are well positioned to complete the review. It requires the involvement of a designer and two design reviewers:
We agree to document our design and review the design with at least two people prior to implementation.
This agreement positions the team to avoid situations where only one person understands the design. It's simplistic. If you dwell on it, you may conclude it's heavy-handed. Taken literally, this working agreement requires every design review to involve three people.

My notion of design includes adding a method to a class. It also acknowledges this design might warrant a single line of text in a comment for the method. It's natural to ask why anyone would want this overhead for simple cases.

One team member made an argument against this approach:
  • The working agreement promoted inefficiency because it required too many people to engage.
  • The working agreement permitted passive engagement--they asked someone to be a reviewer and that person indicated interest but did not actively engage.
  • We need time to learn (or prototype) so there is something of substance to review.
  • There was a difference of opinion on when to start applying the working agreement.
My counter arguments were:
  • I am happy if the conversation on how to approach the design occurs and all three people actively engage in the decision.
  • Passivity is a form of passive aggressiveness that I won't tolerate--engage or choose not to engage but make a decision.
  • Absolutely, take the time to learn but ensure that the interaction of all three people acknowledges and understands the objective and intended outcome of this learning.
  • Start the interaction at the same time we start working on the story.

Ironically, we disagreed only on the starting point and the passivity. Everything else this team member said made sense to me.

So the working agreement failed to help us understand the importance of the interaction required to make the design review a success. It failed to balance the need for the author to learn and for the reviewers to understand. And it failed to address the notion that too much investment up front might commit us to a poor course of action. Or did it?

Clearly, the working agreement addresses none of the above explicitly. Clearly, different perspectives resulted in different approaches. Importantly, these culminated in a profound outcome for the team.

I encouraged the team member to raise the differences of opinion in our Lean Coffee. They did, and together we discussed the issues with the team.

To the team's credit, they took both perspectives in stride and we agreed to enhance our understanding of the working agreement. We also agreed not to modify the working agreement to include this understanding.

Interactions over processes triumph again! Furthermore, the team adopted several Agile principles in doing so. We all won.

Tuesday, December 5, 2017

Working Agreements for Agile Teams (Part 4)

In Working Agreements for Agile Teams (Part 3), I discuss how working agreements should be principles. Good working agreements are principles, and reasonable people can construct arguments on when and when not to apply them. I've run into a couple of problems with this approach.

In each case, senior people asserted their experience by choosing not to perform a design. And they got into trouble. What's interesting is that our working agreement on design is focused on an interaction between the author and two reviewers. The intent is to ensure knowledge sharing and to build some level of consensus that the right problem is being solved.

In the cases where these senior people got it wrong, they failed to consider alternative designs, or they failed to consider what other team members considered a valuable opportunity for knowledge transfer. It wasn't a question of whether a design was created; it was about the communication that would have resulted.

So the interesting part is the failure to understand the importance of the interaction and the communication element. This confounds me.

Monday, November 6, 2017

RBTools and Review Board's Web API

In Review Board RBTools Example I developed a simple client using RBTools. Here I explore another approach using the URI templates provided by Review Board's Web API.

The Review Board Web API embeds a lot of functionality. It enables client development, and the documentation describes how to ensure forward compatibility and how legacy APIs are managed. It's a nice piece of documentation for a rich API.

The entire API can be obtained using the Root Resource List. This resource insulates clients from URI changes. The URI Templates identify how to obtain specific resources.  To obtain a resource published by the API use the URI template for that resource and fill in the variables.

For example, to obtain a diff in a review request, use the URI template for a diff [1]:

{review_request_id}/diffs/{diff_revision}/

Obtain the URI, review_request_id and diff_revision as follows:
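
A minimal sketch of that lookup (the "uri_templates" key comes from the Root List Resource; the server URL and helper names here are placeholders of my own, not the original client):

```python
import json
from urllib.request import urlopen

def uri_templates(server):
    """Fetch the Root List Resource and return its URI templates."""
    with urlopen(server + "/api/") as response:
        return json.load(response)["uri_templates"]

def fill(template, **values):
    """Fill in a URI template; the {variable} markers match str.format."""
    return template.format(**values)

# Usage (the server URL is a placeholder):
#   templates = uri_templates("https://reviews.example.com")
#   diff_uri = fill(templates["diff"], review_request_id=42, diff_revision=1)
```

Because the templates are fetched from the root resource at run time, the client never hard-codes resource URIs.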

Although crude, this client uses the Web API in a forward compatible manner. It also provides insight on the relationship between the URI templates and the values to populate to access a resource.

A good next step might be to develop a discovery mechanism using the URI templates and the linked resources so that tools can be created using the Web API.

[1] URI Template.

Sunday, October 8, 2017

Individuals and interactions over processes and tools

Recently, I've had the opportunity to reflect on the value of individuals and interactions over processes and tools, part of the Agile Manifesto.

The team has created a working agreement for design reviews. This agreement requires the involvement of two people and the designer. The rationale for this approach is that it promotes better design and knowledge sharing amongst team members. Interactions are critical to good design because different perspectives can identify opportunities and alternative approaches.

The team struggled with design reviews. To their credit, we are almost at the point where design reviews are a regular practice. Unfortunately, a few developers have challenges adhering to the working agreement because they feel that it's an impediment for some activities.

In one example, a developer gave a student a design to implement and then went on vacation. Unfortunately, the implementation failed code review when questions arose about the implementation and the student was unable to explain the design rationale. In another example, a developer took liberties in an implementation that broke a best practice.

In the first case, the team was able to improve the resulting design significantly over what the original developer created. In the second case, the best practice got discussed and differences in approach got worked out.

In each example, the developers who avoided the working agreement introduced other challenges that they didn't anticipate. The redesign and implementation in the first example cost an additional week plus the vacation time. The second example created a knowledge void.

The rationale given by each developer in these examples was that they knew what they were doing. In my opinion, they failed to recognize the benefit introduced through the intent of the working agreement: ensuring the appropriate interactions occurred.

When the first example went through the sprint retrospective, we ended up with a simple result: require the interaction, use the stand up to create awareness of the design intent and invite participation.

Friday, September 15, 2017

Safer Packer Examples with SSH

It's a little unsettling to see Packer template files with clear-text passwords for the vagrant user and root embedded within. Some template authors tell you to delete the vagrant user account if the virtual machine is publicly accessible. Still, it's cringe-worthy.

In my experiments with Packer, I decided to script away some of this cringe-worthiness. I took the position that I could improve upon the situation if I
  1. generate my own SSH key pair,
  2. lock out the vagrant user account so that only SSH access using my key is possible, and
  3. encrypt the root password in the kickstart and preseed files on CentOS, Debian and Fedora.
This isn't perfect, but it mitigates the above as follows:
  1. it avoids the use of the vagrant insecure public key,
  2. it avoids the use of common words and phrases in example passwords, and
  3. it limits root password exposure.
As an added benefit, any examples that accidentally make it into production are more secure because the passwords and SSH keys are generated when the Vagrant Boxes are built.
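
The generation steps can be sketched roughly as follows (this is not the actual makefile.credentials; the choice of ssh-keygen(1) and openssl passwd -6 is my assumption about suitable tooling):

```python
import secrets
import subprocess

def keygen_command(key_path):
    # ssh-keygen(1): a fresh ed25519 key pair, empty passphrase, no prompts.
    return ["ssh-keygen", "-q", "-t", "ed25519", "-N", "", "-f", key_path]

def hash_command():
    # openssl passwd -6 reads a password on stdin and emits the SHA-512
    # crypt hash expected by preseed and kickstart files.
    return ["openssl", "passwd", "-6", "-stdin"]

def generate(key_path):
    """Generate an SSH key pair and a hashed random root password."""
    subprocess.run(keygen_command(key_path), check=True)
    password = secrets.token_urlsafe(16)
    hashed = subprocess.run(hash_command(), input=password, text=True,
                            capture_output=True, check=True).stdout.strip()
    return password, hashed
```

Running something like this at build time is what makes the resulting Vagrant Boxes safer than the published examples: nothing secret lives in the templates themselves.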

The basic strategy I used to achieve the above is embedded within makefile.credentials. Credentials are generated by default but can easily be created manually. The credentials are used by the Packer template files and by a script that generates a Preseed Configuration file.

These examples use Debian but there are Fedora and CentOS examples as well.

Saturday, September 9, 2017

Subprocesses In Python

In A Poor Use of GitPython, I describe how my layering approach in a project using GitPython proved unsatisfactory. Unsatisfactory because I wasn't using GitPython to the full extent of its power. Unsatisfactory because I didn't want to spend time learning Git internals and how GitPython makes them available.

I revisited my approach without GitPython. In A Poor Use of GitPython, my approach caused Git command-line arguments to spread into other functions, and I'm looking for an abstraction that doesn't cause this problem.

Let's start with the subprocess interaction:

import subprocess

def execute(command, *args):
  """ Use a subprocess to execute a command. Supply the command with any arguments. """
  assert 0 < len(command)
  return subprocess.check_output([ command ] + list(args))

I want the output from the command, and I want to know whenever I get a non-zero return code. This function provides a nice test point for separating my application from the libraries it uses.

I call git using the following function.
class GitException(Exception):
  """ Throw an exception whenever an error occurs using GIT(1). """
  def __init__(self, command, output, returncode):
    assert 0 < len(command)
    self._command = str(command)
    assert 0 <= len(output)
    self._output = str(output)
    assert 0 < returncode
    self._returncode = int(returncode)

  def command(self):
    return self._command

  def output(self):
    return self._output

  def returncode(self):
    return self._returncode

def git(command, *args):
  """ Execute GIT(1). Supply the git command and any arguments. """
  assert 0 < len(command)
  try:
    return execute("git", command, *args)
  except subprocess.CalledProcessError as e:
    raise GitException(e.cmd, e.output, e.returncode)

Too many layers? Perhaps. All I've achieved thus far is a couple of wrappers that provide strong guarantees on the length of the command. In some respects this is worse than the result I achieved in A Poor Use of GitPython.

The advantage lies in recognizing that some git commands (e.g., git-show-ref and git-ls-files) return exit status 1 under specific circumstances that I might want to handle in higher layers.
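
That handling might look like this sketch (branch_exists and the injectable run parameter are my own illustration, not part of the wrappers above):

```python
import subprocess

def branch_exists(name, run=subprocess.run):
    """Return True if a local branch exists. git-show-ref exits with
    status 1 when the ref is missing, which is a normal outcome here,
    not an error worth raising."""
    result = run(["git", "show-ref", "--verify", "--quiet",
                  "refs/heads/" + name])
    if result.returncode == 0:
        return True
    if result.returncode == 1:
        return False
    # Anything else (e.g., not a git repository) is a genuine failure.
    raise subprocess.CalledProcessError(result.returncode, result.args)
```

The injectable run parameter also gives the higher layers the same test point the execute() wrapper provides.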