Saturday, March 31, 2018

RBTLIB v0.3.0 On Travis CI

In RBTLIB v0.3.0 On Read The Docs, I discussed adding support for Read the Docs to RBTLIB. Recently, I added RBTLIB to Travis CI. Travis CI is super easy to work with, and it provided the opportunity to eliminate deployment issues. This is important, as my ultimate goal for RBTLIB is availability through PyPI.

The main advantage Travis CI provides is the ability to test on different platforms and to eliminate portability issues. I lack experience with Python's setup tools, so there are likely to be issues as I move RBTLIB to PyPI.
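I haven't written the packaging pieces yet, so take the following as a rough sketch of the kind of setup.py I expect to need, rather than RBTLIB's actual configuration; the metadata and dependency list are placeholders.

    # Packaging sketch. Metadata and dependencies are placeholders,
    # not RBTLIB's final configuration.
    from setuptools import setup, find_packages

    setup(
        name='rbtlib',
        version='0.3.0',
        description='A client-side library for Review Board',
        packages=find_packages(exclude=['tests']),
        install_requires=[
            'requests',  # assumed: something to talk HTTP to Review Board
        ],
    )

With something along these lines in place, Travis CI can install the package the same way a PyPI user eventually would, which is exactly where I expect the portability issues to surface.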

All in all, v0.3.0 has significant infrastructure improvements over v0.2. Functionally, v0.3.0 targets posting of review requests through rbt.

Friday, March 2, 2018

RBTLIB v0.3.0 On Read The Docs

In RBTLIB v0.3 Update (Part 2), I discussed introducing complexity measures to RBTLIB using radon and xenon. Recently, I've introduced Sphinx and taken advantage of Read the Docs.

Sphinx is a documentation generator for Python and other languages.

Read the Docs lets you create, host and search project documentation.

The two, coupled with GitHub, create a publishing environment that allows me to update my project documentation, push it to GitHub, and have the documentation published on Read the Docs within minutes. Simple.
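For reference, most of the Sphinx side is generated by sphinx-quickstart; a stripped-down conf.py looks something like the following. The values shown are illustrative, not RBTLIB's actual configuration.

    # docs/conf.py: illustrative Sphinx configuration, not RBTLIB's actual file.
    project = 'rbtlib'
    author = 'Placeholder Author'    # placeholder
    version = '0.3.0'
    release = '0.3.0'

    extensions = [
        'sphinx.ext.autodoc',    # pull docstrings into the documentation
        'sphinx.ext.viewcode',   # link documentation pages to highlighted source
    ]

    master_doc = 'index'             # root document for the toctree
    html_theme = 'sphinx_rtd_theme'  # Read the Docs theme, if installed

Read the Docs watches the GitHub repository and rebuilds from that configuration on every push, which is where the minutes-long turnaround comes from.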

Part of the move to Read the Docs included a cleanup of the project's naming. I moved away from rbt to rbtlib because I don't want to cause confusion between my work and RBTools, which provides a command-line tool called rbt.

It's not my intent to diminish the work that people are doing on Review Board and RBTools by causing confusion. I still don't know if my project will be successful. It is my hope that it may be useful to the Review Board team, but I haven't engaged anyone there.

I learned through Kenneth Reitz's Requests module that a best practice exists for API versioning: Semantic Versioning. Seems sensible to adopt. I've moved from v0.3 to v0.3.0. Same release.

Semantic Versioning also helpfully includes advice on versioning projects in an alpha or beta stage: once I achieve my goals for v0.3.0, I'll be targeting v0.4.0.

I'd been using virtualenv to develop RBTLIB and incorporated virtualenvwrapper. Very nice set of tools.

RBTLIB documentation: http://rbtlib.readthedocs.io/en/latest/.

Thursday, February 1, 2018

Sunk Cost, Code and Emotional Investment

In a Practical Application of DRY, I discussed sunk costs as part of Sandi Metz's discussion on the Wrong Abstraction. In my work on RBTLIB v0.3.0 I encountered another element of sunk cost: emotional attachment to your implementation.

I put in considerable effort between RBTLIB v0.2 and v0.3.0. This effort included at least two rewrites of the core algorithms for traversing the resource tree returned by Review Board. In my case, the core approach of using the Composite Pattern and Named Tuples didn't change. Their use did.

The issue was primarily due to grey areas in my knowledge of Python, the constraint I placed upon my implementation of avoiding metaclasses, and my inexperience with using Python's __call__ method effectively. (OK, I didn't know __call__() existed when I started my implementation.)
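To make that concrete, here's a minimal sketch of the general approach: a composite of named tuples with construction deferred behind __call__. This is illustrative only, not my actual implementation, and the resource structure shown is far simpler than what Review Board returns.

    # Sketch: a composite built from named tuples, with __call__ separating
    # the description of a node from the construction of its tuple.
    # Illustrative only; not RBTLIB's actual implementation.
    from collections import namedtuple

    class ResourceNode:
        """Composite node wrapping one level of a nested resource dictionary."""

        def __init__(self, name, data):
            self._name = name
            self._data = data

        def __call__(self):
            """Build the named tuple for this node, recursing into children."""
            fields = sorted(self._data)
            Resource = namedtuple(self._name, fields)
            values = []
            for field in fields:
                value = self._data[field]
                if isinstance(value, dict):
                    # Child dictionaries become composite nodes themselves.
                    value = ResourceNode(field, value)()
                values.append(value)
            return Resource(*values)

    # A trimmed-down response vaguely resembling Review Board's root resource.
    root = ResourceNode('root', {
        'stat': 'ok',
        'links': {'review_requests': {'href': '/api/review-requests/'}},
    })()

    print(root.stat)                         # ok
    print(root.links.review_requests.href)   # /api/review-requests/

The appeal of __call__ here is that a node can be handed around as a plain callable and only turned into a tuple when something actually asks for it.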

Frankly, the situation drove me to new levels of frustration. Each time my frustration peaked I had to step back, build up the stamina for another rewrite, and push through.

Interestingly, I thought I was disciplined. My emotions kept telling me my broken implementation would be OK if I just spent more time on it. Rationally, I could tell that I was stuck. Steeling myself to rewrite took significant effort.

Each time, I created an experimental branch with the idea of exploring what was wrong with the implementation. Every time I did that I had a breakthrough. The two experimental branches have been merged to master and the implementation is better for it.

I'm currently on my third rewrite of RBTLIB v0.3.0. I am more confident that this implementation will work, but I'm procrastinating because I am still unhappy with some aspects of it.

Wednesday, January 3, 2018

Working Agreements for Agile Teams (Part 5)

In Working Agreements for Agile Teams (Part 4), I discussed one side-effect of using working agreements as principles to guide individual decision making. I view those examples as growing pains--an adjustment that people make when the nature of team engagement changes. Those discussions are healthy for a team because they reinforce a new way of working together.

A recent example of learning to work together arose during a discussion on the interaction required by our working agreement on design reviews. This agreement focuses on a successful outcome--when the design is complete, we are well positioned to complete the review. It requires the involvement of a designer and two design reviewers:
We agree to document our design and review the design with at least two people prior to implementation.
This agreement positions the team to avoid situations where only one person understands the design. It's simplistic. If you dwell on it, you may conclude it's heavy-handed. Taken literally, this working agreement requires every design review to involve three people.

My notion of design includes adding a method to a class. It also acknowledges this design might warrant a single line of text in a comment for the method. It's natural to ask why anyone would want this overhead for simple cases.

One team member made an argument against this approach:
  • The working agreement promoted inefficiency because it required too many people to engage.
  • The working agreement permitted passive engagement--they asked someone to be a reviewer and that person indicated interest but did not actively engage.
  • We need time to learn (or prototype) so there is something of substance to review.
  • A difference of opinion on when to start applying the working agreement.
My counter arguments were:
  • I am happy if the conversation on how to approach the design occurs and all three people actively engage in the decision.
  • Passivity is a form of passive aggressiveness that I won't tolerate--engage or choose not to engage but make a decision.
  • Absolutely, take the time to learn but ensure that the interaction of all three people acknowledges and understands the objective and intended outcome of this learning.
  • Start the interaction at the same time we start working on the story.

Ironically, we disagreed only on the starting point and the passivity. Everything else this team member said made sense to me.

So the working agreement failed to help us understand the importance of the interaction required to make the design review a success. It failed to balance the need for the author to learn and for the reviewers to understand. And it failed to address the notion that too much investment up front might commit us to a poor course of action. Or did it?

Clearly, the working agreement addresses none of the above explicitly, and different perspectives resulted in different approaches. Importantly, these culminated in a profound outcome for the team.

I encouraged the team member to raise the differences of opinion in our Lean Coffee. They did, and together we discussed the issues with the team.

To the team's credit, they took both perspectives in stride and we agreed to enhance our understanding of the working agreement. We also agreed not to modify the working agreement to include this understanding.

Interactions over process triumphs again! Furthermore, the team adopted several Agile principles in doing so. We all won.