Friday, September 15, 2017

Safer Packer Examples with SSH

It's a little unsettling to see Packer template files with a cleartext password for the vagrant user and root embedded within. Some template authors tell you to delete the vagrant user account if the virtual machine is publicly accessible. Still, it's cringeworthy.

In my experiments with Packer I decided to script away some of this cringeworthiness. I took the position that I can improve upon the situation if I
  1. generate my own SSH key pair,
  2. lockout the vagrant user account so that only SSH access is possible using my key, and
  3. encrypt the root password in the kickstart and preseed files on CentOS, Debian and Fedora.
This isn't perfect but it mitigates the above points as follows:
  1. avoids the use of the vagrant insecure public key.
  2. avoids the use of common words and phrases in example passwords.
  3. limits root password exposure.
As an added benefit, any examples that accidentally make it into production are more secure because the passwords and SSH keys are generated when the Vagrant Boxes are built.

The basic strategy I used to achieve the above is embedded within makefile.credentials. Credentials are generated by default, but can easily be created manually. Credentials are used by the Packer template files and by a script that generates a preseed configuration file.
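
For illustration, here is a minimal Python sketch of the same idea. The project itself does this in makefile.credentials, and the function names below are mine, not the project's:

import crypt
import secrets
import subprocess

def generate_key_pair(path):
    # Generate a fresh key pair so the Vagrant insecure public key is never
    # used. Writes the pair to path and path.pub.
    subprocess.check_call(
        ['ssh-keygen', '-t', 'rsa', '-b', '4096', '-N', '', '-f', path])

def encrypted_root_password():
    # Generate a random password and return it with its SHA-512 crypt hash;
    # the hash is what lands in the kickstart or preseed file, never the
    # clear text.
    password = secrets.token_urlsafe(16)
    return password, crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512))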

These examples use Debian but there are Fedora and CentOS examples as well.

Saturday, September 9, 2017

Subprocesses In Python

In A Poor Use of GitPython, I describe how my layering approach in a project using GitPython proved unsatisfactory. Unsatisfactory because I wasn't using GitPython to the full extent of its power. Unsatisfactory because I didn't want to spend time learning Git internals and how GitPython makes them available.

I revisited my approach without GitPython. In A Poor Use of GitPython, my approach resulted in the spread of Git command-line arguments to other functions and I'm looking for a nice abstraction that doesn't cause this problem.

Let's start with subprocess interaction:
import subprocess

def execute(command, *args):
  """ Use a subprocess to execute a command. Supply the command with any arguments.
  """
  assert 0 < len(command)
  return subprocess.check_output([ command ] + list(args))

I want the output from the command and I want to know whenever I get a non-zero return code. This function also provides a nice test point for separating my application from the libraries it uses.
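
For example, a unit test can stand in for subprocess.check_output and never launch a real process. A sketch, assuming the wrappers live in a hypothetical module named vcs:

import unittest
from unittest import mock

import vcs  # hypothetical module containing execute()

class ExecuteTest(unittest.TestCase):
    def test_execute_builds_the_argument_list(self):
        with mock.patch('vcs.subprocess.check_output') as check_output:
            check_output.return_value = b'ok'
            self.assertEqual(b'ok', vcs.execute('git', '--version'))
            check_output.assert_called_once_with(['git', '--version'])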

I call git using the following function.
class GitException(Exception):
  """ Throw an exception whenever an error occurs using GIT(1).
  """
  def __init__(self, command, output, returncode):
    assert 0 < len(command)
    self._command = str(command)
    assert 0 <= len(output)
    self._output = str(output)
    assert 0 < returncode
    self._returncode = int(returncode)

  @property
  def command(self):
    return self._command

  @property
  def output(self):
    return self._output

  @property
  def returncode(self):
    return self._returncode

def git(command, *args):
  """ Execute GIT(1). Supply the git command and any arguments.
  """
  assert 0 < len(command)
  try:
    return execute("git", command, *args)
  except subprocess.CalledProcessError as e:
    raise GitException(e.cmd, e.output, e.returncode)

Too many layers? Perhaps. All I've achieved thus far is a couple of wrappers that provide strong guarantees on the length of the command. In some respects this is worse than the result I achieved in A Poor Use of GitPython.

The advantage lies in the recognition that some git commands (e.g., git-show-ref and git-ls-files) return with exit code 1 under specific circumstances that I might want to handle in higher layers.
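
For example, a higher layer could translate the exit status of git-show-ref into a boolean. A sketch (branch_exists() is my illustration, not part of the project):

def branch_exists(branch):
  """ Return True if the local branch exists.
  """
  try:
    git("show-ref", "--verify", "--quiet", "refs/heads/" + branch)
    return True
  except GitException as e:
    # git-show-ref exits with 1 when the reference does not exist.
    if e.returncode == 1:
      return False
    raise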

Thursday, August 17, 2017

Experiments with Packer and Vagrant

To remedy
Build 'virtualbox-iso' errored: Error uploading VirtualBox version: SCP failed to start. This usually means that SCP is not
    properly installed on the remote system.
install openssh-clients on the Linux guest. Doh.
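
On a CentOS or Fedora guest, for example, the fix typically amounts to:
yum install -y openssh-clients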

Friday, August 11, 2017

Blog Entry Syntax Highlighting

I've had good success using Pygments to highlight code on this blog. It provides the best of both worlds: a large selection of supported languages and standalone HTML output.

Standalone HTML was important because many of the other solutions relied upon JavaScript, and some of those scripts are hosted on external servers, which increases the possibility that the highlighting disappears whenever a server is down.

I use:
pygmentize -f html -O style=emacs,linenos=1 -o test.html cmd.py
To produce:

from git import Repo

def cmd(path, *args, **kwargs):
    """
    Generate a git log command using the provided arguments. Empty git log
    lines are stripped by this function.

    Params:
        [in] path - location of repo
        [in] args - non-keyword arguments to provide to the git log command
        [in] kwargs - keyword arguments to provide to the git log command

    Returns: a list of lines returned from the git log command
    """
    repo = Repo(path)
    assert repo.bare == False
    result = list()
    for line in repo.git.log(*args, **kwargs).split('\n'):
        if len(line) == 0:
            continue
        result.append(line)
    return result

The horizontal scrollbar shows up in the code block embedded in the page.

The crude but effective source code to produce the same result:

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
import sys

with open(sys.argv[1], 'r') as file:
    source = file.read()

print("""\
<style>
  #codeblock {
    overflow-x: scroll;
    width: auto;
    white-space: nowrap;
  }
""")
print(HtmlFormatter().get_style_defs('.highlight'))
print("""\
</style>

<div id="codeblock">
""")
formatter = HtmlFormatter(linenos=True)
print(highlight(source, PythonLexer(), formatter))
print("""\
</div>
""")
Update: GitHub Gists offers a script which greatly simplifies adding code to your blog.

Wednesday, July 19, 2017

Experiments with Vagrant and Packer On Fedora

I've had a strange experience with Packer on Fedora 13 (yes, 13). This may prove interesting to anyone who has encountered the following error from Packer:
==> virtualbox-iso: Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password keyboard-interactive], no supported methods remain
I'm using Packer v0.10.1.

I found that adjusting ssh_timeout did not seem to have any effect. I ended up setting ssh_handshake_attempts equal to 100.
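
For reference, these settings sit alongside the rest of the builder definition. A trimmed fragment of the template (values illustrative):

{
  "builders": [{
    "type": "virtualbox-iso",
    "ssh_username": "vagrant",
    "ssh_timeout": "30m",
    "ssh_handshake_attempts": 100,
    "headless": false
  }]
}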

What I learned in investigating this problem is that this is the error message you get when the handshake-attempt threshold is exceeded. The message implies an authentication problem, but in my case it occurred because SSH wasn't running yet.

To remedy this, confirm whether it's an SSH problem or not by executing:
PACKER_LOG=1 PACKER_LOG_PATH=out.log packer build --debug TEMPLATE
Stop at the step:
==> virtualbox-iso: Pausing after run of step 'StepTypeBootCommand'. Press enter to continue.
Run Packer on a template with headless set to false (the default) so you can watch the virtual machine boot. Log in once things are up and running. If the install completes you can also "Press enter to continue". If Packer connects over SSH you can rule out an SSH problem.

To confirm whether it's a problem with your Packer template, check ssh_wait_timeout and ssh_timeout. ssh_wait_timeout is deprecated in favour of ssh_timeout. Curiously, neither seemed to have any effect on Fedora 13. At one point, I set my timeout to hours and watched Packer shut down the virtual machine because it couldn't connect in the space of a few minutes.

I was successful in using both parameters on CentOS 7.0 and 7.2 as well as Debian 7.11.0 and 8.5.0.

The problem doesn't appear to be my Packer template (Fedora uses the same template as CentOS and Debian). 

It isn't Packer (it works properly on CentOS and Debian).

Weird.

Thursday, July 13, 2017

A Poor Use of GitPython

I'm working on a project that uses Git. It's written in Python. The initial implementation uses GitPython. GitPython provides abstractions for using Python with Git repositories.

My project uses git-log. I chose a layered approach with a single function as the point of access to the Git repository.

This function is the focal point for access to the git repository.
from git import Repo

def cmd(path, *args, **kwargs):
    """
    Generate a git log command using the provided arguments. Empty git log
    lines are stripped by this function.

    Params:
        [in] path - location of repo
        [in] args - non-keyword arguments to provide to the git log command
        [in] kwargs - keyword arguments to provide to the git log command

    Returns: a list of lines returned from the git log command
    """
    repo = Repo(path)
    assert repo.bare == False
    result = list()
    for line in repo.git.log(*args, **kwargs).split('\n'):
        if len(line) == 0:
            continue
        result.append(line)
    return result

Layering separates the repository interface from the information in the repository. It isolates the GitPython interface from the rest of the project but introduces a different challenge.

Consider this function.
def commitLog(path, commitHash):
    """
    Return the log entry for a single commit hash.

    Params:
        [in] path - location of repo
        [in] commitHash - commit hash

    Returns: log entry for the specified commit hash
    """
    return ' '.join(cmd(path, '-n 1', commitHash)[3:])

My approach to layering moves the command-line options for git-log into the clients of cmd().

A better approach might have used GitPython's object model. Unfortunately, the learning curve requires knowledge of Git internals, something I wanted to avoid in the interest of time. In my haste I didn't take advantage of GitPython's power.

The next revision of this project needs to revisit GitPython's object model to help traverse my repository. It will be interesting to see whether this model easily supports obtaining the information provided by git log -n 1 and other porcelain options.
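
For instance, the message that commitLog() scrapes out of git log -n 1 is available directly on GitPython's Commit object. A sketch I haven't folded into the project:

from git import Repo

def commit_message(path, commit_hash):
    # GitPython resolves the hash to a Commit object; there is no
    # porcelain output to parse.
    return Repo(path).commit(commit_hash).message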

Tuesday, June 20, 2017

Experiments with Vagrant and Packer on Debian

I had occasion recently to review Vagrant and Packer from HashiCorp. My review involved an exploration of whether Packer is useful for bootstrapping virtual machines. I ended up selecting a project by Pierre Mavro called packer-debian.

I am impressed with the power provided by Packer and Vagrant. They achieve the goal of automating the construction and deployment of virtual machines while enabling control of the virtual machine configuration. Very powerful tools.

My only disappointment with Packer is that the JSON files used to configure the virtual machine do not seem to support comments.

My experiments are located on GitHub. Ultimately, they are directed at creating virtual machines for ancient operating systems. My immediate goal is to ensure that I can build an application on Fedora 13. (Yes, 13. Long story.) I will publish my work on GitHub as it progresses.

With thanks to Pierre Mavro for publishing his work. It enabled me to use a working example as a basis for my own investigation.

Wednesday, June 14, 2017

Bottleneck, Where Art Thou (New Tools, Same Problems)

In Bottleneck, Where Art Thou, I discuss how our use of Review Board and Bugzilla conspired to produce bottlenecks. Blindly accepting the workflow imposed by tools can create poor outcomes. To improve our workflow, the team moved to JIRA and Confluence with the goal of abandoning Bugzilla. We evaluated Crucible but remained with Review Board.

Two months following the introduction of JIRA, code reviews remain a bottleneck. Team members often position code reviews as something to do in addition to everything else they need to complete. Reviews pile up on individual to-do lists just as before.

Well, almost like before. JIRA provides tools to manage workflow. We use the JIRA Agile simplified workflow: "To Do" progresses to "In Progress" then "Done". We use stories and tasks, so a bottleneck means that tasks stay "In Progress" longer.

A student suggested changing our workflow to address the code reviews. They suggested placing "In Review" between "In Progress" and "Done" much like in Every team needs kick-ass code reviews. Seems simple enough. [1]

Except workflows seldom get simpler as they age. They get bigger and more complex.

Adding a state tells the team and me what we already know. It identifies the tasks in the bottleneck. A state doesn't bring us closer to a solution. It may even deflect from one. A better solution improves the timeliness of code reviews. A state doesn't improve timeliness. It draws attention to deficient time management.

A better solution enables better time management for tasks requiring code reviews. Our code reviews require that at least two people review a change. This means tasks involve three people: the author and two reviewers.

The default implementation of a JIRA task and the JIRA Agile simplified workflow have the potential to shape our workflow in a disadvantageous manner. A task in JIRA has a single owner. JIRA makes it very easy to add new states. There is a conflict between our workflow and the defaults provided by JIRA.

If you assume the team is committed to code reviews but struggles with execution, ask why. Does a single task owner mean they are solely responsible for it? Does this place team members in competition to complete tasks? If competition exists, does it imply that team members are likely to complete their own tasks before they help their peers?

If there is any truth to competition then a better solution is to change a JIRA task to include reviewers. Or introduce tasks for code review.

We went with multiple owners for tasks in the hope that this provides a greater sense of commitment to tasks and allows team members to better coordinate their work. If not, we can always add a new state and watch the reviews pile up there.

[1] Another student had a similar idea. Their rationale for "In Review" was management might be interested in task status. Management isn't asking and we didn't have a compelling reason for adding "In Review" that helped the team.

Monday, May 22, 2017

RBTLIB v0.3 Update (Part 2)

In RBTLIB v0.3 Update (Part 1) I discussed some plans for posting review requests to a Review Board instance using RBTLIB. During the development I added a complexity measure to the project.  I was intrigued by how complexity measures were used in a talk by Sandi Metz.  Sandi's talk is on Ruby.  I'm using Python.

I found two Python modules, radon and xenon, that compute cyclomatic complexity. Radon computes several measures, and xenon provides a way to add complexity thresholds to a continuous integration pipeline.
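
Radon also exposes a Python API, so the same measure can be computed from a script. A small sketch, assuming radon is installed:

from radon.complexity import cc_rank, cc_visit

# Report the cyclomatic complexity and radon rank (A through F) of every
# block in one source file.
with open('rbtlib/resource/composite.py') as source:
    for block in cc_visit(source.read()):
        print(block.name, block.complexity, cc_rank(block.complexity))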

To measure the entire project, including test code from the project's root directory:
radon cc -e "ven/*" -as .
The core abstraction in RBTLIB at this point is still the Composite design pattern.
This component has the highest complexity:

rbtlib/resource/composite.py
    M 41:4 Composite.href_component - B (6)
    M 63:4 Composite.component - B (6)
    C 31:0 Composite - A (4)
    M 53:4 Composite.list_component - A (4)
    M 34:4 Composite.__init__ - A (1)

From the radon documentation:

M - Method
C - Class

A - low risk - simple block
B - low risk - well structured and stable block

Overall complexity of the project thus far:

72 blocks (classes, functions, methods) analyzed.
Average complexity: A (1.88888888889)

So far, complexity doesn't appear to be a problem.

Tuesday, May 16, 2017

Code Matters

In Code Matters, Bertrand Meyer discusses several flaws introduced as a result of poor language design. He cites examples from the Apple and OpenSSL security vulnerabilities that occurred in 2014. It's a nice discussion of the importance of language design and how it affects implementation.

I found Meyer's discussion of root cause analysis informative, particularly the hypothetical example showing how a combination of factors creates situations that are difficult to detect. What makes Meyer's point interesting is his reference to Nancy Leveson.

Leveson's home page contains a good collection of papers on safety in engineering. One paper investigates the Therac-25, a medical device whose software flaws led to massive radiation overdoses in six patients. The section on "Causal Factors" is informative.

One conclusion from Leveson's paper is that focusing on particular bugs does not lead to a safe design. The mistakes attributed to the Therac-25 involve poor software engineering practices and reliance on software alone to ensure safe operation. You can't patch your way out of a poor implementation, and you shouldn't make software solely responsible for safety-critical functions.

Meyer's point in his hypothetical example, that a combination of factors can be difficult to detect and result in catastrophic failure, is made real in Leveson's discussion of "Unrealistic Risk Assessment" in the Therac-25.

It also looks like a good lesson in probabilities, wherein a probability greater than zero means that the event can occur (however unlikely).



Sunday, April 23, 2017

RBTLIB v0.3 Update (Part 1)

In RBTLIB - A Client-Side Library for Review Board, I introduced RBTLIB v0.1. I'm working on RBTLIB v0.3. A big change between the first and third revision is the introduction of classes for Review Board resources.

Why this change? RBTLIB v0.3 introduces support for operations using HTTP POST. These require the client to authenticate. This means managing user credentials and the introduction of session support for POST requests.
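
To give a feel for the shape of the problem, here is a rough sketch using requests (not RBTLIB's actual code; the URL and field names are illustrative):

import requests

# An authenticated session for POST requests against a Review Board
# instance. Credentials come from the caller; never hard-code them.
session = requests.Session()
session.auth = ('username', 'password')
response = session.post(
    'http://reviews.example.com/api/review-requests/',
    data={'repository': 'example'})
response.raise_for_status()
print(response.json())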

After some experimentation I discovered that it's possible to query a Review Board instance using the simple implementation up to RBTLIB v0.2. If your Review Board instance supports anonymous access you can write a simple client to query it.

That's important as the original use case for my project was a client to report on reviews entered during a fixed time period -- something that RBTLIB v0.2 does easily. RBTLIB v0.3 exceeds my original requirement and the implementation is more complex as a result.

One more note about RBTLIB v0.2: the resource links getter is set up so that you can't provide URL parameters to the Root resource. This restriction will need revisiting if the Review Board Web API is ever changed to support them.

Monday, April 17, 2017

How To Keep Your Best Programmers

How To Keep Your Best Programmers provides an interesting perspective on why talent stays with (or leaves) an organization. It's worth a read, simply because of the wide perspective provided by the references.

In my opinion, the primary reason people leave an organization is that value tends to decrease with time. Erik Dietrich captures this well in a quote that discusses the value apex. (The source for this quote is Up or Out: Solving the IT Turnover Crisis.)

The value apex is a function of the ability to generate new ideas and the perception of others about those ideas. In simple terms, if your interest wanes, good ideas dry up. If you get pigeonholed, no one will listen anyway. It is a question of managing diminishing returns.

The interesting question arises following the realization that you've joined an organization where value convergence is the norm. That organization is dead. It just doesn't know it yet. The obvious symptom of value convergence is an organization where nothing is written down. People seek to create the perception of value by putting themselves in positions of power by virtue of the knowledge they hoard.

My take on the meritocracy inversion is that IT organizations should be flat. This avoids seniority-based hierarchies and permits the creation of a system of remuneration based upon merit and the solutions produced. A flat organization flushes out the senior loafers and the capable junior people.

Saturday, March 25, 2017

Practical Application of DRY

In The Wrong Abstraction, Sandi Metz says the cost of code duplication is less than the cost of using the wrong abstraction. I had to watch her RailsConf 2014 talk to fully understand her point. She discusses the cost of duplication around the 13:58 mark.

I watched the entire talk.  The ideas are presented so well that it's simply brilliant.

The main take-away with respect to the cost of code duplication versus the wrong abstraction is that it's also an argument for delaying the application of DRY until you fully understand the algorithm.

What makes her presentation compelling is the insight she provides on knowing when to apply DRY. It's the first time I've seen someone provide a practical example of why delaying the application of this principle is important.

In addition to the insight provided on applying DRY, the talk contains a great deal of practical advice.

The presentation provides a useful tool for explaining the evolution of code through the ebb and flow of complexity during development. It is directed at individuals attempting a refactor; The Wrong Abstraction adds the notion of a team working on the code.

I plan on using the presentation as a vehicle to help my team understand the challenges they are facing in their own struggles with complexity. Having the tools to discuss this challenge goes a long way towards understanding and finding solutions to it.

The Wrong Abstraction introduces the notion of the sunk cost associated with using the wrong abstraction and argues that developers need to recognize this and accept that sometimes the best way out of the challenge is to take a step back by reintroducing duplication. It is the step back which provides the opportunity to revisit the abstraction and improve it.

Sunday, March 19, 2017

It's The Foundation That Matters

I have to agree with Santiago L. Valdarrama and the points he makes in The unremarkable career of (some) modern software developers. Continuous learning is important to your career. Complacency is a career killer.

I've resigned from two positions during my career because those organizations got in the way of my learning or didn't provide the opportunity to apply it. Staying in these organizations comes at great personal cost. It's a cost that can sneak up on you if you aren't vigilant.

In addition to building a solid foundation build a broad network to get better insight on the issues you face.  I discuss some of the pitfalls of social proof and social loafing in Stop Being a Cave Dweller. These pitfalls are an issue if you think Google and Stack Overflow are your friends. (They certainly help but you need to think as well.)

The response on Reddit to Santiago's blog post is interesting. In my view, Santiago's point is: understand the theory and stay sharp. The fact of the matter is that technology changes. You need something to carry with you as you move through your career. That something is probably a solid foundation in the fundamentals that can carry you for the long haul.

Friday, February 24, 2017

Engineering Logs for the Product Backlog

I'm experimenting with how to better control and pay down technical debt. The objective of this experiment is to determine if we can align the payment of technical debt with asks from business stakeholders. 

Martin Fowler says this about technical debt:
The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments.
The tricky thing about technical debt, of course, is that unlike money it's impossible to measure effectively. The interest payments hurt a team's productivity, but since we CannotMeasureProductivity, we can't really see the true effect of our technical debt.
The balance between the quick and dirty solution and the deadline is critical here. 

I include in quick and dirty those outcomes which aren't as successful as we had hoped--those situations where, once you complete the implementation, you realize how to create a simpler one but are simply out of time.

To improve balance, I use an engineering log to identify potential improvements and align those improvements with requests by the business.

An engineering log is a list of things we need to do to improve the product but which are not going to add to the value the business can extract from the market--refactoring code falls into this category. Importantly, my engineering log is not in my Product Backlog.

The basic idea is that a refactor usually needs to precede the introduction of new functionality. This is likely true if you are working on a legacy code base. In my view, this refactor is limited to the work needed to make the introduction of the new feature easy.

I use the engineering log to identify technical debt and include the payment of that technical debt in stories motivated by the business. 

This has the advantage of aligning the repayment of technical debt with a business objective, so code that's "good enough" isn't refactored until there is a new business requirement for that code. It has the disadvantage of adding cost to the business activity.

The disadvantage does have a business impact. I don't have a solution for that other than to write the code right the first time.

Saturday, February 18, 2017

The Accidental Creative (Stimulating Creativity)

Several years ago I read The Accidental Creative. I shared my thoughts on the book in The Accidental Creative (Book Review). I've also read Die Empty. Both are recommended reads.

The Accidental Creative includes a framework to ensure you remain creative over the long term. One part of this framework is to ensure that you "Curate stimuli that help you pursue creative possibilities." I've found that an effective way for me to remain creative is to review writing that I've enjoyed either because it's entertaining or because it's thought provoking.

Recently, I've begun to ensure that I select one piece of writing to read each week. This is always something that I've read in the past and I take the opportunity to revisit part of it over the course of a weekend. The result of randomly selecting a piece of writing that has influenced me in some way is profound. I find that I reconnect with some of the ideas that have inspired me and that I can make new connections on how to solve current problems.

Food for thought.

Thursday, January 26, 2017

RBTLIB's Whole-Part Hierarchy

In RBTLIB - A Client-Side Library for Review Board I introduced an implementation of a client-side library for Review Board. In that post, I described the next steps for the library including the introduction of the composite pattern to manage the whole-part hierarchy defining a resource.

The implementation includes the composite pattern implemented using named tuples. I chose named tuples because I wanted to avoid an implementation that makes explicit use of meta-classes (named tuples use meta-classes).

I wanted to avoid meta-classes because of my goal of a simple implementation. My notion of simple embodies the idea that the code be straightforward to read and understand. I find decorators to be a simpler alternative to meta-classes. The "hard part" of using decorators is that the implementation requires decorator chains.

I introduced a JSON attribute to each top-level named tuple. The resulting implementation includes two copies of the resource definition: the JSON attribute containing a copy of the Review Board response and a whole-part hierarchy for each resource component.
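
A minimal sketch of the idea (not RBTLIB's actual code): build a named tuple for each nested JSON object and keep the raw response alongside the hierarchy:

from collections import namedtuple

def component(name, value):
    # Nested JSON objects become named tuples; everything else is a leaf.
    # (Assumes the keys are valid Python identifiers.)
    if not isinstance(value, dict):
        return value
    Component = namedtuple(name, sorted(value.keys()))
    return Component(**{k: component(k, v) for k, v in value.items()})

def resource(name, response):
    # Keep two copies of the resource: the raw JSON response and the
    # whole-part hierarchy built from it.
    Resource = namedtuple(name, ['json', 'root'])
    return Resource(json=response, root=component(name + '_root', response))

Something like resource('review_requests', response) then offers both the raw JSON and attribute access into the hierarchy.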

The introduction of the JSON attribute is an experiment. I am still not sure what I want from my client application. The applications defined in the scripts still return the JSON structure provided by Review Board. My current rationale for doing this is that I like the notion of using these scripts to support the plumbing and porcelain notion used in git.

For example, an implementation of RB Tools using RBTLIB might rely upon the scripts instead of the RBTLIB API directly. The plumbing and porcelain notion in git works very well, and I think separating the two might create the opportunity to easily extend RBTLIB or the RB Tools clone.

RBTLIB supports only retrieval of the Root and Review Request resources via HTTP GET. A good test of the design needs to include support for the remaining resources and support for HTTP POST. HTTP POST may take the design in an entirely new direction so there may be compelling reasons to support HTTP POST before introducing the remaining resources.

Friday, January 20, 2017

Self-Organizing Teams for the Rest of Us (Another Look)

In Self-Organizing Teams for the Rest of Us, I shared Bertrand Meyer's position on self-organizing teams. Self-organizing teams choose how best to accomplish their work, rather than being directed by others outside the team--highly accomplished self-organizing teams may not require a manager. Self-organizing teams are self-managed or self-designing [1].

In TSP: Leading a Development Team, Watts S. Humphrey provides a look at self-directed teams.

Is there a difference between self-directed and self-organized teams? If there is, it's that self-directed teams have leaders with a set of responsibilities that are broader than the team's responsibilities.