Saturday, October 20, 2018

Vagrant Boxes on CentOS

I've been working with Vagrant for some time now. One effective practice I've developed is constructing Vagrant boxes for specific applications. A couple of examples: Review Board and CPython.

I use the CPython Vagrant box to build and examine the CPython source code. I've found the ability to automate both the construction of the box and its configuration for building CPython handy.
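As a sketch of the day-to-day workflow (the directory name here is illustrative, not the actual layout of my repository):

```shell
cd cpython-box       # hypothetical directory holding the box's Vagrantfile
vagrant up           # create and provision the virtual machine
vagrant ssh          # log in to configure, build, and examine CPython
vagrant destroy -f   # discard the environment when finished
```

Because the provisioning is automated, vagrant destroy followed by vagrant up reproduces the same build environment from scratch.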

No rocket science here, just a pragmatic approach to constructing environments for specific purposes.

My Vagrant boxes are available on GitHub.

Wednesday, July 25, 2018

Emacs Org-Mode Doesn't Generate Images

I've recently installed Org-Mode in my Emacs editor and ran into a problem. I wanted to generate an equation and view it using the LaTeX preview. No luck.

Initially, what I was presented with in Org-Mode was an empty box where the equation should have been. My *Messages* buffer showed the following.

Wrote /Users/brian/Desktop/Notebooks/temp/
Creating images for section...
Failed to create dvi file from /var/folders/0l/75r2j43x6f1_5k3l9st73j480000gq/T/orgtex19727pJV.tex
Creating images for section...done
Unable to load image (image :type png :file /Users/brian/Desktop/Notebooks/temp/ltxpng/foo_057ff709818f1d42b80c9f41f8ebc70d5d4bb806.png :ascent center) [16 times]
Cool. The DVI file failed to be created. But why?

Manually building the above LaTeX file yielded another hint:
! LaTeX Error: File `ulem.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Cool. A missing package.

I installed texlive using MacPorts. Problem solved.
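For reference, the MacPorts install is a one-liner; the texlive port pulled in the standard LaTeX packages, including the missing ulem.sty:

```shell
# Install TeX Live through MacPorts
sudo port install texlive
```

After the install, regenerating the preview in Org-Mode produced the equation image.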

Tuesday, June 26, 2018

Experiments with Packer and Vagrant (Fini)

I spent time exploring HashiCorp's Packer and Vagrant tools. My objective for this exploration was to understand how Packer and Vagrant could help me develop and maintain my infrastructure. I like both tools. I like them a lot.

The power of Packer is that it turns infrastructure into code. You can configure virtual machines using Packer with a small collection of scripts. The advantage Packer introduces is the ability to use source control to manage the configuration, which permits updating the virtual machine by modifying these scripts.

The power of Vagrant is that it enables deployment of the virtual machine. Its genius is that you can use Vagrant to deploy clusters of virtual machines. My use case is the deployment of continuous integration servers, but I have other use cases wherein web servers and application servers can be created with the Packer and Vagrant combination and then deployed into test and production environments.

The main contribution my exploration makes is that I introduce my own SSH key pairs into my Vagrant Boxes, and I took steps to update the Kickstart Configuration and Preseed files with encrypted root passwords. I also locked out the vagrant user account so that access to the virtual machine can only occur over SSH using my key pair.

I developed a collection of makefiles to coordinate provisioning the Vagrant Boxes. I don't actually like the target structure used by these makefiles. In hindsight, they would be more useful if the target names reflected the purpose of the Vagrant Box (e.g., web-server instead of debian-jessie).

I use a script to generate my Kickstart and Preseed files. It may be useful as an example.

I don't like the way my Packer template provisioning scripts are structured. I initially thought that separating scripts by service (e.g., nfs instead of networking) was the right approach. Ultimately, I think a better structure for provisioning scripts is one closer to the purpose of the Vagrant Box.

For example, to build developer and production environments I want makefile targets like:
  • base_box (provision to enable vagrant user)
  • developer_box (provision to enable vagrant user and developer tool chain)
  • production_box (provision to enable vagrant user and no developer tool chain)

and scripts that provision these boxes. In this example, creating a developer box should rely on the base box script and the developer box script. This enables a minimal approach to creating additional boxes.
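A sketch of how those targets might look; the template names and provisioning scripts are illustrative, not taken from my repository:

```make
# Each template lists the provisioning scripts appropriate to its purpose.
base_box:
	packer build base.json        # runs base.sh (enable vagrant user)

developer_box:
	packer build developer.json   # runs base.sh then developer.sh

production_box:
	packer build production.json  # runs base.sh then production.sh
```

Layering the scripts rather than duplicating them keeps each new box definition minimal.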

My production box need never be provisioned to include a developer tool chain, which ensures that only production services and applications flow into the production environment.

The source: experiments with Vagrant

Monday, May 28, 2018

Experiments with Packer and Vagrant on CentOS

In Experiments with Packer and Vagrant on Debian I discussed my experience with Pierre Mavro's packer-debian project. Here I discuss my experience with Packer and Vagrant on CentOS.

My exploration of CentOS relies on work done by Gavin Burris. I extended Gavin's work to include CentOS 7.2. Using Gavin's example, I was able to bring up a Vagrant box using VirtualBox on CentOS 7.2-1511 in a matter of minutes.

One problem I encountered in my test environment took a while to solve. I executed:

        > packer version
        > Packer v0.10.1
        > packer build centos7.json

dd returns a non-zero return code in these commands from the provisioning script:

        > sudo dd if=/dev/zero of=/boot/zero bs=1M
        > sudo rm -f /boot/zero
        > sudo dd if=/dev/zero of=/zero bs=1M
        > sudo rm -f /zero

Packer reports the following errors:

        > virtualbox-iso: dd: error writing ‘/boot/zero’: No space left on device
        > virtualbox-iso: 397+0 records invirtualbox-iso: 396+0 records out
        > virtualbox-iso: 415494144 bytes (415 MB) copied, 1.05651 s, 393 MB/s
        > ==> virtualbox-iso: Unregistering and deleting virtual machine...
        > ==> virtualbox-iso: Deleting output directory...
        > Build 'virtualbox-iso' errored: Script exited with non-zero exit status: 1

Ouch. Packer reasonably deletes what it believes to be a broken VirtualBox image.

To correct this, replace the dd commands above with the following:

        > sudo dd if=/dev/zero of=/boot/zero bs=1M || sudo rm -f /boot/zero
        > sudo dd if=/dev/zero of=/zero bs=1M || sudo rm -f /zero

Problem solved!

The problem is that dd is meant to fill the device (zeroing the free space so the image compresses well), which means it must eventually hit "No space left on device" and exit with a non-zero return code.
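The || fix works because the shell runs the right-hand command only when the left-hand one fails, and the compound command's exit status is then taken from the right-hand side. In miniature, with false standing in for the disk-filling dd:

```shell
# 'false' fails the way dd does when the device is full;
# '||' absorbs the failure by running the cleanup command
false || echo "cleaned up"   # prints: cleaned up
echo $?                      # prints: 0
```
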

While interesting, I ended up removing the dd commands altogether. My modifications to centos7.json.

Sunday, April 29, 2018

The Temporary Scrum Master

I'm curious how rotating the Scrum Master role through the Development Team works out for the Development Team and the Organization as a whole. I'm not sure that rotating the Scrum Master role is healthy. Selecting one member of the Development Team to become the permanent Scrum Master seems the better choice.

The Scrum Guide permits the Product Owner and Scrum Master to execute work in the Sprint Backlog. I take this to mean both roles can be carried out by someone on the Development Team.

A review of the servant-leadership philosophy applied to the Scrum Master role provides insight on the challenges:
  • service to the Product Owner: the support the Scrum Master provides the Product Owner is not focused solely upon the domain. It includes Product Backlog management.
  • service to the Development Team: this focuses on the organization and the Scrum Team. It includes building bridges to other parts of the organization. It includes coaching the Development Team on self-organization and cross-functionality.
  • service to the Organization: this includes helping the organization leverage Scrum better.
When I hear about the Scrum Master role being fulfilled by the Development Team, it usually includes a concession to ensure the Scrum Master isn't taking on that role permanently. The motivation behind this concession is interesting.

Rotation implies that the organization isn't fully vested in Scrum. Further:
  • it implies the Scrum Master role is less valuable than the "other" role the Scrum Master has.
  • it implies that Scrum Master isn't a good career choice for domain experts.
  • it subjects the team to different Scrum Masters, each with their own set of values and approaches.
Different values and approaches aren't bad. They are opportunities for learning. But they may cause confusion if you are just rolling out your Scrum implementation.

In all, rotating the Scrum Master doesn't sit well with me. The Scrum Master seems better suited as a permanent role. Even if I assume the Developer turned Scrum Master is a domain expert there are significant trade-offs involved in this approach, especially if you view Scrum as an important initiative that can benefit the entire organization.

Saturday, March 31, 2018

RBTLIB v0.3.0 On Travis CI

In RBTLIB v0.3.0 On Read The Docs, I discussed adding support for Read the Docs to RBTLIB. Recently, I added RBTLIB to Travis CI. Travis CI is super easy to work with. It provided the opportunity to eliminate deployment issues. This is important, as my ultimate goal for RBTLIB is availability through PyPI.

The main advantage Travis CI provides is the ability to test on different platforms and to eliminate portability issues. I lack experience with Python's setup tools, so there are likely to be issues as I move RBTLIB to PyPI.

All in all, v0.3.0 has significant infrastructure improvements over v0.2. Functionally, v0.3.0 targets posting of review requests through rbt.

Friday, March 2, 2018

RBTLIB v0.3.0 On Read The Docs

In RBTLIB v0.3 Update (Part 2), I discussed introducing complexity measures to RBTLIB using radon and xenon. Recently, I've introduced Sphinx and taken advantage of Read the Docs.

Sphinx is a documentation generator for Python and other languages.

Read the Docs lets you create, host and search project documentation.

The combination of the two, coupled with GitHub, creates a publishing environment that allows me to update my project documentation, push it to GitHub, and have the documentation published on Read the Docs within minutes. Simple.
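The loop is short enough to sketch; the docs directory and branch name are assumptions about the project layout:

```shell
make -C docs html                    # build with Sphinx locally to catch errors
git add docs/ && git commit -m "Update documentation"
git push origin master               # the Read the Docs webhook rebuilds on push
```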

Part of the move to Read the Docs included a cleanup of the project's naming. I moved away from rbt to rbtlib to avoid confusion between my work and RBTools, which provides a command-line tool called rbt.

It's not my intent to diminish the work that people are doing on Review Board and RBTools by causing confusion. I still don't know if my project will be successful. It is my hope that it may be useful to the Review Board team but I haven't engaged anyone there.

I learned through Kenneth Reitz's Requests module that a best practice exists for API versioning: Semantic Versioning. It seems sensible to adopt, so I've moved from v0.3 to v0.3.0. Same release.

Semantic Versioning also helpfully includes advice on versioning projects in an alpha and beta stage: once I achieve my goals for v0.3.0 I'll be targeting v0.4.0.
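One detail of Semantic Versioning worth noting: components compare numerically, not as strings, so v0.10.0 follows v0.3.0. GNU sort's -V option implements this ordering and makes a quick check easy:

```shell
# -V compares version strings component by component
printf '0.10.0\n0.3.0\n0.2.0\n' | sort -V
# prints: 0.2.0, 0.3.0, 0.10.0 (one per line)
```
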

I'd been using virtualenv to develop RBTLIB and have now incorporated virtualenvwrapper. A very nice set of tools.
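A typical virtualenvwrapper session, for anyone new to the pair (the environment name is illustrative):

```shell
mkvirtualenv rbtlib   # create and activate an isolated environment
pip install -e .      # install the project into it in development mode
deactivate            # leave the environment
workon rbtlib         # return to it later
```

virtualenvwrapper keeps the environments in one place (~/.virtualenvs by default) instead of scattered through project directories.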

RBTLIB documentation:

Thursday, February 1, 2018

Sunk Cost, Code and Emotional Investment

In a Practical Application of DRY, I discussed sunk costs as part of Sandi Metz's discussion on the Wrong Abstraction. In my work on RBTLIB v0.3.0 I encountered another element of sunk cost: emotional attachment to your implementation.

I put in considerable effort between RBTLIB v0.2 and v0.3.0. This effort included at least two rewrites of the core algorithms for traversing the resource tree returned by Review Board. In my case, the core approach of using the Composite Pattern and Named Tuples didn't change. Their use did.

The issue was primarily due to grey areas in my knowledge of Python, the constraints I placed upon my implementation -- avoiding metaclasses -- and my inexperience with using Python's __call__ method effectively. (OK, I didn't know __call__() existed when I started my implementation.)

Frankly, the situation drove me to new levels of frustration. Each time my frustration peaked I had to step back, build up the stamina for another rewrite, and push through.

Interestingly, I thought I was disciplined. My emotions kept telling me my broken implementation would be OK if I just spent more time on it. Rationally, I could tell that I was stuck. Steeling myself to rewrite took significant effort.

Each time, I created an experimental branch with the idea of exploring what was wrong with the implementation. Every time I did that I had a breakthrough. The two experimental branches have been merged to master and the implementation is better for it.

I'm currently on my third rewrite of RBTLIB v0.3.0. I am more confident that this implementation will work, but I'm procrastinating because I am still unhappy with some aspects of it.

Wednesday, January 3, 2018

Working Agreements for Agile Teams (Part 5)

In Working Agreements for Agile Teams (Part 4), I discussed one side-effect of using working agreements as principles for individual decision making. I view those examples as growing pains--an adjustment people make when the nature of team engagement changes. Those discussions are healthy for a team because they reinforce a new way of working together.

A recent example of learning to work together arose during a discussion on the interaction required by our working agreement on design reviews. This agreement focuses on a successful outcome--when the design is complete we are well positioned to complete the review.  It requires the involvement of a designer and two design reviewers:
We agree to document our design and review the design with at least two people prior to implementation.
This agreement positions the team to avoid situations where only one person understands the design.  It's simplistic. If you dwell on it you may conclude it's heavy handed. Taken literally, this working agreement requires every design review to involve three people.

My notion of design includes adding a method to a class. It also acknowledges this design might warrant a single line of text in a comment for the method. It's natural to ask why anyone would want this overhead for simple cases.

One team member made an argument against this approach:
  • The working agreement promoted inefficiency because it required too many people to engage.
  • The working agreement permitted passive engagement--they asked someone to be a reviewer and that person indicated interest but did not actively engage.
  • We need time to learn (or prototype) so there is something of substance to review.
  • A difference of opinion on when to start applying the working agreement.
My counter arguments were:
  • I am happy if the conversation on how to approach the design occurs and all three people actively engage in the decision.
  • Passivity is a form of passive aggressiveness that I won't tolerate--engage or choose not to engage but make a decision.
  • Absolutely, take the time to learn but ensure that the interaction of all three people acknowledges and understands the objective and intended outcome of this learning.
  • Start the interaction at the same time we start working on the story.

Ironically, we disagreed only on the starting point and the passivity. Everything else this team member said made sense to me.

So the working agreement failed to help us understand the importance of the interaction required to make the design review a success. It failed to balance the need for the author to learn and for the reviewers to understand. And it failed to address the notion that too much investment up front might commit us to a poor course of action. Or did it?

Clearly, the working agreement addresses none of the above explicitly. Clearly, different perspectives resulted in different approaches. Importantly, these culminated in a profound outcome for the team.

I encouraged the team member to raise the differences of opinion in our Lean Coffee. They did, and we discussed the issues with the team.

To the team's credit, they took both perspectives in stride and we agreed to enhance our understanding of the working agreement. We also agreed not to modify the working agreement to include this understanding.

Interactions over process triumph again! Furthermore, the team adopted several Agile principles in doing so. We all won.