Saturday, May 13, 2017

How to keep test suite and project in sync while promoting team collaboration

In software development there is a typical problem: how to maintain a relevant version of the test suite for each version of the project?

The goal is that for every version of the project we have a test suite version that provides accurate feature coverage.

Let’s imagine a simple model where version 2.0.0 of your project is released to the public and version 2.1.0 is currently under active development. The test suite lives in its own separate repo.

Let's say you have 50 test cases covering all the functionality of project version 2.0.0, and the test suite version is 15.2.0. Seems like the test team is active.

For version 2.1.0 of the project the test team added 5 new test cases and versioned the test suite as 16.0.0.
Now let’s imagine customers report a critical bug that the development team quickly fixes in version 2.0.1. The test team has to update the test suite as well.

Updates were made and version 15.2.1 of the test suite was released.
In other words, version 15.2.1 of the test suite provides the appropriate feature coverage for version 2.0.1 of the project.

Over time, as the project evolves, it becomes difficult to tell which version of the test suite should be used for a specific version of the project. And here we've described quite a simple example.

Reality looks much different. It's very common in large enterprises to see multiple releases being developed simultaneously, while the project relies on components developed by separate teams.

Let’s take hybrid mobile app as an example.
The test matrix would include the mobile app version, web app version, mobile platform and device model, and environment setup (which is a beast by itself).
The mobile app changes, the web app changes. Some users are on iOS devices and some are in love with Android. Screen sizes vary. The environment can be loaded with various versions of the database and different legal and compliance materials.
You’re getting the picture.

On my previous project the test team used a branching strategy to assist with this issue.
There were three environments: QA, STAGE, PROD.
There were four branches in test suite repo: master, qa, stage, prod.
Current development for new tests and any refactoring was done on the master branch.
The other three branches were designed to carry the test suite version appropriate for the project version currently deployed to each specific environment.

As a build passed validation in each environment, the test code would be merged to the next higher environment's branch.

This branching strategy took care of some of the axes in the test matrix described earlier. Web app, DB, and legal and compliance versioning were covered. Mobile platform, OS version, and device permutations were handled in the code by introducing if-else/case statements that pointed to the correct block of code.
Not an ideal solution.

On our current project we've decided to merge the test suite and the project together. The codebase for both lives in the same repo and shares the same version. If any changes are made to the project, the version is bumped for both.

The project is set up as a Maven project.
In order to accomplish the model described above, an extra <module> entry was added to the root pom.xml to achieve aggregation. This means that when a Maven goal is invoked at the root level, it is run against each individual module. Each module's pom.xml also has a <parent> section referencing the root pom.xml as its parent POM, so settings like <groupId>, <version>, <distributionManagement> and others are shared by the root POM with the modules.

Root pom.xml
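A minimal sketch of what it could look like (the group ID, artifact ID and module names here are hypothetical placeholders):

```xml
<!-- Sketch only: names are hypothetical -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>myproject</artifactId>
  <version>2.1.0</version>
  <packaging>pom</packaging>

  <!-- Aggregation: goals invoked at the root level run against each module -->
  <modules>
    <module>app</module>
    <module>tests</module>
  </modules>
</project>
```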

The tests module's pom.xml would look like this
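A minimal sketch, again with hypothetical names; the <parent> reference is what shares the group ID, version and distribution settings with the module:

```xml
<!-- Sketch only: names are hypothetical -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Inherit groupId, version, distributionManagement, etc. from the root POM -->
  <parent>
    <groupId>com.example</groupId>
    <artifactId>myproject</artifactId>
    <version>2.1.0</version>
  </parent>

  <artifactId>tests</artifactId>
</project>
```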

In our case we wanted to accomplish two goals whenever changes are made to the project codebase:
  1. a global version update
  2. simultaneous artifact creation and deployment for both the project and the test suite
In order to update the version of the root POM and all modules specified in it, we use the mvn versions:set -DnewVersion=<new_version_here> goal (from the versions-maven-plugin) at the root level.

One of the things our team did to transform the delivery process at the client location was to implement a software delivery pipeline.
The current setup consists of Jenkins (an open-source continuous integration tool), Artifactory (an artifact repository) and UrbanCode Deploy (an application deployment tool).
On Jenkins we have build, deploy, smoke test and regression test jobs. These jobs are chained and trigger one another in order, from left to right.
The build job pulls the project repo from Bitbucket and invokes mvn clean deploy at the root level. This not only builds the project but also runs the unit tests and deploys artifacts to Artifactory. After that it runs the registerVersionWithUCD.groovy script, which instructs UrbanCode Deploy to create a version under the respective component and store the URLs of the artifacts deployed to Artifactory.

For our use case we wanted the smoke test job's workspace to be populated only with test-suite-related files.
To accomplish this we took these actions:
  • added the assembly plugin to the <build> section of the test suite pom.xml to zip the whole test suite sub-project structure
  • updated the registerVersionWithUCD.groovy script to register the version with the test suite artifact URL
  • created a getTestSuite.groovy script that pulls the test suite zip archive from Artifactory using the URL stored in UCD and unzips it into the current directory (the job's workspace)
At that point the smoke test job simply needs to invoke the mvn test goal to run the smoke tests.
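For reference, the assembly step from the first bullet could be configured roughly like this in the test suite pom.xml (a sketch using the standard maven-assembly-plugin; the descriptor path is a hypothetical placeholder):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <executions>
        <execution>
          <id>zip-test-suite</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <!-- hypothetical descriptor that packs the whole sub-project as a zip -->
            <descriptors>
              <descriptor>src/main/assembly/test-suite-zip.xml</descriptor>
            </descriptors>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```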

What have we achieved with this setup?
Now it's very easy to tell which version of the test suite to use for which version of the project. As long as developers follow the protocol and bump the version using mvn versions:set, it propagates to the test suite pom.xml and stays identical to the version in the root pom.xml. And there are a number of automated ways to help enforce this as well.

That is good by itself, but the benefits of merging the project and the test suite into one repo did not stop there.
This model encouraged developers and testers to truly come together and start collaborating.
Both developers and testers, without any additional incentive, started reviewing each other's pull requests and familiarizing themselves with the codebase out of pure curiosity.
All of a sudden new features were implemented with tests in mind, and testers were promoting quality right from the pull request phase, catching small and big problems early. During sprint planning meetings developers started asking questions like: "What do you need from me to allow you to create tests for this?".
Furthermore, members of these teams started to hang out together and feel happier at the workplace.

Now that is a true DevOps Transformation.

Sunday, December 11, 2016

Control iTunes from command line

It is possible to control iTunes from command line. In order to do so you need to install itunes-remote.

Run npm install --global itunes-remote
Now you can send commands to iTunes.

The first command needs to start with itunes-remote.

This starts an itunes-remote session, and all following commands no longer have to start with itunes-remote.

Available commands are:
  • help
  • exit
  • play
  • pause
  • next
  • previous
  • back
  • search
So you can run commands like this:
itunes-remote search moby

It's important to note that you can only work with tracks that are part of your Library.

Command exit did not work for me.
When you search for a title that consists of multiple words, you should use single quotes, like so:
itunes-remote search 'twisted transistor'

If you really want to you can start using this functionality inside of your scripts.

For example I added these lines to my pre-push git hook:

puts "Pushing... pushing real good"
system("itunes-remote play")
sleep 3
system("itunes-remote search 'salt-n-pepa'")
sleep 5
system("kill $(ps -A | grep /Applications/ | awk 'NR==1 {print $1}')")

What happens here is that this script is executed whenever I push to my repo. It opens iTunes, waits 3 seconds (this is necessary for the process to get ready for commands) and starts playing the song Push It by Salt-N-Pepa. After 5 more seconds the script kills the iTunes process.

Enjoy pushing!

Sunday, November 27, 2016

Install any App Store software from command line using mas-cli

   Recently I had a need to create easy onboarding setup.
   Whenever a new member joins the team I want them to be up and running with a local dev environment in the shortest amount of time. I want the process to be reliable, repeatable and involve as little manual interaction as possible. Wouldn't it be nice to just say: "Hey, run this script. And you're good!"
   As a part of this effort there is a need to be able to programmatically install apps that are only distributed via App Store and are not available via Homebrew.
   Meet mas-cli utility.
   It allows you to search, install and update App Store distributed software.
   Installing mas-cli on your machine is as easy as running brew install mas in a Terminal window.
The way mas-cli works is that you first need to know the ID of the app you're interested in. You can find it by running mas search <name_here>. This returns a list of available apps that have anything to do with the name you provided. Find the one you're most interested in and make a note of the ID displayed next to the program name. Now you can run mas install <id_here>. And that's it. The program will be installed without you ever needing to interact with the App Store directly.
   That's cool. But what we're really interested in here is coming up with a way to put the software installation routine into the onboarding script.
   Here is an example of how to do that if you, let's say, want to install Xcode:
mas install $(mas search xcode | grep 'Xcode' | head -1 | awk '{print $1}')
Here we first search the registry for xcode, select the result that matches 'Xcode', take only the first column - that would be the ID - and pass it to the mas install command.
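To see what the pipeline itself is doing, here is the same text processing run against a made-up sample of mas search output (the IDs and version numbers are hypothetical):

```shell
# Hypothetical sample of `mas search xcode` output
sample='497799835 Xcode (8.1)
1234567890 Xcode Helper (1.0)'

# Keep the lines mentioning Xcode, take the first match, print the ID column
echo "$sample" | grep 'Xcode' | head -1 | awk '{print $1}'
# → 497799835
```

That extracted ID is exactly what gets fed to mas install.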

  So all that's left is to add a line like this to your onboarding script. Just make sure you have the mas utility installed on the machine prior to invoking such a routine.

  Mas-cli GitHub page

Sunday, October 30, 2016

Why Homebrew is awesome

   Homebrew is a package manager for macOS. Similar to yum on CentOS and apt-get on Ubuntu, you can search repositories for specific software and install it on your machine using brew install package_name.
   Now, this post is about a cool feature of Homebrew I didn't know about before. You can install full-fledged apps that are typically distributed in the form of .dmg files right from the Terminal using Homebrew.
   For example, you can install heavyweight apps like IntelliJ IDEA and RubyMine by JetBrains, Atom, the Chrome browser, etc., without ever leaving the Terminal or opening the browser to look for installers.
   The only thing you need to do is specify the cask option in the brew command.
For example, if we want to install IntelliJ IDEA, just run brew cask install Caskroom/cask/intellij-idea
   Isn't that cool! No more looking and googling around for an app's distribution and how to install it.
   Installing Java is a good example. Instead of going to the Oracle website and figuring out which link to press and which type of package you need for your specific case, with Homebrew you just type brew search java and, once you've settled on the package you like, you go brew cask install Caskroom/cask/java... without ever leaving the Terminal window.
   Now, how do you know when you need to use cask option?
   Let's talk about brew search some_string. This command is used to search for available packages that have some_string in them. If the package you'd like to install has cask in it, you need to use the cask option.

  So let's go over an example - I want to install Chrome. Here's how it would look:
  1. search for available packages by running brew search chrome

     seems like we want to use the package Caskroom/cask/google-chrome
  2. install the package by running brew cask install Caskroom/cask/google-chrome
That's it! Chrome should be installed and available in your Applications folder.