Thursday, February 8, 2018

SonarQube as a high visibility tool

The Need for Real-time Visibility

During assessments and value stream mapping exercises, we often encounter leadership that lacks visibility into how their teams are performing. There is typically a formal process established to combat this issue, such as weekly or biweekly status reports from either scrum masters or individual team members. Teams often see these processes as another burden on their shoulders; they treat them as formalities instead of worthwhile necessities.

Unfortunately, these reports often prove to be brief, highly subjective, and lacking any glimpse into how teams are actually performing. The weeks between reports create very long feedback loops that increase the chance of failing to deliver high-quality software on time. When we discover a formal process in place for communicating updates to leadership, it's almost always an indication that the organization is heavily siloed.

This isn't to say team leaders don't care; most genuinely care about their project's success. However, they often can't allocate time or money to examine their communication processes, or they may not know an issue exists in the first place. I like to say, "You don't know what you don't know."
Wouldn't it be great if a team leader could just open a browser and navigate to a dashboard of real-time metrics they care about? At Liatrio, we strongly believe that information should be easily accessible at the click of a button.

Managing Visibility with SonarQube

SonarQube, an open-source tool that supports all major languages, is one of our recommended tools for providing project health visibility. It provides extensive code quality analysis, shows code coverage by unit tests, and displays integration test reports. It can even track and estimate technical debt.
One of my favorite features is the ability to track various aspects of a project's quality as they evolve over time. This provides insight into whether any new bugs were introduced in the latest build, whether unit test coverage dropped, or whether complexity increased. These insights are delivered in a descriptive yet straightforward way.

Quality Profiles are used to define requirements and specify sets of rules. These can be created for each language, as well.

Below are some examples of rules that one might turn on for any given Quality Profile.

While performing code quality analysis, SonarQube classifies the problems it identifies into three types: Bugs, Vulnerabilities, and Code Smells. Issues are further broken down by severity.
Another key SonarQube feature is Quality Gates. These are quality thresholds that clearly indicate whether the software is "good to go" or not yet acceptable.

SonarQube code analysis makes a great addition to any software delivery pipeline. Furthermore, it becomes even more worthwhile when integrated with other tools. You or a DevOps engineer on your team could set up SonarQube so that a Jenkins job is marked as failed if the latest build does not pass the Quality Gates; the Build Breaker Plugin is great for accomplishing this.
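As a sketch of what that integration could look like, here is a declarative Jenkins pipeline using the webhook-based waitForQualityGate step from the SonarQube Scanner for Jenkins plugin as an alternative to the Build Breaker Plugin; the server name and Maven invocation are assumptions, not part of any particular setup:

```groovy
pipeline {
    agent any
    stages {
        stage('SonarQube analysis') {
            steps {
                // 'MySonarServer' must match the name configured under
                // Manage Jenkins > Configure System > SonarQube servers
                withSonarQubeEnv('MySonarServer') {
                    sh 'mvn clean verify sonar:sonar'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                // Requires a webhook on the SonarQube server pointing back
                // at Jenkins; marks the build failed if the gate is red
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
}
```

With this in place, a red Quality Gate stops the pipeline instead of quietly letting a degraded build flow downstream.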

DIY Dashboards

Custom dashboards are a necessity for fully utilizing the benefits of SonarQube. In essence, SonarQube accomplishes code analysis and compiles metrics. As a user, you can customize dashboards using widgets for all of those metrics.

It's up to you to create the perfect dashboard that makes the most sense for your team's project. Whatever combination of widgets you choose, the functionality is available.

Go the Extra Mile

Remove extra steps and needless processes, and get down to real-time data that conveys how teams are really doing. Limit manual work, and take advantage of the automation.

Written for

Monday, January 15, 2018

Culture is not pizza

Have you ever heard: “Oh yes! We have a great culture! We have great work-life balance! We do pizza Fridays.”
I bet you have, at just about every place you've interviewed. Am I right?
Can we really call that a culture, though? Is it really about having happy hour drinks and eating not-so-healthy food together?
Not to me. Let's look at a few things that actually drive a good team culture.

BDD (Behavior Driven Development)

Culture is when the whole team is involved in fleshing out feature requirements and gaining a common understanding of what is about to be delivered, eliminating the risk of rework at the end of the feature development cycle.
Culture is about developers planning their work in a way that empowers automated acceptance testing, directly asking an SDET: “Hey, what can I do to make your life easier? Any specific accessibility label naming requests?”

Pull Requests

Culture is when, whenever someone creates a pull request, they explicitly follow up on the team's messenger app, asking colleagues to spend real time reviewing it and encouraging the team to provide feedback or propose a better solution. It's not about your ego; it's about checking in the best possible solution for the job. It is much easier to find problems in 10 lines of code during review than in 10,000 lines after the code is merged and a bug is discovered.
And whenever comments are posted and updates are pushed to the branch, the author repeatedly asks the team to review the changes.

TDD (Test Driven Development)

Culture is using Test Driven Development: making sure a team develops only what is needed to make the tests pass and nothing else. No spaghetti code that directly translates into future time wasted on tech debt resolution and refactoring. And ironically, this is always a very low priority in a “company with a great culture”.

Brown Bag Sessions

Culture is when there is dedicated time for the whole team to gather and share knowledge about up-and-coming technologies, and for individuals to demonstrate work on POCs when it is applicable to the line of business. But it should not stop at the tooling level. Team members should also discuss better code development patterns, share discoveries about great online resources, and so on.


Whiteboarding

Culture is when team members discuss ideas and actively use a whiteboard to outline solution models. I'm sure you've noticed it yourself: writing about a concept or drawing a model on a piece of paper makes a tremendous difference compared to just thinking about it, both in depth of detail and clarity of visualization. You can take a foggy understanding and make it much more granular.


Retrospectives

Culture is when a team rigorously follows the habit of hosting retrospective meetings. It's when the entire team gathers in the same room and each individual member provides honest feedback about the previous development cycle, focusing on what went well and what did not. The key here is not to cover these as fast as possible and move on, but to come up with action items and a game plan, then check the status of those items at the next retrospective and course-correct if needed.


Conferences

Culture is when a company actively encourages employees to attend multiple conferences every year and provides a budget for it. The idea is to get a gut check of where the company stands compared to where the industry is going, facilitate peer networking, and lay the foundation for possible business collaboration with other companies.


So as we can see, culture is about process. It is about creating an environment where individuals are happy to come back to work because the processes make sense and are geared toward making day-to-day life easier and more fun. Fun because we like what we do, not because we found a way to forget about what we do. And if you don't like what you do... well, that's a topic for a different blog post.


Monday, November 6, 2017

Gatekeepers are watching you

No matter what we focus on, progress always looks like a staircase. Growth in experience, expansion of our connections, and improvement of our understanding can all be seen as a staircase. As we grow, we move up from one stair to the next.

In order to get to the next stair, we need to improve, and that takes time and effort. Sometimes it takes too long, possibly years. But there is an advantage to that too: the longer you work at the same level, the greater the foundation you build for sustainable growth, and the less risk there is of falling. By falling, in this case, I mean making a mistake that would lead to a loss of time, motivation, confidence, and experience.
This is the way growth works.
How do we get to the next level? By learning and practicing, learning and practicing. This makes us better and better.
At some point we cross the line between growth levels. A moment of clarity takes place: some event happens that makes it clear we've reached a new level. It could be measurable, but a lot of the time it is only felt: “I think I got this!”
And we jump to the next stair: a new level of understanding, a career promotion, or whatever you've been pursuing.
The growth model would be incomplete without talking about a special type of person: the Gatekeeper.
A gatekeeper is a person who possesses superior skills, connections, or knowledge. They can help you get to the next level by fast-forwarding part of the learning curve. They are the people who look down from the higher stairs to see who might need a little uplift to reach the next level.
In the corporate world, for example, gatekeepers are naturally the ones who occupy leadership roles: those who can influence our promotions, delegate more responsibilities to us, and connect us with important people. They are gatekeepers by default.
Why are we interested in gatekeepers?
Rather than walking the whole way and learning everything needed to get to the next level on your own, you might be pulled up by a gatekeeper. That can significantly shorten the learning curve and open opportunities you've never imagined before.
Now, why are they doing it? What are the motives?
In this world, people act only on motive.
To understand this, we need to talk about basic human needs.
At each stage of life we have a certain composition of basic needs: love and connection, certainty, uncertainty, growth, contribution, and significance.
We have all of these needs in us at all times; only the ratio between them changes depending on which stage of life we're in.
For example, in their 20s a person might look for significance and connection, while in their 50s they might look for more certainty and contribution, with a desire to give back.
So, going back to gatekeepers: the reason they try to help you is that they feel the need to give back to those who deserve it. For them, it's a way to fulfill their basic need for contribution, which is dominant at this period of their life.
They will only help those who really deserve it, because their time is precious and they want a sure return on their investment. They will only do it if they see that you can succeed and that you're almost there; you just need a little push.
This is very similar to what volunteers do when they want to give back to the community. A gatekeeper often feels they have reached a high level of understanding and, in order to feel good, need to share their knowledge with those who deserve it. It's important to understand how basic needs work and how gatekeepers manage their time in order to really take advantage of this.
How to get their attention? 
A gatekeeper will pick you out of the crowd if they think you deserve it. Their thought process works like this: if you go out of your way to achieve outstanding results on a continuous basis, not just once, you have their interest. You need to show that you're constantly looking for knowledge, constantly looking for progress, and that you just need a little help. You need to be at least halfway to the next stair. If they think you need more work than that, they might consider you too high a risk of failing and not worth their time, because you might not return their investment.
The point is to put in extra effort, to go the extra mile, so that when a gatekeeper comes along, they will see that you deserve their time and will participate. It is important to go the extra mile as a normal way of operating in day-to-day life, and you will benefit even more if that is your attitude in everything you do.
By understanding gatekeepers, we can seek their help more thoughtfully. We can create a list of our gatekeepers and figure out a plan for getting their attention a bit faster.
There is another benefit to going the extra mile: it makes you positive and it makes you happy. We should go the extra mile as a trademark of our professionalism. “I want to do the best work I can because I want to leave a great mark and a great legacy behind me.”
It's not about using the gatekeeper; rather, it's a win-win for both sides. They get satisfaction from helping you, fulfilling their basic need for contribution, and you get to skip the last part of the learning curve. Essentially, you get what you deserved. It's the grand finale of your hard work.

Saturday, October 7, 2017

How BDD unites the team

One of the key paradigms in the work that we do is eliminating waste in an organization. Speeding up feedback loops and employing crisp communication both inside individual teams and externally are absolutely essential on the path to achieve effective collaboration.
When a team does not have a dedicated sprint planning session, or the session is done poorly, it becomes one of the major sources of waste.


BDD stands for Behavior Driven Development. It’s an approach used in software delivery to promote effective communication during sprint planning sessions. The idea is to describe software functionality from the end user perspective and make it a starting point for feature development. It’s noteworthy that Cucumber is the most commonly used tool for using BDD in test automation.
During BDD sessions, the team dives into “storytelling”. Using Gherkin syntax, team members use plain-English keywords such as Given, When, and Then to walk a hypothetical user through the feature flow. Each of these user flows is called a Scenario and can be treated as a test case.
A user story consists of one or more scenarios that describe the behavior of the feature along with other details. These user stories essentially provide us with acceptance criteria and requirements for feature implementations.
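To make the format concrete, here is a small invented example; the feature and data are purely illustrative:

```gherkin
Feature: Password reset
  As a registered user
  I want to reset my password
  So that I can regain access to my account

  Scenario: Reset link is emailed to a registered user
    Given a registered user with the email "user@example.com"
    When the user requests a password reset for "user@example.com"
    Then a reset link is emailed to "user@example.com"
```

Each Scenario doubles as an acceptance test, which is what lets the story serve as both requirements and test cases.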
The key point here is to have the whole team actively participating in story generation in order to achieve common understanding for each feature that is getting developed. If this is not how the team operates and requirements are simply dropped on the developers’ shoulders, it is extremely common to see the situation where the vision for the feature is greatly different between developers, testers, product owners, and business units.
Often, some members are too shy to participate, while others may use (or try to use) their authority to further fulfill their egos. Be aware of this.
Storytelling sessions spark arguments and discussions between team members around the user interface and feature functionality. During these sometimes heated sessions, the team discovers gray areas in the requirements that need further clarification from the business.
At first, teams being onboarded with the BDD process show signs of doubt and try to push back. Nobody likes change, especially when the existing process has been in place for years and “worked”. But when you ask the team to really give it a try, after an hour or so they start to discover those gray areas we talked about earlier. The realization arises that certain questions would never have been asked in the old model, leading to wasted time during feature implementation, or even worse, during a testing phase or a demo. A tester comes to a developer asking, “Are you sure this is how it should work?” The developer responds, “I don't know. I developed it based on my interpretation of the requirements.” And the argument begins. Too bad it happens after the feature is already implemented and not before.
This is where the “Three Amigos” approach would shine. The idea is that during story generation in sprint planning, team members that represent different roles, such as developers and testers, would come together to further update the story’s description and create subtasks needed to accomplish the whole piece.
As a result, the process of estimating workload and scoping the sprint commitment becomes an easy task.


Transforming an organization is not an easy task. You need to be able to sell the vision and persist when push-backs occur.
In our experience, injecting BDD into sprint planning bonds the team together and removes barriers between individual roles. The team starts to have a common understanding of the functionality being developed. That by itself lowers the risk of delivering the wrong feature implementation, saving an organization time and money.

Monday, August 14, 2017

Bobcat as an AEM focused UI testing framework

One of our large enterprise clients develops user-facing experiences using AEM (Adobe Experience Manager). We can think of it as a WordPress for enterprises: a developer can author a page using various components.

AEM consists of Author and Publisher instances, each a server listening on ports 4502 and 4503, respectively. The Author instance is used for developing and staging pages with content. Using various components, a developer can greatly customize a web page, and it is a common pattern for each team to develop its own components on top of the existing functionality. A parsys is an area that acts as a canvas for adding components to the page, and the developer can edit each component's default properties by updating the respective fields in a dialog modal. Once the page is ready to go, the developer clicks Activate in the siteadmin management interface. This uploads the page to the Publisher instance and makes it live and available to the public.

But how do we test this?

With every software product comes the topic of testing, and developing in AEM is no different. How do we test that a web page is available and performing as expected? How can we do it in an automated fashion?
There is always an option to use Selenium WebDriver to interact with the page and couple it with some sort of unit test framework that would act as a test runner. But let's spend some time and discuss what alternatives we might have.
AEM ships with a built-in test framework, Hobbes.js. Although it is typically a good idea to use the test framework that ships with the development environment, I would argue that's not the case here. Hobbes has a number of limitations. To start with, it seems it can only be used in Developer mode, so it is useful for testing pre-configured component properties, but that's about it. We can't test authoring, we can't test publishing, and it is impossible to access the navigation bar where you need to switch between different modes.

It's a different story with the Bobcat test framework. Bobcat is a highly AEM-centric product, and we were pleased to find a lot of neat features that help drive page test automation. Think of it as a great combination of Selenium WebDriver plus helpers to perform AEM-specific actions.
On the authoring side, this framework supplies us with methods to manage page creation, activation, and deletion, and to check whether a page is present in the siteadmin tree. Use the siteadminPage instance variable for that.

Ideally, you would create a test page before each individual scenario, use it, and destroy it at the end of the scenario, regardless of whether the scenario finished with success or failure. You can achieve this setup using before and after scenario hooks.

Given that the page is open, we can use other Bobcat helpers to interact with the parsys by dragging in or removing components. Once a component is on the parsys, we can edit its properties. Again, all of this is done programmatically using the helpers provided to us; no low-level Selenium coding necessary.
In configuration files, the tester can specify settings for the WebDriver "capabilities" and the default page timeout, and provide Bobcat with the AEM instance URLs and login information. It's not a great idea to hardcode the latter; we suggest supplying it at runtime by injecting it into the system properties hashmap.
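For example, assuming the author.login and author.password property names used for the login described below, the credentials could be injected on the command line rather than committed to the repo:

```shell
# Property names must match the ones Bobcat reads from its configuration;
# the values here are placeholders
mvn clean test -Dauthor.login=admin -Dauthor.password='s3cret'
```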

Logging into the siteadmin page is also handled nicely. Given that credentials are supplied at runtime and stored in author.login and author.password respectively, Bobcat simply adds a cookie to the browser and we're in. No need to actually type login information or press the Sign In button.
Use aemLogin.authorLogin() for that.

If you find yourself in a situation where Bobcat does not have a specific helper for your task, you can still use Selenium: simply call methods on the webdriver instance variable. For example, we developed a method exclusively to deal with the navigation bar and switch between user modes.

Once that was done, we were able to do the authoring part and immediately perform publisher-side validation by switching to Preview mode. Neat!

The authors of the Bobcat project made a good effort to provide extensive documentation of features and functionality on the project's wiki. We still found some of the material to be outdated, though, and some examples would not work right off the bat.

Cucumber: going one step further

For our test framework implementation, we used a setup of JUnit + Cucumber + Bobcat.
Cucumber is a tool for driving scenario execution and managing reporting. Each scenario is written in the Gherkin language, which features the Given, When, Then, and And keywords to start each phrase:
"Given I have this state, when I do something, then I should see something."

That's the general idea behind describing software behavior using this language; it is commonly referred to as BDD, or Behavior Driven Development. Cucumber is able to match (using a RegEx engine) each Gherkin phrase with a respective block of code (a step definition) written in any major programming language, and execute it.
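As a toy illustration of that matching mechanism, the snippet below pairs a Gherkin-style phrase with a regular expression, much as Cucumber pairs phrases with step definitions; the step text and pattern are invented for the example:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatcher {

    // A step definition in Cucumber is essentially a pattern like this
    // plus a block of code to run with the captured groups
    private static final Pattern STEP =
            Pattern.compile("^I have (\\d+) cukes in my basket$");

    // Returns the captured quantity, or null if the phrase doesn't match
    public static String matchQuantity(String phrase) {
        Matcher m = STEP.matcher(phrase);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(matchQuantity("I have 42 cukes in my basket")); // prints 42
    }
}
```

Cucumber does the same pairing at scale, handing the captured groups to your step-definition method as arguments.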
Cucumber supports tags; in fact, this is one of its strongest features. Using tags, you can include or exclude various scenarios and come up with custom run configurations based on your current needs.

The sole purpose of JUnit in this setup is to kickstart Cucumber. For that, we implemented a minimalistic runner class.
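A sketch of such a class, using the Cucumber 2.x-era JUnit integration that was current at the time; the feature path, glue package, and report location are assumptions rather than the original code:

```java
import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// JUnit delegates the whole run to Cucumber, which discovers the
// .feature files and matches them against the step definitions
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        plugin = {"pretty", "html:target/cucumber-report"}
)
public class RunCucumberTest {
}
```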

Cucumber meets BDD

For me, the main reason to use BDD is to provide across-the-team visibility into testing scenarios. With the Cucumber framework, we are able to describe a user story in plain English, so both technical and non-technical people can easily understand a test case simply by reading the .feature files. It is no longer exclusively a coding expert's job to list the currently automated scenarios and compare them with the acceptance criteria generated during the sprint planning meeting. In fact, the two should match by the end of the sprint.

Final thoughts

Based on the experience we've gathered during client uplifts, we've concluded that it is crucial to an organization's success to drive each team's processes to a state where team members share a common understanding of the feature set committed for development in each sprint, and where tooling is carefully selected and used as intended. Using BDD and the Bobcat framework allowed us not only to rapidly develop AEM-centric test suites for multiple teams but also to provide clarity for both technical and non-technical members.


Saturday, May 13, 2017

How to keep test suite and project in sync while promoting team collaboration

In software development there is a typical problem: how do we maintain a relevant version of the test suite for each version of the project?

The goal is that for every version of the project, we have a test suite version that provides accurate feature coverage.

Let's imagine a simple model where you have version 2.0.0 of your project released to the public and version 2.1.0 currently under active development. The test suite lives in its own separate repo.

Let's say you have 50 test cases covering all the functionality of project version 2.0.0, and the test suite version is 15.2.0. It seems the test team has been active.

For version 2.1.0 of the project, the test team added 5 new test cases and versioned the test suite as 16.0.0.
Now let's imagine customers report a critical bug that the development team quickly fixes in version 2.0.1. The test team has to update the test suite as well.

Updates were made and version 15.2.1 of the test suite was released.
In other words version 15.2.1 of the test suite provides appropriate feature coverage for version 2.0.1 of the project.

Over time, as the project evolves, it becomes difficult to tell which version of the test suite should be used for a specific version of the project. And here we've described quite a simple example.

Reality looks much different. In large enterprises it's very common to see multiple releases being developed simultaneously, with the project relying on components developed by separate teams.

Let's take a hybrid mobile app as an example.
The test matrix would include mobile app version, web app version, mobile platform and device model, environment setup (which is a beast by itself).
The mobile app changes; the web app changes. Some users use an iOS device and some are in love with Android. Screen sizes vary. The environment can be loaded with various versions of the database and different legal and compliance materials.
You’re getting the picture.

On my previous project, the test team used a branching strategy to assist with this issue.
There were three environments: QA, STAGE, PROD.
There were four branches in test suite repo: master, qa, stage, prod.
Current development for new tests and any refactoring was done on the master branch.
The other three branches were designed to carry the test suite version appropriate for the project version currently deployed to the corresponding environment.

As a build passed validation in each environment, the test code was merged to the next higher environment branch.

This branching strategy took care of some of the axes in the test matrix we described earlier: web app, DB, and legal and compliance versioning were covered. Mobile platform, OS version, and device permutations were handled in the code by if-else statements that pointed to the correct block of code.
Not an ideal solution.

On our current project we decided to merge the test suite and the project together. The codebase for both lives in the same repo and shares the same version; if any changes are made to the project, the version is bumped for both.

The project is set up as a Maven project.
In order to accomplish the model described above, an extra <module> entry was added to the root pom.xml to achieve aggregation. This means that when a Maven goal is invoked at the root level, it runs against each individual module. Each module's pom.xml also has a <parent> section referencing the root pom.xml as its parent POM, which means settings like <groupId>, <version>, <distributionManagement>, and others are shared by the root POM with the modules.

Root pom.xml
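A minimal sketch with placeholder coordinates, showing only the aggregation-related parts:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>myproject-parent</artifactId>
  <version>2.1.0</version>
  <packaging>pom</packaging>

  <!-- Aggregation: goals invoked here run against each module -->
  <modules>
    <module>app</module>
    <module>tests</module>
  </modules>

  <!-- Shared with the modules through inheritance -->
  <distributionManagement>
    <repository>
      <id>artifactory</id>
      <url>https://example.jfrog.io/artifactory/libs-release-local</url>
    </repository>
  </distributionManagement>
</project>
```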

<tests> pom.xml would look like this
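Again as a sketch with placeholder coordinates:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- groupId, version, and distributionManagement are inherited -->
  <parent>
    <groupId>com.example</groupId>
    <artifactId>myproject-parent</artifactId>
    <version>2.1.0</version>
  </parent>

  <artifactId>tests</artifactId>
</project>
```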

In our case, we wanted to accomplish two goals whenever changes are made to the project codebase:
  1. a global version update
  2. simultaneous artifact creation and deployment for both the project and the test suite
In order to update the version of the root POM and all modules specified in it, we use the mvn versions:set -DnewVersion=<new_version_here> goal (from the Versions Maven Plugin) at the root level.
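For example (the version number here is illustrative):

```shell
# Bump the version in the root pom.xml and all aggregated module poms
mvn versions:set -DnewVersion=2.1.1

# Accept the change and remove the pom.xml.versionsBackup files
mvn versions:commit
```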

One of the things our team has done to transform the delivery process at the client location was to implement a software delivery pipeline.
The current setup consists of Jenkins (an open-source continuous integration tool), Artifactory (a cloud-based artifact repository), and UrbanCode Deploy (an application deployment tool).
On Jenkins we have build, deploy, smoke test, and regression test jobs. These jobs are chained and trigger one another in order from left to right.
The build job pulls the project repo from Bitbucket and invokes mvn clean deploy at the root level. This not only builds the project but also runs the unit tests and deploys the artifacts to Artifactory. After that, it runs the registerVersionWithUCD.groovy script, which instructs UrbanCode Deploy to create a version under the respective component and store the URLs of the artifacts deployed to Artifactory.

For our use case, we wanted the smoke test job's workspace to be populated only with test-suite-related files.
To accomplish this, we took these actions:
  • added the assembly plugin to the <build> section of the test suite pom.xml to zip the whole test-suite sub-project structure
  • updated the registerVersionWithUCD.groovy script to register the version with the test suite artifact URL
  • created a getTestSuite.groovy script that pulls the test-suite zip archive from Artifactory, using the URL stored in UCD, and unzips it into the current directory (the job's workspace)
At that point, the smoke test job simply needs to invoke the mvn test goal to run the smoke tests.
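For reference, the assembly-plugin addition mentioned in the first bullet might look roughly like this in the test suite pom.xml; the plugin version, descriptor path, and execution id are assumptions:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>3.1.0</version>
      <configuration>
        <!-- The descriptor defines which files end up in the zip -->
        <descriptors>
          <descriptor>src/assembly/test-suite.xml</descriptor>
        </descriptors>
      </configuration>
      <executions>
        <execution>
          <id>package-test-suite</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```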

What have we achieved with this setup?
It is now very easy to tell which version of the test suite to use for which version of the project. As long as developers follow protocol and bump the version using mvn versions:set, the change propagates to the test suite pom.xml and stays identical to the version in the root pom.xml. And there are a number of automated ways to help with this as well.

That is good by itself, but the benefits of merging the project and test suite into one repo did not stop there.
This model encouraged developers and testers to truly come together and collaborate.
Both developers and testers, without any additional incentive, started reviewing each other's pull requests and familiarizing themselves with the codebase out of pure curiosity.
All of a sudden, new features were implemented with tests in mind, and testers were promoting quality right from the pull request phase, catching small and big problems early. During sprint planning meetings, developers started asking questions like: "What do you need from me to allow you to create tests for this?"
Furthermore, members of these teams started to hang out together and feel happier at the workplace.

Now that is a true DevOps Transformation.


Sunday, December 11, 2016

Control iTunes from command line

It is possible to control iTunes from the command line. To do so, you need to install itunes-remote.

Run npm install --global itunes-remote
Now you can send commands to iTunes.

The first command needs to start with itunes-remote.

This starts an itunes-remote session, and subsequent commands no longer have to start with itunes-remote.

Available commands are:
  • help
  • exit
  • play
  • pause
  • next
  • previous
  • back
  • search
So you can run commands like this:
itunes-remote search moby

It's important to note that you can only work with tracks that are part of your library.

The exit command did not work for me.
When searching for a title that consists of multiple words, use single quotes, like so:
itunes-remote search 'twisted transistor'

If you really want to, you can start using this functionality inside your scripts.

For example I added these lines to my pre-push git hook:

puts("Pushing...pushing real good")
system("itunes-remote play")
sleep 3
system("itunes-remote search 'salt-n-pepa'")
sleep 5
system("kill $(ps -A | grep /Applications/ | awk 'NR==1 {print $1}')")

This script is executed whenever I push to my repo. It opens iTunes, waits 3 seconds (necessary for the process to get ready for commands), and starts playing the song "Push It" by Salt-N-Pepa. After 5 more seconds, the script kills the iTunes process.

Enjoy pushing!