Monday, November 6, 2017

Gatekeepers are watching you

No matter what we focus on, progress always looks like a staircase. Growth in experience, expansion of our connections, and improvement of our understanding can all be seen as staircases. And as we grow, we move up from one stair to the next.

In order to get to the next stair we need to improve. And that takes time and effort. Sometimes it takes too long, possibly years. But there is an advantage to that too. The longer you work on the same level, the greater the foundation you build for sustainable growth, and the lower the risk of falling. And by falling I mean making a mistake that leads to a loss of time, motivation, confidence, and experience.
This is the way growth works.
How do we get to the next level? By learning and practicing, learning and practicing. This is what makes us better and better.
At some point we cross the line between growth levels. A moment of clarity takes place: some sort of event happens that makes it clear we've reached a new level. Sometimes it is measurable, but a lot of the time it is purely a feeling. “I think I got this!”
And we jump to the next stair. A new level of understanding, a career promotion, or whatever specifically you have been pursuing.
The growth model would be incomplete without us talking about a special type of person: the Gatekeeper.
A gatekeeper is a person who possesses superior skills, connections, or knowledge. He is the one who can help you get to the next level by fast-forwarding part of the learning curve. Gatekeepers are the people who look out from the higher stairs to see who might need a little bit of an uplift to get to the next level.
As an example, in the corporate world the natural gatekeepers are the ones who occupy leadership roles: those who can influence our promotions, delegate more responsibilities to us, and connect us with important people. They are gatekeepers by default.
Why are we interested in gatekeepers?
Rather than walking the whole way and learning everything needed to get to the next level on your own, you might be pulled up by a gatekeeper. That can significantly shorten the learning curve and open opportunities you've never imagined before.
Now, why do they do it? What are their motives?
In this world people only move on motive.
To understand this we need to talk about basic human needs.
At each stage of life we have a certain composition of basic needs in us: love and connection, certainty, uncertainty, growth, contribution, and significance.
We have all of these needs in us at all times. Only the ratio between them changes depending on which stage of life we're in.
For example, in their 20s a person might look for significance and connection, while in their 50s they might look for more certainty and contribution, a desire to give back.
So, going back to the topic of gatekeepers: the reason they try to help you out is that they feel the need to give back to those who deserve it. For them it's the way to fulfill their basic need for contribution, the one that is dominant in the current period of their life.
They will only help those who really deserve it, because their time is precious and they want a sure return on their investment. They will only do it if they see that you can succeed and that you're almost there; you just need a little push.
This is very similar to what volunteers do when they want to give back to the community. A gatekeeper often feels that he has reached a high level of understanding, and in order to feel good he needs to share his knowledge with those who deserve it. It's important to understand how basic needs work and how gatekeepers manage their time in order to really take advantage of this.
How to get their attention? 
A gatekeeper will pick you from the crowd if he thinks you deserve it. His thought process works like this: if you go out of your way to achieve outstanding results on a continuous basis, and not just once, you've got him interested. You need to show that you're constantly looking for knowledge and constantly looking for progress, and that you just need a little help. You need to be at least halfway to the next stair. If he thinks you need more work than that, he may decide you're at high risk of failing and not worth his time, because you might not return his investment.
The point is to put in extra effort, to go the extra mile. So when the moment comes and a gatekeeper comes along, he will see that you deserve his time and he will participate. It is important to go the extra mile as a normal way of operating in day-to-day life. And you benefit even more if that is your attitude in everything you do.
By understanding gatekeepers we can seek their help more thoughtfully. We can create a list of our gatekeepers and figure out a plan for getting their attention a bit faster.
There is another benefit to going the extra mile: it makes you positive and it makes you happy. We should go the extra mile as a trademark of our professionalism. “I want to do the best work I can because I want to leave a great mark and a great legacy behind me.”
It's not about using the gatekeeper; it's a win-win for both sides. He gets satisfaction from helping you and thereby fulfilling his basic need for contribution. And you get to skip the last part of the learning curve. Essentially, you get what you deserve. It's the grand finale of your hard work.

Saturday, October 7, 2017

How BDD unites the team

One of the key paradigms in the work that we do is eliminating waste in an organization. Speeding up feedback loops and employing crisp communication, both inside individual teams and externally, are absolutely essential on the path to effective collaboration.
When a team does not have a dedicated sprint planning session, or runs one poorly, that becomes one of the major sources of waste.

MEET BDD

BDD stands for Behavior Driven Development. It’s an approach used in software delivery to promote effective communication during sprint planning sessions. The idea is to describe software functionality from the end user's perspective and make it the starting point for feature development. It's worth noting that Cucumber is the most commonly used tool for applying BDD in test automation.
During BDD sessions, the team dives into “storytelling”. Using Gherkin jargon, team members use plain English keywords such as Given, When, and Then to walk a hypothetical user through the feature flow. Each of these user flows is called a Scenario and can be treated as a test case.
A user story consists of one or more scenarios that describe the behavior of the feature along with other details. These user stories essentially provide us with acceptance criteria and requirements for feature implementations.
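For instance, a scenario for a hypothetical login feature might look like this (the feature and step wording are illustrative):

Feature: User login

  Scenario: Registered user signs in successfully
    Given I am a registered user on the login page
    When I enter valid credentials and press Sign In
    Then I should see my account dashboard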
The key point here is to have the whole team actively participating in story generation in order to achieve a common understanding of each feature being developed. If this is not how the team operates and requirements are simply dropped on the developers' shoulders, it is extremely common to see the vision for a feature differ greatly between developers, testers, product owners, and business units.
Often, some members will be too shy to participate, while others will try to use their authority to further feed their egos. Be aware of this.
Storytelling sessions spark arguments and discussions around the user interface and feature functionality. During these sometimes heated sessions, team members discover gray areas in the requirements that need to be further clarified by the business.
At first, teams being onboarded onto the BDD process show signs of doubt and try to push back. Nobody likes change, especially when the existing process has been in place for years and “worked”. But when you ask the team to really give it a try, after an hour or so they start to discover those gray areas we talked about previously. The realization arises that without this sprint planning format certain questions would never be asked, leading to wasted time during feature implementation, or even worse, during a testing phase or a demo. A tester would come to a developer asking, “Are you sure this is how it should work?” The developer might respond, “I don’t know. I developed it based on my interpretation of the requirements.” And the argument begins. Too bad this happens after the feature is already implemented and not before.
This is where the “Three Amigos” approach would shine. The idea is that during story generation in sprint planning, team members that represent different roles, such as developers and testers, would come together to further update the story’s description and create subtasks needed to accomplish the whole piece.
As a result, the process of estimating workload and scoping the sprint commitment becomes an easy task.

FINAL WORDS

Transforming an organization is not an easy task. You need to be able to sell the vision and persist when push-backs occur.
In our experience, injecting BDD into sprint planning bonds the team together and removes barriers between individual roles. The team starts to have a common understanding of the functionality being developed. That by itself lowers the risk of delivering the wrong feature implementation, saving the organization time and money.
Written for liatrio.com

Monday, August 14, 2017

Bobcat as an AEM focused UI testing framework

One of our large enterprise clients develops user-facing experiences using AEM (Adobe Experience Manager). We can think of it as a WordPress for enterprises. Developers author pages using various components.

AEM consists of Author and Publisher instances. Each is a server, listening on ports 4502 and 4503 respectively. The Author instance is used for developing and staging pages with content. Using various components, a developer can greatly customize a web page. It is a common pattern to see each team develop its own components to build on top of the existing functionality. A parsys is an area that acts as a canvas for adding components to the page. The developer can edit the default properties of each component by updating the respective fields in a dialog modal. Once the page is ready to go, the developer clicks Activate in the siteadmin management interface. This uploads the page to the Publisher instance and makes it live and available to the public.


But how do we test this?

With every software product comes the topic of testing, and developing in AEM is no different. How do we test that the web page is available and performing as expected? How can we do it in an automated fashion?
There is always the option to use Selenium WebDriver to interact with the page and couple it with some sort of unit test framework to act as a test runner. But let's spend some time discussing what alternatives we might have.
AEM ships with a built-in test framework, Hobbes.js. Although it is typically a good idea to use the test framework that ships with the development environment, I would argue that this is not the case here. Hobbes has a number of limitations. To start with, it seems we can only use it in Developer mode. So it is beneficial for testing pre-configured component properties, but that's about it. We can't test authoring and we can't test publishing. It is impossible to access the navigation bar, where you need to switch between different modes.

It's a different story with the Bobcat test framework.
This is a highly AEM-centric product, and we were pleased to find a lot of neat features that help drive page test automation. Think of it as Selenium WebDriver plus a set of helpers for performing AEM-specific actions.
On the authoring side, this framework supplies us with methods to manage page creation, activation, and deletion, and to check whether a page is present in the siteadmin tree. Use the siteadminPage instance variable for that.

Ideally you would want to create a test page before each individual scenario, use it, and destroy it at the end of the scenario regardless of whether it finished with success or failure. You can achieve this setup using before and after scenario hooks.
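A minimal sketch of such hooks with Cucumber-JVM (the page-management calls are commented placeholders; the actual Bobcat helper names may differ):

import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;

public class TestPageHooks {

    @Before
    public void createTestPage() {
        // create a fresh page in siteadmin before every scenario,
        // e.g. open siteadmin via siteadminPage and call a create-page helper
    }

    @After
    public void deleteTestPage(Scenario scenario) {
        // runs whether the scenario passed or failed,
        // so the test page is always cleaned up
    }
}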

Given that the page is open, we can use other Bobcat helpers to interact with the parsys by dragging in or removing components. Once a component is on the parsys, we can edit its properties. Again, all of this is done using the helpers provided to us; no low-level Selenium code is necessary.
In the webdriver.properties file, the tester can specify settings for the webdriver "capabilities" and the default page timeout. In the instances.properties file, we provide Bobcat with the AEM instance URLs and login information. It's not a great idea to hardcode the latter; we suggest supplying it at runtime by injecting it into the system properties hashmap.
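As a rough sketch, the two files might look like this (author.login and author.password are the keys mentioned below; the remaining keys and values are illustrative):

webdriver.properties:
webdriver.type=chrome
webdriver.defaultTimeout=10

instances.properties:
author.url=http://localhost:4502
author.login=admin
author.password=admin

One way to inject the credentials at runtime instead of hardcoding them is to pass them as system properties, e.g. mvn test -Dauthor.login=admin -Dauthor.password=secret.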

Logging into the siteadmin page is also handled nicely. Given that credentials are supplied at runtime and stored in author.login and author.password respectively, Bobcat simply adds a cookie to the browser and we're in. No need to actually type login information or press the Sign In button.
Use aemLogin.authorLogin() for that.

If you find yourself in a situation where Bobcat does not have a specific helper for your task, you can still use Selenium directly. Simply call methods on the webdriver instance variable. For example, we developed a method to deal exclusively with the navigation bar and switch between user modes.
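For illustration, a hypothetical helper along those lines could bypass the navigation bar entirely and switch modes through AEM's wcmmode URL parameter:

import org.openqa.selenium.WebDriver;

public class ModeHelper {

    // Hypothetical helper, not a Bobcat API: force the current page
    // into Preview mode by appending AEM's wcmmode parameter to the URL.
    public void switchToPreviewMode(WebDriver webdriver) {
        String url = webdriver.getCurrentUrl();
        String separator = url.contains("?") ? "&" : "?";
        webdriver.get(url + separator + "wcmmode=preview");
    }
}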

Once that was done, we were able to do the authoring part and immediately perform publisher-side validation by switching to Preview mode. Neat!

The authors of the Bobcat project have made a good effort to provide extensive documentation of features and functionality on the project's wiki. But we still found some of the material to be outdated, and some examples would not work right off the bat.


Cucumber: going one step further

For our test framework implementation we used a setup of JUnit + Cucumber + Bobcat.
Cucumber is a tool for driving scenario execution and managing reporting. Each scenario is written in the Gherkin language, which features the Given, When, Then, and And keywords to start each phrase.
"Given I have this state, when I do something, then I should see something."

That's the general idea behind describing software behavior in this language. It is commonly referred to as BDD, or Behavior Driven Development. And Cucumber is able to match (using a RegEx engine) each Gherkin phrase with a respective block of code (a step definition) written in any major coding language, and execute it.
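For example, a step definition in Java might look like this (the phrase and class are illustrative):

import cucumber.api.java.en.Given;

public class CartStepDefinitions {

    // Cucumber matches the Gherkin phrase against this regex and
    // passes the captured group in as a method argument
    @Given("^I have (\\d+) items in my cart$")
    public void iHaveItemsInMyCart(int count) {
        // code that sets up the state described by the phrase
    }
}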
Cucumber supports tags. In fact, this is one of its strongest features. Using tags you can include or exclude various scenarios and come up with custom run configurations based on your current needs. More on Cucumber here.
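For example, with Cucumber-JVM you can pick tagged scenarios at runtime through the cucumber.options system property (the @smoke tag is illustrative):

mvn test -Dcucumber.options="--tags @smoke"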


The sole purpose of using JUnit in the above setup is to kickstart Cucumber. For that we implemented this minimalistic TestRunner.java class:
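import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // location of the .feature files (project-specific)
        glue = "com.example.steps",                // package containing the step definitions (project-specific)
        plugin = {"pretty"})
public class TestRunner {
}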

Cucumber meets BDD

For me, the main reason to use BDD is to provide across-the-team visibility into testing scenarios. With the Cucumber framework we are able to describe a user story in plain English. Both technical and non-technical people can easily understand a test case simply by reading the .feature files. It is no longer exclusively the coding expert's job to produce a list of the currently automated scenarios and compare it with the acceptance criteria generated during the sprint planning meeting. In fact, the two should match by the end of the sprint.

Final thoughts

Based on the experience we've gathered during client uplifts, we came to the conclusion that it is crucial for the success of an organization to drive processes in each team to the state where team members have a common understanding of the committed feature set under development during each individual sprint, and the tooling is carefully selected and used as intended. Using BDD and the Bobcat framework allowed us to not only rapidly develop AEM-centric test suites for multiple teams but also provide clarity for both technical and non-technical members.


Written for liatrio.com

Saturday, May 13, 2017

How to keep test suite and project in sync while promoting team collaboration

In software development there is a typical problem: how do you maintain a relevant version of the test suite for each version of the project?

The goal is that for every version of the project, we have a test suite version that provides accurate feature coverage.

Let's imagine a simple model where you have version 2.0.0 of your project released to the public and version 2.1.0 currently under active development. The test suite lives in its own separate repo.

Let's say you have 50 test cases covering all the functionality of project version 2.0.0, and the test suite version is 15.2.0. It seems the test team is active.

For version 2.1.0 of the project, the test team added 5 new test cases and versioned the test suite as 16.0.0.
Now let's imagine customers report a critical bug that the development team quickly fixes in version 2.0.1. The test team has to update the test suite as well.

Updates were made and version 15.2.1 of the test suite was released.
In other words version 15.2.1 of the test suite provides appropriate feature coverage for version 2.0.1 of the project.

Over time, as the project evolves, it becomes difficult to tell which version of the test suite should be used for a specific version of the project. And this is quite a simple example.

Reality looks much different. It's very common in large enterprises to see multiple releases being developed simultaneously, with the project relying on components developed by separate teams.

Let's take a hybrid mobile app as an example.
The test matrix would include the mobile app version, the web app version, the mobile platform and device model, and the environment setup (which is a beast by itself).
The mobile app changes, the web app changes. Some users use iOS devices and some are in love with Android. Screen sizes vary. The environment can be loaded with various versions of the database and different legal and compliance materials.
You’re getting the picture.

On my previous project, the test team used a branching strategy to help with this issue.
There were three environments: QA, STAGE, PROD.
There were four branches in test suite repo: master, qa, stage, prod.
Current development for new tests and any refactoring was done on the master branch.
The other three branches were designed to carry the test suite version appropriate for the project version currently deployed to the specific environment.

As a build passes validation in each environment, the test code is merged to the next higher environment branch.
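For example, promoting the test code through the branches might look like this:

git checkout qa
git merge master   # tests for the build entering QA
git checkout stage
git merge qa       # after the build passes QA validation
git checkout prod
git merge stage    # after the build passes STAGE validation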

This branching strategy took care of some of the axes in the test matrix we described earlier. Web app, DB, and legal and compliance versioning were covered. Mobile platform, OS version, and device permutations were handled in the code by if-else and switch statements pointing to the correct block of code.
Not an ideal solution.

On our current project we've decided to merge the test suite and the project together. The codebase for both lives in the same repo and has the same version. If any changes are made to the project, the version is bumped for both.

The project is set up as a Maven project.
In order to accomplish the model described above, an extra <module> entry was added to the root pom.xml to achieve aggregation. This means that when a Maven goal is invoked at the root level, it runs against each individual module. Each module's pom.xml also has a <parent> section referencing the root pom.xml as its parent pom. This means settings like <groupId>, <version>, <distributionManagement>, and others are shared by the root pom with the modules.

Root pom.xml
...
<groupId>some.group.id</groupId>
<artifactId>product</artifactId>
<version>1.1.11-SNAPSHOT</version>
<packaging>pom</packaging>
<modules>
   <module>core</module>   
   <module>apps</module>   
   <module>launcher</module>
   <module>tests</module>
</modules>
<distributionManagement>
...
</distributionManagement>
...

The tests module's pom.xml would look like this:
...
<parent>
   <groupId>some.group.id</groupId>
   <artifactId>product</artifactId>
   <version>1.1.11-SNAPSHOT</version>
</parent>
<artifactId>product-tests</artifactId>
...

In our case we wanted to accomplish two goals whenever changes are made to the project codebase:
  1. global version update
  2. simultaneous artifact creation and deployment for both project and the test suite
In order to update the version of the root pom and all modules specified in it, we use the mvn versions:set -DnewVersion=<new_version_here> goal at the root level.
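For example (the new version number is illustrative):

mvn versions:set -DnewVersion=1.1.12-SNAPSHOT   # updates the root pom and every module
mvn versions:commit                             # removes the pom.xml.versionsBackup files left by versions:set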

One of the things our team did to transform the delivery process at the client location was to implement a software delivery pipeline.
The current setup consists of Jenkins (an open-source continuous integration tool), Artifactory (a cloud-based artifact repository), and UrbanCode Deploy (an application deployment tool).
On Jenkins we have build, deploy, smoke test, and regression test jobs. These jobs are chained and trigger one another in order, from left to right.
The build job pulls the project repo from Bitbucket and invokes mvn clean deploy at the root level. This not only builds the project but also runs the unit tests and deploys artifacts to Artifactory. After that it runs the registerVersionWithUCD.groovy script, which instructs UrbanCode Deploy to create a version under the respective component and store the URLs of the artifacts deployed to Artifactory.

For our use case, we wanted the smoke test job's workspace to be populated only with test-suite-related files.
To accomplish this we took these actions:
  • added the assembly plugin to the <build> section of the test suite pom.xml to zip the whole test suite sub-project structure (a sketch follows this list)
  • updated the registerVersionWithUCD.groovy script to register the version with the test suite artifact URL
  • created a getTestSuite.groovy script that pulls the test suite zip archive from Artifactory, using the URL stored in UCD, and unzips it into the current directory (the job's workspace)
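A rough sketch of that assembly plugin configuration, assuming the predefined project descriptor is used (the exact setup may differ):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <!-- the predefined "project" descriptor archives the whole sub-project -->
            <descriptorRefs>
              <descriptorRef>project</descriptorRef>
            </descriptorRefs>
            <formats>
              <format>zip</format>
            </formats>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>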
At that point the smoke test job simply needs to invoke the mvn test goal to run the smoke tests.

What have we achieved with this setup?
Now it's very easy to tell which version of the test suite to use for which version of the project. As long as developers follow protocol and bump the version using mvn versions:set, it propagates to the test suite pom.xml and stays identical to the version in the root pom.xml. And there are a number of automated ways to help with this as well.

That is good by itself, but the benefits of merging the project and test suite into one repo did not stop there.
This model encouraged developers and testers to truly come together and start collaborating.
Both developers and testers, without any additional incentive, started to review each other's pull requests and familiarize themselves with the codebase out of pure curiosity.
All of a sudden, new features were implemented with tests in mind, and testers were promoting quality right from the pull request phase, catching small and big problems early. During sprint planning meetings, developers started to ask questions like: "What do you need from me to allow you to create tests for this?"
Furthermore, members of these teams started to hang out together and feel happier at the workplace.

Now that is a true DevOps Transformation.


Written for liatrio.com

Sunday, December 11, 2016

Control iTunes from command line

It is possible to control iTunes from the command line. In order to do so, you need to install itunes-remote.

Run npm install --global itunes-remote
Now you can send commands to iTunes.

Your first command needs to start with itunes-remote.

This starts an itunes-remote session, and all following commands no longer have to start with itunes-remote.

Available commands are:
  • help
  • exit
  • play
  • pause
  • next
  • previous
  • back
  • search
So you can run commands like this:
itunes-remote search moby
next
pause

It's important to note that you can only work with tracks that are part of your Library.

The exit command did not work for me.
When you search for a title that consists of multiple words, you should use single quotes, like so:
itunes-remote search 'twisted transistor'

If you really want to, you can start using this functionality inside your scripts.

For example, I added these lines to my pre-push git hook:

#!/usr/local/bin/ruby
puts "Pushing...pushing real good"
system("itunes-remote play")   # starts an itunes-remote session and opens iTunes
sleep 3                        # give the iTunes process time to get ready for commands
system("itunes-remote search 'salt-n-pepa'")
sleep 5
system("kill $(ps -A | grep /Applications/iTunes.app/Contents/MacOS/iTunes | awk 'NR==1 {print $1}')")

What happens here is that this script is executed whenever I push to my repo. It opens iTunes, waits 3 seconds (necessary for the process to get ready for commands), and starts playing the song Push It by Salt-N-Pepa. After 5 more seconds the script kills the iTunes process.

Enjoy pushing!

Sunday, November 27, 2016

Install any App Store software from command line using mas-cli

   Recently I had a need to create an easy onboarding setup.
   Whenever a new member joins the team, I want them to be up and running with a local dev environment in the shortest amount of time. I want the process to be resistant to errors, repeatable, and to involve as little manual interaction as possible. Wouldn't it be nice to just say: "Hey, run this script. And you're good!"
   As a part of this effort, there is a need to programmatically install apps that are only distributed via the App Store and are not available via Homebrew.
   Meet the mas-cli utility.
   It allows you to search for, install, and update App Store distributed software.
   Installing mas-cli on your machine is as easy as running brew install mas in the Terminal window.
The way mas-cli works is that you first need to know the ID of the app you're interested in. You can find it by running mas search <name_here>. This returns a list of available apps that have anything to do with the name you provided. Find the one you're most interested in and make a note of the ID displayed next to the program name. Now you can run mas install <id_here>. And that's it. The program will be installed without you ever having to interact with the App Store directly.
   That's cool. But what we're really interested in here is a way to put the software installation routine into an onboarding script.
   Here is an example of how to do that if you, let's say, want to install Xcode:
mas install $(mas search xcode | grep 'Xcode' |head -1 | awk '{print $1}')
Here we first search the registry for xcode, grep for the line containing 'Xcode', take the first match, and extract the first column, which is the ID. Then we pass it to the mas install command.

  So what's left is to add a line like this to your onboarding script. Just make sure you have the mas utility installed on the machine prior to invoking such a routine.

  Mas-cli GitHub page https://github.com/mas-cli/mas

Sunday, October 30, 2016

Why Homebrew is awesome

   Homebrew is a package manager for macOS. Similar to yum on CentOS and apt-get on Ubuntu, it lets you search repositories for specific software and install it on your machine using brew install package_name.
   Now, this post is about a cool feature of Homebrew I didn't know about before. You can install full-fledged apps, which are typically distributed in the form of .dmg files, right from the Terminal using Homebrew.
   For example, you can install heavyweight apps like IntelliJ and RubyMine by JetBrains, Atom, the Chrome browser, etc., without ever leaving the Terminal or opening the browser to look for installers.
   The only thing you need to do is specify the cask option in the brew command.
For example, if we want to install IntelliJ, we just run brew cask install Caskroom/cask/intellij-idea
   Isn't that cool! No more googling around for an app's distribution and instructions on how to install it.
   Installing Java is a good example. Instead of going to the Oracle website and figuring out which link to press and which type of package fits your specific need, with Homebrew you just type brew search java, and once you've settled on the package you like, you run brew cask install Caskroom/cask/java, without ever leaving the Terminal window.
   Now, how do you know when you need to use the cask option?
   Let's talk about brew search some_string. This command is used to search for available packages that have some_string in them. If the package you'd like to install has cask in it, you need to use the cask option.

  So let's go over an example: I want to install Chrome. Here's how it looks:
  1. search for available packages by running brew search chrome

     it seems like we want to use the package Caskroom/cask/google-chrome
  2. install the package by running brew cask install Caskroom/cask/google-chrome
That's it! Chrome should be installed and available in your Applications folder.