Monday, August 14, 2017

Bobcat as an AEM-focused UI testing framework

One of our large enterprise clients develops user-facing experiences using AEM (Adobe Experience Manager). We can think of it as a WordPress for enterprises: a developer can author a page using various components.

AEM consists of Author and Publisher instances. These are servers listening on ports 4502 and 4503, respectively. The Author instance is used for developing and staging pages with content. Using various components, a developer can greatly customize a web page, and it is a common pattern to see each team develop its own components on top of the existing functionality. The parsys is an area that acts as a canvas for adding components to the page. A developer can edit the default properties of each component by updating the respective fields in a dialog modal. Once the page is ready to go, the developer clicks Activate in the siteadmin management interface. This uploads the page to the Publisher instance and makes it live and available to the public.


But how do we test this?

With every software product comes the topic of testing, and developing in AEM is no different. How do we test that the web page is available and performing as expected? And how can we do it in an automated fashion?
There is always the option to use Selenium WebDriver to interact with the page and couple it with some sort of unit test framework that would act as a test runner. But let's spend some time and discuss what alternatives we might have.
AEM ships with a built-in test framework, Hobbes.js. Although it is typically a good idea to use the test framework that ships with the development environment, I would argue that this is not the case here. Hobbes has a number of limitations. To start with, it seems we can only use it in Developer mode, so it is useful for testing pre-configured component properties, but that's about it. We can't test authoring and we can't test publishing, and it is impossible to access the navigation bar where you need to switch between different modes.

It's a different story with the Bobcat test framework.
This is a highly AEM-centric product. Therefore we were pleased to find a lot of neat features that help drive page test automation. Think of it as a great combination of Selenium WebDriver plus helpers to perform AEM-specific actions.
On the authoring side this framework supplies us with methods to manage page creation, activation, and deletion, and to check whether a page is present in the siteadmin tree. Use the siteadminPage instance variable for that.

Ideally you would want to create a test page before each individual scenario, use it, and destroy it at the end of the scenario regardless of whether it finished with success or failure. You can achieve this setup using before and after scenario hooks, as in the sketch below.
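
Here is a minimal sketch of such hooks using Cucumber-JVM's Java API. The siteadminPage method names are our assumptions written against the description above, not verified Bobcat API:

import cucumber.api.java.After;
import cucumber.api.java.Before;

public class TestPageHooks {

    // the Bobcat-provided siteadmin helper mentioned above, injected by the framework
    private SiteadminPage siteadminPage;

    @Before
    public void createTestPage() {
        siteadminPage.open();
        // hypothetical helper: create a test page from a template before each scenario
        siteadminPage.createNewPage("bobcat-test-page", "someTemplate");
    }

    @After
    public void deleteTestPage() {
        // runs regardless of whether the scenario passed or failed
        siteadminPage.open();
        siteadminPage.deletePage("bobcat-test-page");
    }
}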

Given that the page is open, we can use other Bobcat helpers to interact with the parsys by dragging or removing components. Once the component is on the parsys we can edit its properties. Again, all of this is done programmatically using the helpers provided to us. No extra coding necessary.
In the webdriver.properties file the tester can specify WebDriver capabilities and the default page timeout. In the instances.properties file we can provide Bobcat with the AEM instance URLs and login information. It's not a great idea to hardcode the latter; we suggest supplying it at runtime by injecting it into the system properties map.
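
For illustration, instances.properties might contain something like this (the author.url key name is our assumption; author.login and author.password match the keys mentioned below):

author.url=http://localhost:4502
author.login=admin
author.password=admin

To avoid hardcoding the credentials, pass them as system properties at runtime instead, for example:

mvn clean test -Dauthor.login=admin -Dauthor.password=somePassword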

Logging into the siteadmin page is also done in a nice way. Given that credentials are supplied at runtime and stored in author.login and author.password respectively, Bobcat simply adds a cookie to the browser and we're in. No need to actually type login information or press the Sign In button.
Use aemLogin.authorLogin() for that.

If you find yourself in a situation where Bobcat does not have a specific helper for your task, you can still use Selenium: simply call methods on the webDriver instance variable. For example, we developed a method to exclusively deal with the navigation bar and switch between user modes.
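
Here is a sketch of what such a helper could look like with plain Selenium. The CSS selectors are hypothetical placeholders, since the actual navigation bar markup depends on your AEM version:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class ModeSwitcher {

    private final WebDriver webDriver;

    public ModeSwitcher(WebDriver webDriver) {
        this.webDriver = webDriver;
    }

    public void switchToPreviewMode() {
        // open the mode menu in the editor's top navigation bar and pick Preview
        webDriver.findElement(By.cssSelector("#mode-menu-trigger")).click();
        webDriver.findElement(By.cssSelector("#preview-mode-option")).click();
    }
}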

Once that was done we were able to do the authoring part and immediately perform publisher-side validation by switching to Preview mode. Neat!

The authors of the Bobcat project made a good effort to provide extensive documentation of features and functionality on the Wiki page. But we still found some of the material to be outdated, and the examples would not work right off the bat.


Cucumber: going one step further

For our test framework implementation we used a setup of JUnit + Cucumber + Bobcat.
Cucumber is a tool for driving scenario execution and managing reporting. Each scenario is written in the Gherkin language, which features the Given, When, Then, and And keywords to start each phrase:
"Given I have this state, when I do something, then I should see something."

That's the general idea behind describing software behavior using this language. It is commonly referred to as BDD, or Behaviour Driven Development. Cucumber is able to match (using a RegEx engine) each Gherkin phrase with a respective block of code (a step definition) written in any major programming language, and execute it.
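
As a minimal sketch, here is what a scenario and its matching Java step definition could look like (the phrasing and class names are purely illustrative):

Scenario: Author adds a text component
  Given I am logged in as an author
  When I drag the "Text" component onto the parsys
  Then I should see the "Text" component on the page

import cucumber.api.java.en.When;

public class AuthoringSteps {

    // Cucumber matches the When phrase above to this method via the regex
    @When("^I drag the \"([^\"]*)\" component onto the parsys$")
    public void dragComponentOntoParsys(String componentName) {
        // drag-and-drop logic, e.g. via a Bobcat parsys helper, goes here
    }
}
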
Cucumber supports tags. In fact this is one of its strongest features. Using tags you can include or exclude various scenarios and come up with custom run configurations based on your current needs. More on Cucumber here.


The sole purpose of using JUnit in the above setup is to kickstart Cucumber. For that we implemented a minimalistic TestRunner.java class along these lines (the feature and glue paths are placeholders):
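
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// JUnit hands control to Cucumber, which picks up the feature files and step definitions
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        tags = {"~@wip"}  // include or exclude scenarios by tag
)
public class TestRunner {
}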

Cucumber meets BDD

For me the main reason to use BDD is to provide across-the-team visibility into testing scenarios. With the Cucumber framework we are able to describe a user story in plain English. Now both technical and non-technical people can easily understand a test case simply by reading the .feature files. It is no longer exclusively the coding expert's job to make a list of the currently automated scenarios and compare it with the acceptance criteria generated during the Sprint Planning meeting. In fact, the two should match by the end of the Sprint.


Final thoughts

Based on the experience we've gathered during client uplifts, we came to the conclusion that it is crucial for the success of an organization to drive the processes in each team to a state where the team members share a common understanding of the feature set committed for each individual sprint, and where the tooling is carefully selected and used as intended. Using BDD and the Bobcat framework allowed us not only to rapidly develop AEM-centric test suites for multiple teams but also to provide clarity for both technical and non-technical members.

Saturday, May 13, 2017

How to keep test suite and project in sync while promoting team collaboration

In software development there is a typical problem: how to maintain a relevant version of the test suite for each version of the project?

The goal is that for every version of the project we need a test suite version that provides accurate feature coverage.

Let’s imagine a simple model where you have version 2.0.0 of your project released to the public and version 2.1.0 currently under active development. The test suite lives in its own separate repo.

Let's say you have 50 test cases covering all the functionality of project version 2.0.0, and the test suite version is 15.2.0. Seems like the test team is active.

For version 2.1.0 of the project, the test team added 5 new test cases and versioned the test suite as 16.0.0.
Now let's imagine customers report a critical bug that the development team quickly fixes in version 2.0.1. The test team has to update the test suite as well.

Updates were made and version 15.2.1 of the test suite was released.
In other words version 15.2.1 of the test suite provides appropriate feature coverage for version 2.0.1 of the project.

Over time, as the project evolves, it becomes difficult to tell which version of the test suite should be used with a specific version of the project. And here we've described quite a simple example.

Reality looks much different. It's very common in large enterprises to see multiple releases being developed simultaneously, with the project relying on components developed by separate teams.

Let’s take hybrid mobile app as an example.
The test matrix would include the mobile app version, web app version, mobile platform and device model, and environment setup (which is a beast by itself).
The mobile app changes, the web app changes. Some users are on iOS devices and some are in love with Android. Screen sizes vary. The environment can be loaded with various versions of the database and different legal and compliance materials.
You’re getting the picture.

At my previous project the test team was using a branching strategy to assist with this issue.
There were three environments: QA, STAGE, PROD.
There were four branches in the test suite repo: master, qa, stage, prod.
Current development of new tests and any refactoring was done on the master branch.
The other three branches were designed to carry the test suite version appropriate for the project version currently deployed to the corresponding environment.

As a build passes validation in each environment, the test code is merged to the next higher environment branch.

This branching strategy took care of some of the axes in the test matrix described earlier: web app, DB, and legal and compliance versioning were covered. Mobile platform, OS version, and device permutations were handled in the code by introducing if-else and case statements that would point to the correct block of code.
Not an ideal solution.

On our current project we've decided to merge the test suite and the project together. The codebase for both lives in the same repo and shares the same version. If any changes are made to the project, the version is bumped for both.

The project is set up as a Maven project.
In order to accomplish the model described above, an extra <module> was added to the root pom.xml to achieve aggregation. This means that when a Maven goal is invoked at the root level it is run against each individual module. Each module's pom.xml also has a <parent> section referencing the root pom.xml as its parent POM, which means settings like <groupId>, <version>, <distributionManagement>, and others are shared by the root POM with the modules.

Root pom.xml
...
<groupId>some.group.id</groupId>
<artifactId>product</artifactId>
<version>1.1.11-SNAPSHOT</version>
<packaging>pom</packaging>
<modules>
   <module>core</module>   
   <module>apps</module>   
   <module>launcher</module>
   <module>tests</module>
</modules>
<distributionManagement>
...
</distributionManagement>
...

The tests module's pom.xml would look like this
...
<parent>
   <groupId>some.group.id</groupId>
   <artifactId>product</artifactId>
   <version>1.1.11-SNAPSHOT</version>
</parent>
<artifactId>product-tests</artifactId>
...

In our case we wanted to accomplish two goals whenever changes are made to the project codebase:
  1. a global version update
  2. simultaneous artifact creation and deployment for both the project and the test suite
In order to update the version of the root pom and all modules specified in the pom.xml, we invoke the mvn versions:set -DnewVersion=<new_version_here> goal at the root level.
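
For example, bumping the snapshot version shown above might look like this (versions:commit removes the pom.xml.versionsBackup files the plugin leaves behind):

mvn versions:set -DnewVersion=1.1.12-SNAPSHOT
mvn versions:commit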

One of the things our team has done to transform the delivery process at the client location was to implement a software delivery pipeline.
The current setup consists of Jenkins (an open-source continuous integration tool), Artifactory (a cloud-based artifact repository), and UrbanCode Deploy (an application deployment tool).
On Jenkins we have build, deploy, smoke test, and regression test jobs. These jobs are chained and trigger one another in order from left to right.
The build job pulls the project repo from Bitbucket and invokes mvn clean deploy at the root level. This not only builds the project but also runs the unit tests and deploys artifacts to Artifactory. After that it runs the registerVersionWithUCD.groovy script, which instructs UrbanCode Deploy to create a version under the respective component and store URLs for the artifacts deployed to Artifactory.

For our use case we wanted the smoke test job to have its workspace populated only with test-suite-related files.
To accomplish this we took these actions:
  • added the assembly plugin to the <build> section of the test suite pom.xml to zip the whole test suite sub-project structure (a sketch follows below)
  • updated the registerVersionWithUCD.groovy script to register the version with the test suite artifact URL as well
  • created a getTestSuite.groovy script that pulls the test suite zip archive from Artifactory using the URL stored in UCD and unzips it into the current directory (the job's workspace)
At that point the smoke test job simply needs to invoke the mvn test goal to run the smoke tests.
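
Here is a sketch of the assembly plugin configuration in the test suite pom.xml; the descriptor path and execution id are placeholders, and the descriptor file itself defines which files end up in the zip:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptors>
          <descriptor>src/assembly/test-suite.xml</descriptor>
        </descriptors>
      </configuration>
      <executions>
        <execution>
          <id>zip-test-suite</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>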

What have we achieved with this setup?
Now it's very easy to tell which version of the test suite we need to use with which version of the project. As long as developers follow the protocol and bump the version using mvn versions:set, it propagates to the test suite pom.xml and stays identical to the version in the root pom.xml. And there are a number of automated ways to help with this as well.

That is good by itself, but the benefits of merging the project and the test suite into one repo did not stop there.
This model encouraged developers and testers to truly come together and start collaborating.
Both developers and testers, without any additional incentive, started reviewing each other's pull requests and familiarizing themselves with the codebase out of pure curiosity.
All of a sudden, new features were implemented with tests in mind, and testers were promoting quality right from the pull request phase, catching small and big problems early. During sprint planning meetings developers started to ask questions like: "What do you need from me to allow you to create tests for this?"
Furthermore, members of these teams started to hang out together and feel happier at the workplace.

Now that is a true DevOps Transformation.

Sunday, December 11, 2016

Control iTunes from command line

It is possible to control iTunes from command line. In order to do so you need to install itunes-remote.

Run npm install --global itunes-remote
Now you can send commands to iTunes.

The first command needs to start with itunes-remote.

This starts an itunes-remote session, and all following commands no longer have to start with itunes-remote.

Available commands are:
  • help
  • exit
  • play
  • pause
  • next
  • previous
  • back
  • search
So you can run commands like this:
itunes-remote search moby
next
stop

It's important to note that you can only work with tracks that are part of your Library.

The exit command did not work for me.
When you are searching for a title that consists of multiple words, you should use single quotes, like so:
itunes-remote search 'twisted transistor'

If you really want to you can start using this functionality inside of your scripts.

For example I added these lines to my pre-push git hook:

#!/usr/local/bin/ruby
puts "Pushing...pushing real good"
# start an itunes-remote session and give iTunes time to get ready for commands
system("itunes-remote play")
sleep 3
# queue up Salt-N-Pepa, let it play for a bit, then kill the iTunes process
system("itunes-remote search 'salt-n-pepa'")
sleep 5
system("kill $(ps -A | grep /Applications/iTunes.app/Contents/MacOS/iTunes | awk 'NR==1 {print $1}')")

What happens here is that this script is executed whenever I push to my repo. It opens iTunes, waits 3 seconds (this is necessary for the process to get ready for commands), and starts playing the song Push It by Salt-N-Pepa. After 5 seconds the script kills the iTunes process.

Enjoy pushing!

Sunday, November 27, 2016

Install any App Store software from command line using mas-cli

   Recently I had a need to create an easy onboarding setup.
   Whenever a new member joins the team I want them to be up and running with a local dev environment in the shortest amount of time. I want the process to be error-proof, repeatable, and to involve as little manual interaction as possible. Wouldn't it be nice to just say: "Hey, run this script. And you're good!"
   As part of this effort there is a need to be able to programmatically install apps that are only distributed via the App Store and are not available via Homebrew.
   Meet the mas-cli utility.
   It allows you to search for, install, and update App Store-distributed software.
   Installing mas-cli on your machine is as easy as running brew install mas in a Terminal window.
The way mas-cli works is that you first need to know the ID of the app you're interested in. You can find it by running mas search <name_here>. This returns a list of available apps whose names match the one you provided. Find the one you're most interested in and make a note of the ID displayed next to the program name. Now you can run mas install <id_here>. And that's it: the program is installed without you ever needing to interact with the App Store directly.
   That's cool. But what we're really interested in here is a way to put the software installation routine into the onboarding script.
   Here is an example of how to do that if you lets say want to install Xcode:
mas install $(mas search xcode | grep 'Xcode' |head -1 | awk '{print $1}')
Here we first search the registry for xcode, select the result line containing 'Xcode', take only the first column (that's the ID), and pass it to the mas install command.

  So all that's left is to add a line like this to your onboarding script. Just make sure you have the mas utility installed on the machine prior to invoking such a routine.

  mas-cli GitHub page: https://github.com/mas-cli/mas

Sunday, October 30, 2016

Why Homebrew is awesome

   Homebrew is a package manager for macOS. Similar to yum on CentOS and apt-get on Ubuntu, it lets you search repositories for specific software and install it on your machine using brew install package_name.
   Now, this post is about a cool feature of Homebrew I didn't know about before: you can install full-fledged apps, which are typically distributed in the form of .dmg files, right from the Terminal using Homebrew.
   For example, you can install heavyweight apps like IntelliJ and RubyMine by JetBrains, Atom, the Chrome browser, etc., without ever leaving the Terminal or opening the browser to look for installers.
   The only thing you need to do is specify the cask option in the brew command.
For example, if we want to install IntelliJ, just run brew cask install Caskroom/cask/intellij-idea
   Isn't that cool! No more looking and googling around for an app's distribution and instructions on how to install it.
   Installing Java is a good example. Instead of going to the Oracle website and figuring out which link to press and which type of package you need for your specific case, with Homebrew you just type brew search java, and once you've settled on the package you like, you go brew cask install Caskroom/cask/java... without ever leaving the Terminal window.
   Now, how do you know when you need to use the cask option?
   Let's talk about brew search some_string. This command is used to search for available packages that have some_string in their name. If the package you'd like to install has cask in it - you need to use the cask option.

  So let's go over an example - I want to install Chrome. Here's how it would look:
  1. search for available packages by running brew search chrome

     it seems like we want to use the package Caskroom/cask/google-chrome
  2. install the package by running brew cask install Caskroom/cask/google-chrome
That's it! Chrome should be installed and available in your Applications folder.