Posts about QA and Testing

Interactive and Detailed Automated Test Case Reports

Posted by Komal Batra on Dec 4, 2018 3:57:00 PM

The reporting component of a test automation framework plays an important role. The success of a test automation framework, and its survival, depend on how effectively its reporting mechanism is implemented.

To convey results from the development team to the customer or business team, a detailed and interactive test case report is required.

There are various reporting libraries that an automation framework designer can use for the reporting component. One of them is the Extent HTML Report, which produces beautiful, detailed, and interactive reports of automated tests.

Extent HTML Report

It is an interactive reporting mechanism which can be integrated with a Selenium test automation framework.

There are many features provided by Extent Report:

  • Dashboard - provides a detailed and graphical analysis of the project
  • Interactive - an interactive HTML report with a lot of UI widgets
  • Integration Support - can be configured with Java (JUnit, TestNG) and .NET (NUnit) test automation frameworks
  • Detailed Information - provides detailed information about the test cases, including the details of failed test cases

How to generate Extent Report

#Step 1: Download the Extent Report libraries and add them to your Selenium project, or add the dependency below to your pom.xml
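
The dependency in question is the extentreports artifact from the com.aventstack group; the version below is only an example, so check for the release that matches the code in this post:

```xml
<!-- Extent Reports dependency; the version shown is illustrative, pick the release you need -->
<dependency>
    <groupId>com.aventstack</groupId>
    <artifactId>extentreports</artifactId>
    <version>3.1.5</version>
</dependency>
```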






#Step 2: Create a java package, say utils, add a sub-package Listeners, create a class extentListener that implements ITestListener, and add the following code

package utils.Listeners;

import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import com.aventstack.extentreports.reporter.ExtentHtmlReporter;

import utils.baseFunctions; // adjust to the package where your baseFunctions class lives

public class extentListener extends baseFunctions implements ITestListener {

    public static String testName;
    ExtentHtmlReporter htmlReporter;
    ExtentReports extent;
    public static ExtentTest test;

    public void onStart(ITestContext context) {
        htmlReporter = new ExtentHtmlReporter(System.getProperty("user.dir") + "/test-output/TestResultsReport.html");
        htmlReporter.config().setReportName("Automation Report");
        extent = new ExtentReports();
        extent.attachReporter(htmlReporter);
    }

    public void onFinish(ITestContext context) {
        extent.flush(); // writes all the logged events to the HTML report
    }

    public void onTestStart(ITestResult result) {
        test = extent.createTest(result.getName());
        testName = result.getName();
        test.log(Status.INFO, result.getName() + " Test has Started");
        System.out.println("******* TEST STARTED *******");
    }

    public void onTestSuccess(ITestResult result) {
        test.log(Status.PASS, result.getName() + " Test Passed Successfully");
    }

    public void onTestFailure(ITestResult result) {
        test.log(Status.FAIL, result.getName() + " Test Failed: " + result.getThrowable());
    }

    public void onTestSkipped(ITestResult result) {
        test.log(Status.SKIP, result.getName() + " Test Skipped: " + result.getThrowable());
    }

    public void onTestFailedButWithinSuccessPercentage(ITestResult result) {
    }
}
#Step 3: To attach a screenshot to the report for every failed assertion, add the class softAssertionListener in the same utils.Listeners package

package utils.Listeners;

import java.io.IOException;
import java.util.Map;

import org.testng.asserts.IAssert;
import org.testng.asserts.SoftAssert;
import org.testng.collections.Maps;

import utils.baseFunctions; // adjust to the package where your baseFunctions class lives

public class softAssertionListener extends SoftAssert {

    private final Map<AssertionError, IAssert<?>> m_errors = Maps.newLinkedHashMap();

    @Override
    protected void doAssert(IAssert<?> a) {
        onBeforeAssert(a);
        try {
            a.doAssert();
            onAssertSuccess(a);
        } catch (AssertionError ex) {
            onAssertFailure(a, ex);
            m_errors.put(ex, a);
            try {
                // Attach a screenshot to the Extent report for every failed soft assertion.
                // testName is set in extentListener.onTestStart().
                extentListener.test.fail("Soft assertion failed: " + ex.getMessage());
                extentListener.test.addScreenCaptureFromPath(baseFunctions.takeScreenShot(extentListener.testName));
            } catch (IOException e) {
                System.out.println("Could not capture screenshot: " + e.getMessage());
            }
        } finally {
            onAfterAssert(a);
        }
    }

    @Override
    public void assertAll() {
        if (!m_errors.isEmpty()) {
            StringBuilder sb = new StringBuilder("The following asserts failed:");
            boolean first = true;
            for (Map.Entry<AssertionError, IAssert<?>> ae : m_errors.entrySet()) {
                if (first) {
                    first = false;
                } else {
                    sb.append(",");
                }
                sb.append(" ").append(ae.getKey().getMessage());
            }
            throw new AssertionError(sb.toString());
        }
    }
}
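
To see the assertAll() aggregation in isolation, here is a small dependency-free sketch of the same idea; the class and method names are invented for illustration and are not part of TestNG:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AssertAggregator {
    // Failed checks are recorded here instead of being thrown immediately (the soft-assert idea).
    private final Map<String, AssertionError> errors = new LinkedHashMap<>();

    public void check(boolean condition, String message) {
        if (!condition) {
            errors.put(message, new AssertionError(message));
        }
    }

    // Build the combined failure message the way assertAll() does above.
    public String summary() {
        if (errors.isEmpty()) return "";
        StringBuilder sb = new StringBuilder("The following asserts failed:");
        boolean first = true;
        for (String msg : errors.keySet()) {
            if (first) { first = false; } else { sb.append(","); }
            sb.append(" ").append(msg);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        AssertAggregator soft = new AssertAggregator();
        soft.check(2 + 2 == 4, "arithmetic works");
        soft.check("red".equals("blue"), "colours match");
        soft.check(1 > 2, "one is greater than two");
        // prints: The following asserts failed: colours match, one is greater than two
        System.out.println(soft.summary());
    }
}
```

The point of the pattern is that a failed check does not abort the test; all failures are reported together at the end.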

#Step 4: Execute your test cases and visualise your beautiful report

Here’s what the final reports look like:

Passed test case


Failed Test Case - includes screenshots & the failure details


And that’s how you can create interactive and detailed automation test case reports.

Do you have some other tips and tricks for presenting automation test reports? Don’t forget to share them in the comments below.

Topics: QA and Testing

Testing with Emulator: A step-wise guide

Posted by Deepshikha Singh on Oct 19, 2017 2:52:00 PM

What is an Emulator?

An Emulator is an application that emulates real mobile device software, hardware, and operating systems, allowing us to test and debug our application. It is generally provided by the device manufacturer. Mobile emulators are free and provided as a part of SDK with each new OS release. As a developer or a tester, you can configure the emulator to closely resemble the devices on which you plan to deploy your application.

The emulator window consists of a device screen on the left and phone controls/keyboard on the right. The device screen initially displays ‘ANDROID‘, and then displays the graphical logo, while the Android platform associated with the AVD is initializing.

What is AVD Manager?

AVD Manager is a tool to create and manage Android Virtual Devices (AVDs), which define device configurations for the Android Emulator. Before you can actually run an emulator, you need to install the Android SDK on your machine and define an AVD, which specifies the hardware characteristics of the emulator. You can define things like the device RAM, whether there is a touch screen and/or keyboard, camera support, audio playback support, etc. You can create several AVDs to test your app on several virtual devices.

How to Create an Android Virtual Device?

Once installation of all the prerequisites is done, we need to launch the SDK Manager to download the relevant files to create an emulator and run the virtual device.

  • In the command line (cmd), type android sdk. This will open the SDK Manager for you to download the relevant files
  • In the SDK Manager, select the files as shown below. This will help you create a virtual device

There are two ways to create Android Virtual Devices for mobile testing:

  • Using Command Line
  • Using AVD Manager

Creating a New Android Virtual Device using AVD Manager

The first step is to launch the AVD Manager and for that, there are a number of options you can follow:

  • Launch AVD Manager using the Program Menu: Go to Start → All Programs → Android SDK Tools → AVD Manager
  • Launch AVD Manager using the Command Line: Go to Run and type cmd to open the command prompt window. Type: android avd
  • Launch AVD Manager from the folder location: Browse to the Android SDK folder and double-click on AVD Manager.

Whichever way you choose, it opens the Android Virtual Device Manager, which helps you create the virtual devices. The AVD Manager main screen shows one default virtual device, the Nexus 5.

1. Click on the Create button to create a new AVD

Create button to create a new AVD

2. A pop-up will open; follow the screenshot below for the values. In case you are simulating a specific device, you might want to name it as such. For example, NexusSix-4.4 refers to an AVD that simulates a Nexus Six running Android 4.4.2.

NexusSix-4.4 refers to an AVD

In this test I chose a very generic name, Android, to keep it simple.

Note: You can play around with different configurations for your virtual device.

 3. Click on OK to save the changes.

Click on OK to proceed

4. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab of the AVD Manager. Now select the AVD name and click on Start on the right.

AVD name and click on Start on the right

5. This launches a pop-up with a few options; choose as you want. Once done, click on Launch, and this will launch the emulator.

click on Launch, this will launch the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to start from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot.
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

 6. The emulator will launch; it takes a few minutes to display the Home Screen of the virtual Android device. The time taken is comparable to the switch-on time of a real mobile device.


  • You can use the command ‘adb devices‘ to see if adb is detecting the emulator. This basically completes the Android SDK installation part.
  • If the ANDROID logo appears for more than 15-30 minutes, something has probably gone wrong. Reboot your computer, start the AVD Manager, delete the ‘Android’ AVD you created, recreate it, and relaunch the new AVD.

The emulator window launches with 5554:Android in its title bar. The value 5554 identifies a console port that you can use to query and control the AVD’s environment. Android supports a maximum of 16 concurrently executing AVDs, and each AVD is assigned an even-numbered console port starting at 5554. This means we can initialize more than one AVD at the same time and test the app with parallel execution.
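
The port arithmetic is simple enough to sketch; the class below is purely illustrative (the constants come from the behaviour described above, and adb conventionally uses the port one above the console port):

```java
public class EmulatorPorts {
    static final int BASE_CONSOLE_PORT = 5554; // first emulator's console port
    static final int MAX_AVDS = 16;            // documented concurrency limit

    // Console port for the i-th concurrently running emulator (0-based index).
    static int consolePort(int index) {
        if (index < 0 || index >= MAX_AVDS) {
            throw new IllegalArgumentException("Android supports at most " + MAX_AVDS + " concurrent AVDs");
        }
        return BASE_CONSOLE_PORT + 2 * index; // even-numbered: 5554, 5556, 5558, ...
    }

    // The adb connection for the same emulator sits on the next (odd) port.
    static int adbPort(int index) {
        return consolePort(index) + 1;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println("emulator-" + consolePort(i) + " (adb port " + adbPort(i) + ")");
        }
    }
}
```

This is also why `adb devices` lists emulators as emulator-5554, emulator-5556, and so on.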

Creating an Android Emulator using Command Line

The Android Emulator can be configured differently to simulate different Android devices. With the help of AVD configuration, the Android Emulator can simulate:

  • Target platform versions
  • Screen sizes
  • Resolutions
  • Input methods
  • Hardware configurations
  • External storage sizes for the SD card

Although there are a good number of default device configurations present in the AVD Manager, if you have a particular device in mind that you want to emulate, you need to create an AVD that resembles the features of your target device. For example, you need to know its screen size, resolution, and so on.
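
With the legacy SDK tools this guide uses, the whole flow can also be driven from the command line; the AVD name and target id below are examples, so list your installed targets first:

```shell
# Legacy 'android' tool commands (newer SDKs replace it with avdmanager)
android list targets                                   # find the target id for your Android version
android create avd --name NexusSix-4.4 --target android-19 --abi armeabi-v7a
emulator -avd NexusSix-4.4                             # boot the new AVD
```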

Creating Default Device Definition

1. Go to Device Definitions tab and select one of the presets and click ‘Create AVD’

2. Every default device provides some preset AVD values, and those can be changed according to your need


Note: Notice that the OK button is disabled because under CPU/ABI it says “No system images installed for this target“. To avoid this, select a different device configuration from Device, select any skin (or none), and proceed.

Below is the list of optional and mandatory settings:

  • AVD Name: Choose a name for the AVD, like NexusSix-4.4, that refers to your device configuration
  • Device: Select the AVD resolution as per the device options
  • Target: Select the Android version / test environment or target platform
  • CPU/ABI: Select the CPU architecture matching the system image you have installed, e.g. Intel Atom (x86) or ARM (armeabi-v7a)
  • Keyboard: Select this box to use the keyboard in the AVD
  • Skin: Select this box to get the hardware buttons
  • Front Camera: If the system has a webcam, it can be used with the AVD
  • Back Camera: If the system has a webcam, it can be used with the AVD
  • Memory Options: Set RAM & VM Heap for the device according to your need
  • Internal Storage: Set this as per your need and select GiB/MiB from the drop-down
  • SD Card: Set this as per your need and select GiB/MiB from the drop-down
  • Emulation Options

- Snapshot: Select this to persist the emulator state; it lets you quickly start the emulator after the first startup.

- Use Host GPU: Select this to use the computer's graphics hardware

3. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab of the AVD Manager. Now select the newly added AVD name and click on Start on the right.

4. This launches a pop-up with a few options; choose as you want. Once done, click on Launch, and this will launch the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to be started from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

Once done, the emulator will launch. It takes a few minutes to display the Home Screen of the virtual Android device.

Creating a Custom Device Definition

In case the available device definitions do not match the device type you’d like to emulate, you can create a custom device definition for your AVD:

1. If the AVD manager is opened, go to Device Definitions

2. Click on Create Device.

3. The Configure Hardware Profile window will be displayed, allowing you to specify various configurations such as screen size, memory options, input type, and sensors.

Note: Once all the information is filled in properly, the Create Device button will be enabled and you may proceed.

4. After you complete the above steps, the device will be displayed under the Device Definitions tab of the AVD Manager. Now select the newly created device definition and click on Create AVD on the right.

Automation using Android Emulator

First, get the emulator's platform version:

  • Unlock the Android emulator screen
  • Go to Settings. You will find About Phone under Settings.
  • Go to About Phone. It will show you the Android version.

Then, verify that the Calculator app is available in the emulator.


We are going to run an Appium test for the Calculator application, so it should be present in the emulator. Generally, the Calculator app is already installed in the emulator. To check whether it is installed:

1. Unlock emulator

2. Verify that there is an application named Calculator

3. Get the app activity and package name. We need the launcher activity and package name of the Calculator app. The activity and package name of the Calculator app for me are:

  • Package name :
  • Activity name :

4. Create the Appium test script in Eclipse. Now we are ready to create and run our first Appium test on the Android emulator for the Calculator application. I have prepared the Appium test script as below, using Selenium WebDriver's RemoteWebDriver to launch the app with the required capabilities.
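
The script itself is not reproduced in the original post, so here is a minimal sketch of what such a RemoteWebDriver-based test could look like. It assumes an Appium server running on the default local port, and the stock Calculator identifiers (com.android.calculator2 / com.android.calculator2.Calculator), which you should verify against your own emulator as described in step 3:

```java
import java.net.URL;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class CalculatorTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("deviceName", "Android");   // the AVD name created earlier
        caps.setCapability("platformName", "Android");
        // Package/activity names assumed for a stock emulator; verify on yours
        caps.setCapability("appPackage", "com.android.calculator2");
        caps.setCapability("appActivity", "com.android.calculator2.Calculator");

        // Appium server assumed to be running locally on its default port
        WebDriver driver = new RemoteWebDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

        // Drive the calculator: 2 + 5 =
        driver.findElement(By.name("2")).click();
        driver.findElement(By.name("+")).click();
        driver.findElement(By.name("5")).click();
        driver.findElement(By.name("=")).click();

        System.out.println("Result: " + driver.findElement(By.className("android.widget.EditText")).getText());
        driver.quit();
    }
}
```

Run it with the emulator booted and the Appium server started; the capabilities map directly to the package and activity names gathered in step 3.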

5. To start an instance of the emulator from the command line, navigate to the tools/ folder of the SDK and enter the emulator command like this: emulator -avd <avd_name> [<options>]

So that was all about performing Appium testing on emulator. In case you are just starting off with mobile application testing, you can explore our list of the best tools for mobile test automation. You can also check out our Appium tutorial webinar to level up your mobile testing skills.

Topics: Coding and Tutorial, QA and Testing

Drag-&-Drop Custom Layout Builder

Posted by Shamsher Alam on Oct 13, 2017 3:03:00 PM

The Custom Layout Builder is an application that allows a developer to create an HTML page. The application provides a grid layout, API, and UI that can be used to create a webpage. The developer need not write code; the code is auto-generated.

You can check out this quick demo to get started.


Various functionalities that the layout builder provides are:

  • Drag and drop columns & rows – Depending on the screen layout requirements

  • Resize, delete and add columns – Depending on the content of the columns using the different icons

  • Adjust layout depending on type of screen – phone, tablet and desktop

  • Quickly edit the source code directly – Code is generated and can be viewed on the click of the '</>' icon. This can be accessed by the developer and modified as per requirements

  • Content in the columns can be added using TinyMCE.

  • Pluggable CSS Class and ID Framework. Developers can add their own classes within the code using the icon. As of now, there are a couple of classes that can be auto-generated.

  • Deletion of code that is not required.

Software Components

  • TinyMCE – A WYSIWYG Editor that allows the user to add text, tables and format content.

  • jQuery – A JavaScript library used to implement the layout builder

  • HTML – To develop the layout builder

  • Font Awesome – Library used to get the icons, customize them, and add relevant CSS

  • Bootstrap.js – Library that allows incorporating jQuery and jQuery Grid Library plugins

  • jQuery Grid Library – Allows you to generate source code for grid operations like insert, filter, delete, etc.

Assumptions: None

Open Issues: None

You can find the Custom Layout Builder on GitHub and start using it right away.

If you have any queries about the Custom Layout Builder or feedback you would like to share, please let us know in the comments below.

Topics: Drupal, QA and Testing

Best tools for mobile test automation

Posted by Nitin Kumar on Jun 6, 2017 9:16:00 PM

In our previous blog, we covered what mobile testing is all about, and a quick checklist to make sure you test all the right things. In this blog, we take a look at how mobile testing can be made easier and faster, i.e. mobile test automation. 

Why Do We Need Mobile Test Automation?

A mobile application comprises two parts:

  • the API section, which does the processing
  • the UI part, which does the display and takes care of the workflows

From a testing perspective, it is relatively straightforward to test the APIs individually, as we can hit the endpoints directly, either manually using a REST client such as Postman, or with automation tools like JMeter, SoapUI, etc. However, the key challenge that remains is testing whether the apps and the APIs work in tandem with the UI.

So what exactly is the challenge that we are talking about here? 

Look around you and do a quick sampling. You will see at least 3 to 4 screen sizes, 2 major operating systems (if not 3), and 4-5 operating system versions. A quick calculation reveals approximately 40 possible screen-size-OS-version combinations just from a sample of 10 mobile phones. Now imagine the numbers if you were to create an application which is meant to work on everyone's phone!
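
That quick calculation is just the product of the independent dimensions; as a trivial sketch (the class and method names are invented for illustration):

```java
public class TestMatrix {
    // Number of device combinations to cover: one test pass per
    // (screen size, operating system, OS version) triple.
    static int combinations(int screenSizes, int operatingSystems, int osVersions) {
        return screenSizes * operatingSystems * osVersions;
    }

    public static void main(String[] args) {
        // The sample from the text: ~4 screen sizes, 2 OSes, 5 OS versions.
        System.out.println(combinations(4, 2, 5)); // prints 40
    }
}
```

Every extra dimension (network condition, locale, app version) multiplies this number again, which is what makes manual coverage infeasible.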

The number of possible test combinations can be huge considering just the standard screen sizes, the OSes, and the major updates/releases. With a high frequency of application updates, it becomes very difficult for testers to test the application thoroughly in the short time spans available. And this is exactly why we need mobile test automation: to help reduce the overhead of testing the features manually.

Tools of the Trade

There are various tools that are available to test mobile applications. Let’s take a look at some of the most prevalent mobile test automation tools, their pros and cons, and their best use cases.

The Open Source Tools

Appium: The Appium tool is built over Selenium, with a similar syntax, and has some additional capabilities which allow it to interact with the mobile device. With its strong compatibility with Selenium, this is one of the preferred open source tools.

One of the biggest challenges that we come across when automating a mobile application test is finding the correct locators for the elements. Appium comes with a built-in tool, UIAutomator, which helps easily find the locators. Appium also has a recorder that helps record the steps and use them in the form of a script.

In terms of application support, it supports Android and iOS, covering both native and hybrid applications.

If you want to take a closer look at the tool, do check out our webinar on “Simplifying Mobile Test Automation with Appium”.

Calabash: The Calabash tool is another well-known tool in the open source community for mobile test automation. The tool is built atop Cucumber and supports the BDD style of automation.

The challenge of locators is handled quite well in Calabash. There is a query language used here, which returns all the elements present on the screen. This has been very well explained here.

Calabash, too, has good support for Android and iOS applications. The base scripts here are written in Ruby.


When comparing the two open source tools, it becomes very important to understand when to use a particular tool. With modern day automation, the preferred practice is to have a single suite for testing both the web and the mobile applications. And that becomes a major factor when choosing which mobile test automation tool to use.

Consider a simple example where we have a web application and a corresponding mobile app. If we have done the automation of the web component using Selenium + Java, Appium becomes an automatic choice. That’s because Appium is built over Selenium. However, if we have used Ruby for the web testing automation, we would prefer Calabash. 

In terms of features and capabilities, both the tools are equally powerful and capable.

BDD is another factor that has a role to play in the tool finalisation. With QA professionals moving towards a lot of BDD tests written by the product owners, it has become increasingly important that we perform mobile test automation with the same approach. This is where Calabash has an edge over Appium.

Pro-Tip: Considering the fact that almost all QA professionals have basic Selenium automation experience, I feel it’s best to start off mobile test automation with Appium. With a syntax that’s similar to Selenium, it greatly reduces the learning curve for a team that’s doing mobile testing for the first time.

Licensed Tools

Perfecto Mobile: This is more of a web tool that allows a user to record a script and play it back across different devices. Users are also allowed to extend the scripts by writing specific code that they feel may be needed.

The advantage of using Perfecto Mobile is the reduction in the cost of procuring actual devices. The devices are present in the Perfecto Mobile lab and each test is run on actual devices where the actions that we wish to perform are performed and streamed online. 

Another advantage that comes with Perfecto Mobile is that you can create a test using any of the devices and the same can be played back on any other device. This makes sure that all the tests work fine on actual devices.

And finally, we can simulate interrupts without any additional steps; there are built-in functions available to simulate interrupt operations.

SeeTestAutomation: This tool allows you to work on a level lower than Perfecto Mobile. Here, we can actually write code and perform the functions that we want. There is a recorder available to record simulated scenarios. Additionally, there is a SeeTestCloud which allows the tests to be run on actual hosted devices.


Interestingly, both these tools provide more or less the same capabilities, including a mobile device lab, recorders, etc. The difference lies in the built-in features that come with each. The tool of choice here is SeeTestAutomation, especially after looking at the default capabilities it supports. These include:

  • Device monitoring
  • SIM card services
  • Mocking GPS location [an important feature in mobile testing]

Personally speaking, I think it’s best to choose one of the open source tools for mobile test automation. It gives you the liberty to grow and expand the tool as required. Meanwhile, you can use the paid platforms just for device support, and also get services like BrowserStack to access browsers and app platforms for testing.

But that’s just what I think. Rest is up to the automation testers to think and decide.

Meanwhile, you can check out our QA Automation services as well as exploratory and accessibility testing capabilities. 

Topics: QA and Testing

Galen vs Wraith: Comparing automated regression testing tools

Posted by Deepshikha Singh on May 26, 2017 4:03:00 PM

With enterprises wanting to offer optimal viewing and interaction experiences across devices, there’s increasing emphasis on building responsive websites. And hence, visual testing has become a key component of the QA process. Additionally, there are several tools available to perform visual testing, and QA teams need to be able to choose the right one, depending upon project requirements.

Recently, our QA team was engaged with an Australian insurance enterprise, carrying out manual testing, and device/browser testing for all their new website features. During the engagement, the client also requested automated visual testing for their website. 

We proposed the open-source Galen tool. However, since the client already had a stable environment, it was decided that the Wraith tool would be better suited to the project. And so we used Wraith to run tests on each deployment on the production environment, validating CSS, image comparison, styling, and other visual aspects. 

This got us thinking about the key differences between Galen and Wraith, and how they are suited for different project requirements. So here we take a quick look at the two most prevalent visual regression tools, comparing them on the basis of features, best use cases, advantages, and drawbacks, to help decide upon the one that is best suited for your enterprise project.

So that was our handy chart to help you compare between Galen and Wraith. Decide the parameters that are the most important for your project, whether it’s the environment, or integration capabilities, or cross-browser testing, and check out how the two tools stack up. And then choose the one that works best for you.

Once you have locked down your tool, your in-house QA team can get started with it. However, if you wish to onboard a QA Automation team, just write to us below and let’s explore how we can help.

Topics: Framework and Libraries, QA and Testing

The basic mobile testing checklist

Posted by Nitin Kumar on May 17, 2017 9:08:00 PM

Mobile has become a buzzword for practically every industry that you can think of. Whether you wish to sell a service, or garner a huge user base, or grow a community, you need a mobile strategy. And more often than not, for enterprises and product companies, leveraging mobile inevitably means engaging your customers via an app. 

However, someone has to ensure that all these apps getting created, or websites accessed via a mobile browser, are actually functioning as intended. Are they offering customers the responsiveness and ease-of-use that you want your brand to be known for, or are they delivering buggy, extremely frustrating experiences? And that is why you need mobile testing.

What is Mobile Testing?

Mobile testing is performed to make sure that the website or app under development is delivering the features and functionalities that it is meant for. As with any QA testing, mobile testing involves the following types:

Functional Testing: This type of testing deals with ensuring that the functional behavior of the application is as expected. This includes:

  • Checking and comparing with wireframes
  • Checking if the user flows are as per requirements
  • Ensuring that the UI is as per the design
  • Checking if the app works well across devices
  • Ensuring the app works well across operating systems

Performance Testing: Performance testing is very crucial when developing a mobile application. For example, let’s look at a simple ticket booking system. There are thousands of travelers who could be booking tickets for buses, trains, and flights. Even if the application is for one single mode of transport, at peak time there could be thousands of requests hitting the application. This makes it very important that the app is able to handle the load.

Having said that, it is actually the API that caters to the requests. However, we need to ensure that the app does not fail when there are slow responses.

Security Testing: The most valuable thing today is data. And when users interact with a mobile website or application, there are numerous scenarios that involve data security:

  • To make sure that people use an app, it is very important that they trust their data will not be compromised. 
  • Another aspect of security is when there are payment portals integrated with the application. The app needs to make sure that the card details that are entered are not compromised.
  • Also, in terms of data privacy, the app makers need to ensure that the app/APIs are not prone to the common security threats.

Interrupts Testing: This testing is very specific to mobiles. Here we need to make sure that the application runs fine even when there are system generated interrupts. System generated interrupts can be any of the following:

  • Incoming phone call

  • SMS

  • MMS

  • Airplane mode

  • Low battery popup

  • Phone shutdown 

  • Low memory
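
Most of these interrupts can be simulated on an emulator without a second phone, via the emulator console commands that adb forwards; the phone number below is an arbitrary example:

```shell
# Emulator console commands, sent through adb; an emulator must be running
adb emu gsm call 5551234567          # simulate an incoming phone call
adb emu gsm cancel 5551234567        # hang up the simulated call
adb emu sms send 5551234567 "hello"  # simulate an incoming SMS
adb emu power capacity 5             # report 5% battery to trigger the low-battery popup
adb emu power status discharging     # report the battery as discharging
```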

Installation Testing: This is another key aspect unique to mobile testing. Any application goes through a series of development stages and hence we need to test for the following:

  • Installation using .apk files on Android, and .ipa files on iOS

  • Installation from the Google Play Store on Android and the App Store on iOS

  • Installation of updates from the Google Play Store on Android and the App Store on iOS


Usability Testing: The final key area for testing on a mobile device is usability testing. The tester needs to make sure that the app is easily usable, all navigations are as per the wireframes, and meet the iOS and Android standards. Also, we need to test the application on different screen sizes and make sure that there are no issues in the rendering of the application.

Is Mobile Testing the same as Web Testing?

One very common question that we hear is: what is the difference between web-based testing and mobile testing?

Indeed, testing on a mobile web browser is similar to testing on a desktop web browser. The functional, performance, and security testing is common for mobile and web testing processes. However, there are certain key differences that need to be kept in mind:

  • There are a lot of native events that occur on a mobile device that would not occur on a desktop browser. To name a few would be phone calls, messages, airplane mode etc. And hence interrupts testing is unique to mobile testing.

  • Mobile testing is not limited to browser alone, there are apps that need to be tested. The current user preference is to use an app rather than browsing on a website, even a mobile optimized website. And while testing an app, the perspective completely changes in the sense that we are no longer just looking at the features and functionality, but also having a close look at the UI. There are separate wireframes that are created for testing on Android and iOS, primarily to maintain the design standards and also to be able to utilize the native features available with the OS.

What Are the Basics of Mobile Testing?

While the goal of mobile testing is the same as that of web testing, there are certain additional elements that get added to the basics of mobile testing. Given that the application has to run on a mobile device, we need to consider and test for certain native conditions that do not exist when operating in a web browser. Below is a quick checklist to consider when performing basic mobile testing:

  1. App installation - Test the app installation on different OSes and devices with different supported OS versions.

  2. Interrupt conditions - When the app is loaded or is loading, and a call is received, it generates an interrupt condition. It is important to check if the app handles the interrupt conditions properly.

  3. Offline mode - Check if there are any crashes/issues seen when the app is operated in no network/airplane mode.

  4. Landscape and Portrait orientation - Check if the app supports a change in device orientation. If it does, ensure that upon screen rotation the UI remains consistent.

  5. Location based services - If the app has any location services, it should display the location correctly. It is very important to check this on mobile data as well as Wi-Fi networks.

  6. Test the app on Wi-Fi and data network - Check if the app works correctly on mobile data as well as Wi-Fi.

  7. Simulate phone restart interrupt - Check if the app works fine if the phone restarts when the application is running.

  8. Simulate message interrupt - Check if the app works fine if there is a message received when the application is running.

  9. Simulate a low battery interrupt -  When the app is running, simulate a scenario for a low battery popup and check how the application responds to the popup.

  10. Simulate a low memory interrupt - When the app is running, simulate a scenario for low memory pop-up and check how the application responds to the popup.

  11. Security - It is important to ensure that the app being used is secure. There should not be any vulnerabilities in the form of advertisements or any other medium that causes a threat to users’ data.

  12. App social network integration - With almost all applications using Facebook and Google integrations for login, it becomes very important for testers to check if the privacy policy and notification settings are considered in the application.

  13. Performance requirements - It is important to know the minimum configuration needed for the application to run properly. Hence we need to do a proper benchmarking of the application to ensure that it does not hamper the performance of the device.

  14. Images - In the case of images used in the application, the images should not get distorted/stretched/pixelated when uploaded.

  15. App installation from App store/iTunes - Check if the app gets correctly installed when downloaded from the App store/iTunes.

This basic checklist helps you make sure that you do not miss out on testing any key feature, scenario, or functionality while performing mobile testing. You can take a look at the most prevalent mobile test automation tools and decide which one to use, depending upon your project requirements.
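Several of the interrupt items in the checklist above (call, message, low battery, airplane mode) can be simulated from the command line when testing on an Android emulator. The commands below are a sketch, not from the original post: they assume Android platform tools (adb) are installed and an emulator is running, and some of them need elevated permissions on newer Android versions.

```shell
# Simulate an incoming call (works on emulators)
adb emu gsm call 5551234

# Simulate an incoming SMS (works on emulators)
adb emu sms send 5551234 "Test message"

# Toggle airplane mode on (may require extra permissions on newer Android)
adb shell settings put global airplane_mode_on 1
adb shell am broadcast -a android.intent.action.AIRPLANE_MODE

# Fake a low battery level to trigger the low-battery popup
adb shell dumpsys battery set level 5

# Restore real battery reporting afterwards
adb shell dumpsys battery reset
```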

You can also check out our range of webinars on QA and Testing, especially our session on mobile test automation, to know more about mobile testing and the best practices for mobile test automation.

Topics: QA and Testing

An Introduction to Selenium

Posted by Deepshikha Singh on Mar 20, 2017 2:40:00 PM

Selenium is an open source software testing framework released under the Apache 2.0 license. It was built in 2004 by Jason Huggins and other testers at ThoughtWorks. The name 'Selenium' came from an email in which Huggins mocked a competitor named 'Mercury', saying that one can cure mercury poisoning by taking selenium supplements.

Selenium is mainly used for functional and regression testing. It is built in Java and is highly useful for testing web applications.

What do we use at Srijan?

- Selenium IDE
- Selenium RC
- Selenium WebDriver

Selenium IDE (Integrated Development Environment) is a Firefox extension which allows recording, editing and debugging of tests. Selenium IDE was originally created by Shinya Kasatani and donated to the Selenium project in 2006. You can record and play tests using Selenium IDE. It reduces manual effort and saves time as it can be reused while retesting.

Scripts are recorded in Selenese, a special test scripting language for Selenium. Selenese provides commands for performing actions in a browser and for retrieving data from the resulting pages.

Key features:

- Easy to record and run.
- Can create and run test suites (multiple tests).
- Can apply breakpoints at any point.
- Can edit scripts manually.
- Can export test scripts to Java, C# and other languages.
- Can easily assert the title of any page.

Selenium RC (Remote Control, also known as Selenium 1) offers a more flexible and more complex approach to creating and running browser tests. With Selenium RC it is possible to run tests inside every JavaScript-compatible browser using a wide range of programming languages. Your tests issue commands which the client library sends to the server. The server then 'runs' your actions in the browser and reports the results.

Selenium RC allows you to write automated tests in any supported programming language. Tests written in this way allow you to use standard programming practices, making them easy to maintain, robust, and easy to collaborate on as a team. Selenium RC is a server, written in Java. It accepts commands for the browser via HTTP. RC makes it possible to write automated tests for a web application in any programming language, which allows for better integration of Selenium in existing unit test frameworks. To make writing tests easier, the Selenium project provides client drivers for PHP, Python, Ruby, .NET, Perl and Java. The Java driver can also be used with JavaScript. A new instance of the Selenium RC server is needed to launch HTML test cases, which means that the port should be different for each parallel run. However, for Java/PHP test cases, only one Selenium RC instance needs to run continuously.

Selenium WebDriver (also known as Selenium 2) is a browser automation framework that accepts commands (in Selenese or via a client API; Java in our case) and sends them to a browser. It is implemented through a browser-specific driver which sends commands to a browser and retrieves results.

Why the move to Selenium WebDriver from Selenium RC ?

- Cleaner APIs in WebDriver.
- Ability to handle multiple windows and modal windows effectively.
- Ability to handle mouse hover movements.
- Ability to handle window scrolls.
- Does not require Selenium RC server to be running.
- Native automation is faster and a little less prone to errors and browser configuration issues than Selenium RC.

We use:

IDE: Eclipse Helios and JUnit
Programming language: Java for writing scripts in Selenium WebDriver
Key features:
- Supports all modern browsers (Firefox, Chrome, IE, Safari, Opera, etc.).
- Supports all major operating systems (Windows, Linux, Mac).
- Cleaner APIs than Selenium RC.
- Object-oriented API, which leads to more reliable tests.
- Can be used for Flash testing and Ajax handling.

Selenium scripting is useful for handling projects which are complex and of longer duration. Selenium is now a widely used automation framework.

Steps to configure WebDriver with Eclipse:

1. Download the latest version of Selenium WebDriver (select the Java bindings file to download).
2. Download Eclipse.
3. Download the latest version of Java.
4. Extract the files from the Selenium WebDriver .zip archive.
5. Open Eclipse and create a new Java project.
6. Create a package under the new Java project, then go to Configure Build Path.
7. From Configure Build Path, click on Add External JARs.
8. Add the JAR files from the Selenium WebDriver folder that you extracted.
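With the project configured, a first WebDriver test might look like the sketch below. This is illustrative, not from the original post: the class name and URL are placeholders, and running it requires the Selenium JARs on the build path plus the Firefox driver executable available to the system.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FirstTest {
    public static void main(String[] args) {
        // Launch a Firefox browser session through its driver
        WebDriver driver = new FirefoxDriver();
        try {
            // Open a page and read its title
            driver.get("https://example.com");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            // Always close the browser, even if a step fails
            driver.quit();
        }
    }
}
```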


Topics: Framework and Libraries, QA and Testing

Visual Regression with Galen

Posted by soumyajit.basu on Jan 16, 2017 11:52:00 AM

Galen is an open source layout testing tool for software applications, which helps us test the look and feel of the application. The idea behind using Galen is to simplify layout testing where we won’t need to manually test an application for UI based issues. Galen can also be used to automate UI regression.


Currently Galen is a fully functional testing framework with rich reporting and test management system. It supports both Javascript and Java. It has its own syntax called Galen specs for describing the positioning and alignment of elements on a web page. It uses Selenium to interact with the page objects.

Here's a Galen framework tutorial that'll help make sense of the tool, and all the concepts involved.

We also have a quick video which will take you through all the steps and concepts explained in this blog.


Responsiveness of a Browser

The framework is designed to test the responsiveness of an application. It is easy to set up Galen to test the responsiveness of an application for different browsers and for multiple devices. Galen initiates a browser, and resizes it to a defined resolution which can be passed as a parameter when writing the tests. It can then test the responsiveness according to the specifications defined.

Runs on Cloud and Selenium

Galen has been made flexible enough to run on third-party cloud services such as BrowserStack and Sauce Labs. The tests can also be run using Selenium Grid. This would help save time when running tests on multiple devices.



Installation

To execute Galen, Java version 1.8 is required.

NPM based installation

For an npm based installation, just type in the command: sudo npm install -g galenframework-cli

Manual Installation

Install on OS X and Linux

  • Download the archive from the Download Page and extract it in any directory. Go to the extracted Galen directory and execute the following command: sudo ./
  • To check whether Galen is successfully installed execute the following command: galen -v.

The above command should print the installed version of Galen within your system.

Configuring Galen

To configure Galen, create a project directory and inside that, execute the following command: galen config

This creates a galen.config file in your project directory, which contains the configuration parameters for Galen. You need to configure ChromeDriver for Chrome and GeckoDriver for Firefox. To execute your tests in headless mode you can also use PhantomJS.

To configure the drivers use the following syntax:


Nota Bene

Instead of keeping the driver executables for Chrome, PhantomJS, and Firefox inside your project, you can also install them on your local machine.
Since I am using a Mac, I will use Homebrew to install the drivers.
1. To execute your tests on Firefox you will need GeckoDriver, which can be installed using brew install geckodriver. This will install the latest stable version of GeckoDriver on your system.
2. To execute your tests on Chrome you will need ChromeDriver, which can be installed using brew install chromedriver. This will install the latest stable version of ChromeDriver on your system.
3. To execute your tests headlessly you will need PhantomJS, which can be installed using brew install phantomjs.

Once this is done you just need to give the path of the driver in your galen.config file. This will look something like this:
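The configuration snippet from the original post was not preserved. Assuming the drivers live in a local Drivers folder, a galen.config fragment might look like this (the $. prefix sets Java system properties; the paths are illustrative):

```
galen.default.browser = chrome

$.webdriver.chrome.driver = ./Drivers/chromedriver
$.webdriver.gecko.driver = ./Drivers/geckodriver
$.phantomjs.binary.path = ./Drivers/phantomjs
```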


Modus Operandi

Galen operates in the following manner.

  • Galen opens a page in a browser
  • Resizes the browser to a specific size
  • Tests the layout with Galen specs

In case the page is not directly accessible, Galen allows you to perform some operations on the website first. As it is based on Selenium, all operations related to clicking, typing, injecting client-side JavaScript can be done very easily.

Galen specs is the language with which the specifications of responsiveness can be defined as per different devices. The following code snippet will give an idea of how easy it is to implement the page specifications using Galen.

= Main section =
    header:
        height 100px
        inside screen 0px top
        width 100% of screen/width

    menu:
        height 50 to 60px
        width 100% of screen/width
        below header ~ 0px

    menu-item:
        inside menu 20 to 50px left, 0 to 10px top

    @on mobile
        menu-item:
            width 100px

    @on desktop
        menu-item:
            width 150px

Project Structure

This would be the possible project directory structure.

1. Drivers directory consists of the browser kits required to execute a browser. This directory includes GeckoDriver, ChromeDriver and PhantomjsDriver.

2. PageInteractions directory contains all the JavaScript related logic related to page interactions.

3. Reports directory consists of all the reports executed on Cross Devices, Cross Browsers and specific to only a single device.

  • Cross_Browser_Specs_reports which has the reports for Cross Browsing based test executions
  • Cross_Device_Specs_Reports which has the reports for Cross Browsing and Cross Device based test execution
  • Uni_Specs_Reports which has the reports for tests executed specific to a single device

4. Specs directory consists of the Galen specifications and has two classifications:

  • Cross_Specs which essentially focuses on the characteristics on multiple devices
  • Uni_Specs focuses on the characteristics of a single device

5. Test_Suites directory consists of the test suites required to execute all the tests and has three classifications.

  • Cross_Browser which focuses on tests written for multiple browser and multiple devices
  • Cross_Devices which focuses on tests written for multiple devices only specific to the default browser mentioned in the galen configuration
  • Uni_Devices which focuses on tests written for a single device, using the default browser mentioned in the galen configuration

6. galen.config consists of all the configuration related parameters required to execute Galen.

Galen Specs

Galen specs language is quite flexible and gives the opportunity to express exactly how you want your website to behave on different devices and on different browsers. There are two main parts in page specs file:

Object definition

Each Galen spec file normally starts with an object definition. This is where you give names to page objects and also define the so-called locators: the way for Galen to find an element on the test page. The available locators are:

  • id – searches for object by id in DOM
  • css – uses CSS selectors to find objects
  • xpath – uses XPath expressions

The object definition is defined with @objects keyword and all objects are declared below it. The object definition can be defined as follows:

@objects
    header      css
    header-logo css div:nth-child(2) a img.cgu-logo
    carousel    id  hero-carousel-mob

Object Specs

Object specs are the specifications of a page object; you can think of each object spec as an assertion. Object specs are the validation metrics for the page objects, and based on them the validation for the visual regression is carried out.

header:
    height 50px
    width 375px
header-logo:
    height 33px
    width 41px

Sections and Tagging

Sections are declared with the = symbol at the beginning and end of the line. In Galen you can also nest sections within each other. This allows you to structure your test code so that the specs are easier to read.

= Header section =
    header:
        height 50px
        width 375px
    header-logo:
        height 33px
        width 41px

Quite often you need to declare different specs for different conditions (e.g. various devices like mobile, tablet, desktop etc.). You can specify tags in your spec file so that your layout testing is easier to manage. You can wrap your checks inside an @on statement like this:

@on mobile
    header:
        height 50px
        width 375 to 410px

@on desktop
    header:
        height 40px
        width 1024px


absent

One of the simplest specs in Galen, absent does not take any parameters. It is used to check that an element is either missing from the page (not in the HTML code) or present but not visible (e.g. styled display: none).
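For illustration, checking that a hypothetical error-popup object stays hidden would look like this in a spec file:

```
error-popup:
    absent
```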



inside

The inside spec verifies that an element is visually inside another element.

 inside site-nav 10px top


Ranges

The basis for all specs is the so-called “range”. A range is a format for defining boundaries for any value. With a range you can express that the expected value should be exactly 0, greater than 0, less than 0, etc. For example:

# The value between range
width 375 to 400px
# Greater than
width > 40 px
# Less than
width < 40 px
# Greater than or equals
width >= 40 px
# Less than or equals
width <= 40 px
# Approximate
width ~ 100 px

Galen Test Suite

To give users more flexibility and maintainability, Galen provides a simple format for defining test suites. A test suite is a set of pages that need to be checked against the appropriate test specs. There is more to it than that, however: test suites add a lot of power to testing and make test execution much smoother.

The aim of a test suite is mainly to run all the test specifications continuously, without human intervention, and Galen provides this ease through its test suite format. The entire specification of the application can be executed in one go.

Basic Format

The basic format of the test suite looks something like this.

Checking the responsiveness for business section on ${device}
    Business section | http://${domain}/business ${resolution}
       wait 10s until exist "css: div#nebula_div_btn"
       check ././Specs/Uni_Specs/CGU_Business.gspec

Defining the basic format of a test suite in Galen takes place in three simple steps:

  • Defining the name of the test. For this test we have also specified a specific domain and platform, along with the browser resize resolution.
  • Any conditions that need to be met, or a call to an external JavaScript file that defines the page interactions, go in the second step.
  • In the third step you define the page action, which tells Galen to check the page specification for the particular URL.

Passing JavaScript variable to a spec file

You can pass data from your tests into your spec file so that it is accessible from JavaScript statements. To do this, pass it via the --VvariableName variableValue argument. For example:

Home page test 1024x768
        check homepage.gspec --include desktop --Vusername John


Variables

It is always nice to put commonly used data in variables; e.g. we can put the domain name in a variable. This is done with the special instruction set:

@@ set
  resolution 375x667
  device Iphone6

@@ groups Business_Section
Checking the responsiveness for business section on ${device}
  Business section | http://${domain}/business ${resolution}
    wait 10s until exist "css: div#nebula_div_btn"
    check ././Specs/Uni_Specs/CGU_Business.gspec

Or you can use a variable reference when defining another variable:

@@ set
    base_url        http://${domain}/

Home page on a small mobile device
    ${base_url} 320x600
        check   homepage.gspec --include "mobile,all" --exclude "nomobile"

Replacing Page URL in reports

Once the test is done, Galen will generate an HTML report using the page name that you defined in your suite. If you would like to change the way it appears there, you can use the ‘|’ symbol, as in the following example:

Checking the responsiveness for menu in the business section on ${device}
  Breadcrumb in Business section | http://${domain}/business ${resolution}

In the end, Galen will use everything before the ‘|’ symbol in the HTML report, so only the 'Breadcrumb in Business section' part will be shown.


Parameterization

If you want to run tests against different devices or different browsers, you can write a data table and mark the test with the @@ parameterized special instruction:

@@ parameterized
    | deviceName | tags      | size     |
    | Mobile     | mobile    | 320x600  |
    | Tablet     | tablet    | 640x480  |
    | Desktop    | desktop   | 1024x800 |
Home page on ${deviceName} device ${size}
        check homepage.gspec --include "${tags}"

As you can see, we defined a data table where the first row is used as a header with the names of the parameters. Using the example above, Galen will run the test 3 times with all the listed parameters. In the report you will see the test runs like this:

Home page on Mobile device
Home page on Tablet device
Home page on Desktop device

We can also parameterize the already existing parameterized tests in the suite. This can be done in the following way:

@@ parameterized
    | browser  |
    | firefox  |
    | chrome   |
    | ie       |
@@ parameterized
    | deviceName | tags      | size     |
    | Mobile     | mobile    | 320x600  |
    | Tablet     | tablet    | 640x480  |
    | Desktop    | desktop   | 1024x800 |
Home page on ${deviceName} device in ${browser} browser
    ${browser} ${size}
        check homepage.gspec --include "${tags}"


Tables

Tables come in handy when we want to parameterize all the tests running within the test suite. It would look something like this:

@@ table devices
    | deviceName | tags      | size     |
    | Mobile     | mobile    | 320x600  |
    | Tablet     | tablet    | 640x480  |
    | Desktop    | desktop   | 1024x800 |

@@ parameterized using devices
Home page on ${deviceName} device ${size}
        check homepage.gspec --include "${tags}"

@@ parameterized using devices
Login page on ${deviceName} device ${size}
        check login.gspec --include "${tags}"

You can also merge tables inside your parameterization code block:

@@ table deviceList_A
    | deviceName | tags      | size     |
    | Mobile     | mobile    | 320x600  |
    | Tablet     | tablet    | 640x480  |
    | Desktop    | desktop   | 1024x800 |

@@ table deviceList_B
    | deviceName | tags      | size     |
    | Mobile 2   | mobile    | 360x600  |
    | Mobile 3   | mobile    | 400x600  |
    | Mobile 4   | mobile    | 500x600  |

@@ parameterized using deviceList_A, deviceList_B
Login page on ${deviceName} device ${size}
        check login.gspec --include "${tags}"

Disabling a test

A test can be disabled in Galen using the @@ disabled annotation.

@@ disabled
Home page      640x480
        check homepage.gspec

Login page      640x480
        check login.gspec

Test Groups

Since version 1.6 you can tag your tests with groups. This allows you to group your tests in the HTML report and choose which group to run from the command line. To mark a specific test with a group, just use the groups keyword:

@@ groups Business_Section
Checking the responsiveness for business section in ${browser} browser on ${deviceName}
  selenium ${browser} http://${domain}/business ${size}
  wait 10s until exist "css: div#nebula_div_btn"
  check ././Specs/Cross_Specs/CGU_Business.gspec --include "${tags}"

Waiting for Dynamic Content

There may be cases where we have to wait for an element to load in the DOM before the test can proceed. Timeouts can be defined in two ways:

Strict Timeout: You define a timeout in minutes or seconds. For example:

Home page test 640x480
        wait 10s
        check homepage.gspec

Condition Based Timeout: For this way of waiting you provide a locator (css, xpath or id) for an element and a condition for it. There are 4 conditions that can be used in a wait action:

  • element appears in DOM
  • element is removed from DOM
  • element becomes visible
  • element gets hidden

In case the condition fails, Galen will throw an exception and interrupt the test. For example:

@@ groups Business_Section
Checking the responsiveness for business section in ${browser} browser on ${deviceName}
  selenium ${browser} http://${domain}/business ${size}
  wait 10s until exist "css: div#nebula_div_btn"
  check ././Specs/Cross_Specs/CGU_Business.gspec --include "${tags}"


For further information on configuring test suites you can look over here. The entire code repository is present in my GitHub profile.

Would love to know if you have any further thoughts or insights on Galen, or about this Galen framework tutorial. Just drop a line in the comments below and we can start a discussion.

And if you are looking at other resources around testing tools, do check out our guides to Behat and ChimpJS.

Topics: Framework and Libraries, QA and Testing

Behat: Introduction & installation

Posted by Surabhi Gokte on Nov 9, 2016 1:13:00 PM

I got the opportunity to learn and work on Behat for one of the internal projects at Srijan, and I found it easy and interesting. During my learning phase, I noticed that there is not much documentation available on Behat on the internet, and hence this blog.

Hope it fills in some of the gaps in Behat’s supporting literature.

What is Behat?

Behat is a Behaviour Driven Development (BDD) tool which is used to test the behaviour of an application from the end user's point of view. It is a popular open source tool for automated testing of business cases, using human-readable scenarios mapped onto PHP code.

Behat was inspired by Ruby's Cucumber project. Both Behat and Cucumber are based on BDD, but that is the only similarity. Cucumber supports many languages (Ruby, Python, Java etc.), whereas Behat supports only PHP.

We use Gherkin, a language quite like English, to write various scenarios/features for testing an application. Behat can also be extended by writing custom PHP functions. These functions are written in FeatureContext.php file which is created inside the bootstrap folder.

Who Should Use Behat?

Behat was developed for Business Analysts (BA) and developers, so as to bridge the gap between business stakeholders and the development team.

It is also used by testers for testing websites. As Behat uses the Gherkin language to write its scenarios, it becomes easy for testers to map them to the acceptance criteria of the project. This results in fulfilling the client requirements to the best of our ability.

When to use Behat?

Behat helps fully achieve the client requirements because it works according to the acceptance criteria of the project. It uses an English-like, human-understandable language to write Behat steps. That's the best part, because it can be understood by anyone, whether a project manager, a developer, or any other business stakeholder.

It is best suited when:

  • Testing the data/content on the webpage
  • Testing actions (like links, buttons etc.)
  • Testing forms
  • In the migration phase of a website from one CMS to another
  • For end to end testing (i.e. flow of an application from start to finish)
  • For Functional, Regression, Smoke, and Sanity Testing

When not to use Behat?

Behat is a great tool with easy installation, but it lags behind in terms of documentation. It is also not a great option if you are testing:

  • Dynamic data
  • Images
  • HTTP responses of links throughout the website

Steps for Behat Installation

Before we get down to the steps, we are assuming you have done the following:

  • Installed Composer in the system. If not, find the steps here to install composer.
  • Made a folder and named it ‘behat’ (or any name you wish) at any location in your system.
  • Opened terminal and typed cd Desktop/behat (here we are assuming that ‘behat’ folder is created on Desktop. If you create it at some other location, go to that directory through terminal using ‘cd’ command).
  • Typed the following commands to install and run behat (these commands are same for Windows, Mac or any other Linux based system):

1. touch composer.json

The ‘touch’ command is used to create empty files in Mac and Linux systems. For Windows, you can create composer.json manually inside the project folder (in our case it is named behat). Once the file ‘composer.json’ is created, add the code below:
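The JSON snippet from the original post was not preserved here. A minimal composer.json pulling in Behat and Mink might look like this (the package names are the standard ones; the version constraints are illustrative):

```json
{
    "require": {
        "behat/behat": "~3.0",
        "behat/mink": "~1.7",
        "behat/mink-extension": "~2.0",
        "behat/mink-goutte-driver": "~1.2",
        "behat/mink-selenium2-driver": "~1.3"
    },
    "config": {
        "bin-dir": "bin/"
    }
}
```

The bin-dir setting places the behat executable under bin/, which matches the bin/behat commands used later in this post.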

The above code is in JSON format, so indentation needs to be taken care of. It consists of the various dependencies that Behat requires.

2. touch behat.yml

Now it's time to move on to the next step. Create a behat.yml file using the command “touch behat.yml” inside the behat folder. Once behat.yml is created, add the following code:
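The YAML snippet from the original post was likewise not preserved. A minimal behat.yml wiring Behat to the Mink extension might look like this (the base_url is a placeholder for the site under test):

```yaml
default:
  extensions:
    Behat\MinkExtension:
      base_url: "http://example.com"
      sessions:
        default:
          goutte: ~
        javascript:
          selenium2: ~
```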

3. composer install

Now it's time to run Composer and install the dependencies. We are almost done; just a step away from completing the installation process.

Execute “composer install” from your terminal. This command will read the ‘composer.json’ file from the current directory and execute it. It will install/update all the dependencies and its versions specified in composer.json file.

At this point, Behat is installed in your system. Now the steps below will help you initialize and run behat.

4. bin/behat --init

The command “bin/behat --init” initializes Behat. This is to be run only once. It will create a directory structure as shown below:

This command creates a directory (folder) called ‘features’ inside which we create our .feature files. A .feature file contains a feature, its overview, and scenarios. A scenario is a Gherkin structure that consists of various steps as per the acceptance criteria.
Inside the ‘features’ directory there is another directory called ‘bootstrap’. Inside it you will find a PHP file called ‘FeatureContext.php’. This is the heart of Behat, as it consists of the code that is mapped to the steps written within the scenarios in the .feature file(s).
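As an illustration of the custom PHP functions mentioned above, a hypothetical custom step inside FeatureContext.php might look like the sketch below. The step text and CSS selector are invented for the example; MinkContext comes from the Mink extension installed via Composer.

```php
<?php

use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext
{
    /**
     * @Then I should see :count search results
     */
    public function iShouldSeeSearchResults($count)
    {
        // Find all matching elements on the current page via Mink
        $results = $this->getSession()->getPage()->findAll('css', '.search-result');
        if (count($results) != (int) $count) {
            throw new \Exception(
                "Expected $count results, found " . count($results)
            );
        }
    }
}
```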

5. bin/behat features/name_of_featurefile.feature

This command is used to run the feature files (here name_of_featurefile represents the name which you will give to your feature file) and features is the folder created inside the project folder.

If only a particular scenario is to be run, tags can be used. Following is an example:

> bin/behat features/name_of_featurefile.feature --tags 1

If more than one scenario is to be run, the associated tags can be separated by a comma (,). Following is an example:

> bin/behat features/name_of_featurefile.feature --tags 1,2

Behat Components

Browser Emulators

Browser Emulators are used for testing the responsiveness of a website. These can be of two types:

  • Browser Controllers
  • Headless Browsers

When we run the command bin/behat features/name_of_featurefile.feature, it runs on a headless browser. To run our features on a real browser (like Firefox, Chrome etc.) we need browser controllers (like Selenium). In some cases we might require specific drivers, like chromedriver for the Chrome browser.

If using selenium, following is the command:

> java -jar selenium-server-standalone-2.53.1.jar

This command will run the selenium jar file which can be downloaded from Selenium’s official website. To run a jar file, java should be installed in the system.

It can be run on a particular browser by downloading its driver, which is easily available online. To run Selenium with the Chrome browser, the command is:

> java -Dwebdriver.chrome.driver="/home/mp0zxhb/chromedriver" -jar selenium-server-standalone-2.53.1.jar

Here ‘/home/mp0zxhb/chromedriver’ represents the path where chromedriver is downloaded on the system.

About Mink

Mink is a browser emulator for web applications, written in PHP. We use the Mink library in Behat to control our browser. Here's all the information on Mink.

To use mink, PHP 5.3+ should be installed in the system.

Terms in Behat


Gherkin

Gherkin is an English-like language, business-readable and domain-specific, that is specially designed to describe behaviour. It is also available in other languages (like French).

Gherkin is used to write the scenario(s) in the feature file(s). It has a defined structure and is indentation sensitive. Following are the keywords used in Gherkin:

Given, When, Then, And, But




Feature

A feature gives an overview of the functionality that will be covered in the specific feature file. We write features in a text file with the .feature extension. One .feature file contains a single feature, but can have multiple scenarios within that feature.



Scenario

A scenario consists of a short summary (which starts with the keyword Scenario) and step definitions. The step definitions are written in the Gherkin format.
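For illustration, a hypothetical .feature file using standard Mink step definitions might look like this (the feature, URLs, field labels, and tag are invented for the example):

```gherkin
Feature: User login
  In order to access my account
  As a registered user
  I need to be able to log in

  @1
  Scenario: Successful login
    Given I am on "/user/login"
    When I fill in "Username" with "admin"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "Welcome"
```

The @1 tag above is what a command like bin/behat features/name_of_featurefile.feature --tags 1 would select.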



behat.yml

This is a file written in YAML format which consists of all the configuration required for Behat. It is loaded by default.

Example: An example of behat.yml file is given in Behat Installation Steps section above.


Composer

Composer is a tool used to declare the various dependencies of our project and manage their installation (and/or updates). It can be called a dependency management tool, but definitely not a package manager.

We use commands like composer install and composer update to install or update project libraries and dependencies through Composer.
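For illustration, a minimal composer.json pulling in Behat and Mink might look like this (the version constraints are assumptions; check Packagist for current releases):

```json
{
    "require-dev": {
        "behat/behat": "^3.0",
        "behat/mink": "^1.7",
        "behat/mink-extension": "^2.2",
        "behat/mink-goutte-driver": "^1.2",
        "behat/mink-selenium2-driver": "^1.3"
    },
    "config": {
        "bin-dir": "bin/"
    }
}
```

The bin-dir setting places the Behat executable at bin/behat, which is the path used in the commands in this post.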

Behat Commands

Following are the most commonly used Behat commands:

bin/behat -h - Running this command lists all available Behat commands and options.

bin/behat -dl - Running this command lists the available step definitions that can be used in scenarios. Any custom step you write gets added to this list.

bin/behat --dry-run --append-snippets - Running this command appends snippets (empty PHP method stubs for any undefined steps) to the FeatureContext.php file, inside which the custom PHP function body is then written.
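As a sketch of that workflow, an appended snippet is then filled in with Mink calls inside FeatureContext.php. The step wording and CSS lookup below are illustrative, not part of any particular project:

```php
<?php

use Behat\MinkExtension\Context\MinkContext;

/**
 * Custom step definitions live alongside the defaults inherited from MinkContext.
 */
class FeatureContext extends MinkContext
{
    /**
     * @When I click the element :selector
     */
    public function iClickTheElement($selector)
    {
        // Find the element via Mink's page object; fail with a clear message if absent.
        $element = $this->getSession()->getPage()->find('css', $selector);
        if (null === $element) {
            throw new \Exception(sprintf('No element matching "%s" was found.', $selector));
        }
        $element->click();
    }
}
```

Once the method body is written, the custom step appears in the output of bin/behat -dl and can be used in any feature file.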

That’s all folks! Hope this helps you get started with Behat. You can check out one of our QA webinars on how to leverage Behat to tell stories through your code, and build the right product.



You could also take a look at how we used Behat on one of our client projects for an insurance company.

Topics: Framework and Libraries, QA and Testing

Behat testing for an Insurance company

Posted by Deepshikha Singh on Nov 3, 2016 1:29:00 PM

The increasing need to incorporate inputs and feedback from business stakeholders into websites is making enterprises shift to the Behavior Driven Development (BDD) methodology. This ability of BDD to incorporate feedback makes it easy for end users to navigate the website and use its existing functionality.

A leading insurance company in Australia had a similar requirement: they wanted their website to work seamlessly for external users. With ongoing feature additions and changes on the website, the insurance company contacted Srijan for automated QA testing to check for any lateral effects.

To ensure that developers and testers remained in sync, Behat was chosen as the tool to make BDD possible. Behat was a good choice because it is written in PHP, as is Drupal, which was used to build the website. Srijan used Behat for both functional testing and regression testing.

Used Behat’s inbuilt (default) features

Srijan used the Drupal extension with Behat to take advantage of its default built-in features, for which no custom code needed to be written. This made the tests easier and faster.

Integrated Selenium with Behat to enable cross-browser testing 

This was done to run the tests on multiple browsers simultaneously, which also helped reduce the execution time of the entire suite. Services such as BrowserStack or Sauce Labs can be used for cross-browser testing.

Performed end-to-end testing using Behat 

Srijan added custom code to get functionalities working. Suppose you want to click on a certain element on a web page, but Behat does not provide a default step definition for it. In that case, the corresponding PHP code for the definition is added as custom code; Behat understands this code and maps it to the step in the feature file.

Integrated with Goutte driver to get Behat running headless 

When we run Behat tests with Selenium, a browser window opens up, increasing the execution time. When tests are integrated with continuous integration, a long execution time is undesirable, since we only care about the execution report. So, to decrease the execution time, we used the Goutte driver, which runs headless in the background. This is much faster and does not hamper anyone’s work.
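As an illustrative behat.yml fragment (the base_url is an assumption), the Mink extension can default to Goutte while keeping Selenium available for scenarios tagged @javascript:

```yaml
default:
  extensions:
    Behat\MinkExtension:
      base_url: "https://example.com"
      default_session: goutte
      javascript_session: selenium2
      goutte: ~
      selenium2:
        browser: chrome
```

With this split, most scenarios run headless and fast, and only the scenarios that genuinely need a real browser pay the Selenium cost.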

Scenario based requirements

Srijan added tags to the feature files so that the entire suite could run with a single command, and specific scenarios could be run based on the requirement. Suppose there are 100 scenarios in one feature file and you only want to run 10 of them. You can do this by tagging the required scenarios with any keyword using the @ symbol.

Example: Let’s say we use @regression. Now when you run the entire suite with a single command and pass the @regression argument, only the scenarios tagged with @regression will run.
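A sketch of how tagging looks in practice (the feature content below is made up):

```gherkin
@regression
Feature: Policy quotes

  Scenario: Visitor can open the quote page
    Given I am on "/get-a-quote"
    Then I should see "Get a quote"

  @smoke
  Scenario: Quote form validates an empty submission
    Given I am on "/get-a-quote"
    When I press "Submit"
    Then I should see "This field is required"
```

Running bin/behat --tags @regression then executes only the tagged scenarios; note that a tag placed on the Feature line applies to every scenario in that file.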

Integrated Behat with drush and drupal API 

Drush enables Behat to use Drupal APIs. This was done to create different users, and to log in and log out as authenticated or anonymous users, without writing any custom code.
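With the Behat Drupal extension, the API driver is selected in behat.yml; a sketch (the drush alias name is an assumption):

```yaml
default:
  extensions:
    Drupal\DrupalExtension:
      blackbox: ~
      api_driver: drush
      drush:
        alias: mysite
```

Built-in steps such as Given I am logged in as a user with the "administrator" role then work without any custom code, because the extension creates and authenticates the users through Drush.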

Defined theme regions

Theme regions are used when verifying the visibility or availability of certain elements within a particular region of a webpage, especially when the same element occurs in multiple places on the page.
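With the Drupal extension, regions are declared as a map from region names to CSS selectors in behat.yml (the names and selectors below are illustrative):

```yaml
default:
  extensions:
    Drupal\DrupalExtension:
      region_map:
        header: "#header"
        content: "#content"
        footer: "#footer"
```

A step like Then I should see "Contact us" in the "footer" region then restricts its check to the mapped selector, so duplicate elements elsewhere on the page do not cause false positives.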

Behat integrated with CI (Continuous Integration) - Jenkins

This is helpful for integration testing when the development is done in chunks or small fragments.

Srijan’s QA automation team enabled the client to check for lateral effects on existing functionalities whenever new changes are made to the website. This has helped the client reduce manual testing effort and improve overall website performance.

Topics: Framework and Libraries, QA and Testing

