Deepshikha Singh

API Automation using Postman

Posted by Deepshikha Singh on Mar 26, 2019 6:01:00 PM

If you are new to Postman, please visit my blog API Testing using Postman for an overview of what Postman is and how it works.

Automation has become the norm across sectors, and when applied to testing it can undoubtedly improve the depth and scope of the tests. Since it is not possible to test everything manually, automating with Postman saves both time and effort.

Postman is the ultimate tool for API automation. In this blog, we will be taking a look at how to achieve API automation using Postman.

Example Scenario for Automation using Postman:

Let’s say a login API generates a token (OAuth, JWT, etc.) and refreshes it every hour. Every other API then needs this token to return the expected response. Without automation, we would have to copy and paste the token into each API request every time it changes.

For a test suite containing 100 APIs, a good amount of manual effort would go into pasting the token for each one. By introducing automation, this process of updating the newly generated token can be automated.

Let’s automate the Login API through Postman

  1. Open the Postman application and create a collection.

  2. Make a POST request to the Login API, passing the correct credentials (body parameters + headers).

  3. Click on the Send button to see the response.


4. The response above contains an “access_token” key, which will be used by all the following APIs. Hence, we need to write a small script that defines an environment variable and updates its value on every hit to the Login API. From then on, we can reference this variable wherever the access_token value is needed, instead of copying and pasting it into each API.

5. We can also create variables for host_name, protocol, etc., so that we don’t need to type the protocol and host name for each API; instead, we can use the variables.

6. Another important aspect is to check the response/status code and the response time of each API. It would be great to script this task too, so that when we run the entire API test suite we can easily spot the tests that fail with an incorrect status code, as well as the APIs that take longer than an acceptable response time to execute.

7. Below is a sample code snippet which solves all the above problems.
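A sketch of such a script for Postman’s Tests tab (the access_token key matches the login response above; the 500 ms budget and the base_url variable are example choices; adjust them to your API):

```javascript
// Runs in Postman's "Tests" tab after the Login request (pm sandbox API)
const jsonData = pm.response.json();

// Save the token once; every other request can then use {{access_token}}
pm.environment.set("access_token", jsonData.access_token);

// The same idea works for host/protocol, e.g. a {{base_url}} variable:
// pm.environment.set("base_url", "https://example.com");

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response time is acceptable", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
```

With this in place, other requests can use {{access_token}} in their Authorization header and the value stays fresh on every login.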


8. You can see that the value of access_token is set in the environment variable, as expected.


9. Also, under the Test Results tab, you can check whether the assertions you wrote passed or failed.


10. Postman itself provides lots of ready-made code snippets; you just need to click on the desired snippet, and the generated code is added to the Tests tab to perform assertions/actions. To get the snippets, click on the Tests tab and then on the arrow “<” to see them.


11. You can always add more custom code depending on your requirements; a couple of examples are shown in the screenshots below:





That’s all for this blog. Happy Testing!!

Topics: API Management, Coding and Tutorial

Manual API Testing using Postman for Beginners

Posted by Deepshikha Singh on Feb 27, 2019 6:19:00 PM

Manual API Testing using Postman for Beginners

Reliable API calls are critical to any decoupled application. Whether it is a simple configuration change to an entity or an update to Drupal core, either can alter the API response and lead to application-breaking changes on the front end. An API test suite can watch out for these breaking changes by running a slew of tests against your endpoints. And when you need to create an API test suite, Postman delivers.

Why the Postman tool?

Postman is a simple GUI for sending HTTP requests and viewing responses. It is built upon an extensive set of power tools which are incredibly easy to use. Postman helps you perform a variety of functions, ranging from:

  • organizing requests into collections and folders,

  • sharing common values across requests with environment variables,

  • scripting tests with the built-in Node.js-based runtime,

  • and finally, automating it all using Postman’s very own CLI — Newman.
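Newman can also be driven from Node instead of the shell. A minimal sketch (assuming newman has been installed via npm; the collection and environment file names are placeholders for your own exports):

```javascript
// Running a collection from Node with Newman's programmatic API
const newman = require("newman");

newman.run(
  {
    collection: require("./CollectionDemo.postman_collection.json"),
    environment: require("./dev.postman_environment.json"),
    reporters: "cli", // print the familiar runner output to the console
  },
  function (err) {
    if (err) throw err;
    console.log("Collection run complete!");
  }
);
```

This is handy when you want collection runs to happen inside an existing Node build script rather than as a separate shell step.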


Install native Postman Application

Postman for Mac/Windows/Linux:

Go to the Postman downloads page, download the application for the OS you are using, and follow the prompted steps to install it.

After you have installed Postman successfully, your Postman window should look like this:


If you have accomplished this step, you are all set to take the next flight.

Making the first http request in Postman:

Now that we have installed the Postman app successfully, it is time to start testing APIs with Postman by making our first-ever HTTP request to a server.

What is HTTP?

The Hypertext Transfer Protocol (HTTP) is designed to enable communications between clients and servers. HTTP works as a request-response protocol between a client and server. A web browser may be the client, and an application on a computer that hosts a website may be the server.

Example: A client (browser) submits an HTTP request to the server; then the server returns a response to the client. The response contains status information about the request and may also contain the requested content.
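To make that exchange concrete, here is a sketch of the raw text that travels over the wire in such a request-response cycle (the path, host, and body are made-up examples):

```javascript
// What actually travels over the wire in one request-response cycle
const request = [
  "GET /users/2 HTTP/1.1",      // request line: method, path, protocol version
  "Host: example.com",          // headers
  "Accept: application/json",
  "",                           // blank line ends the header section
  "",
].join("\r\n");

const response = [
  "HTTP/1.1 200 OK",            // status line: protocol, status code, reason
  "Content-Type: application/json",
  "",
  '{"id": 2, "name": "Janet"}', // body: the requested content
].join("\r\n");

// The status information the client reads back from the status line:
const statusCode = Number(response.split("\r\n")[0].split(" ")[1]);
console.log(statusCode); // 200
```

Tools like Postman simply build the request text for you and parse the response into the status, headers, and body panes you see in the UI.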


Most common http methods:

1. GET : The GET method is used to retrieve information from the given server using a given URI. Requests using GET should only retrieve data and should have no other effect on the data.

2. POST : A POST request is used to send data to the server, for example, customer information, file upload, etc. using HTML forms.

3. PUT : PUT is used to send data to a server to create/update a resource. Replaces all the current representations of the target resource with the uploaded content.

4. PATCH : PATCH is used to update partial resources. For instance, when you only need to update one field of the resource, PUTting a complete resource representation might be cumbersome and utilizes more bandwidth.

5. HEAD : HEAD is almost identical to GET, but without the response body. HEAD transfers the status line and the header section only.

6. DELETE : The DELETE method deletes the specified resource.

7. OPTIONS : The OPTIONS method describes the communication options for the target resource.
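The PUT-versus-PATCH distinction above is easy to see with a small sketch (illustrative data):

```javascript
// PUT replaces the whole representation; PATCH sends only the delta
const current = { name: "morpheus", job: "leader", location: "zion" };

const putBody = { name: "morpheus", job: "captain", location: "zion" }; // full resource
const patchBody = { job: "captain" }; // just the changed field

// A simple PATCH can be applied as a shallow merge of the delta:
const afterPatch = Object.assign({}, current, patchBody);
console.log(afterPatch); // { name: 'morpheus', job: 'captain', location: 'zion' }
```

Sending only the delta is why PATCH saves bandwidth when a resource has many fields and only one changes.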

Testing GET Requests

Let’s now jump directly into testing those APIs. Suppose we have an API which fetches the user information of a particular application. To test it, we will use a GET request, explained below:

For sample requests, visit

a. For making the first HTTP request(GET):

  1. Make a collection in Postman — To make a collection in Postman, click on New->Collection->CollectionDemo(Any Collection Name you wish)->Create

  2. Make a Request — To make a request, click on New->Request->GetUser(Any request name you wish)->Select the Collection you wish to save request in(Present in bottom of dialog box)->Save to Collection Demo

  3. By now, we have created our first request, now we need to pass different parameters in the request to get the expected response.

  4. In the “Enter Request URL” text box type :

  5. Click on “Send” Button


6. You should be able to see the below response in the Body section:


7. You should be delighted you have successfully tested your first API request.

Testing POST Requests

Now, suppose we need to create a user in an application, which means we are sending data to that application. For this type of request we use a POST request. In a POST request we send data/parameters in the body of the request, and in response the API returns some data that confirms the user has been created. The response can be a success message, or the id of the newly created user along with the time the user was created.

a. For making the first HTTP request(POST):

POST Request — To make a POST request, click on New->Request->CreateUser(Any request name you wish)->Select the Collection you wish to save request in(Present in bottom of dialog box)->Save to Collection Demo

  1. From the Dropdown select POST

  2. In the “Enter Request URL” text box, type :

  3. Click on Body Tab and select “Raw” radio button

  4. In the text box, paste:

   {
       "name": "morpheus",
       "job": "leader"
   }

5. Click on Send button

6. User should see the below response:


7. Also, check for the correct status code; in this case you should get ‘Status: 201 Created’


You have successfully tested your POST request too; similarly, you can try your hands at PUT, PATCH, DELETE, etc. Whichever method you test, remember to:

  1. Check for expected response.

  2. Check for correct status code.

  3. Check for Time (Response Time), it should be acceptable as per business.

  4. Always perform negative tests to verify that the API doesn’t respond if the data is tampered with.

That’s all for this blog. Happy Testing!!

Stay tuned for my next blog on “Automation with Postman”

Topics: API Management

Introduction to Nightwatch JS

Posted by Deepshikha Singh on Mar 15, 2018 2:18:00 PM

In this blog, we are going to take a look at how you can install and configure Nightwatch JS, and how to integrate it with Jenkins. Before that, let's find out what Nightwatch JS is and what its features are.

What is Nightwatch JS?

Nightwatch.js is an easy-to-use, Node.js-based end-to-end (E2E) testing solution for browser-based apps and websites. It uses the powerful W3C WebDriver API to perform commands and assertions on DOM elements.

Features of Nightwatch JS

  • Clean syntax : Simple but powerful syntax which enables you to write tests very quickly, using only JavaScript (Node.js) and CSS or XPath selectors.
  • Built-in test runner : Built-in command-line test runner which can run the tests sequentially or in parallel; together, by group, by tag, or one at a time. Grunt support is built in.
  • Selenium server : Controls the Selenium standalone server automatically in a separate child process; this can be disabled if Selenium runs on another host.
  • Cloud services support : Works with cloud testing providers such as SauceLabs and BrowserStack.
  • CSS and XPath support : Either CSS or XPath selectors can be used to locate and verify elements on the page or execute commands.
  • CI support : JUnit XML reporting is built in, so you can integrate your tests into your build process with systems such as TeamCity, Jenkins, Hudson, etc.
  • Easy to extend : Flexible command and assertion framework which makes it easy to implement your application-specific commands and assertions.

How does Nightwatch JS work?

Nightwatch works by communicating over a RESTful HTTP API with a WebDriver server (typically the Selenium server). The protocol is defined by the W3C WebDriver specification. Below is an example workflow for browser interaction.

Most of the time, Nightwatch needs to send at least two requests to the WebDriver server in order to perform a command or assertion:

  • the first request locates an element, given a CSS selector (or XPath expression),
  • the second performs the actual command/assertion on the element found.
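Sketched as protocol calls, that pair looks roughly like this (the endpoint shapes follow the WebDriver wire protocol; the session and element ids here are invented):

```javascript
// Two WebDriver calls behind a single click command (illustrative ids)
const locateElement = {
  method: "POST",
  path: "/session/abc123/element",
  body: { using: "css selector", value: "#login-button" },
}; // the server replies with an element id, e.g. "0"

const clickElement = {
  method: "POST",
  path: "/session/abc123/element/0/click", // uses the id from the first call
  body: {},
};

console.log(locateElement.path, "->", clickElement.path);
```

Every `browser.click("#login-button")` in a test expands to a pair like this under the hood.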


Assuming you have Node.js installed on your system, here’s how to go about installing Nightwatch JS:

  • To install the latest version using the npm command-line tool, run the following: $ npm install [-g] nightwatch, where ‘-g’ installs it globally.
  • Make sure you have Java installed and the Selenium standalone server jar downloaded on your system.
  • Selenium server setup: start the Selenium server with a command such as java -jar selenium-server-standalone-{version}.jar, using the jar you downloaded.



The test runner expects a configuration file to be passed, using a nightwatch.json file from the current directory by default, if present. A nightwatch.conf.js file will also be loaded by default, if found.
Let's create the nightwatch.json in the project's root folder and add this inside:
{
  "src_folders" : ["tests"],
  "output_folder" : "reports",
  "custom_commands_path" : "",
  "custom_assertions_path" : "",
  "page_objects_path" : "",
  "globals_path" : "",

  "selenium" : {
    "start_process" : false,
    "server_path" : "",
    "log_path" : "",
    "port" : 4444,
    "cli_args" : {
      "webdriver.chrome.driver" : "",
      "webdriver.gecko.driver" : "",
      "webdriver.edge.driver" : ""
    }
  },

  "test_settings" : {
    "default" : {
      "launch_url" : "http://localhost",
      "selenium_port"  : 4444,
      "selenium_host"  : "localhost",
      "silent": true,
      "screenshots" : {
        "enabled" : false,
        "path" : ""
      },
      "desiredCapabilities": {
        "browserName": "firefox",
        "marionette": true
      }
    },

    "chrome" : {
      "desiredCapabilities": {
        "browserName": "chrome"
      }
    },

    "edge" : {
      "desiredCapabilities": {
        "browserName": "MicrosoftEdge"
      }
    }
  }
}
Create your Test Case in Nightwatch.js

You basically define your tests within a Node module:

module.exports = {
    "My test case": function (browser) {
        // control the browser, e.g.:
        browser.url("http://localhost").end();
    }
};
  • Nightwatch then invokes each test method, passing a “browser” object which you control by invoking Nightwatch commands or assertions. That’s pretty much it.
  • Under the hood, the Nightwatch test runner communicates with the Selenium server over the Selenium WebDriver wire protocol.
  • The Nightwatch test runner can be controlled through command-line arguments. However, I strongly suggest you use the settings file to specify the behavior of the runner, the input and output directories, and the Selenium component.

Run Test Case in Nightwatch JS

  • nightwatch <filename.js>

Integration with Jenkins

Jenkins is an open-source continuous integration server written in Java. It is by far the most widely used tool for managing continuous integration builds and delivery pipelines. 

In order to integrate Jenkins with Nightwatch JS, you first need the same installation setup on the server where Jenkins is running:

  • Node.js and npm
  • Firefox installed (as it will be used by the Selenium server by default)
  • Nightwatch, the Selenium standalone server, etc., which are delivered by our setup
  • Running Nightwatch is done by simply invoking it from a batch command: nightwatch <filename.js>
  • The runner produces a JUnit-compatible XML file which Jenkins can pick up to present the results and any failures to us.

So that's how you can install and start working with Nightwatch JS. If you found this blog useful, read more about PhantomJS and ChimpJS. Also, watch our webinar on eliminating JavaScript code smells.

Topics: Framework and Libraries

Testing with Emulator: A step-wise guide

Posted by Deepshikha Singh on Oct 19, 2017 2:52:00 PM

What is an Emulator?

An Emulator is an application that emulates real mobile device software, hardware, and operating systems, allowing us to test and debug our application. It is generally provided by the device manufacturer. Mobile emulators are free and provided as a part of SDK with each new OS release. As a developer or a tester, you can configure the emulator to closely resemble the devices on which you plan to deploy your application.

The emulator window consists of a device screen on the left and phone controls/keyboard on the right. The device screen initially displays ‘ANDROID‘, and then displays the graphical logo, while the Android platform associated with the AVD is initializing.

What is AVD Manager?

AVD Manager is a tool to create and manage Android Virtual Devices(AVDs), which define device configuration for the Android Emulator. Before you can actually run an emulator, you need to install the Android SDK on your machine and define an AVD, which defines the hardware characteristics of the emulator. You can define things like the device RAM, whether there is touch screen and/or keyboard, camera support, audio playback support, etc. You can create several AVDs to test your device on several virtual devices.

How to Create an Android Virtual Device?

Once the installation of all the prerequisites is done, we need to launch the SDK Manager to download the relevant files to create an emulator and run the virtual device.

  • In the command line (cmd), type android sdk. This opens the SDK Manager, from which you can download the relevant files
  • In the SDK Manager, select the files as shown below. This will help you create a virtual device

There are two ways to create Android Virtual Devices for mobile testing:

  • Using Command Line
  • Using AVD Manager

Creating a New Android Virtual Device using AVD Manager

The first step is to launch the AVD Manager and for that, there are a number of options you can follow:

  • Launch AVD Manager using Program Menu : Go to Start →  All Program →  Android SDK Tools → AVD Manager
  • Launch AVD Manager using Command Line : Go to Run and type cmd to open command prompt window. Type: android avd
  • Launch AVD Manager from folder location : Browse to Android SDK folder and Double-click on AVD Manager.

Any of the above ways will open the Android Virtual Device Manager, which helps you create the virtual devices. The AVD Manager main screen shows one default virtual device, the Nexus 5.

1. Click on Create button to create a new AVD


2. A pop-up will open; follow the below screenshot for the values. In case you are simulating a specific device, you might want to name it as such. For example, NexusSix-4.4 refers to an AVD that simulates a Nexus Six running Android version 4.4.2.


In this test I chose the very generic name ‘Android’ to keep it simple.

Note: You can play around with different configurations for your virtual device.

 3. Click on OK to save the changes.


4. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab of the AVD Manager. Now select the AVD name and click on Start on the right.


5. This launches a pop-up with a few options; choose as you want. Once done, click on Launch, and the emulator will start.


  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to start from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot.
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

 6. The emulator will launch; it takes a few minutes to display the Home screen of the virtual Android device. The time taken is roughly equal to the switch-on time of a real mobile device.


  • You can use the command ‘adb devices‘ to see if the adb is detecting the emulator. This basically completes the Android SDK installation part.
  • If the ANDROID logo appears for more than 15-30 minutes, something has probably gone wrong. Reboot your computer, start AVD Manager, delete the ‘Android’ AVD we created, recreate it, and launch the new AVD.

The emulator window launches with 5554:Android in its title bar. The value 5554 identifies a console port that you can use to query and control the AVD’s environment. Android supports a maximum of 16 concurrently executing AVDs, where each AVD is assigned an even-numbered port starting at 5554. This means we can initialize more than one AVD at the same time and test the app with parallel execution.
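That numbering scheme can be sketched in a couple of lines of JavaScript:

```javascript
// 16 concurrent AVDs, each on an even console port starting at 5554
const avdPorts = Array.from({ length: 16 }, (_, i) => 5554 + 2 * i);
console.log(avdPorts[0], avdPorts[avdPorts.length - 1]); // 5554 5584
```

So the title bar of a second running emulator would read 5556:<avd_name>, the third 5558:<avd_name>, and so on.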

Creating an Android Emulator using Command Line

Android Emulator can be configured differently for simulating different android devices. With the help of AVD configuration, Android Emulator can simulate:

  • Target platforms versions
  • Screen sizes
  • Resolutions
  • Input methods
  • Hardware configurations
  • External storage sizes for SD card

Although there are a good number of default device configurations in AVD Manager, if you have a particular device in mind that you want to emulate, you need to create an AVD that resembles your target device. For this, you need to know its screen size, resolution, and so on.

Creating Default Device Definition

1. Go to Device Definitions tab and select one of the presets and click ‘Create AVD’

2. Every default device will provide you some preset AVD and that can be changed according to your need


Note: Notice that the OK button is disabled because under CPU/ABI it says “No system images installed for this target”. To avoid this, select a different device configuration from Device, select any skin (or none), and proceed.

Below is a list of the mandatory and optional settings:

  • AVD Name: Choose a name for the AVD, like NexusSix-4.4, that refers to your device configuration
  • Device: Select the AVD resolution as per the device options
  • Target: Select the Android version / test environment or target platform
  • CPU/ABI: Select the CPU/ABI image, e.g. Intel Atom (x86) or ARM (armeabi-v7a)
  • Keyboard: Select this box to use the keyboard in the AVD
  • Skin: Select this box to get the hardware buttons
  • Front Camera: If the system has a webcam, it can be used with the AVD
  • Back Camera: If the system has a webcam, it can be used with the AVD
  • Memory Options: Set RAM & VM heap for the device according to your need
  • Internal Storage: Set this as per your need and select GiB/MiB from the dropdown
  • SD Card: Set this as per your need and select GiB/MiB from the dropdown
  • Emulation Options

- Snapshot: Select this to persist the emulator state; it lets you quickly start the emulator after the first startup.

- Use Host GPU: Select this to use the computer’s graphics hardware

3. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab, configured on the AVD Manager. Now Select the newly added AVD name and click on Start on the right.

4. This would launch the pop up with few options, you may choose as you want. Once done click on Launch, this will launch the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to be started from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

Once done, the emulator will launch. It takes a few minutes to display the Home screen of the virtual Android device.

Creating a Custom Device Definition

In case the available device definitions do not match the device type you’d like to emulate, you can create a custom device definition for your AVD:

1. If the AVD Manager is open, go to Device Definitions

2. Click on Create Devices.

3. The Configure Hardware Profile window will be displayed, allowing you to specify various configurations such as screen size, memory options, input type, and sensors.

Note: Once all the information is filled in properly, the Create Device button will be enabled and you may proceed.

4. After you complete the above steps, the new definition will be displayed under the Device Definitions tab of the AVD Manager. Now select the newly created device definition and click on Create AVD on the right.

Automation using Android emulator

  • Get Emulator Platform Version
  • Unlock Android emulator screen
  • Go to Settings. You will find About Phone under settings.
  • Go to About Phone. It will show you Android version
  • Verify calculator App Is Available In Emulator 


We are going to run an Appium test for the calculator application, so it should be present in the emulator. Generally, the calculator app comes preinstalled in the emulator. To check whether it is installed:

1. Unlock emulator

2. Verify if there is an application with name Calculator

3. Get the app activity and package name. We need the launcher activity and package name of the calculator app. For me, they are:

  • Package name :
  • Activity name :

4. Create the Appium test script in Eclipse. Now we are ready to create and run our first Appium test on the Android emulator for the calculator application. I have prepared the Appium test script as below, using Selenium WebDriver’s RemoteWebDriver to launch the app with the required capabilities.

5. To start an instance of the emulator from the command line, navigate to the tools/ folder of the SDK and enter the emulator command like this: emulator -avd <avd_name> [<options>]
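The heart of such a test script is the set of desired capabilities passed to RemoteWebDriver. A sketch (the package/activity values shown are the usual ones for the stock AOSP calculator, and the device name assumes the first running emulator; confirm yours using step 3 above):

```json
{
  "platformName": "Android",
  "deviceName": "emulator-5554",
  "appPackage": "com.android.calculator2",
  "appActivity": "com.android.calculator2.Calculator"
}
```

Appium uses appPackage/appActivity to launch the app directly, without tapping through the launcher.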

So that was all about performing Appium testing on emulator. In case you are just starting off with mobile application testing, you can explore our list of the best tools for mobile test automation. You can also check out our Appium tutorial webinar to level up your mobile testing skills.

Topics: Coding and Tutorial, QA and Testing

What is PhantomJS

Posted by Deepshikha Singh on Jul 12, 2017 5:42:00 PM

PhantomJS is a headless browser, meaning a web browser without a graphical user interface, used for automating web page interaction. It is based on WebKit, the web browser engine, so it can render web pages like a regular browser, just without displaying them. PhantomJS provides a JavaScript API to enable navigation, screen capture, user behaviour simulation, assertions, etc. It also supports touch events, making it relevant to mobile applications as well.

Common use cases for PhantomJS 

  • Headless Testing - PhantomJS is a popular tool for running unit tests. It can simulate user behaviour, like resource requests and resource receipt, without using the UI. It executes automated tests and displays results on the command line.
  • Page Automation - PhantomJS can load and manipulate web pages. It can automate a lot of QA tasks by implementing web standards like the DOM, CSS and JavaScript. You can set up scripts to trigger clicks or move from one page to another. It can respond to events like 'Page not found', 'Page closed', etc. It also helps to check sign-in issues.
  • Screen Capture - PhantomJS can take a screenshot of a web page, with features to adjust the size of the screen and the exact area to be captured. It can capture HTML pages to various image formats like JPEG, GIF and PNG, and to PDF.
  • Network Monitoring - PhantomJS can hook into a web application's request-response cycle and collect data about the website. This data can be re-formatted and viewed to analyse network behaviour and performance.
  • Client-Side Testing - Most websites use heavy client-side technologies. PhantomJS can be used very efficiently to execute client-side scripts and render the DOM, like a web browser.
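For a flavour of the API, here is a minimal screen-capture script using PhantomJS's webpage module (the URL and output filename are just examples; run it with the phantomjs binary, not Node):

```javascript
// Run with: phantomjs capture.js
var page = require("webpage").create();
page.viewportSize = { width: 1280, height: 800 }; // size of the virtual screen

page.open("http://example.com", function (status) {
  if (status !== "success") {
    console.log("Failed to load the page");
    phantom.exit(1);
  }
  page.render("example.png"); // format follows the extension: PNG, JPEG, GIF or PDF
  phantom.exit();
});
```

The same page object also exposes hooks like onResourceRequested and onResourceReceived, which is what the network-monitoring use case above builds on.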

At Srijan, we’ve leveraged PhantomJS on two of our recent projects:

IAG - One project where PhantomJS has been used extensively is IAG, where we used it with Behat to interact with JS elements on the web page. Since Behat by itself cannot interact with JS elements on a webpage, PhantomJS became a necessity.

PhantomJS also helps reduce execution time, thereby improving performance.

OnCorps - PhantomJS is used with Codeception in a Docker-container-based system. The PhantomJS server runs in a different container from the one where the tests run, so the headless browser work is done by PhantomJS.

OnCorps uses Acquia for their code, and it is not very compatible with anything non-PHP. So we had to run PhantomJS in a separate container.

Integrating Phantom with other software

As you can see, we’ve used PhantomJS in conjunction with other tools, and there can be numerous such examples:

  • Many widely used testing tools run on top of PhantomJS: CasperJS is built on it, and Mocha tests can be executed with it.
  • You can have a test framework wherein the JavaScript unit tests are written in Jasmine and executed using PhantomJS, integrated into a continuous build environment like TeamCity.
  • PhantomJS can be easily used in a CI environment based on Jenkins.

Advantages of PhantomJS

  • It’s available on all major operating systems – Windows, Mac OS X and Linux
  • It’s easy to install and doesn’t require too many resources. It can start working without third-party tools or external services. Development is easy; for example, while writing code, you can select the HTML element you want to access.
  • It can be used to create Cron jobs to test certain functions on the production website.
  • A lot of manual testing can be replaced by PhantomJS tests. So testing is faster and more efficient.

Disadvantages of PhantomJS

  • PhantomJS can be used to run unit tests on your code, but it is not a testing framework in itself
  • It’s not the best tool to automate functional testing or end-to-end testing
  • Since it does not require a UI, it can also be abused to attack websites in an automated manner

All in all, PhantomJS is a good tool for client-side testing. It can be used within a testing framework to reduce manual testing effort, but it cannot be the only tool used for a comprehensive test of an application. So you can check out a combination of Behat, Mink, and PhantomJS for your testing suite, or use PhantomJS with Selenium to automate tests.

You can also check out our posts on working with Behat, and Selenium, and our webinar on how to manage QA tasks in an agile project. Hope you find them insightful.

Topics: Framework and Libraries

Galen vs Wraith: Comparing automated visual regression testing tools

Posted by Deepshikha Singh on May 26, 2017 4:03:00 PM

With enterprises wanting to offer optimal viewing and interaction experiences across devices, there’s increasing emphasis on building responsive websites. And hence, visual testing has become a key component of the QA process. Additionally, there are several tools available to perform visual testing, and QA teams need to be able to choose the right one, depending upon project requirements.

Recently, our QA team was engaged with an Australian insurance enterprise, carrying out manual testing, and device/browser testing for all their new website features. During the engagement, the client also requested automated visual testing for their website. 

We proposed the open-source Galen tool. However, since the client already had a stable environment, it was decided that the Wraith tool would be better suited to the project. And so we used Wraith to run tests on each deployment on the production environment, validating CSS, image comparison, styling, and other visual aspects. 

This got us thinking about the key differences between Galen and Wraith, and how they are suited for different project requirements. So here we take a quick look at the two most prevalent visual regression tools, comparing them on the basis of features, best use cases, advantages, and drawbacks, to help decide upon the one that is best suited for your enterprise project.

So that was our handy chart to help you compare between Galen and Wraith. Decide the parameters that are the most important for your project, whether it’s the environment, or integration capabilities, or cross-browser testing, and check out how the two tools stack up. And then choose the one that works best for you.

Once you have locked down your tool, your in-house QA team can get started with it. However, if you wish to onboard a QA Automation team, just write to us below and let’s explore how we can help.

Topics: Framework and Libraries, QA and Testing

The basics of Jenkins

Posted by Deepshikha Singh on Apr 19, 2017 6:14:00 PM

Continuous Integration (CI) is a process where developers integrate their code at regular intervals. Code is regularly committed to a shared repository and a build is triggered. It is important that when new code is committed, existing functionality does not break. And that is what Jenkins ensures.

The Jenkins tool is a popular open source automation software used for the continuous integration and continuous deployment process. It triggers builds and runs functional tests, UI tests, and smoke tests. It can be used for deployment and static analysis as well.

At Srijan, we have used Jenkins in almost every project to smooth the CI/CD process for smart and quick delivery of projects. Some of the key areas where Jenkins was the perfect tool to use are:

  • Jenkins was used as a gatekeeper for pushing code to the GitHub repository. The client had put in place certain checks related to code standards, along with some smoke tests, so any code pushed to the GitHub repo has to pass those checks before it is merged.
  • The Srijan team scheduled Jenkins jobs to trigger at predefined times and send across the reports via email and Slack.
  • A Jenkins job/build was triggered on every pull request (PR) raised by a team member. A pull request is raised by a team member who pushes code to the GitHub repo to get it merged with the master branch, so that we have stable code in master.
  • Jenkins is very useful for sending reports to the client on a daily basis, so we configure the job to run at a predefined time. We don’t need to trigger the job manually; Jenkins does this for us and sends out the reports itself.

Now that we know what’s possible with Jenkins, let’s take a closer look at what the tool is, and how it works during the development lifecycle.

Basic Features of Jenkins

  • It is coded in Java and can be used across operating systems like Windows, Ubuntu, Debian, Mac OS X, Red Hat, FreeBSD, OpenBSD, Gentoo, Fedora, and CentOS.
  • It triggers a build when new code is committed to the Git repository. Once the build is triggered, it shows the number of builds that succeeded or failed, the time of the last build success and failure, and preferably which parts of a pipeline succeeded or failed.
  • Jenkins deploys code on the test server (or any desired server) and triggers automated tests.
  • It generates test results that can be emailed to concerned teams.
  • It is a dominant tool with numerous active installations and users across the world, so there is a lot of support within the community.

How Does Jenkins Work

Jenkins can be used to automate the entire CI process. Let us look at how Jenkins works in the software development lifecycle:

  • A developer commits code to the source code repository. Jenkins detects the changes that have occurred, as it checks the repository at regular intervals.
  • If there is a commit, Jenkins will start preparing a new build with the new changes.
  • If the build is successful, Jenkins deploys the build to the test server. If the build fails, the concerned team is notified. It gives a basic report of the success and failure of builds, and the build trend.
  • After testing, Jenkins generates feedback. It generates a test report for the development and QA teams indicating the test results.
  • It will continue to check for new source code. As a result, the entire CI process is automated, leading to higher efficiency and a faster software delivery process.

Advantages of Jenkins

  • The Jenkins tool is widely used, and hence has very strong community support. This means help is readily available for all issues and bottlenecks.
  • It provides about 1,000 plugins for integrating with different applications in the DevOps process. This makes Jenkins quite extensible. The best part is, you can write your own plugin if you do not find one that suits your requirements.
  • For example, it can integrate with version control systems like Git and Subversion, it can work with build tools like Maven and Ant, and it has plugins to integrate with tools like JIRA and Bugzilla for reporting issues and the builds where those issues appear.
  • There are constant incremental iterative improvements.
  • It is an open source, cross-platform tool.
  • Jenkins can generate test reports that can be viewed on the web UI.
  • It can deploy directly to production or test environments making the CI process automated.

Disadvantages of Jenkins

  • Jenkins is not compliant with Sarbanes-Oxley, ITIL, or COBIT, which might be required by some clients.
  • Jenkins might not be the best tool for .NET projects, as there are other tools that provide more features and better integration with .NET tooling.
  • It does not support two-factor authentication.

A robust CI/CD system needs to provide all the functions required for DevOps and also be flexible enough to run in different environments. All said and done, Jenkins is great for the CI/CD process, as it automates most DevOps tasks and lets others add plugins as per requirements.

You can also check out our posts on working with Behat, and Selenium, and our webinar on how to manage QA tasks in an agile project. Hope you find them insightful.

Topics: CI/CD & DevOps

Srijan - A must-join for the technology explorer

Posted by Deepshikha Singh on Mar 30, 2017 9:41:00 PM

My journey with Srijan started on 10 Dec, 2014 and since then, though with some hiccups, it has been a wonderful ride.

So, my journey started this way - I was looking for a change from my previous organisation and got an offer from one of the good organisations located in Noida. It was time to showcase my knowledge for that company. So, I went there and tried exploring the company by speaking to a couple of people. I was satisfied with my decision of choosing that company.

Suddenly, the very next day, I received a call for an interview from one of the HRs at Srijan. Although I was not too interested, I agreed to be interviewed for the profile of an Automation Test Engineer. My telephonic interview was scheduled for the following day at 1 PM. I made myself available for the interview, but to my disappointment, I didn’t receive any call that day. This created a negative impression of the company. I tried calling back but got no answer. Thereafter, I got busy with my work.

It was around 5 PM when I received a call from one of the QA persons for the interview. The person apologised to me for being late. The interview started with some basic questions, with a gradual increase in the level of questions. I was comfortably answering the questions when, all of a sudden, the call dropped. I waited for a call back but there was none. I called again. The person who responded agreed to call back. But no call again.

Aaah! What was going on, I wondered! By this time I had become even more negative about the company. I was continuously bombarded with thoughts of the company being irresponsible, least bothered, etc. And I now had some strong reasons to not go ahead with this company.

Surprisingly, the next day, I received a call from one of the HRs. I had cleared the first round of the interview and they wanted to schedule my next round with the QA Lead. I replied, “Please make sure you guys call me on time.” She assured me that I would be getting the call at 3 PM.

Unlike the previous day, sharp at 3 PM, I got the call, which was a turning point in my life. The interview lasted around 1.5 hours. After my conversation with the interviewer, I became positive about the company. He gave me strong reasons to give Srijan a try if I were selected. And finally, I received my offer letter within a week.

The point is, sometimes good things come late, and not in the way that you expect them to. The trick is to recognize a great opportunity when it knocks. And I am glad I got that right!

My first day at Srijan was great, getting to know the place and the people. Come Day 2, I was assigned a project where I had to single-handedly drive the test automation: gathering requirements, designing the framework, etc. But the good part was that it was to be done in Selenium WebDriver, which was my forte. There was a time when I had to try out manual testing. I was least interested in that but managed to prove my capabilities.

Learning new things is a norm at Srijan. I got a chance to work out of the Srijan Goa office for two months and met a bunch of new people who expanded my knowledge base. Back in Delhi, I got the chance to explore technologies and tools which were all Greek to me: Zephyr, Tarantula, PHP, Behat, Wraith, JMeter, LoadStorm, Codeception, and Galen, just to name a few. Every time I was asked to explore a new technology, I was worried whether I would be able to prove myself and meet expectations. Gradually, my confidence grew and I was ready to learn any new technology.

Another highlight of my work at Srijan was attending DrupalCon Asia. To my shock and excitement, I was the only QA to be selected for the Con, and it was a big deal for me. There I got the chance to meet Seta Shahi, our QA Lead, to whom I owe a large part of what I have achieved at Srijan so far. She has always been just a phone call away, ready to help whenever I was stuck on a project. We talked shop for some time and I was encouraged by her positivity.

After DrupalCon, I also got the chance to attend the Global Selenium Conference a few days later. And finally, I took a trip to Australia for a month, on a client project. It was like a dream come true. I have gotten so many opportunities for growth at Srijan that I could not have asked for anything better.

The best part of Srijan is its work culture. You are given the flexibility to explore the technology of your choice and do something fruitful with it, rather than having something imposed on you. Everyone here is ready to extend a helping hand, irrespective of whether it’s a personal or professional problem.

I must say, if you have got the hunger to learn and explore new things, Srijan is the best place to be. It is one of those few organisations where your opinions are heard and valued. Moreover, there are plenty of marvelous people who always enlighten me with their positive thoughts and deep knowledge. It’s been more than 2 years working at Srijan, and believe me, each day here is a new challenge which pushes you to learn new skills and widen your horizons.

So, if you get a chance to work with Srijan, please don’t miss it. It would surely be one of the best experiences of your professional career.

May the force be with us!!

Topics: Life at Srijan

An Introduction to Selenium

Posted by Deepshikha Singh on Mar 20, 2017 2:40:00 PM

Selenium is an open source software testing framework released under the Apache 2.0 license. It was built in 2004 by Jason Huggins and other testers at Thoughtworks. The name 'Selenium' came from an email in which Huggins mocked a competitor named 'Mercury', saying that one can cure mercury poisoning by taking selenium supplements.

Selenium is mainly used for functional and regression testing. It is built in Java and is highly useful for testing web applications.

What do we use at Srijan?

- Selenium IDE
- Selenium RC
- Selenium WebDriver

Selenium IDE (Integrated Development Environment) is a Firefox extension which allows recording, editing and debugging of tests. Selenium IDE was originally created by Shinya Kasatani and donated to the Selenium project in 2006. You can record and play tests using Selenium IDE. It reduces manual effort and saves time as it can be reused while retesting.

Scripts are recorded in Selenese, a special test scripting language for Selenium. Selenese provides commands for performing actions in a browser and for retrieving data from the resulting pages.

Key features:

- Easy to record and run.
- Can create and run test suites (multiple tests).
- Can apply breakpoints at any point.
- Can edit scripts manually.
- Can export test scripts to Java, C#, and other languages.
- Can easily assert title of any page.

Selenium RC (Remote Control, also known as Selenium 1) offers a more flexible but more complex approach to creating and running browser tests. With Selenium RC it is possible to run tests inside any JavaScript-compatible browser using a wide range of programming languages. Your tests issue commands which the client library sends to the server. The server then 'runs' your actions in the browser and reports the results.

Selenium RC allows you to write automated tests in any supported programming language. Tests written this way can use standard programming practices, making them easy to maintain, robust, and easy to collaborate on as a team. Selenium RC is a server, written in Java, which accepts commands for the browser via HTTP. This makes it possible to write automated tests for a web application in any programming language, which allows for better integration of Selenium into existing unit test frameworks. To make writing tests easier, the Selenium project currently provides client drivers for PHP, Python, Ruby, .NET, Perl and Java. The Java driver can also be used with JavaScript. A new instance of the Selenium RC server is needed to launch HTML test cases, which means that the port should be different for each parallel run. However, for Java/PHP test cases, only one Selenium RC instance needs to run continuously.

Selenium WebDriver (also known as Selenium 2) is a browser automation framework that accepts commands (in Selenese or another language; Java in our case) and sends them to a browser. It is implemented through a browser-specific driver which sends commands to the browser and retrieves results.

Why the move to Selenium WebDriver from Selenium RC?

- Cleaner APIs in WebDriver.
- Ability to handle multiple windows and modal windows effectively.
- Ability to handle mouse hover movements.
- Ability to handle window scrolls.
- Does not require Selenium RC server to be running.
- Native automation is faster and a little less prone to error and browser configuration issues than Selenium RC.

We use:

IDE: Eclipse Helios and JUnit
Programming language: Java for writing scripts in Selenium WebDriver
Key features:
- Supports all modern browsers (Firefox, Chrome, IE, Safari, Opera, etc.).
- Supports all major operating systems (Windows, Linux, Mac).
- Cleaner APIs than Selenium RC.
- Object-oriented API design, hence gives accurate results.
- Can be used for Flash testing and Ajax handling.

Selenium scripting is useful for handling projects which are complex and are of longer durations. Selenium is a widely used automation framework now.

Steps to configure WebDriver with Eclipse:

1. Download the latest version of Selenium WebDriver (select the Java binding file to download).
2. Download Eclipse.
3. Download the latest version of Java.
4. Extract the files from the Selenium WebDriver .zip.
5. Open Eclipse and create a new Java project.
6. Create a package under the new Java project, then go to Configure Build Path.
7. From Configure Build Path, click on Add External JARs.
8. Add the JAR files from the Selenium WebDriver folder that you extracted.

Topics: Framework and Libraries, QA and Testing

A step-wise guide to starting with ChimpJS

Posted by Deepshikha Singh on Nov 22, 2016 1:05:00 PM

It’s very rare that clients offer specific suggestions on which technology or software to use during a project. But that’s exactly what happened on my recent project, making us switch from JBehave to ChimpJS. As we got started, I realized there are not many step-wise guides on ChimpJS, and so I decided to take a quick stab at it.

So, first things first.

What is Chimp JS Testing? 

Chimp is open-source software that simplifies Behaviour Driven Development (BDD) for developers and testers. Chimp can be used to simplify end-to-end and automated testing. It allows you to write browser automation tests in an easy-to-read syntax and re-runs scenarios whenever your code changes. Chimp integrates well with Cucumber.js, Meteor, NodeJS, PhantomJS, Selenium, Saucelabs, Simian, etc.

Features of Chimp JS 

  • Live-watch Mode Development: Chimp continuously watches your file system and reruns the scenario you are working on anytime the source code or features change.
  • Integrates Industry Tools: Chimp integrates Mocha/Cucumber, Selenium, and WebdriverIO to seamlessly work together without you having to lift a finger.
  • Synchronous Testing: No more callbacks or promises; you can write all your browser automation tests in an easy-to-read, maintainable syntax.

Benefits of Chimp JS

  • Hyper Focus: By continuously re-running the scenario you are working on, you maintain focus on the task at hand and easily practice outside-in testing.
  • Faster Development: Chimp does all the heavy-lifting of configuring and managing the tools needed for BDD, so you can focus on delivering high quality code and test automation.
  • Real Time feedback as you code: Chimp manages browser sessions and provides you with instant visual feedback, as well as console output, allowing you to quickly fix errors during development.

The Switch

We had started off using JBehave. Everything was going well, with all client requirements being met. But then the client suggested we switch to ChimpJS. Why? Because the client-side developers were working with a JS framework, and they felt that testing in JS would be helpful for them. Another reason was that it integrates well with Circle CI.

But wait. Why Circle CI? There were other, better CI tools in the market. Like Jenkins.

Turns out, the client did not like Jenkins, and wanted to go ahead with ChimpJS. We hadn't used it yet and decided to go ahead and explore.

The acceptance/BDD testing phase was started using ChimpJS. The big question here is why we decided to do that. Here are a few reasons:

  • ChimpJS makes the job a lot easier for the developers, as it takes away the pain of setting up automation-testing framework.
  • ChimpJS integrates well with Circle CI, which is what my client suggested we use.

My reaction to ChimpJS


When I looked up “chimp” on Google, I was presented with a lot of images and information about chimpanzees. My bad, that one.

I put in “ChimpJS” and instantly all things ChimpJS popped up. I quickly got started and watched the video on their homepage thrice, just to extract the most out of it. When I looked for community support, I found that there isn’t much, and a lot of questions remain unanswered. Frankly, their homepage isn’t much help.

I didn’t have sound knowledge of JS. So I talked to a couple of people in the organization about getting started with Chimp and got to know that I had to use an Ubuntu/Linux system. This again made me wonder: why an Ubuntu system? I thought of installing ChimpJS on a Windows system to see how it performed. I struggled a lot, but that experiment did not have great results.

Initial roadblocks in ChimpJS


So I got an Ubuntu system and installed ChimpJS with a simple command: npm install chimp -g, where npm is the node package manager and -g installs chimp globally.

Chimp is installed in the node_modules directory, so to execute it you need to run ./node_modules/chimp/bin/chimp --watch. Initially it will just launch the Chrome browser, as there is no script yet. But for me it didn’t run, i.e. the browser wasn’t launched.

But why?

I proceeded as mentioned on their website, and after a lot of research, got to know that I had to install ChimpJS in my project directory. But this wasn’t mentioned anywhere in the steps. No worries, I tried doing that and finally it worked for me. This was the first baby step for me and it motivated me a lot. Finally, I was making some progress.

The ChimpJS Basics

Now that Chimp is installed inside my project directory, we can move ahead:

  • Create a directory named “features” inside your project directory, where all the feature files will be kept.
  • Start writing your feature file in any text editor you are comfortable with, and don’t forget to save it with a .feature extension.
  • Inside the “features” folder, create another folder named “support”, where all the step definition files will be placed. These are saved with a .js extension.

The feature file is written with the Gherkin keywords (Given, When, Then). Tag the feature file with the @watch annotation, because Chimp will look for this tag and execute the lines written under it sequentially.

The step definition file contains the code to automate the feature written in the feature file.

The most difficult part is writing XPaths in JS. Normal XPaths, as we write them in Java, don’t work at all. Writing XPaths in JS was a nightmare for me. However, I did find a very helpful resource to get started on XPaths in JavaScript.

Step-wise Guide to Getting Started with ChimpJS

For a step-by-step guide to writing the automation code and executing them please follow the directions here.

Now, let's get started.       

1. Make a project folder/directory with the name “chimp-tutorial” using command:  mkdir chimp-tutorial
2. Move the program scope to the project directory you have just created by cd chimp-tutorial
3. Install chimp on your system by executing npm install chimp
4. Make another directory inside your project directory with name “features” by executing command mkdir features
5. Run chimp using chimp --watch

Explanation: npm install chimp downloads Chimp and any tools it needs; chimp --watch then starts watching your files. When it's ready, it will open a default Chrome browser for you. Don't close this window, as you will use it to see the automation happening.

6. Create the file “search.feature” in the features directory you just created (./chimp-tutorial/features/search.feature) and paste the following content into it:
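The original feature file isn't preserved in this post, but based on the steps automated later (a Given that opens the search engine, a quoted search term in the When, and a link check in the Then), a minimal search.feature might look like this; the exact step wording is an assumption:

```gherkin
Feature: Search
  As a user
  I want to search the web
  So that I can find relevant links

  Scenario: A search returns a matching link
    Given I visit the search engine
    When I search for "chimp"
    Then I should see a link containing "Chimp"
```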

7. Save this file and you'll see this output in the console

8. Next add a @watch tag just above the scenario as you can see below:

9. Save this file again and you'll see this in the console:

You can implement step definitions for undefined steps with these snippets:

Explanation: Cucumber.js has just reported that you have not implemented step definitions for your scenario and it has provided you with some helpful code snippets that you can use.

10. So create the file “step_defs.js” (remember to save it with a .js extension) inside the support folder in the features directory (./chimp-tutorial/features/support/step_defs.js) and paste the code snippets into it:

Note that the steps definitions files must be wrapped with “module.exports” for Cucumber.js to use them.

11. Save this file, and now you'll see this in the console:

Explanation: Cucumber.js is now letting you know that 1 scenario is pending. This is because the first step used callback.pending, therefore subsequent steps are skipped. The next step is to automate the steps one by one and turn every line cucumber-green!

12. Edit the Given step in your step definitions file to match the following code:
13. When you save this file, you will see the browser navigate to the site as you instructed it to. You will also see this in the console:

Our first successful passing step.

14. Now we have two more steps to automate, so edit the “When” step in your step definitions file to match the following code:

15. Save. Now you'll see your browser navigate and perform the search. You'll also see this in the console:

Explanation: Two steps are passing. We are closer now!

This is what happened: notice the searchTerm parameter being passed into the step. This is because the regex of the “When” function extracts anything between quotes in the feature file and passes it into the parameter of the step definition function. This is how we pass parameters from feature files to the automation layer.

There are also two new methods being used on the browser object: setValue and keys. The first method uses the selector input[name='q'], which targets Google's search box, and the second method sends a sequence of keys. The 'Enter' keyword is used to submit the input.

16. Now we are on to the final step. Modify the “Then” step in your step definitions file to match the following code:

17. Save this file and the browser will navigate to Google and perform the search and the console will show the following:

Explanation: The entire scenario has passed because all 3 steps passed. Brilliant!

This is what happened in the last step: a “link” parameter is passed in from the feature, and the “waitForExist” command from WebdriverIO is used. This command will wait for the element addressed by the selector to exist before continuing. If the element is not found, this method will time out.

npm install auto-creates package.json: when we run npm install, it auto-creates package.json in the root of your project. This command installs a package and any packages it depends on. The most important things in your package.json are the name and version fields; those are actually required, and your package won't install without them. Together, the name and version form an identifier that is assumed to be completely unique. Changes to the package should come along with changes to the version.
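For reference, a minimal package.json for this tutorial could look like the following (the name matches the tutorial directory; the dependency range is illustrative, not a pinned version from the original post):

```json
{
  "name": "chimp-tutorial",
  "version": "1.0.0",
  "dependencies": {
    "chimp": "*"
  }
}
```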

Future perspective/expectation from ChimpJS            

  • Chimp needs to simplify the way of writing XPaths, or provide a built-in function that converts Selenium XPaths to the corresponding JS XPaths.
  • The installation guide needs to be clearer and more detailed.
  • Better integration with Jenkins.

So that's how you get started with ChimpJS. And if you found this helpful, or are just curious about BDD, we also have a quick guide to installing Behat that you can check out.

Topics: Framework and Libraries

