Posts about Coding and Tutorial

4 Easy Steps to Implement SSO on your Drupal Website

Posted by Vinay Abhang on Jan 11, 2020 8:00:00 AM

Whether you run a website or simply browse one, there is always a risk of falling victim to a cyber attack. This has pushed businesses to scramble to secure their visitors' personal data.

Enter Google. 

Google offers a SAML-based Single Sign-On (SSO) service that acts as an identity provider to validate users, giving its partner companies full control over authentication and authorization.

SSO addresses major problems for enterprises large and small by providing:

  1. Greater security and compliance
  2. Enhanced usability and employee satisfaction
  3. Reduced IT costs

Secure single sign-on also strengthens users' trust in online companies while saving them time. Other open-source and commercial identity provider solutions, such as OneLogin, Centrify Identity Service, and Microsoft Azure Active Directory, can also help you implement SSO.

One of our clients wanted to make their app accessible via SSO using Okta. Since their website was built on Drupal 8, this blog illustrates how to implement SSO on a Drupal website, and how it spares individuals from managing multiple usernames and passwords.

The setup involves three components:

  • SimpleSAMLphp as the Service Provider (SP)
    SimpleSAMLphp is the robust and complete implementation of SAML (Security Assertion Markup Language) in PHP. Its integration with Drupal makes it possible for the Drupal website to communicate with SAML for authenticating users. Post the successful integration, the Drupal website acts as a SAML service provider (SP).
  • Okta as the Identity Provider (IdP)
Okta provides SSO access to cloud, on-premise, and mobile applications. Users can sign in to different web apps using their Okta credentials, eliminating the need to sign up for separate accounts and manage multiple sets of credentials. Okta establishes a secure connection with the user’s browser and then authenticates the user to Okta-managed apps.
  • Drupal 8 website


Ready? Now let's dig in!

In 4 Steps, Implement Single Sign-on on Drupal Website

Follow the steps below to implement SSO on a vanilla Drupal installation.

#Step 1: SimpleSAMLphp installation on the Drupal 8 website

1. Using composer, download the Drupal contributed module for SimpleSAMLphp. This will fetch all its dependencies. We will enable this module later.
`composer require drupal/simplesamlphp_auth`
2. In your Drupal website’s root directory, create a symbolic link to the web-accessible directory of the SimpleSAMLphp library, which is downloaded in the site’s vendor directory upon successful execution of the above composer command.
`ln -s vendor/simplesamlphp/simplesamlphp/www .` // (ln -s path/to/original/ path/to/link)
3. Add the following lines to the existing `.htaccess` file in the site’s root directory:
# Allow access to simplesaml paths.

RewriteCond %{REQUEST_URI} !^/simplesaml

`/simplesaml` is the path where we will have the SimpleSAMLphp installed on the Drupal website. This is the default path in the config file and can be modified as per preference.
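For context, here is a sketch of where that condition sits in a stock Drupal `.htaccess` (the surrounding directives vary by Drupal version; only the `RewriteCond` line is the actual addition):

```apacheconf
# Allow access to simplesaml paths.
# Place the condition with the other RewriteCond lines that guard Drupal's
# front controller, so that /simplesaml requests are not routed through
# index.php and reach the SimpleSAMLphp web directory instead.
RewriteCond %{REQUEST_URI} !^/simplesaml
RewriteRule ^ index.php [L]
```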


4. The configuration templates are present in the `vendor/simplesamlphp/simplesamlphp/config-templates` directory.

               a. Copy the `config.php` (holds SimpleSAMLphp configuration) file to the `vendor/simplesamlphp/simplesamlphp/config` folder and update the following values in the `$config` array:

'secret_salt' => 'anyrandomvalidstringofanylength',

'auth.adminpassword' => 'mypassword',

The `auth.adminpassword` will be used to login as administrator in the SimpleSAMLphp installation. The `technicalcontact_name` and `technicalcontact_email` can also be specified to receive error logs during installation.

SimpleSAMLphp must be configured to use something other than the “phpsession” (default) for session storage. The other alternatives are “memcache” or “sql”. The simplest solution would be to set:

'store.type' => 'sql',

'store.sql.dsn' => sprintf('mysql:host=%s;port=%s;dbname=%s', '', '', 'dbname'),

'store.sql.username' => 'root',

'store.sql.password' => 'root',

         b. Copy the `authsources.php` file to the `vendor/simplesamlphp/simplesamlphp/config` folder and leave it as is for now. (We will make the necessary modifications in this file after the Okta application setup.)

5. Now, we can visit `http://site.base.url/simplesaml` which will display the SimpleSAMLphp installation page upon successful installation as shown below:

SimpleSAMLphp Installation Page

#Step 2: Setting up the Okta application

For this, visit and log in using the company’s / client’s sign-in URL (assuming you are setting up Okta for your company or a client). If you just want to try it out as a developer, you can create a test account, which will provide you with a test Okta domain (similar to your company’s or client’s Okta domain).

Once you have successfully logged in, follow the below steps to add and set up a new application for your Drupal website in Okta:

1.  After login, the default Developer Console is displayed; switch to ‘Classic UI’ by hovering over ‘Developer Console’ in the top-left bar. Then click the ‘Add Applications’ option in the Shortcuts section on the right.

2.  Click on ‘Create New App’. Keep the Platform as Web and the Sign-on method as SAML 2.0, then click Create.

3.  On the next screen of General Settings, specify the metadata for the new app like name, logo and other visibility options. (This metadata will be used to display the app to the users in their Okta dashboard.)

4.  The next screen displays a form that generates the XML needed for the app’s SAML request.

    a.  Single Sign-on URL: The location where the SAML assertion is sent with an HTTP POST. This is often referred to as the SAML Assertion Consumer Service (ACS) URL for your application. This can be found under the Federation tab in the SimpleSAMLphp installation on the Drupal website. Click on Show metadata and look for AssertionConsumerService with HTTP-POST binding. The URL in the Location is the Single Sign-on URL value and is usually this:



Keep the ‘Use this for Recipient URL and Destination URL’ checkbox marked.
'Allow this app to request other SSO URLs’ can be selected if you need to add SSO URLs for dev, stage, and prod instances (separate text boxes will be provided for multiple URLs). Although multiple URLs are supported, only the default SSO URL is used when users access the app via their Okta dashboard. It is recommended to set the default to the production instance URL once the SSO implementation is live, while the local instance URL can be used during local testing, as shown above.

  b. Audience URI (SP Entity ID): The application-defined unique identifier that is the intended audience of the SAML assertion. This is most often the SP Entity ID of your application, and it can be any relevant unique string (e.g., okta_sso_drupal).

  c. All remaining fields, along with those under Advanced Settings, can be left at their defaults.

  d. Attribute Statements: This is optional, but specifying an attribute here helps when configuring the Drupal module, as these attributes are passed from Okta to the Drupal site after successful authentication. Specify Name as ‘email’, Name format as ‘Basic’, and select Value as ‘’. That’s it: click Next, fill in some relevant feedback for Okta, and then click Finish.

SAML Configuration Settings

5.  Now, you will be redirected to the Sign On tab, which has a ‘View Setup Instructions’ button. Clicking it opens a new browser tab with the information needed for the handshake between SimpleSAMLphp and Okta:

  a. Identity Provider Issuer (Entity ID of the IdP)

  b. IDP metadata (needs to be added at the SP end)


Assigning the Application To the Users:

Once we have successfully set up our application in Okta, we need to assign users to it. This can be done by navigating to the Assignments tab in the application setup and clicking on Assign.

Assigning the application to the user

#Step 3: Integrating SimpleSAMLphp and Okta

The entity ID and metadata of the IdP obtained from Okta need to be added to the SP (the SimpleSAMLphp-enabled Drupal site).

1. In the Drupal codebase, update the ‘entityID’ and ‘IdP’ values for default-sp in the $config array in
`vendor/simplesamlphp/simplesamlphp/config/authsources.php` file.

'entityID' => 'okta_sso_drupal', // Needs to be same as Audience URI or SP Entity ID specified in Okta.

'idp' => '', // Identity Provider Issuer or the Entity ID of the IdP obtained from Okta.

As the name suggests, default-sp is the default service provider. Similarly, we can add multiple SPs configured with different IdPs as separate keys in the $config array.

2. For adding the IdP metadata, copy the relevant templates available from the `vendor/simplesamlphp/simplesamlphp/metadata-templates` folder to the `vendor/simplesamlphp/simplesamlphp/metadata` folder (in this case, the file is saml20-idp-remote.php, since we are using SAML 2.0 and we need to specify metadata for remote IdP - Okta).

3. Since the obtained IdP metadata is in XML format and we need to add it in PHP, we will need to parse it. This can be done by navigating to our SimpleSAMLphp installation on our Drupal site and clicking on ‘XML to SimpleSAMLphp metadata converter’ under the Federation tab. (Administrator login is required for this, the username is admin and password is the one specified in ‘auth.adminpassword’ in config.php file.)

4. Copy the XML IdP metadata and paste it in the text area, click Parse and then copy the converted metadata in the PHP format and paste it in the `vendor/simplesamlphp/simplesamlphp/metadata/saml20-idp-remote.php` file.

5. Now check if the handshake is successful by clicking on the ‘Test configured authentication sources’ link under the Authentication tab in SimpleSAMLphp installation. There will be two options - admin and default-sp as defined in `authsources.php` file, click on default-sp and you will see a screen similar to the following one:

Handshake between SimpleSAMLphp and Okta

The above screen means the handshake between SimpleSAMLphp and Okta is successful.

#Step 4: Enabling SSO on the Drupal site

Now that SimpleSAMLphp is installed on our Drupal site, the Okta application is set up, and the handshake between the two is successful, we can enable SSO on our Drupal website by enabling and configuring the contributed Drupal module simplesamlphp_auth, which we downloaded earlier.

1. Install the module through the UI or via Drush:
`drush en simplesamlphp_auth -y`

Enabling the module changes the ‘Who can register accounts?’ setting to ‘Administrators only’ by default; this can be changed as needed at `/admin/config/people/accounts`.

2. Post-installation, navigate to `/admin/config/people/simplesamlphp_auth` to configure the module.

3. The module can be configured to your requirements and has descriptive help text for each setting. Some settings that need particular attention:

a. Basic Settings:

      1. The ‘Activate authentication via SimpleSAMLphp’ checkbox should be checked only once the module configuration is complete, as checking it disables the default Drupal user login for all users except the superuser (uid 1).
      2. The Authentication source is what we defined in `vendor/simplesamlphp/simplesamlphp/config/authsources.php` file, which is by default - default-sp.
      3. Federated Login Link Display Name is the link text for the SSO link which is displayed on the default login page of the Drupal website (/user/login).
b. Local Authentication:

      1. Local authentication means allowing users to log in to the site via Drupal authentication (/user/login).
      2. By default, only the superuser (uid 1) can log in with a local Drupal account. This can be configured to allow specific users (by user ID) or specific roles.

c. User info and syncing:

  1. We configured an `email` attribute in the Okta application, which is sent to the Drupal site after a successful login. Different attributes can be configured in Okta and used in the module configuration, but since we only have the `email` attribute, we will use it for all three settings:
     • SimpleSAMLphp attribute to be used as a unique identifier for the user
     • SimpleSAMLphp attribute to be used as the username for the user
     • SimpleSAMLphp attribute to be used as an email address for the user


4. All the other configurations can be kept as is, and now we can check the ‘Activate authentication via SimpleSAMLphp’ checkbox under Basic settings.

That’s it. Now we can test SSO on our Drupal site by visiting the default login page (/user/login) and clicking on the ‘Federated Login’ link, as seen in the screenshot below:

SSO Implementation Test

Summing Up

Single sign-on offers major user experience, productivity, and cost-saving benefits. By implementing it, your organization can respond faster and securely deliver a convenient, seamless user experience across your apps and other domains.

Topics: Drupal Planet, Coding and Tutorial, Drupal 8

Drupal 8 Image Optimization Modules To Scale Your User Experience

Posted by Rohit Tiwari on Nov 30, 2019 6:30:00 AM

According to the facts and figures in this study, more than 58 percent of people prefer their smartphones over a desktop or laptop for browsing information on the internet. And when those responsible for development decide to ship without any mobile-specific changes, users get annoyed. So much so that 39% of them stop engaging if images don’t load or take too long to load.

In this blog, we will explore some of the awesome Drupal 8 modules for image optimization and how they can help websites reach their desired user experience. 

Drupal 8 Modules For Image Optimization 

Fortunately, Drupal 8 ships with many useful, out-of-the-box image optimization modules, which makes it especially appealing to website owners considering an upgrade to Drupal 8.

Read on to find out about the modules that can help you with image optimization.

Responsive Image Module

The Responsive Image module in Drupal 8 encompasses an image formatter and breakpoint mappings to deliver responsive images using the HTML5 picture tag. It includes fallback support for Internet Explorer 8. To serve images in IE8 that are not tweaked for a mobile interface, configure the fallback in your display to use your desktop image size rather than “automatic”.

How to Set Up Responsive Images in Drupal 8

The following steps will help you set up the Responsive Image module.

Step 1: Enable the responsive image module

One of the major changes in building responsive images in Drupal 8 compared to Drupal 7 is that the Responsive Image module is part of Drupal core, so there is no need to download an extra module. However, it is not enabled by default.

  1. To enable the responsive image module, go to "Admin" > "Configuration" (/admin/config).
  2. Click the checkbox next to "Responsive Image".
  3. Click "Install".

Step 2: Setup breakpoints

If you are using a default theme like Bartik, there is no need to create a breakpoints.yml file. Default themes already have this file.

If you have a custom theme, go to your editor. In the root of your theme directory, create a file called "yourthemename.breakpoints.yml".
Your theme directory is usually found at "/themes/custom/yourthemename".
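For reference, a minimal breakpoints file might look like this (the theme name, breakpoint labels, and media queries below are illustrative assumptions; adjust them to your design):

```yaml
# yourthemename.breakpoints.yml - illustrative sketch
yourthemename.mobile:
  label: Mobile
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
yourthemename.tablet:
  label: Tablet
  mediaQuery: 'all and (min-width: 768px)'
  weight: 1
  multipliers:
    - 1x
yourthemename.desktop:
  label: Desktop
  mediaQuery: 'all and (min-width: 1200px)'
  weight: 2
  multipliers:
    - 1x
```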

Step 3: Setup the image styles for responsive images

We need to create several image sizes for different breakpoints. Add one image style for each breakpoint you create at your_theme_name.breakpoints.yml.

Step 4: Responsive image styles

We will now assign the image styles with the breakpoints, to create the Responsive Image styles.

Go to ‘Home > Administration > Configuration > Media’ (/admin/config/media/responsive-image-style) and click on ‘Add responsive image’.


Below is a comparison of the results once a responsive image style has been set up.


Without the Responsive Image  Module 

With the Responsive Image Module 









ImageMagick Module

Drupal by default ships with the GD2 image manipulation toolkit, which the image cache module uses to create differently sized variants of the same image. While GD does most of the work, it lacks some important features, such as GIF support with image styles and support for extra image formats like TIFF. This is where ImageMagick comes in, extending support for the GIF format with image styles.

Follow the given steps to start with ImageMagick-

1) Install the module by running the following command:

`composer require 'drupal/imagemagick'`

2) Enable the module, then go to ‘yoursite/admin/config/media/image-toolkit’

3) Select the ImageMagick image toolkit and configure the image quality to 100%.

By implementing the module, the following improvements can be observed:

1) GIF image format support is now enabled when used with an image style.

2) A 20-40% decrease in image size.


Please refer to the comparison below for detailed output (columns: Image Format / Toolkit, GD2 Toolkit, ImageMagick Toolkit).
Note: ImageMagick is preferable over the GD toolkit due to the functionalities it provides.

WebP Module

WebP is an image format developed by Google, capable of reducing image size by 20-50%. The WebP module has the following dependencies:

1) At least one image library (GD2, ImageMagick). In our case, we are using ImageMagick.

2) Images must be rendered using a responsive image style, as the WebP module works with the <picture> tag only. Any image not rendered with the picture tag is not converted to WebP and remains in its original format.

Note: In some browsers, WebP format is still in testing mode. WebP module detects the unsupported browser and renders the original format image instead of WebP format.
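For illustration, the markup the module targets looks roughly like this (the file paths are placeholders, not output from an actual site):

```html
<picture>
  <!-- Served to browsers that support WebP -->
  <source srcset="/sites/default/files/styles/large/public/photo.jpg.webp" type="image/webp">
  <!-- Fallback for browsers without WebP support -->
  <img src="/sites/default/files/styles/large/public/photo.jpg" alt="Example photo">
</picture>
```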

Use the below steps to get started with WebP - 

1) Install the module by running the following command:

`composer require 'drupal/webp'`

2) Enable the module and go to ‘yoursite/admin/config/media/webp/settings’

3) Configure the image quality to 100%.

Below are the improvements that can be noticed on the site, alongside a decrease in image size by 20-25%.

Please refer to the comparison below for detailed output (columns: Image Format / Configuration, With ImageMagick and without WebP, With ImageMagick and WebP).
Note: The size of GIF images remains the same in both cases, with and without WebP. The reason is that WebP does not support GIF images in some browsers. Hence, we excluded GIFs by applying a patch to the WebP module.

Summing up:

The reward for your efforts is website images that look vibrant and crisp on all touchpoints while still downloading efficiently. Users won’t leave your site disgruntled because images take forever to load, making a real difference to your engagement ratio, conversions, and other sales activities.

Drupal is a powerful, robust, and scalable website content management system that ensures every element on the website functions well to deliver a seamless digital experience to users. Using its modules, you can surely manage the images efficiently witnessing the boost in site performance.


(Co-authored by Sumit Kumar)

Topics: Drupal, Planet Drupal, User Experience and User Interface, Coding and Tutorial

Adding Tags With Drupal Taxonomy in 9 Steps

Posted by Urvashi Melwani on Sep 20, 2019 3:40:00 PM

A compelling content marketing taxonomy in Drupal isn’t just about enhancing the searchability of your content; it also helps ascertain content priorities based on what’s in an item. Yet, per this source, 56% of B2C marketers and 64% of B2B marketers didn’t even have a documented content marketing strategy.

A content marketing taxonomy aids content strategizing by organizing content in an easy-to-understand way for marketers to analyze data and gaps.

Let’s dive into this blog to learn how to use taxonomy for tagging content.

How to Use Taxonomy in Drupal to Tag Content?

Each set of taxonomy terms is part of a category set that you define, called a vocabulary. Terms in vocabularies can be broken down further and can contain sub-terms.

Therefore, it is of prime importance to first understand how to create a vocabulary.

1. Go to Manage >> Structure >> Taxonomy. By default, the Tags vocabulary is already here.
Here, we can add a vocabulary by clicking on + Add Vocabulary.

Rectangle box with text written inside

Enter the name of the newly created Vocabulary along with the short description.

Box with name and description fields inside

2. Click on the Save button. You will be redirected to the Ingredients page, which shows a list of all the terms that you have added in this vocabulary.

Box with text inside it

3. Now click on the Add term. Enter "Butter" in the Name field. Once done, click on the Save button.

Box with Add term title page

4. You will receive a confirmation about the term you created. You can also add more terms, like "Eggs" and "Milk".

5. In the Manage administrative menu, navigate to Structure > Content Types(admin/structure/types). Click Manage fields for your Recipe content type.

6. Click Add field, and enter values from the table below. Once done, click on the Save button and continue.

| Field name      | Explanation                 | Value                     |
|-----------------|-----------------------------|---------------------------|
| Add a new field | Select the field type       | Reference > Taxonomy term |
| Label           | The title to give the field | Ingredients               |

Rectangle box with Add field title page

| Field name                | Explanation                                        | Value         |
|---------------------------|----------------------------------------------------|---------------|
| Type of item to reference | The type of entity that is referenced by the field | Taxonomy term |
| Allowed number of values  | The number of values a user can enter              | Unlimited     |


Dropdown menu option in a box

8. On the following configuration screen, enter the values from the table below. Click Save Settings.

| Field name | Explanation | Value |
|---|---|---|
| Help text | Help shown to users creating content | Enter ingredients that site visitors might want to search for |
| Reference type > Reference method | Select the method used to choose allowed values | Default |
| Reference type > Vocabulary | Select the vocabulary to choose allowed values from | Ingredients |
| Reference type > Create referenced entities if they don’t already exist | Whether new ingredient terms can be created from the content editing form | |


Box with various fields and options

Click Save Settings. You will be taken back to the Manage Fields page. A message will pop up stating that the configuration for Ingredients is complete.

Box with manage fields page

And you’re done!

You can also watch the video shared below to learn further on setting up taxonomy.

Drupal 8 User Guide - 6.6. Setting Up a Taxonomy


Following the given steps will help you implement a clear and concise content marketing taxonomy in Drupal, which in turn will improve the readability of your editorial calendar. It will also allow all stakeholders and team members to know at a glance what kind of content you’re creating.

Happy Tagging!

Topics: Drupal, Planet Drupal, Coding and Tutorial

Tutorial: Inspect your Mobile Apps with Appium Desktop & Macaca Inspector

Posted by Komal Batra on Apr 26, 2019 5:04:00 PM

A high-quality and robust app can be produced only with regression testing. Mobile test automation increases test efficiency and effectiveness while also enabling faster time-to-market for apps.

While there are various test automation frameworks and tools like Monkey Talk, Robotium, Selendroid, Appium Desktop, UIAutomator, Testdroid etc. to execute test automation effectively and conveniently, we will discuss Appium Desktop and Macaca Inspector.

One of the most significant test automation frameworks, Appium provides a graphical frontend for starting a server and inspecting applications through Appium Desktop.

In simpler words, Appium Desktop is a graphical interface for starting an Appium server and inspecting an app's structure via Appium, while Macaca is an open-source automation test solution for native, hybrid, mobile web, and web applications on mobile and desktop platforms.

Here’s a step-by-step guide to inspect mobile applications using Appium Desktop Inspector and Macaca Inspector.

Getting Started

Using Appium Desktop

To write mobile automation scripts with Appium, we first need to run the Appium server and have a mechanism to identify the controls/elements of the mobile app we want to automate.

Assuming Appium is already installed globally, one way to launch the Appium server is via Appium Desktop.


To start the server, simply click the Start Server button.

Once the server has started, the Inspector option is displayed, which we can use to get the properties of the elements in the mobile application.

  1. Start your inspector session with Appium Desktop Inspector by clicking on the search icon.


 2. After the inspector session is launched, you will see the following view on your screen.


You can, now, inspect by creating a new session.

   a) Click on the ‘Desired Capabilities’ tab.
   b) Input the necessary desired capabilities in the ‘JSON representation’ tab.

3. Click on ‘Start Session’.

After this, the simulator with the configuration stated in the desired capabilities will be launched and the app will be installed. You can provide the app path through the app desired capability.
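As an example, a minimal set of desired capabilities for an Android session might look like the object below. Every value is a placeholder assumption; substitute your own device and app details.

```javascript
// Hypothetical desired capabilities for an Appium Inspector session.
// All values below are placeholders, not real device or app details.
const desiredCapabilities = {
  platformName: "Android",
  platformVersion: "9",          // placeholder OS version
  deviceName: "emulator-5554",   // placeholder device/emulator id
  app: "/path/to/your/app.apk",  // the app the session will install
  automationName: "UiAutomator2",
};

// This is the JSON you would paste into the ‘JSON representation’ tab.
console.log(JSON.stringify(desiredCapabilities, null, 2));
```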

Inspect with Existing Session

If we already have an Appium script:

   a. Run your existing Appium script and a session id will be generated.

   b. When we click on the inspector icon, the current session id is displayed automatically in the text box, as shown in the screenshot below.

   c. Next, click on the ‘Attach to Session’ tab to attach the current session to the Appium inspector.

The simulator, with the configuration stated in the script or in the desired capabilities, will be launched along with the Appium Desktop inspector view.


The Appium Desktop Inspector view will be displayed, which contains:

    1. The app view of the application
    2. The app source / DOM
    3. Properties of the selected element


Click an element in the app view whose locator you want to know; it is automatically selected in the source, and its properties are displayed on the right side.

The properties include the selectors: xpath, class, id, name, etc.

Inspecting Mobile Elements Using Macaca Inspector 

Install Application Inspector with  

$ npm i app-inspector -g

Launch from the cli with

$ app-inspector -u YOUR-DEVICE-ID

YOUR-DEVICE-ID is the UDID of the device in the case of iOS. For Android, simply launch your device first, then use adb to list all your devices.

$ adb devices

After running the `$ app-inspector -u YOUR-DEVICE-ID` command, open the URL it outputs in your browser.

Now Macaca App Inspector displays the view as shown below.

Simply click on an element in the app view to see the DOM/source in the next panel and the selected element's properties on the right side.


And that’s how you can inspect mobile application elements in a few easy steps. You can use these locators in your Appium scripts to automate mobile applications.

We provide a unified test automation framework and tools to enable easy automation for you. Not only does it save you many hours, it reduces operating costs too. Get in touch with our experts to learn more about high-quality test automation solutions.

Topics: Coding and Tutorial

API Automation using Postman

Posted by Deepshikha Singh on Mar 26, 2019 6:01:00 PM

If you are new to Postman, please visit my blog API Testing using Postman for an overview of what Postman is and how it  works.

Automation has now become the norm across sectors, and when used in testing, it can undoubtedly improve the depth and scope of the tests. Since it is not possible to test everything manually, using Postman automation can save time as well as effort.

Postman is the ultimate tool for API automation. In this blog, we will be taking a look at how to achieve API automation using Postman.

Example Scenario for Automation using Postman:

Let’s say a login API generates a token (OAuth, JWT, etc.) and refreshes it every hour. Subsequent API calls use this token to get the expected response. Without automation, we would have to copy and paste the token for each API every time.

And for a test suite containing 100 APIs, a good amount of manual effort goes into pasting the token for each API. By introducing automation, the process of updating the newly generated token can be automated.

Let’s automate the Login API through Postman

  1. Open Postman Application and create a collection.

  2. Then make a POST request to the Login API, passing the correct credentials (body parameters + headers).



3. Now click on Send button to see the response.


4. In the above response, we get an “access_token” key, which will be used in all subsequent APIs. Hence, we need to write custom code that defines a variable and updates its value on every hit to the Login API. Going forward, we use this variable for the access_token value and no longer need to copy and paste it for each API.

5. We can also create variables for host_name, protocol, etc., so that we don’t need to write the protocol and host name for each API; instead, we use the variables.

6. Another important aspect is checking for the correct response/status code and the response time of each API. We can write code for this task too, so that when we run the entire API test suite, we can easily view all failing tests that do not return the correct status code, as well as all APIs taking longer than an acceptable response time to execute.

7. Below is the sample code snippet which would solve all the above problems.
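Since the original snippet was shared as a screenshot, here is a hedged reconstruction of such a Tests-tab script. Postman test scripts are JavaScript; the `access_token` field comes from the example above, while the response shape and the small `pm` stand-in at the top are assumptions that only exist so the sketch runs outside Postman (inside Postman, keep just the lines below the stand-in).

```javascript
// Minimal stand-in for Postman's `pm` API so this sketch runs standalone.
const env = {};
const pm = {
  response: {
    code: 200,
    responseTime: 120, // milliseconds
    json: () => ({ access_token: "abc123" }), // assumed login response shape
  },
  environment: { set: (key, value) => { env[key] = value; } },
  test: (name, fn) => fn(),
  expect: (actual) => ({
    to: {
      eql: (expected) => {
        if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
      },
      be: {
        below: (limit) => {
          if (!(actual < limit)) throw new Error(`${actual} is not below ${limit}`);
        },
      },
    },
  }),
};

// --- The actual Tests-tab script ---
// Store the freshly issued token; later requests can reference it as
// {{access_token}} instead of a hard-coded value.
const body = pm.response.json();
pm.environment.set("access_token", body.access_token);

// Fail the run on a wrong status code or a slow response.
pm.test("Status code is 200", () => pm.expect(pm.response.code).to.eql(200));
pm.test("Response time is acceptable", () => pm.expect(pm.response.responseTime).to.be.below(1000));
```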


8. You can see the value of access_token is set in the environment variable as expected.


9. Also, under TestResults tab you can check if the assertions you wrote have been passed or failed.


10. Postman itself provides us with lots of code snippets, you just need to click on the desired code snippet and the generated code adds in the Tests tab to perform assertions/actions. To get the snippets, click on Tests tab and then on the arrow “<” to see the snippets.


11. If you want to add more custom code, you can always do so depending on your requirements, a couple of them are mentioned in the screenshots below:





That’s all for this blog. Happy Testing!!

Topics: API Management, Coding and Tutorial

How to Upload Audio Stream to Amazon S3

Posted by Trilok Nagvenkar on Jan 10, 2019 12:02:00 PM

Through this blog, I’ll walk you through uploading a live audio stream to Amazon S3 using the AWS SDK.

We will use certain AWS services, including Amazon Cognito Identity Pools (federated identities) and, of course, S3.

AWS Configurations

Assuming you have logged into the AWS console, let us get started by creating an S3 bucket, where all the audio files will be stored. To create the bucket, navigate to AWS S3 -> Create bucket.

Once the bucket is created, our next step is to create a Federated Identity which provides the necessary permission for a file upload from browser to S3 bucket.

To create the Federated Identity, navigate to the Cognito service -> Manage Identity Pools -> Create new identity pool.

Give the identity pool a name and check Enable access to unauthenticated identities, or configure Authentication Providers.

The next screen is all about setting the necessary permissions for the Federated Identity via IAM roles. Here, we will create a new IAM role with specific permissions defined via a custom policy, like the one below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                ...
            ],
            "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
        }
    ]
}

Post creation, Cognito will provide the Identity Pool ID. That ID is required to communicate with AWS services.

Front-end App

Now we will create a small front-end app to record and upload audio stream to S3.



<button type="button" class="btn kc record" id="record_q1" disabled="disabled" onclick="AudioStream.startRecording(">Record</button>


<button type="button" class="btn kc stop" id="stop_q1" disabled="disabled" onclick="AudioStream.stopRecording(">Stop</button>




We will create an AudioStream class containing the functions used in the above HTML events, as well as the ones used to upload the audio stream to S3.
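The snippets that follow reference several properties on the class (recorder, recordedChunks, etag, uploadId, incr, booleanStop) without showing their setup; a possible constructor, with inferred names, is sketched below. The s3 client would be an AWS.S3 instance created once the Cognito credentials are configured.

```javascript
// Sketch of the AudioStream state (property names inferred from the
// snippets below; treat the constructor signature as an assumption).
class AudioStream {
  constructor(s3Client, bucketName, filename) {
    this.s3 = s3Client;       // AWS.S3 instance used for the multipart calls
    this.bucketName = bucketName;
    this.filename = filename; // S3 object key for this recording
    this.recorder = null;     // MediaRecorder, created in audioStreamInitialize
    this.recordedChunks = []; // blobs received from dataavailable events
    this.etag = [];           // ETags returned by uploadPart, in part order
    this.uploadId = "";       // id returned by createMultipartUpload
    this.incr = 0;            // current part number (1..10,000)
    this.booleanStop = false; // set to true when the user clicks Stop
  }
}
```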


1- The audioStreamInitialize function is used to request microphone permission and, on receiving data, create a multipart upload.

  audioStreamInitialize() {
    var self = this;
    // Feature detection is a simple check for the existence of
    // navigator.mediaDevices.getUserMedia. To use the microphone we need
    // to request permission; the parameter to getUserMedia() is an object
    // specifying the details and requirements for each type of media we
    // want to access. For the microphone it should be { audio: true }.
    navigator.mediaDevices.getUserMedia({ audio: true })
      .then(function(stream) {
        // Creates a new MediaRecorder object, given a MediaStream to record.
        self.recorder = new MediaRecorder(stream);

        // Handle the dataavailable event, which is periodically triggered
        // each time timeslice milliseconds of media have been recorded
        // (or when the entire media has been recorded, if timeslice wasn't
        // specified). The event, of type BlobEvent, contains the recorded
        // media in its data property.
        self.recorder.addEventListener('dataavailable', function(e) {
          var normalArr = [];
          // Here we push the stream data to an array for future use,
          // and create a blob from the stream data we have received.
          self.recordedChunks.push(e.data);
          normalArr.push(e.data);
          var blob = new Blob(normalArr, {
            type: 'audio/webm'
          });

          // If the length of recordedChunks is 1, it is the first part of
          // our data, so we call createMultipartUpload, which returns an
          // upload id used to upload the other parts of the stream.
          if (self.recordedChunks.length == 1) {
            self.startMultiUpload(blob, self.filename);
          } else {
            // self.incr is the part number: a positive integer
            // between 1 and 10,000.
            self.incr = self.incr + 1;
            self.continueMultiUpload(blob, self.incr, self.uploadId, self.filename, self.bucketName);
          }
        });
      });
  }


Start and Stop Recording:

1- The startRecording function is triggered when we start a new recording.

  startRecording(id) {
    var self = this;
    // The MediaRecorder method start(), part of the MediaStream Recording
    // API, begins recording media into one or more Blob objects. You can
    // record the entire duration of the media into a single Blob (or until
    // you call requestData()), or you can specify the number of
    // milliseconds to record at a time. Each time that amount of media has
    // been recorded, an event is delivered to let you act upon the
    // recorded media, while a new Blob is created to record the next slice.
    //
    // 1800000 is the number of milliseconds to record into each Blob. If
    // this parameter isn't included, the entire media duration is recorded
    // into a single Blob unless requestData() is called. You can change
    // this value, but each part of a multipart upload must be at least
    // 5 MB, as that is the minimum part size S3 accepts.
    self.recorder.start(1800000);
  }


2- stopRecording

  stopRecording(id) {
    var self = this;
    // When the stop() method is invoked, the UA queues a task that runs
    // the following steps:
    // 1 - If MediaRecorder.state is "inactive", raise a DOM InvalidState
    //     error and terminate these steps; otherwise continue.
    // 2 - Set MediaRecorder.state to "inactive" and stop capturing media.
    // 3 - Raise a dataavailable event containing the Blob of data that
    //     has been gathered.
    // 4 - Raise a stop event.
    self.recorder.stop();
  }


Uploading to AWS S3 Audio Streaming:

1- startMultiUpload

  startMultiUpload(blob, filename) {
    // Initiates a multipart upload and returns an upload id, which is
    // used to upload the other parts of the stream.
    var self = this;
    var audioBlob = blob;
    var params = {
      Bucket: self.bucketName,
      Key: filename,
      ContentType: 'audio/webm',
      ACL: 'private'
    };
    self.s3.createMultipartUpload(params, function(err, data) {
      if (err) {
        console.log(err, err.stack); // an error occurred
      } else {
        self.uploadId = data.UploadId;
        self.incr = 1;
        self.continueMultiUpload(audioBlob, self.incr, self.uploadId, self.filename, self.bucketName);
      }
    });
  }


2- continueMultiUpload

  continueMultiUpload(audioBlob, PartNumber, uploadId, key, bucketName) {
    // Uploads a part in a multipart upload, specifying a part number for
    // the part data. The upload id is the same one returned by the
    // initiate-multipart-upload call.
    var self = this;
    var params = {
      Body: audioBlob,
      Bucket: bucketName,
      Key: key,
      PartNumber: PartNumber,
      UploadId: uploadId
    };
    self.s3.uploadPart(params, function(err, data) {
      if (err) {
        console.log(err, err.stack); // an error occurred
      } else {
        // Once the part is uploaded we get an entity tag (ETag) for the
        // uploaded object, which is used later when we complete the
        // multipart upload.
        self.etag.push(data.ETag);
        // If recording has been stopped, assemble the uploaded parts.
        if (self.booleanStop == true) {
          self.completeMultiUpload();
        }
      }
    });
  }

3- completeMultiUpload




  completeMultiUpload() {
    // Completes a multipart upload by assembling previously uploaded parts.
    var self = this;
    var outputTag = [];
    // Here we construct the ETag data in the required format.
    self.etag.forEach((data, index) => {
      const obj = {
        ETag: data,
        PartNumber: ++index
      };
      outputTag.push(obj);
    });

    var params = {
      Bucket: self.bucketName, // required
      Key: self.filename, // required
      UploadId: self.uploadId, // required
      MultipartUpload: {
        Parts: outputTag
      }
    };

    self.s3.completeMultipartUpload(params, function(err, data) {
      if (err) {
        console.log(err, err.stack); // an error occurred
      } else {
        // initialize variables back to normal
        self.etag = [], self.recordedChunks = [];
        self.uploadId = "";
        self.booleanStop = false;

        alert("we have successfully saved the questionnaire..");
      }
    });
  }


This is how you can upload live streaming audio to an S3 bucket from the front-end using the AWS SDK.

To find a working example, refer to my blog on uploading audio stream to AWS S3.


Topics: AWS, Coding and Tutorial

How to Secure Alexa Account Linking

Posted by Yashica Gupta on Jan 10, 2019 11:56:00 AM

With the ongoing increase in human-device interaction, Alexa devices have found a strong place in the market. Echo devices are now placed in home and offices to control the lights, check news, get the status of a task etc with just voice command. Every user now has their private (virtual) assistant to make their life easier.

But an important part of the chain is the Alexa skill developer, whose aim is to build a skill that reduces the user's manual work and makes their life convenient. Though developing an Alexa skill is not difficult, there are many challenges that developers face while building one, especially when it requires another software/app.
Currently, an Alexa skill which requires another software/app needs the account linking feature to be enabled.
For example: to enable and use the Uber Alexa skill, the user needs to link his Uber account with Alexa. Once the user links his account, the Uber software sends an access token to the Alexa skill as a unique key for the user, and the account linking is complete. The next time the user invokes the Uber Alexa skill, the request sends the access token to the Uber software and fetches the information.

We faced the same blocker while developing an Alexa account linking skill closely integrated with Jira software. The Alexa skill is built primarily for Scrum Masters to read and write to their Jira and help them stay up-to-date with their projects.

The most challenging part of developing this skill was account linking, because Jira needs server-client authentication, i.e., to link an account every user has to manually add our Alexa skill as an authorized client in their Jira dashboard and then provide us with the access token.

The solution implemented to reduce the inconvenience was to create a custom HTML page (hosted on S3) for account linking. The user just needs to add his credentials - username, password, and the Jira server URL - and the account will be linked successfully.

As of now, we were not using Jira directly to authenticate users via account linking, but rather as a message carrier between Alexa and Jira. This makes the account linking process easy for users but poses a high security risk to their credentials.

To make the process secure, the following architecture was implemented in our skill. One of the key components of the architecture is that it is built completely on AWS services, namely :

  1. API Gateway

  2. S3 bucket

  3. Lambda

  4. DynamoDB 


Explanation :

When a user enables the Alexa skill, he is redirected to our HTML page hosted on an S3 bucket. Once the user fills in his credentials and clicks the submit button, it sends a GET request, with the credentials as query parameters, to an endpoint deployed on AWS API Gateway.

The API Gateway then invokes a connected Lambda function and sends the query parameters as an event.


Using the parameters in the event, the Lambda sends a GET request to the Jira REST API to validate and authenticate the user. If the credentials are incorrect, it returns an error message; on successful validation, it returns a success message with an access token created by the Lambda, and also stores the encoded user credentials in a DynamoDB table with the access token as the key.

def lambda_handler(event, context):
    print(event)
    username = event["username"]
    password = event["password"]
    server = event["server"]
    skill = event["skill"]
    table_name = ""
    if skill == "skill_name":
        table_name = "Table_name"

    # Validate the credentials against the Jira REST API
    a = validity(password, username, server)

    # Convert the event into the DynamoDB item format
    js = {}
    for k, v in event.items():
        js[k] = {"S": v}

    # Store the password encrypted (as a binary attribute) via KMS
    js["password"] = {
        "B": encrypt(session, event["password"], "alias/alias_name")
    }

    # Generate a random access token to use as the user's key
    accesstoken = ''.join(random.SystemRandom().choice(
        string.ascii_uppercase + string.digits) for _ in range(10))
    js["accesstoken"] = {"S": accesstoken}

    dynamodb_client.put_item(TableName=table_name, Item=js)
    return a

The JavaScript then displays an “Invalid Credential” error message if an error response is received. In case of a success message, the JavaScript sends the access token to the Alexa redirect URL and thus successfully links the account.


 The access token is the main component as it is used to identify the user.

When the user invokes our skill, Alexa sends a JSON request to the Lambda function with a key carrying the access token. The Lambda then queries the DynamoDB table with the access token to identify the user and fetch his credentials. Once the credentials are fetched, the Lambda sends a request to the Jira REST API based on the user's intent and returns the message to Alexa as JSON.
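That lookup step can be sketched as follows (function and attribute names are assumptions; a stub stands in for the AWS SDK's DynamoDB client so the sketch can be dry-run, and the real Lambda would also decrypt the stored password via KMS before calling Jira):

```javascript
// Look up the user's stored credentials by access token. `dynamodb` is any
// client exposing getItem in the AWS SDK callback shape.
function fetchCredentials(dynamodb, tableName, accessToken, callback) {
  dynamodb.getItem(
    { TableName: tableName, Key: { accesstoken: { S: accessToken } } },
    function (err, data) {
      if (err || !data.Item) return callback(err || new Error("unknown access token"));
      // The password attribute is stored encrypted ("B") and would be
      // decrypted via KMS before the Jira REST API call.
      callback(null, { username: data.Item.username.S, server: data.Item.server.S });
    }
  );
}

// Stub standing in for new AWS.DynamoDB() so the sketch runs offline:
const stubClient = {
  getItem: (params, cb) => cb(null, {
    Item: { username: { S: "scrum.master" }, server: { S: "https://jira.example.com" } },
  }),
};

fetchCredentials(stubClient, "Table_name", "ABC123XYZ0", (err, creds) => {
  if (err) throw err;
  console.log(creds.username + " @ " + creds.server);
});
```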

Alexa then voices the message to the user, enabling them to use Jira with just their voice!

Topics: AWS, Architecture, Coding and Tutorial

Lex - Lambda policy limitation [Solved]

Posted by Sanjay Rohila on Dec 11, 2018 3:19:00 PM
Error: Maximum policy size of xxx bytes exceeded for Lambda xxx.

We can assign a Lambda function to Lex intents. This gives our bot lots of power; we can do a great deal with Lambda. But there is a problem: when we assign a Lambda to an intent, Lex asks for invocation permission, and when we grant it, that permission is added as a statement to the Lambda's function policy. This works fine until we have so many intents that we exceed the limit on the policy document length. Once we have added the Lambda to so many intents that the function policy document is at its limit, we can't assign that Lambda to any more intents.


The problem is that it's not easy to find where these policies go each time we grant permission for an intent: they are hidden under a small button (screenshots below). This button is only visible if you have access to the lambda:GetPolicy action.




The solution would be: instead of adding permission for every intent individually, add a single permission covering all intents. That reduces the policy document size and we can live peacefully. But we can only view the function policy document in the console (if we have the lambda:GetPolicy permission); we can't modify it from the console interface. We have to do it via APIs - either the command-line interface or the SDKs. The API that can do this is lambda:AddPermission. If you are a Python expert, just use the boto3 API and call add_permission.

Function Policy Document (before):

{
  "Statement": [
    {
      "Sid": "lex-us-east-1-my_first_intent",
      "Effect": "Allow",
      "Principal": {
        "Service": ""
      },
      "Action": "lambda:invokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:my_first_intent:*"
        }
      }
    },
    {
      "Sid": "lex-us-east-1-my_second_intent",
      "Effect": "Allow",
      "Principal": {
        "Service": ""
      },
      "Action": "lambda:invokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:my_second_intent:*"
        }
      }
    }
    # ... all other intents in which this lambda is assigned
  ]
}

Function Policy Document (after):

{
  "Statement": [
    {
      "Sid": "lex-us-east-1-my_bot",
      "Effect": "Allow",
      "Principal": {
        "Service": ""
      },
      "Action": "lambda:invokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:*"
        }
      }
    }
  ]
}

Topics: AWS, Coding and Tutorial

Testing with Emulator: A step-wise guide

Posted by Deepshikha Singh on Oct 19, 2017 2:52:00 PM

What is an Emulator?

An Emulator is an application that emulates real mobile device software, hardware, and operating systems, allowing us to test and debug our application. It is generally provided by the device manufacturer. Mobile emulators are free and provided as a part of SDK with each new OS release. As a developer or a tester, you can configure the emulator to closely resemble the devices on which you plan to deploy your application.

The emulator window consists of a device screen on the left and phone controls/keyboard on the right. The device screen initially displays ‘ANDROID‘, and then displays the graphical logo, while the Android platform associated with the AVD is initializing.

What is AVD Manager?

AVD Manager is a tool to create and manage Android Virtual Devices(AVDs), which define device configuration for the Android Emulator. Before you can actually run an emulator, you need to install the Android SDK on your machine and define an AVD, which defines the hardware characteristics of the emulator. You can define things like the device RAM, whether there is touch screen and/or keyboard, camera support, audio playback support, etc. You can create several AVDs to test your device on several virtual devices.

How to Create an Android Virtual Device?

Once the installation of all the prerequisites is done, we need to launch the SDK Manager to download the relevant files to create an emulator and run the virtual device.

  • In the command line (cmd), type android sdk. It will open the SDK Manager for you to download the relevant files
  • In the SDK Manager select the files as shown below. This will help you create a virtual device

There are two ways to create Android Virtual Devices for mobile testing:

  • Using Command Line
  • Using AVD Manager

Creating a New Android Virtual Device using AVD Manager

The first step is to launch the AVD Manager and for that, there are a number of options you can follow:

  • Launch AVD Manager using Program Menu : Go to Start →  All Program →  Android SDK Tools → AVD Manager
  • Launch AVD Manager using Command Line : Go to Run and type cmd to open command prompt window. Type: android avd
  • Launch AVD Manager from folder location : Browse to Android SDK folder and Double-click on AVD Manager.

Any of the above ways will open the Android Virtual Device Manager, which helps you create virtual devices. The AVD Manager main screen shows one default virtual device, the Nexus 5.

1. Click on Create button to create a new AVD

Create button to create a new AVD

2. A pop-up will open; follow the below screenshot for the values. In case you are simulating a specific device, you might want to name it as such. For example, NexusSix-4.4 refers to an AVD that simulates a Nexus Six running Android version 4.4.2.

NexusSix-4.4 refers to an AVD

In this test I chose a very generic name, Android, to keep it simple.

Note: You can play around with different configurations for your virtual device.

 3. Click on OK to proceed with the save changes.

Click on OK to proceed

4. After you complete above steps, the emulator will be displayed under the Android Virtual Devices tab, configured on the AVD Manager. Now select the AVD name and click on Start on the right.

AVD name and click on Start on the right

5. This would launch a pop-up with a few options; choose as you want. Once done, click on Launch, and this will launch the emulator.

click on Launch, this will launch the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to start from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot.
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

6. The emulator will launch; it will take a few minutes to display the Home Screen of the virtual Android device. The time taken is roughly equal to the switch-on time of a real mobile device.


  • You can use the command ‘adb devices‘ to see if the adb is detecting the emulator. This basically completes the Android SDK installation part.
  • If the ANDROID logo appears for more than 15-30 minutes, something has probably gone wrong. Reboot your computer, start the AVD Manager, delete the created ‘Android’ AVD, recreate it, and relaunch the new AVD.

The emulator window launches with 5554:Android in its title bar. The value 5554 identifies a console port that you can use to query and control the AVD’s environment. Android supports a maximum of 16 concurrently executing AVDs, where each AVD is assigned an even-numbered port starting at 5554. This means that we can initialize more than one AVD at the same time and test the app with parallel execution.

Creating an Android Emulator using Command Line

Android Emulator can be configured differently for simulating different android devices. With the help of AVD configuration, Android Emulator can simulate:

  • Target platforms versions
  • Screen sizes
  • Resolutions
  • Input methods
  • Hardware configurations
  • External storage sizes for SD card

Although there are a good number of default device configurations present in AVD Manager, if you have any particular device in mind that you want to emulate, you need to create an AVD that resembles the features of your target device. For example, you need to know the screen size and resolution and so on. 

Creating Default Device Definition

1. Go to Device Definitions tab and select one of the presets and click ‘Create AVD’

2. Every default device will provide you some preset AVD and that can be changed according to your need

how 6

Note: Notice that the OK button is disabled, just because under CPU/ABI it says that “No system Images installed for this target“. To avoid this, select different Device configuration from Device & select any skin or select none for skin and proceed.

Below are the list of optional and mandatory settings:

  • AVD NAME: Choose a name for the AVD, like NexusSix-4.4, that refers to your device configuration
  • Device: Select the AVD resolution as per the device options
  • Target: Select the Android version / Test Environment or Target Platform
  • CPU/ABI: Select the CPU architecture that matches an installed system image, e.g. Intel Atom (x86) or ARM (armeabi-v7a)
  • Keyboard: Select this box to use the keyboard in the AVD
  • Skin: Select this box to get the hardware buttons
  • Front Camera: If the system has a webcam, it can be used with the AVD
  • Back Camera: If the system has a webcam, it can be used with the AVD
  • Memory Options: Set RAM & VM Heap for the device according to your need
  • Internal Storage: Set this as per your need and Select GiB/MiB from the drop down
  • SD Card: Set this as per your need and Select GiB/MiB from the drop down
  • Emulation Options

- Snapshot: Select this to persist the AVD state; it lets you quickly start the emulator after the first startup.

- Use Host GPU: Select this to use the computer graphics

3. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab, configured on the AVD Manager. Now Select the newly added AVD name and click on Start on the right.

4. This would launch the pop up with few options, you may choose as you want. Once done click on Launch, this will launch the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to be started from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

Once done, Emulator will launch. It would take few minutes to display the Home Screen of the virtual Android device.

Creating a Custom Device Definition

In case the available device definitions do not match the device type you’d like to emulate, you can create a custom device definition for your AVD:

1. If the AVD manager is opened, go to Device Definitions

2. Click on Create Devices.

3. The Configure Hardware Profile window will display and it will allow you to specify various configurations such as the screen size, memory options, input type, and sensors.

Note: Once all the information is filled properly, Create Device button will get enabled and you may proceed then.

4. After you complete above steps, the emulator will be displayed under the Device Definition tab, configured on the AVD Manager. Now Select the newly created AVD name and click on Create AVD on the right.

Automation using Android emulator

  • Get Emulator Platform Version
  • Unlock Android emulator screen
  • Go to Settings. You will find About Phone under settings.
  • Go to About Phone. It will show you Android version
  • Verify calculator App Is Available In Emulator 


We are going to run an Appium test for the calculator application, so it should be present in the emulator. Generally, the calculator app comes pre-installed in the emulator. To check whether it is installed:

1. Unlock emulator

2. Verify if there is an application with name Calculator

3. Get the app activity and package name. We need the launcher activity and the package name of the calculator app. The activity and package name of the calculator app for me are:

  • Package name :
  • Activity name :

4. Create the Appium test script in Eclipse. Now we are ready to create and run our first Appium test on the Android emulator for the calculator application. I have prepared the Appium test script as below, using Selenium WebDriver's RemoteWebDriver to launch the app with the required capabilities.
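As a minimal sketch, the desired capabilities such a test script would set look like this (the package and activity values shown are the stock Android calculator's and are assumptions; verify them on your emulator image):

```javascript
// Desired capabilities for an Appium session against the emulator created
// above. The deviceName matches the AVD name used earlier in this post.
const desiredCaps = {
  platformName: "Android",
  deviceName: "Android",                             // the AVD name created above
  appPackage: "com.android.calculator2",             // assumed package name
  appActivity: "com.android.calculator2.Calculator", // assumed launcher activity
};

// These capabilities would be passed to the RemoteWebDriver / Appium client
// pointed at the Appium server, typically http://127.0.0.1:4723/wd/hub.
console.log(desiredCaps.appPackage + "/" + desiredCaps.appActivity);
```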

5. To start an instance of the emulator from the command line, navigate to the tools/ folder of the SDK. Enter emulator command like this: emulator -avd <avd_name> [<options>]

So that was all about performing Appium testing on emulator. In case you are just starting off with mobile application testing, you can explore our list of the best tools for mobile test automation. You can also check out our Appium tutorial webinar to level up your mobile testing skills.

Topics: Coding and Tutorial, QA and Testing

Building a code review process that works

Posted by love.huria on Sep 25, 2017 2:57:00 PM

A while back, I talked about the need for maintaining coding standards and how it simplifies the development process for the whole team. The next step is putting in place a code review process.

Why Is It Important?

Code reviews are crucial for:

  • knowledge transfer
  • avoiding small/common mistakes
  • maintaining best practices throughout the dev team


Let’s take my current team for example. There are around eleven developers on the team, all producing code which needs to be reviewed. So yeah, that’s a whole lot of code!

Pushing code to production is easy. Anyone can do it, right? What concerns us is the quality of code pieces we are going to deploy. 

The code can be completely fine or it can be a piece which makes everything fall apart. To maintain high code quality, we all need to have peer code reviews. This does not mean that the team writes bad code that needs to be checked. We all are on the same team and we have a common goal, that is to deliver the highest quality product. And a code review process makes sure someone on the team catches the errors that somebody else might have missed.

You must be thinking, “Is it worth it”?  Absolutely yes. 

Not having a code review process integrated into projects can result in big problems. Toyota had to settle a $3 million lawsuit because they did not pay enough attention to code reviews. 

There were a lot of reasons why this incident happened and one of the reasons was an absence of a peer code review. After reviewing their source code they found possible bit flips, task deaths that would disable the fail-safes, memory corruption, single-point failures, inadequate protections against stack overflow and buffer overflow, single-fault containment regions, and more. The list of deficiencies in process and product was lengthy.

Obviously then, it makes business sense to make code reviews a critical part of your development process.

How Do We Do It?

General Development Process


General development Process: code review process

This is our general flow which is being followed in most of the projects. Of course, columns may vary depending on the different aspects, clients, and projects.

Assigning responsibility

We have set a few guidelines around who will be responsible for the code review process:

  • There will be one senior and one junior code reviewer for each ticket.
  • Reviews will be done right after the daily standup, depending on how long it takes
  • When there are tickets added on the board, one person will review at least one ticket in a day
  • It is both the code reviewer and developer’s responsibility to ensure that all tickets are in respective columns according to the latest update
  • If there is any feedback from the senior code reviewer, it’s the junior code reviewer's responsibility to look at what they missed

Github Code review flow

Maintaining your code review process on GitHub is super easy. We can create new projects on our repo and use it as we want. 

Setting projects on GitHub: Code Review Process

Generally, we have four columns, which are:

  • Ready for review: You can add cards (pull requests) to this column if your code is ready to be reviewed
  • In review: Now it’s the code reviewer’s responsibility to move the card to "In review" column so that it gets updated on the branch and the concerned dev understands that his/her ticket is currently being reviewed
  • Change Requested: Again it’s the code reviewer’s responsibility to move the card to this column if the review has failed standards. Then concerned dev will fix the issue and push the ticket back to “Ready for review”
  • Closed/Done: If the card is in the  “Closed” column, that means the PR has passed the requisite coding standards

 Things to look out for

These are the aspects which we consider while reviewing code: 

Making sure the process is followed

We regularly review our way of working against a documented process to identify if there are any gaps in our performance. We also look for ways to improve the process and ensure that it’s not a burden/blocker for anyone’s work.

The process is clearly defined and maintained on different platforms, depending on the client and teams involved. We usually use Confluence and highly recommend it to anyone who is reading this post.

Learning from our mistakes

We maintain a code feedback sheet where we mention common mistakes we need to avoid. Everyone on the team has access to it and can add points on where we can improve, new techniques to achieve certain functionality, coding patterns to avoid etc.

Goals of a Coding Review Process

With a finely tuned code review process in place, development teams can:

  • Enhance the learning of individuals to become better programmers
  • Improve the quality of the codebase, even as it grows more complex as we scale
  • Focus on not just quantity, but quality of deliverables throughout
  • Maintain discipline within the team and understand the seriousness of spaghetti code

That’s about it! There would definitely be things I missed here or certain code review practices that are unique to your team. Leave a comment to let us know.

Topics: Architecture, Coding and Tutorial

