Drupal 8 Image Optimization Modules To Scale Your User Experience

Posted by Rohit Tiwari on Nov 30, 2019 6:30:00 AM

According to this study, more than 58 percent of people prefer their smartphones over a desktop or laptop to browse information on the internet. And when those responsible for development decide to go ahead without any changes for mobile, users start getting annoyed. So much so that 39% of them stop engaging if images don't load or take too long to load.

In this blog, we will explore some of the awesome Drupal 8 modules for image optimization and how they can help websites reach their desired user experience. 

Drupal 8 Modules For Image Optimization 

Fortunately, Drupal 8 ships with several useful image optimization modules out of the box, which makes it especially appealing to website owners considering an upgrade to Drupal 8.

Read on to find out how these modules can help you with image optimization.

Responsive Image Module

The Responsive Image module in Drupal 8 combines an image formatter with breakpoint mappings to deliver responsive images using the HTML5 picture tag. It also includes fallback support for Internet Explorer 8. To serve IE8 images that are not tweaked for a mobile interface, you'll need to configure the fallback in your display to use your desktop image size rather than "automatic".

How to Set Up Responsive Images in Drupal 8

The following steps will help you set up the Responsive Image module:

Step 1: Enable the responsive image module

One of the major changes from Drupal 7 is that in Drupal 8 the Responsive Image module is part of core - there is no extra module to download. However, the feature is not enabled by default.

  1. To enable the responsive image module, go to "Admin" > "Configuration" (/admin/config).
  2. Click the checkbox next to "Responsive Image".
  3. Click "Install".

Step 2: Setup breakpoints

If you are using a default theme like Bartik, there is no need to create a breakpoints.yml file. Default themes already have this file.

If you have a custom theme, go to your editor. In the root of your theme directory, create a file called "yourthemename.breakpoints.yml".
 
Your theme directory is usually found at "/themes/custom/yourthemename".
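For reference, here is a minimal sketch of what that file can contain, with two illustrative breakpoints (the breakpoint names and media query are assumptions; adjust them to your design):

yourthemename.mobile:
  label: mobile
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
yourthemename.desktop:
  label: desktop
  mediaQuery: 'all and (min-width: 960px)'
  weight: 1
  multipliers:
    - 1x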
 

Step 3: Setup the image styles for responsive images

We need to create several image sizes for different breakpoints. Add one image style for each breakpoint you define in yourthemename.breakpoints.yml (image styles are created under Configuration > Media > Image styles).
 

Step 4: Responsive image styles

We will now map the image styles to the breakpoints to create the responsive image styles.

Go to ‘Home > Administration > Configuration > Media’ (/admin/config/media/responsive-image-style) and click on ‘Add responsive image style’.
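Once a responsive image style is applied to an image field's display, Drupal renders markup along these lines (a simplified sketch; the style names and file paths depend on your configuration):

<picture>
  <source srcset="/sites/default/files/styles/desktop/public/photo.jpg" media="all and (min-width: 960px)" type="image/jpeg">
  <img src="/sites/default/files/styles/mobile/public/photo.jpg" alt="...">
</picture>

The browser downloads only the source that matches the current breakpoint, which is where the bandwidth savings come from.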

 

Below is an example of the difference a responsive image style can make once it is set up.

(The original post included a table of screenshots comparing the Desktop, Tablet, and Mobile renderings of a page with and without the Responsive Image module.)

ImageMagick Module

Drupal by default comes with the GD2 image manipulation toolkit, which image styles use to create differently sized derivatives of the same image. While GD does most of the work, it lacks some important features, such as applying image styles to GIFs and support for extra image formats like TIFF. This is where ImageMagick comes in, extending support for the GIF format with image styles.

Follow the given steps to start with ImageMagick-

1) Install the module by running the following command:

composer require 'drupal/imagemagick'

2) Enable the module, then go to ‘yoursite/admin/config/media/image-toolkit’

3) Select the ImageMagick image toolkit and configure the image quality to 100%.

By implementing the module, the following improvements can be observed:

1) GIF images are now supported when used with an image style.

2) 20-40% decrease in image size

 

Please refer to the table below for the detailed output.

(The original post included a table of screenshots comparing PNG, JPG, and GIF output under the GD2 toolkit versus the ImageMagick toolkit.)

Note: ImageMagick is preferable over the GD toolkit due to the additional functionality it provides.

WebP Module

WebP is an image format developed by Google, capable of reducing image size by 20-50%. The WebP module has the following dependencies:

1) At least one image library (GD2, ImageMagick). In our case, we are using ImageMagick.

2) Images must be rendered using a responsive image style, as the WebP module works with the <picture> tag only. Any image not rendered using the picture tag is not converted to WebP and remains in its original format.

Note: Some browsers do not yet support the WebP format. The WebP module detects unsupported browsers and serves the image in its original format instead.

Use the below steps to get started with WebP - 

1) Install the module by running the following command

composer require 'drupal/webp'
2) Enable the module and go to path ‘yoursite/admin/config/media/webp/settings’

3) Configure the image quality to 100%.

With WebP enabled, we saw image sizes decrease by a further 20-25%. Please refer to the table below for the detailed output:

(The original post included a table of screenshots comparing PNG, JPG, and GIF sizes with ImageMagick alone versus ImageMagick plus WebP.)

Note: The size of the GIF image remains the same in both cases - with and without WebP. This is because WebP's GIF conversion is not supported in some browsers; hence, we excluded GIFs by applying a patch to the WebP module.

Summing up:

The reward for your efforts is website images that look vibrant and crisp on all touchpoints while still downloading efficiently. Users no longer leave your site disgruntled because images take forever to load, and that makes a real difference to your engagement, conversions, and other sales metrics.

Drupal is a powerful, robust, and scalable content management system that ensures every element on the website works toward a seamless digital experience. With the modules above, you can manage images efficiently and see a corresponding boost in site performance.

 

(Co-authored by Sumit Kumar)

Topics: Drupal, Planet Drupal, User Experience and User Interface, Coding and Tutorial

Adding Tags With Drupal Taxonomy in 9 Steps

Posted by Urvashi Melwani on Sep 20, 2019 3:40:00 PM

A compelling content marketing taxonomy in Drupal isn't just about enhancing the searchability of your content; it also helps ascertain content priorities based on what's in an item. Yet, as per this source, 56% of B2C marketers and 64% of B2B marketers don't even have a documented content marketing strategy.

A content marketing taxonomy aids content strategizing by organizing content in an easy-to-understand way, making it easier for marketers to analyze data and spot gaps.

Let's dive into this blog to learn how to use taxonomy for tagging content.

How to Use Taxonomy in Drupal to Tag Content?

Each set of taxonomy terms belongs to a category set that you define, called a vocabulary. Terms in a vocabulary can be broken down further and can contain sub-terms.

Therefore, it is important to first understand how to create a vocabulary:

1. Go to Manage >> Structure >> Taxonomy. By default, a Tags vocabulary is already present. Here, we can add a vocabulary by clicking on the + Add Vocabulary button.

Enter the name of the new vocabulary along with a short description.

2. Click on the Save button. You will be redirected to the Ingredients page, which lists all the terms you have added to this vocabulary.

3. Now click on Add term. Enter "Butter" in the Name field. Once done, click on the Save button.

4. You will receive a confirmation about the term you created. You can also add more terms, like "Eggs" and "Milk".

5. In the Manage administrative menu, navigate to Structure > Content types (admin/structure/types). Click Manage fields for your Recipe content type.

6. Click Add field, and enter the values from the table below. Once done, click Save and continue.

Field name | Explanation | Value
Add a new field | Select the field type | Reference > Taxonomy term
Label | The title to give the field | Ingredients

7. On the field storage settings screen, enter the values from the table below, then save and continue.

Field name | Explanation | Value
Type of item to reference | The type of entity that is referenced by the field | Taxonomy term
Allowed number of values | The number of values a user can enter | Unlimited

8. On the following configuration screen, enter the values from the table below. Click Save settings.

Field name | Explanation | Value
Help text | Help shown to users creating content | Enter ingredients that site visitors might want to search for
Reference type > Reference method | Select the method used to choose allowed values | Default
Reference type > Vocabulary | Select the vocabulary to choose allowed values from | Ingredients
Reference type > Create referenced entities if they don't already exist | Whether new ingredient terms can be created from the content editing form | Checked

You will be taken back to the Manage fields page. A message will confirm that the configuration for Ingredients is complete.

And you’re done!

You can also watch the video below to learn more about setting up taxonomy.

Video: Drupal 8 User Guide - 6.6. Setting Up a Taxonomy

   Video Courtesy: Drupalize.me

Conclusion

Following these steps will help you implement a clear and concise content marketing taxonomy in Drupal, which in turn will improve the readability of your editorial calendar. It will also let stakeholders and team members see at a glance what kind of content you're creating.

Happy Tagging!

Topics: Drupal, Planet Drupal, Coding and Tutorial

Tutorial: Inspect your Mobile Apps with Appium Desktop & Macaca Inspector

Posted by Komal Batra on Apr 26, 2019 5:04:00 PM

A high-quality, robust app can only be produced with regression testing. Mobile test automation increases test efficiency and effectiveness while also enabling faster time-to-market for apps.

While there are various test automation frameworks and tools, like MonkeyTalk, Robotium, Selendroid, Appium Desktop, UIAutomator, and Testdroid, that execute test automation effectively and conveniently, here we will discuss Appium Desktop and Macaca Inspector.

One of the most significant test automation frameworks, Appium provides a graphical frontend for starting a server and inspecting applications, through Appium Desktop.

In simpler words, Appium Desktop is a graphical interface for starting an Appium server and inspecting the app's structure. Macaca, meanwhile, is an open-source automation test solution for native, hybrid, mobile web, and web applications on mobile and desktop platforms.

Here’s a step-by-step guide to inspect mobile applications using Appium Desktop Inspector and Macaca Inspector.

Getting Started

Using Appium Desktop

To write mobile automation scripts with Appium, we first need to run the Appium server, plus a mechanism to identify the controls/elements of the mobile app we want to automate.

Assuming Appium is already installed globally, one way to launch the Appium server is via Appium Desktop.


To start the server, simply click on the Start Server button.

After the server has started, an Inspector option is displayed, through which we can get the properties of the elements in the mobile application.

  1. Start your inspector session with Appium Desktop Inspector by clicking on the search icon.


 2. After the inspector session is launched, you will see the following view on your screen.


You can now inspect by creating a new session.

   a) Click on the ‘Desired Capabilities’ tab
   b) Input the necessary desired capabilities in the ‘JSON Representation’ tab, as in the sample below.
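As a minimal sketch, the desired capabilities JSON for an Android session might look like this (the device name, app path, and automation name are assumptions; adjust them to your setup):

{
  "platformName": "Android",
  "deviceName": "emulator-5554",
  "app": "/path/to/your/app.apk",
  "automationName": "UiAutomator2"
}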


    3. Click on ‘Start Session’.

After this, the simulator with the configuration stated in the desired capabilities will be launched and the app will be installed. You can pass the app's path through the app desired capability.

Inspect with an Existing Session

If we already have an Appium script:

a. Run your existing Appium script; a session id will be generated.

b. When we click on the inspector icon, the current session id is displayed automatically in the text box, as shown in the screenshot below.

c. Next, click on the ‘Attach to Session’ tab to attach the current session to the Appium inspector.


The simulator with the configuration stated in the script or in the desired capabilities will be launched, along with the Appium Desktop inspector view.


The Appium Desktop Inspector view will be displayed, containing:

    1. The app view of the application
    2. The app source / DOM
    3. Properties of the selected element


Click an element in the app view whose locator you want to know; it is automatically selected in the source, and the properties of that element are displayed on the right side.

The properties include the selectors: xpath, class, id, name, etc.

Inspecting Mobile Elements Using Macaca Inspector 

Install the application inspector with:

$ npm i app-inspector -g

Launch from the cli with

$ app-inspector -u YOUR-DEVICE-ID

YOUR-DEVICE-ID is the udid of the device in the case of iOS. For Android, simply launch your device first, then use adb to list all your devices.

$ adb devices

After the $ app-inspector -u YOUR-DEVICE-ID command is run, open the link it prints (for example, http://192.168.10.100:5678) in your browser.

The Macaca App Inspector now displays its view. Simply click on an element in the app view to see the DOM/source in the next panel, with the properties of the selected element on the right side.

Done!

And that's how you can inspect mobile application elements in a few easy steps. You can use these locators in your Appium scripts to automate mobile applications.

We provide a unified test automation framework and tools to enable easy automation for you. Not only does it save you many hours, it reduces operating costs too. Get in touch with our experts at business@srijan.net to know more about high-quality test automation solutions.

Topics: Coding and Tutorial

API Automation using Postman

Posted by Deepshikha Singh on Mar 26, 2019 6:01:00 PM

If you are new to Postman, please visit my blog API Testing using Postman for an overview of what Postman is and how it works.

Automation has now become a norm across sectors, and when used in testing, it can undoubtedly improve the depth and scope of the tests. Since it is not possible to test everything manually, Postman automation can save both time and effort.

Postman is the ultimate tool for API automation. In this blog, we will be taking a look at how to achieve API automation using Postman.

Example Scenario for Automation using Postman:

Let's say a login API generates a token (OAuth, JWT, etc.) and refreshes it every hour. Other APIs use this token to get the expected response. If we don't automate, we have to copy and paste the token every time, for each API.

For a test suite containing 100 APIs, a good amount of manual effort goes into pasting the token for each one. By introducing automation, the process of updating the newly generated token can be automated.

Let’s automate the Login API through Postman

  1. Open Postman Application and create a collection.

  2. Then make a POST request to Login API by passing the correct credentials (Body Parameter+Header)


3. Now click on the Send button to see the response.


4. In the above response, we get an “access_token” key which will be used in all the following APIs. Hence, we write a custom script that defines an environment variable and updates its value on every hit to the login API. From then on, we use this variable for the access_token value and no longer need to copy and paste it for each API.

5. We can also create variables for host_name, protocol, etc., so that we don't need to write the protocol and host name for each API; instead, we use the variables.

6. Another important aspect is to check the response/status code and the response time of each API. If we script this task too, then when we run the entire API test suite we can easily spot the failing tests that do not return the correct status code, as well as the APIs that take longer than an acceptable response time to execute.

7. Below is a sample script that addresses all of the above.
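The original screenshot showed a script in the Tests tab of the login request; a minimal sketch of the same idea (the access_token field name and the 1000 ms threshold are assumptions) could be:

// Assert the expected status code
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// Assert an acceptable response time
pm.test("Response time is below 1000 ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(1000);
});

// Save the token so later requests can reference it as {{access_token}}
var jsonData = pm.response.json();
pm.environment.set("access_token", jsonData.access_token);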


8. You can see the value of access_token is set in the environment variable as expected.


9. Also, under the Test Results tab you can check whether the assertions you wrote passed or failed.


10. Postman itself provides lots of code snippets; just click on the desired snippet and the generated code is added to the Tests tab to perform assertions/actions. To get the snippets, click on the Tests tab and then on the arrow “<”.


11. If you want to add more custom code, you can always do so, depending on your requirements.

That’s all for this blog. Happy Testing!!

Topics: API Management, Coding and Tutorial

How to Upload Audio Stream to Amazon S3

Posted by Trilok Nagvenkar on Jan 10, 2019 12:02:00 PM

Through this blog, I'll take you through a tutorial on uploading a live audio stream to Amazon S3 using the AWS SDK for JavaScript.

We will use a few AWS services, including Amazon Cognito Identity Pools (federated identities) and, of course, S3.

AWS Configurations

Assuming you have logged into the AWS console, let us get started by creating a S3 Bucket, where all the audio files will be stored. To create the bucket, navigate to AWS S3 -> Create bucket

Once the bucket is created, our next step is to create a Federated Identity which provides the necessary permission for a file upload from browser to S3 bucket.

To create the Federated Identity, navigate to the Cognito service -> Manage identity pools -> Create new identity pool.

Give the identity pool a name and check Enable access to unauthenticated identities (or configure an authentication provider).

The next screen is about setting the necessary permissions for the Federated Identity via IAM roles. Here, we create a new IAM role whose permissions are defined via the custom policy below:

 

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}

Post creation, it will provide the Identity Pool Id. That ID is required to communicate with AWS services.
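With that ID, the browser-side SDK can be configured before any upload code runs. A minimal sketch, assuming the AWS SDK for JavaScript v2 is loaded on the page (the region, pool id, and bucket name are placeholders):

// Wire the SDK to the Cognito identity pool created above
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
});

// The S3 client that the snippets below use as self.s3
var s3 = new AWS.S3({
  params: { Bucket: 'S3_BUCKET_NAME' }
});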

Front-end App

Now we will create a small front-end app to record and upload audio stream to S3.

HTML:

 

<button type="button" class="btn kc record" id="record_q1" disabled="disabled" onclick="AudioStream.startRecording(this.id)">Record</button>

 

<button type="button" class="btn kc stop" id="stop_q1" disabled="disabled" onclick="AudioStream.stopRecording(this.id)">Stop</button>


 

JS:

We will create an AudioStream class with the functions used in the above HTML events, as well as those used to upload the audio stream to S3.

Initialization:

1- The audioStreamInitialize function requests microphone permission and, on receiving data, creates a multipart upload.

  audioStreamInitialize() {
    // Keep a reference to the instance for the callbacks below.
    var self = this;
    /*
    Feature detection is a simple check for the existence of "navigator.mediaDevices.getUserMedia".

    To use the microphone, we need to request permission.
    The parameter to getUserMedia() is an object specifying the details and requirements for each type of media you want to access.
    To use the microphone it should be {audio: true}.
    */
    navigator.mediaDevices.getUserMedia(self.audioConstraints)
      .then(function(stream) {
        /*
        Creates a new MediaRecorder object, given a MediaStream to record.
        */
        self.recorder = new MediaRecorder(stream);

        /*
        Called to handle the dataavailable event, which is periodically triggered each time timeslice milliseconds of media have been recorded
        (or when the entire media has been recorded, if timeslice wasn't specified).
        The event, of type BlobEvent, contains the recorded media in its data property.
        You can then collect and act upon that recorded media data using this event handler.
        */
        self.recorder.addEventListener('dataavailable', function(e) {
          var normalArr = [];
          /*
          Here we push the stream data to an array for future use.
          */
          self.recordedChunks.push(e.data);
          normalArr.push(e.data);

          /*
          Here we create a blob from the stream data that we have received.
          */
          var blob = new Blob(normalArr, {
            type: 'audio/webm'
          });

          /*
          If the length of recordedChunks is 1, this is the first part of our data,
          so we call startMultiUpload, which creates the multipart upload and returns an upload id.
          The upload id is used to upload the other parts of the stream.

          Otherwise, we upload the next part of the multipart upload.
          */
          if (self.recordedChunks.length == 1) {
            self.startMultiUpload(blob, self.filename)
          } else {
            /*
            self.incr is basically a part number: the number of the part being uploaded.
            This is a positive integer between 1 and 10,000.
            */
            self.incr = self.incr + 1
            self.continueMultiUpload(blob, self.incr, self.uploadId, self.filename, self.bucketName);
          }
        })
      });
  }

 

Start and Stop Recording:

1- startRecording function will be triggered when we start a new recording.

  /*
  The MediaRecorder method start(), which is part of the MediaStream Recording API,
  begins recording media into one or more Blob objects.
  You can record the entire duration of the media into a single Blob (or until you call requestData()),
  or you can specify the number of milliseconds to record at a time.
  Then, each time that amount of media has been recorded, an event will be delivered to let you act upon the recorded media,
  while a new Blob is created to record the next slice of the media
  */
  startRecording(id) {
  var self = this;
   
  /*
  1800000 is the number of milliseconds to record into each Blob.
  If this parameter isn't included, the entire media duration is recorded into a single Blob unless the requestData()
  method is called to obtain the Blob and trigger the creation of a new Blob into which the media continues to be recorded.
  */
  /*
  PLEASE NOTE: you can change this parameter of 1800000, but each recorded chunk should be greater than or equal to 5MB,
  as multipart upload requires a minimum part size of 5MB.
  */
  this.recorder.start(1800000);
   
  }

 

2- stopRecording

  /*
  When the stop() method is invoked, the UA queues a task that runs the following steps:
  1 - If MediaRecorder.state is "inactive", raise a DOM InvalidState error and terminate these steps.
  If the MediaRecorder.state is not "inactive", continue on to the next step.
  2 - Set the MediaRecorder.state to "inactive" and stop capturing media.
  3 - Raise a dataavailable event containing the Blob of data that has been gathered.
  4 - Raise a stop event.
  */
  stopRecording(id) {
  var self = this;
  /*
  Flag that recording has stopped so that, once the final dataavailable
  part has been uploaded, continueMultiUpload() calls completeMultiUpload().
  (This assignment is assumed from context: the other snippets read
  self.booleanStop but never set it.)
  */
  self.booleanStop = true;
  self.recorder.stop();
  }

 

Uploading the Audio Stream to S3:

1- startMultiUpload

  /*
  Initiates a multipart upload and returns an upload ID.
  Upload id is used to upload the other parts of the stream
  */
  startMultiUpload(blob, filename) {
  var self = this;
  var audioBlob = blob;
  var params = {
  Bucket: self.bucketName,
  Key: filename,
  ContentType: 'audio/webm',
  ACL: 'private',
  };
  self.s3.createMultipartUpload(params, function(err, data) {
  if (err) {
  console.log(err, err.stack); // an error occurred
  } else {
  self.uploadId = data.UploadId
  self.incr = 1;
  self.continueMultiUpload(audioBlob, self.incr, self.uploadId, self.filename, self.bucketName);
  }
  });
  }

 

2- continueMultiUpload

  /*
  Uploads a part in a multipart upload.
  The following code uploads part of a multipart upload.
  it specifies a file name for the part data. The Upload ID is same that is returned by the initiate multipart upload.
  */
  continueMultiUpload(audioBlob, PartNumber, uploadId, key, bucketName) {
  var self = this;
  var params = {
  Body: audioBlob,
  Bucket: bucketName,
  Key: key,
  PartNumber: PartNumber,
  UploadId: uploadId
  };
  console.log(params);
  self.s3.uploadPart(params, function(err, data) {
  if (err) {
  console.log(err, err.stack)
  } // an error occurred
  else {
  /*
  Once the part of data is uploaded we get an Entity tag for the uploaded object(ETag).
  which is used later when we complete our multipart upload.
  */
  self.etag.push(data.ETag);
  if (self.booleanStop == true) {
  self.completeMultiUpload();
  }
  }
  });
  }

 

3- completeMultiUpload

 

/*
Completes a multipart upload by assembling previously uploaded parts.
*/
completeMultiUpload() {
  var self = this;
  var outputTag = [];

  /*
  Here we construct the ETag data in the format required by S3.
  */
  self.etag.forEach((data, index) => {
    const obj = {
      ETag: data,
      PartNumber: ++index
    };
    outputTag.push(obj);
  });

  var params = {
    Bucket: self.bucketName, // required
    Key: self.filename, // required
    UploadId: self.uploadId, // required
    MultipartUpload: {
      Parts: outputTag
    }
  };

  self.s3.completeMultipartUpload(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      // initialize variables back to normal
      self.etag = [], self.recordedChunks = [];
      self.uploadId = "";
      self.booleanStop = false;
      self.disableAllButton();
      self.removeLoader();
      alert("we have successfully saved the questionnaire..");
    }
  });
}

 

This is how you can upload live streaming audio to an S3 bucket from the front end using the AWS SDK.

To find a working example, refer to my blog on uploading audio stream to AWS S3.

 

Topics: AWS, Coding and Tutorial

How to Secure Alexa Account Linking

Posted by Yashica Gupta on Jan 10, 2019 11:56:00 AM

With the ongoing increase in human-device interaction, Alexa devices have found a strong place in the market. Echo devices are now placed in homes and offices to control the lights, check the news, get the status of a task, and more, with just a voice command. Every user now has their own private (virtual) assistant to make their life easier.

But an important link in the chain is the Alexa skill developer, whose aim is to build a skill that reduces the user's manual work and makes their life more convenient. Though developing an Alexa skill is not difficult, there are many challenges that developers face while building one, especially when it requires another software/app.

Currently, an Alexa skill which requires another software/app needs the account linking feature to be enabled.

For example, to enable and use the Uber Alexa skill, the user needs to link their Uber account with Alexa. Once the user links the account, the Uber software sends an access token to the Alexa skill as a unique key for the user, and the account linking is complete. The next time the user invokes the Uber skill, the request sends the access token to the Uber software and fetches the information.

We faced the same blocker while developing an Alexa account linking skill closely integrated with the Jira software. The Alexa skill is built primarily for Scrum Masters to read from and write to their Jira and to help them stay up-to-date with their projects.

The most challenging part of developing this skill was account linking, because Jira needs server-client authentication; that is, to link an account, every user has to manually add our Alexa skill as an authorized client in their Jira dashboard and then provide us with the access token.

The solution implemented to reduce this inconvenience was a custom HTML page (hosted on S3) for account linking. The user just adds their credentials - username, password, and the Jira server URL - and the account is linked successfully.

So far, we were not using Jira directly to authenticate users via account linking, but rather acting as a message carrier between Alexa and Jira. This makes the account linking process easy for users but poses a high security risk to their credentials.

To make the process secure, the following architecture was implemented in our skill. It is built completely on AWS services, namely:

  1. API Gateway

  2. S3 bucket

  3. Lambda

  4. DynamoDB 


Explanation:

When a user enables the Alexa skill, they are redirected to our HTML page hosted on an S3 bucket. Once the user fills in their credentials and clicks on the submit button, it sends a GET request, with query parameters, to an endpoint deployed on AWS API Gateway.

The API Gateway then invokes a connected Lambda function and sends the query parameters as an event.

Using the parameters in the event, the Lambda sends a GET request to the Jira REST API to validate and authenticate the user. If the credentials are incorrect, it returns an error message; on successful validation, it returns a success message with an access token created by the Lambda, and also stores the encoded user credentials in a DynamoDB table with the access token as the key.

import random
import string

# Note: the boto3 clients (dynamodb_client), the KMS session, and the
# validity() / encrypt() helpers are defined elsewhere in the module.

def lambda_handler(event, context):
    print(event)
    username = event["username"]
    password = event["password"]
    server = event["server"]
    skill = event["skill"]
    table_name = ""
    if skill == "skill_name":
        table_name = "Table_name"

    # Validate the credentials against the Jira REST API.
    result = validity(password, username, server)

    # Wrap each event value in the DynamoDB attribute-value format.
    js = {}
    for k, v in event.items():
        js[k] = {"S": v}

    print(js)
    # Store the password encrypted via KMS, not in plain text.
    js["password"] = {
        "B": encrypt(session, event["password"], "alias/alias_name")
    }
    accesstoken = ''.join(random.SystemRandom().choice(
        string.ascii_uppercase + string.digits) for _ in range(10))
    js["accesstoken"] = {"S": accesstoken}

    dynamodb_client.put_item(TableName=table_name, Item=js)
    print("done")
    return result

The JavaScript on the page then displays an “Invalid Credentials” error message if an error response is received. On a success response, the JavaScript sends the access token to the Alexa redirect URL and thus successfully links the account.


 The access token is the main component as it is used to identify the user.

When the user invokes our skill, Alexa sends a JSON request to the Lambda function with a key carrying the access token. The Lambda then queries the DynamoDB table with the access token to identify the user and fetch their credentials. Once the credentials are fetched, the Lambda sends a request to the Jira REST API based on the user's intent and returns the message to Alexa as JSON.

Alexa then speaks the message to the user, enabling them to use Jira with just their voice!

Topics: AWS, Architecture, Coding and Tutorial

Lex - Lambda policy limitation [Solved]

Posted by Sanjay Rohila on Dec 11, 2018 3:19:00 PM
Error: Maximum policy size of xxx bytes exceeded for Lambda xxx.

We can assign a Lambda function to Lex intents. This gives our bot a lot of power; we can do plenty of things in Lambda. But there is a problem: when we assign a Lambda to an intent, Lex asks for invocation permission, and when we grant it, a statement is added to the Lambda's function policy. This works fine until we have so many intents that we exceed the policy document's length limit. Once the function policy document is at its limit, we can't add that Lambda to any more intents.

Debug:

The problem is that it's not easy to see where these policies go, and what gets added each time we grant permission for an intent. The policy is hidden under a small button (screenshot below), which is only visible if you have access to the lambda:GetPolicy action.

 

(Screenshot: the Lambda console's view-permissions button and the resulting function policy dialog.)

Solution:

The solution: instead of adding permission for every intent individually, add one permission that covers all intents. That reduces the policy document's size and we can live peacefully. But we can only view the function policy document in the console (given lambda:GetPolicy access); we can't modify it from the console interface. We have to do it via the APIs, either the command-line interface or the SDKs. The API that does this is lambda:AddPermission. If you are a Python expert, just use the boto3 API's add_permission (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.add_permission)
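As a sketch with the AWS CLI (the function name, statement id, and account id are placeholders matching the policies below):

aws lambda add-permission \
  --function-name My_Lex_Lambda \
  --statement-id lex-us-east-1-my_bot \
  --action lambda:InvokeFunction \
  --principal lex.amazonaws.com \
  --source-arn "arn:aws:lex:us-east-1:xxx:intent:*"

The old per-intent statements can then be dropped with aws lambda remove-permission (one call per old statement id) so the policy document actually shrinks.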

Function Policy Document (before):

"Statement": [
{
"Sid": "lex-us-east-1-my_first_intent",
"Effect": "Allow",
"Principal": {
"Service": "lex.amazonaws.com"
},
"Action": "lambda:invokeFunction",
"Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:my_first_intent:*"
}
}
},
{
"Sid": "lex-us-east-1-my_second_intent",
"Effect": "Allow",
"Principal": {
"Service": "lex.amazonaws.com"
},
"Action": "lambda:invokeFunction",
"Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:my_second_intent:*"
}
}
},
# ... all other intents in which this lambda assigned
]

Function Policy Document (after):

"Statement": [
{
"Sid": "lex-us-east-1-my_bot",
"Effect": "Allow",
"Principal": {
"Service": "lex.amazonaws.com"
},
"Action": "lambda:invokeFunction",
"Resource": "arn:aws:lambda:us-east-1:xxx:function:My_Lex_Lambda",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:lex:us-east-1:xxx:intent:*"
}
}
}
]

Topics: AWS, Coding and Tutorial

Testing with Emulator: A step-wise guide

Posted by Deepshikha Singh on Oct 19, 2017 2:52:00 PM

What is an Emulator?

An Emulator is an application that emulates real mobile device software, hardware, and operating systems, allowing us to test and debug our application. It is generally provided by the device manufacturer. Mobile emulators are free and provided as a part of SDK with each new OS release. As a developer or a tester, you can configure the emulator to closely resemble the devices on which you plan to deploy your application.

The emulator window consists of a device screen on the left and phone controls/keyboard on the right. The device screen initially displays ‘ANDROID‘, then the graphical logo, while the Android platform associated with the AVD initializes.

What is AVD Manager?

AVD Manager is a tool to create and manage Android Virtual Devices (AVDs), which define device configurations for the Android Emulator. Before you can actually run an emulator, you need to install the Android SDK on your machine and define an AVD, which specifies the hardware characteristics of the emulator. You can define things like the device RAM, whether there is a touch screen and/or keyboard, camera support, audio playback support, etc. You can create several AVDs to test your app on several virtual devices.

How to Create an Android Virtual Device?

Once the installation of all the prerequisites is done, we need to launch the SDK Manager to download the files required to create an emulator and run the virtual device.

  • In the command line (cmd), type android sdk. This opens the SDK Manager, where you download the relevant files
  • In the SDK Manager, select the files needed to create a virtual device

There are two ways to create Android Virtual Devices for mobile testing:

  • Using Command Line
  • Using AVD Manager

Creating a New Android Virtual Device using AVD Manager

The first step is to launch the AVD Manager and for that, there are a number of options you can follow:

  • Launch AVD Manager using the Program Menu: Go to Start → All Programs → Android SDK Tools → AVD Manager
  • Launch AVD Manager using the Command Line: Go to Run and type cmd to open the command prompt window. Type: android avd
  • Launch AVD Manager from the folder location: Browse to the Android SDK folder and double-click on AVD Manager.

Whichever way you choose, the Android Virtual Device Manager opens, which helps you create virtual devices. The AVD Manager main screen shows one default virtual device, a Nexus 5.

1. Click on the Create button to create a new AVD


2. A pop-up will open; fill in the values. In case you are simulating a specific device, you might want to name it as such. For example, NexusSix-4.4 refers to an AVD that simulates a Nexus Six running Android 4.4.2.


In this test, I chose the very generic name Android to keep it simple.

Note: You can play around with different configurations for your virtual device.

3. Click on OK to save the changes.

4. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab of the AVD Manager. Now select the AVD name and click on Start on the right.


5. This launches a pop-up with a few options; choose as you want. Once done, click on Launch to start the emulator.


  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to start from a previously saved snapshot of the device’s state. The emulator launches much faster when launched from a snapshot.
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit


6. The emulator will launch; it takes a few minutes to display the home screen of the virtual Android device. The time taken is roughly the switch-on time of a real mobile device.

Note:

  • You can use the command ‘adb devices‘ to see if the adb is detecting the emulator. This basically completes the Android SDK installation part.
  • If the ANDROID logo appears for more than 15-30 minutes, something has probably gone wrong. Reboot your computer, start AVD Manager, delete the ‘Android’ AVD we created, recreate it, and relaunch the new AVD.

The emulator window launches with 5554:Android in its title bar. The value 5554 identifies a console port that you can use to query and control the AVD's environment. Android supports a maximum of 16 concurrently executing AVDs, each assigned an even-numbered port starting at 5554. This means we can initialize more than one AVD at a time and test the app with parallel execution.

Creating an Android Emulator using Command Line

Android Emulator can be configured differently for simulating different android devices. With the help of AVD configuration, Android Emulator can simulate:

  • Target platform versions
  • Screen sizes
  • Resolutions
  • Input methods
  • Hardware configurations
  • External storage sizes for the SD card

Although there are a good number of default device configurations present in AVD Manager, if you have any particular device in mind that you want to emulate, you need to create an AVD that resembles the features of your target device. For example, you need to know the screen size and resolution and so on. 

Creating Default Device Definition

1. Go to Device Definitions tab and select one of the presets and click ‘Create AVD’

2. Every default device provides a preset AVD, which can be changed according to your needs


Note: Notice that the OK button is disabled because, under CPU/ABI, it says "No system images installed for this target". To avoid this, select a different device configuration from Device, select any skin (or none), and proceed.


Below are the list of optional and mandatory settings:

  • AVD Name: Choose a name for the AVD, like NexusSix-4.4, that refers to your device configuration
  • Device: Select the AVD resolution as per the device options
  • Target: Select the Android version / test environment or target platform
  • CPU/ABI: Select the CPU architecture, for example Intel Atom (x86) or ARM (armeabi-v7a)
  • Keyboard: Select this box to use the keyboard in the AVD
  • Skin: Select this box to get the hardware buttons
  • Front Camera: If the system has a webcam, it can be used with the AVD
  • Back Camera: If the system has a webcam, it can be used with the AVD
  • Memory Options: Set RAM & VM heap for the device according to your needs
  • Internal Storage: Set this as per your needs and select GiB/MiB from the dropdown
  • SD Card: Set this as per your needs and select GiB/MiB from the dropdown
  • Emulation Options
  • Emulation Options

- Snapshot: Select this to persist the emulator state; it lets you start the emulator quickly after the first startup.

- Use Host GPU: Select this to use the computer graphics

 
3. After you complete the above steps, the emulator will be displayed under the Android Virtual Devices tab of the AVD Manager. Now select the newly added AVD name and click on Start on the right.

4. This launches a pop-up with a few options; choose as you want. Once done, click on Launch to start the emulator.

  • Scale display to real size: This causes the resolution of the emulator’s display to be scaled to match the screen size
  • Wipe user data: This would wipe any previous app installation you have done and would launch a plain fresh emulator
  • Launch from snapshot: This causes the emulated device to start from a previously saved snapshot of the device's state. The emulator launches much faster when launched from a snapshot
  • Save to snapshot: This causes the emulated device’s state to be saved to a snapshot upon device exit

Once done, the emulator will launch. It takes a few minutes to display the home screen of the virtual Android device.

Creating a Custom Device Definition

In case the available device definitions do not match the device type you’d like to emulate, you can create a custom device definition for your AVD:

1. If the AVD manager is opened, go to Device Definitions

2. Click on Create Devices.

3. The Configure Hardware Profile window will be displayed, allowing you to specify various configurations such as the screen size, memory options, input type, and sensors.

Note: Once all the information is filled properly, Create Device button will get enabled and you may proceed then.

4. After you complete the above steps, the new definition will be displayed under the Device Definitions tab of the AVD Manager. Now select it and click on Create AVD on the right.

Automation using Android emulator

  • Get Emulator Platform Version
  • Unlock Android emulator screen
  • Go to Settings. You will find About Phone under settings.
  • Go to About Phone. It will show you Android version
  • Verify calculator App Is Available In Emulator 

 

We are going to run an Appium test against the calculator application, so it should be present in the emulator. Generally, the calculator app is already installed in the emulator. To check whether it is installed:

1. Unlock emulator

2. Verify if there is an application with name Calculator

3. Get app activity and package name. We need launcher activity and package name of calculator app. Activity and package name of calculator app for me are:

  • Package name : com.android.calculator2
  • Activity name : com.android.calculator2.Calculator

4. Create the Appium test script in Eclipse. Now we are ready to create and run our first Appium test on the Android emulator for the calculator application. The original script used Selenium WebDriver's RemoteWebDriver to launch the app with the required capabilities.
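That script was Java; as a sketch of the same flow in JavaScript with the WebdriverIO client (the capability values and locators are assumptions), it might look like:

const { remote } = require('webdriverio');

(async () => {
  // Session against the local Appium server, targeting the emulator above
  const driver = await remote({
    hostname: 'localhost',
    port: 4723,
    path: '/wd/hub',
    capabilities: {
      platformName: 'Android',
      deviceName: 'Android', // the AVD name created earlier
      appPackage: 'com.android.calculator2',
      appActivity: 'com.android.calculator2.Calculator',
    },
  });

  // 2 + 5 =, using XPath locators found via an inspector
  await (await driver.$('//android.widget.Button[@text="2"]')).click();
  await (await driver.$('//android.widget.Button[@text="+"]')).click();
  await (await driver.$('//android.widget.Button[@text="5"]')).click();
  await (await driver.$('//android.widget.Button[@text="="]')).click();

  await driver.deleteSession();
})();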

5. To start an instance of the emulator from the command line, navigate to the tools/ folder of the SDK. Enter emulator command like this: emulator -avd <avd_name> [<options>]

So that was all about performing Appium testing on an emulator. In case you are just starting off with mobile application testing, you can explore our list of the best tools for mobile test automation. You can also check out our Appium tutorial webinar to level up your mobile testing skills.

Topics: Coding and Tutorial, QA and Testing

Building a code review process that works

Posted by love.huria on Sep 25, 2017 2:57:00 PM

A while back, I talked about the need for maintaining coding standards and how it simplifies the development process for the whole team. The next step is putting in place a code review process.

Why Is It Important?

Code reviews are crucial for:

  • knowledge transfer
  • avoiding small/common mistakes
  • maintaining best practices throughout the dev team

 

Let's take my current team as an example. There are around eleven developers on the team, all producing code which needs to be reviewed. So yeah, that's a whole lot of code!

Pushing code to production is easy. Anyone can do it, right? What concerns us is the quality of code pieces we are going to deploy. 

The code can be completely fine or it can be a piece which makes everything fall apart. To maintain high code quality, we all need to have peer code reviews. This does not mean that the team writes bad code that needs to be checked. We all are on the same team and we have a common goal, that is to deliver the highest quality product. And a code review process makes sure someone on the team catches the errors that somebody else might have missed.

You must be thinking, "Is it worth it?" Absolutely yes.

Not having a code review process integrated into projects can result in big problems. Toyota had to settle a $3 million lawsuit because they did not pay enough attention to code reviews. 

There were a lot of reasons why this incident happened, and one of them was the absence of peer code review. After reviewing the source code, they found possible bit flips, task deaths that would disable the fail-safes, memory corruption, single-point failures, inadequate protections against stack overflow and buffer overflow, single-fault containment regions, and more. The list of deficiencies in process and product was lengthy.

Obviously then, it makes business sense to make code reviews a critical part of your development process.

How Do We Do It?

General Development Process

 


This is our general flow which is being followed in most of the projects. Of course, columns may vary depending on the different aspects, clients, and projects.

Assigning responsibility

We have set a few guidelines around who will be responsible for the code review process:

  • There will be one senior and one junior code reviewer for each ticket.
  • Reviews will be done right after the daily standup, depending on how long it takes
  • When there are tickets added on the board, each reviewer will review at least one ticket a day
  • It is the responsibility of both the code reviewer and the developer to ensure that all tickets are in their respective columns according to the latest update
  • If there is any feedback from the senior code reviewer, it's the junior code reviewer's responsibility to look at what he/she missed

Github Code review flow

Maintaining your code review process on GitHub is super easy. We can create new projects on our repo and use them as we want.


Generally, we have four columns, which are:

  • Ready for review: You can add cards (pull requests) to this column if your code is ready to be reviewed
  • In review: Now it's the code reviewer's responsibility to move the card to the "In review" column, so the board stays updated and the concerned dev knows his/her ticket is currently being reviewed
  • Change requested: Again, it's the code reviewer's responsibility to move the card to this column if the review has failed our standards. The concerned dev then fixes the issues and pushes the ticket back to "Ready for review"
  • Closed/Done: If the card is in the "Closed" column, the PR has passed the requisite coding standards

 Things to look out for

These are the aspects which we consider while reviewing code: 

Making sure the process is followed

We regularly review our way of working against a documented process to identify if there are any gaps in our performance. We also look for ways to improve the process and ensure that it’s not a burden/blocker for anyone’s work.

The process is clearly defined and maintained on different platforms, depending on the client and teams involved. We usually use Confluence and highly recommend it to anyone who is reading this post.

Learning from our mistakes

We maintain a code feedback sheet where we mention common mistakes we need to avoid. Everyone on the team has access to it and can add points on where we can improve, new techniques to achieve certain functionality, coding patterns to avoid etc.

Goals of a Coding Review Process

With a finely tuned code review process in place, development teams can:

  • Enhance the learning of individuals, making them better programmers
  • Improve the quality of the codebase, even as it grows more complex with scale
  • Focus not just on quantity, but on quality deliverables throughout
  • Maintain discipline within the team and understand the seriousness of spaghetti code


That’s about it! There would definitely be things I missed here or certain code review practices that are unique to your team. Leave a comment to let us know.

Topics: Architecture, Coding and Tutorial

Keeping it clean: Coding standards that matter

Posted by love.huria on Jun 19, 2017 6:16:00 PM

A year back, I was working on a project that taught me how to keep my code clean, modular, reusable, and all those terms that seem fancy, but are actually good for you in the long run. Interesting? Yeah a bit.
 
But what did I do after getting into those practices?
 
I made mistakes. Believe me, a lot of them. But with every mistake, I learnt a lot of things I had never considered before. It shaped my thinking about how we should build things and what steps to consider when developing or extending a feature. And most importantly, these learnings were not just personally helpful, but also crucial for team growth.
 
At first we used to get frustrated, because we had to follow additional steps like adding documentation, maintaining changelog files, and keeping the coding standards consistent throughout the team. These extra steps seemed cumbersome, and we could not see how they would help the team. We are still learning and improving every day in this respect. But after a few months, we started loving and improving the process.
 
So here I am, sharing what I have learnt. And trust me when I say this, once you start doing these, you can't code without following these practices.

This post is focused on what practices we follow everyday to make our lives easier. Although the practices mentioned here are more relatable for PHP/Drupal, they can be easily followed by all developers.
 
Let’s start off with simple things:

Commenting and Documentation Standards

Commenting doesn’t mean adding a bunch of comments and random documentation anywhere while coding. There are things which you should mention to make your colleagues’ lives easier, and yours as well. 

  • Start with writing a description of your component, why you are creating it, and what is the aim you would like to accomplish here, what does it do etc.
  • If there are modifications being done, then those should be logged by creating a Changelog.md file attached to your component. Maintain a specific format to have consistency throughout. 

This is something we follow at Srijan; hope this is pretty clear.
CHANGELOG.md file sample:

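An illustrative sketch (the entries are invented; the consistent format is what matters):

## [1.1.0] - 2017-06-10
### Added
- Autoplay option for the banner slider component.
### Fixed
- Slide timing on mobile breakpoints.

## [1.0.0] - 2017-05-02
### Added
- Initial release of the banner slider component.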

  • It's good practice to add @see, referencing the class, which lets you navigate to that class definition in an IDE like PHPStorm, or an editor like Sublime, with just one click.


 

  • Add @todo comments wherever necessary. This is very important if you feel your code can be improved in the future and you have ideas on how to do it, but not enough time at the moment. Mention what needs to be improved above that code snippet. One good example could be the sketch below.

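An illustrative @todo (the specifics are invented for the example):

// @todo Move this hard-coded menu depth into the component's
//   configuration once the settings form is in place.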


  • Create README.md files so that others can easily understand how the module works.


  • A “Docblock” is a special code comment that offers an explanation of the purpose, arguments, return values, and thrown exceptions for a block of code, as sketched below.

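An illustrative docblock in the Drupal/PHP style (the function and parameters are invented for the example):

/**
 * Builds the render array for the banner slider.
 *
 * @param array $slides
 *   The slide items to render.
 *
 * @return array
 *   A render array for the slider.
 *
 * @throws \InvalidArgumentException
 *   Thrown when $slides is empty.
 */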


Formatting

This might include indentation, whitespace, brace positions, and new lines, and conventions differ between languages. In our case, this is specific to PHP (Drupal). There are plenty of editor plugins available to beautify your code.

Naming Conventions

  • Of course, naming conventions depend on the language you are using (examples: Zend, Google style guides for PHP, CSS, JavaScript, Java, etc.), but the main idea is to use descriptive and meaningful words. So avoid calling your variables xx, yy2, test1, test2, and so on.

  • For example, lower_case for PHP variables and functions, camelCase for JS variables and functions, lower-case-hyphenated for CSS ID's and classes, UPPER_CASE for constants.

  • We should name our variables in a way that easily explains the type of data they contain. Similarly, functions should describe what functionality they provide. This is called self-documenting code. Functions should tell what they do, not how; this abstraction allows the underlying implementation to change without changing the function name.

Portability

Keep your code as loosely coupled as possible. It is "portable" in that the amount of work required to move it from one platform to another is low. There are a few things we should keep in mind while coding:

  • Avoid using hardcoded values like absolute file paths, URLs, etc., unless it's a matter of life and death (:P)
  • Avoid using magic numbers in your code. A magic number is a hard-coded value that may change at a later stage and hence becomes hard to update. Almost all numbers other than 0, 1 or 2 should be assigned to a constant at the top of the file. This provides a single point of change if the value changes, rather than a search-and-replace that could affect many files and potentially introduce bugs, as in the sketch below.
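A tiny illustration in JavaScript (names assumed):

// Avoid: if (status === 3) { ... }
const STATUS_PUBLISHED = 3; // single point of change

if (status === STATUS_PUBLISHED) {
  // ...
}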

Linters

There are different tools available to find syntactic discrepancies, especially in interpreted languages like JavaScript, PHP, and Python. They can also serve as simple debuggers for finding common errors. Here's a look at the common linters we use at Srijan:

PHP: We use PHP CodeSniffer with Drupal integration. You can easily configure it with editors such as Sublime to show common PHP errors on save, which saves a lot of time and prevents errors before you commit your changes.

JavaScript:

  • We have the Drupal JavaScript coding standards in place (note: these vary in several ways from Mavericks standards), but we use JSHint for our JS code checks.
  • Wherever Drupal's JS formatting conventions conflict with JSHint, JSHint supersedes.

SCSS:

  • We do not have any SCSS-related documentation on Drupal.org, but we do have the Drupal CSS coding standards (these can be applied to SCSS code).
  • You can find some documentation related to SCSSLint here.
  • It's also good to check out Compass best practices.

Reusability

This is something we are working on quite extensively. We are building reusable components which can be used across different websites (which serve almost the same purpose). These components provide basic functionality and styles, which each website can override according to its own requirements. The most important thing here is to identify what functionality can be turned into a component. The degree to which your code is reusable is largely based on how tightly it's coupled with the rest of your code base. A good example is a banner slider, which can be used on most websites.

Modular

This basically means keeping your code independent of other code, so that one bad change does not break everything else. This is similar to the concept of coupling in object-oriented programming. It's like breaking the website into basic independent parts of a more manageable size.

Use Continuous Integration Tools

We use Travis CI. It's a distributed continuous integration service used to build and test projects hosted on GitHub, and it is free for open source projects. You might be wondering why you didn't use it before! Don't worry, it's never too late, and it's pretty easy to set up with your GitHub repositories.

  • The first step is to register on Travis CI, which you can do with your GitHub account.
  • Set up the .travis.yml file. This file drives the build of your environment and the execution of the phpunit files.

The simplest basic configuration looks something like this:

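A minimal sketch (the PHP version and paths are assumptions; adjust to your project):

language: php

php:
  - 7.1

install:
  - composer install

script:
  - ./vendor/bin/phpunit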

When you have the phpunit tests in place and they pass, GitHub shows a green check against your commit.


Simple Travis setup: https://github.com/lhuria94/drupal/pull/3

  • This .travis.yml should be at the root of the project.
  • Travis only runs builds on the commits you push after you’ve enabled the repository in Travis CI.

Code Reviews

We have a pretty awesome code review process in place. But this blog is already too long, so I shall cover that in my next blog. Stay tuned.

Meanwhile, you can check out our webinar by Elijah Manor, on how to sniff out JavaScript code errors.

Topics: Drupal, Coding and Tutorial
