Run your Drupal 40% cheaper

Posted by Akshita Rawat on Jul 26, 2019 2:43:00 PM

The expanding data landscape is feeding the demand for higher operational agility. This calls for a more responsive, reliable IT infrastructure that doesn't rack up millions: one that minimizes delays and downtime while improving security and keeping infrastructure agile.

Between capacity constraints and unpredictable pricing models, growing infrastructure needs are hard to meet on-premise. AWS addresses them with a host of services (IaaS, PaaS, and SaaS) suited to Drupal enterprises.

Here’s how you can run your Drupal up to 40% cheaper.

Drupal-powered enterprises need to constantly understand and adapt to the changing landscape in order to achieve their business objectives

Keeping the Business Innovative and Agile

Demands for performance, scalability and agility have never been higher, and business success and growth depend on meeting them. At the same time, the changing landscape is pushing businesses towards lower costs, greater use of cloud resources and better customer service.

While these changes have implications on infrastructure, compute and networking resources, they also impact storage. 

Insufficient database storage, for example, can adversely impact application performance. Fast-growing applications may need more storage than expected, or need it immediately.

Capacity and storage issues can hinder business agility

The continuous need for speed and efficiency is driving businesses to the storage-as-a-service (STaaS) model. But there is more to it when it comes to the benefits. Businesses get:

  • Better insights at reasonable cost: A highly scalable, low-cost environment capable of handling the massive volume and velocity of data lets organizations shift spending from CapEx to OpEx for more predictable costs.
  • Better collaboration: Cloud-based business solutions accelerate innovation, delivering business analytics at the point of impact and enabling collaboration by creating and linking business networks.
  • Innovation and variety of solutions: Forward-thinking enterprises adopt STaaS to speed up business innovation, improve overall data-centre efficiency, and achieve integrated, innovative business results.
  • Proven results: Organizations achieve their desired business outcomes by improving the responsiveness of their IT infrastructure without increasing risk or cost.


In order to avoid such challenges in the future, Drupal-powered enterprises need to constantly understand and adapt to the changing landscapes.

 

The Azim Premji Foundation, the Georgia Technology Authority, and the US Department of Homeland Security are powered by Drupal and supported by AWS

While Drupal helps balance the rapid data growth, the right cloud storage solution needs to offer security and robust scalability without constraining the budget and prepare IT and marketing for what comes next.

Run your Drupal 40% cheaper

Choosing the right technology is crucial to avoid equipment failures and the cost of hardware upgrades. Small and medium enterprises and non-profits especially need sustainable solutions that meet future needs without overcommitting budgets today.

Finding the perfect match, organizations such as the Azim Premji Foundation, the Georgia Technology Authority, UCAS, and the US Department of Homeland Security are powered by Drupal and supported by AWS.

Enterprises need sustainable solutions without overcommitting budgets today

AWS offers cloud web hosting solutions that provide businesses, non-profits, and governmental organizations with low-cost ways to deliver their websites and web applications.

The pay-as-you-go approach lets you pay only for the individual services you need, for as long as you use them, without long-term contracts or complex licensing.

It is similar to how you pay for utilities like water and electricity: you pay only for the services you consume, and once you stop using them, there are no additional costs or termination fees.

The pricing models give your enterprise the flexibility to grow your business unencumbered by IT:

  • Pay-as-you-go
    With AWS you pay only for what you use, helping your organization remain agile, responsive and always able to meet scale demands. You can adapt to changing business needs without overcommitting budgets, improving your responsiveness to change and reducing the risk of overprovisioning or running out of capacity.

By paying for services on an as-needed basis, you can redirect your focus to innovation and invention, reducing procurement complexity and enabling your business to be fully elastic.

  • Save when you reserve
    By using reserved capacity, organizations can minimize risks, more predictably manage budgets, and comply with policies that require longer-term commitments.


For certain services like Amazon EC2 and Amazon RDS, enterprises can invest in reserved capacity. With Reserved Instances, you can save up to 75% compared to equivalent on-demand capacity.

[Image: Reserved Instance pricing comparison]

When you buy Reserved Instances, the larger the upfront payment, the greater the discount.

  • Pay less by using more
    With volume-based discounts, organizations save more as usage grows. For services such as S3 and data transfer OUT from EC2, pricing is tiered: the more you use, the less you pay per GB.

In addition, data transfer IN is always free of charge. As your AWS usage grows, you benefit from economies of scale that let you increase adoption while keeping costs under control.
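To make the tiered model concrete, here is a toy Python sketch of how a tiered per-GB price works. The tier boundaries and rates below are illustrative placeholders, not actual AWS prices; check the AWS pricing pages for real numbers.

# Illustrative only: tier sizes and per-GB rates are made-up placeholders,
# not actual AWS prices.
TIERS = [
    (50_000, 0.023),        # first 50 TB per month, per-GB rate
    (450_000, 0.022),       # next 450 TB per month
    (float("inf"), 0.021),  # everything beyond that
]

def monthly_storage_cost(gb_used):
    """Sum the cost tier by tier; the marginal price per GB falls as usage grows."""
    cost, remaining = 0.0, gb_used
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(monthly_storage_cost(100_000))  # cost of storing 100 TB in a month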

As your organization evolves, AWS also gives you options to acquire services that address your business needs. For example, the AWS storage services portfolio offers options to lower pricing based on how frequently you access data and the performance you need when retrieving it.

[Image: AWS storage pricing by access tier]

To optimize savings, choosing the right combination of storage solutions helps reduce costs while preserving performance, security and durability.


Case Study: Reducing cost & improving operational efficiency for Drupal application with AWS

Our client, a legal firm, provides jurisdictions and litigants simple, seamless, and secure access to the record of legal proceedings. They built a SaaS-based workflow management application on Drupal to manage and track digital recordings of legal proceedings and transcripts, including appeals, for their stakeholders.

The goal was to build a robust, cloud-based server to efficiently handle the processing of, and access to, a large volume of text, audio and video files.

Since the business model depended on frictionless uploading and downloading of text and media files, an AWS cloud-based server was the unanimous choice.

Business benefits

  • Simplified integration of the client's Drupal application with AWS S3, to enable flexible, cloud-native storage
  • As a result of going all-in on the AWS Cloud, the client reduced costs by 40% and increased operational performance by 30-40%
  • Dynamic storage and pay-as-you-go pricing enabled the client to leverage a highly cost-effective cloud-storage solution

Read the complete case study: Cloud-Native Storage for Drupal Application with AWS

Get no-cost expert guidance

Designed to help you solve common problems and build faster, Amazon Web Services provides a comprehensive suite of solutions to secure and run your sophisticated, scalable applications.

Srijan is an AWS Advanced Consulting Partner. Schedule a no-cost, no-sales-pitch meeting with our experts and get started on your cloud journey.

Topics: Drupal, AWS, Planet Drupal

Why building bots with Amazon Lex gives best ROI

Posted by Kimi Mahajan on Jul 23, 2019 1:23:00 PM

Did you know over 80% of businesses plan to use chatbots by 2020? Demand for chatbots shows no sign of slowing, since they serve a wide ecosystem of uses including automation, improving customer experience and reducing latency.

With more than 65 million businesses using social media channels, chatbots have emerged as winners in grabbing marketing and sales opportunities by acquiring and engaging customers through messengers. 

Amazon Lex is an AWS solution, powered by deep learning functionalities like automatic speech recognition (ASR) and natural language understanding (NLU), for publishing bots across different channels.

Amazon Lex has emerged as a key player in the highly competitive chatbot market, but is it worth the investment?

Chatbots, but Why?

The chatbots aren’t new and have been in use since the late nineties. The first chatbot, Eliza, was built in 1966 at the MIT artificial intelligence laboratory.

[Image: timeline of the evolution of chatbots. Source: Edureka!]

With chatbots, businesses can do away with the need to hire assistants. With minimal initial costs, chatbots serve customers 24x7 with a consistent customer experience, bringing better ROI for the business.

Smartphones are changing user behaviour, which explains why conversational interfaces hold a huge opportunity for businesses, and for e-commerce specifically.

Here's how chatbots can help solve user’s queries quickly and easily:

  • Chatbots work on scripts and carry out actions per a defined workflow, with as many users as needed, without delay. If the chatbot cannot answer an unexpected question, it can be channelled to your support team.
  • They are available 24x7, making them a sound investment for your business.
  • They can automate routine, mundane tasks, reducing the burden on your team and letting you focus on areas like sales and marketing, where your expertise is required.

How Does Amazon Lex Help Bot Development?

Amazon Lex is a service that lets enterprises create chatbots with voice and text inputs in minutes, without any coding knowledge, ensuring a highly engaging user experience. It enables you to build interfaces that embed into and integrate with a wide range of platforms, with the help of deep learning technologies such as:

  1. Automatic Speech Recognition (ASR) for converting speech to text

  2. Natural Language Understanding (NLU) for recognizing the intent of the text

The platform helps develop customized, highly specialized chatbots that interact with your customers efficiently, and it comes with pay-as-you-go pricing.
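Once a bot is published, interacting with it programmatically is a single API call. Here is a minimal sketch with boto3; the bot name, alias and user ID are hypothetical.

import boto3

lex = boto3.client("lex-runtime")

# Hypothetical bot name and alias; substitute your own published bot.
response = lex.post_text(
    botName="OrderFlowers",
    botAlias="BETA",
    userId="demo-user-42",  # any ID that is stable per conversation
    inputText="I want to order roses",
)

# Lex returns the recognized intent, slot values, and the bot's reply.
print(response["intentName"], response["slots"], response["message"])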

What makes Amazon Lex the best choice?

Amazon Lex-powered bots can become a key competitive advantage for enterprises, allowing them to optimize processes and save costs. Let's look at the benefits of leveraging Amazon Lex to build bots.

  • Automatic speech recognition

This deep learning feature lets you develop chatbots that offer highly engaging user experiences, with lifelike conversations with your customers.

  • Natural language understanding

This deep learning feature enables your chatbot to understand the intent behind a conversation and deliver a highly interactive experience based on human-like dialogue. Lex-powered chatbots have the built-in ability to process information, letting you quickly and easily build sophisticated natural language bots.

  • Versatility and automatic scaling

Mastering the two functionalities above lets you define entirely new product categories.

Amazon Lex relieves you of infrastructure management and charges only for the features you use. You can build, test, and deploy chatbots directly from the Lex console, and easily publish voice or text bots to mobile devices, web apps, and chat services. Once published, your Amazon Lex bot processes voice or text input in conversation with end users, requiring minimal to no human intervention post-deployment.

  • Seamless experience with an easy-to-use console

The Amazon Lex console offers easy point-and-click features that guide almost anyone through building a chatbot in a matter of minutes. From a few example phrases, Amazon Lex builds a conversational interface model that answers queries in text and audio and completes sophisticated tasks.

  • Flawless integration with almost any platform

Amazon Lex integrates easily with many other AWS services, including Amazon Cognito and Amazon DynamoDB. The AWS platform takes care of the bot's security, monitoring, user authentication, business logic, storage and mobile app development.

  • Cost-effective solution for bot development 

Amazon Lex involves no upfront costs or minimum fees; you pay only for the text or speech requests processed. With its pay-as-you-go model and minimal cost per request, it remains a cost-effective way to build conversational interfaces.

How an Amazon Lex-powered bot helped one business upsell 90 million USD worth of business

Srijan worked with a global cleaning solutions provider to onboard assets to the IoT ecosystem and collect, monitor, and analyze sensor data in real time via data visualization dashboards. This helped them track equipment conditions across customer sites and automatically offer service as and when required.

Srijan then suggested building a chatbot, in addition to the dashboards, to find relevant data. The chatbot worked on an ask-and-answer approach: on being asked a query, the bot, available via mobile apps, would analyze the necessary data and give an answer.

Amazon Lex was used as the base interface for building the chatbots. The deep learning functionalities and natural language understanding in Lex allowed creating conversations that accurately captured the business logic. The bot delivered information on equipment performance metrics, equipment health, and savings potential, and could advise which equipment to use in which scenario.

Building bots with Lex helped the client upsell business worth 90 million USD and increased user retention in the beta user group from 8% to 42%. It also allowed the analytics team to automate many tasks that earlier happened over spreadsheets.

Read the case study - Developing Enterprise Chatbots for Instant Access to Asset Performance Data

Contact us

Srijan leverages its expertise in Amazon Lex to create unique conversational interfaces for your enterprise. Our team of developers is skilled in creating world-class experiences for clients located globally.

Get in touch to partner with us for your chatbot requirements.

Topics: AWS

4 Advantages to building cloud native applications with AWS

Posted by Gaurav Mishra on Jul 16, 2019 11:18:00 AM

The State of Cloud Native Security report 2018 states that 62% of enterprises today choose cloud-native applications for more than half of their new applications, and this number is set to grow by 80% over the next three years. This is no surprise, given that most organizations are already heavily invested in their chosen cloud platform and want to use it to its full potential.

Cloud-native applications are essentially those created specifically for the cloud and designed to leverage the gamut of resources available on the cloud. Being ‘cloud-native’ means that an application has a vast operational landscape, capable of being available from wherever the cloud is instead of being tied down to a physical location. 

The three defining characteristics for cloud native applications are:

  • Built with a microservices-based architecture
  • Containerized development
  • Dynamic orchestration of network and database resources

 

Besides this, agile development methodologies and the CI/CD approach are also common to most cloud-native applications.

The current leaders in cloud services - Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) - offer a whole host of services to enable the creation of cloud-native applications. However, AWS is one of the top-performing providers when it comes to cloud infrastructure as a service (IaaS), both in terms of critical analysis, as shown by the 2018 Gartner Magic Quadrant for Cloud IaaS, and customer preference, as seen in Gartner Peer Insights.

AWS is an enterprise favourite on the strength of its global infrastructure network and exhaustive range of serverless computing, storage and database solutions. Supporting giants like Netflix, Airbnb, and Comcast, AWS brings in a set of significant advantages for enterprises creating cloud-native applications.

Here’s a look.

Cost-Effectiveness

An estimated 30% of on-premise server capacity sits idle at any given time, yet organizations continuously spend money on its upkeep and maintenance. With cloud-native applications, this expenditure on unused resources is eliminated.

AWS offers dynamic scaling, allowing you to increase or decrease resource consumption based on application usage. Tools like Auto Scaling and Elastic Load Balancing help manage consumption of resources, ensuring that the massive AWS infrastructure is available to you on demand. 
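To make dynamic scaling concrete, here is a minimal sketch, using boto3, of attaching a target-tracking policy to an Auto Scaling group so capacity follows CPU load. The group name and target value are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling")

# "web-app-asg" is a hypothetical, pre-existing Auto Scaling group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add/remove instances to hold ~50% average CPU
    },
)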

But what makes this cost-effective is AWS' pay-as-you-go model across all their cloud services, whether infrastructure, platform or software. You pay only for the resources you use, and for the time frame you use them. This massively reduces cloud expenditure, as you no longer have to maintain idle resources in anticipation of a surge.

There are also secondary cost savings with cloud-native applications in the form of multitenancy: several different apps can dynamically share platform resources, leading to reduced expenditure.

Flexibility 

Kicking off cloud-native application development is in itself a huge paradigm shift for an organization, in terms of how it functions and how application development takes place. On top of that, if your chosen cloud platform also calls for unfamiliar operating systems, languages or databases, things can get really complicated, really fast. Not to mention the added cost of training your team in these new elements.

However, going cloud-native on AWS comes with a lot of flexibility as you can choose any combination of operating system, programming language, web application platform, database, and other services, as per your business needs. So your teams have the advantage of working with the technology tools that they are comfortable with, leaving them more time to focus on building better applications.

Besides that, the AWS platform is easy for your application development teams to use, with well-documented resources and APIs and the AWS Management Console. This makes for a smooth learning curve, enabling you to start creating cloud-native apps in no time.

No Strings Attached

While AWS does have long-term contracts with several enterprises, all their solutions for serverless applications operate on a pay-as-you-go basis. There is no minimum usage requirement or fixed usage period, with charges accruing per hour. If you want to stop using AWS services, you can do so immediately: without a lock-in period, your AWS billing stops at once and you are free to move on to other solutions.

Reliability

One of the key reasons why cloud-native applications are superior to applications simply migrated to the cloud is that they are built as containerized microservices. This means that:

  • Different business functions are built into independent microservices, only loosely coupled with each other, and failure in one does not cause failure of the whole application
  • The application as a whole, or even parts of it, can be easily moved around because the containers are designed to be self-sufficient and will work uninterrupted, no matter where they are hosted

 

This is what makes cloud-native applications more reliable and resilient. Whether a particular part of an application fails or an entire server region goes down, the applications will continue to function.

This reliability is further strengthened when backed by AWS' global infrastructure. AWS Availability Zones (AZs) currently span five continents, with multiple isolated server locations. Each AZ is designed with physical redundancies to ensure uninterrupted performance. Even if an entire AZ goes down, AWS systems ensure that your cloud-native applications can seamlessly move to the next unaffected location.

Besides this, AWS has a wide network of technology partners that can help enterprises build resilient cloud-native applications. Because AWS Partners go through a stringent certification and verification process, you can rest assured that they bring the best experience and expertise to your application development process.

Cloud-native applications give enterprises the ability to get to market faster and offer improved customer experiences. They gain a competitive advantage that's hard to beat with applications merely migrated to the cloud. And there seems to be no better cloud IaaS provider for your applications than AWS.

Srijan is an AWS Advanced Consulting Partner, with AWS-certified teams experienced in working with a range of AWS products and delivering solutions to global enterprises. From chatbots with Amazon Lex to an enterprise decision analytics platform, our teams have in-depth expertise in solving enterprise challenges.

Ready to build cloud-native applications with AWS? Just drop us a line and our expert team will be in touch.

Topics: AWS, Cloud

Creating an Amazon Lex Bot with Facebook Messenger

Posted by Ishan Yadav on May 2, 2019 4:51:00 PM

Here’s a blog on how you can create an Amazon Lex bot with the Facebook messenger platform. Take a look at the steps:

1. Publish the Bot

a. In the Amazon Lex console, choose one of the bots you created.

b. Verify that the console shows $LATEST as the bot version next to the bot's name.

c. Choose Publish.

d. On the Publish botname wizard, specify the alias BETA, and then choose Publish.

e. Verify that the Amazon Lex console shows the new version next to the bot’s name.
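If you prefer scripting the publish step over clicking through the console, the same flow can be sketched with boto3 and the Lex Model Building API. The bot name below is hypothetical, and create_bot_version needs the checksum of the current $LATEST version.

import boto3

lex_models = boto3.client("lex-models")

# Hypothetical bot name; fetch $LATEST to get its current checksum.
bot = lex_models.get_bot(name="PizzaOrderingBot", versionOrAlias="$LATEST")

# Snapshot $LATEST as a new numbered version.
version = lex_models.create_bot_version(
    name="PizzaOrderingBot", checksum=bot["checksum"]
)

# Point the BETA alias at the new version.
lex_models.put_bot_alias(
    name="BETA",
    botName="PizzaOrderingBot",
    botVersion=version["version"],
)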

[Image: publishing the bot in the Amazon Lex console]

2. Create a Facebook Application

On the Facebook developer portal, create a Facebook application and a Facebook page. For instructions, refer to the Quick Start guide in the Facebook Messenger platform documentation. Jot down the following:

  • The App Secret for the Facebook App

  • The Page Access Token for the Facebook page

3. Integrate Facebook Messenger with the Amazon Lex Bot 

To integrate Facebook Messenger with your bot:

  • Sign in to the AWS Management Console and open the Amazon Lex console at https://console.aws.amazon.com/lex/.

  • Choose your Amazon Lex bot.

  • Go to Channels.

  • Under Chatbots, select Facebook. The console displays the Facebook integration page.

  • On the Facebook integration page, do the following:

      • Type the following name: BotFacebookAssociation.

      • For KMS key, choose aws/lex.

      • For Alias, choose the bot alias.

      • For Verify token, type a token. This can be any string you choose (for example, ExampleToken). You will use this token later in the Facebook developer portal when you set up the webhook.

      • For Page access token, type the token that you obtained from Facebook in Step 2.

      • For App secret key, type the key that you obtained from Facebook in Step 2.


  • Choose Activate.

  • The console creates the bot channel association and returns a callback URL. Write down this URL.

2. On the Facebook developer portal, choose your app.

3. Choose the Messenger product, and select Set up webhooks in the Webhooks section of the page.

4. For instructions, refer to the Quick Start document on the Facebook Messenger platform.

5. On the webhook page of the subscription wizard, do the following:       

  • For Callback URL, type the callback URL provided in the Amazon Lex console earlier in the procedure.

  • For Verify Token, type the same token that you used in Amazon Lex.

  • Choose the subscription fields messages, messaging_postbacks, and messaging_optins.

  • Choose Verify and Save. This initiates a handshake between Facebook and Amazon Lex.

6. Enable Webhooks integration. Select the page that you created, and then subscribe.

    Note: If you update or recreate a webhook, unsubscribe and then resubscribe to the page.

4. Take your bot live for everyone

To allow your bot to send and receive messages, you need a special permission from Facebook: pages_messaging.

Follow the official Facebook documentation to request it.

And that's how you create a chatbot with Facebook Messenger. Let me know how it goes for you, or if you would have done anything differently. I'd love to hear new tricks for doing this.

Topics: AWS, Machine Learning & AI, MarTech

Leveraging AWS Solutions to solve High-Value Enterprise Challenges

Posted by Sanjay Rohila on Mar 29, 2019 4:31:00 PM

The AWS ecosystem is an invaluable asset for enterprises driving their digital transformation. While the AWS Cloud infrastructure powers a huge slice of enterprises, several other AWS solutions, especially in the realm of computation and machine learning, are enabling enterprises to leverage emerging technologies.

Here’s a look at some interesting projects and PoCs that Srijan has delivered for enterprise clients, using AWS solutions.

Chatbots powered by Amazon Lex and AWS Lambda

As a leading provider of intelligent cleaning solutions, the client wanted to analyze and optimize the performance of their products. They had a set of data visualization dashboards tracking this data in real time, but these were not easily accessible, and stakeholders had to put in some effort to extract relevant insights.

The solution was to build enterprise chatbots that could deliver the same insights without taking up too much time or effort on the part of the client stakeholders. They could just type their query into the chatbot, and receive an appropriate response.

Srijan leveraged Amazon Lex as the core conversational interface framework to design the chatbot. Lex's deep learning functionalities enabled the chatbot to identify the intent behind a particular question, understand the context, and give back an appropriate response.

The other key solution used was AWS Lambda, which handled the backend extraction of data from the client databases and the computation to generate the correct response. The business logic defined atop Lambda determined how raw data from various sources would be interpreted and presented to the user as a final answer.

Other AWS services used were:

  • AWS Cognito for user authentication
  • AWS Translate to ensure the chatbot could be used by client stakeholders in any location
  • Amazon S3 to store relevant asset images and performance graphs that could be accessed solely by the chatbot users.

 

READ COMPLETE CASE STUDY

Video Analytics powered by Amazon SageMaker

The cleaning solutions enterprise was also receiving increasing complaints that their floor cleaning machines were not performing as expected. The client wanted detailed logs of machine performance across all locations, to validate or refute these customer claims and prevent unwarranted expenditure on recalls and repairs.

Srijan proposed a video analytics algorithm capable of identifying a machine and verifying its performance at a given location. The approach focused on recording real-time footage of the machines operating at different customer locations, then automatically analyzing the video feed to verify whether the machines were performing as expected.

This was achieved with a deep learning model designed to analyze video feed data. The key objective of the model, built on a convolutional neural network, was to accurately identify the machine in a video stream at 5-second intervals. These sightings were then timestamped and put together in a JSON file, creating a continuous log of whether a machine was working, in any given location.

Amazon SageMaker was the core solution used for this model. As a managed platform, it allowed for:

  • Creating the deep learning algorithm, with TensorFlow
  • Data augmentation and training the algorithm to accurately recognize the machines in a video stream
  • Quick and efficient scaling of training data to create a more accurate machine learning model

 

Once the model was in place, Srijan used Amazon S3 and AWS Lambda to create the workflow for collecting video feed from various customer locations, analyzing them, and creating detailed logs of machine performance.

READ COMPLETE CASE STUDY

Enterprise Data Analytics Platform with AWS Lambda

OnCorps offers a PaaS for insight into enterprise data, helping customers make better decisions using predictive analytics, machine learning and peer comparison. They wanted to create a platform that could do a lot of the heavy lifting when it came to data: right from gathering, to processing, to analytics and visualization.

While the platform was built on Drupal, Srijan leveraged a host of AWS solutions to deliver some powerful functionalities:

Amazon EC2: This offered an easily scalable and cost-effective computation solution. It gave the ability to run data analysis, compute workloads to aggregate data, as well as deliver predictive insight.

AWS Lambda: The frontend interface of the platform needed structured data to work with, preferably in JSON format. Lambda was used to transform the data coming in from various sources into a standard format.

Amazon S3: This was used to host the single page application built on AngularJS. S3 was also used as storage for all files and content assets for the platform.

AWS Cost Explorer: One of the Srijan team’s primary objectives was to keep product development costs on track. AWS Cost Explorer was used to get a clear visualization of operation costs across all solutions, and optimize the budget as much as possible.

With these solutions in place, OnCorps was able to roll out a scalable platform with >99% performance reliability across enterprise customers.

READ COMPLETE CASE STUDY

Cloud Native Storage with Amazon S3

TheRecordExchange (TRX) offers a SaaS-based workflow management application to capture and access recordings of legal proceedings. Their business model depends on frictionless uploading and downloading of text and media files. The application frontend is designed so that transcription agents can easily upload files of legal proceedings, and customers can download them at will.

Given this, they needed a cloud-based server that could efficiently manage all file-related requests on the applications, and robustly render them on the frontend.

With Amazon S3, an object storage solution, Srijan was able to deliver flexible cloud-native storage for TRX. S3 enabled:

  • Addition of any number of files to the application, without worrying about capacity constraints. Since the application layer didn't have to handle file processing, it was lighter and delivered a better user experience.
  • Dynamic spacing, which allowed TRX to scale up or scale down space usage as and when required. With no minimum usage requirements and availability of on-demand usage, S3 proved to be a highly cost-effective solution for the client.

READ COMPLETE CASE STUDY

Srijan is an Advanced Consulting Partner for Amazon Web Services (AWS), currently working with enterprises across media, travel, retail, technology and telecom to drive their digital transformation with a host of AWS solutions.

Looking for an experienced AWS certified team to aid your digital growth strategy? Just drop us a line and our team will get in touch.

Topics: AWS, Cloud, Enterprises

APL: Alexa Presentation Language-basics

Posted by Sanjay Rohila on Feb 4, 2019 11:32:00 AM

I have been using display templates for quite a while now for screen-enabled Alexa devices. There are a few templates and directives we can use in responses for display devices, but there is not much customization we can do to templates and layouts.

Alexa Presentation Language (APL) is a beta feature that gives a lot more power to developers. APL is a new directive type in DisplayRenderer. To give a perspective on what we can do, below is a simple view I have created (the code responsible for this view follows):

[Image: a view with header and footer blocks]

A basic APL document must have the type, version, and mainTemplate properties.

{
  "type": "APL",
  "version": "1.0",
  "mainTemplate": {
    "item": {
      "type": "Text",
      "text": "Hello, world"
    }
  }
}

There are lots of components (Text, Image, Video etc.) that we can use in APL. We can also create a Sequence for a repeating set of components. For more details, see the list of components: https://developer.amazon.com/docs/alexa-presentation-language/apl-component.html

We can define various styles for components, somewhat similar to CSS: color, font, background etc. Have a look at https://developer.amazon.com/docs/alexa-presentation-language/apl-styled-properties.html for details about styles.

The following is the document responsible for the view above.

To use this in an Alexa response, we add the Alexa.Presentation.APL.RenderDocument directive and put the above code in the document property. The result would be:

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>This is a minimal APL document</speak>"
    },
    "directives": [{
      "type": "Alexa.Presentation.APL.RenderDocument",
      "token": "document1",
      "document": <content of above gist>
    }]
  }
}
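For completeness, here is a minimal sketch of attaching that directive from a Python skill handler using the ASK SDK; it assumes the APL document JSON (the gist content) is saved next to the handler as apl_document.json.

import json

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type
from ask_sdk_model.interfaces.alexa.presentation.apl import RenderDocumentDirective

class LaunchRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # Load the APL document from a local file (assumed filename).
        with open("apl_document.json") as f:
            apl_document = json.load(f)

        return (
            handler_input.response_builder
            .speak("This is a minimal APL document")
            .add_directive(RenderDocumentDirective(
                token="document1",
                document=apl_document,
            ))
            .response
        )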

To play around with APL, try this new tool from the Alexa team: https://developer.amazon.com/alexa/console/ask/displays

Stay tuned for a follow-up post where we'll get into advanced APL, including dataSources, videos, sequences, and HTML5 (this is gonna be huge once supported by APL: it means we could run PWA apps on Alexa devices).

Topics: AWS, Machine Learning & AI, Architecture

Solving Alexa's Accent Understanding Challenge, using Support Vector Machines

Posted by Chirash Rupela on Feb 1, 2019 11:48:00 AM

Alexa is great, providing amazing features to control apps and services with just your voice. But its understanding of non-American accents leaves much to be desired. Case in point: using Alexa with my Indian accent brings out some serious problems. No matter how many times I say "sprint", it understands "spend".

This is terrifying for Alexa developers like me who want to use the NLP power of Alexa to build solutions that cater primarily to the Indian population. Amazon does offer Alexa skill development in 'en-IN', but that does not solve the problem. This major flaw in transcribing the Indian accent results in failures in the skill flow and broken conversations.

But should it be a roadblock for you to develop an Alexa skill?

No, because we found a way to solve this problem.

Devising a Solution

The solution is to use the ability to add synonyms for slot values (in custom slot types).

In any Alexa skill, you can add intents, and each intent has different slots. You can choose pre-defined AMAZON slot types for your slots, or create custom slot types. The difference is that a custom slot type allows you to add synonyms for slot values.

Using an example from our Alexa skill:

If we add "spend" as a synonym for the "sprint" slot value, the problem is solved. The next time Alexa hears "spend", it sends the slot value "sprint", which can be passed to the Lambda function that gives back an appropriate response.

Quick aside: our skill is now available for beta testing, so do try it out.


This was the exact solution we were looking for.

Now we had the solution, and two ways to make it happen:

  • Manually add synonyms for each slot value, based on user data and custom reviews.

  • Predict synonyms for each slot value and automatically add them once or twice a week.

    Manual additions are quite easy to do, but not a scalable option. Consider a case where you have more than 50 slot values and want to add synonyms to each one, or most of them: doing it manually would be tedious.

    This is why we went with the predictive approach and automated the addition of slot synonyms in our skill.

 
Implementing the Solution

To automate the prediction and addition of slot synonyms, we used the following AWS resources:

  • Lambda function

  • EC2 Instance

  • S3 bucket

  • Alexa developer account

 

Now that all the resources are ready, there are three main steps in the predictive approach:

1. Capturing words like "spend" which are poorly transcribed by Alexa

2. Predicting the slot value the word "spend" belongs to

3. Adding the word "spend" as a synonym to the predicted slot value

I will explain steps 1 and 3 in a while, but let's understand step 2 first, as it's the most crucial.

Prediction requires a machine learning algorithm. In our case, we used Support Vector Machines (SVM) to predict the slot value. It's one of the simplest yet quite accurate ML algorithms for text classification.

SVM is a supervised ML algorithm which finds the line or hyperplane with the maximum distance from the support vectors. Say you have two classes:

a. Words similar to “sprint”

b. Words similar to “release”

Using SVM, we can find the line which clearly distinguishes these two classes based on the available training dataset. This line has the maximum distance from the words on the outermost part of the clusters, the so-called support vectors.

[Image: SVM separating the two classes]

You can learn more about SVM here

The Architecture

[Image: architecture of the synonym prediction pipeline]

Step 1

To capture poorly transcribed words such as "spend", our Lambda function reads the request JSON from Alexa, stores the word along with its slot name in a CSV file, and saves it to an S3 bucket.

import json

import boto3

def checkutterance(data):
    # Collect slot values where entity resolution found no match.
    result = []
    for k, v in data.items():
        if "resolutions" in v.keys():
            for i in v["resolutions"]["resolutionsPerAuthority"]:
                if i["status"]["code"] == "ER_SUCCESS_NO_MATCH":
                    result.append({"slot": v["name"], "utterance": v["value"]})
    # Append the missed utterances to the CSV file stored in S3.
    s3 = boto3.client('s3')
    response = s3.get_object(Bucket="BUCKET_NAME", Key="FILE_NAME")
    data = response['Body'].read().decode('utf-8')
    string = ""
    for j in result:
        string = string + json.dumps(j) + "\n"
    data = data + string
    encoded = data.encode('utf-8')
    s3.put_object(Body=encoded, Bucket='BUCKET_NAME', Key='FILE_NAME')
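For context, checkutterance receives the slots object from the incoming Alexa request; the skill's Lambda handler calls it with event["request"]["intent"]["slots"]. A trimmed sketch of what that payload looks like when entity resolution finds no match (structure per Alexa's request format, values illustrative):

# Trimmed example of the slots object Alexa sends when entity resolution
# fails to match; values are illustrative.
slots = {
    "choose": {
        "name": "choose",
        "value": "spend",  # what Alexa actually heard
        "resolutions": {
            "resolutionsPerAuthority": [
                {"status": {"code": "ER_SUCCESS_NO_MATCH"}}
            ]
        },
    }
}

# Inside the skill's Lambda handler:
# checkutterance(event["request"]["intent"]["slots"])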

Step 2 

Once the missed values are stored in an S3 bucket, we use our EC2 instance to read the file.

In our case, we have scheduled a cron job to do it every day.

The script deployed on the EC2 instance is responsible for training and predicting classes using SVM. It reads the missed values from the file and predicts the class for each value. In our case, it predicts "spend" as a synonym for the slot value "sprint".

We also set a threshold for cases where a word matches both classes with low confidence. Such values are stored in a CSV file and mailed to us, so we can add them to the Alexa skill manually if required.

import boto3
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import svm
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.utils import shuffle

text_clf = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf-svm', svm.SVC(C=1, class_weight=None, coef0=0.0,
                        decision_function_shape='ovr', degree=2,
                        gamma='auto', kernel='rbf', max_iter=-1,
                        probability=True, tol=0.001, verbose=False)),
])
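The pipeline above only defines the model; here is a sketch of the training and thresholded prediction built around it. The training phrases, labels and the 0.6 threshold are illustrative.

# Illustrative training data: phrases labelled with the slot value they
# should resolve to.
train_texts = ["spend", "sprint review", "current sprint",
               "release notes", "next release", "deploy the release"]
train_labels = ["sprint", "sprint", "sprint",
                "release", "release", "release"]

text_clf.fit(train_texts, train_labels)

THRESHOLD = 0.6  # below this confidence, fall back to manual review

missed_values = ["spend", "spent", "realize"]  # words read from the CSV in S3

for word in missed_values:
    probs = text_clf.predict_proba([word])[0]
    best = probs.argmax()
    if probs[best] >= THRESHOLD:
        print(word, "->", text_clf.classes_[best])
    else:
        print(word, "-> low confidence, queued for manual review")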
Step 3

Once the slot value is predicted for each word, we use the ASK CLI to add the word as a synonym for the respective slot in the interaction model JSON of our Alexa skill.

import json
import os
import time

# `sprint` and `release` are the lists of {"slot", "utterance"} records
# predicted for each class in step 2.

os.system('ask api get-model -s ALEXA_SKILL_ID -l en-IN > alexamodel.json')
time.sleep(5)
data_alexa = []
with open('alexamodel.json', 'r+') as f:
    data_alexa = json.load(f)

for i in data_alexa["interactionModel"]["languageModel"]["types"]:
    if i["name"] == "choose":
        for j in i["values"]:
            if j["name"]["value"] == "sprint":
                synonyms = j["name"]["synonyms"]
                for s in sprint:
                    if s["utterance"] not in synonyms:
                        synonyms.append(s["utterance"])
                print("new list of synonyms", synonyms)
                j["name"]["synonyms"] = synonyms
            if j["name"]["value"] == "release":
                synonyms = j["name"]["synonyms"]
                for r in release:
                    if r["utterance"] not in synonyms:
                        synonyms.append(r["utterance"])
                print("new list of synonyms", synonyms)
                j["name"]["synonyms"] = synonyms

with open('alexa.json', 'w+') as fp:
    json.dump(data_alexa, fp, ensure_ascii=False)

os.system("ask api update-model -s ALEXA_SKILL_ID -f alexa.json -l en-IN")

The Alexa skill is then rebuilt with the updated model, automating the process of updating synonyms in our Alexa skill.


With this, the problem of transcribing Indian accents in an Alexa skill is solved to some extent. We are continuously updating our training dataset to improve the accuracy of our model.

If you have any suggestions on how to improve an Alexa skill for this particular problem, do let us know in the comments section below. 

Topics: AWS, Machine Learning & AI, Architecture

Polly Voice-over for Web Recordings [Code]

Posted by Sanjay Rohila on Jan 16, 2019 3:05:00 PM

When recording a demo, we sometimes want to show textual information about what the user is doing or what's happening in the background. But it is more useful if we can add a voice-over for that textual information.

One way is for the person recording the video to do the voice-over, but that doesn't sound like a professional voice-over. So the question was: can we use Polly to do this? In the end it's text-to-speech, which is exactly what Polly is built for. Turns out, we can.

I have written a small script which uses the AWS JavaScript SDK, takes a Cognito pool ID, and then provides an API function that can be used to add text, on any event or after a timeout, as a voice-over.

The script has some configuration too, so you can change the color of the caption (subtitle) being added, and change the voice or use different Polly lexicons (read more here).
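The script itself uses the JavaScript SDK in the browser; as a sketch of the underlying service calls, here is the same translate-then-speak flow in Python with boto3. The caption text, language codes and voice are just examples.

import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

caption = "The user is now uploading the recorded file."  # example caption

# Optionally translate the caption before synthesis.
translated = translate.translate_text(
    Text=caption, SourceLanguageCode="en", TargetLanguageCode="es"
)["TranslatedText"]

# Synthesize the voice-over; "Lucia" is one of Polly's Spanish voices.
speech = polly.synthesize_speech(
    Text=translated, OutputFormat="mp3", VoiceId="Lucia"
)

with open("voice_over.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())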

 

Let's see the output of a demo (of the script itself) I have recorded:

[Video: polly-voice-over-demo]

 

Now, you might wish you could do these voice-overs in different languages, with a fluent accent in each. Worry not: the script covers that part too, so you can use translation together with voice-overs. Here is the video:

 

[Video: voice-over-with-translation]

 

Go Ahead, have a look at https://github.com/crazyrohila/polly-voice-over and use it, tweak it as desired.

Topics: AWS, Architecture

How to Upload Audio Stream to Amazon S3

Posted by Trilok Nagvenkar on Jan 10, 2019 12:02:00 PM

Through this blog, I'll take you through uploading a live audio stream to Amazon S3 using the AWS SDK.

We will use a few AWS services, including Amazon Cognito identity pools (federated identities) and, of course, S3.

AWS Configurations

Assuming you have logged into the AWS console, let's get started by creating an S3 bucket where all the audio files will be stored. To create the bucket, navigate to AWS S3 -> Create bucket.

Once the bucket is created, our next step is to create a federated identity, which provides the necessary permissions for file uploads from the browser to the S3 bucket.

To create the federated identity, navigate to the Cognito service -> Manage identity pools -> Create new identity pool.

Give the identity pool a name and check Enable access to unauthenticated identities, or configure Authentication Providers.

The next screen is all about setting the necessary permissions for the federated identity via IAM roles. Here we create a new IAM role with specific permissions defined via the custom policy below:

 

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::S3_BUCKET_NAME/*"
    }
  ]
}


After creation, it provides the identity pool ID. That ID is required to communicate with AWS services.
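For a sense of what the browser-side SDK does with that pool ID, here is the equivalent credential exchange sketched with boto3; the pool ID and region are placeholders.

import boto3

# Placeholder identity pool ID from the Cognito console.
POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# 1. Get an identity ID for an unauthenticated guest of the pool.
identity = cognito.get_id(IdentityPoolId=POOL_ID)

# 2. Exchange it for temporary AWS credentials, scoped by the IAM role
#    attached to the pool (the custom policy above).
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
print(creds["Credentials"]["AccessKeyId"])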

Front-end App

Now we will create a small front-end app to record an audio stream and upload it to S3.

HTML:

 

<button type="button" class="btn kc record" id="record_q1" disabled="disabled" onclick="AudioStream.startRecording(this.id)">Record</button>

 

<button type="button" class="btn kc stop" id="stop_q1" disabled="disabled" onclick="AudioStream.stopRecording(this.id)">Stop</button>


 

JS:

We will create an AudioStream class which has the functions used in the HTML events above, as well as the ones used to upload the audio stream to S3.

Initialization:

1. The audioStreamInitialize function requests microphone permission and, on receiving data, creates a multipart upload.

audioStreamInitialize() {
  /*
  Feature detecting is a simple check for the existence of
  "navigator.mediaDevices.getUserMedia".

  To use the microphone we need to request permission.
  The parameter to getUserMedia() is an object specifying the details and
  requirements for each type of media you want to access.
  To use the microphone it should be {audio: true}.
  */
  navigator.mediaDevices.getUserMedia(self.audioConstraints)
    .then(function(stream) {
      /*
      Creates a new MediaRecorder object, given a MediaStream to record.
      */
      self.recorder = new MediaRecorder(stream);

      /*
      Called to handle the dataavailable event, which is periodically
      triggered each time timeslice milliseconds of media have been recorded
      (or when the entire media has been recorded, if timeslice wasn't
      specified). The event, of type BlobEvent, contains the recorded media
      in its data property. You can then collect and act upon that recorded
      media data using this event handler.
      */
      self.recorder.addEventListener('dataavailable', function(e) {
        var normalArr = [];
        /*
        Here we push the stream data to an array for future use.
        */
        self.recordedChunks.push(e.data);
        normalArr.push(e.data);

        /*
        Here we create a blob from the stream data we have received.
        */
        var blob = new Blob(normalArr, {
          type: 'audio/webm'
        });

        /*
        If the length of recordedChunks is 1, this is the first part of our
        data. So we call createMultipartUpload, which returns an upload ID
        used to upload the remaining parts of the stream.

        Otherwise, we upload the next part of the multipart upload.
        */
        if (self.recordedChunks.length == 1) {
          self.startMultiUpload(blob, self.filename)
        } else {
          /*
          self.incr is basically a part number: the part number of the part
          being uploaded, a positive integer between 1 and 10,000.
          */
          self.incr = self.incr + 1
          self.continueMultiUpload(blob, self.incr, self.uploadId, self.filename, self.bucketName);
        }
      })
    });
}
   
   


 

Start and Stop Recording:

1. The startRecording function is triggered when we start a new recording.

/*
The MediaRecorder method start(), which is part of the MediaStream Recording
API, begins recording media into one or more Blob objects.
You can record the entire duration of the media into a single Blob (or until
you call requestData()), or you can specify the number of milliseconds to
record at a time. Then, each time that amount of media has been recorded, an
event will be delivered to let you act upon the recorded media, while a new
Blob is created to record the next slice of the media.
*/
startRecording(id) {
  var self = this;

  /*
  1800000 is the number of milliseconds to record into each Blob.
  If this parameter isn't included, the entire media duration is recorded
  into a single Blob unless the requestData() method is called to obtain
  the Blob and trigger the creation of a new Blob into which the media
  continues to be recorded.

  PLEASE NOTE: you can change this parameter of 1800000, but each chunk
  should be at least 5 MB, as that is the minimum part size for a
  multipart upload.
  */
  this.recorder.start(1800000);
}

 

2. stopRecording

/*
When the stop() method is invoked, the UA queues a task that runs the
following steps:
1 - If MediaRecorder.state is "inactive", raise a DOM InvalidState error
    and terminate these steps. If the MediaRecorder.state is not
    "inactive", continue on to the next step.
2 - Set the MediaRecorder.state to "inactive" and stop capturing media.
3 - Raise a dataavailable event containing the Blob of data that has been
    gathered.
4 - Raise a stop event.
*/
stopRecording(id) {
  var self = this;
  self.recorder.stop();
}

 

Uploading the audio stream to S3:

1. startMultiUpload

/*
Initiates a multipart upload and returns an upload ID.
The upload ID is used to upload the other parts of the stream.
*/
startMultiUpload(blob, filename) {
  var self = this;
  var audioBlob = blob;
  var params = {
    Bucket: self.bucketName,
    Key: filename,
    ContentType: 'audio/webm',
    ACL: 'private',
  };
  self.s3.createMultipartUpload(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      self.uploadId = data.UploadId
      self.incr = 1;
      self.continueMultiUpload(audioBlob, self.incr, self.uploadId, self.filename, self.bucketName);
    }
  });
}

 

2. continueMultiUpload

/*
Uploads a part in a multipart upload.
The following code uploads one part of the multipart upload, specifying a
file name for the part data. The upload ID is the same one returned when
the multipart upload was initiated.
*/
continueMultiUpload(audioBlob, PartNumber, uploadId, key, bucketName) {
  var self = this;
  var params = {
    Body: audioBlob,
    Bucket: bucketName,
    Key: key,
    PartNumber: PartNumber,
    UploadId: uploadId
  };
  console.log(params);
  self.s3.uploadPart(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      /*
      Once the part of data is uploaded we get an entity tag (ETag) for the
      uploaded object, which is used later when we complete the multipart
      upload.
      */
      self.etag.push(data.ETag);
      if (self.booleanStop == true) {
        self.completeMultiUpload();
      }
    }
  });
}

 

3. completeMultiUpload

 

/*
Completes a multipart upload by assembling previously uploaded parts.
*/
completeMultiUpload() {
  var self = this;
  var outputTag = [];

  /*
  Here we construct the ETag data in the required format.
  */
  self.etag.forEach((data, index) => {
    const obj = {
      ETag: data,
      PartNumber: ++index
    };
    outputTag.push(obj);
  });

  var params = {
    Bucket: self.bucketName, // required
    Key: self.filename, // required
    UploadId: self.uploadId, // required
    MultipartUpload: {
      Parts: outputTag
    }
  };

  self.s3.completeMultipartUpload(params, function(err, data) {
    if (err) {
      console.log(err, err.stack); // an error occurred
    } else {
      // initialize variables back to normal
      self.etag = [], self.recordedChunks = [];
      self.uploadId = "";
      self.booleanStop = false;
      self.disableAllButton();
      self.removeLoader();
      alert("We have successfully saved the questionnaire.");
    }
  });
}

 

This is how you can upload live streaming audio to an S3 bucket from the front-end using the AWS SDK.
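The browser flow above maps one-to-one onto the S3 multipart API. For reference, here are the same three calls sketched server-side with boto3; the bucket, key and stand-in chunks are placeholders, and every part except the last must be at least 5 MB.

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-audio-bucket", "recording.webm"  # placeholders

# Stand-in for the recorded data; every part except the last must be >= 5 MB.
audio_chunks = [b"\0" * (5 * 1024 * 1024), b"final chunk"]

# 1. Initiate the upload and get an upload ID.
upload = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY,
                                    ContentType="audio/webm")

# 2. Upload the parts, collecting each part's ETag.
parts = []
for number, chunk in enumerate(audio_chunks, start=1):
    part = s3.upload_part(Bucket=BUCKET, Key=KEY, PartNumber=number,
                          UploadId=upload["UploadId"], Body=chunk)
    parts.append({"ETag": part["ETag"], "PartNumber": number})

# 3. Assemble the parts into the final object.
s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY,
                             UploadId=upload["UploadId"],
                             MultipartUpload={"Parts": parts})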

To find a working example, refer to my blog on uploading an audio stream to AWS S3.

 

Topics: AWS, Coding and Tutorial

How to Secure Alexa Account Linking

Posted by Yashica Gupta on Jan 10, 2019 11:56:00 AM

With the ongoing increase in human-device interaction, Alexa devices have found a strong place in the market. Echo devices are now placed in homes and offices to control the lights, check the news, get the status of a task, and more, with just a voice command. Every user now has their own private (virtual) assistant to make their life easier.

But an important link in the chain is the Alexa skill developer, whose aim is to build a skill which reduces the user's manual work and makes their life convenient. Though developing an Alexa skill is not difficult, there are many challenges developers face while building one, especially when it requires another software/app.

Currently, an Alexa skill which requires another software/app needs the account linking feature to be enabled.

For example: to enable and use the Uber Alexa skill, the user needs to link their Uber account with Alexa. Once the user links their account, the Uber software sends an access token to the Alexa skill as a unique key for the user, and the account linking is complete. The next time the user invokes the Uber Alexa skill, the request sends the access token to the Uber software and fetches the information.

We faced the same blocker while developing an account-linked Alexa skill closely integrated with Jira software. The skill is built primarily for Scrum Masters, to read from and write to their Jira and help them stay up to date with their projects.

The most challenging part of developing this skill was account linking, because Jira needs server-client authentication: to link an account, every user has to manually add our Alexa skill as an authorized client in their Jira dashboard and then provide us with the access token.

The solution implemented to reduce this inconvenience was a custom HTML page (hosted on S3) for account linking. The user just adds their credentials (username, password and the Jira server URL), and the account is linked successfully.

With this, we were not using Jira directly to authenticate users via account linking, but rather as a message carrier between Alexa and Jira. This makes the account linking process easy for users, but poses a high security risk to their credentials.

To make the process secure, the following architecture was implemented in our skill. A key aspect of the architecture is that it is built completely on AWS services, namely:

  1. API Gateway

  2. S3 bucket

  3. Lambda

  4. DynamoDB 

[Image: Alexa account linking architecture]

Explanation:

When a user enables the Alexa skill, he is redirected to our HTML page hosted on an S3 bucket. Once the user fills in his credentials and clicks the submit button, it sends a GET request, with query parameters, to an endpoint deployed on AWS API Gateway.

The API Gateway then invokes a connected Lambda function and sends the query parameters as an event.


Using the parameters in the event, the Lambda sends a GET request to the Jira REST API to validate and authenticate the user. If the credentials are incorrect, it returns an error message; on success, it returns an access token created by the Lambda. On successful validation, the Lambda also stores the encoded user credentials in a DynamoDB table, with the access token as the key.

import json
import random
import string

import boto3

dynamodb_client = boto3.client('dynamodb')
session = boto3.session.Session()  # used by the encrypt() helper

def lambda_handler(event, context):
    print(event)
    username = event["username"]
    password = event["password"]
    server = event["server"]
    skill = event["skill"]
    table_name = ""
    if skill == "skill_name":
        table_name = "Table_name"

    # validity() (defined elsewhere in the Lambda) calls the Jira REST API
    # to authenticate the user and returns the response to send back.
    result = validity(password, username, server)

    # Convert each event field into DynamoDB's attribute-value format.
    js = {}
    for k, v in event.items():
        js[k] = {"S": v}

    # encrypt() (defined elsewhere) KMS-encrypts the password before storage.
    js["password"] = {
        "B": encrypt(session, event["password"], "alias/alias_name")
    }
    accesstoken = ''.join(random.SystemRandom().choice(
        string.ascii_uppercase + string.digits) for _ in range(10))
    js["accesstoken"] = {"S": accesstoken}

    dynamodb_client.put_item(TableName=table_name, Item=js)
    print("done")
    return result

The JavaScript then displays an "Invalid Credentials" error if an error message is received. On a success message, the JavaScript sends the access token to the Alexa redirect URL, successfully linking the account.


The access token is the key component, as it is used to identify the user.

When the user invokes our skill, Alexa sends a JSON request to the Lambda function with a key carrying the access token. The Lambda then queries the DynamoDB table with the access token to identify the user and fetch his credentials. Once the credentials are fetched, the Lambda sends a request to the Jira REST API based on the user's intent, and returns the message to Alexa as JSON.
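On the skill side, the token lookup is a single DynamoDB read. A sketch, assuming the table name used above and that the token arrives in the request's context.System.user.accessToken field:

import boto3

dynamodb_client = boto3.client("dynamodb")

def get_user_credentials(event):
    # Alexa passes the linked account's token on every request.
    token = event["context"]["System"]["user"]["accessToken"]

    # Hypothetical table keyed on the access token, as stored above.
    item = dynamodb_client.get_item(
        TableName="Table_name",
        Key={"accesstoken": {"S": token}},
    )["Item"]

    # The credentials are then decrypted and used to call the Jira REST API.
    return item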

Alexa then voices the message to the user, who can now use Jira with just his voice!

Topics: AWS, Architecture, Coding and Tutorial
