AWS - The Right Cloud for Media and Entertainment Workloads

Posted by Kimi Mahajan on Aug 2, 2019 2:32:00 PM

The media landscape is transforming the way content is produced and consumed, raising user expectations for more personalized experiences, available anywhere, anytime, and on any device.

This is driving huge operational changes, as media companies migrate from traditional broadcasting methods to digital distribution models. Media giants are increasingly adopting cloud technologies to manage the explosive growth of digital content.

Media enterprises are shifting to AWS, a pioneer in cloud hosting, to take advantage of its scalable, elastic, and secure cloud services.

But how well does AWS solve the challenges of the media and entertainment industry? Let’s look at the benefits of moving to the cloud, and why AWS stands out in the cloud services arena.

Why Do Media Enterprises Need to Shift to the Cloud?

In a survey, 35% of respondents replied that their enterprises moved to cloud for easier collaboration for post-production tasks.

Source: Backblaze

The constant pressure on media firms to invest resources in high-quality, creative content, along with the need to prevent data loss from natural and man-made disasters, is pushing them toward the cloud.

So how is the cloud helping the media and entertainment industry with its major challenges? Let’s review them one by one.

1. Huge Consumer Demand

Today’s consumers of media and entertainment content expect a huge choice of content, and their demand shifts rapidly and must be met in real time.

The media and entertainment sector needs to meet this volatile demand cost-effectively, and remain flexible enough to automatically spin servers up and down as demand rises and falls.

2. Continuous Supply of Content

In order to stay competitive, content creators in the media field are under constant pressure to produce and/or distribute original content more frequently, at an accelerated rate.

With the cloud, it’s easier to store, manage, and deliver vast amounts of digital content. Hybrid and multi-cloud deployments provide an even greater measure of flexibility, allowing workloads to shift seamlessly across public and private infrastructure.

3. Cost Benefits of Cloud Computing

The cable and broadcast television segment of the media and entertainment sector is being challenged by new trends in television broadcasting. Agile, low-cost over-the-top (OTT) companies selling and/or delivering streaming media content directly to consumers over the internet are competing against traditional media distribution methods.

Other challenges include the rising cost of content licensing and shortened technology lifecycles.

By shifting to the cloud’s OPEX model, media companies can reduce the costs of their storage and delivery technologies and infrastructure.

4. High Performance With Minimal to Zero Delays

For user experience, it is critical that viewer content streams with minimal delay and downtime. Even a six-second delay in streaming an ad for a show can be costly, with customers likely to switch to another entertainment channel.

The cloud provides architectures that support high availability and uncompromised performance SLAs.

Advantages of AWS for Media Enterprises

With AWS tools and services, media enterprises can monitor and manage their storage and compute usage and costs.

For major tasks around content production, storage, processing, and distribution, AWS brings scalable, elastic, and secure cloud services. Equipped with deep learning, machine learning, natural language processing, and natural language understanding, it helps digital media creators deliver personalized experiences through smarter content investments.

Secure, Scalable and Cost-Effective Solution

66% of respondents say security is their greatest concern when adopting an enterprise cloud computing platform.

AWS remains a strong choice for media companies looking to adopt a public cloud model. As per a Cloud Security Alliance report, Amazon Web Services is the most popular public cloud infrastructure platform, hosting 41.5% of application workloads in the public cloud.

Multinational entertainment firms have achieved scalability, making content available to consumers anytime and anywhere by leveraging AWS cloud services.

AWS also remains cost-effective for media enterprises, which can follow a pay-per-use model for the services they consume.

Cloud Computing is Changing Economics of Media and Publishing

Simplified Content creation and Production

Media enterprises need not worry about geographic and resource constraints. Their only focus should be on creating quality content with HDR, VR, AR, and beyond, to keep viewers engaged.

With AWS, you can connect with worldwide production talent and tap into virtually unlimited capacity, strong security, and the most innovative cloud technology partners in the industry.

With machine learning and analytics, you can now leverage valuable insights to improve production investment decisions tailored to consumers’ needs. Pre-processing and optimization for false takes or cuts comes easily with AWS. ML-assisted production editing provides quick turnaround for dailies and editorial review, and prohibited content can be flagged for filtered viewing.

Efficient Storage Provider

Media enterprises now have a one-stop solution for their storage concerns in AWS’s multi-tiered storage offering, which includes Amazon Simple Storage Service (Amazon S3), S3 Infrequent Access, and Amazon Glacier. These services allow massive data ingestion and the elasticity to satisfy ever-increasing storage demand, along with cost management.
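As a sketch of how such tiering is typically automated: an S3 lifecycle configuration transitions objects to cheaper storage classes as they age. The bucket prefix, rule name, and day thresholds below are illustrative assumptions, not values from the article.

```python
import json

# Hypothetical lifecycle policy: after 30 days move media assets to
# S3 Infrequent Access, after 90 days archive them to Glacier.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "tier-raw-footage",
            "Filter": {"Prefix": "raw-footage/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 (and a real bucket) this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-media-archive",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_policy,
#   )
print(json.dumps(lifecycle_policy, indent=2))
```

The policy is plain JSON, so it can be version-controlled alongside the rest of the infrastructure configuration.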

Eases the Digital Distribution and Post-Production Process

AWS solves the challenge of broadcasting quality video workflows in the cloud, and ensures seamless delivery to any device, anytime, anywhere.

Media enterprises need not worry about live, linear, or on-demand content: AWS specialises in delivering professional-quality media experiences to viewers with far less time, effort, and expense than a traditional data center requires.

Pay-as-you-go pricing and fully automated resource scaling let you handle an audience of any size without upfront capital investment. Instead of managing complex infrastructure, AWS video solutions let you focus on creating engaging content.

Live Streaming, Subtitling, Video on Demand Service

Making content understandable to a large audience is easy with AWS cloud solutions, which help generate multilingual subtitles for live over-the-top streaming.

With AWS, viewers can choose a movie or video from a wide array of video-on-demand (VOD) content, available for broadcast and multi-screen delivery.

Migrating VFX rendering to AWS helps media companies shorten content production times and fosters collaboration with contributors around the world.

Let’s understand how AWS has been beneficial for giant names in media and entertainment.

A prominent name in streaming online content on smart TVs, game consoles, PCs, Macs, mobiles, tablets, and more.

Business Challenges:

  • Unable to scale
  • Unable to meet increased user demand
  • Huge infrastructure unable to manage data storage

Solution and Benefits:

  • Accelerated deployment of servers and data storage
  • Streams high-quality content from anywhere, on any device
  • Improved scalability, with a better architecture
  • Containers optimized their microservices architecture

Discovery Communications

A leader in nonfiction media, reaching more than 1.8 billion cumulative subscribers in 218 countries and territories.

Business Challenges:

  • Required easy-to-manage website infrastructure
  • Was seeking a cost-effective solution
  • Wanted to consolidate multiple delivery engines
  • Needed a scalable and flexible solution
  • Wanted to switch to a pay-as-you-go model

Solution and Benefits:

  • Migrated more than 40 sites to AWS
  • Highly scalable architecture
  • Entire continuous delivery system and development platform built around the AWS API
  • Low latency, along with cost savings of 20-25 percent and better manageability


Media and entertainment companies have begun to embrace cloud computing as their technology of choice. Reduced IT operational costs, and high-quality content accessible anytime and anywhere, will soon trigger global adoption of cloud solutions across the industry.

Srijan is an AWS Advanced Consulting Partner. Contact us today to discuss how our AWS trained professionals can help you in migrating your media and entertainment-based apps to AWS.

Topics: AWS, Cloud, Media & Publishing

How to conduct AWS cost optimization of your workload

Posted by Gaurav Mishra on Jul 30, 2019 12:04:00 PM

Your enterprise operates on the consumption-based model of AWS, but is your set up cost fully optimized? Are you able to best utilize your resources, achieve an outcome at the lowest possible price point, and meet your functional requirements?

If not, you are underutilizing the capabilities of your AWS cloud.

AWS offers several services and pricing options that give your enterprise the flexibility to manage costs while keeping performance on par. And while it is relatively easy to optimize costs in small environments, scaling successfully across a large enterprise requires certain operational best practices and process automation.

Here’s taking a look at the six AWS cost optimization pillars to follow regardless of your workload or architecture:

Right size your services

AWS gives you the flexibility to adapt your services to meet your current business requirements. It also allows you to shift to new service options as your demands change, addressing new business needs anytime, without penalties or incidental fees.

Thus, through right sizing, you can:

  • use the lowest cost resource that still meets the technical specifications of a specific workload

  • adjust the size of your resources to optimize for costs

  • meet the exact capacity requirements you have without having to overprovision or compromise capacity. This allows you to optimize your AWS workload costs.

Amazon CloudWatch and Amazon CloudWatch Logs are key AWS services that support a right-sizing approach, and allow you to set up monitoring in order to understand your resource utilization.
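As a toy illustration of the right-sizing idea, the function below maps a CloudWatch-style average CPU utilization figure to an instance-size recommendation. The thresholds and the t3 size ladder are assumptions for illustration, not AWS guidance.

```python
def rightsize(avg_cpu_percent: float, current_type: str) -> str:
    """Toy right-sizing rule: recommend a smaller instance when average
    CPU (as reported by CloudWatch) stays low, and a larger one when it
    is sustained high. Thresholds and size ladder are illustrative."""
    ladder = ["t3.small", "t3.medium", "t3.large", "t3.xlarge"]
    i = ladder.index(current_type)
    if avg_cpu_percent < 20 and i > 0:
        return ladder[i - 1]          # underutilized: step down
    if avg_cpu_percent > 80 and i < len(ladder) - 1:
        return ladder[i + 1]          # saturated: step up
    return current_type               # utilization is healthy: keep as-is

print(rightsize(12.0, "t3.large"))    # underutilized instance
print(rightsize(91.0, "t3.large"))    # saturated instance
```

In practice the input would come from a CloudWatch `CPUUtilization` metric averaged over a representative window, rather than a single number.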

Appropriately provisioned 

AWS Cloud gives you the ability to modify the attributes of your AWS managed services, in order to ensure there is sufficient capacity to meet your needs. You can turn off resources when they are not being used, and provision systems based on the requirements of your service capacity.

As a result, your excess capacity is kept to a minimum and performance is maximized for end users. This also helps optimize costs to meet your dynamic needs.

AWS Trusted Advisor helps monitor services such as Amazon Redshift and Amazon RDS for resource utilization and active connections, while the AWS Management Console can modify attributes of AWS services and help align resource needs with changing demand. Amazon CloudWatch is also a key AWS service that supports an appropriately provisioned approach, by enabling you to collect and track usage metrics.

Leverage the right pricing model

AWS provides a range of pricing models: On-Demand and Spot Instances for variable workloads, and Reserved Instances for predictable workloads. You can choose the right pricing model as per the nature of your workload to optimize your costs.

1. On Demand Instances

With On-Demand Instances, you pay for compute capacity by the hour or the second, depending on which instances you run. No long-term commitments or upfront payments are needed. These instances are recommended for applications with short-term, spiky, or unpredictable workloads that cannot be interrupted.

For example, with a resource like DynamoDB in on-demand mode, you pay only for the capacity you actually use, with no long-term commitments.

2. Spot Instances

A Spot Instance is unused EC2 capacity that you can bid for. Once your bid exceeds the current spot price (which fluctuates in real time based on supply and demand), the instance is launched. The instance can be reclaimed whenever the spot price rises above your bid price.

Spot Instances are available at a steep discount, and using them can lower your operating costs by up to 90% compared to On-Demand Instances. They are ideal for use cases like batch processing, scientific research, image or video processing, financial analysis, and testing.
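The arithmetic behind that headline number is simple. The helper below compares the cost of an interruptible batch job on Spot versus On-Demand; the hourly rates and job length are hypothetical, not current AWS prices.

```python
def spot_savings(on_demand_hourly: float, spot_hourly: float, hours: int):
    """Cost of a job on Spot vs On-Demand, plus percentage saved.
    All rates are hypothetical illustrations."""
    od = on_demand_hourly * hours
    sp = spot_hourly * hours
    pct = 100 * (od - sp) / od
    return od, sp, round(pct, 1)

# e.g. a 100-hour rendering job at a 90% spot discount
od, sp, pct = spot_savings(0.40, 0.04, 100)
print(f"On-Demand: ${od:.2f}  Spot: ${sp:.2f}  Savings: {pct}%")
```

The catch, of course, is that the job must tolerate interruption, which is why Spot suits batch and rendering workloads rather than user-facing services.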

3. Reserved Instances

Reserved Instances let you commit to a period of usage (one or three years) and save up to 75% over equivalent On-Demand hourly rates. For applications with predictable usage, they provide significantly more savings than On-Demand Instances, without requiring any change to your workload.
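A quick way to reason about when a Reserved Instance pays off is its break-even utilization: since an RI is billed for every hour of the term, it beats On-Demand once the instance runs more than the discount ratio of the time. The rates below are made up for illustration.

```python
def reserved_breakeven(on_demand_hourly: float, reserved_hourly: float) -> float:
    """Fraction of the term an instance must actually run for a Reserved
    Instance (billed for every hour of the term) to beat On-Demand.
    Hourly rates are hypothetical."""
    return reserved_hourly / on_demand_hourly

# e.g. a 60% RI discount: the RI wins once utilization exceeds 40%
be = reserved_breakeven(0.10, 0.04)
print(f"Break-even utilization: {be:.0%}")
```

Below that utilization, paying the higher On-Demand rate only for the hours used is cheaper.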

AWS Cost Explorer is a free tool to analyze your costs: it helps you identify your spend on AWS resources, flag areas that need further analysis, and see trends that provide a better understanding of your costs.

Geographic selection

Another best practice is to place your computing resources close to your users. This ensures lower latency and stronger data sovereignty, and minimizes your costs.

Every AWS region operates within local market conditions, with resource pricing different for each region. It is up to you to make the right geographic selection so that you can run at the lowest possible price globally.

AWS Simple Monthly Calculator can help you estimate the cost to architect your solution in various regions around the world and compare the cost of each. Simultaneously, using AWS CloudFormation or AWS CodeDeploy can help you provision a proof of concept environment in different regions, run workloads, and analyze the exact and complete system costs for each region.
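The comparison those tools produce boils down to multiplying per-region hourly rates by expected usage and picking the minimum. The sketch below does exactly that; the region names are real AWS regions, but the rates are invented for illustration and would come from the AWS pricing pages in practice.

```python
def cheapest_region(hourly_rates: dict, hours: int = 730) -> tuple:
    """Pick the lowest-cost region for a steady workload, given a
    mapping of region -> hourly rate (illustrative figures only).
    730 approximates the hours in a month."""
    region, rate = min(hourly_rates.items(), key=lambda kv: kv[1])
    return region, round(rate * hours, 2)

# Hypothetical per-hour rates for the same instance type:
rates = {"us-east-1": 0.096, "eu-west-1": 0.107, "ap-south-1": 0.101}
print(cheapest_region(rates))
```

Real comparisons should also weigh latency to users and data-sovereignty constraints, not price alone, as the section above notes.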

Managed services

Using AWS managed services not only removes much of your administrative and operational overhead, but also reduces the cost of managing your infrastructure. Because they operate at cloud scale, the cost per transaction or service is lowered efficiently. Managed services also help you save on licensing costs.

AWS database services such as Amazon RDS, Amazon DynamoDB, Amazon Elasticsearch Service, and Amazon EMR are some of the key AWS services that support a managed approach. These services reduce the cost of capabilities and free up time for your developers and administrators.

Optimize data transfer

Lastly, architecting for optimized data transfer can help you reduce costs. This involves using content delivery networks to locate data closer to users (as Amazon CloudFront does), or using dedicated network links from your premises to AWS (as AWS Direct Connect does).

Using AWS Direct Connect can reduce network costs, increase bandwidth, and provide a more consistent network experience than internet-based connections.

Starting with these best practices early in your journey will help you establish the right processes and ensure success when you hit scale.

AWS provides a set of cost management tools out of the box to help you manage, monitor, and, ultimately, optimize your costs. Srijan is an AWS Advanced Consulting Partner, with AWS certified teams that have experience working with a range of AWS products and delivering cost-effective solutions to global enterprises.

Ready to build cloud-native applications with AWS? Just drop us a line and our expert team will be in touch.

Topics: AWS, Cloud

Why should enterprises go for a hybrid cloud strategy?

Posted by Sriram Sitaraman on Jul 24, 2019 11:23:00 AM

Enterprises migrating to the cloud are often faced with the dilemma of choosing between public or private cloud. The right way forward is to choose the one that best suits your organization's workloads, and that could be public, private or even a mix of both.


According to the RightScale State of the Cloud Report, 2019:

  • 58% of respondents stated that hybrid cloud is their preferred approach
  • 17% opted for multiple public clouds
  • Just 10% opted for a single public cloud provider.

So what exactly is a hybrid cloud?

It is the combination of both public and private cloud solutions, where the resources are typically orchestrated as an integrated infrastructure environment. This allows the movement of app and data workloads between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

We take a look at why hybrid cloud is often considered the best strategy, and when enterprises should go for it.

Why go for a Hybrid Cloud strategy

A hybrid cloud strategy brings both public and private cloud environments together, giving enterprises the benefits of combining the best of both worlds:

Public Cloud
The presence of a public cloud environment ensures:

  • Cost-effectiveness and scalability, which are often a primary reason for deploying the cloud
  • Instant provisioning for compute and storage resources on demand, making it easy to handle rapid and seasonal growth
  • Delivery of AI powered services exclusively through the cloud

Private Cloud

A private cloud environment offers:

  • Quick delivery of information, using an on-premise server. Thus a disruption to internet connectivity does not bring business operations to a standstill
  • High levels of customization, and easier compliance with regulations.

The combined deployment of these approaches in a hybrid cloud provides extended benefits to enterprises:


A hybrid cloud solution offers each business the flexibility to choose what fits its needs in any specific scenario. So, depending on their requirements, businesses can deploy the best-in-class hardware, software, or services for their use.

Additionally, since there are no vendor lock-in constraints, businesses don’t need to compromise on the quality of their IT solutions. Further, the hybrid approach allows for flexible, policy-driven deployment that distributes workloads across public and private infrastructure environments based on security, performance, and cost requirements.
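Such a placement policy can be sketched in a few lines. The rule below is a deliberately simplified illustration of policy-driven deployment, not a prescription: sensitive or latency-critical workloads stay private, everything else runs on cheaper public capacity.

```python
def place_workload(sensitive: bool, latency_critical: bool) -> str:
    """Toy policy-driven placement for a hybrid cloud: keep sensitive or
    latency-critical workloads on the private side; let everything else
    use the public cloud. Purely illustrative."""
    return "private" if sensitive or latency_critical else "public"

print(place_workload(sensitive=True, latency_critical=False))   # e.g. customer PII store
print(place_workload(sensitive=False, latency_critical=False))  # e.g. batch analytics
```

A production policy engine would also weigh cost, compliance regime, and data-gravity considerations per workload.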


For scalability in services or accounts, a public cloud solution is the best route. But this brings the inherent risk of exposing sensitive IT workloads on inexpensive public cloud infrastructure.

The private cloud component on the other hand, provides the lowest latency and the best security for sensitive information.

A hybrid cloud strategy thus comes to the rescue, providing the scalability of public cloud, and security of the private cloud environment.


The use of a private cloud infrastructure provides dedicated resources to IT workloads, improving their security status. A hybrid approach thus addresses the needs of businesses who not only require the fast and flexible development options that cloud-based solutions provide, but also the security and control of keeping certain solutions on-premises.

Cost Savings

According to reports, 64% of enterprises report cost savings to be a major objective of the cloud program in 2019. However, respondents also state that 27% of their cloud spend is wasted.

A hybrid cloud approach can bring in cost savings that accrue in both the public and private parts of the setup.

With a private cloud, costs are upfront and predictable rather than pay-as-you-go: you purchase hardware, networking infrastructure, and software licenses as needed, lay out costs for staff to maintain and manage the entire solution, provide adequate security, and pay for the power to run and cool the system.

Public cloud deployment lets you provision cost-effective IT resources during demand spikes, without incurring capital expenses and labor costs. IT professionals can help determine the best configuration, service provider, and location for each service, matching resources to demand while delivering cost savings.

So with a hybrid cloud strategy, you can run steady workloads in an optimized private cloud and use the public cloud only for extra capacity during peak demand seasons, saving costs on both sides.

Hybrid cloud enables you to choose the best IT infrastructure for your specific needs. And by combining the two cloud environments, you will be able to run your business more efficiently, while also delivering a variety of products and services to customers. This brings high reliability, better customer engagement, and ability for businesses to diversify their spend and skills by picking vendors as per their capabilities, and not just to avoid vendor lock-ins.

When to go for Hybrid Cloud

Despite the advantages that a hybrid cloud brings, it may not be the best choice for every business. For example, smaller organizations with a strict IT budget will usually be best served by a purely public cloud approach, which is more economical than the upfront costs of private servers, maintenance, and time investment. In such cases, it doesn’t make sense to go for a hybrid cloud strategy.

Other limitations of the hybrid cloud include:

  • Hybrid clouds create a larger attack surface, and data traversing between cloud networks is susceptible to security risks
  • Data shuffling back and forth between the private and public parts of a hybrid cloud can introduce latency, so latency-sensitive applications should avoid it
  • There is some complexity involved in setting up and managing an efficient hybrid cloud, as there is no one right way to interconnect the public and private cloud environments
  • Applications that require very high throughput are generally not suitable for hybrid cloud

Thus, between public, private, and hybrid clouds, it is often a choice between what an organization needs and what its limitations are. Hybrid clouds are considered most suitable when:

  • Organizations serving multiple verticals face different IT security, regulatory and performance requirements
  • Businesses are willing to optimize their cloud investments without compromising on the value proposition of either public or private cloud technologies
  • Staff have a good understanding of IT workloads and their essential characteristics that can make the complex hybrid solution work
  • Securing their cloud solutions is a priority, and businesses are willing to deploy secure private networks

If your organization is willing to make such changes and optimal investments in cloud solutions, hybrid cloud is the right strategy for you. A custom cloud solution, designed with your organization’s history, current state, and future in mind, provides a more practical approach to addressing current needs while adapting quickly to changing business demands.

Hybrid cloud can act as a catalyst for digital transformation, saving you from a “one size fits all” approach to cloud infrastructure. Businesses willing to adapt quickly to changing market demands will stay ahead of the curve.

Srijan is aiding global enterprises in their platform modernization process with highly nuanced cloud migration solutions. As an AWS Advanced Consulting Partner with certified teams, Srijan has extensive experience in working with AWS cloud solutions, along with building cloud-native applications.

Ready to explore a hybrid cloud strategy for your enterprise? Just drop us a line and our expert team will be in touch.

Topics: Cloud

4 Advantages to building cloud native applications with AWS

Posted by Gaurav Mishra on Jul 16, 2019 11:18:00 AM

The State of Cloud Native Security report 2018 states that 62% of enterprises today choose cloud-native applications for more than half of their new applications, and this number is set to grow over the next three years. This is no surprise, given that most organizations are already heavily invested in their chosen cloud platform and would like to use it to its full potential.

Cloud-native applications are essentially those created specifically for the cloud and designed to leverage the gamut of resources available on the cloud. Being ‘cloud-native’ means that an application has a vast operational landscape, capable of being available from wherever the cloud is instead of being tied down to a physical location. 

The three defining characteristics for cloud native applications are:

  • Built with a microservices-based architecture
  • Containerized development
  • Dynamic orchestration of network and database resources


Besides this, agile development methodologies and the CI/CD approach are also common to most cloud-native applications.

The current leaders in cloud services - Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) - offer a whole host of services to enable the creation of cloud native applications. However, AWS is one of the top-performing providers when it comes to cloud infrastructure as a service (IaaS), both in terms of critical analysis, as shown by the 2018 Gartner Magic Quadrant for Cloud IaaS, and customer preference, as seen in Gartner Peer Insights.

AWS is an enterprise favourite on the strength of its global infrastructure network and exhaustive range of serverless computing, storage and database solutions. Supporting giants like Netflix, Airbnb, and Comcast, AWS brings in a set of significant advantages for enterprises creating cloud-native applications.

Here’s a look.


Cost-Effectiveness

An estimated 30% of on-premise server capacity sits idle at any given time, yet organizations continuously spend money on its upkeep and maintenance. With cloud-native applications, this expenditure on unused resources is eliminated.

AWS offers dynamic scaling, allowing you to increase or decrease resource consumption based on application usage. Tools like Auto Scaling and Elastic Load Balancing help manage consumption of resources, ensuring that the massive AWS infrastructure is available to you on demand. 

But what makes this cost-effective is AWS’ pay-as-you-go model for all its cloud services, whether infrastructure, platform, or software. You pay only for the resources you use, and for the time frame you use them. This results in a massive reduction in cloud expenditure, as you no longer have to maintain idle resources in anticipation of a surge.

There are also secondary cost savings generated with cloud-native applications in the form of multitenancy. Several different apps can dynamically share platform resources leading to reduced expenditure.
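The idle-capacity figure above translates directly into wasted spend. The helper below quantifies it; the server count and monthly cost are hypothetical, while the 30% idle fraction is the article’s own figure.

```python
def idle_waste(server_count: int, monthly_cost_per_server: float,
               idle_fraction: float = 0.30) -> float:
    """Monthly spend on idle on-premise capacity, using the ~30% idle
    figure cited above. Server counts and costs are hypothetical."""
    return server_count * monthly_cost_per_server * idle_fraction

# e.g. 100 servers at a hypothetical $250/month each
print(f"Wasted per month: ${idle_waste(100, 250.0):,.2f}")
```

Under pay-as-you-go, that slice of the bill simply disappears, since unused capacity is never provisioned.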


Flexibility of Choice

Kicking off cloud-native applications is in itself a huge paradigm shift for an organization, in terms of how it functions and how application development takes place. On top of that, if your chosen cloud platform also calls for unfamiliar operating systems, languages, or databases, things can get complicated very fast, not to mention the added cost of training your team on these new elements.

However, going cloud-native on AWS comes with a lot of flexibility as you can choose any combination of operating system, programming language, web application platform, database, and other services, as per your business needs. So your teams have the advantage of working with the technology tools that they are comfortable with, leaving them more time to focus on building better applications.

Besides that, the AWS platform is easy to use for your application development teams, with well documented resources and APIs, and the AWS Management Console. Once again, this gives a smooth learning curve for your teams, enabling you to start creating cloud-native apps in no time.

No Strings Attached

While AWS does have long-term contracts with several enterprises, all its solutions for serverless applications operate on a pay-as-you-go basis. There is no minimum usage requirement or fixed usage period, with charges accruing on a per-hour basis. So if you want to stop using AWS services, you can do so immediately: without a lock-in period, your AWS billing stops at once and you are free to move on to other solutions.


Reliability and Resilience

One of the key reasons cloud-native applications are superior to applications simply migrated to the cloud is that they are built as containerized microservices. This means that:

  • Different business functions are built into independent microservices, only loosely coupled with each other, and failure in one does not cause failure of the whole application
  • The application as a whole, or even parts of it, can be easily moved around because the containers are designed to be self-sufficient and will work uninterrupted, no matter where they are hosted


This is what makes cloud-native applications more reliable and resilient. Whether a particular part of an application fails or an entire server region goes down, the applications will continue to function.

This reliability is further strengthened when backed by AWS’ global infrastructure. AWS Availability Zones (AZs) currently span five continents, with multiple isolated server locations. Each AZ is designed with physical redundancies to ensure uninterrupted performance. Even if an entire AZ goes down, AWS systems ensure that your cloud-native applications can seamlessly move to the next unaffected location.

Besides this, AWS has a wide network of technology partners that can help enterprises build resilient cloud-native applications. Because AWS Partners go through a stringent certification and verification process, you can rest assured that they bring the best experience and expertise to your application development process.

Cloud-native applications give enterprises the ability to get to market faster and offer improved customer experiences. Consequently, they gain a competitive advantage that is hard to beat with applications merely migrated to the cloud. And there seems to be no better cloud IaaS provider for your applications than AWS.

Srijan is an AWS Advanced Consulting Partner, with AWS certified teams that have experience working with a range of AWS products and delivering solutions to global enterprises. From chatbots with Amazon Lex to creating an enterprise decision analytics platform, our teams have in-depth expertise in solving enterprise challenges.

Ready to build cloud-native applications with AWS? Just drop us a line and our expert team will be in touch.

Topics: AWS, Cloud

Understanding Cloud Native Applications - What, Why, How

Posted by Nilanjana on Jul 10, 2019 1:23:00 PM

What are Cloud Native Applications?

Cloud native applications are those designed to optimally leverage the benefits of the cloud computing delivery model. These applications live in the cloud, not in an on-premise data centre. However, merely running on the cloud does not make an application ‘cloud native’: the term refers to a fundamental change in how applications are developed and deployed, not just where they are hosted.

Cloud native applications are best described by a set of key characteristics that differentiate them from traditional applications:

  • Microservices architecture: They are built as a collection of loosely coupled services that handle different functions of the applications. Using the microservices architecture instead of the monolithic approach is what gives cloud native applications much of their speed and scalability.

  • 12-factor applications: This refers to a set of 12 design principles laid out by Heroku's co-founder to help create applications that are well suited for the cloud. These include defined practices around version control, environment configuration, isolated dependencies, executing apps as stateless processes, and more.
  • Platform-as-a-Service: Because cloud native apps run on microservices, which can number in the hundreds for a given application, provisioning new environments for each service in the traditional way is time and resource intensive. Using Platform-as-a-Service (PaaS) simplifies this process and can handle rapid provisioning for numerous microservice instances. This is also key to ensuring the scalability of cloud native applications.
  • API-based: Independent microservices in a cloud native application communicate via API calls. This preserves their loosely coupled nature and keeps the application fast and scalable.
  • Robust: Cloud native applications are robust, with minimal to zero downtime. Once again the microservices architecture, coupled with being on a highly available cloud environment, makes this possible.
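As a tiny illustration of the 12-factor principles listed above, consider the factor that says configuration belongs in the environment, not in code. The sketch below shows the idea in Python; the variable names and default values are purely illustrative, not from any real service.

```python
import os

def load_config(env=os.environ):
    """Read service configuration from the environment (12-factor, factor III).

    The keys and defaults here are illustrative placeholders.
    """
    return {
        "db_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(env.get("PORT", "8080")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# The same build runs unchanged in every environment; only the env vars differ.
prod = load_config({"DATABASE_URL": "postgres://db.internal/prod", "PORT": "80"})
print(prod["port"])  # 80
```

Because the code never hard-codes environment-specific values, the same artifact can be promoted from dev to staging to production untouched, which is exactly what makes cloud native deployment pipelines fast.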

Why go for Cloud Native Applications?

The manner in which cloud native applications are developed brings with it a distinct set of advantages for enterprises. These are:


Faster Time-to-Market

In a disruption-heavy market, the time-to-market for new products and services is crucial to success. Reaching potential customers before your competitors means achieving a faster go-to-market, and that's possible with cloud native applications. The microservices architecture makes them easier to develop, test, and deploy than monolithic applications. 

These applications also work with smaller but more frequent release cycles, that are easily reversible. So you can constantly introduce new features, functions and bug fixes for your applications, while also having the option of quick rollbacks if needed. 

Finally, with independent microservices, updates to a service need not be integrated with the code of the rest of the services. With the integration time eliminated, new functionalities can be quickly rolled out for these applications.


Scalability

The microservices architecture makes cloud native applications extremely scalable, because each microservice handles a specific function within an application. When demand increases, the application can be scaled by creating more instances of only those services needed to handle that demand. And provisioning new instances of a microservice takes seconds, because the application is based on the PaaS model. 

Besides this, with cloud providers like AWS you get auto-scaling and elastic load balancing solutions that make it easier to dynamically scale resource utilization for cloud native applications.

Cost Efficiency

For monolithic applications, scaling to meet new demand involves creating a new instance of the entire monolith, and that is both a time and resource intensive process. It also means paying for more hardware resources in the cloud, even though the actual demand spike is only for a limited set of features.

With cloud native applications, scaling means increasing instances for only specific microservices. And that saves money as it eliminates the need to consume resources that will not be utilized. Also, it’s easy to turn off your consumption of extra resources once the spike in demand subsides.

There are also secondary cost savings generated with cloud native apps, in the form of multitenancy. Several different microservices can dynamically share platform resources leading to reduced expenditure.


High Availability

Cloud native applications are highly available, again because of their microservices architecture. This works at two levels:

  • If one service goes down, the rest of the application continues to be available. This is because the application is designed with failsafes, and can always provision another working instance of the failed microservice.
  • The containerized nature of microservices means that they are packaged with their runtime environment. This makes them self-sufficient and able to work uninterrupted, no matter where they are hosted. So if an entire availability region of your cloud goes down, the application can simply be moved to a different region, and it will continue to be available, with your users none the wiser. 

How to get started with Cloud Native Applications?

Building cloud native applications involves a large scale change in how applications are developed and deployed within the organization. So getting started with it will require some preparation on the part of the enterprise. 

Some of the key aspects to consider would be:

Create your enterprise strategy

The shift to cloud native applications is considered because it serves specific business goals: creating new products and services, gaining new market share, or increasing revenues. And these business goals are what should be kept front and center while creating your strategy for going cloud native. 

This will also help you avoid the trap of going down the technology-first route. Yes, cloud native applications will involve the use of new technology - languages, frameworks, platforms - by your team. But deciding to first lock down the technology aspects can be disastrous. That’s because the technology you choose should be able to serve your business goals. And if you haven’t figured those out first, the initiative will not be successful or sustainable.

So a good order of priority here is identifying:

  • Business goals to achieve with going cloud native
  • Right teams that can lead this, both in-house and as partners/vendors
  • Technology solutions that best suit your requirements

Transition away from the monolithic application

If you are working with a fairly complex monolithic application that has been put together over time, resist the temptation of a simple lift-and-shift to the cloud. Because of the tight coupling and the myriad dependencies that have developed over the years, it’s unlikely the monolith will run well on the cloud. So you need to plan for breaking down the monolith into constituent services that can be shifted to the cloud.

Moving towards a microservices architecture can seem daunting at first, because you are dealing with hundreds of different services instead of a single one. However, with practices like event sourcing, deployment with Docker, and a host of other design guidelines for building an optimal microservices architecture, the process can be well understood and executed. 

CI/CD approach

Adopting a continuous integration/continuous delivery (CI/CD) approach is key to leveraging the speed benefits of cloud native applications. A system for rapidly developing and testing new features and pushing them out for use, as well as breaking down the traditional software development team silos, is crucial for cloud native applications. Frequent, well-tested releases help keep your cloud native application updated and allow for continuous improvement.

So that was a quick look at understanding cloud native applications, their advantages, and where to get started. Moving forward, you would also need to identify your cloud platform of choice, and our take on building cloud native applications with AWS might be helpful.

Srijan is assisting enterprises in modernizing applications with microservices architecture, primarily leveraging Docker and Kubernetes. Srijan is also an AWS Advanced Consulting Partner, with AWS certified teams that have the experience of working with a range of AWS products and delivering cost-effective solutions to global enterprises.

Ready to modernize your application architecture with microservices? Just drop us a line and our expert team will be in touch.

Topics: Microservices, Cloud, Architecture

Leveraging AWS Solutions to solve High-Value Enterprise Challenges

Posted by Sanjay Rohila on Mar 29, 2019 4:31:00 PM

The AWS ecosystem is an invaluable asset for enterprises driving their digital transformation. While the AWS Cloud infrastructure powers a huge slice of enterprises, there are several other AWS solutions, especially in the realm of computation and machine learning, that are enabling enterprises to leverage emerging technologies.

Here’s a look at some interesting projects and PoCs that Srijan has delivered for enterprise clients, using AWS solutions.

Chatbots powered by Amazon Lex and AWS Lambda

As a leading provider of intelligent cleaning solutions, the client wanted to be able to analyze and optimize the performance of their products. They had a set of data visualization dashboards that track this data in real time. However, these were not easily accessible and involved some effort before stakeholders could extract relevant insights.

The solution was to build enterprise chatbots that could deliver the same insights without taking up too much time or effort on the part of the client stakeholders. They could just type their query into the chatbot, and receive an appropriate response.

Srijan leveraged Amazon Lex as the core conversational interface framework to design the chatbot. Lex's deep learning functionalities enabled the chatbot to identify the intent behind a particular question, understand the context, and return an appropriate response.

The other key solution used was AWS Lambda, which handled the backend extraction of data from the client databases, and the computation to generate the correct response. The business logic defined atop Lambda determined how the raw data from various sources would be interpreted and presented to the user as a final answer.

Other AWS services used were:

  • Amazon Cognito for user authentication
  • Amazon Translate to ensure the chatbot could be used by client stakeholders in any location
  • Amazon S3 to store relevant asset images and performance graphs, accessible solely to chatbot users



Video Analytics powered by Amazon SageMaker

The cleaning solutions enterprise was also receiving increasing complaints that their floor cleaning machines were not performing as expected. The client wanted detailed logs of machine performance across all locations, so as to validate or refute these customer claims, and prevent unwarranted expenditure on recalls and repairs.

Srijan proposed a video analytics algorithm capable of identifying the machine and verifying its performance at given locations. The approach was focused on recording real-time footage of the machines operating at different customer locations, and then automatically analyzing the video feed to identify and verify whether the machines were performing as expected.

This was achieved with a deep learning model designed to analyze video feed data. The key objective of the model, built on a convolutional neural network, was to accurately identify the machine in a video stream at 5-second intervals. These sightings were then timestamped and put together in a JSON file, creating a continuous log of whether a machine was working in any given location.
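The timestamping-and-logging step described above is straightforward to picture in code. Below is a minimal sketch in Python; the output schema (field names, start time) is a hypothetical illustration, not the actual format used in the project.

```python
import json
import datetime

def build_detection_log(detections, interval_seconds=5):
    """Turn per-interval detection flags into a timestamped JSON log.

    `detections` is a list of booleans: was the machine seen in each
    5-second window of the video feed? (Illustrative format; the real
    model output schema is an assumption.)
    """
    start = datetime.datetime(2019, 1, 1, 9, 0, 0)  # placeholder start time
    log = []
    for i, seen in enumerate(detections):
        log.append({
            "timestamp": (start + datetime.timedelta(seconds=i * interval_seconds)).isoformat(),
            "machine_detected": seen,
        })
    return json.dumps(log)

print(build_detection_log([True, True, False]))
```

A log in this shape can be aggregated per location to answer the client's core question: was the machine actually running when the customer said it wasn't?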

Amazon SageMaker was the core solution used for this model. As a managed platform, it allowed for:

  • Creating the deep learning algorithm, with TensorFlow
  • Data augmentation and training the algorithm to accurately recognize the machines in a video stream
  • Quick and efficient scaling of training data to create a more accurate machine learning model


Once the model was in place, Srijan used Amazon S3 and AWS Lambda to create the workflow for collecting video feed from various customer locations, analyzing them, and creating detailed logs of machine performance.


Enterprise Data Analytics Platform with AWS Lambda

OnCorps offers a PaaS for insight into enterprise data, helping enterprises make better decisions using predictive analytics, machine learning and peer comparison. They wanted to create a platform that could do a lot of the heavy lifting when it came to data: from gathering, to processing, to analytics and visualization.

While the platform was built on Drupal, Srijan leveraged a host of AWS solutions to deliver some powerful functionalities:

Amazon EC2: This offered an easily scalable and cost-effective computation solution. It gave the ability to run data analysis and compute workloads to aggregate data, as well as deliver predictive insights.

AWS Lambda: The frontend interface of the platform needed structured data to work with, preferably in JSON format. Lambda was used to transform the data coming in from various sources into a standard format.
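As a rough illustration of that kind of transformation step, a Lambda handler might normalize differently-shaped source records into one JSON format. The event shape and field names below are hypothetical, not OnCorps' actual schema.

```python
import json

def handler(event, context):
    """Normalize incoming records from different sources into one JSON shape.

    Hypothetical example: each upstream source names its fields differently,
    so the handler maps them onto a single canonical structure.
    """
    normalized = []
    for record in event.get("records", []):
        normalized.append({
            "id": record.get("id") or record.get("record_id"),
            "value": float(record.get("value") or record.get("amount") or 0),
            "source": record.get("source", "unknown"),
        })
    return {"statusCode": 200, "body": json.dumps(normalized)}

# Invoked directly for illustration; on AWS, Lambda supplies event and context.
result = handler({"records": [{"record_id": "a1", "amount": "42"}]}, None)
print(result["body"])
```

Keeping the normalization in a stateless function like this is what makes Lambda a good fit: each incoming batch is independent, so the platform can fan out as many concurrent invocations as the data volume demands.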

Amazon S3: This was used to host the single page application built on AngularJS. S3 was also used as storage for all files and content assets for the platform.

AWS Cost Explorer: One of the Srijan team’s primary objectives was to keep product development costs on track. AWS Cost Explorer was used to get a clear visualization of operation costs across all solutions, and optimize the budget as much as possible.

With these solutions in place, OnCorps was able to roll out a scalable platform with >99% performance reliability across enterprise customers.


Cloud Native Storage with Amazon S3

TheRecordExchange (TRX) offers a SaaS-based workflow management application to capture and access recordings of legal proceedings. Their business model is dependent upon frictionless uploading and downloading of text and media files. The application frontend is designed so that transcription agents can easily upload files of legal proceedings, and customers can download them at will.

Given this, they needed a cloud-based server that could efficiently manage all file-related requests on the applications, and robustly render them on the frontend.

With Amazon S3, an object storage solution, Srijan was able to deliver flexible cloud-native storage for TRX. S3 enabled:

  • Addition of any number of files to the application, without worrying about capacity constraints. Since the application layer didn't have to handle file processing, it was lighter and delivered a better user experience.
  • Dynamic spacing, which allowed TRX to scale up or scale down space usage as and when required. With no minimum usage requirements and availability of on-demand usage, S3 proved to be a highly cost-effective solution for the client.


Srijan is an Advanced Consulting Partner for Amazon Web Services (AWS). It is currently working with enterprises across media, travel, retail, technology and telecom to drive their digital transformation, leveraging a host of AWS solutions.

Looking for an experienced AWS certified team to aid your digital growth strategy? Just drop us a line and our team will get in touch.

Topics: AWS, Cloud, Enterprises

IoT implementation and real-time data monitoring with cloud - a demo

Posted by Aasim Mazahir Khan on Sep 21, 2018 11:56:00 AM

IoT implementation and real-time sensor data monitoring with the cloud.

IoT is the next frontier of enterprise connectivity. From manufacturing to healthcare to smart cities, IoT is bringing about a sea change in how we gather, analyze, and otherwise work with data.

Let's take a look at the official definition provided by Wikipedia.

"The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity which enables these things to connect and exchange data, creating opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertions."

In simple words, an IoT device is a micro-controller connected to a sensor, with the capability to communicate over the internet.

What’s the business value of IoT?

The number of IoT devices increased 31% year-over-year to 8.4 billion in 2017 and it is estimated that there will be 30 billion devices by 2020. The global market value of IoT is projected to reach $7.1 trillion by 2020.

Industries where IoT is being widely deployed:

  • Wearables
  • Home automation
  • Robotics
  • Automobile
  • Healthcare
  • Agriculture
  • E-Commerce

Are you happy with Amazon's same day delivery? Say thanks to IoT devices. Yes, Amazon is using IoT integrated machines for their "Smart Warehouses".

Here's one more -

A global cleaning solutions company introduced its intelligent laundry dispensers. The system provides real-time and remote visibility into resource consumption and process automation. Energy and water consumption, chemical accuracy, machine downtime and day-to-day operational costs can be closely monitored, to prevent costly disruptions and errors within laundry operations, by using IoT devices in their machines. The IoT powered dispenser connects to any mobile device to give teams visibility of the entire wash process together with actionable insights.

So, an IoT device acts as an agent embedded in a physical process, gathering relevant data. This information is then leveraged to ensure smooth operations, enable predictive maintenance, and generate data-driven decisions aimed at enhancing and evolving a business process.

How do IoT devices communicate?

Here’s a look at the normal communication models for IoT:

  • Device-to-Device Communications
  • Device-to-Cloud Communications
  • Device-to-Gateway Mode
  • Back-End Data-Sharing Model

In essence, all IoT communication models are designed to give users better access to an IoT device and its data. The device, and the data it collects, is clearly a valuable resource. The cloud is the most popular platform for utilizing this data in real time. Besides this, data stored on the cloud opens up possibilities like API integration, database integration, notification alerts and more.

In this piece, I am going to demonstrate a Device-to-Cloud communication model using AWS and Python.

Here’s a look at the basic concept involved in this communication model.

[Diagram: Device-to-Cloud communication model]

I created a small web application which shows real-time data from the IoT device in the web browser, using the AWS cloud.

Let’s dive in.


For this demonstration, we require an IoT device with:

  • a microcontroller with a WiFi module
  • general-purpose I/O pins
  • battery or USB power

You might be thinking of the Raspberry Pi at this point, but its specifications are a bit of an overkill for this demo, since almost none of its features would be utilized. Here's why:

Features of the Raspberry Pi 3, and whether we need them:

  • CPU: 1.2GHz 64-bit quad-core ARMv8 - more than what we need
  • RAM: 1 GB - more than what we need
  • WiFi 802.11 b/g/n - required
  • Bluetooth 4.1 - not required
  • 4 USB ports - not required
  • 40 GPIO pins - not required
  • Full HDMI port - not required
  • Combined 3.5mm audio jack - not required
  • Camera interface - not required
  • Display interface - not required
  • Micro SD card slot - not required
  • VideoCore IV 3D graphics core - not required

What I used instead was the NodeMCU, with the following specifications:

  • CPU - ESP8266(LX106)
  • Memory - 128KB
  • Operating System - XTOS
  • Storage - 4MB
  • WiFi 802.11 b/g/n
  • 30 GPIO pins

Honestly, choosing an IoT device depends entirely on your specific requirements. It's definitely not a Raspberry Pi vs NodeMCU vs Arduino war; there are many other IoT devices available on the market to work with.

For our requirements, though, the NodeMCU module is the best fit. Now, let's dive into the hardware and software part of this project.

Hardware used:

  • NodeMCU (ESP8266)
  • DHT11 temperature and humidity sensor

Software/Framework used:

  • Arduino IDE
  • Python Flask

Let's build

Step 1

Connect the DHT11 to the NodeMCU. Pin connections of the DHT11:

  • '+' - 3v3 (Voltage)
  • '-' - GND (Ground)
  • 'out' - D2 (Data)

Step 2

Connect the NodeMCU to your computer via a micro USB cable and set up the Arduino IDE.


Note that the port may vary on different operating systems, so find out which port your device is connected to.

Step 3

  • Install the dependent libraries in the Arduino IDE
  • Sketch -> Include Library -> Manage Libraries
  • Search for and add the following libraries:
    • DHT
    • PubSubClient
    • ESP8266WiFi

Step 4

Write some code for your microcontroller and upload it to the device. Here's mine:

#include <DHT.h>
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

#define DHTPIN 4
#define DHTTYPE DHT11

DHT dht(DHTPIN, DHTTYPE);

struct dht11 {
  float humidity;
  float heatIndex;
  float temp;
};

struct dht11 sensor_data;
int led_counter = 0;
int read_sensor(struct dht11* data);

// Update these with values suitable for your network.
const char* ssid = "";
const char* password = "";
const char* mqtt_server = "";
const char* usr = "";
const char* pwd = "";

int timeSinceLastRead = 0;
WiFiClient espClient;
PubSubClient client(espClient);
long lastMsg = 0;
char msg[200];
int value = 0;
int analog_pin = A0;

void setup() {
  pinMode(BUILTIN_LED, OUTPUT);     // Initialize the BUILTIN_LED pin as an output
  Serial.begin(115200);
  dht.begin();
  setup_wifi();
  client.setServer(mqtt_server, 1883);
}

void setup_wifi() {
  delay(10);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("WiFi connected");
}

void reconnect() {
  // Loop until we are reconnected to the broker
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect("<YOUR CLIENT ID GOES HERE>", usr, pwd)) {
      Serial.println("connected");
    } else {
      Serial.println("try again in 5 seconds");
      delay(5000);
    }
  }
}

void loop() {
  if (!client.connected()) {
    reconnect();
  }
  client.loop();

  long now = millis();
  if (now - lastMsg > 2000) {
    lastMsg = now;
    if (read_sensor(&sensor_data) == 0) {
      char str_temp[6], str_humidity[6], str_hi[6];
      dtostrf(sensor_data.temp, 4, 2, str_temp);
      dtostrf(sensor_data.humidity, 4, 2, str_humidity);
      dtostrf(sensor_data.heatIndex, 4, 2, str_hi);
      // Assumption: a JSON payload; adapt the format to what your web app expects.
      snprintf(msg, sizeof(msg),
               "{\"temperature\": %s, \"humidity\": %s, \"heat_index\": %s}",
               str_temp, str_humidity, str_hi);
      client.publish("<BROKER NAME GOES HERE>", msg);
    }
  }
}

int read_sensor(struct dht11* data) {
  data->humidity = dht.readHumidity();
  data->temp = dht.readTemperature();
  if (isnan(data->humidity) || isnan(data->temp)) {
    Serial.println("DHT11 Read Failed");
    return 1;
  }
  data->heatIndex = dht.computeHeatIndex(data->temp, data->humidity, false);
  return 0;
}

Step 5

Install and configure the MQTT broker on the cloud. I am using AWS EC2 with Ubuntu 16.04 LTS.

sudo apt-get update

sudo apt-get install mosquitto mosquitto-clients

sudo mosquitto_passwd -c /etc/mosquitto/passwd <username>

sudo nano /etc/mosquitto/conf.d/default.conf

The mosquitto_passwd step creates the password file referenced below. Add the following to default.conf (the certificate paths are truncated here; point them at your own Let's Encrypt certificate directory):

allow_anonymous false
password_file /etc/mosquitto/passwd

listener 1883

listener 8083
protocol websockets
certfile /etc/letsencrypt/live/
cafile /etc/letsencrypt/live/
keyfile /etc/letsencrypt/live/

Then restart the broker with sudo systemctl restart mosquitto.

Step 6

Install Python Flask and use the code in this git repository.
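The linked repository has the full Flask app. Conceptually, the server side just needs to parse each MQTT payload into fields the page can display. Below is a minimal, illustrative parser; it assumes the device publishes a JSON payload with temperature, humidity and heat index fields, which may differ from your sketch's actual message format.

```python
import json

def parse_sensor_payload(payload):
    """Parse one MQTT message from the device into display-ready fields.

    Assumes a JSON payload like the example below; adjust the field
    names to match whatever your microcontroller sketch publishes.
    """
    data = json.loads(payload)
    return {
        "temperature_c": float(data["temperature"]),
        "humidity_pct": float(data["humidity"]),
        "heat_index": float(data["heat_index"]),
    }

reading = parse_sensor_payload('{"temperature": 26.5, "humidity": 48.0, "heat_index": 26.9}')
print(reading["temperature_c"])  # 26.5
```

In the real app, a function like this would sit between the MQTT subscription and the template or API response that renders the values in the browser.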

Hola! Run it in your browser by entering the public IP of your EC2 instance.

So there's my two cents on how to make a web app to display sensor monitoring data. This is, of course, my way of doing it, and you can experiment with all the steps to come up with other methods to achieve these results.

Do let me know how your experiments go, and if you would do anything differently.

Topics: Cloud, Architecture

Developer Tip: Auto complete in Solr using file-based dictionary

Posted by Nilanjana on Aug 19, 2015 5:10:00 PM

In Solr, a file-based dictionary can be used for auto-suggestions.


Multiple dictionary components can be used for different auto-suggestions. This component is configured in solrconfig.xml.
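For reference, a minimal sketch of such a suggester in solrconfig.xml is shown below. The component name, request handler path, and dictionary file (fileSuggester, /suggest, suggestions.txt) are placeholders; check the Suggester documentation for your Solr version for the exact parameters.

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">fileSuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">FileDictionaryFactory</str>
    <!-- One term per line, optionally followed by a weight after the delimiter -->
    <str name="sourceLocation">suggestions.txt</str>
    <str name="fieldDelimiter">\t</str>
    <str name="suggestAnalyzerFieldType">string</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">fileSuggester</str>
    <str name="suggest.count">5</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```

With this in place, a request to /suggest?suggest.q=&lt;prefix&gt; returns completions drawn from the dictionary file rather than from indexed field values.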

[Image: file-based dictionary configuration and auto-suggest example]

Note: When buildOnCommit is set to true, the dictionary file is reloaded on each commit.

Topics: Cloud, Architecture

Developer Tip: Multiple Google Analytics on Various Pages

Posted by Nilanjana on Aug 5, 2015 10:51:00 AM

A developer tip on placing multiple Google Analytics codes on various pages.

MYMODULE = multi_ga

/**
 * Implements hook_menu().
 */
function MYMODULE_menu() {
  $items['admin/config/multi_ga/settings-ga'] = array(
    'title' => 'Google Analytics settings',
    'description' => 'Place Google Analytics settings for each page.',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('MYMODULE_ga_config'),
    'access arguments' => array('administer'),
  );
  return $items;
}

/**
 * Google Analytics settings form.
 */
function MYMODULE_ga_config($form_state) {
  $description = t('Separate each Google Analytics code mapping by semicolon, and its values by comma. e.g. UA-52705955-1,news-events/*;');
  $format = t('Format: google_analytic_code,path_pattern;');
  $form['MYMODULE_google_analytic_mapping'] = array(
    '#type' => 'textarea',
    '#title' => t('Google Analytics path-wise mapping.'),
    '#description' => $description . $format,
    '#default_value' => variable_get('MYMODULE_google_analytic_mapping', ''),
    '#required' => FALSE,
  );
  return system_settings_form($form);
}

/**
 * Implements hook_page_alter().
 */
function MYMODULE_page_alter(&$page) {
  // Assumption: trigger the per-path analytics logic on each page build.
  MYMODULE_extend_google_anaytics();
}

/**
 * Set the Google Analytics ID for each path pattern setting.
 */
function MYMODULE_extend_google_anaytics() {
  $site_path = drupal_get_path_alias();
  $MYMODULE_google_analytic_mapping = MYMODULE_determine_ga_mappings('MYMODULE_google_analytic_mapping', '');
  foreach ($MYMODULE_google_analytic_mapping as $pos => $data_info) {
    foreach ($data_info as $path => $data) {
      if (drupal_match_path($site_path, $path)) {
        $google_id = drupal_strtoupper($data['name']);
        MYMODULE_apply_google_analytic($google_id);
      }
    }
  }
}

/**
 * Apply Google Analytics on the page.
 */
function MYMODULE_apply_google_analytic($google_id) {
  if ($google_id != '') {
    $script = "var _gaq = _gaq || [];_gaq.push(['_setAccount', '$google_id']);_gaq.push(['_trackPageview']);(function() {var ga = document.createElement('script');ga.type = 'text/javascript';ga.async = true;ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '';var s = document.getElementsByTagName('script')[0];s.parentNode.insertBefore(ga, s);})();";
    drupal_add_js($script, array('type' => 'inline', 'scope' => 'header'));
  }
}

/**
 * Get the configuration processed in array form for Google Analytics.
 */
function MYMODULE_determine_ga_mappings($mapping_key, $default_data = '') {
  $ga_label = variable_get($mapping_key, $default_data);
  $ga_label = str_ireplace(" ", "", $ga_label);
  $ga_label_arr = explode(";", $ga_label);
  $ga_label_mapping = array();
  foreach ($ga_label_arr as $pos => $data) {
    $data_arr = explode(",", $data);
    if (!isset($data_arr[0])) {
      continue;
    }
    $key = drupal_strtolower($data_arr[0]);
    $name = (isset($data_arr[1])) ? $data_arr[1] : '';
    if (empty($name)) {
      continue;
    }
    $ga_label_mapping[$pos][$name]['name'] = $key;
  }
  return $ga_label_mapping;
}

Topics: Drupal, Cloud, Architecture

