Why Should Your Organization Opt for Infrastructure as a Service (IaaS)

Posted by Kimi Mahajan on Nov 29, 2019 1:29:00 PM

Businesses are moving away from traditional data centers and physical servers and migrating to reliable, innovative cloud technologies. With cloud computing benefits including anytime data access, enhanced disaster recovery, improved flexibility and a reduced infrastructure burden on staff, enterprises are building more cost-efficient applications with higher performance and easier scalability.

IaaS, one such cloud computing model, has made life simpler for both enterprises and developers by taking infrastructure off their minds.

But how do enterprises know if they should opt for IaaS?

Understanding Infrastructure as a Service (IaaS)

IaaS refers to cloud services offered over a network that let businesses access their infrastructure remotely. A fit for enterprises of any size, it removes the need to buy hardware and other equipment, while letting you manage firewalls, IP addresses, servers, routers, load balancing, virtual desktop hosting, storage, and much more, cost-effectively through a scalable cloud model.

It gives organizations the flexibility to pay only for the services they use, which gives IaaS an edge over traditional on-premise resources. Businesses find it easier to scale by paying per usage from a practically unlimited pool of computing resources instead of sinking money into new hardware.
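To make the pay-per-use idea concrete, here is a minimal sketch of provisioning (and releasing) an IaaS server through AWS's boto3 SDK. The region, AMI ID and instance type are placeholder values, not recommendations:

```python
# Minimal sketch: renting a virtual server on demand via the EC2 API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single virtual server; billing starts only while it runs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Terminate it when done; with IaaS you stop paying the moment you release it.
ec2.terminate_instances(InstanceIds=[instance_id])
```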


Why Opt For IaaS Cloud Model?

IaaS is beneficial for organizations for a number of reasons. Let’s discuss its benefits in detail-

Usage of Virtual Resources

Your organization may never have to think about investing in resources such as CPU cores, hard disks or storage space, RAM, virtual network switches, VLANs, IP addresses and more, giving you the feel of owning a virtual datacenter.

It allows multiple users to access the same hardware anywhere, anytime, over an internet connection, keeping them productive on the move. And even if a server goes down or hardware fails, the services aren't affected, offering greater reliability.

Cost Savings With Pay-As-You-Go Pricing Model

With metered usage, enterprises pay only for the time the services were actually used, avoiding fixed monthly or annual rental fees and upfront charges. This lowers infrastructure costs and spares them from buying extra capacity as a buffer against sudden business spikes. IaaS providers also give users the option to purchase storage space, though they should note that pricing differs between providers.

Highly Scalable, Flexible and Quicker

One of the greatest benefits of IaaS is the ability to scale up and down quickly in response to an enterprise's requirements. IaaS providers generally have the latest, most powerful storage, server and networking technology to accommodate the needs of their customers. This on-demand scalability provides added flexibility and greater agility to respond to changing opportunities and requirements. Time to market also shrinks, since the infrastructure needed to get the job done is available immediately.
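As an illustration of what "scale up and down quickly" looks like in practice, here is a hedged sketch that declares an EC2 Auto Scaling group with boto3; the launch template name and subnet ID are placeholders:

```python
# Sketch: capacity that grows and shrinks with demand instead of being bought up front.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},  # placeholder
    MinSize=2,              # never fewer than two servers
    MaxSize=20,             # burst capacity for a sudden business spike
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
)
```

With a target-tracking scaling policy attached, the group then adds or removes instances on its own as load changes.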

High Availability

Business continuity and disaster recovery preparedness are the top drivers for adopting IaaS. It remains a highly available infrastructure and, unlike traditional hosting, gives users the flexibility to reach it over an internet connection even during a disaster.

With a robust architecture and a scalable infrastructure layer, organizations can consolidate their disparate disaster recovery systems into one virtualized environment, keeping their data secure. This stands as a perfect use case for IaaS.

By outsourcing their infrastructure, organizations can focus their time and resources on innovation and developing new techniques in applications and solutions.

How Do You Choose Between IaaS, Containers or Serverless?

The next question you might have is how to choose between the IaaS cloud computing model, containers and the serverless model.

Well, the one thing they all share is that they simplify the developer's life by letting them focus on writing code. Let's look at the differences:



 

| | IaaS | Containers | Serverless |
|---|---|---|---|
| Features | Instantly available virtualized computing resources over the internet, eliminating the need for hardware | Packages the application and the associated elements needed to run it properly, with all dependencies | Application broken up into functions and hosted by a third-party vendor |
| Use Case | Consolidating disaster recovery systems into one virtualized environment for backup, securing data | Refactoring a bigger monolithic application into smaller independent parts, e.g. splitting a large application into separate services such as user management, media conversion, etc. | Applications which do not always need to be running |
| Vendor Operability | Cloud vendor manages infrastructure | No vendor lock-in | Vendor lock-in |
| Pricing Model | Cheap | At least one VM instance hosting the containers is always running, hence costlier than serverless | Pay for what you use; cost-effective |
| Maintenance | User responsible for patching and security hardening | Not maintained by cloud providers; developers are responsible for maintenance | Nothing to manage |
| Web Technology Hosting | Can host any technology: Windows, Linux, any web server technology | Primarily Linux-based deployments | Not made for hosting web applications |
| Scalability | More | Less | More |
| Deployment Time | Instantly available | Takes longer to set up initially than serverless | Takes milliseconds to deploy |

 

IaaS is the most flexible model and best suits temporary, experimental and unexpected workloads. Srijan is an Advanced AWS Consulting Partner. Leveraging AWS's vast repository of tools, we can help you choose the best option for outsourcing your infrastructure and achieving your business goals. Contact us to get started on your IaaS journey.

 

Topics: AWS, Cloud, Architecture

3 Most Common Mistakes To Avoid While Migrating To Cloud

Posted by Kimi Mahajan on Oct 25, 2019 3:00:39 PM

As we mark the onset of 2020, 85% of enterprise workloads will be cloud-based, says Forbes.

But this is easier said than done: migrating to the cloud is a challenging task, and any pitfall might force you to roll back entirely, wasting the time and effort invested.

However, the complex task of migration can be simplified by thorough planning and evaluation. Let’s look into the three most common mistakes which can be avoided easily.

1. Not Enough Research When Choosing the Cloud Provider

The decision to pick the right cloud provider is crucial to the success of a migration. Hence, it should be taken wisely, considering your application demands, scalability and flexibility needs, and compliance requirements.

It is essential not just to bring business-critical applications to the cloud, but also to find a future-proof solution that maps to future growth. Hence, it is advisable to conduct a detailed study of existing IT infrastructure needs and identify a suitable cloud service provider accordingly.

Yearly commitments with a vendor can result in long lock-in periods that leave no flexibility to change later on. Hence, for a seamless migration, it is important to weigh all these factors at the initial stage.


2. Migration Without the Right Resources

Cloud migration can be quite overwhelming for organizations, leaving them completely dependent on cloud service providers. 

In-house cloud experts can handle the applications the right way and help ensure a smooth migration. They are responsible for key decisions, such as moving low-impact web applications before critical ones, and for eliminating possible impediments. They are also the first line of defence in securing applications, since leakage of confidential information can cost millions.

Cloud experts are also responsible for developing a proper proof of concept (POC) demonstrating what the migration is meant to achieve. Outlining the entire process allows unexpected issues and errors to be investigated and resolved, and also helps optimize cost.

3. Insufficient Cloud Migration Knowledge

It is essential to possess the right knowledge of migration, as not every application can simply be migrated to the cloud; some require re-architecting first. It is important to distinguish between the two and analyze when exactly to rehost or refactor, as reverting later can be extremely costly.


It is also important to ensure a strong internet connection for the complete migration, as cloud computing revolves around the web. The notion that a gradual transition to the cloud mitigates risk and simplifies migration is a myth: the transition is simpler when enterprises shift swiftly and focus on early adoption, since the sooner the new platform is adopted, the sooner lingering questions get resolved.

Data protection also needs to be taken into account: your data still needs safeguarding even if the cloud provider is compliant with HIPAA and PCI DSS. The first thing to do post-migration is to secure the data.

Even though adopting cloud technology is a step towards staying competitive, organisations need resilient strategies to future-proof the entire migration process. Rushing the process is no solution, however attractive a quick migration may seem; it is better to understand the nitty-gritty and then execute with the right resources.

Contact us and we’ll help you understand the ins and outs of cloud migration as per your business needs.

Topics: Cloud

Refactoring applications for cloud migration: What, when and how

Posted by Nilanjana on Sep 27, 2019 3:54:00 PM

Enterprises migrating their applications to the cloud often face the difficulty of finalizing an approach that is in line with their migration goals. Here are a bunch of questions that can help you in this:

  • What are your business goals?
  • What are your application capacities?
  • What is the estimated cost for your cloud migration process?

Answering these questions, and then selecting the most suitable cloud migration path, will set your enterprise up for long-term success with the approach you choose.

In this post, we take a look at one of the most popular methods of cloud migration, refactoring: what is it, and when should you choose it?

What is refactoring migration?

Refactoring is the process of running your applications on your cloud provider's infrastructure after completely re-architecting them to suit the new cloud environment. This approach involves modifying your existing applications, often a large chunk of the codebase, to take better advantage of cloud-based features and the extra flexibility that comes with them.

Refactoring is more complex than the other cloud migration approaches because, while changing application code, you must also ensure that the external behavior of the application is unaffected.

For example, if your existing application is resource-intensive, say because it involves big-data processing or image rendering, it may generate larger cloud bills. In that case, redesigning the application for better resource utilization is required before moving to the cloud.
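As one hedged illustration of such a redesign, the heavy work can be moved behind a queue so that a separately scaled worker fleet handles it instead of the web tier. The queue URL and message fields below are hypothetical:

```python
# Sketch: enqueue an expensive image-rendering job instead of running it in-process.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/render-jobs"  # placeholder

def request_render(image_key: str, width: int, height: int) -> None:
    """The web tier returns immediately; workers scale with queue depth."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"image_key": image_key, "width": width, "height": height}),
    )
```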

This approach is the most time-consuming and resource-intensive of all approaches, yet it can offer the lowest monthly spend in comparison. Let's take a look at the benefits and limitations it has to offer:

Benefits of Refactoring

Most benefits of refactoring are delivered in the future. They include:

  • Long-term cost reduction: Refactoring ensures costs fall over time by matching resource consumption to demand and eliminating waste. This results in a better, more lasting ROI than less cloud-native applications deliver.

  • Increased resilience: By decoupling the application components and wiring together highly-available and managed services, the application inherits the resilience of the cloud.

  • Responsive to business events: Using this approach enables the applications to leverage the auto-scaling features of cloud services that scale up and down according to demand.

Limitations of Refactoring

The disadvantages of this approach include:

  • Vendor lock-in: The more cloud-native your application is, the more tightly it is coupled to the cloud you are in.

  • Skills: Refactoring is not for beginners. It requires the highest level of application, automation and cloud skills and experience.

  • Time-consuming: Because refactoring is resource-intensive, and much more complicated in terms of changing a non-cloud application into a cloud-native one, it can take a lot of time to complete.

  • Getting it wrong: Refactoring requires changing everything about the application, so it has the maximum probability of things going sideways. Each mistake can cause delays, cost escalations and potential outages.

 

Refactoring is a complex process, but it is well worth the improvement you get in return. Some companies go as far as refactoring their business solutions in parts to make the whole process more manageable, though this compartmentalization can also make the refactor longer and more resource-intensive.

When to choose refactoring?

Now that you are aware of the advantages and limitations associated with the refactoring approach, the next step is to identify when you should choose it. Take a look:

1. Enterprise wants to tap the cloud benefits

Does your business have a strong need to add features, scale, or performance? If so, refactoring is the best choice for you. Exploiting the cloud features will give you benefits that are otherwise difficult to achieve in an existing non-cloud environment. 

2. Scaling up or restructuring code

Is your organization looking to scale an existing application, or wants to restructure their code? You can take full advantage of cloud capabilities by migrating via the refactoring process.

3. Boost agility

If your organization is looking to boost agility or improve business continuity by moving to a service-oriented architecture, then this strategy may be worth pursuing. And that’s despite the fact that it is often the most expensive solution in the short-medium term.

4. Efficiency is a priority

Refactoring has the promise of being the most efficient cloud model because your application is cloud-native and can exploit continuous cloud innovation, benefiting from falling costs and improvements in operations, resilience, responsiveness and security.

How to refactor?

So you know when to choose refactoring; the next question is how. There are, in general, four ways to refactor your applications for the cloud.

1. Complete Refactoring

In this type, 50% of the code is changed and the database is updated to utilize as many cloud-native features as the application requires. This strategy can improve performance, cut operating costs and boost IT teams' ability to meet the needs of the business. On the downside, however, the process can be too costly or complex, and can introduce bugs.

2. Minimum Viable Refactoring

This requires only slight changes to the application, and is therefore both quick and efficient. Users who take this approach often incorporate cloud-native security, management and perhaps a public cloud database into their migrated workload.
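A hedged sketch of what such a minimal refactor might look like: the application logic stays the same, but its persistence calls are pointed at a managed cloud database (DynamoDB here). The table name and key schema are hypothetical:

```python
# Sketch: swapping a self-managed datastore for a managed one with minimal code change.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
sessions = dynamodb.Table("user-sessions")  # hypothetical table keyed on session_id

def save_session(session_id: str, data: dict) -> None:
    # Previously a write to a self-managed database; the function signature is unchanged.
    sessions.put_item(Item={"session_id": session_id, **data})

def load_session(session_id: str) -> dict | None:
    response = sessions.get_item(Key={"session_id": session_id})
    return response.get("Item")
```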

3. Containerization Refactoring

In this, applications are moved into containers with minimal modifications. The applications exist within the containers, which enables users to incorporate cloud-native features and improve portability. 

This approach is considered more complex because of the learning curve involved in adapting to new tools. But that concern is shrinking: with the popularity of containers and their growing ecosystems, the costs and refactoring times continue to decrease.

4. Serverless Application Refactoring

This approach has similar issues as containerization as it changes the development and operations platform, which requires learning new tools and skills. Some modifications are required to make the application work effectively and take advantage of serverless systems on the public cloud. 

Unlike containers however, serverless platforms don't provide portability, so lock-in is a major downside to this approach.

You can refactor your applications in any of these ways, but Minimum Viable Refactoring is advisable for most cases. Refactoring is a highly variable activity that depends on the current application's complexity, and during the discovery and assessment process it is not possible to predict precisely how long an application refactor will take; around three to six months per application is typical, depending on complexity and previous experience.

Hence, a targeted timeline, refactoring in parts, and checking progress against collected data are some of the best practices to keep in mind when taking up the refactoring cloud migration approach. For these reasons, this approach is chosen only by the few enterprises that have the time, money, and resources for it.

Looking to shift business-critical applications to or even between clouds? Just drop us a line and our expert team will be in touch.

Topics: Cloud, Architecture

Embrace These 6 Technology Trends to Level Up Your Game in FinTech Industry

Posted by Urvashi Melwani on Sep 26, 2019 12:56:00 PM

 

The financial services (FS) space has lately been experiencing a transformation driven by multiple forces, from demographic and social changes to the considerable rise in the adoption of financial technology. The shift to this new digital model will eventually make consumers' lives more comfortable and let them experience services seamlessly.

So, let's dive deep into the key technology trends that are disrupting the FinTech industry, to bring you up to speed-

1. Increase in Cloud Adoption By Banks & Financial Institutions

The cloud adoption rate has increased significantly in recent years and has helped elevate the scope and pace of innovation. As per International Data Corporation (IDC) estimates, global spending on public cloud services and infrastructure was expected to grow 23.8% over 2018 to reach USD 210 billion in 2019, and is forecast to grow at a CAGR of 22.5% during 2017-22 to reach USD 370 billion in 2022.

Further, the banking industry is expected to spend more than USD 20 billion on public cloud services in 2019, which indicates how deeply the cloud has penetrated the sector.


In the past, FS players leveraged cloud-based software-as-a-service (SaaS) primarily for non-core applications such as CRM and HR. That is no longer the case: they are now increasingly using cloud infrastructure to deliver core applications too.

Cloud offers the following advantages-

  • Improves the speed of innovation: it reduces entry barriers for FinTech innovators, who primarily operate on cloud-based platforms.

  • Improves speed to market: it enables rapid application development and cuts the time and cost of setting up a new server, by eliminating the need for organizations to invest in their own hardware.

  • Delivers insight into customers' preferences: FS players use the cloud as an analytics platform, accessing data from multiple partners through application programming interfaces and microservices, which helps them respond to customer needs swiftly.

Moving forward, as digital transformation gathers pace, the potential benefits of the cloud will be hard to ignore, making it critical for enterprises to act promptly on their cloud strategies.

2. Securing Digital Property Will Be the Topmost Priority of the FinTech Sector

Financial services executives are already well versed in the long-lasting impact cyber-threats can leave on their industry.

The fast-paced digital transformation and growing interconnectivity in FS have allowed cybercriminals to target extremely sensitive financial information, causing organizations huge operational, reputational, and financial damage. Unfortunately, this is likely to take some time to change for the better, due to the following forces-

  • Use of third-party vendors
  • Rapidly evolving, sophisticated, and complex technologies
  • Cross-border data exchanges
  • Increased use of mobile technologies by customers, including the rapid growth of the Internet of Things
  • Heightened cross-border information security threats

 

Although some organizations have started implementing the required measures to leverage advanced technologies and mitigate cyber-threats, more efforts are required to unearth and manage new risks inherent with these emerging technologies.

For example, advancements in fields such as AI can enable more powerful attacks: cybercriminals can corrupt ML algorithms by feeding biased data into their training sets.

So, organizations have a lot of catching up to do to address these security challenges properly. For consumers, security and data privacy form an integral part of FS solutions, and the onus for safeguarding them lies with organizations.

Going forward, being cyber-resilient against cybercrimes will be the key objective of enterprises to win customers’ trust and push the adoption of digital solutions.

3. Advancements in Robotics and AI Will Target Multiple Capabilities

Leading incumbent financial services firms and tech companies have already formed alliances to apply a combination of robotics and AI to key pressure points such as cost reduction and risk mitigation. They are targeting specific combinations of capabilities: social and emotional intelligence, natural language processing, logical reasoning, pattern identification and self-supervised learning, physical sensors, mobility, navigation, and more.

Some robots can already sense the details of their environment, recognizing objects and responding to information and objects with safe and useful behaviors. The biggest example is self-driving cars, which have been performing well in real-world tests. Although they face technological hurdles right now, three to five years down the line we can anticipate rapid gains as new models combine increasingly powerful, standard modular platforms with the ability to learn.

4. Blockchain in Banking and FinTech

Blockchain has already garnered a lot of attention across industries: as per this report, 84% of respondents are actively involved with blockchain technology, and financial services is the most advanced industry in implementing blockchain systems.


Distributed ledger technology (DLT) enables simultaneous confirmation by all parties on the network, eliminating an entire layer of intermediaries who extract fees from each transaction they execute, and thereby cutting considerable infrastructure cost. This elimination matters because a large number of intermediaries used to be involved in moving money, adjudicating contracts, and other activities.
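To illustrate the core mechanism (and only the mechanism; real DLT adds consensus across many parties on top of this), here is a toy sketch of a hash-chained ledger, where tampering with any past entry breaks every later link:

```python
# Toy sketch: each block stores the hash of the previous block.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

ledger = [{"prev": "0" * 64, "tx": "genesis"}]

def append_tx(tx: str) -> None:
    ledger.append({"prev": block_hash(ledger[-1]), "tx": tx})

append_tx("alice pays bob 10")
append_tx("bob pays carol 4")

# Editing an earlier transaction would invalidate every hash after it.
assert ledger[2]["prev"] == block_hash(ledger[1])
```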

For instance, cross-border payments used to be highly inefficient, as transactions had to pass through multiple intermediaries for authentication before reaching the end customer, resulting in long delays and high costs.

The Blockchain system has removed this layer to deliver payments to end customers in real-time and more securely. 

Similarly, it has revamped the clearing and settlement systems by putting securities on a common, decentralized ledger, thereby eliminating the need for middlemen and delivering services hassle-free.

Another important use case is the blockchain-enabled KYC utility. It lets organizations store customers' KYC data from multiple sources in a single decentralized, secure database and share access with third parties after obtaining due consent, leading to less duplication and lower cost for the industry, robust AML/CFT checks, and improved customer experience.

This video can further help you understand the FinTech trends that are shaping the finance and banking sector-

5. APIs Will Foster Innovation, Speed, & Agility

Building applications with APIs is now considered the optimal way to address the business and economic challenges facing the financial industry. FinTech startups have already reshaped the landscape by creating mobile applications that have challenged and outpaced the banking industry. APIs give dynamic small businesses an edge, letting them innovate with a speed and agility that larger, more established players struggle to duplicate. Banks are now joining the bandwagon too, taking part in developer sandboxes and the API economy. Given the amount of data banks have collected over the years, they are likely to regain dominance in this area.

6. Human Digital Interfaces

While it's crucial for banks and financial institutions to implement digital strategy carefully, they can't ignore technology, which has a huge role to play in augmenting the human experience and thus empowering both customers and employees.

Mobile technology has become deeply rooted in our lives. For instance, we use our voices to give commands rather than touching our screens or typing, and passwords are being replaced by biometric fingerprint, retinal, or face scans as security checks.

Consumers can use technology to get deeper insights into their financial situations. Especially when combined with AI, analytics, gamification and principles of behavioral science, the industry can sidestep its traditional role as a mere keeper of assets and instead use trained bots to empathize with customers and understand regional and generational differences.

Technology can also be used to evaluate the emotional state of humans based on their interactions with devices. Over the next few years, expect to see these interfaces showing up in more places and being perfected.

Final Words

FinTech is an industry that will evolve continuously and stay at the forefront of innovation. Technology that would take large financial institutions years to design, test, and implement is now going from beta to real-world application in a fraction of the time, driven by startups that are disrupting the financial industry in a big way.

They are unlocking potential previously deemed unattainable, and serving demographics once considered undesirable.

While security remains the topmost issue, it is us consumers who win, enjoying the benefits, conveniences and functionality that these FinTech trends bring to our banking and financial experiences.

Topics: Drupal, Cloud, Financial Services

Why Platform as a Service (PaaS) is the answer to high-performance hosting

Posted by Kimi Mahajan on Sep 24, 2019 3:10:00 PM

Running, compiling or configuring your web application on a single virtual server instance can be complex as well as time-consuming.

However, with new technologies emerging and evolving, the entire cloud computing process is getting simplified.

Let's look at why Forbes has termed Platform as a Service (PaaS) the dominant cloud service model, and why it stands as the best-suited solution for your high-performance hosting needs.

Understanding Platform as a Service

The PaaS service delivery model evolved from the Software as a Service (SaaS) cloud offering. It lets customers use virtualized servers, rented rather than purchased outright, to design, develop, test, deploy and host web applications.

PaaS vendors offer the following along with the cloud offering (a short provisioning sketch follows the list):

  1. Specific software development tools such as a source code editor, a debugger, a compiler, and the other essential tools developers need to build their applications.
  2. Middleware, which acts as an intermediary between user-facing applications and the machine's operating system.
  3. An operating system on which developers build applications.
  4. Databases to store data, which developers can administer and maintain.
  5. Infrastructure to manage servers, storage, and physical data centers.
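As a hedged sketch of how little of that stack the developer touches, here is an AWS Elastic Beanstalk environment created with boto3. The application name, environment name and solution stack are placeholders (available stacks vary by region and over time):

```python
# Sketch: the platform (OS, runtime, web server) is picked by name,
# never provisioned or patched by hand.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.create_application(ApplicationName="my-web-app")  # placeholder name

eb.create_environment(
    ApplicationName="my-web-app",
    EnvironmentName="my-web-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",  # placeholder stack
)
```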

Why choose PaaS over IaaS and SaaS?

Before comparing PaaS with Infrastructure as a Service (IaaS) and SaaS, it is important to understand what each service means and how it helps users achieve their goals.

Let’s understand each one by comparing them with modes of transportation.

On-premises IT infrastructure is like owning a car: when you own a car, you take responsibility for its maintenance.

IaaS is like renting a car. You choose the car as per your preference and drive it wherever you wish; when you want an upgrade, you simply rent a different car. SaaS is like taking public transport, where you share the ride with fellow passengers along a common route.

PaaS, however, can be thought of as taking a cab: you don't drive the car yourself, but pay the driver to take you to your destination. Now that we understand what each model means, let's compare IaaS, PaaS and SaaS on the basis of which services you manage (✔) and which you don't (╳).

| Services | IaaS | PaaS | SaaS |
|---|---|---|---|
| Applications | ✔ | ✔ | ╳ |
| Data | ✔ | ✔ | ╳ |
| Runtime | ✔ | ╳ | ╳ |
| Middleware | ✔ | ╳ | ╳ |
| Operating System | ✔ | ╳ | ╳ |
| Virtualization | ╳ | ╳ | ╳ |
| Server | ╳ | ╳ | ╳ |
| Storage | ╳ | ╳ | ╳ |
| Networking | ╳ | ╳ | ╳ |
| Examples | AWS, Cisco Metapod, Microsoft Azure | AWS Elastic Beanstalk, Windows Azure, Google App Engine | Gmail, Google Docs, GoToMeeting |

 

As per Gartner, the global public cloud services market is expected to grow to over $383 billion by 2020.

Perfectly suited to software developers, PaaS helps them deploy, test and manage applications without needing all the related infrastructure.

It's very different from traditional forms of web hosting like shared or Virtual Private Server hosting, where before writing a line of code the developer has to ensure the production environment is good enough to host the application, and must set up the application server, database, run-time platform, server configuration and more.

With HTTP caching servers, PaaS ensures faster application loading and eliminates issues like latency and downtime even if one server goes down. Applications can be deployed to the servers with a single command. It is useful for high-traffic websites that suffer performance issues in a shared environment when the server is under heavy load.

PaaS can be thought of as a multi-server high performance solution which virtually distributes the web traffic across multiple terminals, keeping your site performance at peak.

High-speed hosting not only improves the user experience of your site; it also has a positive impact on search engine ranking, and users are likely to stay longer when pages and resources load quickly.

Here are 5 advantages that PaaS offers over other cloud offerings:

1. Helps Build Applications Quickly
PaaS lets developers build applications more quickly than they could if they had to build, configure, and provision their own platforms and backend infrastructure. With PaaS vendors providing web servers, storage and networking resources, developers gain instant access to a complete software development environment, with no need to configure or maintain it, and can focus on delivering projects speedily.

2. Minimal Development and Cost-Effective Approach

PaaS services offer templates and code libraries to allow rapid development by providing prebuilt backend infrastructure and other resources. It offers new capabilities to your in-house development team without hiring additional staff, thereby reducing costs associated with development in building applications from scratch.

3. Easy Collaboration on Pre-Built Sophisticated Tools
PaaS offers an advantage over traditional hosting in the way it lets developers in distributed teams collaborate. It allows them to create applications using pre-built, otherwise expensive development tools to develop, test and iterate.

4. Scalability and Future-Proofing

The reusable code not only facilitates easy app development and deployment but also increases the opportunity for scalability. This allows businesses to scale and modify their product or processes efficiently and focus on core business initiatives rather than maintaining underlying IT infrastructure.

5. Cross-Language Support

PaaS cloud services let developers build applications in multiple programming languages.

How is PaaS different from Serverless Computing?

PaaS and serverless computing are similar in that the developer only has to worry about writing code, while the vendor handles all backend processes. However, they differ as shown in the table below:

| Features | Serverless Computing | PaaS |
|---|---|---|
| Scale | Automatically scales | Will not scale unless programmed to |
| Startup time | Instantaneous | Running most of the time to be available to users |
| Offerings | Does not provide development tools/frameworks | Provides development tools/frameworks |
| Pricing Model | Precise | Not precise |

Any demerits?

However, before deciding to opt for PaaS, it is important to understand your business needs in order to find a solution that is a good fit.

Firstly, the choice of PaaS provider should be made wisely, as you might not be able to switch vendors after an application is built. Not every vendor supports the same languages, libraries, APIs, architecture, or operating system used to build and run applications. Although it is possible to switch PaaS providers, the process can be time-consuming and may even mean rebuilding the application to fit the new platform.

Another thing to keep in mind is that the external vendor will store most or all of an application's data, along with hosting its code, and may actually store the databases via a third party. So it is important to test the security measures of the service provider and know their security and compliance protocols before making the decision.

Srijan can help you weigh the options and take the truly strategic path to PaaS, so you can potentially deliver more with better functionality. Contact us to get the conversation started.

Topics: Cloud, Agile, Architecture

Cloud migration paths - Which ones should you choose?

Posted by Urvashi Melwani on Sep 11, 2019 3:05:00 PM

As more infrastructure and applications shift to the cloud to reinforce digital transformation, one of the most critical decisions enterprises must make well ahead of time is the cloud migration approach that will serve them best in the long term.

As per a survey conducted by Netscout in 2018, the majority of enterprises, 56% of respondents, had already started workload migration. A further 14% of respondents were in the planning stage, and another 15% had plans to carry out the migration within six months to a year.

(Source: Netscout)

As is apparent, there is no one-size-fits-all answer; but up-front planning makes the migration process easier, and the whole cloud transition smoother.

So which is the best cloud migration approach for your business?

This blog takes a look at the three distinct migration approaches to help you choose the right one.

It’s time to reach the cloud

Additionally, this report predicts that 80% of companies feel the need to move their workloads to the cloud as soon as possible. There are multiple approaches for doing so, but we will discuss the three most common here. Naturally, each has benefits and disadvantages:

  1. Lift and shift aka Rehost
  2. Lift, Tinker, and Shift aka Replatform
  3. Refactor

1. Lift and Shift or Rehost 

Rehosting, or the lift-and-shift approach, is a forklift approach to migrating applications to the cloud without any modification of the code. It involves lifting either a part of an application or the whole of it from an on-premise or existing cloud environment to a new cloud environment.

It is currently considered the most common migration method, comprising 40% of all migrations, thanks to its agility, simplicity, and speed in comparison to re-platforming and refactoring.

This is beneficial for large enterprises that want to migrate quickly with minimal or no disturbance to the existing application workflow.

And once the migration is done, it becomes much easier for them to optimize the applications, as the difficult part is already behind them.

When to choose this approach?

“This works best for organizations looking to reduce their on-premises infrastructure expenses immediately”

Here are some common instances when enterprises should choose the rehosting approach-

  • Large number of migrations over time
    This lift-and-shift approach should be opted for when it's simple, quick, and cheap and you have a lot of migrations to do over time. Additionally, you need to plan and budget for all of the post-migration work involved, for instance where you have lifted and shifted non-cloud tools, processes, and people into the cloud.
  • Urgency or pain point
    A common compelling event could be the urgent evacuation of a data center or hosting provider.
    Rehosting works best for organizations looking to reduce their on-premises infrastructure expenses immediately, those bearing too much cost in maintaining physical infrastructure, or those that have faced a cloud disaster (e.g. a corrupted database). Application rehosting gets applications onto the cloud with minor or no modification, backed up for smooth and fast running.
  • Commercial and off-the-shelf applications
    This is an apt choice for organizations running applications that need to keep working without intervention or modification. These are generally commercial, off-the-shelf applications, and rehosting is a good strategy: move them onto the cloud as-is first, and optimize later.
  • Virtualization and IaaS skillset
    If the available resources are skilled in virtualization and infrastructure as a service, then rehosting matches their skill sets (whereas replatforming and refactoring need more advanced skills).
  • Test environments
    Test environments are important for running apps successfully, and moving them with a lift-and-shift approach is usually easy and avoids disruption.

Benefits of Rehosting

The benefits of the lift-and-shift approach are-

  • Quick migration
  • Reduced risk (simplicity)
  • Application, hypervisor, and hardware agnostic
  • Can be highly automated with limited or zero downtime
  • Imports configuration and scripts even when these are undocumented or hard to reverse-engineer

Limitations of the Rehosting approach

“The rehosting method does not let you reap benefits from the native cloud functionality and tools like elasticity”

The rehosting approach is popular because it is the simplest migration. However, it carries risks and limitations:

  • Migrating brittle processes

When you migrate an application, you also inherit its operating system, generally undocumented configurations, and non-cloud people and processes. If these processes are not clearly understood pre-migration, you end up with a fragile application and a brittle end product.

  • Cloud-native features

The rehosting method does not let you reap the benefits of native cloud functionality and tools like elasticity. The app functions the way it did on a single physical server, but you cannot take advantage of the added flexibility and scalability offered by cloud environments.

  • Rehosted applications are black boxes

Simply copy-pasting applications and data without understanding what's in them means you are pulling everything into the cloud, including malware and insecure configurations.

  • Unbudgeted/planned post rehosting activities

There are always post-rehosting activities that need to be taken care of. These involve additional cost beyond the basic migration process, in money, time, and resources. If avoided, they prove costly in the long run, with high expenditure incurred on over-provisioned resources.

  • Ingest known and unknown problems

If the application faces a problem outside the cloud, known or unknown, rehosting will likely bring that problem to the cloud. Retiring technical debt is a big plus of more advanced migration methods like replatforming and refactoring, or the drop-and-shop technique of repurchasing.

2. Lift, Tinker, and Shift or Replatform approach

In a replatforming migration, part of the application, or the entire application, is optimized with a small amount of up-versioning of its APIs before moving to the cloud.

This varies from adding one or two pieces of functionality to completely re-architecting components before they can be rehosted or refactored and eventually deployed to the cloud.



“Developers can also reuse the resources they are accustomed to working with”

The replatforming approach is an interim solution between rehosting and refactoring, allowing workloads to take advantage of base cloud functionality and cost optimization without the level of resource commitment a full refactor requires.

Developers can also reuse the resources they are accustomed to working with, such as legacy programming languages, development frameworks, and existing caches in the application.

Replatforming can be used to add new features, scale better, and leverage the reserved resources of your cloud environment. There are even ways to integrate the app with native cloud features with little or no code modification.
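As a hedged example of such an integration, a typical replatforming "shaping" leaves the application's caching code untouched and merely points it at a managed service such as Amazon ElastiCache. The endpoint below is a placeholder:

```python
# Sketch: same code path, managed backend; only the endpoint changes.
import redis

# Before: cache = redis.Redis(host="10.0.0.12")  # cache on a self-managed VM
cache = redis.Redis(
    host="my-cache.abc123.use1.cache.amazonaws.com",  # placeholder ElastiCache endpoint
    port=6379,
)

cache.set("greeting", "hello")
print(cache.get("greeting"))  # b'hello'
```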

When to choose this approach?

Take a look at these scenarios when to opt for this approach-

“Replatforming allows you to reshape them to make it compatible with the cloud”

  • Modification of applications is required
    Replatforming is suitable when organizations want to make changes to the APIs of their applications (up-versioning) and then deploy them to the cloud. This may be because the source environment does not support the cloud, or because the organization wants minor changes without hampering the application's functioning.
    In such cases, some fine-tuning is required, and re-platforming is the optimum choice for it.
  • Avoid post-migration work
    Organizations that deployed the rehosting method realized there is a slew of post-migration tasks needed to realize the full potential of the cloud. The feasible solution is to make those changes to the application during the migration itself; hence re-platforming works best in such a scenario.
  • Experience with more cloud skills
    If you have resources in your organization who have been working with cloud-based solutions lately and can now shape applications for cloud compatibility, or take shortcuts in the migration process, consider using the replatforming approach.
  • Most apps are common three-tier web apps
    When most of your apps are common three-tier web apps, replatforming lets you reshape them to make them cloud-compatible. And once you have reshaped one, you can apply the pattern far and wide, significantly improving migration efficiency as you move forward.

Benefits of Re-platforming

“Enterprises can leverage cloud-native functionalities without worrying about the risk, complexity, cost, and time of a full refactor”

Replatforming is a cost-efficient solution. It is an optimal middle ground between rehosting and refactoring, where enterprises can leverage cloud-native functionality without worrying about the risk, complexity, cost, and time of a full refactor.

This approach does not require you to adjust the cloud environment to match the previous one. Instead, you have the flexibility to start small and scale up as needed, which means you can save a lot while the cloud environment grows with the app itself.

 


Its benefits include-

  • Use of cloud-native functionalities
  • Apps can leverage base cloud functionality and cost optimization
  • Helps achieve tactical benefits, like reducing the amount of time spent managing database instances
  • Reduce or replace common application components with a better cloud service, such as replacing Nginx in a VM with an AWS Elastic Load Balancer (sketched below).
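A hedged sketch of that last swap: instead of running Nginx on a VM for load balancing, an Application Load Balancer is created through boto3. The subnet and security group IDs are placeholders:

```python
# Sketch: replacing a load-balancing VM with a managed load balancer.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],                           # placeholder
    Scheme="internet-facing",
    Type="application",
)
print(response["LoadBalancers"][0]["DNSName"])  # traffic now enters here, not Nginx
```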

Limitations of Replatforming

“If the cloud service used to replace a component is inappropriate or poorly configured, then the re-platform migration can go wrong”. 

The major risk associated with re-platforming is that, left unchecked, the project scope can grow and change during the process until it becomes a complete refactor. Managing scope and avoiding unnecessary changes is key to mitigating this risk.

Secondly, if the cloud service used to replace a component is inappropriate or poorly configured, the replatform migration can go wrong. 

Other limitations include:

  • Overly aggressive change
    Every individual shaping during re-platforming increases the risk of causing problems: be circumspect and choose common, well-known shapings. Avoid exotic changes unless there is a niche opportunity or they are unavoidable. The goal is a successful re-platform, not an exotic one.
  • Automation is required
    Although the re-platforming approach can be carried out manually, doing so has limits, as manual modification is time-consuming. A better solution is to model the application's needs using an automation platform and then modify the model to represent the platform shapings.

 

Watch this video for a closer look:

 

A summary of the pros and cons of each approach:

 

| | Pros | Cons |
|---|---|---|
| Lift-and-Shift | Minimal work required to move the application; faster migration and deployment | Typically does not take advantage of native cloud features; may cost more to operate in the cloud |
| Partial Refactor | Only parts of the application are modified; faster migration and deployment than a complete refactor | Only takes advantage of some cloud features; may cost more to operate in the cloud |
| Complete Refactor | Applications typically offer higher performance; applications can be optimized to operate at lower cost | Much higher cost, since most of the application must change; slower time to deployment |

3. Re-architect or Refactor approach

Refactoring is the process where you run your applications on the infrastructure of your cloud provider, also referred to as Platform as a Service (PaaS).

Refactoring is a bit more complex than the other two: while making changes to the application code, you must ensure they do not impact the external behavior of the application. For example, if your existing application is resource-intensive, say due to big-data processing or image rendering, it may generate larger cloud bills; in that case, redesigning the application for better resource utilization is required before moving to the cloud.




This approach is the most time-consuming and resource-demanding, yet it can offer the lowest monthly spend of the three approaches, along with the full potential of the cloud to increase performance, resilience, and responsiveness.

When to choose this approach?

Refactoring comes in handy for the enterprises in the following scenarios-

“Refactoring method helps in reducing cost and improvements in operations, resilience, responsiveness, and security”

  • Enterprises want to leverage cloud benefits
    Refactoring is the best choice when there is a strong business need to add features, scale, or enhance performance by deploying to the cloud, and this is not possible in the existing non-cloud environment. Simply put, when the old ways no longer qualify, sticking with them can become an existential threat in a phase of cut-throat competition.
  • Scaling up or restructuring code
    When an organization is looking to expand an existing application, or wants to restructure its code to draw out the full potential of its cloud capabilities.
  • Boost agility
    If your organization aspires to amplify agility or improve business continuity by moving to a service-based architecture, then this strategy does the trick. And that's despite the fact that it is often the most expensive solution in the short-to-medium term.
  • Efficiency is a priority
    Refactoring method helps in reducing cost and improvements in operations, resilience, responsiveness, and security.

Further, you have the option to choose between partial or complete refactor, depending upon your needs. Partial refactor involves modification of the small part of the application which results in faster migration compared to complete refactor.

Benefits of Refactoring

The benefits of refactoring are mostly observed in the future. The current application and its environment configuration determine the complexity of refactoring, and that impacts the time-to-value of the project.

Its benefits include:

“This approach ensures an over-time reduction in costs, matching resource consumption with the demand, and eliminating the waste”

  • Long-term cost reduction
    This approach ensures an over-time reduction in costs by matching resource consumption with demand and eliminating waste. Hence, it brings a better, more lasting ROI compared to less cloud-native applications.
  • Increased resilience
    By decoupling the application elements and wiring together highly-available managed services, the application inherits the resilience of the cloud.
  • Responsive to business events
    This approach lets applications leverage the auto-scaling features of cloud services, scaling up and down as per demand.

Limitations of Refactoring

The limitations are here-

  • Vendor lock-in
    The more cloud-native your application is, the more tightly it is coupled to the cloud you are in.
  • Skills
    Refactoring demands the highest level of application, automation, and cloud skills and experience to carry out the process.
  • Time
    As refactoring is the complicated method of migrating from a non-cloud application to a cloud-native application, it can consume a considerable amount of time.
  • Getting it wrong
    Refactoring involves changing everything about the application, so it has the maximum probability of things going wrong. Each mistake can cause delays, cost escalations, and potential outages.

Refactoring is a complex process, but it is well worth the results and improvement you get in return. It is a resource-demanding process that requires plenty of time to complete. Some companies go as far as refactoring their business solutions in parts to make the whole process more manageable, though this compartmentalization can also make the refactoring longer and more resource-intensive.

Final words

Which one is the best approach?

There is no absolute answer to the question, especially since different use cases require different things. Picking one of the three approaches is a matter of finding the best fit for your specific needs. That said, start by checking whether the app can be moved to a cloud environment in its entirety while maintaining cost and keeping operational efficiency in check. If the answer is yes, start with the rehost method. If rehosting doesn't seem like a fit, or if cost-efficiency needs further refinement, consider re-platforming as a good option. Remember that not all apps can be transitioned this way, so you may end up having to find other solutions entirely.

The same goes for refactoring. If you have enough time and resources to complete a full refactor of your current solutions, also take SaaS and other alternative solutions into consideration.

Nevertheless, you can certainly take most of the hassle out of moving to the cloud with the right cloud migration strategy. You can then devote yourself to finding new resources to use, better flexibility to benefit from, and a more effective environment for your apps. 

Keep these points in mind, and you'll be able to find the best approach among the three. However, there is no single defined path to success: your organization's needs may vary and lead you to adopt a combination of these approaches, i.e. a hybrid approach.

For example, it is possible that after conducting a migration analysis for your organization, it is determined that:

  • 50% of your apps need to be re-hosted
  • 10% to be retained on-premises in a colocation facility
  • 40% of apps, which are maintenance-level yet business-critical, are flagged for re-platforming/refactoring

What is important in the end is to plan and roll out your migration plan by conducting a thorough analysis of your complete IT system, your infrastructure, and your application workload. 

This assessment will help you determine which strategy to use and which part(s) should be moved to the cloud. 

Topics: AWS, Cloud, Javascript

Exploring How AWS Serverless Web Apps Work

Posted by Kimi Mahajan on Sep 7, 2019 4:10:00 PM

2014 saw the breakthrough release of AWS Lambda, offering a powerful new way of running applications on the cloud. However, it was soon realized that some structure is needed to run an application this way, and that it's difficult to manage all of the containers that Lambda introduces.

This gave way to the most powerful framework for building applications exclusively on AWS Lambda: the Serverless Framework.

With an increasing number of organizations riding the serverless wave, the way they develop and deliver software has undergone a drastic transformation. Let's get into the details of serverless web apps and explore how they work.

The What and Why of Serverless

Serverless refers to an application framework for building web applications without dealing with the details of servers. The servers are managed by the cloud provider, which takes care of provisioning and allocation.

This lets the application run in stateless compute containers that are ephemeral and event-triggered. Developers' productive efforts can be channeled in the right direction, saving them from getting caught up in the intricate web of modern, complex infrastructure.

Pricing is based on a pay-per-use model rather than pre-purchased compute capacity. Serverless has very practical offerings and is anticipated to be one of the most used cloud services in the coming years.

We have already mentioned in our blog - 5 Reasons To Consider Serverless AWS For Drupal - why AWS is considered the best hosting provider for Drupal websites.

If we compare the architecture for a multi-user, mobile friendly app which requires user authentication of a monolithic with that of a serverless web app, it would look somewhat like the one shown below:

Monolithic Architecture (Source: Dzone)

The serverless architecture would look something like the one shown below:


Serverless Architecture (Source: Dzone)

With serverless, application development relies on a combination of third-party services, client-side logic and cloud-hosted remote procedure calls, which is why it is also referred to as Functions as a Service (FaaS).

FaaS is an implementation of serverless architecture where a piece of business logic processes individual requests. Functions are independent, server-side, logical units: small, separate pieces of logic that take input arguments, operate on them, and return a result, such as a Lambda. They are stateless, meaning any two invocations of the same function could run in completely different containers.
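For a concrete picture of that shape, here is a minimal sketch of a Python Lambda handler: input in, result out, nothing retained between invocations. The event field is hypothetical:

```python
# Minimal FaaS sketch: a stateless function using the standard AWS Lambda
# handler signature for Python.
import json

def handler(event, context):
    # No state survives between invocations; persistent data belongs in external services.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```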

AWS Lambda, Azure Functions, IBM OpenWhisk and Google Cloud Functions are the most well-known FaaS solutions available, supporting a range of languages and runtimes, e.g. Node.js and Python.
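To make this concrete, here is a minimal sketch of such a unit of logic: a hypothetical AWS Lambda handler written in Python. The event field is an assumption for illustration; nothing is cached between invocations.

```python
import json

# A minimal, stateless Lambda handler: it takes input arguments from the
# event, operates on them, and returns a result. Any two invocations could
# run on completely different containers.
def handler(event, context):
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```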


Composition of Serverless App

Assembling a modern application means creating a solution by combining SaaS with managed/serverless services. This makes development faster, but also complex, as it requires a lot of manual work to bring all the pieces together.

However, with serverless components it becomes much simpler. Every project uses AWS resources, divided into three groups:

  • AWS Lambdas
  • AWS API Gateway REST API
  • Other AWS resources, such as DynamoDB tables, S3 buckets, etc.

Serverless projects live exclusively in the cloud, i.e. on AWS, and don’t have a specific environment of their own. Instead, Serverless isolates the AWS resources a project uses for development, testing and production through stages.

Stages can be thought of as environments, except for the fact that they exist merely to separate and isolate your project's AWS resources.

Each serverless project can have multiple stages, and each stage can have multiple AWS Regions.

  • AWS Lambdas

Lambda functions on AWS can be replicated across each region your project uses. When you deploy a function in your project to a stage, it creates a Lambda for that stage. A Lambda can be triggered by events from other AWS services, but not by direct HTTP requests; those are handled through API Gateway, described next.

  • AWS API Gateway REST API

If your functions expose endpoints, a REST API on AWS API Gateway is automatically created for your project, and it too can be replicated across each region.

When you deploy an endpoint to a stage, Serverless builds it on your API Gateway REST API and then creates a deployment in the corresponding API Gateway stage.

  • AWS Other Resources

Your project's other AWS resources have separate deployments for each stage which can be replicated across each region your project uses.

Components present a single experience for provisioning infrastructure and code across cloud and SaaS vendors, saving development time.

Creating a Serverless Application

Let’s take a look at how you can use serverless components to create an entire serverless application. A serverless solution consists of a web server, Lambda functions (FaaS), a security token service (STS), user authentication and a database.


  • Client Application: The UI of your application is rendered client side.
  • Web Server: Amazon S3 acts as a robust and simple web server that can serve the static HTML, CSS and JS files for our application.
  • Lambda Functions (FaaS): The key enabler in a serverless architecture. In the framework shown above, AWS Lambda is used for logging in and for reading from and writing to your database, returning JSON responses.
  • Security Token Service (STS): STS generates temporary AWS credentials (API key and secret key) for users of the application to invoke the AWS API (and thus invoke Lambda).
  • User Authentication: User login can be added to mobile and web apps through an identity service integrated with AWS Lambda. It can also authenticate users through social identity providers and SAML identity solutions.
  • Database: AWS DynamoDB provides a fully managed NoSQL database; DynamoDB is used as an example here (see the sketch after this list).

Any cloud service can be packaged as a serverless component.
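As an illustration of the Lambda-plus-DynamoDB pairing above, here is a hedged Python sketch of a function persisting its state to a backing table. The table name and key schema are hypothetical; boto3 is bundled in the AWS Lambda runtime.

```python
import boto3

# All persistent state lives in the backing DynamoDB table, not the function.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table with 'user_id' as its key

def save_user(event, context):
    # Write one item, then read it back as the JSON response.
    table.put_item(Item={"user_id": event["user_id"], "name": event["name"]})
    response = table.get_item(Key={"user_id": event["user_id"]})
    return response.get("Item")
```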

Understanding Through An Example

You want to write a serverless image processing API that pulls images from S3. To do so, you might create an AWS API Gateway endpoint to call an AWS Lambda function, which then pulls an image from the AWS S3 bucket and modifies it.

All serverless components can be nested in a larger component to create a serverless image processing API.

Here’s why this matters: when you create this image processing API, you’ll have to configure each component. This can be avoided by nesting those infrastructure-level components in a higher-order component, which can expose a simpler configuration and be reused elsewhere.
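A hedged sketch of the image-processing function itself might look like the following Python Lambda. It assumes the Pillow library is packaged with the deployment (it is not in the default Lambda runtime), and the bucket and key names are hypothetical.

```python
import io

import boto3
from PIL import Image  # assumes Pillow is packaged with the function

s3 = boto3.client("s3")

def process_image(event, context):
    bucket = event["bucket"]  # hypothetical, e.g. "my-images"
    key = event["key"]        # hypothetical, e.g. "uploads/photo.jpg"

    # Pull the original image from S3.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Modify it: here, a simple thumbnail resize.
    image = Image.open(io.BytesIO(body))
    image.thumbnail((256, 256))
    buffer = io.BytesIO()
    image.save(buffer, format=image.format or "JPEG")

    # Write the processed image back under a new key.
    s3.put_object(Bucket=bucket, Key=f"thumbnails/{key}", Body=buffer.getvalue())
    return {"thumbnail": f"thumbnails/{key}"}
```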

Composing components to form an entire application

The complete web application can be built by nesting the serverless components.


The Serverless Framework aims to make infrastructure more invisible, enhance developers’ ability to focus on outcomes, and foster a community that shares and reuses those outcomes.

CloudFormation is an AWS service that bundles, as a unit, all the necessary pieces to make a Lambda do actual work. It treats a complete serverless ‘stack’ as a configuration file that can be moved and deployed in different environments.
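For instance, a stack defined in a template can be stamped out per environment with boto3. The template file, stack names and the Stage parameter below are assumptions for illustration.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Treat the whole serverless stack as a configuration file that can be
# deployed unchanged into different environments.
with open("template.yaml") as f:  # hypothetical template
    template_body = f.read()

for stage in ("dev", "test", "prod"):
    cloudformation.create_stack(
        StackName=f"image-api-{stage}",  # hypothetical stack name
        TemplateBody=template_body,
        # The stage is passed in so resources can be namespaced per environment.
        Parameters=[{"ParameterKey": "Stage", "ParameterValue": stage}],
        Capabilities=["CAPABILITY_IAM"],  # required if the template creates IAM roles
    )
```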

How do we tell our Lambda where it’s running, and how do we give it the configuration that it needs to interact with other services?

We need secrets to authenticate to our DB, but we also need our Lambda to know that it’s running on staging so that it doesn’t try to update the production database during our test runs.

So we can identify three key sections of our serverless app: our function, its resources, and the secrets/configuration that make up its environment.
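A minimal sketch of that third section, with variable names assumed for illustration: the function discovers its stage and secrets from environment variables set at deploy time, so a test run never touches production.

```python
import os

STAGE = os.environ.get("STAGE", "dev")       # e.g. "dev", "staging", "prod"
DB_SECRET_ARN = os.environ["DB_SECRET_ARN"]  # a reference to the secret, never the secret itself in code

def handler(event, context):
    # Resources are namespaced by stage, so staging code writes to
    # "orders-staging" rather than the production table.
    table_name = f"orders-{STAGE}"
    return {"stage": STAGE, "table": table_name}
```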

In a highly virtualized environment, it can be difficult to pinpoint where a particular piece of code is running. The ‘environment’ (the stack, its configuration, and its secrets) will collectively exist across multiple zones or even multiple services.

An active stack refers to a complete set of functions, resources, and environment. Dev, test and prod can be three active stacks where you’re running your code. If your production stack is distributed across three different AWS regions, you again have three active stacks.

To adopt a serverless model for part of your architecture, it is important to have a plan for managing all of these pieces. You must have:

  • Programmers to write functions and also manage their source code
  • Cloud professionals to manage the resources those functions need
  • Operations and security to deploy these stacks in the right environments

Srijan can help you assemble matching stacks and their environments, and easily define complete applications and re-deploy them in different AWS regions. Contact our experts with your requirements.

Topics: Microservices, AWS, Cloud

12 Factor Apps and Their Benefits For Cloud Native Applications

Posted by Nilanjana on Aug 30, 2019 5:32:00 PM

“Good code fails when you don’t have a good process and a platform to help you. Good teams fail when you don’t have a good culture that embraces DevOps, microservices and not giant monoliths,” said Java framework Spring’s Tim Spann when asked about the reason for choosing 12 factor apps.

If your enterprise team is often struggling with overly complicated, slowed-down app deployment, the 12 factor app methodology should be the go-to solution for you.

What are 12 Factor Apps?

A methodology created specifically for building Software as a Service (SaaS) apps, the 12 Factor App methodology can help you avoid the headaches typically associated with long-term enterprise software projects.

Laid down by a Heroku founder, these 12 design principles serve as an outline to guide the development of a good architecture.

  • They include defined practices around version control, environment configuration, isolated dependencies, executing apps as stateless resources, and much more
  • They work with a combination of backing services like databases, queues and memory caches
  • They utilize modern tools to build well-structured and scalable cloud-native applications

Interestingly, however, they are not a recipe for designing the full system, but rather a set of prerequisites that can get your projects off to a great start. Here’s a look at the 12 factors.

#1 Codebase

There should be only a single codebase per app, but multiple deployments are possible.

Multiple apps sharing the same code violates the twelve-factor methodology. The solution is to factor the shared code into libraries, which can be included through the dependency manager.

As for multiple deployments, the same codebase can be active across all of them, although possibly in different versions.

#2 Dependencies

A 12-factor app declares all dependencies explicitly, completely and exactly, via a dependency declaration manifest. You must also use a dependency isolation tool alongside dependency declaration.

  • Dependency declaration is required as it simplifies setup for developers who are new to the app.
  • Using a dependency isolation tool during execution ensures that no implicit dependencies “leak in” from the surrounding system.
  • Using dependency declaration and dependency isolation together is important, because one without the other is not sufficient to satisfy twelve-factor.

#3 Config

Storing config as constants in the code is a violation of the 12 factors. Instead, config should be stored in environment variables (env vars). Why? Config varies substantially across deploys, whereas code does not; env vars, on the other hand, are easy to change between deploys without changing any code.

Secondly, these env vars are independently managed for each deploy. They are never grouped together as environments. And this model scales up smoothly as the app naturally expands into more deploys over its lifetime.
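A small Python illustration of the difference, with hypothetical names:

```python
import os

# Violation: config baked into the code as a constant.
# DATABASE_URL = "postgres://user:pass@prod-db:5432/app"

# Twelve-factor style: config lives in the environment and can change
# between deploys without any code change.
DATABASE_URL = os.environ["DATABASE_URL"]
CACHE_TTL = int(os.environ.get("CACHE_TTL", "300"))  # default only for local development
```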

#4 Backing services

Under the 12-factor process, backing services are treated as attached resources, regardless of whether they are locally managed or third-party services. They can be accessed easily via a URL or other credentials, and even swapped one for another.

The result? If your app’s database is misbehaving because of a hardware issue, you can simply spin up a new database server restored from a recent backup. The current production database could be detached, and the new database attached – all without any code changes.

#5 Build, release, run

There should be strict separation between the build, release and run stages. This is done to ensure that no changes can be made to the code at runtime, since there is no way to propagate those changes back to the build stage.

Why is this necessary? Because runtime execution (unlike a build) can happen automatically, such as when a server reboots or a crashed process is restarted by the process manager. Any problem at that moment prevents the app from running, and that can be a major issue, particularly if no developers are on hand.

#6 Processes

Twelve-factor processes are stateless and share-nothing. It is never assumed that anything cached in memory or on disk will be available on a future request or job. Any asset compilation is done during the build stage, and anything that needs to persist is stored in a stateful backing service, typically a database.

#7 Port binding

A 12-factor app is completely self-contained, and does not rely on the runtime injection of a webserver into the execution environment to create a web-facing service. It always exports services via port binding, and listens to requests coming on that port.

Almost any kind of server software can be run via a process binding to a port and awaiting incoming requests; examples include HTTP servers, ejabberd, and Redis.
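As a self-contained illustration, the sketch below uses only the Python standard library: the app itself binds a port and listens, with no webserver injected at runtime. The PORT variable is an assumption commonly used by hosting platforms.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a port-bound process\n")

if __name__ == "__main__":
    # The app exports its service by binding a port and awaiting requests.
    port = int(os.environ.get("PORT", "8000"))
    HTTPServer(("", port), Hello).serve_forever()
```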

#8 Concurrency

To ensure the scalability of your app, you should deploy more copies of your application (processes) rather than trying to make your application larger. The share-nothing, horizontally partitionable nature of twelve-factor app processes means that adding more concurrency is a simple and reliable operation. 

To do this, the developer has to architect the app to handle diverse workloads by assigning each type of work to a process type. For example, HTTP requests may be handled by a web process, and long-running background tasks handled by a worker process.

#9 Disposability

The twelve-factor app’s processes are disposable, i.e.,

  • Their startup time is minimal
  • They can shut down gracefully at a moment’s notice
  • They are robust against sudden crashes and failures

All of this facilitates fast elastic scaling, rapid code deployment or config changes, as well as robustness of production deploys.

#10 Dev/prod parity

A 12-factor app is designed for continuous deployment by minimizing the sync gap between development and production. Here’s how:

  • Time gap: reduced to hours
  • Personnel gap: code authors and deployers are the same people
  • Tools gap: using similar tools for development and production

Keeping development, staging and production as similar as possible ensures that anyone can understand and release the app. This keeps errors limited and also enables better scalability.

#11 Logs

Twelve-factor apps should not be concerned with routing and storing their output stream, or with writing and managing log files. Instead, each running process writes its event stream, unbuffered, to stdout. During local development, the developer views this stream in the foreground of their terminal to observe the app’s behavior.

This factor is more about excellence than adequacy. While success is possible even without logs as event streams, the pay-off of doing this is significant. 
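A minimal Python sketch of the idea: the app writes one event per line to stdout and leaves routing and storage to the execution environment.

```python
import logging
import sys

# Log unbuffered to stdout; never open or rotate log files in the app.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger(__name__)

log.info("user signed up")  # aggregation and storage happen outside the app
```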

#12 Admin processes

Apps should run admin/management tasks as one-off processes, in an environment identical to that of the app’s regular long-running processes. They run against a release, using the same codebase and config as any process run against that release. Admin code must ship with application code to avoid synchronization issues, and dependency isolation techniques should be the same for all process types.

This factor is more about managing your app than about developing services, but it is still important.

The Underlying Benefits

Thus the 12-factor apps methodology helps create enterprise applications that:

  • Use declarative formats for setup automation, minimizing the time and cost for new developers joining the project
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments
  • Are suitable for deployment on modern cloud platforms, removing the need for servers and systems administration
  • Limit differences between development and production, enabling continuous deployment for maximum agility
  • Can scale up without major changes to tooling, architecture, or development practices, keeping performance a priority

Should You Use the 12 Factors?

You are now well aware of the 12 factor apps methodology, as well as the advantages it brings. But is it always a great choice? Probably not.

If you are an enterprise whose development team is still trying to overcome the baggage of legacy, on-premise applications, 12 factor is not for you yet. The right use cases are new apps, or brownfield projects you are completely reworking where the refactoring process has already started. And when you are building new cloud-native applications, that is when you definitely need 12 factor apps.

It’s all about deciding what your main problem is and if this methodology can solve that problem. And of course, as always, you should prioritize what works for your team.

Our expert development teams at Srijan can help you understand your enterprise project requirements, and whether 12 factor apps can ensure a better app architecture. To know more, book a consultation.

Topics: Cloud, Architecture

5 Reasons To Consider Serverless AWS For Drupal

Posted by Kimi Mahajan on Aug 5, 2019 1:01:00 PM

Using the cloud is about leveraging its agility, among other benefits. For a Drupal-powered website, the right service provider can determine how well the website performs, and that in turn affects business revenue.

When a robust server infrastructure such as AWS backs the most advanced CMS, Drupal, it accelerates the website’s performance, security and availability.

But why AWS, and what benefits does it offer over others? Let’s dive deep to understand how it proves to be the best solution for hosting your Drupal websites.

Points To Consider For Hosting Drupal Websites

The following are the points to keep in mind while considering providers for hosting your pure or headless Drupal website.

Better Server Infrastructure: A Drupal-specialised cloud hosting provider should offer a server infrastructure that is specifically optimized for running Drupal websites the way they were designed to run.

Better Speed: It should help optimise the Drupal website to run faster and should be able to use caching tools such as Memcached, Varnish, etc.

Better Support: The provider should offer hosting support informed by real knowledge of Drupal websites.

Better Security and Compatibility: The hosting provider should be able to provide security notifications, server-wide security patches, and even pre-emptive server upgrades to handle nuances in upcoming Drupal versions.

Why not a traditional server method?

There are two ways of hosting a Drupal website via traditional server setups:

  • a shared hosting server, where multiple websites run on the same server
  • a dedicated Virtual Private Server (VPS) per website

However, there are disadvantages to this approach, which are:

  1. With many non-redundant, single-instance services running on the same server, a crash in any component can take the entire site offline.
  2. The server does not scale up or down automatically; changing the hardware configuration requires manual intervention, and an unexpected traffic spike can bring the server down.
  3. The setup constantly runs at full power, irrespective of usage, wasting resources and money.

Hosting Drupal on AWS

Amazon Web Services (AWS) is a pioneer of the cloud hosting industry, providing hi-tech server infrastructure that has proved to be highly secure and reliable.
With serverless computing, developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. It enables you to build modern applications with increased agility and lower total cost of ownership and time-to-market.

With serverless being the fastest-growing cloud trend, with an annual growth rate of 75% and adoption foreseen to rise much further, let’s understand the significance of the AWS components in a Virtual Private Cloud (VPC). Each of these components shows why AWS is the right choice for hosting pure or headless Drupal websites.

Architecture diagram showcasing Drupal hosting on AWS

 

  •  Restrict connection: NAT Gateway

A Network Address Translation (NAT) gateway enables instances in a private subnet to connect to the internet or other AWS services. The instances in the private subnet are thus not exposed via the Internet gateway; instead, all traffic is routed via the NAT gateway.

The gateway ensures that the site will always remain up and running. AWS takes over the responsibility of its maintenance.

 

  • Restrict access: Bastion Host

A bastion host protects the system by restricting access to backend systems in protected or sensitive network segments, minimising the chances of a security attack.

 

  • Database: AWS Aurora

The Aurora database provides invaluable reliability and scalability, better performance and response times. With fast failover capabilities and storage durability, it minimizes technical obstacles.

 

  • Upload content: Amazon S3

With Amazon S3, store, retrieve and protect any amount of data at any time in a scalable storage bucket. Recover lost data easily, pay only for the storage you actually use, protect data from unauthorized use, and upload and download your data with SSL encryption.
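For example, uploading and retrieving a site asset takes a few lines with boto3; the bucket and file names below are hypothetical, and transfers go over HTTPS by default.

```python
import boto3

s3 = boto3.client("s3")

# Store a static asset for the site in a scalable bucket.
s3.upload_file("logo.png", "my-drupal-assets", "images/logo.png")

# Retrieve it again, e.g. when recovering content.
s3.download_file("my-drupal-assets", "images/logo.png", "/tmp/logo.png")
```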

 

  • Memcached/Redis: AWS Elasticache

Elasticache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store in the cloud.

 

  • Edge Caching: AWS CloudFront

CloudFront is an AWS content delivery network: a globally distributed network of proxy servers that cache content close to consumers, improving download speeds.

 

  • Web servers: Amazon EC2

Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud.

 

  • Route 53

Amazon Route 53 effectively connects user requests to infrastructure running in AWS and can also be used to route users to infrastructure outside of AWS.

Benefits of Hosting Drupal Website on AWS

Let’s look at the advantages of AWS for hosting pure or headless Drupal websites.

High Performing Hosting Environment

The kind of performance you want from your server depends upon the type of Drupal website you are building. A simple website with a decent amount of traffic can work well on a limited shared host platform. However, for a fairly complex interactive Drupal site, a typical shared hosting solution might not be feasible. 

Instead, opt for AWS, which provides server capacity billed as per your usage.

Improved Access To Server Environment 

A shared hosting environment restricts users from gaining full control, limits their ability to change configurations for Apache or PHP, and may cap bandwidth and file storage. These limitations are removed only when you’re willing to pay a higher premium for advanced access and hosting services.

This is not true with AWS, which gives you direct control over your server instances, with permissions to SSH or use its interface control panel to adjust settings.

Control Over the Infrastructure

Infrastructure needs do not remain constant; they are bound to change with time. With traditional hosting, adding or removing resources can be difficult or even impossible, and you end up paying for unused resources.

With AWS, however, you pay only for the services you use and can shut them off easily when you no longer need them. On-demand virtual hosting and a wide variety of services and hardware types make AWS convenient for anyone and everyone.

No Long-term Commitments

If you are hosting a website to gauge the performance and responsiveness, you probably would not want to allocate a whole bunch of machines and resources for a testing project which might be over within a week or so.

The convenience of AWS on-demand instances means that you can spin up a new server in a matter of minutes, and shut it down (without any further financial cost) in just as much time.

No Physical Hardware Maintenance

The advantage of using virtual resources is to avoid having to buy and maintain physical hardware. 

Going with virtually hosted servers on AWS frees you from dealing with data center operations and helps you focus on your core competency: creating Drupal websites.

Why Choose Srijan?

Srijan’s team of AWS professionals can help you migrate your website to the AWS cloud. With expertise in building Drupal-optimised hosting environments on AWS for reliable enterprise-level hosting, we can help you implement various AWS capabilities as per your enterprise’s requirements. Drop us a line and let our experts explore how you can get the best of AWS.

Topics: Drupal, AWS, Cloud

AWS - The Right Cloud for Media and Entertainment Workloads

Posted by Kimi Mahajan on Aug 2, 2019 2:32:00 PM

The media landscape is transforming the way content is produced and consumed, giving rise to user expectations of more personalized experiences, from anywhere, anytime and on any device.

This is leading to huge operational changes as media companies migrate from traditional broadcasting methods to digital distribution models. Several media giants are increasingly adopting new cloud technologies to manage the explosive growth of digital content.

Media enterprises are making a shift to AWS, which is the pioneer in cloud hosting, to take advantage of its high scalability, elasticity and secure cloud services. 

But how beneficial is AWS in solving the challenges of the media and entertainment industry? Let’s understand the benefits of moving to the cloud and why AWS offers the best services in the cloud arena.

Why Do Media Enterprises Need to Shift to the Cloud?

In a survey, 35% of respondents said that their enterprises moved to the cloud for easier collaboration on post-production tasks.

Types of businesses responding (Source: Backblaze)

The constant pressure on media firms to invest resources in generating high-quality, creative content, and the need to prevent data losses due to natural and man-made catastrophes, are pushing them to move to the cloud.

So, how is the cloud helping the media and entertainment industry with its major challenges? Let’s review them one by one.

1. Huge Consumer Demand

Today’s consumers of media and entertainment content expect a huge choice of content, and their rapidly varying demand has to be met in real time.

The media and entertainment sector needs to cost-effectively meet volatile demand, and remain flexible in terms of automatically spinning servers up and down as demand increases or decreases.

2. Continuous Supply of Content

In order to stay competitive, content creators in the media field are under constant pressure to produce and/or distribute original content more frequently, at an accelerated rate.

With the cloud, it’s easier to store, manage and deliver gigantic amounts of digital content. Hybrid and multi-cloud deployments can provide an even greater measure of flexibility, allowing workloads to be shifted seamlessly across public and private infrastructures.

3. Cost Benefits of Cloud Computing

The cable and broadcast television segment of the media and entertainment sector is being challenged by new trends in television broadcasting. Agile, low-cost over-the-top (OTT) companies selling and/or delivering streaming media content directly to consumers over the Internet are competing against traditional media distribution methods.

Other challenges for media content are the rising costs of content licensing and shortened technology lifecycles.

By shifting to the cloud’s OPEX model, media companies can reduce their costs involving storage and delivery technologies and infrastructures.

4. High Performance With Minimal to Zero Delays

For a good user experience, it is critical that viewer content streams with minimal delays and downtime. Even a six-second delay in streaming an ad for a show can cause a huge loss, with customers likely to switch to another entertainment channel.

The cloud provides architectures that support high availability and uncompromised performance SLAs.

Advantages of AWS for Media Enterprises

With AWS tools and services, media enterprises can monitor and manage storage and compute usage and costs.

For major tasks around content production, storage, processing, and distribution, AWS brings scalable, elastic and secure cloud services. Equipped with deep learning, NLP, ML and NLU capabilities, it helps digital media creators deliver personalized experiences through smarter content investments.

Secure, Scalable and Cost-Effective Solution

66% of respondents say security is their greatest concern while adopting an enterprise cloud computing platform

AWS remains the best choice for media companies looking to adopt a cloud model. As per a Cloud Security Alliance report, Amazon Web Services is the most popular public cloud infrastructure platform, comprising 41.5% of application workloads in the public cloud.

By leveraging AWS cloud services, multinational entertainment firms have become scalable and are making content available to consumers anytime, anywhere.

It remains a cost-effective solution for media enterprises, which can follow a pay-per-use model for the services they use.

Cloud Computing is Changing Economics of Media and Publishing

Simplified Content Creation and Production

Media enterprises need not worry about geographic and resource constraints. The focus should solely be on creating quality content with HDR, VR, AR and beyond, to keep viewers engaged.

With AWS, you can connect with world-wide production talent, unlimited capacity, unsurpassed security and the most innovative cloud technology partners in the industry. 

With machine learning and analytics, you can draw valuable insights to improve production investment decisions tailored to consumers’ needs. Pre-processing and optimization for false takes or cuts come easily with AWS; ML-assisted production editing provides quick turnaround for dailies and editorial review, and prohibited content can easily be flagged for filtered viewing.

Efficient Storage Provider

Media enterprises now have a one-stop solution for their storage concerns in AWS’s multi-tiered storage, which includes Amazon Simple Storage Service (Amazon S3), S3 Infrequent Access, and Amazon Glacier. These tiers allow massive data storage and ingestion, with the elasticity to satisfy ever-increasing storage demand along with cost management.

Eases Digital Distribution and the Post-Production Process

AWS addresses the concerns of running broadcast-quality video workflows in the cloud and ensures seamless delivery to any device, anytime, anywhere.

Media enterprises need not worry about live, linear, and on-demand content: AWS specialises in creating and delivering professional-quality media experiences with far less time, effort and expense than a traditional data center requires.

Pay-as-you-go pricing and fully automated resource scaling let you handle an audience of any size without upfront capital investment, and instead of managing complex infrastructure, AWS video solutions let you focus on creating engaging content.

Live Streaming, Subtitling, Video on Demand Service

Making content understandable to a large audience is easy with AWS cloud solutions, which help generate multilingual subtitles for live over-the-top streaming.

With AWS, viewers can choose a movie or video from a wide array of video-on-demand (VOD) options, available for broadcast and multi-screen delivery.

Migration of VFX renderings to AWS will help media companies to shorten content production times and foster collaboration with contributors from around the world. 

Let’s understand how AWS has been beneficial for giant names in media and entertainment.

Netflix

A prominent name in streaming online content on smart TVs, game consoles, PCs, Macs, mobiles, tablets and more.

Business challenges:

  • Unable to scale
  • Unable to meet increased user demand
  • Huge infrastructure unable to manage data storage

Solution and benefits:

  • Accelerated deployment of servers and data storage
  • Streams high-quality content from anywhere, on any device
  • Improved scalability, with a better architecture
  • Containers optimized their microservices architecture

Discovery Communications

A leader in nonfiction media, reaching more than 1.8 billion cumulative subscribers in 218 countries and territories.

Business challenges:

  • Required easy-to-manage website infrastructure
  • Was seeking a cost-effective solution
  • Wanted to consolidate multiple delivery engines
  • Needed a scalable and flexible solution
  • Wanted to switch to a pay-as-you-go model

Solution and benefits:

  • Migrated more than 40 sites to AWS
  • Highly scalable architecture
  • Entire continuous delivery system and development platform built around the AWS API
  • Low latency, along with cost savings of 20-25 percent and better manageability

 

Media and entertainment companies have begun to embrace cloud computing as their technology of choice. Reduced IT operational costs and high-quality content accessible anytime, anywhere will soon trigger global adoption of cloud solutions across the industry.

Srijan is an AWS Advanced Consulting Partner. Contact us today to discuss how our AWS trained professionals can help you in migrating your media and entertainment-based apps to AWS.

Topics: AWS, Cloud, Media & Publishing
